[
{
"msg_contents": "Hi,\nI was looking at ExplainOneQuery() where ExplainOneQuery_hook is called.\n\nCurrently the call to the hook is in if block and normal processing is in\nelse block.\n\nWhat if the hook doesn't want to duplicate the whole code printing\nexecution plan ?\n\nPlease advise.\n\nThanks",
"msg_date": "Tue, 26 Jul 2022 14:00:57 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Question about ExplainOneQuery_hook"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 1:54 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Hi,\n> I was looking at ExplainOneQuery() where ExplainOneQuery_hook is called.\n>\n> Currently the call to the hook is in if block and normal processing is in\n> else block.\n>\n> What if the hook doesn't want to duplicate the whole code printing\n> execution plan ?\n>\n> Please advise.\n>\n>\nWhat kind of advice are you looking for, especially knowing we don't know\nanything except you find the existing hook unusable.\n\nhttps://github.com/postgres/postgres/commit/604ffd280b955100e5fc24649ee4d42a6f3ebf35\n\nMy advice is pretend the hook doesn't even exist since it was created 15\nyears ago for a specific purpose that isn't what you are doing.\n\nI'm hoping that you already have some idea of how to interact with the open\nsource PostgreSQL project when it doesn't have a feature that you want.\nOtherwise that generic discussion probably is best done on -general with a\nbetter subject line.\n\nDavid J.",
"msg_date": "Tue, 26 Jul 2022 15:58:11 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Question about ExplainOneQuery_hook"
}
]
[
{
"msg_contents": "Hi hackers,\n\nThe bounded heap sorting status flag is set twice in sort_bounded_heap()\nand tuplesort_performsort(). This patch helps remove one of them.\n\nBest Regards,\nXing",
"msg_date": "Wed, 27 Jul 2022 17:09:54 +0800",
"msg_from": "Xing Guo <higuoxing@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Simple code cleanup in tuplesort.c."
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 5:10 PM Xing Guo <higuoxing@gmail.com> wrote:\n\n> The bounded heap sorting status flag is set twice in sort_bounded_heap()\n> and tuplesort_performsort(). This patch helps remove one of them.\n>\n\n+1. Looks good to me.\n\nThanks\nRichard",
"msg_date": "Wed, 27 Jul 2022 17:49:38 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Simple code cleanup in tuplesort.c."
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 5:10 PM Xing Guo <higuoxing@gmail.com> wrote:\n\n> The bounded heap sorting status flag is set twice in sort_bounded_heap()\n> and tuplesort_performsort(). This patch helps remove one of them.\n>\n\nRevisiting this patch I think maybe it's better to remove the setting of\nTuplesort status from tuplesort_performsort() for the TSS_BOUNDED case.\nThus we keep the heap manipulation routines, make_bounded_heap and\nsort_bounded_heap, consistent in that they update their status\naccordingly inside the function.\n\nAlso, would you please add it to the CF to not lose track of it?\n\nThanks\nRichard",
"msg_date": "Fri, 16 Sep 2022 14:43:04 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Simple code cleanup in tuplesort.c."
},
{
"msg_contents": "Hi Richard,\n\nSorry for not responding for a long time, I missed the previous email\nby accident :-)\n\nI attached a newer patch based on your suggestions and created an\nentry in cf manager.\nhttps://commitfest.postgresql.org/40/3924/\n\nBest Regards,\nXing Guo\n\n\n\nOn 9/16/22, Richard Guo <guofenglinux@gmail.com> wrote:\n> On Wed, Jul 27, 2022 at 5:10 PM Xing Guo <higuoxing@gmail.com> wrote:\n>\n>> The bounded heap sorting status flag is set twice in sort_bounded_heap()\n>> and tuplesort_performsort(). This patch helps remove one of them.\n>>\n>\n> Revisiting this patch I think maybe it's better to remove the setting of\n> Tuplesort status from tuplesort_performsort() for the TSS_BOUNDED case.\n> Thus we keep the heap manipulation routines, make_bounded_heap and\n> sort_bounded_heap, consistent in that they update their status\n> accordingly inside the function.\n>\n> Also, would you please add it to the CF to not lose track of it?\n>\n> Thanks\n> Richard\n>\n\n\n-- \nBest Regards,\nXing",
"msg_date": "Fri, 30 Sep 2022 23:32:10 +0800",
"msg_from": "Xing Guo <higuoxing@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Simple code cleanup in tuplesort.c."
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nRemoving \"state->status = TSS_SORTEDINMEM\" should be fine as it is already set in sort_bounded_heap(state) few lines before.\n\nCary Huang\n----------------\nHighGo Software Canada\nwww.highgo.ca",
"msg_date": "Fri, 25 Nov 2022 21:41:14 +0000",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Simple code cleanup in tuplesort.c."
},
{
"msg_contents": "On Fri, Sep 16, 2022 at 1:43 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> On Wed, Jul 27, 2022 at 5:10 PM Xing Guo <higuoxing@gmail.com> wrote:\n>>\n>> The bounded heap sorting status flag is set twice in sort_bounded_heap()\nand tuplesort_performsort(). This patch helps remove one of them.\n>\n>\n> Revisiting this patch I think maybe it's better to remove the setting of\n> Tuplesort status from tuplesort_performsort() for the TSS_BOUNDED case.\n> Thus we keep the heap manipulation routines, make_bounded_heap and\n> sort_bounded_heap, consistent in that they update their status\n> accordingly inside the function.\n\nThe label TSS_BUILDRUNS has a similar style and also the following comment,\nso I will push this patch with a similar comment added unless someone wants\nto make a case for doing otherwise.\n\n* Note that mergeruns sets the correct state->status.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 5 Jan 2023 08:18:39 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Simple code cleanup in tuplesort.c."
},
{
"msg_contents": "On Thu, Jan 5, 2023 at 8:18 AM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n>\n> The label TSS_BUILDRUNS has a similar style and also the following\ncomment, so I will push this patch with a similar comment added unless\nsomeone wants to make a case for doing otherwise.\n>\n> * Note that mergeruns sets the correct state->status.\n\nThis has been pushed, thanks. Note that both patches in this thread have\nthe same name. Adding a version number to the name is a good way to\ndistinguish them.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 9 Jan 2023 16:58:11 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Simple code cleanup in tuplesort.c."
},
{
"msg_contents": "On 1/9/23, John Naylor <john.naylor@enterprisedb.com> wrote:\n> On Thu, Jan 5, 2023 at 8:18 AM John Naylor <john.naylor@enterprisedb.com>\n> wrote:\n>>\n>> The label TSS_BUILDRUNS has a similar style and also the following\n> comment, so I will push this patch with a similar comment added unless\n> someone wants to make a case for doing otherwise.\n>>\n>> * Note that mergeruns sets the correct state->status.\n>\n> This has been pushed, thanks. Note that both patches in this thread have\n> the same name. Adding a version number to the name is a good way to\n> distinguish them.\n\nThank you John. This is my first patch, I'll keep it in mind that\nadding a version number next time I sending the patch.\n\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n>\n\n\n-- \nBest Regards,\nXing\n\n\n",
"msg_date": "Mon, 9 Jan 2023 20:29:23 +0800",
"msg_from": "Xing Guo <higuoxing@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Simple code cleanup in tuplesort.c."
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 7:29 PM Xing Guo <higuoxing@gmail.com> wrote:\n>\n> Thank you John. This is my first patch, I'll keep it in mind that\n> adding a version number next time I sending the patch.\n\nWelcome to the community! You may also consider reviewing a patch from the\ncurrent commitfest, since we can always use additional help there.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 11 Jan 2023 10:45:55 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Simple code cleanup in tuplesort.c."
}
]
[
{
"msg_contents": "Hi hackers,\n\nWe came across a slowdown in planning, where queries use tables with many\nindexes. In setups with wide tables it is not uncommon to have easily\n10-100 indexes on a single table. The slowdown is already visible in serial\nworkloads with just a handful of indexes, but gets drastically amplified\nwhen running queries with more indexes in parallel at high throughput.\n\nWe measured the TPS and planning time of running parallel streams of simple\npoint look-up queries on a single empty table with 60 columns and 60\nindexes. The query used is 'SELECT * FROM synth_table WHERE col5 = 42'. No\nrows are returned because the table is empty. We used a machine with 64\nphysical CPU cores. The schema and sysbench script to reproduce these\nnumbers are attached. We used the TPS as reported by sysbench and obtained\nplanning time by running 'EXPLAIN ANALYZE' on the same query in a\nseparately opened connection. We averaged the planning time of 3 successive\n'EXPLAIN ANALYZE' runs. sysbench ran on the same machine with varying\nnumbers of threads using the following command line:\n\nsysbench repro.lua --db-driver=pgsql --pgsql-host=localhost\n--pgsql-db=postgres --pgsql-port=? --pgsql-user=? --pgsql-password=?\n--report-interval=1 --threads=64 run\n\nThe following table shows the results. It is clearly visible that the TPS\nflatten out already at 8 parallel streams, while the planning time is\nincreasing drastically.\n\nParallel streams | TPS (before) | Planning time (before)\n-----------------|--------------|-----------------------\n1 | 5,486 | 0.13 ms\n2 | 8,098 | 0.22 ms\n4 | 15,013 | 0.19 ms\n8 | 27,107 | 0.29 ms\n16 | 30,938 | 0.43 ms\n32 | 26,330 | 1.68 ms\n64 | 24,314 | 2.48 ms\n\nWe tracked down the root cause of this slowdown to lock contention in\n'get_relation_info()'. The index lock of every single index of every single\ntable used in that query is acquired. 
We attempted a fix by pre-filtering\nout all indexes that anyways cannot be used with a certain query, without\ntaking the index locks (credits to Luc Vlaming for idea and\nimplementation). The patch does so by caching the columns present in every\nindex, inside 'struct Relation', similarly to 'rd_indexlist'. Then, before\nopening (= locking) the indexes in 'get_relation_info()', we check if the\nindex can actually contribute to the query and if not it is discarded right\naway. Caching the index info saves considerable work for every query run\nsubsequently, because less indexes must be inspected and thereby locked.\nThis way we also save cycles in any code that later on goes over all\nrelation indexes.\n\nThe work-in-progress version of the patch is attached. It is still fairly\nrough (e.g. uses a global variable, selects the best index in scans without\nrestrictions by column count instead of physical column size, is missing\nsome renaming, etc.), but shows the principle.\n\nThe following table shows the TPS, planning time and speed-ups after\napplying the patch and rerunning above described benchmark. Now, the\nplanning time remains roughly constant and TPS roughly doubles each time\nthe number of parallel streams is doubled. The higher the stream count the\nmore severe the lock contention is and the more pronounced the gained\nspeed-up gets. Interestingly, even for a single query stream the speed-up\nin planning time is already very significant. This applies also for lower\nindex counts. For example just with 10 indexes the TPS for a single query\nstream goes from 9,159 to 12,558. 
We can do more measurements if there is\ninterest in details for a lower number of indexes.\n\nParallel streams | TPS (after) | Planning time (after) | Speed-up TPS |\nSpeed-up planning\n-----------------|-------------|-----------------------|--------------|------------------\n1 | 10,344 | 0.046 | 1.9x |\n 2.8x\n2 | 20,140 | 0.045 ms | 2.5x |\n 4.9x\n4 | 40,349 | 0.047 ms | 2.7x |\n 4.0x\n8 | 80,121 | 0.046 ms | 3.0x |\n 6.3x\n16 | 152,632 | 0.051 ms | 4.9x |\n 8.4x\n32 | 301,359 | 0.052 ms | 11.4x |\n32.3x\n64 | 525,115 | 0.062 ms | 21.6x |\n40.0x\n\nWe are happy to receive your feedback and polish up the patch.\n\n--\nDavid Geier\n(ServiceNow)",
"msg_date": "Wed, 27 Jul 2022 14:37:57 +0200",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Reducing planning time on tables with many indexes"
},
{
"msg_contents": "David Geier <geidav.pg@gmail.com> writes:\n> We tracked down the root cause of this slowdown to lock contention in\n> 'get_relation_info()'. The index lock of every single index of every single\n> table used in that query is acquired. We attempted a fix by pre-filtering\n> out all indexes that anyways cannot be used with a certain query, without\n> taking the index locks (credits to Luc Vlaming for idea and\n> implementation). The patch does so by caching the columns present in every\n> index, inside 'struct Relation', similarly to 'rd_indexlist'.\n\nI wonder how much thought you gave to the costs imposed by that extra\ncache space. We have a lot of users who moan about relcache bloat\nalready. But more to the point, I do not buy the assumption that\nan index's set of columns is a good filter for which indexes are of\ninterest. A trivial counterexample from the regression database is\n\nregression=# explain select count(*) from tenk1;\n QUERY PLAN \n \n--------------------------------------------------------------------------------\n------------\n Aggregate (cost=219.28..219.29 rows=1 width=8)\n -> Index Only Scan using tenk1_hundred on tenk1 (cost=0.29..194.28 rows=100\n00 width=0)\n(2 rows)\n\nIt looks to me like the patch also makes unwarranted assumptions about\nbeing able to discard all but the smallest index having a given set\nof columns. This would, for example, possibly lead to dropping the\nindex that has the most useful sort order, or that has the operator\nclass needed to support a specific WHERE clause.\n\nIn short, I'm not sure I buy this concept at all. I think it might\nbe more useful to attack the locking overhead more directly. 
I kind\nof wonder why we need per-index locks at all during planning ---\nI think that we already interpret AccessShareLock on the parent table\nas being sufficient to block schema changes on existing indexes.\n\nUnfortunately, as things stand today, the planner needs more than the\nright to look at the indexes' schemas, because it makes physical accesses\nto btree indexes to find out their tree height (and I think there are some\ncomparable behaviors in other AMs). I've never particularly cared for\nthat implementation, and would be glad to rip out that behavior if we can\nfind another way. Maybe we could persuade VACUUM or ANALYZE to store that\ninfo in the index's pg_index row, or some such, and then the planner\ncould use it with no lock?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Jul 2022 12:39:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing planning time on tables with many indexes"
},
{
"msg_contents": "I wrote:\n> Unfortunately, as things stand today, the planner needs more than the\n> right to look at the indexes' schemas, because it makes physical accesses\n> to btree indexes to find out their tree height (and I think there are some\n> comparable behaviors in other AMs). I've never particularly cared for\n> that implementation, and would be glad to rip out that behavior if we can\n> find another way. Maybe we could persuade VACUUM or ANALYZE to store that\n> info in the index's pg_index row, or some such, and then the planner\n> could use it with no lock?\n\nA first step here could just be to postpone fetching _bt_getrootheight()\nuntil we actually need it during cost estimation. That would avoid the\nneed to do it at all for indexes that indxpath.c discards as irrelevant,\nwhich is a decision made on considerably more information than the\nproposed patch uses.\n\nHaving done that, you could look into revising plancat.c to fill the\nIndexOptInfo structs from catcache entries instead of opening the\nindex per se. (You'd have to also make sure that the appropriate\nindex locks are acquired eventually, for indexes the query does use\nat runtime. I think that's the case, but I'm not sure if anything\ndownstream has been optimized on the assumption the planner did it.)\n\nThis'd probably get us a large part of the way there. Further\noptimization of acquisition of tree height etc could be an\noptional follow-up.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Jul 2022 13:15:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing planning time on tables with many indexes"
},
{
"msg_contents": "Hi Tom,\n\nOn Wed, Jul 27, 2022 at 7:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > Unfortunately, as things stand today, the planner needs more than the\n> > right to look at the indexes' schemas, because it makes physical accesses\n> > to btree indexes to find out their tree height (and I think there are\n> some\n> > comparable behaviors in other AMs). I've never particularly cared for\n> > that implementation, and would be glad to rip out that behavior if we can\n> > find another way. Maybe we could persuade VACUUM or ANALYZE to store\n> that\n> > info in the index's pg_index row, or some such, and then the planner\n> > could use it with no lock?\n>\nIt seems like _bt_getrootheight() first checks if the height is cached and\nonly if it isn't it accesses index meta pages.\nIf the index locks are only taken for the sake of _bt_getrootheight()\naccessing index meta pages in case they are not cached, maybe the index\nlocks could be taken conditionally.\nHowever, postponing the call to where it is really needed sounds even\nbetter.\n\n>\n> A first step here could just be to postpone fetching _bt_getrootheight()\n> until we actually need it during cost estimation. That would avoid the\n> need to do it at all for indexes that indxpath.c discards as irrelevant,\n> which is a decision made on considerably more information than the\n> proposed patch uses.\n>\n> Having done that, you could look into revising plancat.c to fill the\n> IndexOptInfo structs from catcache entries instead of opening the\n> index per se. (You'd have to also make sure that the appropriate\n> index locks are acquired eventually, for indexes the query does use\n> at runtime. 
I think that's the case, but I'm not sure if anything\n> downstream has been optimized on the assumption the planner did it.)\n>\n> I can give this a try.\nThat way we would get rid of the scalability issues.\nHowever, what about the runtime savings observed with a single query stream?\nIn that case there is no contention, so it seems like having less indexes\nto look at further down the road, also yields substantial savings.\nAny clue where exactly these savings might come from? Or is it actually\nthe calls to _bt_getrootheight()? I can also do a few perf runs to track\nthat down.\n\n\n> This'd probably get us a large part of the way there. Further\n> optimization of acquisition of tree height etc could be an\n> optional follow-up.\n>\n> regards, tom lane\n>",
"msg_date": "Mon, 1 Aug 2022 15:33:55 +0200",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing planning time on tables with many indexes"
},
{
"msg_contents": "Hi Tom,\n\nOn Wed, Jul 27, 2022 at 6:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> David Geier <geidav.pg@gmail.com> writes:\n> > We tracked down the root cause of this slowdown to lock contention in\n> > 'get_relation_info()'. The index lock of every single index of every\n> single\n> > table used in that query is acquired. We attempted a fix by pre-filtering\n> > out all indexes that anyways cannot be used with a certain query, without\n> > taking the index locks (credits to Luc Vlaming for idea and\n> > implementation). The patch does so by caching the columns present in\n> every\n> > index, inside 'struct Relation', similarly to 'rd_indexlist'.\n>\n> I wonder how much thought you gave to the costs imposed by that extra\n> cache space. We have a lot of users who moan about relcache bloat\n> already.\n\n\nThe current representation could be compacted (e.g. by storing the column\nindexes as 16-bit integers, instead of using a Bitmapset). That should make\nthe additional space needed neglectable compared to the current size of\nRelationData. On top of that we could maybe reorder the members of\nRelationData to save padding bytes. Currently, there is lots of\ninterleaving of members of different sizes. Even when taking cache locality\ninto consideration it looks like a fair amount could be saved, probably\naccounting for the additional space needed to store the index column data.\n\n But more to the point, I do not buy the assumption that\n> an index's set of columns is a good filter for which indexes are of\n> interest. 
A trivial counterexample from the regression database is\n>\n> regression=# explain select count(*) from tenk1;\n> QUERY PLAN\n>\n>\n>\n> --------------------------------------------------------------------------------\n> ------------\n> Aggregate (cost=219.28..219.29 rows=1 width=8)\n> -> Index Only Scan using tenk1_hundred on tenk1 (cost=0.29..194.28\n> rows=100\n> 00 width=0)\n> (2 rows)\n>\n> Only for queries without index conditions, the current version of the\npatch simply discards all indexes but the one with the least columns. This\nis case (3) in s64_IsUnnecessaryIndex(). This is an over-simplification\nwith the goal of picking the index which yields least I/O. For single\ncolumn indexes that works, but it can fall short for multi-column indexes\n(e.g. [INT, TEXT] index vs [INT, INT]t index have both 2 columns but the\nlatter would be better suited when there's no other index and we want to\nread the first integer column). What we should do here instead is to\ndiscard indexes based on storage size.\n\n\n> It looks to me like the patch also makes unwarranted assumptions about\n> being able to discard all but the smallest index having a given set\n> of columns. This would, for example, possibly lead to dropping the\n> index that has the most useful sort order, or that has the operator\n> class needed to support a specific WHERE clause.t\n>\nWhy would that be? If we keep all indexes that contain columns that are\nused by the query, we also keep the indexes which provide a certain sort\norder or operator class.\n\n>\n> In short, I'm not sure I buy this concept at all. I think it might\n> be more useful to attack the locking overhead more directly. 
I kind\n> of wonder why we need per-index locks at all during planning ---\n> I think that we already interpret AccessShareLock on the parent table\n> as being sufficient to block schema changes on existing indexes.\n>\n> As I said in the reply to your other mail, there's huge savings also in\nthe serial case where lock contention is not an issue. It seems like\npre-filtering saves work down the road. I'll do some perf runs to track\ndown where exactly the savings come from. One source I can think of is only\nhaving to consider a subset of all indexes during path creation.\n\n\n> Unfortunately, as things stand today, the planner needs more than the\n> right to look at the indexes' schemas, because it makes physical accesses\n> to btree indexes to find out their tree height (and I think there are some\n> comparable behaviors in other AMs). I've never particularly cared for\n> that implementation, and would be glad to rip out that behavior if we can\n> find another way. Maybe we could persuade VACUUM or ANALYZE to store that\n> info in the index's pg_index row, or some such, and then the planner\n> could use it with no lock?\n>\n> That's another interesting approach, but I would love to save the planner\ncycles for the sequential case.\n\n--\nDavid Geier\n(ServiceNow)",
"msg_date": "Thu, 4 Aug 2022 11:35:40 +0200",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing planning time on tables with many indexes"
},
{
"msg_contents": "On 27.07.22, 18:39, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n\r\n [External Email]\r\n\r\n\r\n David Geier <geidav.pg@gmail.com> writes:\r\n > We tracked down the root cause of this slowdown to lock contention in\r\n > 'get_relation_info()'. The index lock of every single index of every single\r\n > table used in that query is acquired. We attempted a fix by pre-filtering\r\n > out all indexes that anyways cannot be used with a certain query, without\r\n > taking the index locks (credits to Luc Vlaming for idea and\r\n > implementation). The patch does so by caching the columns present in every\r\n > index, inside 'struct Relation', similarly to 'rd_indexlist'.\r\n\r\n I wonder how much thought you gave to the costs imposed by that extra\r\n cache space. We have a lot of users who moan about relcache bloat\r\n already. But more to the point, I do not buy the assumption that\r\n an index's set of columns is a good filter for which indexes are of\r\n interest. A trivial counterexample from the regression database is\r\n\r\n regression=# explain select count(*) from tenk1;\r\n QUERY PLAN\r\n\r\n --------------------------------------------------------------------------------\r\n ------------\r\n Aggregate (cost=219.28..219.29 rows=1 width=8)\r\n -> Index Only Scan using tenk1_hundred on tenk1 (cost=0.29..194.28 rows=100\r\n 00 width=0)\r\n (2 rows)\r\n\r\n It looks to me like the patch also makes unwarranted assumptions about\r\n being able to discard all but the smallest index having a given set\r\n of columns. 
This would, for example, possibly lead to dropping the\r\n index that has the most useful sort order, or that has the operator\r\n class needed to support a specific WHERE clause.\r\n\r\nThanks for checking out the patch!\r\n\r\nJust to make sure we're on the same page: we're only making this assumption if you select no fields at all.\r\nIf you select any fields at all it will check for column overlap, and if there's any overlap with any referenced field, \r\nthen the index will not be filtered out.\r\n\r\nFor producing a row count with no referenced fields it is true that it should select the truly cheapest \r\nindex to produce the row count and there should be some Index-am callback introduced for that. \r\nFor now it was just a quick-and-dirty solution.\r\nWouldn't a callback that would estimate the amount of data read be good enough though?\r\n\r\nFor sort orders the field to sort by should be listed and hence the index should not be filtered out,\r\nor what am I missing? Likely I've missed some fields that are referenced somehow (potentially indirectly),\r\nbut that shouldn't disqualify the approach completely.\r\n\r\n In short, I'm not sure I buy this concept at all. I think it might\r\n be more useful to attack the locking overhead more directly. I kind\r\n of wonder why we need per-index locks at all during planning ---\r\n I think that we already interpret AccessShareLock on the parent table\r\n as being sufficient to block schema changes on existing indexes.\r\n\r\nCould you elaborate as to why this approach is not good enough? To me it seems that avoiding work\r\nahead of time is generally useful. Or are you worried that we remove too much?\r\n\r\n Unfortunately, as things stand today, the planner needs more than the\r\n right to look at the indexes' schemas, because it makes physical accesses\r\n to btree indexes to find out their tree height (and I think there are some\r\n comparable behaviors in other AMs). 
I've never particularly cared for\r\n that implementation, and would be glad to rip out that behavior if we can\r\n find another way. Maybe we could persuade VACUUM or ANALYZE to store that\r\n info in the index's pg_index row, or some such, and then the planner\r\n could use it with no lock?\r\n\r\n regards, tom lane\r\n\r\n\r\nThe thing you're touching on is specific for a btree. Not sure this generalizes to all index types that\r\nare out there though? I could see there being some property that allows you to be \"no-lock\",\r\nand then a callback that allows you to cache some generic data that can be transformed\r\nwhen the indexopt info structs are filled. Is that roughly what you have in mind?\r\n\r\nBest,\r\nLuc\r\n\r\n",
"msg_date": "Mon, 8 Aug 2022 12:29:22 +0000",
"msg_from": "Luc Vlaming Hummel <luc.vlaming@servicenow.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing planning time on tables with many indexes"
},
{
"msg_contents": "On 8/1/22 15:33, David Geier wrote:\n> Hi Tom,\n>\n> On Wed, Jul 27, 2022 at 7:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Unfortunately, as things stand today, the planner needs more\n> than the\n> > right to look at the indexes' schemas, because it makes physical\n> accesses\n> > to btree indexes to find out their tree height (and I think\n> there are some\n> > comparable behaviors in other AMs). I've never particularly\n> cared for\n> > that implementation, and would be glad to rip out that behavior\n> if we can\n> > find another way. Maybe we could persuade VACUUM or ANALYZE to\n> store that\n> > info in the index's pg_index row, or some such, and then the planner\n> > could use it with no lock?\n>\n> It seems like _bt_getrootheight() first checks if the height is cached \n> and only if it isn't it accesses index meta pages.\n> If the index locks are only taken for the sake of _bt_getrootheight() \n> accessing index meta pages in case they are not cached, maybe the \n> index locks could be taken conditionally.\n> However, postponing the call to where it is really needed sounds even \n> better.\n>\n>\n> A first step here could just be to postpone fetching\n> _bt_getrootheight()\n> until we actually need it during cost estimation. That would\n> avoid the\n> need to do it at all for indexes that indxpath.c discards as\n> irrelevant,\n> which is a decision made on considerably more information than the\n> proposed patch uses.\n>\nHi Tom,\n\nI gave the idea of moving _bt_getrootheight() into costsize.c and \nfilling IndexOptInfo in get_relation_info() via syscache instead of \nrelcache a try, but didn't get very far.\nMoving out _bt_getrootheight() was straightforward, and we should do \nnevertheless. However, it seems like get_relation_info() strongly \ndepends on the index's Relation for quite some stuff. 
A fair amount of \nfields I could actually fill from syscache, but there are some that \neither need data not stored in syscache (e.g. estimate_rel_size(), \nRelation::rd_smgr needed by RelationGetNumberOfBlocksInFork()) or need \nfields that are cached in the index's Relation and would have to be \nrecomputed otherwise (e.g. Relation::rd_indexprs filled by \nRelationGetIndexExpressions(), Relation::rd_indpred filled by \nRelationGetIndexPredicate()). Even if we could somehow obtain the \nmissing info from somewhere, recomputing the otherwise cached fields \nfrom Relation would likely cause a significant slowdown in the serial case.\n\nBeyond that I did some off-CPU profiling to precisely track down which \nlock serializes execution. It turned out to be the MyProc::fpInfoLock \nlightweight lock. This lock is used in the fast path of the heavyweight \nlock. In the contending case, fpInfoLock is acquired in LW_EXCLUSIVE \nmode to (1) check if there is no other process holding a stronger lock, \nand if not, to reserve a process local fast path lock slot and (2) to \nreturn the fast path lock slots all in one go. To do so, the current \nimplementation always linearly iterates over all lock slots. The \ncorresponding call stacks are:\n\nget_relation_info() CommitTransaction()\n index_open() ResourceOwnerRelease()\n relation_open() ResourceOwnerReleaseInternal()\n LockRelationOid() ProcReleaseLocks()\n LockAcquireExtended() LockReleaseAll() <-- called \ntwice from ProcReleaseLocks()\n LWLockAcquire()\n\nOn top of that there are only 16 fast path lock slots. One slot is \nalways taken up by the parent relation, leaving only 15 slots for the \nindexes. As soon as a session process runs out of slots, it falls back \nto the normal lock path which has to mess around with the lock table. To \ndo so it also acquires a lightweight lock in LW_EXCLUSIVE mode. This \nlightweight lock however is partitioned and therefore does not contend. 
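To make the slot bookkeeping concrete, here is a minimal sketch of a probing strategy along the lines the patch describes. All names and sizes are made up for illustration; the real fast path additionally stores a lock mode per slot and runs under fpInfoLock:

```c
#include <assert.h>
#include <stdint.h>

#define FP_LOCK_SLOTS 64        /* hypothetical slot count */

typedef uint32_t Oid;

typedef struct FastPathSlots
{
    Oid         relid[FP_LOCK_SLOTS];   /* 0 means "slot free" */
} FastPathSlots;

/*
 * Probe slots starting at a hash of the relation OID.  A full wrap-around
 * is still needed to conclude "not present", but for a stable set of
 * locked relations the match is typically found on the first probe,
 * instead of after a linear scan of the whole array.
 */
static int
fp_find_slot(const FastPathSlots *fp, Oid relid)
{
    uint32_t    start = relid % FP_LOCK_SLOTS;

    for (int i = 0; i < FP_LOCK_SLOTS; i++)
    {
        int         slot = (start + i) % FP_LOCK_SLOTS;

        if (fp->relid[slot] == relid)
            return slot;
    }
    return -1;
}

static int
fp_grant(FastPathSlots *fp, Oid relid)
{
    uint32_t    start = relid % FP_LOCK_SLOTS;

    for (int i = 0; i < FP_LOCK_SLOTS; i++)
    {
        int         slot = (start + i) % FP_LOCK_SLOTS;

        if (fp->relid[slot] == 0)
        {
            fp->relid[slot] = relid;
            return slot;
        }
    }
    return -1;                  /* no free slot: use the shared lock table */
}

static void
fp_ungrant(FastPathSlots *fp, Oid relid)
{
    int         slot = fp_find_slot(fp, relid);

    if (slot >= 0)
        fp->relid[slot] = 0;
}
```

For a stable working set of relations the first probe already hits, which is what makes a larger slot count affordable: the cost of the common case no longer grows linearly with the array size.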
\nHence, normal lock acquisition is slower but contends less.\n\nTo prove the above findings I increased the number of fast path lock slots \nper connection and optimized FastPathGrantRelationLock() and \nFastPathUnGrantRelationLock(). With these changes the lock contention \ndisappeared and the workload scales linearly (the code I tested with \nalso included moving out _bt_getrootheight()):\n\n| Parallel streams | TPS | TPS / stream |\n|------------------|----------|---------------|\n| 1 | 5,253 | 5,253 |\n| 10 | 51,406 | 5,140 |\n| 20 | 101,401 | 5,070 |\n| 30 | 152,023 | 5,067 |\n| 40 | 200,607 | 5,015 |\n| 50 | 245,359 | 4,907 |\n| 60 | 302,994 | 5,049 |\n\nHowever, with the very same setup, the index filtering approach yields \n486k TPS with 60 streams and 9,827 TPS with a single stream. The single \nstream number shows that this is not because it scales even better, but \njust because less work is spent during planning. A quick perf session \nshowed that a fair amount of time is spent to get the relation sizes in \nblocks (RelationGetNumberOfBlocksInFork() -> lseek64()) and creating \nindex paths (pull_varattnos() -> bms_add_member(), surprisingly).\n\n- 32.20% 1.58% postgres postgres [.] get_relation_info\n - 30.62% get_relation_info\n - 16.56% RelationGetNumberOfBlocksInFork\n - 16.42% smgrnblocks\n - 16.25% mdnblocks\n - 16.10% _mdnblocks\n + 15.55% __libc_lseek64\n + 5.83% index_open\n + 2.71% estimate_rel_size\n 1.56% build_index_tlist\n + 1.22% palloc\n + 1.57% __libc_start_main\n- 23.02% 0.03% postgres postgres [.] make_one_rel\n - 22.99% make_one_rel\n - 22.01% set_base_rel_pathlists\n - 21.99% set_rel_pathlist\n - 21.89% set_plain_rel_pathlist\n - 21.53% create_index_paths\n - 18.76% get_index_paths\n - 18.33% build_index_paths\n - 15.77% check_index_only\n - 14.75% pull_varattnos\n - 14.58% pull_varattnos_walker\n - 13.05% expression_tree_walker\n - 9.50% pull_varattnos_walker\n 5.77% bms_add_member\n 0.93% bms_add_member\n 0.52% 
expression_tree_walker\n 1.44% pull_varattnos_walker\n + 1.79% create_index_path\n + 0.90% match_restriction_clauses_to_index\n + 0.95% set_base_rel_sizes\n\nGiven the findings above, the two patches are actually complementary. \nOptimizing the lock fast path not only helps when many indexes exist and \nonly a small subset is used, but whenever there are many locks used by a \nquery. The index filtering is another way to reduce lock contention, but \nbeyond that also greatly reduces the time spent on planning in the \nserial case.\n\nI have attached the patch to improve the heavyweight lock fast path. It \nalso for now contains moving out _bt_getrootheight(). For workloads \nwhere the same set of locks is used over and over again, it only needs \non average a single loop iteration to find the relation (instead of a \nlinear scan before). This allows to increase the number of fast path \nlocks by a lot. In this patch I increased them from 16 to 64. The code \ncan be further improved for cases where to be locked relations change \nfrequently and therefore the chance of not finding a relation and \nbecause of that having to linearly search the whole array is higher.\n\nI would really appreciate your feedback Tom, also on the questions \naround the approach of filtering out indexes, discussed in the last mails.\n\n\n--\nDavid Geier\n(ServiceNow)",
"msg_date": "Fri, 19 Aug 2022 15:03:35 +0200",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing planning time on tables with many indexes"
},
{
"msg_contents": "On 2022-Aug-19, David Geier wrote:\n\n> Beyond that I did some off-CPU profiling to precisely track down which lock\n> serializes execution. It turned out to be the MyProc::fpInfoLock lightweight\n> lock. This lock is used in the fast path of the heavyweight lock. In the\n> contenting case, fpInfoLock is acquired in LW_EXCLUSIVE mode to (1) check if\n> there is no other process holding a stronger lock, and if not, to reserve a\n> process local fast path lock slot and (2) to return the fast path lock slots\n> all in one go. To do so, the current implementation always linearly iterates\n> over all lock slots.\n\nAh, so this is the aspect that you mentioned to me today. I definitely\nthink that this analysis deserves its own thread, and the fix is its own\nseparate patch.\n\n> I have attached the patch to improve the heavyweight lock fast path. It also\n> for now contains moving out _bt_getrootheight(). For workloads where the\n> same set of locks is used over and over again, it only needs on average a\n> single loop iteration to find the relation (instead of a linear scan\n> before). This allows to increase the number of fast path locks by a lot. In\n> this patch I increased them from 16 to 64. The code can be further improved\n> for cases where to be locked relations change frequently and therefore the\n> chance of not finding a relation and because of that having to linearly\n> search the whole array is higher.\n\nI suggest to put each change in a separate patch:\n\n1. improve fast-path lock algorithm to find the element, perhaps\n together with increasing the number of elements in the array\n2. 
change _bt_getrootheight\n\nHowever, since patch (1) may have nontrivial performance implications,\nyou would also need to justify the change: not only that improves the\ncase where many locks are acquired, but also that it does not make the\ncase with few locks worse.\n\nI strongly suggest to not include C++ comments or any other dirtiness in\nthe patch, as that might deter some potential reviewers.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"It takes less than 2 seconds to get to 78% complete; that's a good sign.\nA few seconds later it's at 90%, but it seems to have stuck there. Did\nsomebody make percentages logarithmic while I wasn't looking?\"\n http://smylers.hates-software.com/2005/09/08/1995c749.html\n\n\n",
"msg_date": "Thu, 27 Oct 2022 19:13:46 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Reducing planning time on tables with many indexes"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWe came across a slowdown in planning, where queries use tables with many\nindexes. In setups with wide tables it is not uncommon to have easily\n10-100 indexes on a single table. The slowdown is already visible in serial\nworkloads with just a handful of indexes, but gets drastically amplified\nwhen running queries with more indexes in parallel at high throughput.\n\nWe measured the TPS and planning time of running parallel streams of simple\npoint look-up queries on a single empty table with 60 columns and 60\nindexes. The query used is 'SELECT * FROM synth_table WHERE col5 = 42'. No\nrows are returned because the table is empty. We used a machine with 64\nphysical CPU cores. The schema and sysbench script to reproduce these\nnumbers are attached. We used the TPS as reported by sysbench and obtained\nplanning time by running 'EXPLAIN ANALYZE' on the same query in a\nseparately opened connection. We averaged the planning time of 3 successive\n'EXPLAIN ANALYZE' runs. sysbench ran on the same machine with varying\nnumbers of threads using the following command line:\n\nsysbench repro.lua --db-driver=pgsql --pgsql-host=localhost\n--pgsql-db=postgres --pgsql-port=? --pgsql-user=? --pgsql-password=?\n--report-interval=1 --threads=64 run\n\nThe following table shows the results. It is clearly visible that the TPS\nflatten out already at 8 parallel streams, while the planning time is\nincreasing drastically.\n\nParallel streams | TPS (before) | Planning time (before)\n-----------------|--------------|-----------------------\n1 | 5,486 | 0.13 ms\n2 | 8,098 | 0.22 ms\n4 | 15,013 | 0.19 ms\n8 | 27,107 | 0.29 ms\n16 | 30,938 | 0.43 ms\n32 | 26,330 | 1.68 ms\n64 | 24,314 | 2.48 ms\n\nWe tracked down the root cause of this slowdown to lock contention in\n'get_relation_info()'. The index lock of every single index of every single\ntable used in that query is acquired. 
We attempted a fix by pre-filtering\nout all indexes that anyways cannot be used with a certain query, without\ntaking the index locks (credits to Luc Vlaming for idea and\nimplementation). The patch does so by caching the columns present in every\nindex, inside 'struct Relation', similarly to 'rd_indexlist'. Then, before\nopening (= locking) the indexes in 'get_relation_info()', we check if the\nindex can actually contribute to the query and if not it is discarded right\naway. Caching the index info saves considerable work for every query run\nsubsequently, because less indexes must be inspected and thereby locked.\nThis way we also save cycles in any code that later on goes over all\nrelation indexes.\n\nThe work-in-progress version of the patch is attached. It is still fairly\nrough (e.g. uses a global variable, selects the best index in scans without\nrestrictions by column count instead of physical column size, is missing\nsome renaming, etc.), but shows the principle.\n\nThe following table shows the TPS, planning time and speed-ups after\napplying the patch and rerunning above described benchmark. Now, the\nplanning time remains roughly constant and TPS roughly doubles each time\nthe number of parallel streams is doubled. The higher the stream count the\nmore severe the lock contention is and the more pronounced the gained\nspeed-up gets. Interestingly, even for a single query stream the speed-up\nin planning time is already very significant. This applies also for lower\nindex counts. For example just with 10 indexes the TPS for a single query\nstream goes from 9,159 to 12,558. 
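To make the pre-filtering step concrete, here is a simplified sketch with made-up names; the actual patch caches the per-index column set inside 'struct Relation' (similarly to 'rd_indexlist', e.g. as a Bitmapset) and additionally handles queries without any column restrictions:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t Oid;

/* Hypothetical cached per-index metadata, gathered without index locks. */
typedef struct IndexColumnCache
{
    Oid         indexoid;
    uint64_t    attrs;          /* bit n set => attribute n is indexed */
} IndexColumnCache;

/*
 * Keep only the indexes that share at least one column with the ones the
 * query references; only the survivors are later opened, i.e. locked.
 * Returns the number of surviving indexes, whose OIDs are put in *keep.
 */
static int
filter_indexes(const IndexColumnCache *cache, int nindexes,
               uint64_t query_attrs, Oid *keep)
{
    int         nkeep = 0;

    for (int i = 0; i < nindexes; i++)
    {
        if (cache[i].attrs & query_attrs)   /* column overlap? */
            keep[nkeep++] = cache[i].indexoid;
    }
    return nkeep;
}
```

Only the surviving indexes would subsequently be index_open()ed, so in the benchmark above the indexes that do not cover col5 are never locked at all.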
We can do more measurements if there is\ninterest in details for a lower number of indexes.\n\nParallel streams | TPS (after) | Planning time (after) | Speed-up TPS | Speed-up planning\n-----------------|-------------|-----------------------|--------------|------------------\n1                | 10,344      | 0.046 ms              | 1.9x         | 2.8x\n2                | 20,140      | 0.045 ms              | 2.5x         | 4.9x\n4                | 40,349      | 0.047 ms              | 2.7x         | 4.0x\n8                | 80,121      | 0.046 ms              | 3.0x         | 6.3x\n16               | 152,632     | 0.051 ms              | 4.9x         | 8.4x\n32               | 301,359     | 0.052 ms              | 11.4x        | 32.3x\n64               | 525,115     | 0.062 ms              | 21.6x        | 40.0x\n\nWe are happy to receive your feedback and polish up the patch.\n\n--\nDavid Geier\n(ServiceNow)",
"msg_date": "Wed, 27 Jul 2022 14:42:37 +0200",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PoC] Reducing planning time on tables with many indexes"
},
{
"msg_contents": "Sorry, by accident I sent this one out twice.\n\n--\nDavid Geier\n(ServiceNow)\n\nOn Wed, Jul 27, 2022 at 2:42 PM David Geier <geidav.pg@gmail.com> wrote:\n\n> Hi hackers,\n>\n> We came across a slowdown in planning, where queries use tables with many\n> indexes. In setups with wide tables it is not uncommon to have easily\n> 10-100 indexes on a single table. The slowdown is already visible in serial\n> workloads with just a handful of indexes, but gets drastically amplified\n> when running queries with more indexes in parallel at high throughput.\n>\n> We measured the TPS and planning time of running parallel streams of\n> simple point look-up queries on a single empty table with 60 columns and 60\n> indexes. The query used is 'SELECT * FROM synth_table WHERE col5 = 42'. No\n> rows are returned because the table is empty. We used a machine with 64\n> physical CPU cores. The schema and sysbench script to reproduce these\n> numbers are attached. We used the TPS as reported by sysbench and obtained\n> planning time by running 'EXPLAIN ANALYZE' on the same query in a\n> separately opened connection. We averaged the planning time of 3 successive\n> 'EXPLAIN ANALYZE' runs. sysbench ran on the same machine with varying\n> numbers of threads using the following command line:\n>\n> sysbench repro.lua --db-driver=pgsql --pgsql-host=localhost\n> --pgsql-db=postgres --pgsql-port=? --pgsql-user=? --pgsql-password=?\n> --report-interval=1 --threads=64 run\n>\n> The following table shows the results. 
It is clearly visible that the TPS\n> flatten out already at 8 parallel streams, while the planning time is\n> increasing drastically.\n>\n> Parallel streams | TPS (before) | Planning time (before)\n> -----------------|--------------|-----------------------\n> 1 | 5,486 | 0.13 ms\n> 2 | 8,098 | 0.22 ms\n> 4 | 15,013 | 0.19 ms\n> 8 | 27,107 | 0.29 ms\n> 16 | 30,938 | 0.43 ms\n> 32 | 26,330 | 1.68 ms\n> 64 | 24,314 | 2.48 ms\n>\n> We tracked down the root cause of this slowdown to lock contention in\n> 'get_relation_info()'. The index lock of every single index of every single\n> table used in that query is acquired. We attempted a fix by pre-filtering\n> out all indexes that anyways cannot be used with a certain query, without\n> taking the index locks (credits to Luc Vlaming for idea and\n> implementation). The patch does so by caching the columns present in every\n> index, inside 'struct Relation', similarly to 'rd_indexlist'. Then, before\n> opening (= locking) the indexes in 'get_relation_info()', we check if the\n> index can actually contribute to the query and if not it is discarded right\n> away. Caching the index info saves considerable work for every query run\n> subsequently, because less indexes must be inspected and thereby locked.\n> This way we also save cycles in any code that later on goes over all\n> relation indexes.\n>\n> The work-in-progress version of the patch is attached. It is still fairly\n> rough (e.g. uses a global variable, selects the best index in scans without\n> restrictions by column count instead of physical column size, is missing\n> some renaming, etc.), but shows the principle.\n>\n> The following table shows the TPS, planning time and speed-ups after\n> applying the patch and rerunning above described benchmark. Now, the\n> planning time remains roughly constant and TPS roughly doubles each time\n> the number of parallel streams is doubled. 
The higher the stream count the\n> more severe the lock contention is and the more pronounced the gained\n> speed-up gets. Interestingly, even for a single query stream the speed-up\n> in planning time is already very significant. This applies also for lower\n> index counts. For example just with 10 indexes the TPS for a single query\n> stream goes from 9,159 to 12,558. We can do more measurements if there is\n> interest in details for a lower number of indexes.\n>\n> Parallel streams | TPS (after) | Planning time (after) | Speed-up TPS |\n> Speed-up planning\n>\n> -----------------|-------------|-----------------------|--------------|------------------\n> 1 | 10,344 | 0.046 | 1.9x |\n> 2.8x\n> 2 | 20,140 | 0.045 ms | 2.5x |\n> 4.9x\n> 4 | 40,349 | 0.047 ms | 2.7x |\n> 4.0x\n> 8 | 80,121 | 0.046 ms | 3.0x |\n> 6.3x\n> 16 | 152,632 | 0.051 ms | 4.9x |\n> 8.4x\n> 32 | 301,359 | 0.052 ms | 11.4x |\n> 32.3x\n> 64 | 525,115 | 0.062 ms | 21.6x |\n> 40.0x\n>\n> We are happy to receive your feedback and polish up the patch.\n>\n> --\n> David Geier\n> (ServiceNow)\n>\n",
"msg_date": "Wed, 27 Jul 2022 14:46:24 +0200",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time on tables with many indexes"
}
] |
[
{
"msg_contents": "Hi,\n\n089480c077056 seems to have broken pg_prewarm. When pg_prewarm\nis added to shared_preload_libraries, each new connection results in\nthousands of errors such as this:\n\n\n2022-07-27 04:25:14.325 UTC [2903955] LOG: background worker\n\"autoprewarm leader\" (PID 2904146) exited with exit code 1\n2022-07-27 04:25:14.325 UTC [2904148] ERROR: could not find function\n\"autoprewarm_main\" in file\n\"/home/ubuntu/proj/tempdel/lib/postgresql/pg_prewarm.so\"\n\nChecking pg_prewarm.so the function 'autoprewarm_main' visibility\nswitched from GLOBAL to LOCAL. Per [1], using PGDLLEXPORT\nmakes it GLOBAL again, which appears to fix the issue:\n\nBefore commit (089480c077056) -\nubuntu:~/proj/tempdel$ readelf -sW lib/postgresql/pg_prewarm.so | grep main\n103: 0000000000003d79 609 FUNC GLOBAL DEFAULT 14 autoprewarm_main\n109: 00000000000045ad 873 FUNC GLOBAL DEFAULT 14 autoprewarm_database_main\n128: 0000000000003d79 609 FUNC GLOBAL DEFAULT 14 autoprewarm_main\n187: 00000000000045ad 873 FUNC GLOBAL DEFAULT 14 autoprewarm_database_main\n\nAfter commit (089480c077056) -\n78: 0000000000002d79 609 FUNC LOCAL DEFAULT 14 autoprewarm_main\n85: 00000000000035ad 873 FUNC LOCAL DEFAULT 14 autoprewarm_database_main\n\nAfter applying the attached fix:\n103: 0000000000003d79 609 FUNC GLOBAL DEFAULT 14 autoprewarm_main\n84: 00000000000045ad 873 FUNC LOCAL DEFAULT 14 autoprewarm_database_main\n129: 0000000000003d79 609 FUNC GLOBAL DEFAULT 14 autoprewarm_main\n\n\nPlease let me know your thoughts on this approach.\n\n[1] https://www.postgresql.org/message-id/A737B7A37273E048B164557ADEF4A58B5393038C%40ntex2010a.host.magwien.gv.at\n\ndiff --git a/contrib/pg_prewarm/autoprewarm.c b/contrib/pg_prewarm/autoprewarm.c\nindex b2d6026093..ec619be9f2 100644\n--- a/contrib/pg_prewarm/autoprewarm.c\n+++ b/contrib/pg_prewarm/autoprewarm.c\n@@ -82,7 +82,7 @@ typedef struct AutoPrewarmSharedState\nint prewarmed_blocks;\n} AutoPrewarmSharedState;\n\n-void autoprewarm_main(Datum 
main_arg);\n+PGDLLEXPORT void autoprewarm_main(Datum main_arg);\nvoid autoprewarm_database_main(Datum main_arg);\n\nPG_FUNCTION_INFO_V1(autoprewarm_start_worker);\n\n-\nRobins Tharakan\nAmazon Web Services\n\n\n",
"msg_date": "Thu, 28 Jul 2022 00:18:52 +0930",
"msg_from": "Robins Tharakan <tharakan@gmail.com>",
"msg_from_op": true,
"msg_subject": "autoprewarm worker failing to load"
},
{
"msg_contents": "Robins Tharakan <tharakan@gmail.com> writes:\n> 089480c077056 seems to have broken pg_prewarm.\n\nUgh ... sure would be nice if contrib/pg_prewarm had some regression\ntests.\n\n> Checking pg_prewarm.so the function 'autoprewarm_main' visibility\n> switched from GLOBAL to LOCAL. Per [1], using PGDLLEXPORT\n> makes it GLOBAL again, which appears to fix the issue:\n\nRight, that's the appropriate fix. I suppose we had better look\nat everything else that's passed to bgw_function_name anywhere,\ntoo.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Jul 2022 11:16:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: autoprewarm worker failing to load"
}
] |
[
{
"msg_contents": "Howdy folks,\n\nThe attached patch tweaks the wording around finding the psqlrc file\non windows, with the primary goal of removing the generally incorrect\nstatement that windows has no concept of a home directory.\n\nRobert Treat\nhttps://xzilla.net",
"msg_date": "Wed, 27 Jul 2022 14:42:11 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": true,
"msg_subject": "small windows psqlrc re-wording"
},
{
"msg_contents": "Hi,\n\nOn Wed, Jul 27, 2022 at 02:42:11PM -0400, Robert Treat wrote:\n>\n> The attached patch tweaks the wording around finding the psqlrc file\n> on windows, with the primary goal of removing the generally incorrect\n> statement that windows has no concept of a home directory.\n\nWindows only has a concept of home directory since Vista, so that used to be\ntrue.\n\nAnyway, since we don't support XP or anything older since about 3 weeks ago\n(495ed0ef2d72a6a74def296e042022479d5d07bd), +1 for the patch.\n\n\n",
"msg_date": "Thu, 28 Jul 2022 21:45:30 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: small windows psqlrc re-wording"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Wed, Jul 27, 2022 at 02:42:11PM -0400, Robert Treat wrote:\n>> The attached patch tweaks the wording around finding the psqlrc file\n>> on windows, with the primary goal of removing the generally incorrect\n>> statement that windows has no concept of a home directory.\n\n> Windows only has a concept of home directory since Vista, so that used to be\n> true.\n> Anyway, since we don't support XP or anything older since about 3 weeks ago\n> (495ed0ef2d72a6a74def296e042022479d5d07bd), +1 for the patch.\n\nIf all supported versions do have home directories now, should we\ninstead think about aligning the Windows behavior with everywhere\nelse?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Jul 2022 10:04:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: small windows psqlrc re-wording"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 10:04:12AM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Wed, Jul 27, 2022 at 02:42:11PM -0400, Robert Treat wrote:\n> >> The attached patch tweaks the wording around finding the psqlrc file\n> >> on windows, with the primary goal of removing the generally incorrect\n> >> statement that windows has no concept of a home directory.\n>\n> > Windows only has a concept of home directory since Vista, so that used to be\n> > true.\n> > Anyway, since we don't support XP or anything older since about 3 weeks ago\n> > (495ed0ef2d72a6a74def296e042022479d5d07bd), +1 for the patch.\n>\n> If all supported versions do have home directories now, should we\n> instead think about aligning the Windows behavior with everywhere\n> else?\n\nAs far as I know the expected usage on Windows is still different. Even with\nhome directories application are still expected to put stuff in %APPDATA% (1),\nin a dedicated directory. That's especially important since there is still no\nconcept of \"hidden\" files and the explorer still hides the extensions by\ndefault. I can however see that having a file named \".something\" is now mostly\nworking, which IIRC wasn't really the case the last time I used Windows (around\nXP).\n\n[1] https://en.wikipedia.org/wiki/Special_folder#File_system_directories\n\n\n",
"msg_date": "Thu, 28 Jul 2022 22:19:28 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: small windows psqlrc re-wording"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Thu, Jul 28, 2022 at 10:04:12AM -0400, Tom Lane wrote:\n>> If all supported versions do have home directories now, should we\n>> instead think about aligning the Windows behavior with everywhere\n>> else?\n\n> As far as I know the expected usage on Windows is still different. Even with\n> home directories application are still expected to put stuff in %APPDATA% (1),\n> in a dedicated directory. That's especially important since there is still no\n> concept of \"hidden\" files and the explorer still hides the extensions by\n> default.\n\nAh. Yeah, if there's no convention about hiding files based on a\nleading \".\" then we definitely don't want to do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Jul 2022 10:28:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: small windows psqlrc re-wording"
},
{
"msg_contents": "After looking at the text more carefully, I thought it could use\na deal more help than Robert has given it. I propose the attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 07 Sep 2022 13:10:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: small windows psqlrc re-wording"
},
{
"msg_contents": "On Wed, Sep 07, 2022 at 01:10:11PM -0400, Tom Lane wrote:\n> After looking at the text more carefully, I thought it could use\n> a deal more help than Robert has given it. I propose the attached.\n\nIt looks good to me.\n\n- for example <filename>~/.psqlrc-9.2</filename> or\n- <filename>~/.psqlrc-9.2.5</filename>. The most specific\n+ for example <filename>~/.psqlrc-15</filename> or\n+ <filename>~/.psqlrc-15.2</filename>. The most specific\n\nThis bit is a bit saddening. It's probably good to switch to the new 2 digits\nversioning but not trying to maintain it any further right?\n\nThat being said, should the patch mention versions that at least currently\nexist, like -14 and -14.5?\n\n\n",
"msg_date": "Thu, 8 Sep 2022 15:46:10 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: small windows psqlrc re-wording"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Wed, Sep 07, 2022 at 01:10:11PM -0400, Tom Lane wrote:\n> - for example <filename>~/.psqlrc-9.2</filename> or\n> - <filename>~/.psqlrc-9.2.5</filename>. The most specific\n> + for example <filename>~/.psqlrc-15</filename> or\n> + <filename>~/.psqlrc-15.2</filename>. The most specific\n\n> This bit is a bit saddening. It's probably good to switch to the new 2 digits\n> versioning but not trying to maintain it any further right?\n\nIt occurred to me later to substitute &majorversion; and &version;\nlike this:\n\n+ for example <filename>~/.psqlrc-&majorversion;</filename> or\n+ <filename>~/.psqlrc-&version;</filename>. The most specific\n\nOn testing that in HEAD, I read\n\n Both the system-wide startup file and the user's personal startup file\n can be made psql-version-specific by appending a dash and the\n PostgreSQL major or minor release number to the file name, for example\n ~/.psqlrc-16 or ~/.psqlrc-16devel.\n\nThat's a little confusing but it's actually accurate, because what\nprocess_psqlrc_file appends is the string PG_VERSION, so in a devel\nbranch or beta release there's a non-numeric \"minor release\".\nI'm inclined to go ahead and do it like that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Sep 2022 11:02:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: small windows psqlrc re-wording"
},
{
"msg_contents": "I wrote:\n> On testing that in HEAD, I read\n\n> Both the system-wide startup file and the user's personal startup file\n> can be made psql-version-specific by appending a dash and the\n> PostgreSQL major or minor release number to the file name, for example\n> ~/.psqlrc-16 or ~/.psqlrc-16devel.\n\n> That's a little confusing but it's actually accurate, because what\n> process_psqlrc_file appends is the string PG_VERSION, so in a devel\n> branch or beta release there's a non-numeric \"minor release\".\n> I'm inclined to go ahead and do it like that.\n\nI decided that what I found jarring about that was the use of \"release\nnumber\" with a non-numeric version, so I changed it to \"release\nidentifier\" and pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Sep 2022 13:52:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: small windows psqlrc re-wording"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 1:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > On testing that in HEAD, I read\n>\n> > Both the system-wide startup file and the user's personal startup file\n> > can be made psql-version-specific by appending a dash and the\n> > PostgreSQL major or minor release number to the file name, for example\n> > ~/.psqlrc-16 or ~/.psqlrc-16devel.\n>\n> > That's a little confusing but it's actually accurate, because what\n> > process_psqlrc_file appends is the string PG_VERSION, so in a devel\n> > branch or beta release there's a non-numeric \"minor release\".\n> > I'm inclined to go ahead and do it like that.\n>\n> I decided that what I found jarring about that was the use of \"release\n> number\" with a non-numeric version, so I changed it to \"release\n> identifier\" and pushed.\n>\n\nLooks good. Thanks Tom / Julien.\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Sat, 10 Sep 2022 09:07:29 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": true,
"msg_subject": "Re: small windows psqlrc re-wording"
}
] |
[
{
"msg_contents": "Hey,\n\nJust interacted with a frustrated user on Slack trying to upgrade from v13\nto v14 on Windows. Our official download page for the Windows installer\nclaims the core documentation as its official reference - can someone\nresponsible for this area please suggest and test some changes to make this\nreality more acceptable.\n\nThe particular point that was brought up is our documentation for\npg_upgrade says:\n\nRUNAS /USER:postgres \"CMD.EXE\"\nSET PATH=%PATH%;C:\\Program Files\\PostgreSQL\\14\\bin;\n\nThe problem is apparently (I haven't personally tested) our\nofficial installer doesn't bother to create the postgres operating system\nuser account.\n\nIt is also unclear whether the defaults for pg_hba.conf add some kind of\nbad interaction here should one fix this particular problem.\n\nAnd then there is the issue of file ownership.\n\nAssuming we want better documentation for this specific issue for\nback-patching what would that look like?\n\nGoing forward should our installer be creating the postgres user for\nconsistency with other platforms or not?\n\nI suggest adding relevant discussion about this particular official binary\ndistribution to:\n\nhttps://www.postgresql.org/docs/current/install-binaries.html\n\nDavid J.\n\nHey,Just interacted with a frustrated user on Slack trying to upgrade from v13 to v14 on Windows. 
Our official download page for the Windows installer claims the core documentation as its official reference - can someone responsible for this area please suggest and test some changes to make this reality more acceptable.The particular point that was brought up is our documentation for pg_upgrade says:RUNAS /USER:postgres \"CMD.EXE\"SET PATH=%PATH%;C:\\Program Files\\PostgreSQL\\14\\bin;The problem is apparently (I haven't personally tested) our official installer doesn't bother to create the postgres operating system user account.It is also unclear whether the defaults for pg_hba.conf add some kind of bad interaction here should one fix this particular problem.And then there is the issue of file ownership.Assuming we want better documentation for this specific issue for back-patching what would that look like?Going forward should our installer be creating the postgres user for consistency with other platforms or not?I suggest adding relevant discussion about this particular official binary distribution to:https://www.postgresql.org/docs/current/install-binaries.htmlDavid J.",
"msg_date": "Wed, 27 Jul 2022 12:21:27 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Official Windows Installer and Documentation"
},
{
"msg_contents": "David G. Johnston schrieb am 27.07.2022 um 21:21:\n> And then there is the issue of file ownership.\n>\n> Assuming we want better documentation for this specific issue for\n> back-patching what would that look like?\n>\n> Going forward should our installer be creating the postgres user for\n> consistency with other platforms or not?\n\nDidn't the installer used to do that in earlier releases and that\nwas removed when Postgres was able to \"drop privileges\" when the\nservice is started?\n\nI remember a lot of problems around the specific Postgres service\naccount when that still was the case.\n\nAs far as I can tell, most of the problems of the Windows installer\nstem from the fact that it tries to use icacls to set privileges\non the data directory. This seems to fail quite frequently,\ncausing the infamous \"Problem running post-install step\" error.\n\nThe fact that the installer still defaults to using \"c:\\Program Files\"\nfor the location of the data directoy might be related to that.\n(but then I don't know enough of the internals of the installer\nand Windows)\n\nJust my 0.02€\n\nThomas\n\n\n",
"msg_date": "Wed, 27 Jul 2022 23:36:11 +0200",
"msg_from": "Thomas Kellerer <shammat@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Official Windows Installer and Documentation"
},
{
"msg_contents": "Hi,\n\nOn Wed, Jul 27, 2022 at 11:36:11PM +0200, Thomas Kellerer wrote:\n> David G. Johnston schrieb am 27.07.2022 um 21:21:\n> > And then there is the issue of file ownership.\n> > \n> > Assuming we want better documentation for this specific issue for\n> > back-patching what would that look like?\n> > \n> > Going forward should our installer be creating the postgres user for\n> > consistency with other platforms or not?\n> \n> Didn't the installer used to do that in earlier releases and that\n> was removed when Postgres was able to \"drop privileges\" when the\n> service is started?\n> \n> I remember a lot of problems around the specific Postgres service\n> account when that still was the case.\n\nNote that there's no \"official\" Windows installer, and companies providing one\nare free to implement it the way they want, which can contradict the official\ndocumentation. The download section of the website clearly says that this is a\nthird-party installer.\n\nFor now there's only the EDB installer that remains, but I think that some time\nago there was 2 or 3 different providers.\n\nFor the EDB installer, I'm not sure why or when it was changed, but it indeed\nused to have a dedicated local account and now relies on \"Local System Account\"\nor something like that. But IIRC, when it used to create a local account the\nname could be configured, so there was no guarantee of a local \"postgres\"\naccount by then either.\n\n\n",
"msg_date": "Thu, 28 Jul 2022 09:42:19 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Official Windows Installer and Documentation"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 6:42 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n>\n> On Wed, Jul 27, 2022 at 11:36:11PM +0200, Thomas Kellerer wrote:\n> > David G. Johnston schrieb am 27.07.2022 um 21:21:\n> > > And then there is the issue of file ownership.\n> > >\n> > > Assuming we want better documentation for this specific issue for\n> > > back-patching what would that look like?\n> > >\n> > > Going forward should our installer be creating the postgres user for\n> > > consistency with other platforms or not?\n> >\n> > Didn't the installer used to do that in earlier releases and that\n> > was removed when Postgres was able to \"drop privileges\" when the\n> > service is started?\n> >\n> > I remember a lot of problems around the specific Postgres service\n> > account when that still was the case.\n>\n> Note that there's no \"official\" Windows installer, and companies providing\n> one\n> are free to implement it the way they want, which can contradict the\n> official\n> documentation. The download section of the website clearly says that this\n> is a\n> third-party installer.\n>\n> For now there's only the EDB installer that remains, but I think that some\n> time\n> ago there was 2 or 3 different providers.\n>\n> For the EDB installer, I'm not sure why or when it was changed, but it\n> indeed\n> used to have a dedicated local account and now relies on \"Local System\n> Account\"\n> or something like that. But IIRC, when it used to create a local account\n> the\n> name could be configured, so there was no guarantee of a local \"postgres\"\n> account by then either.\n>\n>\nOur technical definition aside, the fact is our users consider the sole EDB\ninstaller to be official.\n\nIf the ultimate solution is to update:\n\nhttps://www.enterprisedb.com/downloads/postgres-postgresql-downloads\n\nto have its own installation and upgrade supplement to the official\ndocumentation then I'd be fine with that. 
But as of now the \"Installation\nGuide\" points back to the official documentation, which has no actual\ndistribution specific information while simultaneously reinforcing the fact\nthat it is an official installer.\n\nI get sending people to the EDB web services team for download issues since\nwe don't host the binaries. That aspect of them being third-party doesn't\nseem to be an issue.\n\nBut for documentation, given the current state of things, whether we amend\nour docs or highly encourage the people who are benefiting financially from\nbeing our de facto official Windows installer provider to provide separate\ndocumentation to address this apparent short-coming that is harming our\nimage in the Windows community, I don't really care, as long as something\nchanges.\n\nIn the end the problem is ours and cannot be simply assigned to a\nthird-party. So let's resolve it here (on this list, whatever the\nsolution) where representatives from all parties are present.\n\nDavid J.\n\nOn Wed, Jul 27, 2022 at 6:42 PM Julien Rouhaud <rjuju123@gmail.com> wrote:Hi,\n\nOn Wed, Jul 27, 2022 at 11:36:11PM +0200, Thomas Kellerer wrote:\n> David G. Johnston schrieb am 27.07.2022 um 21:21:\n> > And then there is the issue of file ownership.\n> > \n> > Assuming we want better documentation for this specific issue for\n> > back-patching what would that look like?\n> > \n> > Going forward should our installer be creating the postgres user for\n> > consistency with other platforms or not?\n> \n> Didn't the installer used to do that in earlier releases and that\n> was removed when Postgres was able to \"drop privileges\" when the\n> service is started?\n> \n> I remember a lot of problems around the specific Postgres service\n> account when that still was the case.\n\nNote that there's no \"official\" Windows installer, and companies providing one\nare free to implement it the way they want, which can contradict the official\ndocumentation. 
The download section of the website clearly says that this is a\nthird-party installer.\n\nFor now there's only the EDB installer that remains, but I think that some time\nago there was 2 or 3 different providers.\n\nFor the EDB installer, I'm not sure why or when it was changed, but it indeed\nused to have a dedicated local account and now relies on \"Local System Account\"\nor something like that. But IIRC, when it used to create a local account the\nname could be configured, so there was no guarantee of a local \"postgres\"\naccount by then either.Our technical definition aside, the fact is our users consider the sole EDB installer to be official.If the ultimate solution is to update:https://www.enterprisedb.com/downloads/postgres-postgresql-downloadsto have its own installation and upgrade supplement to the official documentation then I'd be fine with that. But as of now the \"Installation Guide\" points back to the official documentation, which has no actual distribution specific information while simultaneously reinforcing the fact that it is an official installer.I get sending people to the EDB web services team for download issues since we don't host the binaries. That aspect of them being third-party doesn't seem to be an issue.But for documentation, given the current state of things, whether we amend our docs or highly encourage the people who are benefiting financially from being our de facto official Windows installer provider to provide separate documentation to address this apparent short-coming that is harming our image in the Windows community, I don't really care, as long as something changes.In the end the problem is ours and cannot be simply assigned to a third-party. So let's resolve it here (on this list, whatever the solution) where representatives from all parties are present.David J.",
"msg_date": "Wed, 27 Jul 2022 19:02:51 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Official Windows Installer and Documentation"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 07:02:51PM -0700, David G. Johnston wrote:\n>\n> In the end the problem is ours and cannot be simply assigned to a\n> third-party. So let's resolve it here (on this list, whatever the\n> solution) where representatives from all parties are present.\n\nWe could amend the pg_upgrade (and maybe other if needed, but I don't see any\nother occurences of RUNAS) documentation to be a bit more general, like the\nnon-windows part of it, maybe something like\n\nFor Windows users, you must be logged into an administrative account, and then\nstart a shell as the user running the postgres service and set the proper path.\nAssuming a user named postgres and the binaries installed in C:\\Program\nFiles\\PostgreSQL\\14:\n\nRUNAS /USER:postgres \"CMD.EXE\"\nSET PATH=%PATH%;C:\\Program Files\\PostgreSQL\\14\\bin;\n\nIt's ultimately up to the users to adapt the commands to match their\nenvironment.\n\n\n",
"msg_date": "Thu, 28 Jul 2022 10:17:49 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Official Windows Installer and Documentation"
},
{
"msg_contents": "On Wednesday, July 27, 2022, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Wed, Jul 27, 2022 at 07:02:51PM -0700, David G. Johnston wrote:\n> >\n> > In the end the problem is ours and cannot be simply assigned to a\n> > third-party. So let's resolve it here (on this list, whatever the\n> > solution) where representatives from all parties are present.\n>\n> We could amend the pg_upgrade (and maybe other if needed, but I don't see\n> any\n> other occurences of RUNAS) documentation to be a bit more general, like the\n> non-windows part of it, maybe something like\n>\n> For Windows users, you must be logged into an administrative account, and\n> then\n> start a shell as the user running the postgres service and set the proper\n> path.\n> Assuming a user named postgres and the binaries installed in C:\\Program\n> Files\\PostgreSQL\\14:\n>\n> RUNAS /USER:postgres \"CMD.EXE\"\n> SET PATH=%PATH%;C:\\Program Files\\PostgreSQL\\14\\bin;\n>\n> It's ultimately up to the users to adapt the commands to match their\n> environment.\n>\n\nUltimately we do our users the best service if when they operate an\ninstallation using defaults that they have documentation showing how to\nperform something like an upgrade that works with those defaults. I don’t\nsee much point making that change in isolation until it is obvious nothing\nbetter is forthcoming. If the o/s user postgres doesn’t exist then you need\nto supply -U postgres cause the install user for PostgresSQL is still\npostgres. So why not assume the user is whatever the EDB installer uses\nand make that the example? If someone has an install on Windows that uses\nthe postgres account adapting the command for them should be trivial and\nthe majority installer users get a command sequence that works.\n\nDavid J.\n\nOn Wednesday, July 27, 2022, Julien Rouhaud <rjuju123@gmail.com> wrote:On Wed, Jul 27, 2022 at 07:02:51PM -0700, David G. 
Johnston wrote:\n>\n> In the end the problem is ours and cannot be simply assigned to a\n> third-party. So let's resolve it here (on this list, whatever the\n> solution) where representatives from all parties are present.\n\nWe could amend the pg_upgrade (and maybe other if needed, but I don't see any\nother occurences of RUNAS) documentation to be a bit more general, like the\nnon-windows part of it, maybe something like\n\nFor Windows users, you must be logged into an administrative account, and then\nstart a shell as the user running the postgres service and set the proper path.\nAssuming a user named postgres and the binaries installed in C:\\Program\nFiles\\PostgreSQL\\14:\n\nRUNAS /USER:postgres \"CMD.EXE\"\nSET PATH=%PATH%;C:\\Program Files\\PostgreSQL\\14\\bin;\n\nIt's ultimately up to the users to adapt the commands to match their\nenvironment.\nUltimately we do our users the best service if when they operate an installation using defaults that they have documentation showing how to perform something like an upgrade that works with those defaults. I don’t see much point making that change in isolation until it is obvious nothing better is forthcoming. If the o/s user postgres doesn’t exist then you need to supply -U postgres cause the install user for PostgresSQL is still postgres. So why not assume the user is whatever the EDB installer uses and make that the example? If someone has an install on Windows that uses the postgres account adapting the command for them should be trivial and the majority installer users get a command sequence that works.David J.",
"msg_date": "Wed, 27 Jul 2022 19:31:35 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Official Windows Installer and Documentation"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Wed, Jul 27, 2022 at 6:42 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> Note that there's no \"official\" Windows installer,\n\nYeah, that.\n\n> Our technical definition aside, the fact is our users consider the sole EDB\n> installer to be official.\n> If the ultimate solution is to update:\n> https://www.enterprisedb.com/downloads/postgres-postgresql-downloads\n> to have its own installation and upgrade supplement to the official\n> documentation then I'd be fine with that.\n\nThat's what needs to happen. On the Linux side for example, we do\nnot address packaging-specific behaviors of Devrim's builds, or Debian's\nbuilds, or Red Hat's builds --- all of which act differently, in ways\nthat are sadly a lot more critical to novices than seasoned users.\nIf EDB isn't adequately filling in the documentation for the behavior\nof their packaging, that's on them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Jul 2022 22:57:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Official Windows Installer and Documentation"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 07:31:35PM -0700, David G. Johnston wrote:\n>\n> Ultimately we do our users the best service if when they operate an\n> installation using defaults that they have documentation showing how to\n> perform something like an upgrade that works with those defaults.\n\nI don't really agree that it's best service to let users assume that they can\nblindly copy/paste some commands without trying to understand them, or how to\nadapt them to their specificities. That's in my opinion particularly true on\nwindows, since to my knowledge most companies will have the binaries installed\nin one place (C:\\Program Files\\PostgreSQL might be frequent), and have the data\nstored in another place (D: or other). So I don't think the default command\nwill actually work for any non toy installation.\n\n> So why not assume the user is whatever the EDB installer uses\n> and make that the example?\n\nWell, IIUC that used to be the case until EDB changed its installer. Maybe the\nodds for an impacting change to happen again are low, but it's certainly not a\ngreat idea to assume that the community will regularly check their installer\nand update the doc to match what they're doing. So yeah it may be better for\nthem to provide a documentation adapted to their usage.\n\n\n",
"msg_date": "Thu, 28 Jul 2022 10:58:45 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Official Windows Installer and Documentation"
},
{
"msg_contents": "I wrote:\n> If EDB isn't adequately filling in the documentation for the behavior\n> of their packaging, that's on them.\n\nHaving now looked more closely at the pg_upgrade documentation,\nI don't think this is exactly EDB's fault; it's text that should\nnever have been there to begin with. ISTM we need to simply rip out\nlines 431..448 of pgupgrade.sgml, that is all the Windows-specific\ntext starting with\n\n For Windows users, you must be logged into an administrative account, and\n\nThat has got nothing to recommend it: we do not generally provide\nplatform-specific details in these man pages, and to the extent it\nprovides details, those details are likely to be wrong. We need\nlook no further than the references to \"9.6\" to establish that.\nYeah, it says \"e.g.\", but novices will probably fail to understand\nwhich parts of the example are suitable to copy verbatim and which\naren't. Meanwhile non-novices don't need the example to begin with.\nOn top of which, the whole para has been inserted into\nnon-platform-specific text, seemingly with the aid of a dartboard,\nbecause it doesn't particularly connect to what's before or after it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Jul 2022 23:22:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Official Windows Installer and Documentation"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 8:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > If EDB isn't adequately filling in the documentation for the behavior\n> > of their packaging, that's on them.\n>\n> Having now looked more closely at the pg_upgrade documentation,\n> I don't think this is exactly EDB's fault; it's text that should\n> never have been there to begin with. ISTM we need to simply rip out\n> lines 431..448 of pgupgrade.sgml, that is all the Windows-specific\n> text starting with\n>\n> For Windows users, you must be logged into an administrative account,\n> and\n>\n> That has got nothing to recommend it: we do not generally provide\n> platform-specific details in these man pages, and to the extent it\n> provides details, those details are likely to be wrong.\n\n\nI mean, we do provide platform-specific details/examples, it's just that\nplatform is a source installed Linux platform (though pathless)\n\nDoes the avoidance of dealing with other platforms also apply to NET STOP\nor do you find that an acceptable variance? Or are you suggesting that\nbasically all O/S commands should be zapped? If not, then rewriting 442 to\n446 to just be the command seems worthwhile. I'd say pg_upgrade warrants\nan examples section like pg_basebackup has (though obviously pg_upgrade is\nprocedural).\n\nI do have another observation:\n\nhttps://github.com/postgres/postgres/blob/4fc6b6eefcf98f79211bb790ee890ebcb05c178d/src/bin/pg_upgrade/check.c#L665\n\n if (PQntuples(res) != 1 ||\natooid(PQgetvalue(res, 0, 1)) != BOOTSTRAP_SUPERUSERID)\npg_fatal(\"database user \\\"%s\\\" is not the install user\",\nos_info.user);\n\nAny reason to not inform the DBA the name of the install user here? 
Sure,\nit is almost certainly postgres, but it also seems like an easy win in\norder for them, and anyone they may ask for help, to know exactly the name\nof install user in the clusters should that end up being the issue.\nAdditionally, from what I can tell, if that check does fail (or any of the\nchecks really) it is not possible to tell whether the check was being\nperformed against the old or new server. The user does not know that\nchecks against the old server are performed first then checks against the\nnew one, and there are no banners saying \"checking old/new\"\n\nDavid J.",
"msg_date": "Wed, 27 Jul 2022 21:28:51 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Official Windows Installer and Documentation"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 09:28:51PM -0700, David G. Johnston wrote:\n> On Wed, Jul 27, 2022 at 8:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wrote:\n> > If EDB isn't adequately filling in the documentation for the behavior\n> > of their packaging, that's on them.\n> \n> Having now looked more closely at the pg_upgrade documentation,\n> I don't think this is exactly EDB's fault; it's text that should\n> never have been there to begin with. ISTM we need to simply rip out\n> lines 431..448 of pgupgrade.sgml, that is all the Windows-specific\n> text starting with\n> \n> For Windows users, you must be logged into an administrative account,\n> and\n> \n> That has got nothing to recommend it: we do not generally provide\n> platform-specific details in these man pages, and to the extent it\n> provides details, those details are likely to be wrong.\n> \n> \n> I mean, we do provide platform-specific details/examples, it's just that\n> platform is a source installed Linux platform (though pathless)\n> \n> Does the avoidance of dealing with other platforms also apply to NET STOP or do\n> you find that an acceptable variance? Or are you suggesting that basically all\n> O/S commands should be zapped? If not, then rewriting 442 to 446 to just be\n> the command seems worthwhile. I'd say pg_upgrade warrants an examples section\n> like pg_basebackup has (though obviously pg_upgrade is procedural).\n\nI have developed the attached patch to remove RUNAS and SET PATH,\nneither of which appear anywhere else in our docs.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Tue, 31 Oct 2023 11:16:29 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Official Windows Installer and Documentation"
},
{
"msg_contents": "On Tue, Oct 31, 2023 at 11:16:29AM -0400, Bruce Momjian wrote:\n> On Wed, Jul 27, 2022 at 09:28:51PM -0700, David G. Johnston wrote:\n> > On Wed, Jul 27, 2022 at 8:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > \n> > I wrote:\n> > > If EDB isn't adequately filling in the documentation for the behavior\n> > > of their packaging, that's on them.\n> > \n> > Having now looked more closely at the pg_upgrade documentation,\n> > I don't think this is exactly EDB's fault; it's text that should\n> > never have been there to begin with. ISTM we need to simply rip out\n> > lines 431..448 of pgupgrade.sgml, that is all the Windows-specific\n> > text starting with\n> > \n> > For Windows users, you must be logged into an administrative account,\n> > and\n> > \n> > That has got nothing to recommend it: we do not generally provide\n> > platform-specific details in these man pages, and to the extent it\n> > provides details, those details are likely to be wrong.\n> > \n> > \n> > I mean, we do provide platform-specific details/examples, it's just that\n> > platform is a source installed Linux platform (though pathless)\n> > \n> > Does the avoidance of dealing with other platforms also apply to NET STOP or do\n> > you find that an acceptable variance? Or are you suggesting that basically all\n> > O/S commands should be zapped? If not, then rewriting 442 to 446 to just be\n> > the command seems worthwhile. I'd say pg_upgrade warrants an examples section\n> > like pg_basebackup has (though obviously pg_upgrade is procedural).\n> \n> I have developed the attached patch to remove RUNAS and SET PATH,\n> neither of which appear anywhere else in our docs.\n\nSorry, fixed patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Tue, 31 Oct 2023 11:24:24 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Official Windows Installer and Documentation"
},
{
"msg_contents": "On Tue, Oct 31, 2023 at 11:24:24AM -0400, Bruce Momjian wrote:\n> On Tue, Oct 31, 2023 at 11:16:29AM -0400, Bruce Momjian wrote:\n> > On Wed, Jul 27, 2022 at 09:28:51PM -0700, David G. Johnston wrote:\n> > > On Wed, Jul 27, 2022 at 8:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > \n> > > I wrote:\n> > > > If EDB isn't adequately filling in the documentation for the behavior\n> > > > of their packaging, that's on them.\n> > > \n> > > Having now looked more closely at the pg_upgrade documentation,\n> > > I don't think this is exactly EDB's fault; it's text that should\n> > > never have been there to begin with. ISTM we need to simply rip out\n> > > lines 431..448 of pgupgrade.sgml, that is all the Windows-specific\n> > > text starting with\n> > > \n> > > For Windows users, you must be logged into an administrative account,\n> > > and\n> > > \n> > > That has got nothing to recommend it: we do not generally provide\n> > > platform-specific details in these man pages, and to the extent it\n> > > provides details, those details are likely to be wrong.\n> > > \n> > > \n> > > I mean, we do provide platform-specific details/examples, it's just that\n> > > platform is a source installed Linux platform (though pathless)\n> > > \n> > > Does the avoidance of dealing with other platforms also apply to NET STOP or do\n> > > you find that an acceptable variance? Or are you suggesting that basically all\n> > > O/S commands should be zapped? If not, then rewriting 442 to 446 to just be\n> > > the command seems worthwhile. I'd say pg_upgrade warrants an examples section\n> > > like pg_basebackup has (though obviously pg_upgrade is procedural).\n> > \n> > I have developed the attached patch to remove RUNAS and SET PATH,\n> > neither of which appear anywhere else in our docs.\n> \n> Sorry, fixed patch.\n\nPatch applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 13 Nov 2023 12:40:49 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Official Windows Installer and Documentation"
}
] |
[
{
"msg_contents": "Greetings,\n\n\nWhen we take backups from a synchronous standby replica, how can we get the accurate timestamp of the backup end time ? (As backup history files are not generated on standbys)\nFor example:\nthis is a part of control file after a backup (created using wal-g by calling pg_startbackup and pg_stopbackup),\nFake LSN counter for unlogged rels: 0/3E8\nMinimum recovery ending location: 28/28000B68\nMin recovery ending loc's timeline: 2\nBackup start location: 0/0\nBackup end location: 0/0\nEnd-of-backup record required: no\nhere I can see that minimum recovery ending location as LSN value, how can we get the timestamp of it ?\nThe backup label file looks like this.\nINFO: 2022/07/26 23:25:23.850621 ------------ LABLE FILE START ----------\nINFO: 2022/07/26 23:25:23.850628 START WAL LOCATION: 1D/F94C7320 (file 000000020000001D000000F9)\nINFO: 2022/07/26 23:25:23.850633 CHECKPOINT LOCATION: 1E/EDA8700\nINFO: 2022/07/26 23:25:23.850639 BACKUP METHOD: streamed\nINFO: 2022/07/26 23:25:23.850645 BACKUP FROM: standby\nINFO: 2022/07/26 23:25:23.850653 START TIME: 2022-07-26 23:10:27 GMT\nINFO: 2022/07/26 23:25:23.850659 LABEL: 2022-07-26 23:10:27.545378 +0000 UTC m=+0.167723956\nINFO: 2022/07/26 23:25:23.850665 START TIMELINE: 2\nINFO: 2022/07/26 23:25:23.850669 \nINFO: 2022/07/26 23:25:23.850676 ------------ LABLE FILE END ----------\n\nHow can we do PITR using timestamp if we don’t know the accurate timestamp of minimum recovery point ?\n\n\n\n\n\n\n\nThanks.\nBest,\nHarinath",
"msg_date": "Wed, 27 Jul 2022 23:20:24 -0700",
"msg_from": "Harinath Kanchu <hkanchu@apple.com>",
"msg_from_op": true,
"msg_subject": "How to get accurate backup end time when it is taken from synchronous\n standby ?"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 11:50 AM Harinath Kanchu <hkanchu@apple.com> wrote:\n>\n> Greetings,\n>\n>\n> When we take backups from a synchronous standby replica, how can we get the accurate timestamp of the backup end time ? (As backup history files are not generated on standbys)For example:\n> this is a part of control file after a backup (created using wal-g by calling pg_startbackup and pg_stopbackup),\n>\n> Fake LSN counter for unlogged rels: 0/3E8\n> Minimum recovery ending location: 28/28000B68\n> Min recovery ending loc's timeline: 2\n> Backup start location: 0/0\n> Backup end location: 0/0\n> End-of-backup record required: no\n>\n> here I can see that minimum recovery ending location as LSN value, how can we get the timestamp of it ?The backup label file looks like this.\n>\n> INFO: 2022/07/26 23:25:23.850621 ------------ LABLE FILE START ----------\n> INFO: 2022/07/26 23:25:23.850628 START WAL LOCATION: 1D/F94C7320 (file 000000020000001D000000F9)\n> INFO: 2022/07/26 23:25:23.850633 CHECKPOINT LOCATION: 1E/EDA8700\n> INFO: 2022/07/26 23:25:23.850639 BACKUP METHOD: streamed\n> INFO: 2022/07/26 23:25:23.850645 BACKUP FROM: standby\n> INFO: 2022/07/26 23:25:23.850653 START TIME: 2022-07-26 23:10:27 GMT\n> INFO: 2022/07/26 23:25:23.850659 LABEL: 2022-07-26 23:10:27.545378 +0000 UTC m=+0.167723956\n> INFO: 2022/07/26 23:25:23.850665 START TIMELINE: 2\n> INFO: 2022/07/26 23:25:23.850669\n> INFO: 2022/07/26 23:25:23.850676 ------------ LABLE FILE END ----------\n>\n>\n> How can we do PITR using timestamp if we don’t know the accurate timestamp of minimum recovery point ?\n\nYou can use any of the methods specified in the other thread [1].\nOtherwise, you can as well use recovery_target_lsn =\nmin_recovery_point for PITR target instead of relying on timestamps.\n\nI believe the other thread [1] can be merged into this thread for a\nfocussed, use-case based and meaningful discussion.\n\n[1] 
https://www.postgresql.org/message-id/CALj2ACVgFvOQQEoyuuZeceQrStGsePWvU1noU5aAvJNenv8qTQ%40mail.gma\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/\n\n\n",
"msg_date": "Thu, 28 Jul 2022 20:25:47 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to get accurate backup end time when it is taken from\n synchronous standby ?"
}
] |
[
{
"msg_contents": "Hello,\n\nIs there any way to get the timestamp of the transaction using LSN value ?\n\nFor example: \ncan we use the minimum recovery ending location in pg control file to get the minimum recovery timestamp ?\n\nMinimum recovery ending location: 28/28000B68\n\n\nThanks in advance,\n\nBest,\nHarinath.",
"msg_date": "Thu, 28 Jul 2022 00:47:27 -0700",
"msg_from": "Harinath Kanchu <hkanchu@apple.com>",
"msg_from_op": true,
"msg_subject": "Any way to get timestamp from LSN value ?"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 1:17 PM Harinath Kanchu <hkanchu@apple.com> wrote:\n>\n> Hello,\n>\n> Is there any way to get the timestamp of the transaction using LSN value ?\n>\n> For example:\n> can we use the minimum recovery ending location in pg control file to get the minimum recovery timestamp ?\n>\n> Minimum recovery ending location: 28/28000B68\n\nCan't pg_waldump be used? If you are on PG 15, you could as well use\npg_walinspect functions, something like below:\n\nselect * from pg_get_wal_records_info_till_end_of_wal(<<start_lsn>>)\nwhere record_type like '%COMMIT%'\n\n[1] https://www.postgresql.org/docs/15/pgwalinspect.html\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/\n\n\n",
"msg_date": "Thu, 28 Jul 2022 19:53:54 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Any way to get timestamp from LSN value ?"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI found that when wal_consistency_checking = brin is set, it may cause redo\nabort, all the standby-nodes lost, and the primary node can not be restart.\n\nThis bug exists in all versions of PostgreSQL.\n\nThe operation steps are as follows:\n\n 1. Create a primary instance, set wal_consistency_checking = brin, and\nstart the primary instance.\n\n initdb -D pg_test\n echo \"wal_consistency_checking = brin\" >> pg_test/postgresql.conf\n echo \"port=53320\" >> pg_test/postgresql.conf\n pg_ctl start -D pg_test -l pg_test.logfile\n\n 2. Create a standby instance.\n\n pg_basebackup -R -p 53320 -D pg_test_slave\n echo \"wal_consistency_checking = brin\" >>\npg_test_slave/postgresql.conf\n echo \"port=53321\" >> pg_test_slave/postgresql.conf\n pg_ctl start -D pg_test_slave -l pg_test_slave.logfile\n\n 3. Execute brin_redo_abort.sql through psql, and find that the standby\nmachine is lost.\n\n psql -p 53320 -f brin_redo_abort.sql\n\n 4. The standby instance is lost during redo, FATAL messages as follows:\n\n FATAL: inconsistent page found, rel 1663/12978/16387, forknum 0,\nblkno 2\n\n 5. The primary instance cannot be restarted through pg_ctl restart -mi.\n\n pg_ctl restart -D pg_test -mi -l pg_test.logfile\n\n 6. FATAL messages when restart primary instance as follows:\n\n FATAL: inconsistent page found, rel 1663/12978/16387, forknum 0,\nblkno 2\n\nI analyzed the reasons as follows:\n\n 1. When the revmap needs to be extended by brinRevmapExtend,\n we may set BRIN_EVACUATE_PAGE flag on a REGULAR_PAGE to prevent\n other concurrent backends from adding more BrinTuple to that page\n in brin_start_evacuating_page.\n\n 2. But, during redo-process, it is not needed to set BRIN_EVACUATE_PAGE\n flag on that REGULAR_PAGE after removing the old BrinTuple in\n brin_xlog_update, since no one will add BrinTuple to that Page at\n this time.\n\n 3. 
As a result, this will cause a FATAL message to be thrown in\n CheckXLogConsistency after redo, due to inconsistency checking of\n the BRIN_EVACUATE_PAGE flag, finally cause redo to abort.\n\n 4. Therefore, the BRIN_EVACUATE_PAGE flag should be cleared before\n CheckXLogConsistency.\n\n\nFor the above reasons, the patch file, sql file, shell script file, and the\nlog files are given in the attachment.\n\nBest Regards!\nHaiyang Wang",
"msg_date": "Thu, 28 Jul 2022 01:10:44 -0700",
"msg_from": "Haiyang Wang <wanghaiyang.001@bytedance.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] BUG FIX: redo will abort,\n due to inconsistent page found in BRIN_REGULAR_PAGE"
}
] |
[
{
"msg_contents": "Hi,\n\nFreebsd 13.0, so far used by CI, is out of support. I've changed the\nimage to be built against 13.1, so we can switch to that.\n\nI suspect it'd be better to remove the minor version numbers from the\nimage name, so that switches from 13.0 -> 13.1 don't require CI\nchanges. Any argument against?\n\nI can also see an argument for not having 13 in the image name, given\nthat the image is CI specific anyway? But perhaps we might want to have\na 13 and a 14 image for some debugging issue?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 28 Jul 2022 02:57:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "ci: update to freebsd 13.1 / remove minor versions from image names"
},
{
"msg_contents": "On Thu, 28 Jul 2022 at 11:57, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> Freebsd 13.0, so far used by CI, is out of support. I've changed the\n> image to be built against 13.1, so we can switch to that.\n>\n> I suspect it'd be better to remove the minor version numbers from the\n> image name, so that switches from 13.0 -> 13.1 don't require CI\n> changes. Any argument against?\n>\n> I can also see an argument for not having 13 in the image name, given\n> that the image is CI specific anyway? But perhaps we might want to have\n> a 13 and a 14 image for some debugging issue?\n\nHas this change in the BSD configuration been applied today? I see\nfailures in the cfbot builds of 4 different patches [0..3] that all\nfail in 033_replay_tsp_drops.pl with the same output:\n\n---\n\n# poll_query_until timed out executing this query:\n# SELECT '0/40EAXXX' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('standby2_WAL_LOG', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n#\n# with stderr:\n# Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 29 just after 1.\nt/033_replay_tsp_drops.pl ............\nDubious, test returned 29 (wstat 7424, 0x1d00)\nAll 1 subtests passed\n\n---\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://cirrus-ci.com/task/5147001137397760?logs=test_world#L2631-L2662\n[1] https://cirrus-ci.com/task/4960990331666432?logs=test_world#L2631-L2662\n[2] https://cirrus-ci.com/task/5012678384025600?logs=test_world#L2631-L2662\n[3] https://cirrus-ci.com/task/5147001137397760?logs=test_world#L2631-L2662\n\n\n",
"msg_date": "Thu, 28 Jul 2022 19:29:43 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ci: update to freebsd 13.1 / remove minor versions from image\n names"
},
{
"msg_contents": "Hi,\n\nOn July 28, 2022 7:29:43 PM GMT+02:00, Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n>On Thu, 28 Jul 2022 at 11:57, Andres Freund <andres@anarazel.de> wrote:\n>>\n>> Hi,\n>>\n>> Freebsd 13.0, so far used by CI, is out of support. I've changed the\n>> image to be built against 13.1, so we can switch to that.\n>>\n>> I suspect it'd be better to remove the minor version numbers from the\n>> image name, so that switches from 13.0 -> 13.1 don't require CI\n>> changes. Any argument against?\n>>\n>> I can also see an argument for not having 13 in the image name, given\n>> that the image is CI specific anyway? But perhaps we might want to have\n>> a 13 and a 14 image for some debugging issue?\n>\n>Has this change in the BSD configuration been applied today? I see\n>failures in the cfbot builds of 4 different patches [0..3] that all\n>fail in 033_replay_tsp_drops.pl with the same output:\n\nNo, this hasn't yet been applied.\n\n\n># poll_query_until timed out executing this query:\n># SELECT '0/40EAXXX' <= replay_lsn AND state = 'streaming'\n># FROM pg_catalog.pg_stat_replication\n># WHERE application_name IN ('standby2_WAL_LOG', 'walreceiver')\n># expecting this output:\n># t\n># last actual query output:\n>#\n># with stderr:\n># Tests were run but no plan was declared and done_testing() was not seen.\n># Looks like your test exited with 29 just after 1.\n>t/033_replay_tsp_drops.pl ............\n>Dubious, test returned 29 (wstat 7424, 0x1d00)\n>All 1 subtests passed\n\nThat seems more likely related to the recent changes in this area.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Thu, 28 Jul 2022 19:31:53 +0200",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ci: update to freebsd 13.1 / remove minor versions from image names"
},
{
"msg_contents": "On Thu, 28 Jul 2022 at 19:31, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On July 28, 2022 7:29:43 PM GMT+02:00, Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n> >On Thu, 28 Jul 2022 at 11:57, Andres Freund <andres@anarazel.de> wrote:\n> >>\n> >> Hi,\n> >>\n> >> Freebsd 13.0, so far used by CI, is out of support. I've changed the\n> >> image to be built against 13.1, so we can switch to that.\n> >>\n> >> I suspect it'd be better to remove the minor version numbers from the\n> >> image name, so that switches from 13.0 -> 13.1 don't require CI\n> >> changes. Any argument against?\n> >>\n> >> I can also see an argument for not having 13 in the image name, given\n> >> that the image is CI specific anyway? But perhaps we might want to have\n> >> a 13 and a 14 image for some debugging issue?\n> >\n> >Has this change in the BSD configuration been applied today? I see\n> >failures in the cfbot builds of 4 different patches [0..3] that all\n> >fail in 033_replay_tsp_drops.pl with the same output:\n>\n> No, this hasn't yet been applied.\n>\n> ># poll_query_until timed out executing this query:\n> ># SELECT '0/40EAXXX' <= replay_lsn AND state = 'streaming'\n> ># FROM pg_catalog.pg_stat_replication\n> ># WHERE application_name IN ('standby2_WAL_LOG', 'walreceiver')\n> ># expecting this output:\n> ># t\n> ># last actual query output:\n> >#\n> ># with stderr:\n> ># Tests were run but no plan was declared and done_testing() was not seen.\n> ># Looks like your test exited with 29 just after 1.\n> >t/033_replay_tsp_drops.pl ............\n> >Dubious, test returned 29 (wstat 7424, 0x1d00)\n> >All 1 subtests passed\n>\n> That seems more likely related to the recent changes in this area.\n\nHmm, I should've looked further than just this, so I would've realised\nthat this was a new test. I guess I'll go bother Alvaro on the\nrelevant thread instead.\n\nThanks for the quick response.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Thu, 28 Jul 2022 19:45:00 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ci: update to freebsd 13.1 / remove minor versions from image\n names"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-28 02:57:04 -0700, Andres Freund wrote:\n> Freebsd 13.0, so far used by CI, is out of support. I've changed the\n> image to be built against 13.1, so we can switch to that.\n\nI pushed that bit.\n\n\n> I suspect it'd be better to remove the minor version numbers from the\n> image name, so that switches from 13.0 -> 13.1 don't require CI\n> changes. Any argument against?\n\n> I can also see an argument for not having 13 in the image name, given\n> that the image is CI specific anyway? But perhaps we might want to have\n> a 13 and a 14 image for some debugging issue?\n\nBut not yet this, as there've been no comments so far.\n\n- Andres\n\n\n",
"msg_date": "Sun, 31 Jul 2022 12:43:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ci: update to freebsd 13.1 / remove minor versions from image\n names"
},
{
"msg_contents": "On Mon, Aug 1, 2022 at 7:43 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-07-28 02:57:04 -0700, Andres Freund wrote:\n> > Freebsd 13.0, so far used by CI, is out of support. I've changed the\n> > image to be built against 13.1, so we can switch to that.\n>\n> I pushed that bit.\n\nThanks, belated +1.\n\n> > I suspect it'd be better to remove the minor version numbers from the\n> > image name, so that switches from 13.0 -> 13.1 don't require CI\n> > changes. Any argument against?\n\nYeah, that makes sense; it'd remove the need for commits like that.\nFor comparison, the Debian image is Bullseye AKA 11.x without the x in\nthe name.\n\n> > I can also see an argument for not having 13 in the image name, given\n> > that the image is CI specific anyway? But perhaps we might want to have\n> > a 13 and a 14 image for some debugging issue?\n\nI'm not sure about this. I could imagine a naming scheme that has\nsensible options available as pg-ci-{debian,freebsd,...}-default, and\nthose images are currently the same as\npg-ci-{debian-11,freebsd-13,...} but can be re-pointed as appropriate\nwithout having to modify the .cirrus.yml, and someone investigating a\nproblem where they really care about the major version could change\ntheir .cirrus.yml to point to the versioned name. And likewise for\nWindows containers; I'm not sure I understand how Cirrus's macOS\nimages work, but maybe there too. The problem would be if, for some\nreason, you finish up needing to synchronise a change between the\n.cirrus.yml file and the image (like, you need to run slightly\ndifferent commands for the build or something). I don't have a\nconcrete example, but I have a strange feeling in my big toe that it'd\nbe better to state the major version explicitly, and have a few\navailable...\n\n\n",
"msg_date": "Mon, 1 Aug 2022 10:07:32 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ci: update to freebsd 13.1 / remove minor versions from image\n names"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-01 10:07:32 +1200, Thomas Munro wrote:\n> > > I suspect it'd be better to remove the minor version numbers from the\n> > > image name, so that switches from 13.0 -> 13.1 don't require CI\n> > > changes. Any argument against?\n> \n> Yeah, that makes sense; it'd remove the need for commits like that.\n> For comparison, the Debian image is Bullseye AKA 11.x without the x in\n> the name.\n\nCool, doing that in https://github.com/anarazel/pg-vm-images/pull/15\n\nThere's now \"freebsd-13\" and \"netbsd-9-postgres\", \"openbsd-7-postgres\". The\nlatter two include the -postgres because we have to generate the \"base\" image\nourselves, because neither net nor openbsd provide a gcp image themselves. Not\nthat we use net/openbsd images in PG itself yet (I'm about to merge\nopen/netbsd support in the meson branch though).\n\n\n> > > I can also see an argument for not having 13 in the image name, given\n> > > that the image is CI specific anyway? But perhaps we might want to have\n> > > a 13 and a 14 image for some debugging issue?\n> \n> I'm not sure about this. I could imagine a naming scheme that has\n> sensible options available as pg-ci-{debian,freebsd,...}-default, and\n> those images are currently the same as\n> pg-ci-{debian-11,freebsd-13,...} but can be re-pointed as appropriate\n> without having to modify the .cirrus.yml, and someone investigating a\n> problem where they really care about the major version could change\n> their .cirrus.yml to point to the versioned name.\n\nThere's such a concept for gcp, namely \"image families\". But we already use\nthat for pg-ci-bullseye etc - each individual image has a name including the\ndate. There's only one level of families afaiu. We of course could manually\ncopy images, but that's probably not worth it (and would come with storage\ncosts).\n\nE.g. 
pg-ci-bullseye currently points to pg-ci-bullseye-2022-07-31t21-31.\n\n\n> I don't have a concrete example, but I have a strange feeling in my big toe\n> that it'd be better to state the major version explicitly, and have a few\n> available...\n\nFWIW, at the moment all images are deleted after two weeks ([1]). We probably\ncan make that smarter and not delete the newest image for a family, even if\nthat image is older than two weeks. Not that the gcp API seems to make that\neasy.\n\nGreetings,\n\nAndres Freund\n\n[1] https://github.com/anarazel/pg-vm-images/blob/main/.cirrus.yml#L209\n\n\n",
"msg_date": "Sun, 31 Jul 2022 16:27:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ci: update to freebsd 13.1 / remove minor versions from image\n names"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-31 16:27:37 -0700, Andres Freund wrote:\n> On 2022-08-01 10:07:32 +1200, Thomas Munro wrote:\n> > > > I suspect it'd be better to remove the minor version numbers from the\n> > > > image name, so that switches from 13.0 -> 13.1 don't require CI\n> > > > changes. Any argument against?\n> > \n> > Yeah, that makes sense; it'd remove the need for commits like that.\n> > For comparison, the Debian image is Bullseye AKA 11.x without the x in\n> > the name.\n> \n> Cool, doing that in https://github.com/anarazel/pg-vm-images/pull/15\n\nThat worked, and now I've updated the PG .cirrus.yml to point to that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 31 Jul 2022 19:02:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ci: update to freebsd 13.1 / remove minor versions from image\n names"
}
] |
[
{
"msg_contents": "Starting new thread with updated patch to avoid confusion, as\nmentioned by David Steele on the original thread:\nOriginal messageid: 20201118020418.GA13408@alvherre.pgsql\nOn Wed, 18 Nov 2020 at 02:04, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2020-Nov-17, Simon Riggs wrote:\n>\n> > As an additional optimization, if we do find a row that needs freezing\n> > on a data block, we should simply freeze *all* row versions on the\n> > page, not just the ones below the selected cutoff. This is justified\n> > since writing the block is the biggest cost and it doesn't make much\n> > sense to leave a few rows unfrozen on a block that we are dirtying.\n>\n> Yeah. We've had earlier proposals to use high and low watermarks: if any\n> tuple is past the high watermark, then freeze all tuples that are past\n> the low watermark. However this is ancient thinking (prior to\n> HEAP_XMIN_FROZEN) and we don't need the low watermark to be different\n> from zero, since the original xid is retained anyway.\n>\n> So +1 for this idea.\n\nUpdated patch attached.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Thu, 28 Jul 2022 14:35:36 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Maximize page freezing"
},
{
"msg_contents": "On Thu, 28 Jul 2022 at 15:36, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> Starting new thread with updated patch to avoid confusion, as\n> mentioned by David Steele on the original thread:\n> Original messageid: 20201118020418.GA13408@alvherre.pgsql\n> On Wed, 18 Nov 2020 at 02:04, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > On 2020-Nov-17, Simon Riggs wrote:\n> >\n> > > As an additional optimization, if we do find a row that needs freezing\n> > > on a data block, we should simply freeze *all* row versions on the\n> > > page, not just the ones below the selected cutoff. This is justified\n> > > since writing the block is the biggest cost and it doesn't make much\n> > > sense to leave a few rows unfrozen on a block that we are dirtying.\n> >\n> > Yeah. We've had earlier proposals to use high and low watermarks: if any\n> > tuple is past the high watermark, then freeze all tuples that are past\n> > the low watermark. However this is ancient thinking (prior to\n> > HEAP_XMIN_FROZEN) and we don't need the low watermark to be different\n> > from zero, since the original xid is retained anyway.\n> >\n> > So +1 for this idea.\n>\n> Updated patch attached.\n\nGreat idea, yet this patch seems to only freeze those tuples that are\nlocated after the first to-be-frozen tuple. It should probably\nre-visit earlier live tuples to potentially freeze those as well.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Thu, 28 Jul 2022 15:55:46 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Maximize page freezing"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 6:56 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Great idea, yet this patch seems to only freeze those tuples that are\n> located after the first to-be-frozen tuple. It should probably\n> re-visit earlier live tuples to potentially freeze those as well.\n\nI have a big patch set pending that does this (which I dubbed\n\"page-level freezing\"), plus a bunch of other things that control the\noverhead. Although the basic idea of freezing all of the tuples on a\npage together appears in earlier patching that were posted. These were\nthings that didn't make it into Postgres 15.\n\nI should be able to post something in a couple of weeks.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 28 Jul 2022 12:57:14 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Maximize page freezing"
},
{
"msg_contents": "On Thu, 28 Jul 2022 at 20:57, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Jul 28, 2022 at 6:56 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > Great idea, yet this patch seems to only freeze those tuples that are\n> > located after the first to-be-frozen tuple. It should probably\n> > re-visit earlier live tuples to potentially freeze those as well.\n>\n> I have a big patch set pending that does this (which I dubbed\n> \"page-level freezing\"), plus a bunch of other things that control the\n> overhead. Although the basic idea of freezing all of the tuples on a\n> page together appears in earlier patching that were posted. These were\n> things that didn't make it into Postgres 15.\n\nYes, my patch from 2020 was never reviewed, which is why I was\nresubmitting here.\n\n> I should be able to post something in a couple of weeks.\n\nHow do you see that affecting this thread?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 29 Jul 2022 13:55:03 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Maximize page freezing"
},
{
"msg_contents": "On Thu, 28 Jul 2022 at 14:55, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Thu, 28 Jul 2022 at 15:36, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >\n> > Starting new thread with updated patch to avoid confusion, as\n> > mentioned by David Steele on the original thread:\n> > Original messageid: 20201118020418.GA13408@alvherre.pgsql\n> > On Wed, 18 Nov 2020 at 02:04, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > > On 2020-Nov-17, Simon Riggs wrote:\n> > >\n> > > > As an additional optimization, if we do find a row that needs freezing\n> > > > on a data block, we should simply freeze *all* row versions on the\n> > > > page, not just the ones below the selected cutoff. This is justified\n> > > > since writing the block is the biggest cost and it doesn't make much\n> > > > sense to leave a few rows unfrozen on a block that we are dirtying.\n> > >\n> > > Yeah. We've had earlier proposals to use high and low watermarks: if any\n> > > tuple is past the high watermark, then freeze all tuples that are past\n> > > the low watermark. However this is ancient thinking (prior to\n> > > HEAP_XMIN_FROZEN) and we don't need the low watermark to be different\n> > > from zero, since the original xid is retained anyway.\n> > >\n> > > So +1 for this idea.\n> >\n> > Updated patch attached.\n>\n> Great idea, yet this patch seems to only freeze those tuples that are\n> located after the first to-be-frozen tuple. It should probably\n> re-visit earlier live tuples to potentially freeze those as well.\n\nLike this?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Fri, 29 Jul 2022 15:37:58 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Maximize page freezing"
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 5:55 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> > I should be able to post something in a couple of weeks.\n>\n> How do you see that affecting this thread?\n\nWell, it's clearly duplicative, at least in part. That in itself\ndoesn't mean much, but there are some general questions (that apply to\nany variant of proactive/batched freezing), particularly around the\nadded overhead, and the question of whether or not we get to advance\nrelfrozenxid substantially in return for that cost. Those parts are\nquite tricky.\n\nI have every intention of addressing these thorny questions in my\nupcoming patch set, which actually does far more than change the rules\nabout when and how we freeze -- changing the mechanism itself is very\nmuch the easy part. I'm taking a holistic approach that involves\nmaking an up-front decision about freezing strategy based on the\nobserved characteristics of the table, driven by what we see in the\nvisibility map at the start.\n\nSimilar questions will also apply to this patch, even though it isn't\nas aggressive (your patch doesn't trigger freezing when a page is\nabout to be set all-visible in order to make sure that it can be set\nall-frozen instead). You still want to give the user a clear benefit\nfor any added overhead. It needs a great deal of performance\nvalidation, too.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 29 Jul 2022 11:48:27 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Maximize page freezing"
},
{
"msg_contents": "On Fri, 29 Jul 2022 at 16:38, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Thu, 28 Jul 2022 at 14:55, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > Great idea, yet this patch seems to only freeze those tuples that are\n> > located after the first to-be-frozen tuple. It should probably\n> > re-visit earlier live tuples to potentially freeze those as well.\n>\n> Like this?\n\nThat wasn't quite what I imagined. In your patch, heap_page_prune is\ndisabled after the first frozen tuple, which makes the retry mechanism\nwith the HTSV check loop forever because it expects that tuple to be\nvacuumed.\n\nI was thinking more in the line of \"do a backtrack in a specialized\ncode block when entering max_freeze_page mode\" (without using\n'retry'), though I'm not sure whether that's the best option\navailable.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Fri, 29 Jul 2022 22:49:38 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Maximize page freezing"
}
] |
[
{
"msg_contents": "In commits 7c34555f8/e1bd4990b, I added a new role used by a TAP\nscript but neglected the auth_extra incantation needed to allow\nlogin as that role. This should have resulted in SSPI auth\nfailures on certain Windows configurations, and indeed it did\non drongo's next run in the v15 branch:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2022-07-27%2022%3A01%3A47\n\nHowever, its immediately-following run on HEAD succeeded,\nthough I'd obviously not had time to put in the fix yet:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2022-07-27%2022%3A30%3A27\n\nHow can that be? Have we somehow broken SSPI authentication\nin HEAD?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Jul 2022 10:24:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "How come drongo didn't fail authentication here?"
},
{
"msg_contents": "\nOn 2022-07-28 Th 10:24, Tom Lane wrote:\n> In commits 7c34555f8/e1bd4990b, I added a new role used by a TAP\n> script but neglected the auth_extra incantation needed to allow\n> login as that role. This should have resulted in SSPI auth\n> failures on certain Windows configurations, and indeed it did\n> on drongo's next run in the v15 branch:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2022-07-27%2022%3A01%3A47\n>\n> However, its immediately-following run on HEAD succeeded,\n> though I'd obviously not had time to put in the fix yet:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2022-07-27%2022%3A30%3A27\n>\n> How can that be? Have we somehow broken SSPI authentication\n> in HEAD?\n>\n> \t\t\t\n\n\nNothing is broken. On HEAD drongo uses Unix sockets.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 28 Jul 2022 10:36:53 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: How come drongo didn't fail authentication here?"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-07-28 Th 10:24, Tom Lane wrote:\n>> How can that be? Have we somehow broken SSPI authentication\n>> in HEAD?\n\n> Nothing is broken. On HEAD drongo uses Unix sockets.\n\nI see. Seems like we've created a gotcha for ourselves:\na test script can look perfectly fine in Unix-based testing,\nand even in Windows CI, and then fail when it hits the back\nbranches in the buildfarm. Is it worth doing something to\ncause the lack of a valid auth_extra spec to fail on Unix?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Jul 2022 10:55:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: How come drongo didn't fail authentication here?"
},
{
"msg_contents": "\nOn 2022-07-28 Th 10:55, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 2022-07-28 Th 10:24, Tom Lane wrote:\n>>> How can that be? Have we somehow broken SSPI authentication\n>>> in HEAD?\n>> Nothing is broken. On HEAD drongo uses Unix sockets.\n> I see. Seems like we've created a gotcha for ourselves:\n> a test script can look perfectly fine in Unix-based testing,\n> and even in Windows CI, and then fail when it hits the back\n> branches in the buildfarm. Is it worth doing something to\n> cause the lack of a valid auth_extra spec to fail on Unix?\n>\n> \t\t\t\n\n\nMaybe we should just have a windows testing instance that doesn't use\nUnix sockets at all.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 28 Jul 2022 11:29:44 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: How come drongo didn't fail authentication here?"
}
] |
[
{
"msg_contents": "Hi,\n\nNext up is the large list of Needs Review. This part 1 should include\nentries as old or older than seven commitfests running.\n\nMy heuristics for classifying these continue to evolve as I go, and\nthere's a lot to read, so please let me know if I've made any mistakes.\n\n= Stalled Patches, Recommend Return =\n\nThese are stalled and I recommend outright that we return them. We don't\nhave a separate status for \"needs more interest\" (working on a patch) so\nI'd just RwF, with a note explaining that what is actually needed to\ncontinue isn't more code work but more coalition building.\n\n- Implement INSERT SET syntax\n https://commitfest.postgresql.org/38/2218/\n\nA recent author rebase this CF, but unfortunately I think the the real\nissue is just a lack of review interest. It's been suggested for return\nfor a few CFs now.\n\n- Fix up partitionwise join on how equi-join conditions between the\npartition keys are identified\n https://commitfest.postgresql.org/38/2266/\n\nIt looks like this one was Returned with Feedback but did not actually\nhave feedback, which may have caused confusion. (Solid motivation for a\nnew close status.) I don't think there's been any review since 2020.\n\n- New default role allowing to change per-role/database settings\n https://commitfest.postgresql.org/38/2918/\n\nStalled on review in January, and needs a rebase.\n\n= Stalled Patches, Need Help =\n\nThese are stalled but seem to have interest. They need help to either\nget them out of the rut, or else be Returned so that the author can try\na different approach instead of perma-rebasing. I plan to move them to\nthe next CF unless someone speaks up to say otherwise.\n\n- Show shared filesets in pg_ls_tmpdir (pg_ls_* functions for showing\nmetadata and recurse)\n https://commitfest.postgresql.org/38/2377/\n\n From a quick skim it looks like there was a flurry of initial positive\nfeedback followed by a stall and then some design whiplash. 
This thread\nneeds help to avoid burnout, I think.\n\n- Make message at end-of-recovery less scary\n https://commitfest.postgresql.org/38/2490/\n\nThis got marked RfC twice, fell back out, and has been stuck in a rebase\nloop.\n\n- Fix behavior of geo_ops when NaN is involved\n https://commitfest.postgresql.org/38/2710/\n\nStuck in a half-committed state, which is tricky. Could maybe use a\nreframing or recap (or a new thread?).\n\n- Add extra statistics to explain for Nested Loop\n https://commitfest.postgresql.org/38/2765/\n\nI think the author is hoping for help with testing and performance\ncharacterization.\n\n- CREATE INDEX CONCURRENTLY on partitioned table\n https://commitfest.postgresql.org/38/2815/\n\nThis had an author switch since last CF, so I think it'd be\ninappropriate to close it out this time around, but it needs assistance.\n\n- New Table Access Methods for Multi and Single Inserts\n https://commitfest.postgresql.org/38/2871/\n\nAlthough there was a brief flicker in March, I think this one has\nstalled out and is just about ready to be returned.\n\n- Fix pg_rewind race condition just after promotion\n https://commitfest.postgresql.org/38/2864/\n\nSeems like an important fix, but it's silent? Does it need to be\npromoted to an Open Issue?\n\n- pg_stat_statements and \"IN\" conditions\n https://commitfest.postgresql.org/38/2837/\n\nSome good, recent interest. Last review in March.\n\n- Function to log backtrace of postgres processes\n https://commitfest.postgresql.org/38/2863/\n\nThis is just starting to stall; I think it needs some assistance.\n\n- Allow batched insert during cross-partition updates\n https://commitfest.postgresql.org/38/2992/\n\nWas RfC (twice), then dropped out, now it's stuck rebasing. 
Last\nsubstantial review in 2021.\n\n= Active Patches =\n\nThe following are actively being worked and I expect to move them to\nnext CF:\n\n- session variables, LET command\n- Remove self join on a unique column\n- Incremental Materialized View Maintenance\n- More scalable multixacts buffers and locking\n- Fast COPY FROM command for the foreign tables\n- Extended statistics / estimate Var op Var clauses\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Thu, 28 Jul 2022 14:28:23 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "[Commitfest 2022-07] Patch Triage: Needs Review, Part 1"
},
{
"msg_contents": "Jacob Champion <jchampion@timescale.com> writes:\n> Next up is the large list of Needs Review. This part 1 should include\n> entries as old or older than seven commitfests running.\n\nI'm just commenting on a couple that I've been involved with.\n\n> = Stalled Patches, Recommend Return =\n\n> - Fix up partitionwise join on how equi-join conditions between the\n> partition keys are identified\n> https://commitfest.postgresql.org/38/2266/\n> It looks like this one was Returned with Feedback but did not actually\n> have feedback, which may have caused confusion. (Solid motivation for a\n> new close status.) I don't think there's been any review since 2020.\n\nYeah, there was an earlier discussion of this same patch in some\nprevious CF-closing thread, IIRC, but I can't find that right now.\nI think it basically is stuck behind the outer-join-variables work\nI'm pursuing at https://commitfest.postgresql.org/39/3755/ ... and\nwhen/if that lands, the present patch probably won't be anywhere\nnear what we want anyway. +1 for RWF.\n\n> = Stalled Patches, Need Help =\n\n> - Fix behavior of geo_ops when NaN is involved\n> https://commitfest.postgresql.org/38/2710/\n\n> Stuck in a half-committed state, which is tricky. Could maybe use a\n> reframing or recap (or a new thread?).\n\nWe fixed a couple of easy cases but then realized that the hard cases\nare hard. I don't have much faith that the current patch is going to\nlead to anything committable, and it doesn't look like anyone has the\nappetite to put in a lot of work on the topic. I'd vote for RWF.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Jul 2022 17:50:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Needs Review, Part 1"
},
{
"msg_contents": "Hi,\n\nOn Thu, Jul 28, 2022 at 02:28:23PM -0700, Jacob Champion wrote:\n>\n> = Stalled Patches, Need Help =\n> [...]\n> - Add extra statistics to explain for Nested Loop\n> https://commitfest.postgresql.org/38/2765/\n>\n> I think the author is hoping for help with testing and performance\n> characterization.\n\nAs I mentioned in [1], this patch breaks the current assumption that\nINSTRUMENT_ALL will lead to statement-level metrics that are generally useful.\nAccording to the benchmark, the proposed patch would add a 1.5% overhead for\npg_stat_statements or any other similar extension that relies on INSTRUMENT_ALL\nfor no additional information, and I don't think it's acceptable.\n\nI'm still not sure of what is the best way to fix that, but clearly something\nhas to be done, ideally without requiring every single pg_stat_statements-like\nextension to be modified.\n\n> The following are actively being worked and I expect to move them to\n> next CF:\n>\n> - session variables, LET command\n\nYes please.\n\n\n",
"msg_date": "Fri, 29 Jul 2022 14:38:13 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Needs Review, Part 1"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 2:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > - Fix behavior of geo_ops when NaN is involved\n> > https://commitfest.postgresql.org/38/2710/\n>\n> > Stuck in a half-committed state, which is tricky. Could maybe use a\n> > reframing or recap (or a new thread?).\n>\n> We fixed a couple of easy cases but then realized that the hard cases\n> are hard. I don't have much faith that the current patch is going to\n> lead to anything committable, and it doesn't look like anyone has the\n> appetite to put in a lot of work on the topic. I'd vote for RWF.\n\nBarring any competing votes, that's what I'll do, then. Thanks!\n\n--Jacob\n\n\n",
"msg_date": "Fri, 29 Jul 2022 09:57:08 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Needs Review, Part 1"
},
{
"msg_contents": "Hi Julien,\n\nOn Thu, Jul 28, 2022 at 11:38 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > - Add extra statistics to explain for Nested Loop\n> > https://commitfest.postgresql.org/38/2765/\n> >\n> > [...]\n>\n> As I mentioned in [1], this patch breaks the current assumption that\n> INSTRUMENT_ALL will lead to statement-level metrics that are generally useful.\n> According to the benchmark, the proposed patch would add a 1.5% overhead for\n> pg_stat_statements or any other similar extension that relies on INSTRUMENT_ALL\n> for no additional information, and I don't think it's acceptable.\n\n(I'm missing the [1] link.) From skimming the end of the thread, it\nlooks like Ekaterina responded to that concern and was hoping for\nfeedback. If you still think it doesn't go far enough, would you mind\ndropping a note in the thread? Then we can mark WoA and go from there.\n\nThanks!\n--Jacob\n\n\n",
"msg_date": "Fri, 29 Jul 2022 10:08:08 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Needs Review, Part 1"
},
{
"msg_contents": "Hi Jacob,\n\nOn Fri, Jul 29, 2022 at 10:08:08AM -0700, Jacob Champion wrote:\n> \n> On Thu, Jul 28, 2022 at 11:38 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > - Add extra statistics to explain for Nested Loop\n> > > https://commitfest.postgresql.org/38/2765/\n> > >\n> > > [...]\n> >\n> > As I mentioned in [1], this patch breaks the current assumption that\n> > INSTRUMENT_ALL will lead to statement-level metrics that are generally useful.\n> > According to the benchmark, the proposed patch would add a 1.5% overhead for\n> > pg_stat_statements or any other similar extension that relies on INSTRUMENT_ALL\n> > for no additional information, and I don't think it's acceptable.\n> \n> (I'm missing the [1] link.)\n\nAh sorry I forgot to include it, here it's:\nhttps://www.postgresql.org/message-id/20220307050830.zahd57wbvezu2d6r%40jrouhaud.\n\n>From skimming the end of the thread, it\n> looks like Ekaterina responded to that concern and was hoping for\n> feedback. If you still think it doesn't go far enough, would you mind\n> dropping a note in the thread? Then we can mark WoA and go from there.\n\nI think that the problem still exist, unfortunately the benchmark done only\ntests various EXPLAIN commands and not normal query execution with pgss\nenabled. I will double check and reply on the thread tomorrow!\n\n\n",
"msg_date": "Sat, 30 Jul 2022 01:21:40 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Needs Review, Part 1"
}
] |
[
{
"msg_contents": "During a recent review, I happened to notice that in the file\nsrc/backend/catalog/pg_publication.c the two functions\n'is_publishable_class' and 'is_publishable_relation' used to be [1]\nadjacent in the source code. This is also evident in\n'is_publishable_relation' because the wording of the function comment\njust refers to the prior function (e.g. \"Another variant of this,\ntaking a Relation.\") and also this just \"wraps\" the prior function.\n\nIt seems that sometime last year another commit [2] inadvertently\ninserted another function ('filter_partitions') between those\naforementioned, and that means the \"Another variant of this\" comment\ndoesn't make much sense anymore.\n\nPSA a patch just to put those original 2 functions back together\nagain. No code is \"changed\" - only moved.\n\n------\n\n[1] https://github.com/postgres/postgres/blame/f0b051e322d530a340e62f2ae16d99acdbcb3d05/src/backend/catalog/pg_publication.c\n[2] https://github.com/postgres/postgres/commit/5a2832465fd8984d089e8c44c094e6900d987fcd#diff-1ecc273c7808aba21749ea2718482c153cd6c4dc9d90c69124f3a7c5963b2b4a\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Fri, 29 Jul 2022 09:17:02 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Functions 'is_publishable_class' and 'is_publishable_relation' should\n stay together."
},
{
"msg_contents": "On Friday, July 29, 2022 7:17 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> During a recent review, I happened to notice that in the file\r\n> src/backend/catalog/pg_publication.c the two functions 'is_publishable_class'\r\n> and 'is_publishable_relation' used to be [1] adjacent in the source code. This is\r\n> also evident in 'is_publishable_relation' because the wording of the function\r\n> comment just refers to the prior function (e.g. \"Another variant of this, taking a\r\n> Relation.\") and also this just \"wraps\" the prior function.\r\n> \r\n> It seems that sometime last year another commit [2] inadvertently inserted\r\n> another function ('filter_partitions') between those aforementioned, and that\r\n> means the \"Another variant of this\" comment doesn't make much sense\r\n> anymore.\r\n\r\nAgreed.\r\n\r\nPersonally, I think it would be better to modify the comments of\r\nis_publishable_relation and directly mention the function name it refers to\r\nwhich can prevent future code to break it again.\r\n\r\nBesides,\r\n\r\n/*\r\n * Returns if relation represented by oid and Form_pg_class entry\r\n * is publishable.\r\n *\r\n * Does same checks as the above,\r\n\r\nThis comment was also intended to refer to the function\r\ncheck_publication_add_relation(), but is invalid now because there is another\r\nfunction check_publication_add_schema() inserted between them. We'd better fix\r\nthis as well.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n",
"msg_date": "Fri, 29 Jul 2022 01:55:38 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Functions 'is_publishable_class' and 'is_publishable_relation'\n should stay together."
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 11:55 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, July 29, 2022 7:17 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > During a recent review, I happened to notice that in the file\n> > src/backend/catalog/pg_publication.c the two functions 'is_publishable_class'\n> > and 'is_publishable_relation' used to be [1] adjacent in the source code. This is\n> > also evident in 'is_publishable_relation' because the wording of the function\n> > comment just refers to the prior function (e.g. \"Another variant of this, taking a\n> > Relation.\") and also this just \"wraps\" the prior function.\n> >\n> > It seems that sometime last year another commit [2] inadvertently inserted\n> > another function ('filter_partitions') between those aforementioned, and that\n> > means the \"Another variant of this\" comment doesn't make much sense\n> > anymore.\n>\n> Agreed.\n>\n> Personally, I think it would be better to modify the comments of\n> is_publishable_relation and directly mention the function name it refers to\n> which can prevent future code to break it again.\n\nI'd intended only to make the minimal changes necessary to set things\nright again, but your way is better.\n\n>\n> Besides,\n>\n> /*\n> * Returns if relation represented by oid and Form_pg_class entry\n> * is publishable.\n> *\n> * Does same checks as the above,\n>\n> This comment was also intended to refer to the function\n> check_publication_add_relation(), but is invalid now because there is another\n> function check_publication_add_schema() inserted between them. We'd better fix\n> this as well.\n\nThanks, I'll post another patch later to address that one too.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 29 Jul 2022 12:56:12 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Functions 'is_publishable_class' and 'is_publishable_relation'\n should stay together."
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 8:26 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Jul 29, 2022 at 11:55 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Friday, July 29, 2022 7:17 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > During a recent review, I happened to notice that in the file\n> > > src/backend/catalog/pg_publication.c the two functions 'is_publishable_class'\n> > > and 'is_publishable_relation' used to be [1] adjacent in the source code. This is\n> > > also evident in 'is_publishable_relation' because the wording of the function\n> > > comment just refers to the prior function (e.g. \"Another variant of this, taking a\n> > > Relation.\") and also this just \"wraps\" the prior function.\n> > >\n> > > It seems that sometime last year another commit [2] inadvertently inserted\n> > > another function ('filter_partitions') between those aforementioned, and that\n> > > means the \"Another variant of this\" comment doesn't make much sense\n> > > anymore.\n> >\n> > Agreed.\n> >\n> > Personally, I think it would be better to modify the comments of\n> > is_publishable_relation and directly mention the function name it refers to\n> > which can prevent future code to break it again.\n>\n> I'd intended only to make the minimal changes necessary to set things\n> right again, but your way is better.\n>\n\nYeah, Hou-San's suggestion sounds better to me as well.\n\n> >\n> > Besides,\n> >\n> > /*\n> > * Returns if relation represented by oid and Form_pg_class entry\n> > * is publishable.\n> > *\n> > * Does same checks as the above,\n> >\n> > This comment was also intended to refer to the function\n> > check_publication_add_relation(), but is invalid now because there is another\n> > function check_publication_add_schema() inserted between them. We'd better fix\n> > this as well.\n>\n\n+1. 
Here, I think it will be better to add the function name in the\ncomments and keep the current order as it is.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 29 Jul 2022 08:59:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Functions 'is_publishable_class' and 'is_publishable_relation'\n should stay together."
},
{
"msg_contents": "PSA v2 of this patch, modified as suggested.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Fri, 29 Jul 2022 14:38:59 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Functions 'is_publishable_class' and 'is_publishable_relation'\n should stay together."
},
{
"msg_contents": "On 2022-Jul-29, Peter Smith wrote:\n\n> PSA v2 of this patch, modified as suggested.\n\nI don't object to doing this, but I think these two functions should\nstay together nonetheless.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nY una voz del caos me habló y me dijo\n\"Sonríe y sé feliz, podría ser peor\".\nY sonreí. Y fui feliz.\nY fue peor.\n\n\n",
"msg_date": "Fri, 29 Jul 2022 11:35:55 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Functions 'is_publishable_class' and 'is_publishable_relation'\n should stay together."
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 7:36 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Jul-29, Peter Smith wrote:\n>\n> > PSA v2 of this patch, modified as suggested.\n>\n> I don't object to doing this, but I think these two functions should\n> stay together nonetheless.\n\n\nHmm, I think there is some confusion because different people have\nmentioned multiple functions.\n\nAFAIK, the patch *does* ensure the 2 functions (is_publishable_class\nand is_publishable_relation) stay together.\n\nIf you believe there is still a problem after applying the patch\nplease explicitly name what function(s) you think should be moved.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 29 Jul 2022 19:51:07 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Functions 'is_publishable_class' and 'is_publishable_relation'\n should stay together."
},
{
"msg_contents": "On 2022-Jul-29, Peter Smith wrote:\n\n> On Fri, Jul 29, 2022 at 7:36 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > I don't object to doing this, but I think these two functions should\n> > stay together nonetheless.\n> \n> If you believe there is still a problem after applying the patch\n> please explicitly name what function(s) you think should be moved.\n\nWell, I checked the commit and the functions I was talking about look OK\nnow. However, looking again, pg_relation_is_publishable is in the wrong\nplace (should be right below is_publishable_relaton), and I wonder why\naren't get_publication_oid and get_publication_name in lsyscache.c.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 29 Jul 2022 11:59:00 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Functions 'is_publishable_class' and 'is_publishable_relation'\n should stay together."
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 3:29 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Jul-29, Peter Smith wrote:\n>\n> > On Fri, Jul 29, 2022 at 7:36 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > > I don't object to doing this, but I think these two functions should\n> > > stay together nonetheless.\n> >\n> > If you believe there is still a problem after applying the patch\n> > please explicitly name what function(s) you think should be moved.\n>\n> Well, I checked the commit and the functions I was talking about look OK\n> now. However, looking again, pg_relation_is_publishable is in the wrong\n> place (should be right below is_publishable_relaton), and I wonder why\n> aren't get_publication_oid and get_publication_name in lsyscache.c.\n>\n\nRight, both these suggestions make sense to me. Similarly, I think\nfunctions get_subscription_name and get_subscription_oid should also\nbe moved to lsyscache.c.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 29 Jul 2022 15:55:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Functions 'is_publishable_class' and 'is_publishable_relation'\n should stay together."
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 3:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 29, 2022 at 3:29 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > Well, I checked the commit and the functions I was talking about look OK\n> > now. However, looking again, pg_relation_is_publishable is in the wrong\n> > place (should be right below is_publishable_relaton), and I wonder why\n> > aren't get_publication_oid and get_publication_name in lsyscache.c.\n> >\n>\n> Right, both these suggestions make sense to me. Similarly, I think\n> functions get_subscription_name and get_subscription_oid should also\n> be moved to lsyscache.c.\n>\n\nAttached, find a patch to address the above comments.\n\nNote that (a) I didn't change the comment atop\npg_relation_is_publishable to refer to the actual function name\ninstead of 'above' as it seems it can be an SQL variant for both the\nabove functions. (b) didn't need to include pg_publication.h in\nlsyscache.c even after moving code to that file as the code is\ncompiled even without that.\n\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Sat, 30 Jul 2022 16:54:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Functions 'is_publishable_class' and 'is_publishable_relation'\n should stay together."
},
{
"msg_contents": "On Saturday, July 30, 2022 7:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Fri, Jul 29, 2022 at 3:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Fri, Jul 29, 2022 at 3:29 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\r\n> wrote:\r\n> > >\r\n> > > Well, I checked the commit and the functions I was talking about\r\n> > > look OK now. However, looking again, pg_relation_is_publishable is\r\n> > > in the wrong place (should be right below is_publishable_relaton),\r\n> > > and I wonder why aren't get_publication_oid and get_publication_name in\r\n> lsyscache.c.\r\n> > >\r\n> >\r\n> > Right, both these suggestions make sense to me. Similarly, I think\r\n> > functions get_subscription_name and get_subscription_oid should also\r\n> > be moved to lsyscache.c.\r\n> >\r\n> \r\n> Attached, find a patch to address the above comments.\r\n> \r\n> Note that (a) I didn't change the comment atop pg_relation_is_publishable to\r\n> refer to the actual function name instead of 'above' as it seems it can be an SQL\r\n> variant for both the above functions. (b) didn't need to include pg_publication.h\r\n> in lsyscache.c even after moving code to that file as the code is compiled even\r\n> without that.\r\n\r\nThe patch LGTM. I also ran the headerscheck and didn't find any problem.\r\n\r\nBest regards,\r\nHou Zhijie\r\n",
"msg_date": "Sat, 30 Jul 2022 13:29:20 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Functions 'is_publishable_class' and 'is_publishable_relation'\n should stay together."
},
{
"msg_contents": "On Sat, Jul 30, 2022 at 6:59 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Saturday, July 30, 2022 7:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jul 29, 2022 at 3:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Jul 29, 2022 at 3:29 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > wrote:\n> > > >\n> > > > Well, I checked the commit and the functions I was talking about\n> > > > look OK now. However, looking again, pg_relation_is_publishable is\n> > > > in the wrong place (should be right below is_publishable_relaton),\n> > > > and I wonder why aren't get_publication_oid and get_publication_name in\n> > lsyscache.c.\n> > > >\n> > >\n> > > Right, both these suggestions make sense to me. Similarly, I think\n> > > functions get_subscription_name and get_subscription_oid should also\n> > > be moved to lsyscache.c.\n> > >\n> >\n> > Attached, find a patch to address the above comments.\n> >\n> > Note that (a) I didn't change the comment atop pg_relation_is_publishable to\n> > refer to the actual function name instead of 'above' as it seems it can be an SQL\n> > variant for both the above functions. (b) didn't need to include pg_publication.h\n> > in lsyscache.c even after moving code to that file as the code is compiled even\n> > without that.\n>\n> The patch LGTM. I also ran the headerscheck and didn't find any problem.\n>\n\nThanks, I have pushed the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 3 Aug 2022 10:22:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Functions 'is_publishable_class' and 'is_publishable_relation'\n should stay together."
}
] |
[
{
"msg_contents": "Hello,\n\nPart 2 should include entries four commitfests and older. (For the rest,\nit's probably too early to call something \"stalled\", so I don't plan to\ndo any more triage there.) Patch authors CC'd.\n\n= Stalled Patches, Recommend Return =\n\nI plan to return these with a note saying \"needs more interest\".\n\n- Extended statistics in EXPLAIN\n https://commitfest.postgresql.org/38/3050/\n\n There's interest, but it seemed controversial. And a reviewer\n attempted to revive this in January, but there hasn't been new\n engagement or response from the original people involved in a year.\n\n- Map WAL segment files on PMEM as WAL buffers\n https://commitfest.postgresql.org/38/3181/\n\n Stalled out; last review was in January and it needs a rebase.\n\n- Support pg_ident mapping for LDAP\n https://commitfest.postgresql.org/38/3314/\n\n This one's mine; I think it's clear that there's not enough interest\n in this idea yet and I think I'd rather put effort into SASL, which\n would ideally do the same thing natively.\n\n- Improve logging when using Huge Pages\n https://commitfest.postgresql.org/38/3310/\n\n There was interest last year but no mails this year, so this probably\n needs some buy-in first.\n\n- functions to compute size of schemas/AMs (and maybe \\dn++ and \\dA++)\n https://commitfest.postgresql.org/38/3256/\n\n Been rebasing without review since September 2021. Seems like there's\n an idea in here that people want, though. Any sponsors for a future\n CF?\n\n- Upgrade pgcrypto to crypt_blowfish 1.3\n https://commitfest.postgresql.org/38/3338/\n\n No conversation on this since October 2021. 
Like above, seems like a\n reasonable feature, so maybe someone can sponsor it and quickly get it\n resurrected in a future CF?\n\n= Stalled Patches, Need Help =\n\nI plan to move these forward unless someone says otherwise, but they\nlook stuck to me and need assistance.\n\n- pgbench: add multiconnect support\n https://commitfest.postgresql.org/38/3227/\n\n Seems to be interest. Not much review, though.\n\n- pg_stats and range statistics\n https://commitfest.postgresql.org/38/3184/\n\n There was immediate agreement that this feature was desirable, and\n then the reviews dried up. Anyone want to bump this?\n\n- Asymmetric partition-wise JOIN\n https://commitfest.postgresql.org/38/3099/\n\n There was a rebase in January by a reviewer, so there's definite\n interest, but work hasn't progressed in a while. I've marked Waiting\n on Author in the meantime.\n\n- Logging plan of the currently running query\n https://commitfest.postgresql.org/38/3142/\n\n Last review in February and currently in a rebase loop.\n\n- schema change not getting invalidated, both renamed table and new\n table data were getting replicated\n https://commitfest.postgresql.org/38/3262/\n\n This looks like a bug fix that should not be closed out, but it's been\n in a rebase loop without review for... a year? Any takers? Should we\n make an open issue?\n\n- pgbench: using prepared BEGIN statement in a pipeline could cause an\n error\n https://commitfest.postgresql.org/38/3260/\n\n A bug fix, but maybe the approach taken for the fix is controversial?\n\n- Atomic rename feature for Windows\n https://commitfest.postgresql.org/38/3347/\n\n I think this got derailed by a committer conversation about platform\n deprecation? 
I have no idea where the patch stands after that\n exchange; can someone recap?\n\n= Active Patches =\n\nThese will be moved ahead:\n\n- Lazy JIT IR code generation to increase JIT speed with partitions\n- Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns\n- Add proper planner support for ORDER BY / DISTINCT aggregates\n- Fix ExecRTCheckPerms() inefficiency with many prunable partitions\n- Using each rel as both outer and inner for anti-joins\n- Postgres picks suboptimal index after building extended statistics\n- Cache tuple routing info during bulk loads into partitioned tables\n- postgres_fdw: commit remote (sub)transactions in parallel during\n pre-commit\n- add checkpoint stats of snapshot and mapping files of pg_logical dir\n- Allows database-specific role memberships\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Thu, 28 Jul 2022 16:53:49 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "[Commitfest 2022-07] Patch Triage: Needs Review, Part 2"
}
] |
[
{
"msg_contents": "Hi,\n\nStatistics collector has been removed since 5891c7a8ed8f2d3d5, but there \nwas a comment referring 'statistics collector' in pg_statistic.h.\n\n> Note that since the arrays are variable-size, K may be chosen by the \n> statistics collector.\n\nShould it be modified to 'cumulative statistics system' like manual on \nmonitoring stats[1]?\nIts title has changed from 'statistics collector' to 'cumulative \nstatistics system'.\n\n[1] https://www.postgresql.org/docs/current/monitoring-stats.html\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Fri, 29 Jul 2022 22:05:56 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Should fix a comment referring to stats collector?"
},
{
"msg_contents": "On 2022-Jul-29, torikoshia wrote:\n\n> Statistics collector has been removed since 5891c7a8ed8f2d3d5, but there was\n> a comment referring 'statistics collector' in pg_statistic.h.\n> \n> > Note that since the arrays are variable-size, K may be chosen by the\n> > statistics collector.\n> \n> Should it be modified to 'cumulative statistics system' like manual on\n> monitoring stats[1]?\n\nI don't think this refers to the statistics collector process; I\nunderstand it to refer to ANALYZE that captures the data being stored.\nMaybe it should just say \"K may be chosen at ANALYZE time\".\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Nadie está tan esclavizado como el que se cree libre no siéndolo\" (Goethe)\n\n\n",
"msg_date": "Fri, 29 Jul 2022 19:53:51 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Should fix a comment referring to stats collector?"
},
{
"msg_contents": "On 2022-07-30 02:53, Alvaro Herrera wrote:\n\n> I don't think this refers to the statistics collector process; I\n> understand it to refer to ANALYZE that captures the data being stored.\n\nThanks for the explanation!\n\n> Maybe it should just say \"K may be chosen at ANALYZE time\".\n\nIt seems clearer than current one.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 01 Aug 2022 21:05:45 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Should fix a comment referring to stats collector?"
},
{
"msg_contents": "On Mon, Aug 1, 2022 at 09:05:45PM +0900, torikoshia wrote:\n> On 2022-07-30 02:53, Alvaro Herrera wrote:\n> \n> > I don't think this refers to the statistics collector process; I\n> > understand it to refer to ANALYZE that captures the data being stored.\n> \n> Thanks for the explanation!\n> \n> > Maybe it should just say \"K may be chosen at ANALYZE time\".\n> \n> It seems clearer than current one.\n\nChange made in master.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 31 Oct 2023 11:02:19 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Should fix a comment referring to stats collector?"
}
] |
[
{
"msg_contents": "Hi,\n\nBoth aset.c and generation.c populate mem_allocated in\nAllocSetContextCreateInternal(), GenerationContextCreate()\nrespectively.\naset.c \n /* Finally, do the type-independent part of context creation */ \n MemoryContextCreate((MemoryContext) set, \n T_AllocSetContext, \n &AllocSetMethods, \n parent, \n name); \n \n ((MemoryContext) set)->mem_allocated = firstBlockSize; \n \n return (MemoryContext) set; \n} \n \ngeneration.c \n /* Finally, do the type-independent part of context creation */ \n MemoryContextCreate((MemoryContext) set, \n T_GenerationContext, \n &GenerationMethods, \n parent, \n name); \n \n ((MemoryContext) set)->mem_allocated = firstBlockSize; \n \n return (MemoryContext) set; \n} \n\nslab.c\ndoes not in SlabContextCreate(). Is this intentional, it seems to be an\noversight to me.\n\n /* Finally, do the type-independent part of context creation */ \n MemoryContextCreate((MemoryContext) slab, \n T_SlabContext, \n &SlabMethods, \n parent, \n name); \n \n return (MemoryContext) slab; \n} \n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com\n\n\n",
"msg_date": "Fri, 29 Jul 2022 12:43:45 -0400",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Oversight in slab.c SlabContextCreate(), initial memory allocation\n size is not populated to context->mem_allocated"
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 12:43:45PM -0400, Reid Thompson wrote:\n> slab.c\n> does not in SlabContextCreate(). Is this intentional, it seems to be an\n> oversight to me.\n> \n> /* Finally, do the type-independent part of context creation */ \n> MemoryContextCreate((MemoryContext) slab, \n> T_SlabContext, \n> &SlabMethods, \n> parent, \n> name); \n> \n> return (MemoryContext) slab; \n> } \n\nIIUC this is because the header is tracked separately from the first\nregular block, unlike aset.c. See the following comment:\n\n\t/*\n\t * Allocate the context header. Unlike aset.c, we never try to combine\n\t * this with the first regular block; not worth the extra complication.\n\t */\n\nYou'll also notice that the \"reset\" and \"free\" functions in aset.c and\ngeneration.c have special logic for \"keeper\" blocks. Here is a relevant\ncomment from AllocSetReset():\n\n * Actually, this routine has some discretion about what to do.\n * It should mark all allocated chunks freed, but it need not necessarily\n * give back all the resources the set owns. Our actual implementation is\n * that we give back all but the \"keeper\" block (which we must keep, since\n * it shares a malloc chunk with the context header). In this way, we don't\n * thrash malloc() when a context is repeatedly reset after small allocations,\n * which is typical behavior for per-tuple contexts.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 29 Jul 2022 10:48:40 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Oversight in slab.c SlabContextCreate(), initial memory\n allocation size is not populated to context->mem_allocated"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Fri, Jul 29, 2022 at 12:43:45PM -0400, Reid Thompson wrote:\n>> slab.c\n>> does not in SlabContextCreate(). Is this intentional, it seems to be an\n>> oversight to me.\n\n> IIUC this is because the header is tracked separately from the first\n> regular block, unlike aset.c.\n\nThat doesn't make it not an oversight, though. It looks like aset.c\nthinks that mem_allocated includes all the context's overhead, whereas\nthis implementation doesn't seem to have that result. The comments\nassociated with mem_allocated are sufficiently vague that it's impossible\nto tell which implementation is correct. Maybe we don't really care,\nbut ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Jul 2022 13:55:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Oversight in slab.c SlabContextCreate(),\n initial memory allocation size is not populated to context->mem_allocated"
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 01:55:10PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> On Fri, Jul 29, 2022 at 12:43:45PM -0400, Reid Thompson wrote:\n>>> slab.c\n>>> does not in SlabContextCreate(). Is this intentional, it seems to be an\n>>> oversight to me.\n> \n>> IIUC this is because the header is tracked separately from the first\n>> regular block, unlike aset.c.\n> \n> That doesn't make it not an oversight, though. It looks like aset.c\n> thinks that mem_allocated includes all the context's overhead, whereas\n> this implementation doesn't seem to have that result. The comments\n> associated with mem_allocated are sufficiently vague that it's impossible\n> to tell which implementation is correct. Maybe we don't really care,\n> but ...\n\nHm. mmgr/README indicates the following note about mem_allocated:\n\n* inquire about the total amount of memory allocated to the context\n (the raw memory from which the context allocates chunks; not the\n chunks themselves)\n\nAFAICT MemoryContextMemAllocated() is only used for determining when to\nspill to disk for hash aggegations at the moment. I don't know whether I'd\nclassify this as an oversight or if it even makes any meaningful\ndifference, but consistency among the different implementations is probably\ndesirable either way. So, I guess I'm +1 for including the memory context\nheader in mem_allocated in this case.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 29 Jul 2022 11:23:47 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Oversight in slab.c SlabContextCreate(), initial memory\n allocation size is not populated to context->mem_allocated"
},
{
"msg_contents": "\n\nOn 7/29/22 20:23, Nathan Bossart wrote:\n> On Fri, Jul 29, 2022 at 01:55:10PM -0400, Tom Lane wrote:\n>> Nathan Bossart <nathandbossart@gmail.com> writes:\n>>> On Fri, Jul 29, 2022 at 12:43:45PM -0400, Reid Thompson wrote:\n>>>> slab.c\n>>>> does not in SlabContextCreate(). Is this intentional, it seems to be an\n>>>> oversight to me.\n>>\n>>> IIUC this is because the header is tracked separately from the first\n>>> regular block, unlike aset.c.\n>>\n>> That doesn't make it not an oversight, though. It looks like aset.c\n>> thinks that mem_allocated includes all the context's overhead, whereas\n>> this implementation doesn't seem to have that result. The comments\n>> associated with mem_allocated are sufficiently vague that it's impossible\n>> to tell which implementation is correct. Maybe we don't really care,\n>> but ...\n> \n> Hm. mmgr/README indicates the following note about mem_allocated:\n> \n> * inquire about the total amount of memory allocated to the context\n> (the raw memory from which the context allocates chunks; not the\n> chunks themselves)\n> \n> AFAICT MemoryContextMemAllocated() is only used for determining when to\n> spill to disk for hash aggegations at the moment. I don't know whether I'd\n> classify this as an oversight or if it even makes any meaningful\n> difference, but consistency among the different implementations is probably\n> desirable either way. So, I guess I'm +1 for including the memory context\n> header in mem_allocated in this case.\n> \n\nI don't think this can make meaningful difference - as you mention, we\nonly really use this to decide when to spill to disk etc. 
So maybe\nyou'll spill a bit sooner, but the work_mem is pretty crude threshold\nanyway, people don't tune it to an exact byte value (which would be\npretty futile anyway).\n\nOTOH it does seem like an oversight, or at least an inconsistency with\nthe two other contexts, so if anyone feels like tweaking it ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 29 Jul 2022 21:16:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Oversight in slab.c SlabContextCreate(), initial memory\n allocation size is not populated to context->mem_allocated"
},
{
"msg_contents": "At Fri, 29 Jul 2022 21:16:51 +0200, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote in \n> I don't think this can make meaningful difference - as you mention, we\n> only really use this to decide when to spill to disk etc. So maybe\n> you'll spill a bit sooner, but the work_mem is pretty crude threshold\n> anyway, people don't tune it to an exact byte value (which would be\n> pretty futile anyway).\n\n From another perspective.. SlabStats includes the header size into\nits total size. So it reports a different total size from\nMemoryContextMemAllocated() (For example, 594 bytes vs 0). Since this\nis an inconsistency within slab.c, no users will notice that\ndifference in the field.\n\n> OTOH it does seem like an oversight, or at least an inconsistency with\n> the two other contexts, so if anyone feels like tweaking it ...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 01 Aug 2022 16:57:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Oversight in slab.c SlabContextCreate(), initial memory\n allocation size is not populated to context->mem_allocated"
}
] |
[
{
"msg_contents": "I've been annoyed several times lately by having to update\nthe list of node types embodied in test_oat_hooks.c's\nnodetag_to_string(). I got around to looking at that more\nclosely, and realized that it is only used for utility\nstatements, which (a) are a very small subset of the node\ntypes that that function knows about, and (b) we already\nhave a mechanism to get string identifiers for, and (c)\nthose identifiers are already standard parts of our user API,\nunlike the strings exposed by nodetag_to_string(). I do not\nthink that test_oat_hooks.c has any business imposing\nan extra maintenance burden on us all, so I propose\nnuking nodetag_to_string() from orbit, as attached.\n\n(Incidentally, this improves test_oat_hooks's own\nreported code coverage from 14.0% to 76.1%, because\nso much of that switch is dead code.)\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 30 Jul 2022 18:08:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Reducing the maintenance overhead of test_oat_hooks"
},
{
"msg_contents": "On 2022-Jul-30, Tom Lane wrote:\n\n> I do not\n> think that test_oat_hooks.c has any business imposing\n> an extra maintenance burden on us all, so I propose\n> nuking nodetag_to_string() from orbit, as attached.\n\n+1\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sun, 31 Jul 2022 13:17:55 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the maintenance overhead of test_oat_hooks"
}
] |
[
{
"msg_contents": "Having spent much of the day looking at regression tests for\ndifferent bits of contrib, I was inspired to do a quick\nfinger exercise to add a test for contrib/tcn. When that\nmodule was written, we didn't have a nice way to create a\ntest case with stable output. But now, the isolationtester\ncan do the job easily.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 30 Jul 2022 19:09:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Regression coverage for contrib/tcn"
}
] |
[
{
"msg_contents": "Since 3a769d823 (pg_upgrade: Allow use of file cloning)\nfile.c has had:\n\n- if (ioctl(dest_fd, FICLONE, src_fd) < 0)\n- {\n- unlink(dst);\n- pg_fatal(\"error while cloning relation \\\"%s.%s\\\" (\\\"%s\\\" to \\\"%s\\\"): %s\",\n- schemaName, relName, src, dst, strerror(errno));\n- }\n\nBut errno should be saved before strerror/%m.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 31 Jul 2022 08:41:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "pg_upgrade errno"
},
{
"msg_contents": "On Sun, Jul 31, 2022 at 08:41:35AM -0500, Justin Pryzby wrote:\n> Since 3a769d823 (pg_upgrade: Allow use of file cloning)\n> file.c has had:\n> \n> - if (ioctl(dest_fd, FICLONE, src_fd) < 0)\n> - {\n> - unlink(dst);\n> - pg_fatal(\"error while cloning relation \\\"%s.%s\\\" (\\\"%s\\\" to \\\"%s\\\"): %s\",\n> - schemaName, relName, src, dst, strerror(errno));\n> - }\n> \n> But errno should be saved before strerror/%m.\n\nGood catch, Justin. Will fix on HEAD.\n--\nMichael",
"msg_date": "Mon, 1 Aug 2022 08:39:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade errno"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, Jul 31, 2022 at 08:41:35AM -0500, Justin Pryzby wrote:\n>> But errno should be saved before strerror/%m.\n\n> Good catch, Justin. Will fix on HEAD.\n\nIt's been wrong a lot longer than that, no?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 31 Jul 2022 19:43:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade errno"
},
{
"msg_contents": "On Sun, Jul 31, 2022 at 07:43:25PM -0400, Tom Lane wrote:\n> It's been wrong a lot longer than that, no?\n\nSince the beginning of times. But we've never really cared about\nfixing such errno behaviors based on their unlikeliness, have we? I\ndon't mind doing a backpatch here, though, that's isolated enough.\n--\nMichael",
"msg_date": "Mon, 1 Aug 2022 09:19:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade errno"
}
] |
[
{
"msg_contents": "Twice recently (b998196bb5 and 19408aae7f) I have had to adjust new TAP\ntests to handle log_error_verbosity=verbose, and there's a third case\nneeding adjustment in the new auto_explain tests (7c34555f8c). I'm\nwondering if it would be better to set log_error_verbosity to default\nfor TAP tests, no matter what's in TEMP_CONFIG. An individual TAP test\ncould still set log_error_verbosity=verbose explicitly if it wanted it.\n\nIf not I'm going to change one of my buildfarm animals to use\nlog_error_verbosity=verbose so we catch things like this earlier.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 31 Jul 2022 12:10:24 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "TAP tests vs log_error verbosity=verbose"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Twice recently (b998196bb5 and 19408aae7f) I have had to adjust new TAP\n> tests to handle log_error_verbosity=verbose, and there's a third case\n> needing adjustment in the new auto_explain tests (7c34555f8c). I'm\n> wondering if it would be better to set log_error_verbosity to default\n> for TAP tests, no matter what's in TEMP_CONFIG. An individual TAP test\n> could still set log_error_verbosity=verbose explicitly if it wanted it.\n\n> If not I'm going to change one of my buildfarm animals to use\n> log_error_verbosity=verbose so we catch things like this earlier.\n\nI think it's good to be able to enable log_error_verbosity=verbose\nin case you need to track down some kind of problem. So I'd vote\nfor your second approach.\n\n7c34555f8c is mine, so I'll go fix that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 31 Jul 2022 12:14:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests vs log_error verbosity=verbose"
},
{
"msg_contents": "\nOn 2022-07-31 Su 12:14, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Twice recently (b998196bb5 and 19408aae7f) I have had to adjust new TAP\n>> tests to handle log_error_verbosity=verbose, and there's a third case\n>> needing adjustment in the new auto_explain tests (7c34555f8c). I'm\n>> wondering if it would be better to set log_error_verbosity to default\n>> for TAP tests, no matter what's in TEMP_CONFIG. An individual TAP test\n>> could still set log_error_verbosity=verbose explicitly if it wanted it.\n>> If not I'm going to change one of my buildfarm animals to use\n>> log_error_verbosity=verbose so we catch things like this earlier.\n> I think it's good to be able to enable log_error_verbosity=verbose\n> in case you need to track down some kind of problem. So I'd vote\n> for your second approach.\n>\n> 7c34555f8c is mine, so I'll go fix that.\n>\n> \t\t\t\n\n\n\nThanks. prion should now catch future instances.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 31 Jul 2022 12:57:43 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: TAP tests vs log_error verbosity=verbose"
}
] |
[
{
"msg_contents": "Dear pgsql hackers,\n\nI am Gianluca Calcagni, a Salesforce Certified Technical Architect. In the course of my career, I accumulated extensive experience working with triggers, both as an architect and as a developer.\n\nWhen I work with multiple developer teams, they often need to create triggers on the same object: when that happens, I usually recommend them to maintain a single common trigger rather than multiple ones - the rationale being that, by sharing the code, they can minimise most \"negative\" interactions, such as recursion, trigger cascades, for-loops, multiple DMLs on the same record, and so on. In the Salesforce ecosystem, such approach is considered a best practice (for good reason).\n\nThe real drawback is that such approach is forgoing the natural principle of \"separation of concerns\"! I have been looking into using trigger frameworks to solve this problem, but there is no trigger framework that is able to meet my main expectations: in short, isolation and conflict detection.\n\nThis is what I want to see:\n\n 1. when an event is firing two or more triggers, then such triggers must be executed within the same transaction, but each trigger must be executed in its own isolated context (in the sense that it cannot see the changes applied by any trigger other than itself)\n 2. when all the triggers are done, postgres must merge the results together and stage an end-result\n 3. postgres must raise all possible conflicts (in the sense that, if a specific field on some record has distinct changes applied by different triggers, then the end-result of the transaction is ambiguous hence the entire transaction should fail)\n 4. If everything goes fine, the end-result of the transaction is finally committed to the database.\n\nYou may notice that I took inspiration from GIT for most of the concepts above (e.g. \"isolation\" actually means \"forking\").\n\nI realize that this is a huge request, but I believe there is some merit in the idea. 
I also realize that there are very complicated implementation nuances (e.g. in handling foreign keys and constraints), but I am happy to provide more input about my vision of this feature.\n\nLooking forward to your feedback!\n\nWish you all a great day,\nGianluca\n\n\n\n\n\n\n\n\nDear pgsql hackers,\n\n\n\n\nI am Gianluca Calcagni, a Salesforce Certified Technical Architect. In the course of my career, I accumulated extensive experience working with triggers, both as an architect\n and as a developer.\n\n\n\n\nWhen I work with multiple developer teams, they often need to create triggers on the same object: when that happens, I usually recommend them to maintain a\nsingle common trigger rather than multiple ones - the rationale being that, by sharing the code, they can minimise most \"negative\" interactions, such as recursion, trigger cascades, for-loops, multiple DMLs on the same record, and so on. In the Salesforce\n ecosystem, such approach is considered a best practice (for good reason).\n\n\n\n\nThe real drawback is that such approach is forgoing the natural principle of \"separation of concerns\"! 
I have been looking into using trigger frameworks to solve this problem, but there is no trigger framework that is able to meet my main expectations:\n in short, isolation and conflict detection.\n\n\n\n\nThis is what I want to see:\n\n\nwhen an event is firing two or more triggers, then such triggers must be executed within the same transaction, but each trigger must be executed in its own\n\nisolated context (in the sense that it cannot see the changes applied by any trigger other than itself)when all the triggers are done, postgres must merge the results together and stage an end-resultpostgres must raise all possible conflicts (in the sense that, if a specific field on some record has distinct changes applied by different triggers, then the end-result of the transaction\n is ambiguous hence the entire transaction should fail)If everything goes fine, the end-result of the transaction is finally committed to the database.\nYou may notice that I took inspiration from GIT for most of the concepts above (e.g. \"isolation\" actually means \"forking\").\n\n\n\nI realize that this is a huge request, but I believe there is some merit in the idea. I also realize that there are very complicated implementation nuances (e.g. in handling foreign keys and constraints), but I am happy to provide more input about\n my vision of this feature.\n\n\nLooking forward to your feedback!\n\n\nWish you all a great day,\nGianluca",
"msg_date": "Sun, 31 Jul 2022 16:39:11 +0000",
"msg_from": "Gianluca Calcagni <gclazio@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Triggers should work in isolation, with a final conflict detection\n step"
},
{
"msg_contents": "On Sunday, July 31, 2022, Gianluca Calcagni <gclazio@hotmail.com> wrote:\n\n>\n> The real drawback is that such approach is forgoing the natural principle\n> of *\"separation of concerns\"*! I have been looking into using trigger\n> frameworks to solve this problem, but there is no trigger framework that is\n> able to meet my main expectations: in short, *isolation* and *conflict\n> detection*.\n>\n> You may notice that I took inspiration from GIT for most of the concepts\n> above (e.g. \"isolation\" actually means \"forking\").\n>\n> I realize that this is a huge request,\n>\n\nSo install a single c-language trigger function that does all of that and\nprovide some functions to manage adding delegation commands in user-space.\nMake it all work with create extension. Users will have to adhere to the\nguidelines suggested.\n\nI am against having core incorporate such code into the main project. The\nbenefits/cost_complexity ratio is too small.\n\nIOW, I suspect the only realistic way this gets into core is if you, its\nchampion, pay to have it developed, but even then I don’t think we should\naccept such a feature even if it was well written. So if you go that route\nit should leverage extension mechanisms. We may add some code hooks though\nif those are requested and substantiated.\n\nDavid J.\n\nOn Sunday, July 31, 2022, Gianluca Calcagni <gclazio@hotmail.com> wrote:\n\n\n\nThe real drawback is that such approach is forgoing the natural principle of \"separation of concerns\"! I have been looking into using trigger frameworks to solve this problem, but there is no trigger framework that is able to meet my main expectations:\n in short, isolation and conflict detection.\n\n\n\nYou may notice that I took inspiration from GIT for most of the concepts above (e.g. 
\"isolation\" actually means \"forking\").\n\n\nI realize that this is a huge request,So install a single c-language trigger function that does all of that and provide some functions to manage adding delegation commands in user-space. Make it all work with create extension. Users will have to adhere to the guidelines suggested.I am against having core incorporate such code into the main project. The benefits/cost_complexity ratio is too small.IOW, I suspect the only realistic way this gets into core is if you, its champion, pay to have it developed, but even then I don’t think we should accept such a feature even if it was well written. So if you go that route it should leverage extension mechanisms. We may add some code hooks though if those are requested and substantiated.David J.",
"msg_date": "Mon, 1 Aug 2022 07:20:37 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Triggers should work in isolation, with a final conflict\n detection step"
}
] |
[
{
"msg_contents": "Hello!\n\nIn previous discussion\n(https://www.postgresql.org/message-id/flat/6b05291c-f252-4fae-317d-b50dba69c311%40inbox.ru)\n\nOn 05.07.2022 22:08, Justin Pryzby wrote:\n> I'm not\n> sure if anyone is interested in patching test.sh in backbranches. I'm not\n> sure, but there may be more interest to backpatch the conversion to TAP\n> (322becb60).\n> \nAs far as i understand from this thread: https://www.postgresql.org/message-id/flat/Yox1ME99GhAemMq1%40paquier.xyz,\nthe aim of the perl version for the pg_upgrade tests is to achieve equality of dumps for most cross-versions cases.\nIf so this is the significant improvement as previously in test.sh resulted dumps retained unequal and the user\nwas asked to eyeball them manually during cross upgrades between different major versions.\nSo, the backport of the perl tests also seems preferable to me.\n\nIn the attached patch has a backport to REL_13_STABLE.\nIt has been tested from 9.2+ and give zero dumps diff from 10+.\nAlso i've backported b34ca595, ba15f161, 95c3a195,\n4c4eaf3d and b3983888 to reduce changes in the 002_pg_upgrade.pl and b33259e2 to fix an error when upgrading from 9.6.\nDumps filtering and some other changes were backported from thread\nhttps://www.postgresql.org/message-id/flat/Yox1ME99GhAemMq1%40paquier.xyz too.\nWould be very grateful for comments and suggestions before trying to do this for other versions.\n\nI have a some question concerning patch tester. As Justin said it fails on non-master patches\n> since it tries to apply all the *.patch files to the master branch, one after\n> another. For branches other than master, I suggest to name the patches *.txt\n> or similar.\nSo, i made a .txt extension for patch, but i would really like to set a patch tester on it.\nIs there any way to do this?\n \nWith best regards,\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 1 Aug 2022 01:02:21 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "[PATCH] Backport perl tests for pg_upgrade from 322becb60"
},
{
"msg_contents": "Add backport to REL_14_STABLE. Unlike to the 13th version's one there are still\nsome differences in the final dumps, eg during upgrade test 12->14.\nThe similar differences present during upgrade test 12->master.\n\nAchieving zero dump diffs needs additional work, now in progress.\n\nWith best regards,\n\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 1 Nov 2022 13:36:15 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Backport perl tests for pg_upgrade from 322becb60"
},
{
"msg_contents": "On Mon, Aug 01, 2022 at 01:02:21AM +0300, Anton A. Melnikov wrote:\n> As far as i understand from this thread: https://www.postgresql.org/message-id/flat/Yox1ME99GhAemMq1%40paquier.xyz,\n> the aim of the perl version for the pg_upgrade tests is to achieve equality of dumps for most cross-versions cases.\n> If so this is the significant improvement as previously in test.sh resulted dumps retained unequal and the user\n> was asked to eyeball them manually during cross upgrades between different major versions.\n> So, the backport of the perl tests also seems preferable to me.\n\nI don't really agree with that. These TAP tests are really new\ndevelopment, and it took a few tries to get them completely right\n(well, as much right as it holds for HEAD). If we were to backport\nany of this, there is a risk of introducing a bug in what we do with\nany of that, potentially hiding a issue critical related to\npg_upgrade. That's not worth taking a risk for.\n\nSaying that, I agree that more needs to be done, but I would limit\nthat only to HEAD and let it mature more into the tree in an\nincremental fashion.\n--\nMichael",
"msg_date": "Fri, 9 Dec 2022 14:19:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Backport perl tests for pg_upgrade from 322becb60"
},
{
"msg_contents": "Hello!\n\nOn 09.12.2022 08:19, Michael Paquier wrote:\n> On Mon, Aug 01, 2022 at 01:02:21AM +0300, Anton A. Melnikov wrote:\n>> As far as i understand from this thread: https://www.postgresql.org/message-id/flat/Yox1ME99GhAemMq1%40paquier.xyz,\n>> the aim of the perl version for the pg_upgrade tests is to achieve equality of dumps for most cross-versions cases.\n>> If so this is the significant improvement as previously in test.sh resulted dumps retained unequal and the user\n>> was asked to eyeball them manually during cross upgrades between different major versions.\n>> So, the backport of the perl tests also seems preferable to me.\n> \n> I don't really agree with that. These TAP tests are really new\n> development, and it took a few tries to get them completely right\n> (well, as much right as it holds for HEAD). If we were to backport\n> any of this, there is a risk of introducing a bug in what we do with\n> any of that, potentially hiding a issue critical related to\n> pg_upgrade. That's not worth taking a risk for.\n> \n> Saying that, I agree that more needs to be done, but I would limit\n> that only to HEAD and let it mature more into the tree in an\n> incremental fashion.\n> --\n\n\nI have withdrawn the patch with the backport, but then the question is whether we\nwill make fixes in older test.sh tests seems to be remains open.\nWill we fix it? Justin is not sure if anyone needs this:\nhttps://www.postgresql.org/message-id/67b6b447-e9cb-ebde-4a6b-127aea7ca268%40inbox.ru\n\nAlso found that the test from older versions fails in the current master.\n\nProposed a fix in a new thread: https://www.postgresql.org/message-id/49f389ba-95ce-8a9b-09ae-f60650c0e7c7%40inbox.ru\n\nWould be glad to any remarks.\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 19 Dec 2022 04:16:53 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Backport perl tests for pg_upgrade from 322becb60"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 04:16:53AM +0300, Anton A. Melnikov wrote:\n> I have withdrawn the patch with the backport, but then the question is whether we\n> will make fixes in older test.sh tests seems to be remains open.\n> Will we fix it? Justin is not sure if anyone needs this:\n> https://www.postgresql.org/message-id/67b6b447-e9cb-ebde-4a6b-127aea7ca268%40inbox.ru\n\nThis introduces an extra maintenance cost over the existing things in\nthe stable branches.\n\n> Also found that the test from older versions fails in the current master.\n> Proposed a fix in a new thread: https://www.postgresql.org/message-id/49f389ba-95ce-8a9b-09ae-f60650c0e7c7%40inbox.ru\n\nThanks. Yes, this is the change of aclitem from 32b to 64b, which is\nsomething that needs some tweaks. So let's fix this one.\n--\nMichael",
"msg_date": "Mon, 19 Dec 2022 10:56:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Backport perl tests for pg_upgrade from 322becb60"
}
] |
[
{
"msg_contents": "The following documentation comment has been logged on the website:\n\nPage: https://www.postgresql.org/docs/14/plpgsql-errors-and-messages.html\nDescription:\n\nTowards the end of the \"43.9.1. Reporting Errors and Messages\" section (here\nhttps://www.postgresql.org/docs/current/plpgsql-errors-and-messages.html#PLPGSQL-STATEMENTS-RAISE)\nwe have the following sentence:\r\n\r\n> If no condition name nor SQLSTATE is specified in a RAISE EXCEPTION\ncommand, the default is to use ERRCODE_RAISE_EXCEPTION (P0001).\r\n\r\nLooking at the list of error codes (here\nhttps://www.postgresql.org/docs/current/errcodes-appendix.html) I think the\n\"ERRCODE_RAISE_EXCEPTION (P0001)\" is a typo and should remove \"ERRCODE_\" and\nsimply read \"RAISE_EXCEPTION (P0001)\" or perhaps \"ERRCODE =\n'RAISE_EXCEPTION'\" since that's how the default behaviour would be written\nin a RAISE statement.\r\n\r\nMany thanks,\r\nEric Mutta.",
"msg_date": "Sun, 31 Jul 2022 23:37:03 +0000",
"msg_from": "PG Doc comments form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Typo in \"43.9.1. Reporting Errors and Messages\"?"
},
{
"msg_contents": "Hello!\n\nI've come across some typos in protocol.sgml for PostgreSQL 15 so please \nhave a look at the attached patch.\n\nI didn't include it in the patch but I also suggest removing single \nquotes around 'method' for the COMPRESSION option to help avoid \nconfusion. (All the supported compression methods consist of a single \nword so in my opinion there is no need to use quotes in this case.)\n-- <term><literal>COMPRESSION</literal> \n<replaceable>'method'</replaceable></term>\n\nI've also noticed that there are two ways to describe an option: \"If set \nto true\" / \"If true\". As far as I know, the option here is specified by \nits name rather than being explicitly set to true so \"if true\" seems to \nbe more correct, and this could be a slight improvement for this page. \nPlease correct me if I'm wrong.\n\nAnother point worth mentioning is that only this file contains the \nphrase \"two-phase transaction\". I believe that \"two-phase commit \ntransaction\" or \"transaction prepared for two-phase commit\" depending on \nthe situation would be better wording.\n\nAnd finally, could you please clarify this part?\n-- The end LSN of the prepare transaction.\nIs it a typo of \"prepared transaction\"? Or is it the LSN of the \ntransaction for Prepare?\nIf it's the latter, perhaps it'd make more sense to capitalize it.\n\n-- \nBest regards,\nEkaterina Kiryanova\nTechnical Writer\nPostgres Professional\nthe Russian PostgreSQL Company",
"msg_date": "Mon, 1 Aug 2022 23:00:20 +0300",
"msg_from": "Ekaterina Kiryanova <e.kiryanova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "PostgreSQL 15 minor fixes in protocol.sgml"
},
{
"msg_contents": "On Mon, Aug 01, 2022 at 11:00:20PM +0300, Ekaterina Kiryanova wrote:\n> I didn't include it in the patch but I also suggest removing single quotes\n> around 'method' for the COMPRESSION option to help avoid confusion. (All the\n> supported compression methods consist of a single word so in my opinion\n> there is no need to use quotes in this case.)\n> -- <term><literal>COMPRESSION</literal>\n> <replaceable>'method'</replaceable></term>\n\nOther options use quotes as well in their description in this area.\n\n> I've also noticed that there are two ways to describe an option: \"If set to\n> true\" / \"If true\". As far as I know, the option here is specified by its\n> name rather than being explicitly set to true so \"if true\" seems to be more\n> correct, and this could be a slight improvement for this page. Please\n> correct me if I'm wrong.\n\nBoth sound pretty much the same to me.\n\n> Another point worth mentioning is that only this file contains the phrase\n> \"two-phase transaction\". I believe that \"two-phase commit transaction\" or\n> \"transaction prepared for two-phase commit\" depending on the situation would\n> be better wording.\n\n\"Prepare for two-phase commit\" may be clearer?\n\n> And finally, could you please clarify this part?\n> -- The end LSN of the prepare transaction.\n> Is it a typo of \"prepared transaction\"? Or is it the LSN of the transaction\n> for Prepare?\n> If it's the latter, perhaps it'd make more sense to capitalize it.\n\nHmm. The internals of 63cf61c refer to a \"STREAM PREPARE\", still the\nprotocol docs are quite messy (\"prepare\", \"prepare timestamp\", etc.)\nso more consistency would be appropriate, it seems. Amit?\n\nThe part for the protocol messages with 2PC and logical replication\ncould use a larger rework. I have left these for now, and fixed the\nrest of the typos you have found.\n--\nMichael",
"msg_date": "Tue, 2 Aug 2022 19:58:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 minor fixes in protocol.sgml"
},
{
"msg_contents": "On Sun, Jul 31, 2022, at 8:37 PM, PG Doc comments form wrote:\n> Towards the end of the \"43.9.1. Reporting Errors and Messages\" section (here\n> https://www.postgresql.org/docs/current/plpgsql-errors-and-messages.html#PLPGSQL-STATEMENTS-RAISE)\n> we have the following sentence:\n> \n> > If no condition name nor SQLSTATE is specified in a RAISE EXCEPTION\n> command, the default is to use ERRCODE_RAISE_EXCEPTION (P0001).\n> \n> Looking at the list of error codes (here\n> https://www.postgresql.org/docs/current/errcodes-appendix.html) I think the\n> \"ERRCODE_RAISE_EXCEPTION (P0001)\" is a typo and should remove \"ERRCODE_\" and\n> simply read \"RAISE_EXCEPTION (P0001)\" or perhaps \"ERRCODE =\n> 'RAISE_EXCEPTION'\" since that's how the default behaviour would be written\n> in a RAISE statement.\nIt is referring to the internal constant (see src/backend/utils/errcodes.h). It\nwas like you are proposing and it was changed in\n66bde49d96a9ddacc49dcbdf1b47b5bd6e31ead5. Reading the original thread, there is\nno explanation why it was changed. Refer to internal names is not good for a\nuser-oriented text. I think it would be better to use the condition name (in\nlowercase) like it is referred to in [1]. I mean, change\nERRCODE_RAISE_EXCEPTION to raise_exception.\n\n[1] https://www.postgresql.org/docs/current/errcodes-appendix.html\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Sun, Jul 31, 2022, at 8:37 PM, PG Doc comments form wrote:Towards the end of the \"43.9.1. 
Reporting Errors and Messages\" section (herehttps://www.postgresql.org/docs/current/plpgsql-errors-and-messages.html#PLPGSQL-STATEMENTS-RAISE)we have the following sentence:> If no condition name nor SQLSTATE is specified in a RAISE EXCEPTIONcommand, the default is to use ERRCODE_RAISE_EXCEPTION (P0001).Looking at the list of error codes (herehttps://www.postgresql.org/docs/current/errcodes-appendix.html) I think the\"ERRCODE_RAISE_EXCEPTION (P0001)\" is a typo and should remove \"ERRCODE_\" andsimply read \"RAISE_EXCEPTION (P0001)\" or perhaps \"ERRCODE ='RAISE_EXCEPTION'\" since that's how the default behaviour would be writtenin a RAISE statement.It is referring to the internal constant (see src/backend/utils/errcodes.h). Itwas like you are proposing and it was changed in66bde49d96a9ddacc49dcbdf1b47b5bd6e31ead5. Reading the original thread, there isno explanation why it was changed. Refer to internal names is not good for auser-oriented text. I think it would be better to use the condition name (inlowercase) like it is referred to in [1]. I mean, changeERRCODE_RAISE_EXCEPTION to raise_exception.[1] https://www.postgresql.org/docs/current/errcodes-appendix.html--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Tue, 02 Aug 2022 09:49:47 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in \"43.9.1. Reporting Errors and Messages\"?"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 4:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Aug 01, 2022 at 11:00:20PM +0300, Ekaterina Kiryanova wrote:\n>\n> > Another point worth mentioning is that only this file contains the phrase\n> > \"two-phase transaction\". I believe that \"two-phase commit transaction\" or\n> > \"transaction prepared for two-phase commit\" depending on the situation would\n> > be better wording.\n>\n> \"Prepare for two-phase commit\" may be clearer?\n>\n\nI think we can use just \"Prepared transaction\" instead. So, the\nmessage \"The user defined GID of the two-phase transaction.\" can be\nchanged to \"The user defined GID of the prepared transaction.\".\nSimilarly, the message \"Identifies the message as a two-phase prepared\ntransaction message.\" could be changed to: \"Identifies the message as\na prepared transaction message.\"\n\n> > And finally, could you please clarify this part?\n> > -- The end LSN of the prepare transaction.\n> > Is it a typo of \"prepared transaction\"?\n\nI think in this case it should be a \"prepared transaction\".\n\n\nThanks for the report and Thanks Michael for including me. I am just\nredirecting it to -hackers so that others involved in this feature\nalso can share their views.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 3 Aug 2022 09:27:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 minor fixes in protocol.sgml"
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 1:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 2, 2022 at 4:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Mon, Aug 01, 2022 at 11:00:20PM +0300, Ekaterina Kiryanova wrote:\n> >\n> > > Another point worth mentioning is that only this file contains the phrase\n> > > \"two-phase transaction\". I believe that \"two-phase commit transaction\" or\n> > > \"transaction prepared for two-phase commit\" depending on the situation would\n> > > be better wording.\n> >\n> > \"Prepare for two-phase commit\" may be clearer?\n> >\n>\n> I think we can use just \"Prepared transaction\" instead. So, the\n> message \"The user defined GID of the two-phase transaction.\" can be\n> changed to \"The user defined GID of the prepared transaction.\".\n> Similarly, the message \"Identifies the message as a two-phase prepared\n> transaction message.\" could be changed to: \"Identifies the message as\n> a prepared transaction message.\"\n>\n> > > And finally, could you please clarify this part?\n> > > -- The end LSN of the prepare transaction.\n> > > Is it a typo of \"prepared transaction\"?\n>\n> I think in this case it should be a \"prepared transaction\".\n>\n>\n> Thanks for the report and Thanks Michael for including me. I am just\n> redirecting it to -hackers so that others involved in this feature\n> also can share their views.\n>\n\nPSA a patch to modify the descriptions as suggested by Amit.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 3 Aug 2022 15:26:25 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 minor fixes in protocol.sgml"
},
{
"msg_contents": "On 2022-Aug-03, Amit Kapila wrote:\n\n> Thanks for the report and Thanks Michael for including me. I am just\n> redirecting it to -hackers so that others involved in this feature\n> also can share their views.\n\nI'm sorry, but our policy is that crossposts are not allowed. I think\nthis policy is bad, precisely because it prevents legitimate cases like\nthis one; but it is what it is.\n\nI think we should change the policy, not back to allow indiscriminate\ncross-posting, but to allow some limited form of it. For example I\nthink pg-bugs+pg-hackers and pg-docs+pg-hackers should be allowed\ncombinations. Just saying.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 3 Aug 2022 12:53:15 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 minor fixes in protocol.sgml"
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 10:56 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> PSA a patch to modify the descriptions as suggested by Amit.\n>\n\n*\n<para>\n- The end LSN of the commit prepared transaction.\n+ The end LSN of the commit of the prepared transaction.\n...\n...\n- Identifies the message as the commit of a two-phase\ntransaction message.\n+ Identifies the message as the commit of a prepared\ntransaction message.\n\nIn the above messages, we can even directly say \"commit prepared\ntransaction\" but as you have written appears clear to me.\n\n*\nFor timestamp, related messages, we have three different messages:\nCommit timestamp of the transaction. The value is in number of\nmicroseconds since PostgreSQL epoch (2000-01-01).\nPrepare timestamp of the transaction. The value is in number of\nmicroseconds since PostgreSQL epoch (2000-01-01).\nRollback timestamp of the transaction. The value is in number of\nmicroseconds since PostgreSQL epoch (2000-01-01).\n\nWe can improve by saying \"Timestamp of prepared transaction\" for the\nsecond one but it will make it bit inconsistent with others, so not\nsure if changing it makes sense or if there is a better way to change\nall the three messages.\n\nThoughts?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 4 Aug 2022 08:34:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 minor fixes in protocol.sgml"
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 4:23 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Aug-03, Amit Kapila wrote:\n>\n> > Thanks for the report and Thanks Michael for including me. I am just\n> > redirecting it to -hackers so that others involved in this feature\n> > also can share their views.\n>\n> I'm sorry, but our policy is that crossposts are not allowed. I think\n> this policy is bad, precisely because it prevents legitimate cases like\n> this one; but it is what it is.\n>\n> I think we should change the policy, not back to allow indiscriminate\n> cross-posting, but to allow some limited form of it. For example I\n> think pg-bugs+pg-hackers and pg-docs+pg-hackers should be allowed\n> combinations. Just saying.\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 4 Aug 2022 08:36:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 minor fixes in protocol.sgml"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 1:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 3, 2022 at 10:56 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > PSA a patch to modify the descriptions as suggested by Amit.\n> >\n>\n> *\n> <para>\n> - The end LSN of the commit prepared transaction.\n> + The end LSN of the commit of the prepared transaction.\n> ...\n> ...\n> - Identifies the message as the commit of a two-phase\n> transaction message.\n> + Identifies the message as the commit of a prepared\n> transaction message.\n>\n> In the above messages, we can even directly say \"commit prepared\n> transaction\" but as you have written appears clear to me.\n>\n> *\n> For timestamp, related messages, we have three different messages:\n> Commit timestamp of the transaction. The value is in number of\n> microseconds since PostgreSQL epoch (2000-01-01).\n> Prepare timestamp of the transaction. The value is in number of\n> microseconds since PostgreSQL epoch (2000-01-01).\n> Rollback timestamp of the transaction. The value is in number of\n> microseconds since PostgreSQL epoch (2000-01-01).\n>\n> We can improve by saying \"Timestamp of prepared transaction\" for the\n> second one but it will make it bit inconsistent with others, so not\n> sure if changing it makes sense or if there is a better way to change\n> all the three messages.\n>\n> Thoughts?\n>\n\nThere was no feedback for Amit's previous post [1], so I am just\nattaching the same [2] patch again, but this time for both HEAD and\nREL_15_STABLE.\n\n------\n[1] https://www.postgresql.org/message-id/CAA4eK1LHSDb3KVRZZnYeBF0-SodMKYP%3DV%2B2VmrVBvRNK%3Dej1Tw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAHut%2BPs8TLKFL0P4ghgERdTcDeB4y61zWm128524h88BhnYmfA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 9 Aug 2022 11:05:24 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 minor fixes in protocol.sgml"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 09:49:47AM -0300, Euler Taveira wrote:\n> On Sun, Jul 31, 2022, at 8:37 PM, PG Doc comments form wrote:\n> \n> Towards the end of the \"43.9.1. Reporting Errors and Messages\" section\n> (here\n> https://www.postgresql.org/docs/current/plpgsql-errors-and-messages.html#\n> PLPGSQL-STATEMENTS-RAISE)\n> we have the following sentence:\n> \n> > If no condition name nor SQLSTATE is specified in a RAISE EXCEPTION\n> command, the default is to use ERRCODE_RAISE_EXCEPTION (P0001).\n> \n> Looking at the list of error codes (here\n> https://www.postgresql.org/docs/current/errcodes-appendix.html) I think the\n> \"ERRCODE_RAISE_EXCEPTION (P0001)\" is a typo and should remove \"ERRCODE_\"\n> and\n> simply read \"RAISE_EXCEPTION (P0001)\" or perhaps \"ERRCODE =\n> 'RAISE_EXCEPTION'\" since that's how the default behaviour would be written\n> in a RAISE statement.\n> \n> It is referring to the internal constant (see src/backend/utils/errcodes.h). It\n> was like you are proposing and it was changed in\n> 66bde49d96a9ddacc49dcbdf1b47b5bd6e31ead5. Reading the original thread, there is\n> no explanation why it was changed. Refer to internal names is not good for a\n> user-oriented text. I think it would be better to use the condition name (in\n> lowercase) like it is referred to in [1]. 
I mean, change\n> ERRCODE_RAISE_EXCEPTION to raise_exception.\n> \n> [1] https://www.postgresql.org/docs/current/errcodes-appendix.html\n\nAlexander, Michael, can you explain why this commit removed ERRCODE_:\n\n\tcommit 66bde49d96\n\tAuthor: Michael Paquier <michael@paquier.xyz>\n\tDate: Tue Aug 13 13:53:41 2019 +0900\n\t\n\t Fix inconsistencies and typos in the tree, take 10\n\t\n\t This addresses some issues with unnecessary code comments, fixes various\n\t typos in docs and comments, and removes some orphaned structures and\n\t definitions.\n\t\n\t Author: Alexander Lakhin\n\t Discussion: https://postgr.es/m/9aabc775-5494-b372-8bcb-4dfc0bd37c68@gmail.com\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 31 Oct 2023 10:52:12 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Typo in \"43.9.1. Reporting Errors and Messages\"?"
},
{
"msg_contents": "Hi Bruce,\n\n31.10.2023 17:52, Bruce Momjian wrote:\n>\n>> It is referring to the internal constant (see src/backend/utils/errcodes.h). It\n>> was like you are proposing and it was changed in\n>> 66bde49d96a9ddacc49dcbdf1b47b5bd6e31ead5. Reading the original thread, there is\n>> no explanation why it was changed. Refer to internal names is not good for a\n>> user-oriented text. I think it would be better to use the condition name (in\n>> lowercase) like it is referred to in [1]. I mean, change\n>> ERRCODE_RAISE_EXCEPTION to raise_exception.\n>>\n>> [1] https://www.postgresql.org/docs/current/errcodes-appendix.html\n> Alexander, Michael, can you explain why this commit removed ERRCODE_:\n>\n> \tcommit 66bde49d96\n\nI don't remember details, but I think the primary reason for the change\nwas that \"RAISE_EXCEPTION\" occurred in the whole tree only once (before\n66bde49d96). Now I see, that I had chosen the wrong replacement — I agree\nwith Euler, change to \"raise_exception\" would be more appropriate.\n\n(I've found a similar mention of ERRCODE_xxx in btree.sgml:\n Before doing so, the function should check the sign\n of <replaceable>offset</replaceable>: if it is less than zero, raise\n error <literal>ERRCODE_INVALID_PRECEDING_OR_FOLLOWING_SIZE</literal> (22013)\n with error text like <quote>invalid preceding or following size in window\n function</quote>.\nbut I think that's okay here, because that identifier supposed to be used\nas-is in ereport/elog.)\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 31 Oct 2023 21:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in \"43.9.1. Reporting Errors and Messages\"?"
},
{
"msg_contents": "On Tue, Oct 31, 2023 at 09:00:00PM +0300, Alexander Lakhin wrote:\n> I don't remember details, but I think the primary reason for the change\n> was that \"RAISE_EXCEPTION\" occurred in the whole tree only once (before\n> 66bde49d96). Now I see, that I had chosen the wrong replacement — I agree\n> with Euler, change to \"raise_exception\" would be more appropriate.\n\nIndeed, it looks like the origin of the confusion is the casing here,\nso changing to \"raise_exception\" like in the appendix sounds good to\nme:\nhttps://www.postgresql.org/docs/devel/errcodes-appendix.html\n\nSo you mean something like the attached then?\n\n> (I've found a similar mention of ERRCODE_xxx in btree.sgml:\n> Before doing so, the function should check the sign\n> of <replaceable>offset</replaceable>: if it is less than zero, raise\n> error <literal>ERRCODE_INVALID_PRECEDING_OR_FOLLOWING_SIZE</literal> (22013)\n> with error text like <quote>invalid preceding or following size in window\n> function</quote>.\n> but I think that's okay here, because that identifier supposed to be used\n> as-is in ereport/elog.)\n\nYep, still this one is not that old (0a459cec96d3).\n--\nMichael",
"msg_date": "Wed, 1 Nov 2023 09:18:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Typo in \"43.9.1. Reporting Errors and Messages\"?"
},
{
"msg_contents": "On Wed, Nov 01, 2023 at 09:18:47AM +0900, Michael Paquier wrote:\n> So you mean something like the attached then?\n\nFixed that with f8b96c211da0 down to 11, in time for next week's\nrelease set.\n--\nMichael",
"msg_date": "Thu, 2 Nov 2023 07:35:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Typo in \"43.9.1. Reporting Errors and Messages\"?"
}
] |
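The thread above hinges on the five-character SQLSTATE "P0001" (condition name raise_exception) that sits behind the internal ERRCODE_RAISE_EXCEPTION constant. A hedged sketch of how PostgreSQL packs such a SQLSTATE into one integer, mirroring the PGSIXBIT/MAKE_SQLSTATE macros in src/include/utils/elog.h and the inverse unpack_sql_state() in elog.c; the Python helper names below are assumptions for illustration, not code quoted from any message:

```python
# Sketch of the SQLSTATE packing behind constants like
# ERRCODE_RAISE_EXCEPTION ("P0001"). Each of the five characters is
# stored in 6 bits, offset from '0', as in PostgreSQL's PGSIXBIT and
# MAKE_SQLSTATE macros (src/include/utils/elog.h).

def pgsixbit(ch: str) -> int:
    # One SQLSTATE character -> 6-bit value relative to '0'.
    return (ord(ch) - ord('0')) & 0x3F

def make_sqlstate(state: str) -> int:
    # Pack five characters, least significant character first.
    assert len(state) == 5
    code = 0
    for i, ch in enumerate(state):
        code += pgsixbit(ch) << (6 * i)
    return code

def unpack_sqlstate(code: int) -> str:
    # Inverse operation, as in unpack_sql_state() in elog.c.
    chars = []
    for _ in range(5):
        chars.append(chr(0x30 + (code & 0x3F)))
        code >>= 6
    return ''.join(chars)

raise_exception = make_sqlstate("P0001")
print(raise_exception)                   # packed integer form
print(unpack_sqlstate(raise_exception))  # -> P0001
```

Round-tripping also works for codes cited elsewhere in the thread, such as 22013 (invalid_preceding_or_following_size).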
[
{
"msg_contents": "\nHi, hackers\n\nI think there is a typo in pg_db_role_setting.h, should we fix it?\n\ndiff --git a/src/include/catalog/pg_db_role_setting.h b/src/include/catalog/pg_db_role_setting.h\nindex 45d478e9e7..f92e867df4 100644\n--- a/src/include/catalog/pg_db_role_setting.h\n+++ b/src/include/catalog/pg_db_role_setting.h\n@@ -51,7 +51,7 @@ DECLARE_TOAST_WITH_MACRO(pg_db_role_setting, 2966, 2967, PgDbRoleSettingToastTab\n DECLARE_UNIQUE_INDEX_PKEY(pg_db_role_setting_databaseid_rol_index, 2965, DbRoleSettingDatidRolidIndexId, on pg_db_role_setting using btree(setdatabase oid_ops, setrole oid_ops));\n\n /*\n- * prototypes for functions in pg_db_role_setting.h\n+ * prototypes for functions in pg_db_role_setting.c\n */\n extern void AlterSetting(Oid databaseid, Oid roleid, VariableSetStmt *setstmt);\n extern void DropSetting(Oid databaseid, Oid roleid);\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Mon, 01 Aug 2022 17:18:39 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Typo in pg_db_role_setting.h"
},
{
"msg_contents": "On Mon, Aug 1, 2022 at 5:18 PM Japin Li <japinli@hotmail.com> wrote:\n\n> I think there is a typo in pg_db_role_setting.h, should we fix it?\n\n\nDefinitely this is wrong. +1 for the fix.\n\nThanks\nRichard\n\nOn Mon, Aug 1, 2022 at 5:18 PM Japin Li <japinli@hotmail.com> wrote:\nI think there is a typo in pg_db_role_setting.h, should we fix it?Definitely this is wrong. +1 for the fix.ThanksRichard",
"msg_date": "Mon, 1 Aug 2022 17:28:45 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in pg_db_role_setting.h"
},
{
"msg_contents": "On Mon, Aug 1, 2022 at 4:18 PM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> Hi, hackers\n>\n> I think there is a typo in pg_db_role_setting.h, should we fix it?\n>\n> diff --git a/src/include/catalog/pg_db_role_setting.h\nb/src/include/catalog/pg_db_role_setting.h\n> index 45d478e9e7..f92e867df4 100644\n> /*\n> - * prototypes for functions in pg_db_role_setting.h\n> + * prototypes for functions in pg_db_role_setting.c\n> */\n\nYou are correct, but I wonder if it'd be better to just drop the comment\nentirely. I checked a couple other random headers with function\ndeclarations and they didn't have such a comment, and it's kind of obvious\nwhat they're for.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Mon, Aug 1, 2022 at 4:18 PM Japin Li <japinli@hotmail.com> wrote:>>> Hi, hackers>> I think there is a typo in pg_db_role_setting.h, should we fix it?>> diff --git a/src/include/catalog/pg_db_role_setting.h b/src/include/catalog/pg_db_role_setting.h> index 45d478e9e7..f92e867df4 100644> /*> - * prototypes for functions in pg_db_role_setting.h> + * prototypes for functions in pg_db_role_setting.c> */You are correct, but I wonder if it'd be better to just drop the comment entirely. I checked a couple other random headers with function declarations and they didn't have such a comment, and it's kind of obvious what they're for.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 1 Aug 2022 19:46:30 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in pg_db_role_setting.h"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> You are correct, but I wonder if it'd be better to just drop the comment\n> entirely. I checked a couple other random headers with function\n> declarations and they didn't have such a comment, and it's kind of obvious\n> what they're for.\n\nSome places have these, some don't. It's probably more useful where\na header foo.h is declaring functions that aren't in the obviously\ncorresponding foo.c file, or live in multiple files. In this case\nI agree it's not adding much.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Aug 2022 10:16:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Typo in pg_db_role_setting.h"
},
{
"msg_contents": "\nOn Mon, 01 Aug 2022 at 20:46, John Naylor <john.naylor@enterprisedb.com> wrote:\n> On Mon, Aug 1, 2022 at 4:18 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>>\n>> Hi, hackers\n>>\n>> I think there is a typo in pg_db_role_setting.h, should we fix it?\n>>\n>> diff --git a/src/include/catalog/pg_db_role_setting.h\n> b/src/include/catalog/pg_db_role_setting.h\n>> index 45d478e9e7..f92e867df4 100644\n>> /*\n>> - * prototypes for functions in pg_db_role_setting.h\n>> + * prototypes for functions in pg_db_role_setting.c\n>> */\n>\n> You are correct, but I wonder if it'd be better to just drop the comment\n> entirely. I checked a couple other random headers with function\n> declarations and they didn't have such a comment, and it's kind of obvious\n> what they're for.\n\nBoth are fine for me. I find there are some headers also have such a comment,\nlike pg_enum, pg_range and pg_namespace.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Mon, 01 Aug 2022 22:24:24 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Typo in pg_db_role_setting.h"
},
{
"msg_contents": "On Mon, 01 Aug 2022 at 22:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> John Naylor <john.naylor@enterprisedb.com> writes:\n>> You are correct, but I wonder if it'd be better to just drop the comment\n>> entirely. I checked a couple other random headers with function\n>> declarations and they didn't have such a comment, and it's kind of obvious\n>> what they're for.\n>\n> Some places have these, some don't. It's probably more useful where\n> a header foo.h is declaring functions that aren't in the obviously\n> corresponding foo.c file, or live in multiple files. In this case\n> I agree it's not adding much.\n>\n\nAttached patch to remove this comment. Please take a look.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Mon, 01 Aug 2022 22:42:22 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Typo in pg_db_role_setting.h"
},
{
"msg_contents": "On Mon, Aug 1, 2022 at 10:42 PM Japin Li <japinli@hotmail.com> wrote:\n\n>\n> On Mon, 01 Aug 2022 at 22:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > John Naylor <john.naylor@enterprisedb.com> writes:\n> >> You are correct, but I wonder if it'd be better to just drop the comment\n> >> entirely. I checked a couple other random headers with function\n> >> declarations and they didn't have such a comment, and it's kind of\n> obvious\n> >> what they're for.\n> >\n> > Some places have these, some don't. It's probably more useful where\n> > a header foo.h is declaring functions that aren't in the obviously\n> > corresponding foo.c file, or live in multiple files. In this case\n> > I agree it's not adding much.\n> >\n>\n> Attached patch to remove this comment. Please take a look.\n\n\nI'm not sure that we should remove such comments. And a rough search\nshows that there are much more places with this kind of comments, such\nas below:\n\nnbtxlog transam readfuncs walreceiver buffile bufmgr fd latch pmsignal\nprocsignal sinvaladt logtape rangetypes\n\nThanks\nRichard\n\nOn Mon, Aug 1, 2022 at 10:42 PM Japin Li <japinli@hotmail.com> wrote:\nOn Mon, 01 Aug 2022 at 22:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> John Naylor <john.naylor@enterprisedb.com> writes:\n>> You are correct, but I wonder if it'd be better to just drop the comment\n>> entirely. I checked a couple other random headers with function\n>> declarations and they didn't have such a comment, and it's kind of obvious\n>> what they're for.\n>\n> Some places have these, some don't. It's probably more useful where\n> a header foo.h is declaring functions that aren't in the obviously\n> corresponding foo.c file, or live in multiple files. In this case\n> I agree it's not adding much.\n>\n\nAttached patch to remove this comment. Please take a look.I'm not sure that we should remove such comments. 
And a rough searchshows that there are much more places with this kind of comments, suchas below:nbtxlog transam readfuncs walreceiver buffile bufmgr fd latch pmsignalprocsignal sinvaladt logtape rangetypesThanksRichard",
"msg_date": "Tue, 2 Aug 2022 11:06:01 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in pg_db_role_setting.h"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 10:06 AM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n> I'm not sure that we should remove such comments. And a rough search\n> shows that there are much more places with this kind of comments, such\n> as below:\n>\n> nbtxlog transam readfuncs walreceiver buffile bufmgr fd latch pmsignal\n> procsignal sinvaladt logtape rangetypes\n\nI was talking only about catalog/pg_*.c functions, as in Japin Li's latest\npatch. You didn't mention whether your examples fall in the category Tom\nmentioned upthread, so I'm not sure what your angle is.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Tue, Aug 2, 2022 at 10:06 AM Richard Guo <guofenglinux@gmail.com> wrote:>> I'm not sure that we should remove such comments. And a rough search> shows that there are much more places with this kind of comments, such> as below:>> nbtxlog transam readfuncs walreceiver buffile bufmgr fd latch pmsignal> procsignal sinvaladt logtape rangetypesI was talking only about catalog/pg_*.c functions, as in Japin Li's latest patch. You didn't mention whether your examples fall in the category Tom mentioned upthread, so I'm not sure what your angle is.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 2 Aug 2022 11:13:25 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in pg_db_role_setting.h"
},
{
"msg_contents": "\nOn Tue, 02 Aug 2022 at 11:06, Richard Guo <guofenglinux@gmail.com> wrote:\n> On Mon, Aug 1, 2022 at 10:42 PM Japin Li <japinli@hotmail.com> wrote:\n>\n>>\n>> On Mon, 01 Aug 2022 at 22:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> > John Naylor <john.naylor@enterprisedb.com> writes:\n>> >> You are correct, but I wonder if it'd be better to just drop the comment\n>> >> entirely. I checked a couple other random headers with function\n>> >> declarations and they didn't have such a comment, and it's kind of\n>> obvious\n>> >> what they're for.\n>> >\n>> > Some places have these, some don't. It's probably more useful where\n>> > a header foo.h is declaring functions that aren't in the obviously\n>> > corresponding foo.c file, or live in multiple files. In this case\n>> > I agree it's not adding much.\n>> >\n>>\n>> Attached patch to remove this comment. Please take a look.\n>\n>\n> I'm not sure that we should remove such comments. And a rough search\n> shows that there are much more places with this kind of comments, such\n> as below:\n>\n> nbtxlog transam readfuncs walreceiver buffile bufmgr fd latch pmsignal\n> procsignal sinvaladt logtape rangetypes\n>\n\nThanks for your review! Here, I think we are only talking about catalog headers.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Tue, 02 Aug 2022 12:34:29 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Typo in pg_db_role_setting.h"
},
{
"msg_contents": "On Mon, Aug 1, 2022 at 9:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > You are correct, but I wonder if it'd be better to just drop the comment\n> > entirely. I checked a couple other random headers with function\n> > declarations and they didn't have such a comment, and it's kind of\nobvious\n> > what they're for.\n>\n> Some places have these, some don't. It's probably more useful where\n> a header foo.h is declaring functions that aren't in the obviously\n> corresponding foo.c file, or live in multiple files. In this case\n> I agree it's not adding much.\n\nI somehow forgot that just yesterday I working on a project that will\npossibly add a declaration to every catalog header for tuple deforming. In\nthat case, we will want to keep existing comments and possibly add more. In\nthe meantime, I'll go just apply the correction.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Mon, Aug 1, 2022 at 9:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:>> John Naylor <john.naylor@enterprisedb.com> writes:> > You are correct, but I wonder if it'd be better to just drop the comment> > entirely. I checked a couple other random headers with function> > declarations and they didn't have such a comment, and it's kind of obvious> > what they're for.>> Some places have these, some don't. It's probably more useful where> a header foo.h is declaring functions that aren't in the obviously> corresponding foo.c file, or live in multiple files. In this case> I agree it's not adding much.I somehow forgot that just yesterday I working on a project that will possibly add a declaration to every catalog header for tuple deforming. In that case, we will want to keep existing comments and possibly add more. In the meantime, I'll go just apply the correction.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 2 Aug 2022 11:43:19 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in pg_db_role_setting.h"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 12:13 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n>\n> On Tue, Aug 2, 2022 at 10:06 AM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n> >\n> > I'm not sure that we should remove such comments. And a rough search\n> > shows that there are much more places with this kind of comments, such\n> > as below:\n> >\n> > nbtxlog transam readfuncs walreceiver buffile bufmgr fd latch pmsignal\n> > procsignal sinvaladt logtape rangetypes\n>\n> I was talking only about catalog/pg_*.c functions, as in Japin Li's latest\n> patch. You didn't mention whether your examples fall in the category Tom\n> mentioned upthread, so I'm not sure what your angle is.\n>\n\nSorry I forgot to mention that. The examples listed upthread all contain\nsuch comment in foo.h saying 'prototypes for functions in foo.c'. For\ninstance, in buffile.h, there is comment saying\n\n/*\n * prototypes for functions in buffile.c\n */\n\nSo if we remove such comments, should we also do so for those cases?\n\nThanks\nRichard\n\nOn Tue, Aug 2, 2022 at 12:13 PM John Naylor <john.naylor@enterprisedb.com> wrote:On Tue, Aug 2, 2022 at 10:06 AM Richard Guo <guofenglinux@gmail.com> wrote:>> I'm not sure that we should remove such comments. And a rough search> shows that there are much more places with this kind of comments, such> as below:>> nbtxlog transam readfuncs walreceiver buffile bufmgr fd latch pmsignal> procsignal sinvaladt logtape rangetypesI was talking only about catalog/pg_*.c functions, as in Japin Li's latest patch. You didn't mention whether your examples fall in the category Tom mentioned upthread, so I'm not sure what your angle is.Sorry I forgot to mention that. The examples listed upthread all containsuch comment in foo.h saying 'prototypes for functions in foo.c'. Forinstance, in buffile.h, there is comment saying/* * prototypes for functions in buffile.c */So if we remove such comments, should we also do so for those cases?ThanksRichard",
"msg_date": "Tue, 2 Aug 2022 15:37:58 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in pg_db_role_setting.h"
},
{
"msg_contents": "> On 2 Aug 2022, at 09:37, Richard Guo <guofenglinux@gmail.com> wrote:\n\n> The examples listed upthread all contain such comment in foo.h saying\n> 'prototypes for functions in foo.c'. For instance, in buffile.h, there is\n> comment saying\n\n> /*\n> * prototypes for functions in buffile.c\n> */\n> \n> So if we remove such comments, should we also do so for those cases?\n\nComments which state the obvious are seldom helpful, I would prefer to remove\nsuch comments and only explicitly call out the .c file in a comment when it's a\ndifferent basename from the header.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 2 Aug 2022 09:45:36 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Typo in pg_db_role_setting.h"
},
{
"msg_contents": "\nOn Tue, 02 Aug 2022 at 15:45, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 2 Aug 2022, at 09:37, Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>> The examples listed upthread all contain such comment in foo.h saying\n>> 'prototypes for functions in foo.c'. For instance, in buffile.h, there is\n>> comment saying\n>\n>> /*\n>> * prototypes for functions in buffile.c\n>> */\n>> \n>> So if we remove such comments, should we also do so for those cases?\n>\n> Comments which state the obvious are seldom helpful, I would prefer to remove\n> such comments and only explicitly call out the .c file in a comment when it's a\n> different basename from the header.\n\n+1\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Tue, 02 Aug 2022 15:58:40 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Typo in pg_db_role_setting.h"
}
] |
[
{
"msg_contents": "\nHi, hackers\n\nWhen I try to modify the parameters for all users in the following command [1]\n(the library doesn't exist), and I quit the connection, I cannot log in the\ndatabase, how can I bypass this checking?\n\nI find those parameters loaded by process_settings(), and it seems no way to\ndisable this loading process. If we could bypass this checking, how can we\nfix these parameters?\n\n[1]\npostgres=# ALTER ROLE all SET local_preload_libraries TO fdafd;\nALTER ROLE\npostgres=# \\q\n\n$ psql postgres\npsql: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: FATAL: could not access file \"$libdir/plugins/fdafd\": No such file or directory\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Mon, 01 Aug 2022 18:24:33 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Question about user/database-level parameters"
},
{
"msg_contents": "At Mon, 01 Aug 2022 18:24:33 +0800, Japin Li <japinli@hotmail.com> wrote in \n> \n> Hi, hackers\n> \n> When I try to modify the parameters for all users in the following command [1]\n> (the library doesn't exist), and I quit the connection, I cannot log in the\n> database, how can I bypass this checking?\n> \n> I find those parameters loaded by process_settings(), and it seems no way to\n> disable this loading process. If we could bypass this checking, how can we\n> fix these parameters?\n> \n> [1]\n> postgres=# ALTER ROLE all SET local_preload_libraries TO fdafd;\n> ALTER ROLE\n> postgres=# \\q\n> \n> $ psql postgres\n> psql: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: FATAL: could not access file \"$libdir/plugins/fdafd\": No such file or directory\n\nCan you run the server in single-user mode? That mode doesn't try\nloading libraries.\n\nregareds.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 02 Aug 2022 14:01:34 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Question about user/database-level parameters"
},
{
"msg_contents": "\nOn Tue, 02 Aug 2022 at 13:01, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> At Mon, 01 Aug 2022 18:24:33 +0800, Japin Li <japinli@hotmail.com> wrote in \n>> \n>> Hi, hackers\n>> \n>> When I try to modify the parameters for all users in the following command [1]\n>> (the library doesn't exist), and I quit the connection, I cannot log in the\n>> database, how can I bypass this checking?\n>> \n>> I find those parameters loaded by process_settings(), and it seems no way to\n>> disable this loading process. If we could bypass this checking, how can we\n>> fix these parameters?\n>> \n>> [1]\n>> postgres=# ALTER ROLE all SET local_preload_libraries TO fdafd;\n>> ALTER ROLE\n>> postgres=# \\q\n>> \n>> $ psql postgres\n>> psql: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: FATAL: could not access file \"$libdir/plugins/fdafd\": No such file or directory\n>\n> Can you run the server in single-user mode? That mode doesn't try\n> loading libraries.\n>\n\nYeah, the single-user mode works. Thank you very much!\nHowever, if the database is in production, we cannot go into single-user mode,\nshould we provide an option to change this behavior on the fly?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Tue, 02 Aug 2022 13:33:22 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Question about user/database-level parameters"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> Yeah, the single-user mode works. Thank you very much!\n> However, if the database is in production, we cannot go into single-user mode,\n> should we provide an option to change this behavior on the fly?\n\nThere is not, and never will be, a version of Postgres in which\nit's impossible for a superuser to shoot himself in the foot.\n\nTest your settings more carefully before applying them to a\nproduction database that you can't afford to mess up.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Aug 2022 01:44:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about user/database-level parameters"
},
{
"msg_contents": "\nOn Tue, 02 Aug 2022 at 13:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> Yeah, the single-user mode works. Thank you very much!\n>> However, if the database is in production, we cannot go into single-user mode,\n>> should we provide an option to change this behavior on the fly?\n>\n> There is not, and never will be, a version of Postgres in which\n> it's impossible for a superuser to shoot himself in the foot.\n>\n> Test your settings more carefully before applying them to a\n> production database that you can't afford to mess up.\n>\n\nThanks for your explanation! Got it.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Tue, 02 Aug 2022 13:59:39 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Question about user/database-level parameters"
},
{
"msg_contents": "\tJapin Li wrote:\n\n> However, if the database is in production, we cannot go into single-user\n> mode, should we provide an option to change this behavior on the fly?\n\nIt already exists, through PGOPTIONS, which appears to work\nfor local_preload_libraries, in a quick test.\n\nThat is, you can log in by invoking psql with:\nPGOPTIONS=\"-c local_preload_libraries=\"\nand issue the ALTER USER to reset things back to normal\nwithout stopping the instance.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Thu, 04 Aug 2022 13:29:57 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: Question about user/database-level parameters"
},
{
"msg_contents": "\nOn Thu, 04 Aug 2022 at 19:29, Daniel Verite <daniel@manitou-mail.org> wrote:\n> \tJapin Li wrote:\n>\n>> However, if the database is in production, we cannot go into single-user\n>> mode, should we provide an option to change this behavior on the fly?\n>\n> It already exists, through PGOPTIONS, which appears to work\n> for local_preload_libraries, in a quick test.\n>\n> That is, you can log in by invoking psql with:\n> PGOPTIONS=\"-c local_preload_libraries=\"\n> and issue the ALTER USER to reset things back to normal\n> without stopping the instance.\n\nOh, great! Thank you very much!\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 04 Aug 2022 23:03:47 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Question about user/database-level parameters"
}
] |
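Daniel Vérité's PGOPTIONS recovery trick from the thread above can be condensed into a single invocation. This is an illustrative fragment rather than a tested command, since it assumes a running server and a superuser connection; the database name is taken from the thread's own example:

```shell
# Recovery sketch: open one session with the per-role GUC cleared via
# PGOPTIONS, and reset the broken setting for all roles in that session.
PGOPTIONS="-c local_preload_libraries=" \
  psql -d postgres -c 'ALTER ROLE ALL RESET local_preload_libraries;'
```

After the RESET, ordinary logins no longer attempt to load the nonexistent library, so the instance never has to be restarted in single-user mode.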
[
{
"msg_contents": "Hi,\n\nThe previous discussion is:\n\nhttps://www.postgresql.org/message-id/CACJufxEnVqzOFtqhexF2%2BAwOKFrV8zHOY3y%3Dp%2BgPK6eB14pn_w%40mail.gmail.com\n\n\nWe have FORCE_NULL/FORCE_NOT_NULL options when COPY FROM, but users must set the columns one by one.\n\n CREATE TABLE forcetest (\n a INT NOT NULL,\n b TEXT NOT NULL,\n c TEXT,\n d TEXT,\n e TEXT\n );\n \\pset null NULL\n\n BEGIN;\n COPY forcetest (a, b, c, d) FROM STDIN WITH (FORMAT csv, FORCE_NOT_NULL(c,d), FORCE_NULL(c,d));\n 1,'a',,\"\"\n \\.\n COMMIT;\n\n SELECT c, d FROM forcetest WHERE a = 1;\n c | d\n ---+------\n | NULL\n (1 row)\n\n\nWe don’t have FORCE_NULL * or FORCE_NOT_NULL * for all columns of a table like FORCE_QUOTE *.\n\nThey should be helpful if a table have many columns.\n\nThis patch enables FORCE_NULL/FORCE_NOT_NULL options to select all columns of a table just like FORCE_QUOTE * (quote all columns).\n\n\n BEGIN\n COPY forcetest (a, b, c, d) FROM STDIN WITH (FORMAT csv, FORCE_NOT_NULL *, FORCE_NULL *);\n 2,'b',,\"\"\n \\.\n COMMIT;\n\n SELECT c, d FROM forcetest WHERE a = 2;\n c | d\n ---+------\n | NULL\n (1 row)\n\nAny thoughts?\n\nRegards,\nZhang Mingli",
"msg_date": "Mon, 1 Aug 2022 21:56:14 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "[feature]COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all\n columns"
},
{
"msg_contents": "\nOn 2022-08-01 Mo 09:56, Zhang Mingli wrote:\n> Hi, \n>\n> The previous discussion is:\n>\n> https://www.postgresql.org/message-id/CACJufxEnVqzOFtqhexF2%2BAwOKFrV8zHOY3y%3Dp%2BgPK6eB14pn_w%40mail.gmail.com\n\n\nStarting a new thread is pointless and annoying. As I said in the\nprevious thread, we would need a patch.\n\n\ncheers\n\n\nandrew\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 1 Aug 2022 15:50:58 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [feature]COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all\n columns"
},
{
"msg_contents": "\nOn 2022-08-01 Mo 15:50, Andrew Dunstan wrote:\n> On 2022-08-01 Mo 09:56, Zhang Mingli wrote:\n>> Hi, \n>>\n>> The previous discussion is:\n>>\n>> https://www.postgresql.org/message-id/CACJufxEnVqzOFtqhexF2%2BAwOKFrV8zHOY3y%3Dp%2BgPK6eB14pn_w%40mail.gmail.com\n>\n> Starting a new thread is pointless and annoying. As I said in the\n> previous thread, we would need a patch.\n>\n>\n\n\nApologies, I se you have sent a patch. I will check it out.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 1 Aug 2022 15:53:11 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [feature]COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all\n columns"
},
{
"msg_contents": "Hi,\n\nHaving FORCE_NULL(*) and FORCE_NOT_NULL(*) sounds good, since postgres\nalready has FORCE_QUOTE(*).\n\nI just quickly tried out your patch. It worked for me as expected.\n\n One little suggestion:\n\n+ if (cstate->opts.force_notnull_all)\n> + {\n> + int i;\n> + for(i = 0; i < num_phys_attrs; i++)\n> + cstate->opts.force_notnull_flags[i] = true;\n> + }\n\n\nInstead of setting force_null/force_notnull flags for all columns, what\nabout simply setting \"attnums\" list to cstate->attnumlist?\nSomething like the following should be enough :\n\n> if (cstate->opts.force_null_all)\n> attnums = cstate->attnumlist;\n> else\n> attnums = CopyGetAttnums(tupDesc, cstate->rel, cstate->opts.force_null);\n\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft\n\nHi,Having FORCE_NULL(*) and FORCE_NOT_NULL(*) sounds good, since postgres already has FORCE_QUOTE(*).I just quickly tried out your patch. It worked for me as expected. One little suggestion:+\tif (cstate->opts.force_notnull_all)+\t{+ int\t\ti;+ for(i = 0; i < num_phys_attrs; i++)+ cstate->opts.force_notnull_flags[i] = true;+\t} Instead of setting force_null/force_notnull flags for all columns, what about simply setting \"attnums\" list to cstate->attnumlist?Something like the following should be enough :if (cstate->opts.force_null_all) attnums = cstate->attnumlist;else attnums = CopyGetAttnums(tupDesc, cstate->rel, cstate->opts.force_null);Thanks, -- Melih MutluMicrosoft",
"msg_date": "Tue, 27 Dec 2022 14:02:15 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [feature]COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all\n columns"
},
{
"msg_contents": "HI,\n\nOn Dec 27, 2022, 19:02 +0800, Melih Mutlu <m.melihmutlu@gmail.com>, wrote:\n\tHi,\n\nHaving FORCE_NULL(*) and FORCE_NOT_NULL(*) sounds good, since postgres already has FORCE_QUOTE(*).\n\nI just quickly tried out your patch. It worked for me as expected.\n\n One little suggestion:\n\n\t+ if (cstate->opts.force_notnull_all)\n+ {\n+ int i;\n+ for(i = 0; i < num_phys_attrs; i++)\n+ cstate->opts.force_notnull_flags[i] = true;\n+ }\n\nInstead of setting force_null/force_notnull flags for all columns, what about simply setting \"attnums\" list to cstate->attnumlist?\nSomething like the following should be enough :\n\tif (cstate->opts.force_null_all)\n attnums = cstate->attnumlist;\nelse\n attnums = CopyGetAttnums(tupDesc, cstate->rel, cstate->opts.force_null);\nTanks very much for review.\n\nI got your point and we have to handle the case that there are no force_* options at all.\nSo the codes will be like:\n\n```\nList *attnums = NIL;\n\nif (cstate->opts.force_notnull_all)\nattnums = cstate->attnumlist;\nelse if (cstate->opts.force_notnull)\nattnums = CopyGetAttnums(tupDesc, cstate->rel, cstate->opts.force_notnull);\n\nif (attnums != NIL)\n{\n// process force_notnull columns\n\nattnums = NIL; // to process other options later\n}\n\nif (cstate->opts.force_null_all)\nattnums = cstate->attnumlist;\nelse if (cstate->opts.force_null)\nattnums = CopyGetAttnums(tupDesc, cstate->rel, cstate->opts.force_null);\n\nif (attnums != NIL)\n{\n// process force_null columns\n\nattnums = NIL; // to process other options later\n}\n```\nThat seems a little odd.\n\nOr, we could keep attnums as local variables, then the codes will be like:\n\n```\nif (cstate->opts.force_notnull_all || cstate->opts.force_notnull)\n{\nif (cstate->opts.force_notnull_all)\nattnums = cstate->attnumlist;\nelse\nattnums = CopyGetAttnums(tupDesc, cstate->rel, cstate->opts.force_notnull);\n// process force_notnull columns\n}\n```\n\nAny other suggestions?\n\n\nRegards,\nZhang 
Mingli\n\n\n\n\n\n\n\n\n\n\nHI, \n\n\n\nOn Dec 27, 2022, 19:02 +0800, Melih Mutlu <m.melihmutlu@gmail.com>, wrote:\n\nHi,\n\nHaving FORCE_NULL(*) and FORCE_NOT_NULL(*) sounds good, since postgres already has FORCE_QUOTE(*).\n\nI just quickly tried out your patch. It worked for me as expected.\n\n One little suggestion:\n\n\n+ if (cstate->opts.force_notnull_all)\n+ {\n+ int i;\n+ for(i = 0; i < num_phys_attrs; i++)\n+ cstate->opts.force_notnull_flags[i] = true;\n+ }\n\n \nInstead of setting force_null/force_notnull flags for all columns, what about simply setting \"attnums\" list to cstate->attnumlist?\nSomething like the following should be enough :\n\nif (cstate->opts.force_null_all)\n attnums = cstate->attnumlist;\nelse\n attnums = CopyGetAttnums(tupDesc, cstate->rel, cstate->opts.force_null);\n\n\nTanks very much for review.\n\nI got your point and we have to handle the case that there are no force_* options at all.\n\nSo the codes will be like:\n\n```\nList *attnums = NIL;\n\nif (cstate->opts.force_notnull_all)\nattnums = cstate->attnumlist;\nelse if (cstate->opts.force_notnull)\nattnums = CopyGetAttnums(tupDesc, cstate->rel, cstate->opts.force_notnull);\n\nif (attnums != NIL)\n{\n// process force_notnull columns\n\nattnums = NIL; // to process other options later\n}\n\nif (cstate->opts.force_null_all)\nattnums = cstate->attnumlist;\nelse if (cstate->opts.force_null)\nattnums = CopyGetAttnums(tupDesc, cstate->rel, cstate->opts.force_null);\n\nif (attnums != NIL)\n{\n// process force_null columns\n\nattnums = NIL; // to process other options later\n}\n```\n\nThat seems a little odd.\n\nOr, we could keep attnums as local variables, then the codes will be like:\n\n```\nif (cstate->opts.force_notnull_all || cstate->opts.force_notnull)\n{\nif (cstate->opts.force_notnull_all)\nattnums = cstate->attnumlist;\nelse\nattnums = CopyGetAttnums(tupDesc, cstate->rel, cstate->opts.force_notnull);\n// process force_notnull columns\n}\n```\n\nAny other 
suggestions?\n\n\n\nRegards,\nZhang Mingli",
"msg_date": "Fri, 13 Jan 2023 22:18:09 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [feature]COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all\n columns"
},
{
"msg_contents": "Hello!\n\nThe patch does not work for the current version of postgres, it needs to be\nupdated.\nI tested your patch. Everything looks simple and works well.\n\nThere is a suggestion to simplify the code: instead of using\n\nif (cstate->opts.force_notnull_all)\n{\nint i;\nfor(i = 0; i < num_phys_attrs; i++)\ncstate->opt.force_notnull_flags[i] = true;\n}\n\nyou can use MemSet():\n\nif (cstate->opts.force_notnull_all)\nMemSet(cstate->opt.force_notnull_flags, true, num_phys_attrs *\nsizeof(bool));\n\nThe same for the force_null case.\n\nRegards,\nDamir Belyalov,\nPostgres Professional\n\nHello!The patch does not work for the current version of postgres, it needs to be updated.I tested your patch. Everything looks simple and works well. There is a suggestion to simplify the code: instead of usingif (cstate->opts.force_notnull_all)\t{\t\tint\t\ti;\t\tfor(i = 0; i < num_phys_attrs; i++)\t\t\tcstate->opt.force_notnull_flags[i] = true;\t}you can use MemSet():if (cstate->opts.force_notnull_all)\t\tMemSet(cstate->opt.force_notnull_flags, true, num_phys_attrs * sizeof(bool));The same for the force_null case.Regards,Damir Belyalov,Postgres Professional",
"msg_date": "Fri, 7 Jul 2023 13:00:27 +0300",
"msg_from": "Damir Belyalov <dam.bel07@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [feature]COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all\n columns"
},
{
"msg_contents": "HI,\n\nRegards,\nZhang Mingli\nOn Jul 7, 2023, 18:00 +0800, Damir Belyalov <dam.bel07@gmail.com>, wrote:\n>\n> The patch does not work for the current version of postgres, it needs to be updated.\n> I tested your patch. Everything looks simple and works well.\n>\n> There is a suggestion to simplify the code: instead of using\n>\n> if (cstate->opts.force_notnull_all)\n> {\n> int i;\n> for(i = 0; i < num_phys_attrs; i++)\n> cstate->opt.force_notnull_flags[i] = true;\n> }\n\nThanks very much for review.\n\nNice suggestion, patch rebased and updated.",
"msg_date": "Sun, 9 Jul 2023 11:51:44 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [feature]COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all\n columns"
},
{
"msg_contents": "Hi,\n\n\nOn Jul 9, 2023 at 11:51 +0800, Zhang Mingli <zmlpostgres@gmail.com>, wrote:\n\tHI,\n\nRegards,\nZhang Mingli\nOn Jul 7, 2023, 18:00 +0800, Damir Belyalov <dam.bel07@gmail.com>, wrote:\n\nThe patch does not work for the current version of postgres, it needs to be updated.\nI tested your patch. Everything looks simple and works well.\n\nThere is a suggestion to simplify the code: instead of using\n\nif (cstate->opts.force_notnull_all)\n{\nint i;\nfor(i = 0; i < num_phys_attrs; i++)\ncstate->opt.force_notnull_flags[i] = true;\n}\n\nThanks very much for review.\n\nNice suggestion, patch rebased and updated.\n\nV2 patch still have some errors when apply file doc/src/sgml/ref/copy.sgml, rebased and fixed it in V3 path.\nThanks a lot for review.\n\n\n\nZhang Mingli\n\nwww.hashdata.xyz\n>",
"msg_date": "Wed, 19 Jul 2023 05:08:18 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [feature]COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all\n columns"
},
{
"msg_contents": "Hi,\n\nOn Jul 7, 2023 at 18:00 +0800, Damir Belyalov <dam.bel07@gmail.com>, wrote:\n>\n> V2 patch still have some errors when apply file doc/src/sgml/ref/copy.sgml, rebased and fixed it in V3 path.\n> Thanks a lot for review.\nI have updated https://commitfest.postgresql.org/43/3896/ to staus Ready for Committer, thanks again.\n\n\nZhang Mingli\nwww.hashdata.xyz\n\n\n\n\n\n\n\nHi,\n\nOn Jul 7, 2023 at 18:00 +0800, Damir Belyalov <dam.bel07@gmail.com>, wrote:\n\nV2 patch still have some errors when apply file doc/src/sgml/ref/copy.sgml, rebased and fixed it in V3 path.\nThanks a lot for review.\nI have updated https://commitfest.postgresql.org/43/3896/ to staus Ready for Committer, thanks again.\n\n\n\n\nZhang Mingli\nwww.hashdata.xyz",
"msg_date": "Thu, 20 Jul 2023 15:06:32 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [feature]COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all\n columns"
},
{
"msg_contents": "Hello,\n\nOn Thu, Jul 20, 2023 at 4:06 PM Zhang Mingli <zmlpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> On Jul 7, 2023 at 18:00 +0800, Damir Belyalov <dam.bel07@gmail.com>, wrote:\n>\n>\n> V2 patch still have some errors when apply file doc/src/sgml/ref/copy.sgml, rebased and fixed it in V3 path.\n> Thanks a lot for review.\n>\n> I have updated https://commitfest.postgresql.org/43/3896/ to staus Ready for Committer, thanks again.\n\nI've looked at this patch and it looks mostly fine, though I do not\nintend to commit it myself; perhaps Andrew will.\n\nA few minor things to improve:\n\n+ If <literal>*</literal> is specified, it will be applied in all columns.\n...\n+ If <literal>*</literal> is specified, it will be applied in all columns.\n\nPlease write \"it will be applied in\" as \"the option will be applied to\".\n\n+ bool force_notnull_all; /* FORCE_NOT_NULL * */\n...\n+ bool force_null_all; /* FORCE_NULL * */\n\nLike in the comment for force_quote, please add a \"?\" after * in the\nabove comments.\n\n+ if (cstate->opts.force_notnull_all)\n+ MemSet(cstate->opts.force_notnull_flags, true, num_phys_attrs\n* sizeof(bool));\n...\n+ if (cstate->opts.force_null_all)\n+ MemSet(cstate->opts.force_null_flags, true, num_phys_attrs *\nsizeof(bool));\n\nWhile I am not especially opposed to using this 1-line variant to set\nthe flags array, it does mean that there are now different styles\nbeing used for similar code, because force_quote_flags uses a for\nloop:\n\n if (cstate->opts.force_quote_all)\n {\n int i;\n\n for (i = 0; i < num_phys_attrs; i++)\n cstate->opts.force_quote_flags[i] = true;\n }\n\nPerhaps we could fix the inconsistency by changing the force_quote_all\ncode to use MemSet() too. I'll defer whether to do that to Andrew's\njudgement.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 26 Jul 2023 15:17:06 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [feature]COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all\n columns"
},
{
"msg_contents": "HI,\n\n> I've looked at this patch and it looks mostly fine, though I do not\n> intend to commit it myself; perhaps Andrew will.\n\nHI, Amit, thanks for review.\n\n> \n> A few minor things to improve:\n> \n> + If <literal>*</literal> is specified, it will be applied in all columns.\n> ...\n> + If <literal>*</literal> is specified, it will be applied in all columns.\n> \n> Please write \"it will be applied in\" as \"the option will be applied to\".\n\n+1\n\n> \n> + bool force_notnull_all; /* FORCE_NOT_NULL * */\n> ...\n> + bool force_null_all; /* FORCE_NULL * */\n> \n> Like in the comment for force_quote, please add a \"?\" after * in the\n> above comments.\n\n+1\n\n> \n> + if (cstate->opts.force_notnull_all)\n> + MemSet(cstate->opts.force_notnull_flags, true, num_phys_attrs\n> * sizeof(bool));\n> ...\n> + if (cstate->opts.force_null_all)\n> + MemSet(cstate->opts.force_null_flags, true, num_phys_attrs *\n> sizeof(bool));\n> \n> While I am not especially opposed to using this 1-line variant to set\n> the flags array, it does mean that there are now different styles\n> being used for similar code, because force_quote_flags uses a for\n> loop:\n> \n> if (cstate->opts.force_quote_all)\n> {\n> int i;\n> \n> for (i = 0; i < num_phys_attrs; i++)\n> cstate->opts.force_quote_flags[i] = true;\n> }\n> \n> Perhaps we could fix the inconsistency by changing the force_quote_all\n> code to use MemSet() too. 
I'll defer whether to do that to Andrew's\n> judgement.\n\nSure, let’s wait for Andrew and I will put everything in one pot then.\n\nZhang Mingli\nhttps://www.hashdata.xyz\n\n\nHI,I've looked at this patch and it looks mostly fine, though I do notintend to commit it myself; perhaps Andrew will.HI, Amit, thanks for review.A few minor things to improve:+ If <literal>*</literal> is specified, it will be applied in all columns....+ If <literal>*</literal> is specified, it will be applied in all columns.Please write \"it will be applied in\" as \"the option will be applied to\".+1+ bool force_notnull_all; /* FORCE_NOT_NULL * */...+ bool force_null_all; /* FORCE_NULL * */Like in the comment for force_quote, please add a \"?\" after * in theabove comments.+1+ if (cstate->opts.force_notnull_all)+ MemSet(cstate->opts.force_notnull_flags, true, num_phys_attrs* sizeof(bool));...+ if (cstate->opts.force_null_all)+ MemSet(cstate->opts.force_null_flags, true, num_phys_attrs *sizeof(bool));While I am not especially opposed to using this 1-line variant to setthe flags array, it does mean that there are now different stylesbeing used for similar code, because force_quote_flags uses a forloop: if (cstate->opts.force_quote_all) { int i; for (i = 0; i < num_phys_attrs; i++) cstate->opts.force_quote_flags[i] = true; }Perhaps we could fix the inconsistency by changing the force_quote_allcode to use MemSet() too. I'll defer whether to do that to Andrew'sjudgement.Sure, let’s wait for Andrew and I will put everything in one pot then.\nZhang Minglihttps://www.hashdata.xyz",
"msg_date": "Wed, 26 Jul 2023 15:03:01 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [feature]COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all\n columns"
},
{
"msg_contents": "On 2023-07-26 We 03:03, Zhang Mingli wrote:\n> HI,\n>\n>> I've looked at this patch and it looks mostly fine, though I do not\n>> intend to commit it myself; perhaps Andrew will.\n>\n> HI, Amit, thanks for review.\n>\n>>\n>> A few minor things to improve:\n>>\n>> + If <literal>*</literal> is specified, it will be applied in \n>> all columns.\n>> ...\n>> + If <literal>*</literal> is specified, it will be applied in \n>> all columns.\n>>\n>> Please write \"it will be applied in\" as \"the option will be applied to\".\n>\n> +1\n>\n>>\n>> + bool force_notnull_all; /* FORCE_NOT_NULL * */\n>> ...\n>> + bool force_null_all; /* FORCE_NULL * */\n>>\n>> Like in the comment for force_quote, please add a \"?\" after * in the\n>> above comments.\n>\n> +1\n>\n>>\n>> + if (cstate->opts.force_notnull_all)\n>> + MemSet(cstate->opts.force_notnull_flags, true, num_phys_attrs\n>> * sizeof(bool));\n>> ...\n>> + if (cstate->opts.force_null_all)\n>> + MemSet(cstate->opts.force_null_flags, true, num_phys_attrs *\n>> sizeof(bool));\n>>\n>> While I am not especially opposed to using this 1-line variant to set\n>> the flags array, it does mean that there are now different styles\n>> being used for similar code, because force_quote_flags uses a for\n>> loop:\n>>\n>> if (cstate->opts.force_quote_all)\n>> {\n>> int i;\n>>\n>> for (i = 0; i < num_phys_attrs; i++)\n>> cstate->opts.force_quote_flags[i] = true;\n>> }\n>>\n>> Perhaps we could fix the inconsistency by changing the force_quote_all\n>> code to use MemSet() too. I'll defer whether to do that to Andrew's\n>> judgement.\n>\n> Sure, let’s wait for Andrew and I will put everything in one pot then.\n>\n>\n\nI was hoping it be able to get to it today but that's not happening. If \nyou want to submit a revised patch as above that will be good. 
I hope to \nget to it later this week.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-07-26 We 03:03, Zhang Mingli\n wrote:\n\n\n\n HI,\n\nI've looked at this patch and it looks\n mostly fine, though I do not\n intend to commit it myself; perhaps Andrew will.\n\n\n HI, Amit, thanks for review.\n\n\n A few minor things to improve:\n\n + If <literal>*</literal> is specified, it will\n be applied in all columns.\n ...\n + If <literal>*</literal> is specified, it will\n be applied in all columns.\n\n Please write \"it will be applied in\" as \"the option will be\n applied to\".\n\n\n +1\n\n\n + bool force_notnull_all; /* FORCE_NOT_NULL * */\n ...\n + bool force_null_all; /* FORCE_NULL * */\n\n Like in the comment for force_quote, please add a \"?\" after * in\n the\n above comments.\n\n\n +1\n\n\n + if (cstate->opts.force_notnull_all)\n + MemSet(cstate->opts.force_notnull_flags, true,\n num_phys_attrs\n * sizeof(bool));\n ...\n + if (cstate->opts.force_null_all)\n + MemSet(cstate->opts.force_null_flags, true,\n num_phys_attrs *\n sizeof(bool));\n\n While I am not especially opposed to using this 1-line variant\n to set\n the flags array, it does mean that there are now different\n styles\n being used for similar code, because force_quote_flags uses a\n for\n loop:\n\n if (cstate->opts.force_quote_all)\n {\n int i;\n\n for (i = 0; i < num_phys_attrs; i++)\n cstate->opts.force_quote_flags[i] = true;\n }\n\n Perhaps we could fix the inconsistency by changing the\n force_quote_all\n code to use MemSet() too. I'll defer whether to do that to\n Andrew's\n judgement.\n\n\n Sure, let’s wait for Andrew and I will put everything in one pot\n then.\n\n\n\n\n\nI was hoping it be able to get to it today but that's not\n happening. If you want to submit a revised patch as above that\n will be good. I hope to get to it later this week.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 31 Jul 2023 15:35:09 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [feature]COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all\n columns"
},
{
"msg_contents": "> On Aug 1, 2023, at 03:35, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> I was hoping it be able to get to it today but that's not happening. If you want to submit a revised patch as above that will be good. I hope to get to it later this week.\n\nHI, Andrew \n\nPatch rebased and updated like above, thanks.\n\n\nZhang Mingli\nhttps://www.hashdata.xyz",
"msg_date": "Tue, 1 Aug 2023 08:46:15 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [feature]COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all\n columns"
},
{
"msg_contents": "On 2023-07-31 Mo 20:46, Zhang Mingli wrote:\n>\n>\n>> On Aug 1, 2023, at 03:35, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> I was hoping it be able to get to it today but that's not happening. \n>> If you want to submit a revised patch as above that will be good. I \n>> hope to get to it later this week.\n>\n> HI, Andrew\n>\n> Patch rebased and updated like above, thanks.\n\n\nPushed at last, thanks.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-07-31 Mo 20:46, Zhang Mingli\n wrote:\n\n\n\n\n\n\nOn Aug 1, 2023, at 03:35, Andrew Dunstan\n <andrew@dunslane.net> wrote:\n\nI was hoping it be able to get to it today\n but that's not happening. If you want to submit a revised\n patch as above that will be good. I hope to get to it\n later this week.\n\n\n\nHI, Andrew \n\n\nPatch rebased and updated like above, thanks.\n\n\n\nPushed at last, thanks.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 30 Sep 2023 12:49:31 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [feature]COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all\n columns"
}
] |
[
{
"msg_contents": "Hi all,\n\nI've just closed out the July commitfest. I'll be working to clear out\nall remaining active patches today.\n\nFinal statistics:\n\n Needs review: 142\n Waiting on Author: 44\n Ready for Committer: 19\n Committed: 76\n Moved to next CF: 6\n Returned with Feedback: 7\n Rejected: 3\n Withdrawn: 11\n --\n Total: 308\n\nOver the course of the month, 55 additional entries were committed;\nsince March, that's 76 entries committed from 64 authors. There were of\ncourse many more threads that made progress thanks to reviewers'\ncontributions and authors' efforts -- it's just harder for me to track\nthose numbers.\n\nThanks to all for your contributions and reviews!\n\n--Jacob\n\n\n",
"msg_date": "Mon, 1 Aug 2022 08:40:18 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "[Commitfest 2022-07] is Done!"
},
{
"msg_contents": "Jacob Champion <jchampion@timescale.com> writes:\n> I've just closed out the July commitfest. I'll be working to clear out\n> all remaining active patches today.\n\nThanks for all your hard work on this! An active CFM really makes\nthings work better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Aug 2022 12:44:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] is Done!"
},
{
"msg_contents": "On 2022-Aug-01, Tom Lane wrote:\n\n> Jacob Champion <jchampion@timescale.com> writes:\n> > I've just closed out the July commitfest. I'll be working to clear out\n> > all remaining active patches today.\n> \n> Thanks for all your hard work on this! An active CFM really makes\n> things work better.\n\nAgreed, great work here.\n\nI hate to suggest even more work, but it would be excellent to have some\nsort of write-up of what you did here, and have it supplant the obsolete\ntext currently in the canonical commifest wiki page,\nhttps://wiki.postgresql.org/wiki/CommitFest_Checklist\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 1 Aug 2022 19:33:51 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] is Done!"
},
{
"msg_contents": "On Mon, Aug 1, 2022 at 10:34 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Aug-01, Tom Lane wrote:\n> > Thanks for all your hard work on this! An active CFM really makes\n> > things work better.\n>\n> Agreed, great work here.\n\nThanks, both of you!\n\n> I hate to suggest even more work, but it would be excellent to have some\n> sort of write-up of what you did here, and have it supplant the obsolete\n> text currently in the canonical commifest wiki page,\n> https://wiki.postgresql.org/wiki/CommitFest_Checklist\n\nSure, I think I can do that. I'll be writing up my experiences for\nTimescale later this month. So maybe I could remove (or else clearly\nmark) the things that are personal opinions only, keep the more\nprocedural notes, and use that as the base for a rework of the wiki?\n\n--Jacob\n\n\n",
"msg_date": "Mon, 1 Aug 2022 16:03:11 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [Commitfest 2022-07] is Done!"
},
{
"msg_contents": "On 8/1/22 08:40, Jacob Champion wrote:\n> I've just closed out the July commitfest. I'll be working to clear out\n> all remaining active patches today.\n\"Today\" was slightly optimistic. I'm down to the final stretch of forty\npatches; I'll come back to those tomorrow with fresh eyes.\n\n--Jacob\n\n\n",
"msg_date": "Mon, 1 Aug 2022 16:08:35 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [Commitfest 2022-07] is Done!"
},
{
"msg_contents": "On 2022-Aug-01, Jacob Champion wrote:\n\n> On Mon, Aug 1, 2022 at 10:34 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > I hate to suggest even more work, but it would be excellent to have some\n> > sort of write-up of what you did here, and have it supplant the obsolete\n> > text currently in the canonical commifest wiki page,\n> > https://wiki.postgresql.org/wiki/CommitFest_Checklist\n> \n> Sure, I think I can do that. I'll be writing up my experiences for\n> Timescale later this month. So maybe I could remove (or else clearly\n> mark) the things that are personal opinions only, keep the more\n> procedural notes, and use that as the base for a rework of the wiki?\n\nI don't think there's anything in that page that I would keep. If you\ndid read this page and follow some part of it while running this CF, by\nall means keep that; but otherwise I think it may be better to start\nafresh.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Nunca se desea ardientemente lo que solo se desea por razón\" (F. Alexandre)\n\n\n",
"msg_date": "Tue, 2 Aug 2022 08:44:04 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] is Done!"
},
{
"msg_contents": "On 8/1/22 16:08, Jacob Champion wrote:\n> \"Today\" was slightly optimistic. I'm down to the final stretch of forty\n> patches; I'll come back to those tomorrow with fresh eyes.\n\nAll right, every entry from July has been closed out or moved! Apologies\nfor dropping entries from cfbot temporarily; that should all be fixed\nnow (and I've made a note in the wiki for the next CFM).\n\nWe closed roughly 40% of the July entries, and decreased the volume of\nentries in the next CF by almost a hundred, which I think we should all\nfeel pretty good about. (Hopefully many of the Returned patches will\ncome back refreshed, but also hopefully that will be balanced out by\ncommits from the RfC queue.)\n\nThanks for having me as CFM!\n\n--Jacob\n\n\n",
"msg_date": "Tue, 2 Aug 2022 15:31:14 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [Commitfest 2022-07] is Done!"
},
{
"msg_contents": "On Tue, Aug 02, 2022 at 03:31:14PM -0700, Jacob Champion wrote:\n> Apologies for dropping entries from cfbot temporarily; that should all be\n> fixed now (and I've made a note in the wiki for the next CFM).\n\nActually, I think that might happen every CF, no matter what you do.\n\ncfbot indexes on (commitfest_id, submission_id). It's a known deficiency, and\nI was thinking of sending a partial patch the next time I look at it.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 2 Aug 2022 17:53:25 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] is Done!"
},
{
"msg_contents": "On Tue, Aug 02, 2022 at 08:44:04AM +0200, Alvaro Herrera wrote:\n> On 2022-Aug-01, Jacob Champion wrote:\n> \n> > On Mon, Aug 1, 2022 at 10:34 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> > > I hate to suggest even more work, but it would be excellent to have some\n> > > sort of write-up of what you did here, and have it supplant the obsolete\n> > > text currently in the canonical commifest wiki page,\n> > > https://wiki.postgresql.org/wiki/CommitFest_Checklist\n> > \n> > Sure, I think I can do that. I'll be writing up my experiences for\n> > Timescale later this month. So maybe I could remove (or else clearly\n> > mark) the things that are personal opinions only, keep the more\n> > procedural notes, and use that as the base for a rework of the wiki?\n> \n> I don't think there's anything in that page that I would keep. If you\n> did read this page and follow some part of it while running this CF, by\n> all means keep that; but otherwise I think it may be better to start\n> afresh.\n\n+1, I had the same feeling last time (1).\n\n[1] https://www.postgresql.org/message-id/20220202175656.5zacgx3ucrquvi35@jrouhaud\n\n\n",
"msg_date": "Wed, 3 Aug 2022 09:30:11 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] is Done!"
}
] |
[
{
"msg_contents": "\"A mathematical catastrophe is a point in a model of an input-output\nsystem, where a vanishingly small change in the input can produce a\nlarge change in the output.\"\n\nWe have just such a change in Postgres: when a snapshot overflows. In\nthis case it takes only one subxid over the subxid cache limit to slow\ndown every request in XidInMVCCSnapshot(), which becomes painful when\na long running transaction exists at the same time. This situation has\nbeen noted by various bloggers, but is illustrated clearly in the\nattached diagram, generated by test results from Julien Tachoires.\n\nThe reason for the slowdown is clear: when we overflow we check every\nxid against subtrans, producing a large stream of lookups. Some\nprevious hackers have tried to speed up subtrans - this patch takes a\ndifferent approach: remove as many subtrans lookups as possible. (So\nis not competing with those other solutions).\n\nAttached patch improves on the situation, as also shown in the attached diagram.\n\nThe patch does these things:\n\n1. Rework XidInMVCCSnapshot() so that it always checks the snapshot\nfirst, before attempting to lookup subtrans. A related change means\nthat we always keep full subxid info in the snapshot, even if one of\nthe backends has overflowed.\n\n2. Use binary search for standby snapshots, since the snapshot subxip\nis in sorted order.\n\n3. Rework GetTopmostTransaction so that it a) checks xmin as it goes,\nb) only does one iteration on standby snapshots, both of which save\nsubtrans lookups in appropriate cases.\n(This was newly added in v6)\n\nNow, is this a panacea? Not at all. What this patch does is smooth out\nthe catastrophic effect so that a few overflowed subxids don't spoil\neverybody else's performance, but eventually, if many or all sessions\nhave their overflowed subxid caches then the performance will descend\nas before, albeit that the attached patch has some additional\noptimizations (2, 3 above). 
So what this gives is a better flight\nenvelope in case of a small number of occasional overflows.\n\nPlease review. Thank you.\n\n--\nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Mon, 1 Aug 2022 17:42:49 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Smoothing the subtrans performance catastrophe"
},
{
"msg_contents": "On Mon, Aug 1, 2022 at 10:13 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> \"A mathematical catastrophe is a point in a model of an input-output\n> system, where a vanishingly small change in the input can produce a\n> large change in the output.\"\n>\n> We have just such a change in Postgres: when a snapshot overflows. In\n> this case it takes only one subxid over the subxid cache limit to slow\n> down every request in XidInMVCCSnapshot(), which becomes painful when\n> a long running transaction exists at the same time. This situation has\n> been noted by various bloggers, but is illustrated clearly in the\n> attached diagram, generated by test results from Julien Tachoires.\n>\n> The reason for the slowdown is clear: when we overflow we check every\n> xid against subtrans, producing a large stream of lookups. Some\n> previous hackers have tried to speed up subtrans - this patch takes a\n> different approach: remove as many subtrans lookups as possible. (So\n> is not competing with those other solutions).\n>\n> Attached patch improves on the situation, as also shown in the attached diagram.\n>\n> The patch does these things:\n>\n> 1. Rework XidInMVCCSnapshot() so that it always checks the snapshot\n> first, before attempting to lookup subtrans. A related change means\n> that we always keep full subxid info in the snapshot, even if one of\n> the backends has overflowed.\n>\n> 2. Use binary search for standby snapshots, since the snapshot subxip\n> is in sorted order.\n>\n> 3. Rework GetTopmostTransaction so that it a) checks xmin as it goes,\n> b) only does one iteration on standby snapshots, both of which save\n> subtrans lookups in appropriate cases.\n> (This was newly added in v6)\n>\n> Now, is this a panacea? Not at all. 
What this patch does is smooth out\n> the catastrophic effect so that a few overflowed subxids don't spoil\n> everybody else's performance, but eventually, if many or all sessions\n> have their overflowed subxid caches then the performance will descend\n> as before, albeit that the attached patch has some additional\n> optimizations (2, 3 above). So what this gives is a better flight\n> envelope in case of a small number of occasional overflows.\n>\n> Please review. Thank you.\n\n+1,\nI had a quick look into the patch to understand the idea and I think\nthe idea looks really promising to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Aug 2022 17:25:42 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Smoothing the subtrans performance catastrophe"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-01 17:42:49 +0100, Simon Riggs wrote:\n> The reason for the slowdown is clear: when we overflow we check every\n> xid against subtrans, producing a large stream of lookups. Some\n> previous hackers have tried to speed up subtrans - this patch takes a\n> different approach: remove as many subtrans lookups as possible. (So\n> is not competing with those other solutions).\n> \n> Attached patch improves on the situation, as also shown in the attached diagram.\n\nI think we should consider redesigning subtrans more substantially - even with\nthe changes you propose here, there's still plenty ways to hit really bad\nperformance. And there's only so much we can do about that without more\nfundamental design changes.\n\nOne way to fix a lot of the issues around pg_subtrans would be remove the\npg_subtrans SLRU and replace it with a purely in-memory hashtable. IMO there's\nreally no good reason to use an SLRU for it (anymore).\n\nIn contrast to e.g. clog or multixact we don't need to access a lot of old\nentries, we don't need persistency etc. Nor is it a good use of memory and IO\nto have loads of pg_subtrans pages that don't point anywhere, because the xid\nis just a \"normal\" xid.\n\nWhile we can't put a useful hard cap on the number of potential subtrans\nentries (we can only throw subxid->parent mappings away once no existing\nsnapshot might need them), saying that there can't be more subxids \"considered\nrunning\" at a time than can fit in memory doesn't seem like a particularly\nproblematic restriction.\n\nSo, why don't we use a dshash table with some amount of statically allocated\nmemory for the mapping? In common cases that will *reduce* memory usage\n(because we don't need to reserve space for [as many] subxids in snapshots /\nprocarray anymore) and IO (no mostly-zeroes pg_subtrans).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 Aug 2022 12:18:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Smoothing the subtrans performance catastrophe"
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 3:18 PM Andres Freund <andres@anarazel.de> wrote:\n> In contrast to e.g. clog or multixact we don't need to access a lot of old\n> While we can't put a useful hard cap on the number of potential subtrans\n> entries (we can only throw subxid->parent mappings away once no existing\n> snapshot might need them), saying that there can't be more subxids \"considered\n> running\" at a time than can fit in memory doesn't seem like a particularly\n> problematic restriction.\n\nThat sounds really problematic to me, unless I misunderstand what\nyou're proposing. Say I have a plpgsql containing a FOR loop which in\nturn contains an EXCEPTION block which in turn does DML. Right now,\nthat loop could iterate millions of times and everything would still\nwork. Sure, there might be performance impacts depending on what else\nis happening on the system, but it might also be totally fine. IIUC,\nyou'd like to make that case fail outright. I think that's a\nnon-starter.\n\nI don't know whether Simon's ideas here are amazingly good, utterly\nterrible, or something in between, but I think we can evaluate the\npatch actually submitted rather than propose a complete redesign of\nthe entire mechanism - especially one that seems like it would break\nstuff that currently works.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Aug 2022 15:36:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Smoothing the subtrans performance catastrophe"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-03 15:36:40 -0400, Robert Haas wrote:\n> On Wed, Aug 3, 2022 at 3:18 PM Andres Freund <andres@anarazel.de> wrote:\n> > In contrast to e.g. clog or multixact we don't need to access a lot of old\n> > While we can't put a useful hard cap on the number of potential subtrans\n> > entries (we can only throw subxid->parent mappings away once no existing\n> > snapshot might need them), saying that there can't be more subxids \"considered\n> > running\" at a time than can fit in memory doesn't seem like a particularly\n> > problematic restriction.\n>\n> That sounds really problematic to me, unless I misunderstand what\n> you're proposing. Say I have a plpgsql containing a FOR loop which in\n> turn contains an EXCEPTION block which in turn does DML. Right now,\n> that loop could iterate millions of times and everything would still\n> work. Sure, there might be performance impacts depending on what else\n> is happening on the system, but it might also be totally fine. IIUC,\n> you'd like to make that case fail outright. I think that's a\n> non-starter.\n\nI don't think this scenario would fundamentally change - we already keep the\nset of subxids in backend local memory (e.g. either a dedicated\nTransactionStateData or an element in ->childXids) and in the locking table\n(via XactLockTableInsert()).\n\nThe problematic part isn't keeping \"actually\" running subxids in memory, but\nkeeping subxids that might be \"considered running\" in memory (i.e. subxids\nthat are considered running by an old snapshot in another backend).\n\nA hashtable just containing child->parent mapping for subxids doesn't actually\nneed that much memory. It'd be approximately (2 * 4 bytes) * subxids *\n(2-fillfactor) or such? So maybe ~10MB for 1 milllion subxids? 
Allocating\nthat on-demand doesn't strike me as prohibitive.\n\n\n> I don't know whether Simon's ideas here are amazingly good, utterly\n> terrible, or something in between, but I think we can evaluate the\n> patch actually submitted rather than propose a complete redesign of\n> the entire mechanism - especially one that seems like it would break\n> stuff that currently works.\n\nWe've had quite a few patches that try to address issues around subids, but\nonly ever improve things on the margins. I'm doubtful that's a useful use of\ntime.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 Aug 2022 13:14:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Smoothing the subtrans performance catastrophe"
},
{
"msg_contents": "On Wed, 3 Aug 2022 at 20:18, Andres Freund <andres@anarazel.de> wrote:\n\n> On 2022-08-01 17:42:49 +0100, Simon Riggs wrote:\n> > The reason for the slowdown is clear: when we overflow we check every\n> > xid against subtrans, producing a large stream of lookups. Some\n> > previous hackers have tried to speed up subtrans - this patch takes a\n> > different approach: remove as many subtrans lookups as possible. (So\n> > is not competing with those other solutions).\n> >\n> > Attached patch improves on the situation, as also shown in the attached diagram.\n>\n> I think we should consider redesigning subtrans more substantially - even with\n> the changes you propose here, there's still plenty ways to hit really bad\n> performance. And there's only so much we can do about that without more\n> fundamental design changes.\n\nI completely agree - you will be glad to hear that I've been working\non a redesign of the subtrans module.\n\nBut we should be clear that redesigning subtrans has nothing to do\nwith this patch; they are separate ideas and this patch relates to\nXidInMVCCSnapshot(), an important caller of subtrans.\n\nI will post my patch, when complete, in a different thread.\n\n> One way to fix a lot of the issues around pg_subtrans would be remove the\n> pg_subtrans SLRU and replace it with a purely in-memory hashtable. IMO there's\n> really no good reason to use an SLRU for it (anymore).\n>\n> In contrast to e.g. clog or multixact we don't need to access a lot of old\n> entries, we don't need persistency etc. 
Nor is it a good use of memory and IO\n> to have loads of pg_subtrans pages that don't point anywhere, because the xid\n> is just a \"normal\" xid.\n>\n> While we can't put a useful hard cap on the number of potential subtrans\n> entries (we can only throw subxid->parent mappings away once no existing\n> snapshot might need them), saying that there can't be more subxids \"considered\n> running\" at a time than can fit in memory doesn't seem like a particularly\n> problematic restriction.\n\nI do agree that sometimes it is easier to impose restrictions than to\ntry to provide unbounded resources.\n\nHaving said that, I can't see an easy way of making that work well in\npractice for this case. Making write transactions just suddenly stop\nworking at some point doesn't sound like it would be good for\navailability, especially when it happens sporadically and\nunpredictably as that would, whenever long running transactions appear\nalongside users of subtransactions.\n\n> So, why don't we use a dshash table with some amount of statically allocated\n> memory for the mapping? In common cases that will *reduce* memory usage\n> (because we don't need to reserve space for [as many] subxids in snapshots /\n> procarray anymore) and IO (no mostly-zeroes pg_subtrans).\n\nI considered this and have ruled it out, but as I said above, we can\ndiscuss that on a different thread.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 4 Aug 2022 13:11:25 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Smoothing the subtrans performance catastrophe"
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 4:14 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't think this scenario would fundamentally change - we already keep the\n> set of subxids in backend local memory (e.g. either a dedicated\n> TransactionStateData or an element in ->childXids) and in the locking table\n> (via XactLockTableInsert()).\n\nSure....\n\n> The problematic part isn't keeping \"actually\" running subxids in memory, but\n> keeping subxids that might be \"considered running\" in memory (i.e. subxids\n> that are considered running by an old snapshot in another backend).\n>\n> A hashtable just containing child->parent mapping for subxids doesn't actually\n> need that much memory. It'd be approximately (2 * 4 bytes) * subxids *\n> (2-fillfactor) or such? So maybe ~10MB for 1 milllion subxids? Allocating\n> that on-demand doesn't strike me as prohibitive.\n\nI mean the worst case is ~2 bn, no?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 4 Aug 2022 09:17:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Smoothing the subtrans performance catastrophe"
},
{
"msg_contents": "On Thu, 4 Aug 2022 at 13:11, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> I will post my patch, when complete, in a different thread.\n\nTo avoid confusion, I will withdraw this patch from the CF, in favour\nof my other patch on a similar topic,\nMinimizing calls to SubTransSetParent()\nhttps://commitfest.postgresql.org/39/3806/.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 6 Sep 2022 09:36:48 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Smoothing the subtrans performance catastrophe"
},
{
"msg_contents": "Hello Simon,\r\n\r\nCan you please provide the test cases that you used to plot the performance\r\ngraph you've attached.\r\nAlso do you know if your optimization will be useful for a very large amount\r\nof subtransactions per transaction (around 1000)?\r\n\r\n-- \r\nThanks,\r\nYaroslav\r\n",
"msg_date": "Tue, 21 Mar 2023 06:51:42 +0000",
"msg_from": "Kurlaev Jaroslav <j.kurlaev@cft.ru>",
"msg_from_op": false,
"msg_subject": "RE: Smoothing the subtrans performance catastrophe"
}
] |
[
{
"msg_contents": "Recent typos...",
"msg_date": "Mon, 1 Aug 2022 20:04:54 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "fix typos"
},
{
"msg_contents": "On Mon, Aug 01, 2022 at 08:04:54PM +0200, Erik Rijkers wrote:\n> Recent typos...\n\nLGTM, thanks.\n\nHere are some others I've been sitting on, mostly in .c files.\n\n-- \nJustin",
"msg_date": "Mon, 1 Aug 2022 13:11:36 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fix typos"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 1:05 AM Erik Rijkers <er@xs4all.nl> wrote:\n>\n> Recent typos...\n\nThe part of the sentence inside parentheses is not clear to me, before or\nafter the patch:\n\n Dropping an extension causes its component objects, and other explicitly\n dependent routines (see <xref linkend=\"sql-alterroutine\"/>,\n- the depends on extension action), to be dropped as well.\n+ that depend on extension action), to be dropped as well.\n </para>\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Tue, Aug 2, 2022 at 1:05 AM Erik Rijkers <er@xs4all.nl> wrote:>> Recent typos...The part of the sentence inside parentheses is not clear to me, before or after the patch: Dropping an extension causes its component objects, and other explicitly dependent routines (see <xref linkend=\"sql-alterroutine\"/>,- the depends on extension action), to be dropped as well.+ that depend on extension action), to be dropped as well. </para>--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 2 Aug 2022 12:28:21 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: fix typos"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 1:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Here are some others I've been sitting on, mostly in .c files.\n\n0002:\nweird since c91560defc57f89f7e88632ea14ae77b5cec78ee\n\nIt was weird long before that, maybe we should instead change most of those\ntabs in the top comment to single space, as is customary? The rest LGTM.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Tue, Aug 2, 2022 at 1:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:>> Here are some others I've been sitting on, mostly in .c files.0002:weird since c91560defc57f89f7e88632ea14ae77b5cec78eeIt was weird long before that, maybe we should instead change most of those tabs in the top comment to single space, as is customary? The rest LGTM.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 2 Aug 2022 12:52:53 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: fix typos"
},
{
"msg_contents": "Op 02-08-2022 om 07:28 schreef John Naylor:\n> \n> On Tue, Aug 2, 2022 at 1:05 AM Erik Rijkers <er@xs4all.nl \n> <mailto:er@xs4all.nl>> wrote:\n> >\n> > Recent typos...\n> \n> The part of the sentence inside parentheses is not clear to me, before \n> or after the patch:\n> \n> Dropping an extension causes its component objects, and other \n> explicitly\n> dependent routines (see <xref linkend=\"sql-alterroutine\"/>,\n> - the depends on extension action), to be dropped as well.\n> + that depend on extension action), to be dropped as well.\n> </para>\n> \n\nHm, I see what you mean, I did not notice that earlier and I won't make \na guess as to intention. Maybe Bruce can have another look? (commit \n5fe2d4c56e)\n\n\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n\n\n",
"msg_date": "Tue, 2 Aug 2022 10:32:15 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: fix typos"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 4:32 AM Erik Rijkers <er@xs4all.nl> wrote:\n> > The part of the sentence inside parentheses is not clear to me, before\n> > or after the patch:\n> >\n> > Dropping an extension causes its component objects, and other\n> > explicitly\n> > dependent routines (see <xref linkend=\"sql-alterroutine\"/>,\n> > - the depends on extension action), to be dropped as well.\n> > + that depend on extension action), to be dropped as well.\n> > </para>\n> >\n>\n> Hm, I see what you mean, I did not notice that earlier and I won't make\n> a guess as to intention. Maybe Bruce can have another look? (commit\n> 5fe2d4c56e)\n\nI think that it's talking about this (documented) syntax:\n\nALTER ROUTINE name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ]\n [ NO ] DEPENDS ON EXTENSION extension_name\n\nSo the change from \"depends\" to \"depend\" here is incorrect. Maybe we\ncan say something like:\n\nthe <literal>DEPENDS ON EXTENSION\n<replaceable>extension_name</replaceable><literal> action\n\n(I haven't tested whether this markup works.)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Aug 2022 12:41:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix typos"
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 11:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> I think that it's talking about this (documented) syntax:\n>\n> ALTER ROUTINE name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ]\n> [ NO ] DEPENDS ON EXTENSION extension_name\n>\n> So the change from \"depends\" to \"depend\" here is incorrect. Maybe we\n> can say something like:\n>\n> the <literal>DEPENDS ON EXTENSION\n> <replaceable>extension_name</replaceable><literal> action\n>\n> (I haven't tested whether this markup works.)\n\nMakes sense, I'll go make it happen.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Aug 3, 2022 at 11:41 PM Robert Haas <robertmhaas@gmail.com> wrote:>> I think that it's talking about this (documented) syntax:>> ALTER ROUTINE name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ]> [ NO ] DEPENDS ON EXTENSION extension_name>> So the change from \"depends\" to \"depend\" here is incorrect. Maybe we> can say something like:>> the <literal>DEPENDS ON EXTENSION> <replaceable>extension_name</replaceable><literal> action>> (I haven't tested whether this markup works.)Makes sense, I'll go make it happen.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 4 Aug 2022 15:08:59 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: fix typos"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 1:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Aug 01, 2022 at 08:04:54PM +0200, Erik Rijkers wrote:\n> > Recent typos...\n>\n> LGTM, thanks.\n>\n> Here are some others I've been sitting on, mostly in .c files.\n\nI pushed Robert's suggestion, then pushed the rest of Erik's changes and\ntwo of Justin's. For Justin's 0004:\n\n--- a/src/backend/replication/logical/origin.c\n+++ b/src/backend/replication/logical/origin.c\n@@ -364,7 +364,7 @@ restart:\n if (nowait)\n ereport(ERROR,\n (errcode(ERRCODE_OBJECT_IN_USE),\n- errmsg(\"could not drop replication origin with OID %d, in use by PID %d\",\n+ errmsg(\"could not drop replication origin with OID %u, in use by PID %d\",\n\nRepOriginId is a typedef for uint16, so this can't print the wrong answer,\nbut it is inconsistent with other uses. So it seems we don't need to\nbackpatch this one?\n\nFor patch 0002, the whitespace issue in the top comment in inval.c, I'm\ninclined to just change all the out-of-place tabs in a single commit, so we\ncan add that to the list of whitespace commits.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Tue, Aug 2, 2022 at 1:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:>> On Mon, Aug 01, 2022 at 08:04:54PM +0200, Erik Rijkers wrote:> > Recent typos...>> LGTM, thanks.>> Here are some others I've been sitting on, mostly in .c files.I pushed Robert's suggestion, then pushed the rest of Erik's changes and two of Justin's. 
For Justin's 0004:--- a/src/backend/replication/logical/origin.c+++ b/src/backend/replication/logical/origin.c@@ -364,7 +364,7 @@ restart: \t\t\t\tif (nowait) \t\t\t\t\tereport(ERROR, \t\t\t\t\t\t\t(errcode(ERRCODE_OBJECT_IN_USE),-\t\t\t\t\t\t\t errmsg(\"could not drop replication origin with OID %d, in use by PID %d\",+\t\t\t\t\t\t\t errmsg(\"could not drop replication origin with OID %u, in use by PID %d\",RepOriginId is a typedef for uint16, so this can't print the wrong answer, but it is inconsistent with other uses. So it seems we don't need to backpatch this one?For patch 0002, the whitespace issue in the top comment in inval.c, I'm inclined to just change all the out-of-place tabs in a single commit, so we can add that to the list of whitespace commits.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 4 Aug 2022 17:21:38 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: fix typos"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Tue, Aug 2, 2022 at 1:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> ereport(ERROR,\n> (errcode(ERRCODE_OBJECT_IN_USE),\n> - errmsg(\"could not drop replication origin with OID %d, in use by PID %d\",\n> + errmsg(\"could not drop replication origin with OID %u, in use by PID %d\",\n\n> RepOriginId is a typedef for uint16, so this can't print the wrong answer,\n> but it is inconsistent with other uses. So it seems we don't need to\n> backpatch this one?\n\nUm ... if it's int16, then it can't be an OID, so I'd say this message has\nfar worse problems than %d vs %u. It should not use that terminology.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Aug 2022 09:41:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fix typos"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 8:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@enterprisedb.com> writes:\n\n> > RepOriginId is a typedef for uint16, so this can't print the wrong\nanswer,\n> > but it is inconsistent with other uses. So it seems we don't need to\n> > backpatch this one?\n>\n> Um ... if it's int16, then it can't be an OID, so I'd say this message has\n> far worse problems than %d vs %u. It should not use that terminology.\n\nThe catalog has the following. Since it's not a real oid, maybe this column\nshould be rethought?\n\nCATALOG(pg_replication_origin,6000,ReplicationOriginRelationId)\nBKI_SHARED_RELATION\n{\n/*\n* Locally known id that get included into WAL.\n*\n* This should never leave the system.\n*\n* Needs to fit into an uint16, so we don't waste too much space in WAL\n* records. For this reason we don't use a normal Oid column here, since\n* we need to handle allocation of new values manually.\n*/\nOid roident;\n[...]\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Aug 4, 2022 at 8:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:>> John Naylor <john.naylor@enterprisedb.com> writes:> > RepOriginId is a typedef for uint16, so this can't print the wrong answer,> > but it is inconsistent with other uses. So it seems we don't need to> > backpatch this one?>> Um ... if it's int16, then it can't be an OID, so I'd say this message has> far worse problems than %d vs %u. It should not use that terminology.The catalog has the following. Since it's not a real oid, maybe this column should be rethought? CATALOG(pg_replication_origin,6000,ReplicationOriginRelationId) BKI_SHARED_RELATION{\t/*\t * Locally known id that get included into WAL.\t *\t * This should never leave the system.\t *\t * Needs to fit into an uint16, so we don't waste too much space in WAL\t * records. 
For this reason we don't use a normal Oid column here, since\t * we need to handle allocation of new values manually.\t */\tOid\t\t\troident;[...]--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 5 Aug 2022 11:47:23 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: fix typos"
},
{
"msg_contents": "I wrote:\n\n> On Thu, Aug 4, 2022 at 8:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > John Naylor <john.naylor@enterprisedb.com> writes:\n>\n> > > RepOriginId is a typedef for uint16, so this can't print the wrong answer,\n> > > but it is inconsistent with other uses. So it seems we don't need to\n> > > backpatch this one?\n> >\n> > Um ... if it's int16, then it can't be an OID, so I'd say this message has\n> > far worse problems than %d vs %u. It should not use that terminology.\n>\n> The catalog has the following. Since it's not a real oid, maybe this column should be rethought?\n\nThis is really a straw-man proposal, since I'm not volunteering to do\nthe work, or suggest anybody else should do the same. That being the\ncase, it seems we should just go ahead with Justin's patch for\nconsistency. Possibly we could also change the messages to say \"ID\"?\n\n> CATALOG(pg_replication_origin,6000,ReplicationOriginRelationId) BKI_SHARED_RELATION\n> {\n> /*\n> * Locally known id that get included into WAL.\n> *\n> * This should never leave the system.\n> *\n> * Needs to fit into an uint16, so we don't waste too much space in WAL\n> * records. For this reason we don't use a normal Oid column here, since\n> * we need to handle allocation of new values manually.\n> */\n> Oid roident;\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Aug 2022 13:59:30 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: fix typos"
},
{
"msg_contents": "On Fri, Aug 12, 2022, at 3:59 AM, John Naylor wrote:\n> This is really a straw-man proposal, since I'm not volunteering to do\n> the work, or suggest anybody else should do the same. That being the\n> case, it seems we should just go ahead with Justin's patch for\n> consistency. Possibly we could also change the messages to say \"ID\"?\n... or say\n\n could not drop replication origin %u, in use by PID %d\n\nAFAICS there is no \"with ID\" but there is \"with identifier\". I personally\nprefer to omit these additional words; it seems clear without them.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Fri, Aug 12, 2022, at 3:59 AM, John Naylor wrote:This is really a straw-man proposal, since I'm not volunteering to dothe work, or suggest anybody else should do the same. That being thecase, it seems we should just go ahead with Justin's patch forconsistency. Possibly we could also change the messages to say \"ID\"?... or say could not drop replication origin %u, in use by PID %dAFAICS there is no \"with ID\" but there is \"with identifier\". I personallyprefer to omit these additional words; it seems clear without them.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Fri, 12 Aug 2022 09:04:21 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: fix typos"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> This is really a straw-man proposal, since I'm not volunteering to do\n> the work, or suggest anybody else should do the same. That being the\n> case, it seems we should just go ahead with Justin's patch for\n> consistency. Possibly we could also change the messages to say \"ID\"?\n\nI'd be content if we change the user-facing messages (and documentation\nif any) to say \"ID\" not \"OID\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Aug 2022 09:55:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fix typos"
},
{
"msg_contents": "On Fri, Aug 12, 2022 at 8:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > This is really a straw-man proposal, since I'm not volunteering to do\n> > the work, or suggest anybody else should do the same. That being the\n> > case, it seems we should just go ahead with Justin's patch for\n> > consistency. Possibly we could also change the messages to say \"ID\"?\n>\n> I'd be content if we change the user-facing messages (and documentation\n> if any) to say \"ID\" not \"OID\".\n\nThe documentation has both, so it makes sense to standardize on \"ID\".\nThe messages all had oid/OID, which I changed in the attached. I think\nI got all the places.\n\nI'm thinking it's not wrong enough to confuse people, but consistency\nis good, so backpatch to v15 and no further. Does anyone want to make\na case otherwise?\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 16 Aug 2022 08:48:27 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: fix typos"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 8:48 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> On Fri, Aug 12, 2022 at 8:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > John Naylor <john.naylor@enterprisedb.com> writes:\n> > > This is really a straw-man proposal, since I'm not volunteering to do\n> > > the work, or suggest anybody else should do the same. That being the\n> > > case, it seems we should just go ahead with Justin's patch for\n> > > consistency. Possibly we could also change the messages to say \"ID\"?\n> >\n> > I'd be content if we change the user-facing messages (and documentation\n> > if any) to say \"ID\" not \"OID\".\n>\n> The documentation has both, so it makes sense to standardize on \"ID\".\n> The messages all had oid/OID, which I changed in the attached. I think\n> I got all the places.\n>\n> I'm thinking it's not wrong enough to confuse people, but consistency\n> is good, so backpatch to v15 and no further. Does anyone want to make\n> a case otherwise?\n\nThis is done.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Aug 2022 09:11:55 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: fix typos"
}
] |
[
{
"msg_contents": "This patch tweaks a some tabbing and replaces some spaces with tabs to\nimprove slightly the comment alignment in file\n'postgresql.conf.sample'\n\nPSA.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 2 Aug 2022 09:24:02 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] postgresql.conf.sample comment alignment."
},
{
"msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> This patch tweaks a some tabbing and replaces some spaces with tabs to\n> improve slightly the comment alignment in file\n> 'postgresql.conf.sample'\n\nHmm ... the parts you want to change generally look OK to me.\nI wonder if you are looking at it with tab stops set to 4 spaces\nrather than 8 spaces?\n\nWhile 4 spaces is our convention for C code, postgresql.conf\nis going to be edited by end users who almost certainly have their\neditors set up for 8 spaces, so it's going to look funny to them\nif the comments are aligned on the assumption of 4 spaces.\n\nOne idea for avoiding confusion is to legislate that we won't\nuse tabs at all in this file (which we could enforce via\n.gitattributes, I think). But that might just be making things\nequally inconvenient for everybody.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Aug 2022 20:03:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] postgresql.conf.sample comment alignment."
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 10:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > This patch tweaks a some tabbing and replaces some spaces with tabs to\n> > improve slightly the comment alignment in file\n> > 'postgresql.conf.sample'\n>\n> Hmm ... the parts you want to change generally look OK to me.\n> I wonder if you are looking at it with tab stops set to 4 spaces\n> rather than 8 spaces?\n>\n\nNo. I did fall into that 4/8 trap originally, but I definitely used\n:set tapstop=8 when modifying this file.\n\n> While 4 spaces is our convention for C code, postgresql.conf\n> is going to be edited by end users who almost certainly have their\n> editors set up for 8 spaces, so it's going to look funny to them\n> if the comments are aligned on the assumption of 4 spaces.\n>\n> One idea for avoiding confusion is to legislate that we won't\n> use tabs at all in this file (which we could enforce via\n> .gitattributes, I think). But that might just be making things\n> equally inconvenient for everybody.\n>\n\n------\nKind Regards,\nPeter Smith\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 2 Aug 2022 10:28:56 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] postgresql.conf.sample comment alignment."
},
{
"msg_contents": "On 2022-Aug-01, Tom Lane wrote:\n\n> One idea for avoiding confusion is to legislate that we won't\n> use tabs at all in this file (which we could enforce via\n> .gitattributes, I think).\n\n+1.\n\n> But that might just be making things equally inconvenient for\n> everybody.\n\nIn this situation, the only disadvantaged users are those using a\nnon-fixed-width font in their editor, but those are lost souls already.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Having your biases confirmed independently is how scientific progress is\nmade, and hence made our great society what it is today\" (Mary Gardiner)\n\n\n",
"msg_date": "Wed, 3 Aug 2022 12:58:04 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] postgresql.conf.sample comment alignment."
},
{
"msg_contents": "On Wed, Aug 03, 2022 at 12:58:04PM +0200, Alvaro Herrera wrote:\n> On 2022-Aug-01, Tom Lane wrote:\n>> One idea for avoiding confusion is to legislate that we won't\n>> use tabs at all in this file (which we could enforce via\n>> .gitattributes, I think).\n> \n> +1.\n\nThat's not the first time this 4- or 8-character tab issue is popping\nup around here, so enforcing spaces and having a rule sounds like a\ngood idea at the end.\n\n>> But that might just be making things equally inconvenient for\n>> everybody.\n> \n> In this situation, the only disadvantaged users are those using a\n> non-fixed-width font in their editor, but those are lost souls already.\n\nHaha.\n--\nMichael",
"msg_date": "Thu, 4 Aug 2022 10:09:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] postgresql.conf.sample comment alignment."
},
{
"msg_contents": "On Thu, Aug 04, 2022 at 10:09:27AM +0900, Michael Paquier wrote:\n> On Wed, Aug 03, 2022 at 12:58:04PM +0200, Alvaro Herrera wrote:\n> > On 2022-Aug-01, Tom Lane wrote:\n> >> One idea for avoiding confusion is to legislate that we won't\n> >> use tabs at all in this file (which we could enforce via\n> >> .gitattributes, I think).\n> >\n> > +1.\n>\n> That's not the first time this 4- or 8-character tab issue is popping\n> up around here, so enforcing spaces and having a rule sounds like a\n> good idea at the end.\n\n+1\n\n\n",
"msg_date": "Thu, 4 Aug 2022 10:19:39 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] postgresql.conf.sample comment alignment."
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 11:09 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Aug 03, 2022 at 12:58:04PM +0200, Alvaro Herrera wrote:\n> > On 2022-Aug-01, Tom Lane wrote:\n> >> One idea for avoiding confusion is to legislate that we won't\n> >> use tabs at all in this file (which we could enforce via\n> >> .gitattributes, I think).\n> >\n> > +1.\n>\n> That's not the first time this 4- or 8-character tab issue is popping\n> up around here, so enforcing spaces and having a rule sounds like a\n> good idea at the end.\n>\n\nWell, it was only assumed that I had probably confused 4- 8- tabs, but\nI don't think I did, so the tabbing issue did not really \"pop up\"\nhere.\n\ne.g. you can see some of the existing alignments I'd suggested\nmodifying here [1]\n- #shared_preload_libraries = '' # (change requires restart)\n- #idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is\ndisable <- (moved comments of the neighbours to keep them all aligned)\n- etc.\n\nI'm not saying replacing the tabs with spaces isn't a good idea - I\nalso agree probably it is, but that's a different problem to the\nalignments I was trying to correct with the patch\n\n------\n[1] https://github.com/postgres/postgres/blob/master/src/backend/utils/misc/postgresql.conf.sample\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 4 Aug 2022 12:42:38 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] postgresql.conf.sample comment alignment."
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 12:42:38PM +1000, Peter Smith wrote:\n> On Thu, Aug 4, 2022 at 11:09 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, Aug 03, 2022 at 12:58:04PM +0200, Alvaro Herrera wrote:\n> > > On 2022-Aug-01, Tom Lane wrote:\n> > >> One idea for avoiding confusion is to legislate that we won't\n> > >> use tabs at all in this file (which we could enforce via\n> > >> .gitattributes, I think).\n> > >\n> > > +1.\n> >\n> > That's not the first time this 4- or 8-character tab issue is popping\n> > up around here, so enforcing spaces and having a rule sounds like a\n> > good idea at the end.\n> >\n> \n> Well, it was only assumed that I had probably confused 4- 8- tabs, but\n> I don't think I did, so the tabbing issue did not really \"pop up\"\n> here.\n> \n> e.g. you can see some of the existing alignments I'd suggested\n> modifying here [1]\n> - #shared_preload_libraries = '' # (change requires restart)\n> - #idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is\n> disable <- (moved comments of the neighbours to keep them all aligned)\n> - etc.\n> \n> I'm not saying replacing the tabs with spaces isn't a good idea - I\n> also agree probably it is, but that's a different problem to the\n> alignments I was trying to correct with the patch\n\nPatch applied to master. Perhaps someday we will adjust tabs, but for\nnow, this is an improvements. I made a few small adjustments myself.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 31 Oct 2023 08:51:48 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] postgresql.conf.sample comment alignment."
},
{
"msg_contents": "On Tue, Oct 31, 2023 at 11:51 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n...\n> Patch applied to master. Perhaps someday we will adjust tabs, but for\n> now, this is an improvements. I made a few small adjustments myself.\n>\n\nI had long forgotten this old patch. Thanks for resurrecting it and pushing!\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 1 Nov 2023 11:29:43 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] postgresql.conf.sample comment alignment."
}
] |
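For what it's worth, the `.gitattributes` enforcement floated in the thread above would amount to roughly a one-line rule like the following (a sketch only; the exact `whitespace` attribute values are an assumption on my part, since the thread never settled on concrete syntax):

```
# Hypothetical rule: treat tabs used for indentation and trailing whitespace
# in the sample config as whitespace errors, so "git diff --check" flags them.
src/backend/utils/misc/postgresql.conf.sample  whitespace=tab-in-indent,trailing-space
```

Note that `tab-in-indent` only catches tabs at the start of a line; the mid-line alignment tabs that follow the comment text (the case actually discussed here) would need a stricter, custom check.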
[
{
"msg_contents": "Hi Hackers.\n\nI noticed that there are quite a few items in Chapter 28.2 \"The\nCumulative Statistics System\" [1] which have no obvious order.\n\ne.g.\n\n- The views (28.2.3 -> 28.2.23) don't seem to be in any order that I\ncould work out. Why not alphabetical?\n\n- [2] Table 2.1 \"Dynamic Statistics View\" views are not in alphabetical order?\n\n- [2] Table 2.2 \"Collected Statistics View\" views are not in alphabetical order?\n\n- [3] Table 28.34 \"Additional Statistics Functions\" the\n'pg_stat_clear_snapshot' is the only one not in order?\n\n- [3] Table 28.35 \"Per-Backend Statistics Functions\" the\n'pg_stat_get_backend_idset' is the only one not in order?\n\n~~\n\nSo it doesn't seem as readable as it could be. If other people think\nthe same, I can write a patch for it.\n\nThoughts?\n\n------\n[1] https://www.postgresql.org/docs/devel/monitoring-stats.html\n[2] https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-STATS-VIEWS\n[3] https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-STATS-FUNCTIONS\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Tue, 2 Aug 2022 09:40:26 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "[DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 9:40 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Hackers.\n>\n> I noticed that there are quite a few items in Chapter 28.2 \"The\n> Cumulative Statistics System\" [1] which have no obvious order.\n>\n> e.g.\n>\n> - The views (28.2.3 -> 28.2.23) don't seem to be in any order that I\n> could work out. Why not alphabetical?\n>\n> - [2] Table 2.1 \"Dynamic Statistics View\" views are not in alphabetical order?\n>\n> - [2] Table 2.2 \"Collected Statistics View\" views are not in alphabetical order?\n>\n> - [3] Table 28.34 \"Additional Statistics Functions\" the\n> 'pg_stat_clear_snapshot' is the only one not in order?\n>\n> - [3] Table 28.35 \"Per-Backend Statistics Functions\" the\n> 'pg_stat_get_backend_idset' is the only one not in order?\n>\n> ~~\n>\n> So it doesn't seem as readable as it could be. If other people think\n> the same, I can write a patch for it.\n>\n\nI received no feedback when I reported this about a month ago, so I\nwent ahead and made patches to fix the problem anyway.\n\nPSA. Note that no content was harmed in the making of these patches -\nI only moved things around to be ordered.\n\nIMO these docs look better now.\n\nThoughts?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 30 Aug 2022 10:19:56 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "A rebase was needed.\n\nPSA v2*.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 6 Oct 2022 16:07:52 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "A rebase was needed.\n\nPSA v3*.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 24 Oct 2022 12:51:26 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "Sorry, I forgot the attachments in the previous post. PSA.\n\nOn Mon, Oct 24, 2022 at 12:51 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> A rebase was needed.\n>\n> PSA v3*.\n>\n> ------\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia",
"msg_date": "Mon, 24 Oct 2022 12:53:16 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> Sorry, I forgot the attachments in the previous post. PSA.\n\nI spent a bit of time looking at this. I agree that a lot of the\ncurrent ordering choices here look like they were made with the\nadvice of a dartboard, and there's a number of things that are\npretty blatantly just sloppy merging (like the out-of-order\nwait-event items). However, I'm not a big fan of \"alphabetical\norder at all costs\", because that frequently leads to ordering\ndecisions that are not a lot better than random from a semantic\nstandpoint. For example, I resist the idea that it's sensible\nto put pg_stat_all_indexes before pg_stat_all_tables.\nI'm unconvinced that putting pg_stat_sys_tables and\npg_stat_user_tables far away from pg_stat_all_tables is great,\neither.\n\nSo ... how do we proceed?\n\nOne thing I'm unhappy about that you didn't address is that\nthe subsection ordering in \"28.4. Progress Reporting\" could\nhardly have been invented even with a dartboard. Perhaps it\nreflects development order, but that's a poor excuse.\nI'd be inclined to alphabetize by SQL command name, but maybe\nleave Base Backup to the end since it's not a SQL command.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 06 Nov 2022 13:50:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Mon, Nov 7, 2022 at 5:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > Sorry, I forgot the attachments in the previous post. PSA.\n>\n> I spent a bit of time looking at this. I agree that a lot of the\n> current ordering choices here look like they were made with the\n> advice of a dartboard, and there's a number of things that are\n> pretty blatantly just sloppy merging (like the out-of-order\n> wait-event items). However, I'm not a big fan of \"alphabetical\n> order at all costs\", because that frequently leads to ordering\n> decisions that are not a lot better than random from a semantic\n> standpoint. For example, I resist the idea that it's sensible\n> to put pg_stat_all_indexes before pg_stat_all_tables.\n> I'm unconvinced that putting pg_stat_sys_tables and\n> pg_stat_user_tables far away from pg_stat_all_tables is great,\n> either.\n>\n\nThanks for taking the time to look at my patch. The \"at all costs\"\napproach was not the intention - I was just trying only to apply some\nsane ordering where I did not recognize a reason for the current\norder.\n\n> So ... how do we proceed?\n>\n\nTo proceed with the existing patches I need some guidance on exactly\nwhich of the changes can be considered improvements versus which ones\nare maybe just trading one 'random' order for another.\n\nHow about below?\n\nTable 28.1. Dynamic Statistics Views -- Alphabetical order would be a\nsmall improvement here, right?\n\nTable 28.2. Collected Statistics Views -- Leave this one unchanged\n(per your comments above).\n\nTable 28.12 Wait Events of type LWLock -- Seems a clear case of bad\nmerging. 
Alphabetical order is surely needed here, right?\n\nTable 28.34 Additional Statistic Functions -- Alphabetical order would\nbe a small improvement here, right?\n\nTable 28.35 Per-Backend Statistics Functions -- Alphabetical order\nwould be a small improvement here, right?\n\n> One thing I'm unhappy about that you didn't address is that\n> the subsection ordering in \"28.4. Progress Reporting\" could\n> hardly have been invented even with a dartboard. Perhaps it\n> reflects development order, but that's a poor excuse.\n> I'd be inclined to alphabetize by SQL command name, but maybe\n> leave Base Backup to the end since it's not a SQL command.\n>\n\nYes, I had previously only looked at the content of section 28.2\nbecause I didn't want to get carried away by changing too much until\nthere was some support for doing the first part.\n\nNow PSA a separate patch for fixing section \"28.4. Progress Reporting\"\norder as suggested.\n\n-----\nKind Regards,\nPeter Smith.\nFujitsu Australia.",
"msg_date": "Tue, 8 Nov 2022 11:18:36 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Mon, Nov 7, 2022 at 5:19 PM Peter Smith <smithpb2250@gmail.com> wrote:\n\n> On Mon, Nov 7, 2022 at 5:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Peter Smith <smithpb2250@gmail.com> writes:\n> > > Sorry, I forgot the attachments in the previous post. PSA.\n> >\n> > I spent a bit of time looking at this. I agree that a lot of the\n> > current ordering choices here look like they were made with the\n> > advice of a dartboard, and there's a number of things that are\n> > pretty blatantly just sloppy merging (like the out-of-order\n> > wait-event items). However, I'm not a big fan of \"alphabetical\n> > order at all costs\", because that frequently leads to ordering\n> > decisions that are not a lot better than random from a semantic\n> > standpoint. For example, I resist the idea that it's sensible\n> > to put pg_stat_all_indexes before pg_stat_all_tables.\n> > I'm unconvinced that putting pg_stat_sys_tables and\n> > pg_stat_user_tables far away from pg_stat_all_tables is great,\n> > either.\n> >\n>\n> Thanks for taking the time to look at my patch. The \"at all costs\"\n> approach was not the intention - I was just trying only to apply some\n> sane ordering where I did not recognize a reason for the current\n> order.\n>\n> > So ... how do we proceed?\n> >\n>\n> To proceed with the existing patches I need some guidance on exactly\n> which of the changes can be considered improvements versus which ones\n> are maybe just trading one 'random' order for another.\n>\n> How about below?\n>\n> Table 28.1. 
Dynamic Statistics Views -- Alphabetical order would be a\n> small improvement here, right?\n>\n\nThe present ordering seems mostly OK, though just like the \"Progress\"\nupdate below the bottom 6 pg_stat_progress_* entries should be\nalphabetized; but leaving them as a group at the end seems desirable.\n\nMove pg_stat_recovery_prefetch either after subscription or after activity\n- the replication/received/subscription stuff all seems like it should be\ngrouped together. As well as the security related ssl/gssapi.\n\n\n> Table 28.2. Collected Statistics Views -- Leave this one unchanged\n> (per your comments above).\n>\n\nI would suggest moving the 3 pg_statio_*_tables rows between the\npg_stat_*_tables and the pg_stat_xact_*_tables groups.\n\nEverything pertaining to cluster, database, tables, indexes, functions.\nslru and replication slots should likewise shift to the (near) top in the\ncluster/database grouping.\n\n\n> Table 28.12 Wait Events of type LWLock -- Seems a clear case of bad\n> merging. Alphabetical order is surely needed here, right?\n>\n\n+1 Agreed.\n\n>\n> Table 28.34 Additional Statistic Functions -- Alphabetical order would\n> be a small improvement here, right?\n>\n\nNo. All \"reset\" items should be grouped at the end like they are. I don't\nsee an alternative ordering among them that is clearly superior. Same for\nthe first four.\n\n\n>\n> Table 28.35 Per-Backend Statistics Functions -- Alphabetical order\n> would be a small improvement here, right?\n>\n>\nThis one I would rearrange alphabetically. 
Or, at least, I have a\ndifferent opinion of what would make a decent order but it doesn't seem all\nthat clearly better than alphabetical.\n\n\n> > I'd be inclined to alphabetize by SQL command name, but maybe\n> > leave Base Backup to the end since it's not a SQL command.\n> >\n>\n> Yes, I had previously only looked at the content of section 28.2\n> because I didn't want to get carried away by changing too much until\n> there was some support for doing the first part.\n>\n> Now PSA a separate patch for fixing section \"28.4. Progress Reporting\"\n> order as suggested.\n>\n>\nThis seems like a clear win.\n\nDavid J.\n\nOn Mon, Nov 7, 2022 at 5:19 PM Peter Smith <smithpb2250@gmail.com> wrote:On Mon, Nov 7, 2022 at 5:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > Sorry, I forgot the attachments in the previous post. PSA.\n>\n> I spent a bit of time looking at this. I agree that a lot of the\n> current ordering choices here look like they were made with the\n> advice of a dartboard, and there's a number of things that are\n> pretty blatantly just sloppy merging (like the out-of-order\n> wait-event items). However, I'm not a big fan of \"alphabetical\n> order at all costs\", because that frequently leads to ordering\n> decisions that are not a lot better than random from a semantic\n> standpoint. For example, I resist the idea that it's sensible\n> to put pg_stat_all_indexes before pg_stat_all_tables.\n> I'm unconvinced that putting pg_stat_sys_tables and\n> pg_stat_user_tables far away from pg_stat_all_tables is great,\n> either.\n>\n\nThanks for taking the time to look at my patch. The \"at all costs\"\napproach was not the intention - I was just trying only to apply some\nsane ordering where I did not recognize a reason for the current\norder.\n\n> So ... 
how do we proceed?\n>\n\nTo proceed with the existing patches I need some guidance on exactly\nwhich of the changes can be considered improvements versus which ones\nare maybe just trading one 'random' order for another.\n\nHow about below?\n\nTable 28.1. Dynamic Statistics Views -- Alphabetical order would be a\nsmall improvement here, right?The present ordering seems mostly OK, though just like the \"Progress\" update below the bottom 6 pg_stat_progress_* entries should be alphabetized; but leaving them as a group at the end seems desirable.Move pg_stat_recovery_prefetch either after subscription or after activity - the replication/received/subscription stuff all seems like it should be grouped together. As well as the security related ssl/gssapi.\n\nTable 28.2. Collected Statistics Views -- Leave this one unchanged\n(per your comments above).I would suggest moving the 3 pg_statio_*_tables rows between the pg_stat_*_tables and the pg_stat_xact_*_tables groups.Everything pertaining to cluster, database, tables, indexes, functions. slru and replication slots should likewise shift to the (near) top in the cluster/database grouping.\n\nTable 28.12 Wait Events of type LWLock -- Seems a clear case of bad\nmerging. Alphabetical order is surely needed here, right?+1 Agreed.\n\nTable 28.34 Additional Statistic Functions -- Alphabetical order would\nbe a small improvement here, right?No. All \"reset\" items should be grouped at the end like they are. I don't see an alternative ordering among them that is clearly superior. Same for the first four. \n\nTable 28.35 Per-Backend Statistics Functions -- Alphabetical order\nwould be a small improvement here, right?This one I would rearrange alphabetically. Or, at least, I have a different opinion of what would make a decent order but it doesn't seem all that clearly better than alphabetical. 
\n> I'd be inclined to alphabetize by SQL command name, but maybe\n> leave Base Backup to the end since it's not a SQL command.\n>\n\nYes, I had previously only looked at the content of section 28.2\nbecause I didn't want to get carried away by changing too much until\nthere was some support for doing the first part.\n\nNow PSA a separate patch for fixing section \"28.4. Progress Reporting\"\norder as suggested.This seems like a clear win.David J.",
"msg_date": "Wed, 9 Nov 2022 16:03:42 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Thu, Nov 10, 2022 at 10:04 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n...\n>> > So ... how do we proceed?\n>> >\n>>\n>> To proceed with the existing patches I need some guidance on exactly\n>> which of the changes can be considered improvements versus which ones\n>> are maybe just trading one 'random' order for another.\n>>\n>> How about below?\n>>\n>> Table 28.1. Dynamic Statistics Views -- Alphabetical order would be a\n>> small improvement here, right?\n>\n>\n> The present ordering seems mostly OK, though just like the \"Progress\" update below the bottom 6 pg_stat_progress_* entries should be alphabetized; but leaving them as a group at the end seems desirable.\n>\n> Move pg_stat_recovery_prefetch either after subscription or after activity - the replication/received/subscription stuff all seems like it should be grouped together. As well as the security related ssl/gssapi.\n>\n>>\n>> Table 28.2. Collected Statistics Views -- Leave this one unchanged\n>> (per your comments above).\n>\n>\n> I would suggest moving the 3 pg_statio_*_tables rows between the pg_stat_*_tables and the pg_stat_xact_*_tables groups.\n>\n> Everything pertaining to cluster, database, tables, indexes, functions. slru and replication slots should likewise shift to the (near) top in the cluster/database grouping.\n>\n>>\n>> Table 28.12 Wait Events of type LWLock -- Seems a clear case of bad\n>> merging. Alphabetical order is surely needed here, right?\n>\n>\n> +1 Agreed.\n>>\n>>\n>> Table 28.34 Additional Statistic Functions -- Alphabetical order would\n>> be a small improvement here, right?\n>\n>\n> No. All \"reset\" items should be grouped at the end like they are. I don't see an alternative ordering among them that is clearly superior. Same for the first four.\n>\n>>\n>>\n>> Table 28.35 Per-Backend Statistics Functions -- Alphabetical order\n>> would be a small improvement here, right?\n>>\n>\n> This one I would rearrange alphabetically. 
Or, at least, I have a different opinion of what would make a decent order but it doesn't seem all that clearly better than alphabetical.\n>\n>>\n>> > I'd be inclined to alphabetize by SQL command name, but maybe\n>> > leave Base Backup to the end since it's not a SQL command.\n>> >\n>>\n>> Yes, I had previously only looked at the content of section 28.2\n>> because I didn't want to get carried away by changing too much until\n>> there was some support for doing the first part.\n>>\n>> Now PSA a separate patch for fixing section \"28.4. Progress Reporting\"\n>> order as suggested.\n>>\n>\n> This seems like a clear win.\n>\n> David J.\n\nThanks for the review and table ordering advice. AFAICT I have made\nall the changes according to the suggestions.\n\nEach re-ordering was done as a separate patch (so maybe they can be\npushed separately, in case some but not all are OK). PSA.\n\n~~\n\nI was also wondering (but have not yet done) if the content *outside*\nthe tables should be reordered to match the table 28.1/28.2 order.\n\ne.g. Currently it is not quite the same:\n\nCURRENT\n28.2.3. pg_stat_activity\n28.2.4. pg_stat_replication\n28.2.5. pg_stat_replication_slots\n28.2.6. pg_stat_wal_receiver\n28.2.7. pg_stat_recovery_prefetch\n28.2.8. pg_stat_subscription\n28.2.9. pg_stat_subscription_stats\n28.2.10. pg_stat_ssl\n28.2.11. pg_stat_gssapi\n\n28.2.12. pg_stat_archiver\n28.2.13. pg_stat_bgwriter\n28.2.14. pg_stat_wal\n28.2.15. pg_stat_database\n28.2.16. pg_stat_database_conflicts\n28.2.17. pg_stat_all_tables\n28.2.18. pg_stat_all_indexes\n28.2.19. pg_statio_all_tables\n28.2.20. pg_statio_all_indexes\n28.2.21. pg_statio_all_sequences\n28.2.22. pg_stat_user_functions\n28.2.23. pg_stat_slru\n\nSUGGESTED\n28.2.3. pg_stat_activity\n28.2.4. pg_stat_replication\n28.2.6. pg_stat_wal_receiver\n28.2.7. pg_stat_recovery_prefetch\n28.2.8. pg_stat_subscription\n28.2.10. pg_stat_ssl\n28.2.11. pg_stat_gssapi\n\n28.2.12. pg_stat_archiver\n28.2.13. pg_stat_bgwriter\n28.2.14. 
pg_stat_wal\n28.2.15. pg_stat_database\n28.2.16. pg_stat_database_conflicts\n28.2.23. pg_stat_slru\n28.2.5. pg_stat_replication_slots\n28.2.17. pg_stat_all_tables\n28.2.18. pg_stat_all_indexes\n28.2.19. pg_statio_all_tables\n28.2.20. pg_statio_all_indexes\n28.2.21. pg_statio_all_sequences\n28.2.22. pg_stat_user_functions\n28.2.9. pg_stat_subscription_stats\n\nThoughts?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 16 Nov 2022 12:38:56 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 6:39 PM Peter Smith <smithpb2250@gmail.com> wrote:\n\n>\n> I was also wondering (but have not yet done) if the content *outside*\n> the tables should be reordered to match the table 28.1/28.2 order.\n>\n> Thoughts?\n>\n>\nI would love to do away with the ToC listing of view names in 28.2\naltogether.\n\nAlso, make it so each view ends up being its own separate page.\n\nThe name of the views in the table should then be the hyperlinks to those\npages.\n\nBasically the way Chapter 54.1 works. Though the interplay between the top\nChapter 54 and 54.1 is a bit repetitive.\n\nhttps://www.postgresql.org/docs/current/views.html\n\nI wonder whether having the table be structured but the ToC be purely\nalphabetical would be considered a good idea...\n\nThe tables need hyperlinks regardless. I wouldn't insist on changing the\nordering to match the table, especially with the hyperlinks, but I also\nwouldn't reject it. Figuring out how to make them one-per-page would be\ntime better spent though.\n\nDavid J.\n\nOn Tue, Nov 15, 2022 at 6:39 PM Peter Smith <smithpb2250@gmail.com> wrote:\nI was also wondering (but have not yet done) if the content *outside*\nthe tables should be reordered to match the table 28.1/28.2 order.\nThoughts?I would love to do away with the ToC listing of view names in 28.2 altogether.Also, make it so each view ends up being its own separate page.The name of the views in the table should then be the hyperlinks to those pages.Basically the way Chapter 54.1 works. Though the interplay between the top Chapter 54 and 54.1 is a bit repetitive.https://www.postgresql.org/docs/current/views.htmlI wonder whether having the table be structured but the ToC be purely alphabetical would be considered a good idea...The tables need hyperlinks regardless. I wouldn't insist on changing the ordering to match the table, especially with the hyperlinks, but I also wouldn't reject it. 
Figuring out how to make them one-per-page would be time better spent though.David J.",
"msg_date": "Wed, 16 Nov 2022 14:46:41 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Thu, Nov 17, 2022 at 8:46 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Tue, Nov 15, 2022 at 6:39 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>>\n>>\n>> I was also wondering (but have not yet done) if the content *outside*\n>> the tables should be reordered to match the table 28.1/28.2 order.\n>>\n>> Thoughts?\n>>\n\nThanks for the feedback/suggestions\n\n>\n> I would love to do away with the ToC listing of view names in 28.2 altogether.\n>\n\nOK, done. See patch 0006. To prevent all the views sections from\nparticipating in the ToC I simply changed them to <sect3> instead of\n<sect2>. I’m not 100% sure if this was a brilliant modification or a\ntotal hack, but it does do exactly what you wanted.\n\n> Also, make it so each view ends up being its own separate page.\n>\n\nI did not do this. AFAIK those views of chapter 54 get rendered to\nseparate pages only because they are top-level <sect1>. So I do not\nknow how to put all these stats views onto different pages without\nradically changing the document structure. Anyway – doing this would\nbe incompatible with my <sect3> changes of patch 0006 (see above).\n\n\n> The name of the views in the table should then be the hyperlinks to those pages.\n>\n\nOK done. See patch 0005. All the view names (in column one of the\ntables) are hyperlinked to the views the same way as Chapter 54 does.\nThe tables are a lot cleaner now. A couple of inconsistent view ids\nwere also corrected.\n\n> Basically the way Chapter 54.1 works. Though the interplay between the top Chapter 54 and 54.1 is a bit repetitive.\n>\n> https://www.postgresql.org/docs/current/views.html\n>\n> I wonder whether having the table be structured but the ToC be purely alphabetical would be considered a good idea...\n>\n> The tables need hyperlinks regardless. I wouldn't insist on changing the ordering to match the table, especially with the hyperlinks, but I also wouldn't reject it. 
Figuring out how to make them one-per-page would be time better spent though.\n>\n\nPSA new patches. Now there are 6 of them. If some of the earlier\npatches are agreeable can those ones please be committed? (because I\nthink this patch may be susceptible to needing a big rebase if\nanything in those tables changes).\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.",
"msg_date": "Wed, 23 Nov 2022 19:36:31 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On 23.11.22 09:36, Peter Smith wrote:\n> PSA new patches. Now there are 6 of them. If some of the earlier\n> patches are agreeable can those ones please be committed? (because I\n> think this patch may be susceptible to needing a big rebase if\n> anything in those tables changes).\n\nI have committed\n\nv6-0001-Re-order-sections-of-28.4.-Progress-Reporting.patch\nv6-0003-Re-order-Table-28.12-Wait-Events-of-type-LWLock.patch\nv6-0004-Re-order-Table-28.35-Per-Backend-Statistics-Funct.patch\n\nwhich seemed to have clear consensus.\n\nv6-0002-Re-order-Table-28.2-Collected-Statistics-Views.patch\n\nThis one also seems ok, need a bit more time to look it over.\n\nv6-0005-Cleanup-view-name-hyperlinks-for-Tables-28.1-and-.patch\nv6-0006-Remove-all-stats-views-from-the-ToC-of-28.2.patch\n\nI wasn't sure yet whether these had been reviewed, since they were \nlate additions to the patch series.\n\n\n\n",
"msg_date": "Fri, 25 Nov 2022 13:09:09 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Fri, Nov 25, 2022 at 5:09 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 23.11.22 09:36, Peter Smith wrote:\n>\n\n> v6-0005-Cleanup-view-name-hyperlinks-for-Tables-28.1-and-.patch\n> v6-0006-Remove-all-stats-views-from-the-ToC-of-28.2.patch\n>\n> I wasn't sure yet whether these had been reviewed yet, sine they were\n> late additions to the patch series.\n>\n>\nThey have not been reviewed.\n\nIf it's a matter of either-or I'd really prefer one page per grouping over\ngetting rid of the table-of-contents. But I suspect there has to be some\nway to add an sgml element to the markup to force a new page and would\nprefer to confirm or refute that prior to committing 0006.\n\n0005 seems a win either way though I haven't reviewed it yet.\n\nDavid J.",
"msg_date": "Fri, 25 Nov 2022 08:10:43 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Wed, Nov 23, 2022 at 1:36 AM Peter Smith <smithpb2250@gmail.com> wrote:\n\n> On Thu, Nov 17, 2022 at 8:46 AM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n>\n> > Also, make it so each view ends up being its own separate page.\n> >\n>\n> I did not do this. AFAIK those views of chapter 54 get rendered to\n> separate pages only because they are top-level <sect1>. So I do not\n> know how to put all these stats views onto different pages without\n> radically changing the document structure. Anyway – doing this would\n> be incompatible with my <sect3> changes of patch 0006 (see above).\n>\n>\nI did some experimentation and reading on this today. Short answer - turn\neach view into a refentry under a dedicated sect2 where the table resides.\n\nDavid J.\n\n<chapter>\n[...]\n<sect1> <!--The Cumulative Statistics System -->\n[...]\n<sect2>\n<title>Statistics Views</title>\n <para>Table of Statistics Views...</para>\n\n <refentry id=\"monitoring-pg-stat-activity-view\">\n <refnamediv><refname>pg_stat_activity</refname><refpurpose>Purpose</refpurpose></refnamediv>\n <refsect1>\n <title><structname>pg_stat_activity</structname></title>\n\n <indexterm>\n <primary>pg_stat_activity</primary>\n </indexterm>\n\n </refsect1></refentry>\n\n</sect2> <!-- Statistics Views -->\n\n</sect1>\n</chapter>\n\nI was doing quite a bit of experimentation and basically gutted the actual\npage to make that easier. The end result looked basically like below.\n\nChapter 28. Monitoring Database Activity\n\nTable of Contents\n\n28.1. Standard Unix Tools\n28.2. The Cumulative Statistics System\n\n 28.2.1. Statistics Collection Configuration\n 28.2.2. Viewing Statistics\n 28.2.3. Statistics Views\n\nA database administrator frequently wonders, “What is the system doing\nright now?” This chapter discusses how to find that out.\n\nSeveral tools are available for monitoring database activity and analyzing\nperformance. 
Most of this chapter is devoted to describing PostgreSQL's\ncumulative statistics system, but one should not neglect regular Unix\nmonitoring programs such as ps, top, iostat, and vmstat. Also, once one has\nidentified a poorly-performing query, further investigation might be needed\nusing PostgreSQL's EXPLAIN command. Section 14.1 discusses EXPLAIN and\nother methods for understanding the behavior of an individual query.\n\n============== Page for 28.2 (sect1) ==============\n28.2. The Cumulative Statistics System\n\n28.2.1. Statistics Collection Configuration\n28.2.2. Viewing Statistics\n28.2.3. Statistics Views\n\nPostgreSQL's cumulative statistics system supports collection and reporting\nof information about server activity. Presently, accesses to tables and\nindexes in both disk-block and individual-row terms are counted. The total\nnumber of rows in each table, and information about vacuum and analyze\nactions for each table are also counted. If enabled, calls to user-defined\nfunctions and the total time spent in each one are counted as well.\n\nPostgreSQL also supports reporting dynamic information about exactly what\nis going on in the system right now, such as the exact command currently\nbeing executed by other server processes, and which other connections exist\nin the system. This facility is independent of the cumulative statistics\nsystem.\n28.2.1. Statistics Collection Configuration\n\nSince collection of statistics adds some overhead to query execution, the\nsystem can be configured to collect or not collect information. This is\ncontrolled by configuration parameters that are normally set in\npostgresql.conf. 
(See Chapter 20 for details about setting configuration\nparameters.)\n\nThe parameter track_activities enables monitoring of the current command\nbeing executed by any server process.\n\nThe parameter track_counts controls whether cumulative statistics are\ncollected about table and index accesses.\n\nThe parameter track_functions enables tracking of usage of user-defined\nfunctions.\n\nThe parameter track_io_timing enables monitoring of block read and write\ntimes.\n\nThe parameter track_wal_io_timing enables monitoring of WAL write times.\n\nNormally these parameters are set in postgresql.conf so that they apply to\nall server processes, but it is possible to turn them on or off in\nindividual sessions using the SET command. (To prevent ordinary users from\nhiding their activity from the administrator, only superusers are allowed\nto change these parameters with SET.)\n\nCumulative statistics are collected in shared memory. Every PostgreSQL\nprocess collects statistics locally, then updates the shared data at\nappropriate intervals. When a server, including a physical replica, shuts\ndown cleanly, a permanent copy of the statistics data is stored in the\npg_stat subdirectory, so that statistics can be retained across server\nrestarts. In contrast, when starting from an unclean shutdown (e.g., after\nan immediate shutdown, a server crash, starting from a base backup, and\npoint-in-time recovery), all statistics counters are reset.\n28.2.2. Viewing Statistics\n\ntest\n28.2.3. 
Statistics Views\n\nTable of Statistics Views...\n\n===============\nfile:///usr/local/pgsql/share/doc/html/monitoring-pg-stat-activity-view.html\n=============\n(no ToC entry but the Next link in our footer does point to here)\n\npg_stat_activity\n\npg_stat_activity — Purpose\npg_stat_activity\n\n\nThe pg_stat_activity view will have one row per server process, showing\ninformation related to the current activity of that process.\n\nHere is an example of how wait events can be viewed:\n\nSELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE\nwait_event is NOT NULL;\n pid | wait_event_type | wait_event\n------+-----------------+------------\n 2540 | Lock | relation\n 6644 | LWLock | ProcArray\n(2 rows)",
"msg_date": "Fri, 25 Nov 2022 20:43:38 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Fri, Nov 25, 2022 at 11:09 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 23.11.22 09:36, Peter Smith wrote:\n> > PSA new patches. Now there are 6 of them. If some of the earlier\n> > patches are agreeable can those ones please be committed? (because I\n> > think this patch may be susceptible to needing a big rebase if\n> > anything in those tables changes).\n>\n> I have committed\n>\n> v6-0001-Re-order-sections-of-28.4.-Progress-Reporting.patch\n> v6-0003-Re-order-Table-28.12-Wait-Events-of-type-LWLock.patch\n> v6-0004-Re-order-Table-28.35-Per-Backend-Statistics-Funct.patch\n>\n> which seemed to have clear consensus.\n>\n> v6-0002-Re-order-Table-28.2-Collected-Statistics-Views.patch\n>\n> This one also seems ok, need a bit more time to look it over.\n>\n> v6-0005-Cleanup-view-name-hyperlinks-for-Tables-28.1-and-.patch\n> v6-0006-Remove-all-stats-views-from-the-ToC-of-28.2.patch\n>\n> I wasn't sure yet whether these had been reviewed yet, sine they were\n> late additions to the patch series.\n\nThank you for pushing those ones.\n\nPSA the remaining patches re-posted so cfbot can keep working\n\nv6-0002 --> v7-0001\nv6-0005 -> v7-0002\nv6-0006 -> v7-0003\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 28 Nov 2022 11:07:02 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Sat, Nov 26, 2022 at 2:43 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Wed, Nov 23, 2022 at 1:36 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>>\n>> On Thu, Nov 17, 2022 at 8:46 AM David G. Johnston\n>> <david.g.johnston@gmail.com> wrote:\n>>\n>> > Also, make it so each view ends up being its own separate page.\n>> >\n>>\n>> I did not do this. AFAIK those views of chapter 54 get rendered to\n>> separate pages only because they are top-level <sect1>. So I do not\n>> know how to put all these stats views onto different pages without\n>> radically changing the document structure. Anyway – doing this would\n>> be incompatible with my <sect3> changes of patch 0006 (see above).\n>>\n>\n> I did some experimentation and reading on this today. Short answer - turn each view into a refentry under a dedicated sect2 where the table resides.\n\nThanks very much for your suggestion.\n\nI will look at redoing the v7-0003 patch using that approach when I\nget some more time (maybe in a day or so),\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 28 Nov 2022 11:10:30 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Sat, Nov 26, 2022 at 2:43 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Wed, Nov 23, 2022 at 1:36 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>>\n>> On Thu, Nov 17, 2022 at 8:46 AM David G. Johnston\n>> <david.g.johnston@gmail.com> wrote:\n>>\n>> > Also, make it so each view ends up being its own separate page.\n>> >\n>>\n>> I did not do this. AFAIK those views of chapter 54 get rendered to\n>> separate pages only because they are top-level <sect1>. So I do not\n>> know how to put all these stats views onto different pages without\n>> radically changing the document structure. Anyway – doing this would\n>> be incompatible with my <sect3> changes of patch 0006 (see above).\n>>\n>\n> I did some experimentation and reading on this today. Short answer - turn each view into a refentry under a dedicated sect2 where the table resides.\n>\n> David J.\n>\n> <chapter>\n> [...]\n> <sect1> <!--The Cumulative Statistics System -->\n> [...]\n> <sect2>\n> <title>Statistics Views</title>\n> <para>Table of Statistics Views...</para>\n>\n> <refentry id=\"monitoring-pg-stat-activity-view\">\n> <refnamediv><refname>pg_stat_activity</refname><refpurpose>Purpose</refpurpose></refnamediv>\n> <refsect1>\n> <title><structname>pg_stat_activity</structname></title>\n>\n> <indexterm>\n> <primary>pg_stat_activity</primary>\n> </indexterm>\n>\n> </refsect1></refentry>\n>\n> </sect2> <!-- Statistics Views -->\n>\n> </sect1>\n> </chapter>\n>\n> I was doing quite a bit of experimentation and basically gutted the actual page to make that easier. The end result looked basically like below.\n>\n> Chapter 28. Monitoring Database Activity\n>\n> Table of Contents\n>\n> 28.1. Standard Unix Tools\n> 28.2. The Cumulative Statistics System\n>\n> 28.2.1. Statistics Collection Configuration\n> 28.2.2. Viewing Statistics\n> 28.2.3. 
Statistics Views\n>\n\nPSA v8* patches.\n\nHere, patches 0001 and 0002 are unchanged, but 0003 has many changes\nper David's suggestion [1] to change all these views to <refentry>\nblocks.\n\nSo, I've done pretty much the same as per the above advice, except:\n- I just called the <refpurpose> text for all these views \"View\"\n- I changed the <refsect1> <title> to be \"Description\". This renders\nnicer (without the double text of the view name) and is also more in\nkeeping with the example I found here [2].\n\nEnd result seems OK. YMMV.\n\n~\n\nNote that the refentry order within the monitoring.sgml is unchanged\nfrom the previous <sect2> section order, so it's neither alphabetical\nnor is it in the same order as within the tables. This is noticeable\nonly if you pay attention to the NEXT/PREV links at the bottom of the\nbrowser page... so I'm not sure if it's worth shuffling these refentry\nblocks into some better order or not?\n\n------\n[1] David's restructure suggestion\nhttps://www.postgresql.org/message-id/CAKFQuwYkM5UZT%2B6tG%2BNgZvDcd5VavS%2BxNHsGsWC8jS-KJsxh7w%40mail.gmail.com\n[2] Example of a refentry https://tdg.docbook.org/tdg/3.1/refentry.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 29 Nov 2022 18:29:48 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On 29.11.22 08:29, Peter Smith wrote:\n> PSA v8* patches.\n> \n> Here, patches 0001 and 0002 are unchanged, but 0003 has many changes\n> per David's suggestion [1] to change all these views to <refentry>\n> blocks.\n\nI don't understand what order 0001 is trying to achieve. I know we \ndidn't necessarily want to go fully alphabetic, but if we're going to \nspend time on this, let's come up with a system that the next \ncontributor who adds a view will be able to understand and follow.\n\nAs an aside, I find the mixing of pg_stat_* and pg_statio_* views \nvisually distracting. It was easier to read before when they were in \nseparate blocks.\n\nI think something like this would be manageable:\n\n<!-- everything related to global objects, alphabetically -->\npg_stat_archiver\npg_stat_bgwriter\npg_stat_database\npg_stat_database_conflicts\npg_stat_replication_slots\npg_stat_slru\npg_stat_subscription_stats\npg_stat_wal\n\n<!-- all \"stat\" for schema objects, by \"importance\" -->\npg_stat_all_tables\npg_stat_sys_tables\npg_stat_user_tables\npg_stat_xact_all_tables\npg_stat_xact_sys_tables\npg_stat_xact_user_tables\npg_stat_all_indexes\npg_stat_sys_indexes\npg_stat_user_indexes\npg_stat_user_functions\npg_stat_xact_user_functions\n\n<!-- all \"statio\" for schema objects, by \"importance\" -->\npg_statio_all_tables\npg_statio_sys_tables\npg_statio_user_tables\npg_statio_all_indexes\npg_statio_sys_indexes\npg_statio_user_indexes\npg_statio_all_sequences\npg_statio_sys_sequences\npg_statio_user_sequences\n\n\nIn any case, the remaining patches are new and need further review, so \nI'll move this to the next CF.\n\n\n\n\n",
"msg_date": "Thu, 1 Dec 2022 10:20:34 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Thu, Dec 1, 2022 at 2:20 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 29.11.22 08:29, Peter Smith wrote:\n> > PSA v8* patches.\n> >\n> > Here, patches 0001 and 0002 are unchanged, but 0003 has many changes\n> > per David's suggestion [1] to change all these views to <refentry>\n> > blocks.\n>\n> I don't understand what order 0001 is trying to achieve.\n\n\nThe rule behind 0001 is:\n\nAll global object stats\nAll table object stats (stat > statio > xact; all > sys > user)\nAll index object stats\nAll sequence object stats\nAll function object stats\n\n\n> As an aside, I find the mixing of pg_stat_* and pg_statio_* views\n> visually distracting. It was easier to read before when they were in\n> separate blocks.\n>\n\nI found that having the statio at the end of each object type block added a\nnatural partitioning for tables and indexes that the existing order lacked\nand that made reading the table be more \"wall-of-text-ish\", and thus more\ndifficult to read, than necessary.\n\nI'm not opposed to the following though. 
The object-type driven order just\nfeels more useful but I really cannot justify it beyond that.\n\nI'm not particularly enamored with the existing single large table but\ndon't have a better structure to offer at this time.\n\n\n> I think something like this would be manageable:\n>\n> <!-- everything related to global objects, alphabetically -->\n> pg_stat_archiver\n> pg_stat_bgwriter\n> pg_stat_database\n> pg_stat_database_conflicts\n> pg_stat_replication_slots\n> pg_stat_slru\n> pg_stat_subscription_stats\n> pg_stat_wal\n>\n\nWAL being adjacent to archiver/bgwriter seemed reasonable so I left that\nalone.\nReplication and Subscription being adjacent seemed reasonable so I left\nthat alone.\nThus slru ended up last, with database* remaining as-is.\n\nAt 8 items, with a group size average of 2, pure alphabetical is also\nreasonable.\n\n\n> <!-- all \"stat\" for schema objects, by \"importance\" -->\n>\n> <!-- all \"statio\" for schema objects, by \"importance\" -->\n>\n>\nDavid J.",
"msg_date": "Thu, 1 Dec 2022 07:35:15 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "I'd like to \"fix\" this but IIUC there is no consensus yet about what\norder is best for patch 0001, right?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 7 Dec 2022 12:36:05 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 6:36 PM Peter Smith <smithpb2250@gmail.com> wrote:\n\n> I'd like to \"fix\" this but IIUC there is no consensus yet about what\n> order is best for patch 0001, right?\n>\n>\nI'm planning on performing a more thorough review of 0003 and 0004 tomorrow.\n\nAs for 0001 - go with Peter E.'s suggested ordering.\n\nDavid J.",
"msg_date": "Tue, 6 Dec 2022 19:57:30 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 7:57 PM David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Tue, Dec 6, 2022 at 6:36 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n>> I'd like to \"fix\" this but IIUC there is no consensus yet about what\n>> order is best for patch 0001, right?\n>>\n>>\n> I'm planning on performing a more thorough review of 0003 and 0004\n> tomorrow.\n>\n>\nCompiled just fine.\n\nI do think every row of the views table should be hyperlinked. None of the\n\"xact\" ones are for some reason. For the sys/user ones just point to the\nsame place as the corresponding \"all\" link.\n\npg_stat_subscription_stats needs to be moved up to the \"globals\" section.\n\nThere are a bunch of trailing \". See\" in the descriptions now that need to\nbe cleaned up. (0002)\n\nDavid J.",
"msg_date": "Wed, 7 Dec 2022 09:26:12 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "Thanks for the ongoing feedback.\n\nPSA patches for v9*\n\nv9-0001 - Now the table rows are ordered per PeterE's suggestions [1]\n\nv9-0002 - All the review comments from DavidJ [2] are addressed\n\nv9-0003 - Unchanged since v8.\n\n------\n[1] https://www.postgresql.org/message-id/cfdb0030-8f62-ed6d-4246-8d9bf855bc48%40enterprisedb.com\n[2] https://www.postgresql.org/message-id/CAKFQuwby7xWHek8%3D6UPaNXrhGA-i0B2zMOmBoGHgc4yaO8NH_w%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.",
"msg_date": "Thu, 8 Dec 2022 13:30:16 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On 08.12.22 03:30, Peter Smith wrote:\n> PSA patches for v9*\n> \n> v9-0001 - Now the table rows are ordered per PeterE's suggestions [1]\n\ncommitted\n\n> v9-0002 - All the review comments from DavidJ [2] are addressed\n\nI'm not sure about this one. It removes the \"see [link] for details\" \nphrases and instead makes the view name a link. I think this loses the \ncue that there is more information elsewhere. Otherwise, one could \nthink that, say, the entry about pg_stat_activity is the primary source \nand the link just links to itself. Also keep in mind that people use \nmedia where links are not that apparent (PDF), so the presence of a link \nby itself cannot be the only cue about the flow of the information.\n\n\n\n",
"msg_date": "Mon, 2 Jan 2023 09:17:28 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Mon, 2 Jan 2023 at 13:47, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 08.12.22 03:30, Peter Smith wrote:\n> > PSA patches for v9*\n> >\n> > v9-0001 - Now the table rows are ordered per PeterE's suggestions [1]\n>\n> committed\n>\n> > v9-0002 - All the review comments from DavidJ [2] are addressed\n>\n> I'm not sure about this one. It removes the \"see [link] for details\"\n> phrases and instead makes the view name a link. I think this loses the\n> cue that there is more information elsewhere. Otherwise, one could\n> think that, say, the entry about pg_stat_activity is the primary source\n> and the link just links to itself. Also keep in mind that people use\n> media where links are not that apparent (PDF), so the presence of a link\n> by itself cannot be the only cue about the flow of the information.\n\nI'm not sure if anything is pending for v9-0003, if there is something\npending, please post an updated patch for the same.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 4 Jan 2023 12:37:59 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 6:08 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, 2 Jan 2023 at 13:47, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > On 08.12.22 03:30, Peter Smith wrote:\n> > > PSA patches for v9*\n> > >\n> > > v9-0001 - Now the table rows are ordered per PeterE's suggestions [1]\n> >\n> > committed\n\nThanks for pushing.\n\n> >\n> > > v9-0002 - All the review comments from DavidJ [2] are addressed\n> >\n> > I'm not sure about this one. It removes the \"see [link] for details\"\n> > phrases and instead makes the view name a link. I think this loses the\n> > cue that there is more information elsewhere. Otherwise, one could\n> > think that, say, the entry about pg_stat_activity is the primary source\n> > and the link just links to itself. Also keep in mind that people use\n> > media where links are not that apparent (PDF), so the presence of a link\n> > by itself cannot be the only cue about the flow of the information.\n>\n\nPSA new patch for v10-0001\n\nv9-0001 --> pushed, thanks!\nv9-0002 --> I removed this based on the reject reason above\nv9-0003 --> v10-0001\n\n> I'm not sure if anything is pending for v9-0003, if there is something\n> pending, please post an updated patch for the same.\n>\n\nThanks for the reminder. PSA v10.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 11 Jan 2023 17:11:38 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On 11.01.23 07:11, Peter Smith wrote:\n> v9-0003 --> v10-0001\n> \n>> I'm not sure if anything is pending for v9-0003, if there is something\n>> pending, please post an updated patch for the same.\n> \n> Thanks for the reminder. PSA v10.\n\nSo this patch changes some sections describing system views to \nrefentry's. What is the reason for that? refentry's are basically man \npages; do we want man pages for each system view?\n\nMaybe (*), but then we should also do the same to all the other system \nviews, all the system catalogs, everything else. Just changing a few in \na single place seems odd.\n\n(*) -- but also maybe not?\n\n\n\n",
"msg_date": "Wed, 18 Jan 2023 11:36:18 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 3:36 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 11.01.23 07:11, Peter Smith wrote:\n> > v9-0003 --> v10-0001\n> >\n> >> I'm not sure if anything is pending for v9-0003, if there is something\n> >> pending, please post an updated patch for the same.\n> >\n> > Thanks for the reminder. PSA v10.\n>\n> So this patch changes some sections describing system views to\n> refentry's. What is the reason for that? refentry's are basically man\n> pages; do we want man pages for each system view?\n>\n\nI didn't really consider the effect this might have on man pages. I knew\nit would produce the desired effect in the HTML and assumed it would\nproduce an acceptable effect in the PDF. I was going for the html effect\nof having these views chunked into their own pages, any other changes being\nnon-detrimental. And inspecting the DocBook configurations showed that\nsect1 and refentry had this effect. Using sect1 is not possible in this\npart of the documentation.\n\n\n>\n> Maybe (*), but then we should also do the same to all the other system\n> views, all the system catalogs, everything else. Just changing a few in\n> a single place seems odd.\n>\n> (*) -- but also maybe not?\n>\n>\nI could see those who use man pages being pleased with having access to\nthese core building blocks of the system at ready access. I am not one of\nthose people, using the website exclusively. If there is a champion of man\npages here who wants to ensure that changes in this area work well there,\nthis patch would be better for it.\n\nI really want a one-page-per-view output on the website in this section.\nThis is the only way I could see getting to that point (as noted upthread,\nsystem catalogs don't have this problem because they are able to be\nmarked up as sect1). The existing side-effect is, for me, an acceptable\ntrade-off situation. If you want to provide a statement for why these are\nspecial, it's because they are in the System Monitoring chapter instead of\nSystem Internals and the man pages don't cover system internals...\n\nI'm not opposed to alternative markup that gets the pagination job done,\nthough it likely involves tool-chain configuration/modifications. There is\na nearby thread where this is being done presently, so maybe if refentry is\na commit-blocker there is still hope, but it is presently outside my\ncapability. I'm after the pagination and have no current preference as to\nhow it is technically accomplished.\n\nDavid J.",
"msg_date": "Wed, 18 Jan 2023 08:07:33 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> ... I was going for the html effect\n> of having these views chunked into their own pages, any other changes being\n> non-detrimental.\n\nBut is that a result we want? It will for example break any bookmarks\nthat people might have for these documentation entries. It will also\npretty thoroughly break the cross-version navigation links in this\npart of the docs.\n\nMaybe the benefit is worth those costs, but I'm entirely not convinced\nof that. I think we need to tread pretty lightly when rearranging\nlongstanding documentation-layout decisions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Jan 2023 10:38:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 8:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > ... I was going for the html effect\n> > of having these views chunked into their own pages, any other changes\n> being\n> > non-detrimental.\n>\n> But is that a result we want? It will for example break any bookmarks\n> that people might have for these documentation entries. It will also\n> pretty thoroughly break the cross-version navigation links in this\n> part of the docs.\n\n\n> Maybe the benefit is worth those costs, but I'm entirely not convinced\n> of that. I think we need to tread pretty lightly when rearranging\n> longstanding documentation-layout decisions.\n>\n>\nFair points.\n\nThe external linking can be solved with redirect rules, as I believe we've\ndone before, and fairly recently. Even if not, I think when they see why\nthe break happened they will be happy for the improved user experience.\n\nI do think it is important enough a change to warrant breaking the\ncross-version navigation links. I can imagine a linking scheme that would\nstill work but I'm doubtful that this is important enough to expend the\ndevelopment effort.\n\nDavid J.",
"msg_date": "Wed, 18 Jan 2023 08:55:28 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 2:55 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Wed, Jan 18, 2023 at 8:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>> > ... I was going for the html effect\n>> > of having these views chunked into their own pages, any other changes being\n>> > non-detrimental.\n>>\n>> But is that a result we want? It will for example break any bookmarks\n>> that people might have for these documentation entries. It will also\n>> pretty thoroughly break the cross-version navigation links in this\n>> part of the docs.\n>>\n>>\n>> Maybe the benefit is worth those costs, but I'm entirely not convinced\n>> of that. I think we need to tread pretty lightly when rearranging\n>> longstanding documentation-layout decisions.\n>>\n>\n\nDavid already gave a good summary [1], but since I was the OP here is\nthe background of v10-0001 from my PoV.\n\n~\n\nThe original $SUBJECT requirements evolved to also try to make each\nview appear on a separate page after that was suggested by DavidJ [2].\nI was unable to achieve per-page views \"without radically changing the\ndocument structure.\" [3], but DavidJ found a way [4] to do it using\nrefentry. I then wrote the patch v8-0003 using that strategy, which\nafter more rebasing became the v10-0001 you see today.\n\nI did prefer the view-per-page results (although I also only use HTML\ndocs). But my worry is that there seem still to be a few unknowns\nabout how this might affect other (not the HTML) renderings of the\ndocs. 
If you think that risk is too great, or if you feel this patch\nwill cause unwarranted link/bookmark grief, then I am happy to just\ndrop it.\n\n------\n[1] DJ overview -\nhttps://www.postgresql.org/message-id/CAKFQuwaVm%3D6d_sw9Wrp4cdSm5_k%3D8ZVx0--v2v4BH4KnJtqXqg%40mail.gmail.com\n[2] DJ suggested view-per-page -\nhttps://www.postgresql.org/message-id/CAKFQuwa9JtoCBVc6CJb7NC5FqMeEAy_A8X4H8t6kVaw7fz9LTw%40mail.gmail.com\n[3] PS don't know how to do it -\nhttps://www.postgresql.org/message-id/CAHut%2BPv5Efz1TLWOLSoFvoyC0mq%2Bs92yFSd534ctWSdjEFtKCw%40mail.gmail.com\n[4] DJ how to do it using refentry -\nhttps://www.postgresql.org/message-id/CAKFQuwYkM5UZT%2B6tG%2BNgZvDcd5VavS%2BxNHsGsWC8jS-KJsxh7w%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 19 Jan 2023 10:45:35 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On 19.01.23 00:45, Peter Smith wrote:\n> The original $SUBJECT requirements evolved to also try to make each\n> view appear on a separate page after that was suggested by DavidJ [2].\n> I was unable to achieve per-page views \"without radically changing the\n> document structure.\" [3], but DavidJ found a way [4] to do it using\n> refentry. I then wrote the patch v8-0003 using that strategy, which\n> after more rebasing became the v10-0001 you see today.\n> \n> I did prefer the view-per-page results (although I also only use HTML\n> docs). But my worry is that there seem still to be a few unknowns\n> about how this might affect other (not the HTML) renderings of the\n> docs. If you think that risk is too great, or if you feel this patch\n> will cause unwarranted link/bookmark grief, then I am happy to just\n> drop it.\n\nI'm wary of making semantic markup changes to achieve ad-hoc \npresentation effects. Sometimes it's necessary, but it should be \nconsidered carefully and globally.\n\nWe could change the chunking boundary to be sect2 globally. This is \neasily configurable (chunk.section.depth).\n\nThinking about it now, maybe this is what we need. As the documentation \ngrows, as it clearly does, the depth of the structure increases and \npages get longer. This can also be seen in other chapters.\n\nOf course, this would need to be tested and checked in more detail.\n\n\n\n",
"msg_date": "Fri, 27 Jan 2023 12:30:00 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 10:30 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 19.01.23 00:45, Peter Smith wrote:\n> > The original $SUBJECT requirements evolved to also try to make each\n> > view appear on a separate page after that was suggested by DavidJ [2].\n> > I was unable to achieve per-page views \"without radically changing the\n> > document structure.\" [3], but DavidJ found a way [4] to do it using\n> > refentry. I then wrote the patch v8-0003 using that strategy, which\n> > after more rebasing became the v10-0001 you see today.\n> >\n> > I did prefer the view-per-page results (although I also only use HTML\n> > docs). But my worry is that there seem still to be a few unknowns\n> > about how this might affect other (not the HTML) renderings of the\n> > docs. If you think that risk is too great, or if you feel this patch\n> > will cause unwarranted link/bookmark grief, then I am happy to just\n> > drop it.\n>\n> I'm wary of making semantic markup changes to achieve an ad-hoc\n> presentation effects. Sometimes it's necessary, but it should be\n> considered carefully and globally.\n>\n> We could change the chunking boundary to be sect2 globally. This is\n> easily configurable (chunk.section.depth).\n>\n> Thinking about it now, maybe this is what we need. As the documentation\n> grows, as it clearly does, the depth of the structure increases and\n> pages get longer. This can also be seen in other chapters.\n>\n> Of course, this would need to be tested and checked in more detail.\n>\n\nThis chunk configuration idea sounds like a better approach. 
If somebody\nelse wants to champion that change separately then I can maybe help to\nreview it.\n\nMeanwhile, this pagination topic has strayed far away from the\noriginal $SUBJECT, so I guess since there is nothing else pending this\nthread's CF entry [1] can just be marked as \"Committed\" now?\n\n------\n[1] https://commitfest.postgresql.org/41/3904/\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 30 Jan 2023 17:12:33 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On 30.01.23 07:12, Peter Smith wrote:\n> Meanwhile, this pagination topic has strayed far away from the\n> original $SUBJECT, so I guess since there is nothing else pending this\n> thread's CF entry [1] can just be marked as \"Committed\" now?\n\ndone\n\n\n\n",
"msg_date": "Mon, 30 Jan 2023 11:42:12 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
},
{
"msg_contents": "On 2023-Jan-30, Peter Smith wrote:\n\n> On Fri, Jan 27, 2023 at 10:30 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n\n> > We could change the chunking boundary to be sect2 globally. This is\n> > easily configurable (chunk.section.depth).\n\n> > Thinking about it now, maybe this is what we need. As the documentation\n> > grows, as it clearly does, the depth of the structure increases and\n> > pages get longer. This can also be seen in other chapters.\n\n> This chunk configuration idea sounds a better approach. If somebody\n> else wants to champion that change separately then I can maybe help to\n> review it.\n\nChanging the chunking depth will change every single doc URL, though, so\nthe website will need some work to ensure there's a good transition\nmechanism for the \"this page in older/newer versions\" functionality.\n\nIt sounds doable, but someone will need to craft it and test it. (Maybe\nit would work to populate a table with all URLs at each side of the\ndivide, and its equivalent at the other side.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 1 Feb 2023 10:26:34 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] Stats views and functions not in order?"
}
] |
[
{
"msg_contents": "Hi, hackers\nI added some SQL statements to improve test coverage.\nAs data was inserted, the expected file changed.\nSo should I change all the `select *` queries to get a stable expected result?\n\nAnd here is the coverage change with my additions:\n50.6% -> 78.7%\n---\nregards,\nLee Dong Wook",
"msg_date": "Tue, 2 Aug 2022 10:36:33 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "pgstattuple: add test for coverage"
},
{
"msg_contents": "Dong Wook Lee <sh95119@gmail.com> writes:\n> Hi, hackers\n> I added some SQL statements to improve test coverage.\n\nI do not think it's a great idea to create random dependencies\nbetween modules like the pgstattuple -> bloom dependency you\ncasually added here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Aug 2022 01:47:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgstattuple: add test for coverage"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> I do not think it's a great idea to create random dependencies\n> between modules like the pgstattuple -> bloom dependency you\n> casually added here.\nI agree with your opinion.\n\nIs there a problem with selecting all the columns in the SELECT statements?\nI thought there might be a problem where the test results could change easily.\n\n---\nregards\nLee Dong Wook.",
"msg_date": "Wed, 3 Aug 2022 11:19:59 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pgstattuple: add test for coverage"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-03 11:19:59 +0900, Dong Wook Lee wrote:\n> Is there no problem with selecting all the columns during SELECT statements?\n> I thought there might be a problem where the test results could change easily.\n\nWhich indeed is the case, e.g. on 32bit systems it fails:\n\nhttps://cirrus-ci.com/task/4619535222308864?logs=test_world_32#L253\n\nhttps://api.cirrus-ci.com/v1/artifact/task/4619535222308864/testrun/build-32/testrun/pgstattuple/regress/regression.diffs\n\n table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent\n -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n- 1171456 | 5000 | 560000 | 47.8 | 5000 | 560000 | 47.8 | 7452 | 0.64\n+ 1138688 | 5000 | 540000 | 47.42 | 5000 | 540000 | 47.42 | 14796 | 1.3\n (1 row)\n\n...\n\n\nYou definitely can't rely on such details not to change across platforms.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Oct 2022 00:14:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgstattuple: add test for coverage"
},
{
"msg_contents": "> Which indeed is the case, e.g. on 32bit systems it fails:\n>\n> https://cirrus-ci.com/task/4619535222308864?logs=test_world_32#L253\n>\n> https://api.cirrus-ci.com/v1/artifact/task/4619535222308864/testrun/build-32/testrun/pgstattuple/regress/regression.diffs\n>\n> table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent\n> -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n> - 1171456 | 5000 | 560000 | 47.8 | 5000 | 560000 | 47.8 | 7452 | 0.64\n> + 1138688 | 5000 | 540000 | 47.42 | 5000 | 540000 | 47.42 | 14796 | 1.3\n> (1 row)\n>\n> ...\n>\n>\n> You definitely can't rely on such details not to change across platforms.\n\n\nThank you for letting me know. I'll fix it and check if there's any problem.\n\n\n",
"msg_date": "Mon, 3 Oct 2022 00:42:27 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pgstattuple: add test for coverage"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-03 00:42:27 +0900, Dong Wook Lee wrote:\n> > Which indeed is the case, e.g. on 32bit systems it fails:\n> >\n> > https://cirrus-ci.com/task/4619535222308864?logs=test_world_32#L253\n> >\n> > https://api.cirrus-ci.com/v1/artifact/task/4619535222308864/testrun/build-32/testrun/pgstattuple/regress/regression.diffs\n> >\n> > table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent\n> > -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n> > - 1171456 | 5000 | 560000 | 47.8 | 5000 | 560000 | 47.8 | 7452 | 0.64\n> > + 1138688 | 5000 | 540000 | 47.42 | 5000 | 540000 | 47.42 | 14796 | 1.3\n> > (1 row)\n> >\n> > ...\n> >\n> >\n> > You definitely can't rely on such details not to change across platforms.\n\n> Thank you for letting me know I'll fix it and check if there's any problem.\n\nI've marked the patch as returned with feedback for now. Please change that\nonce you submit an updated version.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 22 Nov 2022 14:28:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgstattuple: add test for coverage"
}
] |
[
{
"msg_contents": "Hi,\n\nFor long strings, iterate_word_similarity() can run into long-running\ntight-loops without honouring interrupts or statement_timeouts. For\nexample:\n\npostgres=# set statement_timeout='1s';\nSET\npostgres=# select 1 where repeat('1.1',80000) %>> 'Lorem ipsum dolor sit amet';\n?column?\n----------\n(0 rows)\nTime: 29615.842 ms (00:29.616)\n\nThe associated perf report:\n\n+ 99.98% 0.00% postgres postgres [.] ExecQual\n+ 99.98% 0.00% postgres postgres [.] ExecEvalExprSwitchContext\n+ 99.98% 0.00% postgres pg_trgm.so [.] strict_word_similarity_commutator_op\n+ 99.98% 0.00% postgres pg_trgm.so [.] calc_word_similarity\n+ 99.68% 99.47% postgres pg_trgm.so [.] iterate_word_similarity\n0.21% 0.03% postgres postgres [.] pg_qsort\n0.16% 0.00% postgres [kernel.kallsyms] [k] asm_sysvec_apic_timer_interrupt\n0.16% 0.00% postgres [kernel.kallsyms] [k] sysvec_apic_timer_interrupt\n0.16% 0.11% postgres [kernel.kallsyms] [k] __softirqentry_text_start\n0.16% 0.00% postgres [kernel.kallsyms] [k] irq_exit_rcu\n\nAdding CHECK_FOR_INTERRUPTS() ensures that such queries respond to\nstatement_timeout & Ctrl-C signals. With the patch applied, the\nabove query will interrupt more quickly:\n\npostgres=# select 1 where repeat('1.1',80000) %>> 'Lorem ipsum dolor sit amet';\nERROR: canceling statement due to statement timeout\nTime: 1000.768 ms (00:01.001)\n\nPlease find the patch attached. The patch does not show any performance\nregressions when run against the above use-case. Thanks to SQLSmith\nfor indirectly leading me to this scenario.\n\n-\nRobins Tharakan\nAmazon Web Services",
"msg_date": "Tue, 2 Aug 2022 12:11:10 +0930",
"msg_from": "Robins Tharakan <tharakan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Missing CFI in iterate_word_similarity()"
},
{
"msg_contents": "> On 2 Aug 2022, at 04:41, Robins Tharakan <tharakan@gmail.com> wrote:\n\n> For long strings, iterate_word_similarity() can run into long-running\n> tight-loops without honouring interrupts or statement_timeouts.\n\n> Adding CHECK_FOR_INTERRUPTS() ensures that such queries respond to\n> statement_timeout & Ctrl-C signals. With the patch applied, the\n> above query will interrupt more quickly:\n\nMakes sense. While this might be a bit of a theoretical issue given the\nlengths required, the fix is still sane and any such query should honor\nstatement timeouts (especially in a trusted extension).\n\n> Please find the patch attached. The patch does not show any performance\n> regressions when run against the above use-case.\n\nI wasn't able to find one either.\n\n+ CHECK_FOR_INTERRUPTS();\n+\n /* Get index of next trigram */\n int trgindex = trg2indexes[i];\n\nPlacing code before declarations will generate a compiler warning, so the check\nmust go after trgindex is declared. I've fixed that in the attached to get the\ncfbot green. Marking this ready for committer in the meantime.\n\nLooking at this I also noticed that commit be8a7a68662 changed the check_only\nparam to instead use a flag value but didn't update all comments. 0002 fixes\nthat while in there.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Fri, 2 Sep 2022 14:26:50 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Missing CFI in iterate_word_similarity()"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Placing code before declarations will generate a compiler warning, so the check\n> must go after trgindex is declared. I've fixed that in the attached to get the\n> cfbot green. Marking this ready for committer in the meantime.\n\nI noticed the same thing, but sticking the CFI immediately after the\ndeclaration didn't read well either. I was considering moving it to\nthe bottom of the loop instead of that. A possible objection is that\nif there's ever a \"continue;\" in the loop, those iterations would bypass\nthe CFI; but we don't necessarily need a CFI every time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Sep 2022 08:57:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Missing CFI in iterate_word_similarity()"
},
{
"msg_contents": "> On 2 Sep 2022, at 14:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> Placing code before declarations will generate a compiler warning, so the check\n>> must go after trgindex is declared. I've fixed that in the attached to get the\n>> cfbot green. Marking this ready for committer in the meantime.\n> \n> I noticed the same thing, but sticking the CFI immediately after the\n> declaration didn't read well either. I was considering moving it to\n> the bottom of the loop instead of that. \n\nI was contemplating that too, but kept it at the top after seeing quite a few\nexamples of that in other contrib modules (like amcheck/verify_nbtree.c and\npg_visibility/pg_visibility.c). I don't have any strong feelings either way,\nI'm happy to move it last.\n\n> A possible objection is that\n> if there's ever a \"continue;\" in the loop, those iterations would bypass\n> the CFI; but we don't necessarily need a CFI every time.\n\nYeah, I don't think we need to worry about that. If an added continue;\nshortcuts the loop to the point where keeping the CFI last becomes a problem\nthen it's probably time to look at rewriting the loop.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 2 Sep 2022 15:06:34 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Missing CFI in iterate_word_similarity()"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 2 Sep 2022, at 14:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I noticed the same thing, but sticking the CFI immediately after the\n>> declaration didn't read well either. I was considering moving it to\n>> the bottom of the loop instead of that. \n\n> I was contemplating that too, but kept it at the top after seeing quite a few\n> examples of that in other contrib modules (like amcheck/verify_nbtree.c and\n> pg_visibility/pg_visibility.c). I don't have any strong feelings either way,\n> I'm happy to move it last.\n\nYou could keep it at the top, but then I'd be inclined to split up\nthe existing code:\n\n int trgindex;\n\n CHECK_FOR_INTERRUPTS();\n\n /* Get index of next trigram */\n trgindex = trg2indexes[i];\n\n /* Update last position of this trigram */\n ...\n\nWhat's annoying me about the one-liner fix is that it makes it\nlook like CFI is part of the \"Get index\" action.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Sep 2022 09:16:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Missing CFI in iterate_word_similarity()"
},
{
"msg_contents": "> On 2 Sep 2022, at 15:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> What's annoying me about the one-liner fix is that it makes it\n> look like CFI is part of the \"Get index\" action.\n\nThats a good point, I'll split the code up to make it clearer.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 2 Sep 2022 15:22:06 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Missing CFI in iterate_word_similarity()"
},
{
"msg_contents": "> On 2 Sep 2022, at 15:22, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 2 Sep 2022, at 15:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n>> What's annoying me about the one-liner fix is that it makes it\n>> look like CFI is part of the \"Get index\" action.\n> \n> Thats a good point, I'll split the code up to make it clearer.\n\nDone that way and pushed, thanks!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 5 Sep 2022 11:23:47 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Missing CFI in iterate_word_similarity()"
}
] |
[
{
"msg_contents": "I noticed that COPY TO accepts FREEZE option but it is pointless.\n\nDon't we reject that option as the first-attached does? I tempted to\nadd tests for those option combinations that are to be rejected but I\ndidin't come up with a clean way to do that.\n\n\nBy the way, most of the invalid option combinations for COPY are\nmarked as ERRCODE_FEATURE_NOT_SUPPORTED. I looks to me saying that\n\"that feature is theoretically possible or actually realized\nelsewhere, but impossible now or here\".\n\nIf it is correct, aren't they better be ERRCODE_INVALID_PARAMETER_VALUE? The code is being used for similar messages \"unrecognized parameter <name>\" and \"parameter <name> specified more than once\" (or some others?). At least a quote string longer than a single character seems like to fit INVALID_PARAMETER_VALUE. (I believe we don't mean to support multicharacter (or even multibyte) escape/quote character anddelimiter). That being said, I'm not sure if the change will be worth the trouble.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 02 Aug 2022 13:30:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "COPY TO (FREEZE)?"
},
{
"msg_contents": "Hi,\n\nOn Tue, Aug 02, 2022 at 01:30:46PM +0900, Kyotaro Horiguchi wrote:\n> I noticed that COPY TO accepts FREEZE option but it is pointless.\n>\n> Don't we reject that option as the first-attached does?\n\nI agree that we should reject it, +1 for the patch.\n\n> By the way, most of the invalid option combinations for COPY are\n> marked as ERRCODE_FEATURE_NOT_SUPPORTED. I looks to me saying that\n> \"that feature is theoretically possible or actually realized\n> elsewhere, but impossible now or here\".\n>\n> If it is correct, aren't they better be ERRCODE_INVALID_PARAMETER_VALUE? The\n> code is being used for similar messages \"unrecognized parameter <name>\" and\n> \"parameter <name> specified more than once\" (or some others?). At least a\n> quote string longer than a single character seems like to fit\n> INVALID_PARAMETER_VALUE. (I believe we don't mean to support multicharacter\n> (or even multibyte) escape/quote character anddelimiter). That being said,\n> I'm not sure if the change will be worth the trouble.\n\nI also feel weird about it. I raised the same point recently about COPY FROM +\nHEADER MATCH (1), and at that time there wasn't a real consensus on the way to\ngo, just keep the things consistent. I'm +0.5 on that patch for the same\nreason as back then. My only concern is that it can in theory break things if\nyou rely on the current sqlstate, but given the errors I don't think it's\nreally a problem.\n\n[1]: https://www.postgresql.org/message-id/flat/20220614091319.jk4he5migtpwyd7r%40jrouhaud#b18bf3705fb9f69d0112b6febf0fa1be\n\n\n",
"msg_date": "Tue, 2 Aug 2022 14:17:46 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "At Tue, 2 Aug 2022 14:17:46 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> Hi,\n> \n> On Tue, Aug 02, 2022 at 01:30:46PM +0900, Kyotaro Horiguchi wrote:\n> > I noticed that COPY TO accepts FREEZE option but it is pointless.\n> >\n> > Don't we reject that option as the first-attached does?\n> \n> I agree that we should reject it, +1 for the patch.\n\nThanks for looking it!\n\n> > By the way, most of the invalid option combinations for COPY are\n> > marked as ERRCODE_FEATURE_NOT_SUPPORTED. I looks to me saying that\n> > \"that feature is theoretically possible or actually realized\n> > elsewhere, but impossible now or here\".\n> >\n> > If it is correct, aren't they better be ERRCODE_INVALID_PARAMETER_VALUE? The\n> > code is being used for similar messages \"unrecognized parameter <name>\" and\n> > \"parameter <name> specified more than once\" (or some others?). At least a\n> > quote string longer than a single character seems like to fit\n> > INVALID_PARAMETER_VALUE. (I believe we don't mean to support multicharacter\n> > (or even multibyte) escape/quote character anddelimiter). That being said,\n> > I'm not sure if the change will be worth the trouble.\n> \n> I also feel weird about it. I raised the same point recently about COPY FROM +\n> HEADER MATCH (1), and at that time there wasn't a real consensus on the way to\n> go, just keep the things consistent. I'm +0.5 on that patch for the same\n> reason as back then. My only concern is that it can in theory break things if\n> you rely on the current sqlstate, but given the errors I don't think it's\n> really a problem.\n\nExactly. That is the exact reason for my to say \"I'm not sure if..\". \n\n> [1]: https://www.postgresql.org/message-id/flat/20220614091319.jk4he5migtpwyd7r%40jrouhaud#b18bf3705fb9f69d0112b6febf0fa1be\n\n> Maybe that's just me but I understand \"not supported\" as \"this makes\n> sense, but this is currently a limitation that might be lifted\n> later\".\n\nFWIW I understand it the same way.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 02 Aug 2022 17:17:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "Regards,\nZhang Mingli\nOn Aug 2, 2022, 12:30 +0800, Kyotaro Horiguchi <horikyota.ntt@gmail.com>, wrote:\n> I noticed that COPY TO accepts FREEZE option but it is pointless.\n>\n> Don't we reject that option as the first-attached does? I\n+1, should be rejected like other invalid options.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n\n\n\n\n\n\nRegards,\nZhang Mingli\n\n\n\nOn Aug 2, 2022, 12:30 +0800, Kyotaro Horiguchi <horikyota.ntt@gmail.com>, wrote:\nI noticed that COPY TO accepts FREEZE option but it is pointless.\n\nDon't we reject that option as the first-attached does? I \n+1, should be rejected like other invalid options.\n\nregards.\n\n--\nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 2 Aug 2022 16:20:29 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 05:17:35PM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 2 Aug 2022 14:17:46 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> > Hi,\n> > \n> > On Tue, Aug 02, 2022 at 01:30:46PM +0900, Kyotaro Horiguchi wrote:\n> > > I noticed that COPY TO accepts FREEZE option but it is pointless.\n> > >\n> > > Don't we reject that option as the first-attached does?\n> > \n> > I agree that we should reject it, +1 for the patch.\n> \n> Thanks for looking it!\n> \n> > > By the way, most of the invalid option combinations for COPY are\n> > > marked as ERRCODE_FEATURE_NOT_SUPPORTED. I looks to me saying that\n> > > \"that feature is theoretically possible or actually realized\n> > > elsewhere, but impossible now or here\".\n> > >\n> > > If it is correct, aren't they better be ERRCODE_INVALID_PARAMETER_VALUE? The\n> > > code is being used for similar messages \"unrecognized parameter <name>\" and\n> > > \"parameter <name> specified more than once\" (or some others?). At least a\n> > > quote string longer than a single character seems like to fit\n> > > INVALID_PARAMETER_VALUE. (I believe we don't mean to support multicharacter\n> > > (or even multibyte) escape/quote character anddelimiter). That being said,\n> > > I'm not sure if the change will be worth the trouble.\n> > \n> > I also feel weird about it. I raised the same point recently about COPY FROM +\n> > HEADER MATCH (1), and at that time there wasn't a real consensus on the way to\n> > go, just keep the things consistent. I'm +0.5 on that patch for the same\n> > reason as back then. My only concern is that it can in theory break things if\n> > you rely on the current sqlstate, but given the errors I don't think it's\n> > really a problem.\n> \n> Exactly. That is the exact reason for my to say \"I'm not sure if..\". \n> \n> > [1]: https://www.postgresql.org/message-id/flat/20220614091319.jk4he5migtpwyd7r%40jrouhaud#b18bf3705fb9f69d0112b6febf0fa1be\n> \n> > Maybe that's just me but I understand \"not supported\" as \"this makes\n> > sense, but this is currently a limitation that might be lifted\n> > later\".\n> \n> FWIW I understand it the same way.\n\nI would like to apply the attached patch to master. Looking at your\nadjustments for ERRCODE_FEATURE_NOT_SUPPORTED to\nERRCODE_INVALID_PARAMETER_VALUE, I only changed the cases where it would\nbe illogical to implement the feature, not just that we have no\nintention of implementing the feature. I read \"invalid\" as \"illogical\".\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Sat, 28 Oct 2023 20:38:26 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "On Sat, Oct 28, 2023 at 08:38:26PM -0400, Bruce Momjian wrote:\n> I would like to apply the attached patch to master. Looking at your\n> adjustments for ERRCODE_FEATURE_NOT_SUPPORTED to\n> ERRCODE_INVALID_PARAMETER_VALUE, I only changed the cases where it would\n> be illogical to implement the feature, not just that we have no\n> intention of implementing the feature. I read \"invalid\" as \"illogical\".\n\nMy apologies, wrong patch attached, right one attached now.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Sat, 28 Oct 2023 20:41:14 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> My apologies, wrong patch attached, right one attached now.\n\nI think this one is fine as-is:\n\n \t/* Only single-byte delimiter strings are supported. */\n \tif (strlen(opts_out->delim) != 1)\n \t\tereport(ERROR,\n-\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n \t\t\t\t errmsg(\"COPY delimiter must be a single one-byte character\")));\n \nWhile we have good implementation reasons for this restriction,\nthere's nothing illogical about wanting the delimiter to be more\ngeneral. It's particularly silly, from an end-user's standpoint,\nthat for example 'é' is an allowed delimiter in LATIN1 encoding\nbut not when the server is using UTF8. So I don't see how the\ndistinction you presented justifies this change.\n\n+\tif (opts_out->freeze && !is_from)\n+\t\tereport(ERROR,\n+\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+\t\t\t\t errmsg(\"COPY freeze only available using COPY FROM\")));\n\nNot thrilled by the wording here. I don't like the fact that the\nkeyword FREEZE isn't capitalized, and I think you omitted too many\nwords for intelligibility to be preserved. Notably, all the adjacent\nexamples use \"must\" or \"must not\", and this decides that that can be\nomitted.\n\nI realize that you probably modeled the non-capitalization on nearby\nmessages like \"COPY delimiter\", but there's a difference IMO:\n\"delimiter\" can be read as an English noun, but it's hard to read\n\"freeze\" as a noun.\n\nHow about, say,\n\n\terrmsg(\"COPY FREEZE must not be used in COPY TO\")));\n\nor perhaps that's redundant and we could write\n\n\terrmsg(\"FREEZE option must not be used in COPY TO\")));\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 Oct 2023 21:39:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "On Sat, Oct 28, 2023 at 09:39:53PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > My apologies, wrong patch attached, right one attached now.\n> \n> I think this one is fine as-is:\n> \n> \t/* Only single-byte delimiter strings are supported. */\n> \tif (strlen(opts_out->delim) != 1)\n> \t\tereport(ERROR,\n> -\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> \t\t\t\t errmsg(\"COPY delimiter must be a single one-byte character\")));\n> \n> While we have good implementation reasons for this restriction,\n> there's nothing illogical about wanting the delimiter to be more\n> general. It's particularly silly, from an end-user's standpoint,\n> that for example 'é' is an allowed delimiter in LATIN1 encoding\n> but not when the server is using UTF8. So I don't see how the\n> distinction you presented justifies this change.\n\nAgreed, my mistake.\n \n> +\tif (opts_out->freeze && !is_from)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t errmsg(\"COPY freeze only available using COPY FROM\")));\n> \n> Not thrilled by the wording here. I don't like the fact that the\n> keyword FREEZE isn't capitalized, and I think you omitted too many\n> words for intelligibility to be preserved. Notably, all the adjacent\n> examples use \"must\" or \"must not\", and this decides that that can be\n> omitted.\n\nI think it is modeled after:\n\n\terrmsg(\"COPY force null only available using COPY FROM\")));\n\n> I realize that you probably modeled the non-capitalization on nearby\n> messages like \"COPY delimiter\", but there's a difference IMO:\n> \"delimiter\" can be read as an English noun, but it's hard to read\n> \"freeze\" as a noun.\n> \n> How about, say,\n> \n> \terrmsg(\"COPY FREEZE must not be used in COPY TO\")));\n> \n> or perhaps that's redundant and we could write\n> \n> \terrmsg(\"FREEZE option must not be used in COPY TO\")));\n\nI now have:\n\n\terrmsg(\"COPY FREEZE mode only available using COPY FROM\")));\n\nUpdated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Sat, 28 Oct 2023 21:47:03 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Sat, Oct 28, 2023 at 09:39:53PM -0400, Tom Lane wrote:\n>> Not thrilled by the wording here.\n\n> I think it is modeled after:\n\n> \terrmsg(\"COPY force null only available using COPY FROM\")));\n\nWell, now that you bring it up, that's no sterling example of\nclear writing either. Maybe change that while we're at it,\nsay to \"FORCE NULL option must not be used in COPY TO\"?\n(Also, has it got the right ERRCODE?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 Oct 2023 21:54:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "On Sat, Oct 28, 2023 at 09:54:05PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Oct 28, 2023 at 09:39:53PM -0400, Tom Lane wrote:\n> >> Not thrilled by the wording here.\n> \n> > I think it is modeled after:\n> \n> > \terrmsg(\"COPY force null only available using COPY FROM\")));\n> \n> Well, now that you bring it up, that's no sterling example of\n> clear writing either. Maybe change that while we're at it,\n> say to \"FORCE NULL option must not be used in COPY TO\"?\n\nI used:\n\n\t\"COPY FREEZE mode cannot be used with COPY FROM\"\n\nand adjusted the others.\n\n> (Also, has it got the right ERRCODE?)\n\nFixed, and the other cases too. Patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Sat, 28 Oct 2023 22:03:59 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us>于2023年10月29日 周日10:04写道:\n\n> On Sat, Oct 28, 2023 at 09:54:05PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > On Sat, Oct 28, 2023 at 09:39:53PM -0400, Tom Lane wrote:\n> > >> Not thrilled by the wording here.\n> >\n> > > I think it is modeled after:\n> >\n> > > errmsg(\"COPY force null only available using COPY FROM\")));\n> >\n> > Well, now that you bring it up, that's no sterling example of\n> > clear writing either. Maybe change that while we're at it,\n> > say to \"FORCE NULL option must not be used in COPY TO\"?\n>\n> I used:\n>\n> \"COPY FREEZE mode cannot be used with COPY FROM\"\n>\n> and adjusted the others.\n>\n> > (Also, has it got the right ERRCODE?)\n>\n> Fixed, and the other cases too. Patch attached.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Only you can decide what is important to you.\n\n\n errmsg(\"COPY force not null only available using COPY FROM\")));\n>\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>\n> + errmsg(\"COPY force not null cannot be used with COPY FROM\")));\n>\n>\ncannot -> can ?\n\n>\n\nBruce Momjian <bruce@momjian.us>于2023年10月29日 周日10:04写道:On Sat, Oct 28, 2023 at 09:54:05PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Oct 28, 2023 at 09:39:53PM -0400, Tom Lane wrote:\n> >> Not thrilled by the wording here.\n> \n> > I think it is modeled after:\n> \n> > errmsg(\"COPY force null only available using COPY FROM\")));\n> \n> Well, now that you bring it up, that's no sterling example of\n> clear writing either. Maybe change that while we're at it,\n> say to \"FORCE NULL option must not be used in COPY TO\"?\n\nI used:\n\n \"COPY FREEZE mode cannot be used with COPY FROM\"\n\nand adjusted the others.\n\n> (Also, has it got the right ERRCODE?)\n\nFixed, and the other cases too. Patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you. errmsg(\"COPY force not null only available using COPY FROM\")));+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),+ errmsg(\"COPY force not null cannot be used with COPY FROM\")));cannot -> can ?",
"msg_date": "Sun, 29 Oct 2023 14:35:39 +0800",
"msg_from": "Mingli Zhang <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "Mingli Zhang <zmlpostgres@gmail.com>于2023年10月29日 周日14:35写道:\n\n>\n>\n> Bruce Momjian <bruce@momjian.us>于2023年10月29日 周日10:04写道:\n>\n>> On Sat, Oct 28, 2023 at 09:54:05PM -0400, Tom Lane wrote:\n>> > Bruce Momjian <bruce@momjian.us> writes:\n>> > > On Sat, Oct 28, 2023 at 09:39:53PM -0400, Tom Lane wrote:\n>> > >> Not thrilled by the wording here.\n>> >\n>> > > I think it is modeled after:\n>> >\n>> > > errmsg(\"COPY force null only available using COPY FROM\")));\n>> >\n>> > Well, now that you bring it up, that's no sterling example of\n>> > clear writing either. Maybe change that while we're at it,\n>> > say to \"FORCE NULL option must not be used in COPY TO\"?\n>>\n>> I used:\n>>\n>> \"COPY FREEZE mode cannot be used with COPY FROM\"\n>>\n>> and adjusted the others.\n>>\n>> > (Also, has it got the right ERRCODE?)\n>>\n>> Fixed, and the other cases too. Patch attached.\n>>\n>> --\n>> Bruce Momjian <bruce@momjian.us> https://momjian.us\n>> EDB https://enterprisedb.com\n>>\n>> Only you can decide what is important to you.\n>\n>\n> errmsg(\"COPY force not null only available using COPY FROM\")));\n>>\n>> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>>\n>> + errmsg(\"COPY force not null cannot be used with COPY FROM\")));\n>>\n>>\n> cannot -> can ?\n>\n\nI guess you want to write “cannot be used with COPY TO”\n\n>\n\nMingli Zhang <zmlpostgres@gmail.com>于2023年10月29日 周日14:35写道:Bruce Momjian <bruce@momjian.us>于2023年10月29日 周日10:04写道:On Sat, Oct 28, 2023 at 09:54:05PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Oct 28, 2023 at 09:39:53PM -0400, Tom Lane wrote:\n> >> Not thrilled by the wording here.\n> \n> > I think it is modeled after:\n> \n> > errmsg(\"COPY force null only available using COPY FROM\")));\n> \n> Well, now that you bring it up, that's no sterling example of\n> clear writing either. Maybe change that while we're at it,\n> say to \"FORCE NULL option must not be used in COPY TO\"?\n\nI used:\n\n \"COPY FREEZE mode cannot be used with COPY FROM\"\n\nand adjusted the others.\n\n> (Also, has it got the right ERRCODE?)\n\nFixed, and the other cases too. Patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you. errmsg(\"COPY force not null only available using COPY FROM\")));+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),+ errmsg(\"COPY force not null cannot be used with COPY FROM\")));cannot -> can ?I guess you want to write “cannot be used with COPY TO”",
"msg_date": "Sun, 29 Oct 2023 14:50:37 +0800",
"msg_from": "Mingli Zhang <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "On Sun, Oct 29, 2023 at 02:50:37PM +0800, Mingli Zhang wrote:\n> I guess you want to write “cannot be used with COPY TO”\n\nYou are 100% correct. Updated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Sun, 29 Oct 2023 15:35:02 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us>于2023年10月30日 周一03:35写道:\n\n> On Sun, Oct 29, 2023 at 02:50:37PM +0800, Mingli Zhang wrote:\n> > I guess you want to write “cannot be used with COPY TO”\n>\n> You are 100% correct. Updated patch attached.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Only you can decide what is important to you.\n\n\n\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>\n> + errmsg(\"COPY FREEZE mode cannot be used with COPY FROM\")));\n>\n> +\n>\n>\nCOPY FROM-> COPY TO\n\n>\n\nBruce Momjian <bruce@momjian.us>于2023年10月30日 周一03:35写道:On Sun, Oct 29, 2023 at 02:50:37PM +0800, Mingli Zhang wrote:\n> I guess you want to write “cannot be used with COPY TO”\n\nYou are 100% correct. Updated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.(errcode(ERRCODE_INVALID_PARAMETER_VALUE),+ errmsg(\"COPY FREEZE mode cannot be used with COPY FROM\")));+COPY FROM-> COPY TO",
"msg_date": "Mon, 30 Oct 2023 05:07:48 +0800",
"msg_from": "Mingli Zhang <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 05:07:48AM +0800, Mingli Zhang wrote:\n> \n> Bruce Momjian <bruce@momjian.us>于2023年10月30日周一03:35写道:\n> \n> On Sun, Oct 29, 2023 at 02:50:37PM +0800, Mingli Zhang wrote:\n> > I guess you want to write “cannot be used with COPY TO”\n> \n> You are 100% correct. Updated patch attached.\n> \n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n> \n> Only you can decide what is important to you.\n> \n> \n> \n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> \n> + errmsg(\"COPY FREEZE mode cannot be used with COPY FROM\")));\n> \n> +\n> \n> \n> COPY FROM-> COPY TO\n\nAgreed, patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Sun, 29 Oct 2023 22:58:02 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "HI,\n\n\nZhang Mingli\nwww.hashdata.xyz\nOn Oct 30, 2023 at 10:58 +0800, Bruce Momjian <bruce@momjian.us>, wrote:\n> On Mon, Oct 30, 2023 at 05:07:48AM +0800, Mingli Zhang wrote:\n> >\n> > Bruce Momjian <bruce@momjian.us>于2023年10月30日周一03:35写道:\n> >\n> > On Sun, Oct 29, 2023 at 02:50:37PM +0800, Mingli Zhang wrote:\n> > > I guess you want to write “cannot be used with COPY TO”\n> >\n> > You are 100% correct. Updated patch attached.\n> >\n> > --\n> > Bruce Momjian <bruce@momjian.us> https://momjian.us\n> > EDB https://enterprisedb.com\n> >\n> > Only you can decide what is important to you.\n> >\n> >\n> >\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> >\n> > + errmsg(\"COPY FREEZE mode cannot be used with COPY FROM\")));\n> >\n> > +\n> >\n> >\n> > COPY FROM-> COPY TO\n>\n> Agreed, patch attached.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Only you can decide what is important to you.\n\nLGTM.\n\n\n\n\n\n\n\nHI,\n\n\n\n\nZhang Mingli\nwww.hashdata.xyz\n\n\n\nOn Oct 30, 2023 at 10:58 +0800, Bruce Momjian <bruce@momjian.us>, wrote:\nOn Mon, Oct 30, 2023 at 05:07:48AM +0800, Mingli Zhang wrote:\n\nBruce Momjian <bruce@momjian.us>于2023年10月30日周一03:35写道:\n\nOn Sun, Oct 29, 2023 at 02:50:37PM +0800, Mingli Zhang wrote:\nI guess you want to write “cannot be used with COPY TO”\n\nYou are 100% correct. Updated patch attached.\n\n--\n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n\n+ errmsg(\"COPY FREEZE mode cannot be used with COPY FROM\")));\n\n+\n\n\nCOPY FROM-> COPY TO\n\nAgreed, patch attached.\n\n--\nBruce Momjian <bruce@momjian.us> https://momjian.us\nEDB https://enterprisedb.com\n\nOnly you can decide what is important to you.\n\nLGTM.",
"msg_date": "Mon, 30 Oct 2023 11:22:14 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "At Sun, 29 Oct 2023 15:35:02 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> You are 100% correct. Updated patch attached.\n\n-\t\t\t\t errmsg(\"COPY force not null only available using COPY FROM\")));\n+\t\t\t\t errmsg(\"COPY force not null cannot be used with COPY TO\")));\n\nI find the term \"force not null\" hard to translate, especially into\nJapaese, as its literal translation doesn't align with the entire\nmessage. The most recent translation for it is the literal rendition\nof \"FORCE_NOT_NULL option of COPY can only be used with COPY FROM\".\n\nIn short, for translation convenience, I would prefer if \"force not\nnull\" were \"FORCE_NOT_NULL\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 30 Oct 2023 15:16:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 03:16:58PM +0900, Kyotaro Horiguchi wrote:\n> At Sun, 29 Oct 2023 15:35:02 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > You are 100% correct. Updated patch attached.\n> \n> -\t\t\t\t errmsg(\"COPY force not null only available using COPY FROM\")));\n> +\t\t\t\t errmsg(\"COPY force not null cannot be used with COPY TO\")));\n> \n> I find the term \"force not null\" hard to translate, especially into\n> Japaese, as its literal translation doesn't align with the entire\n> message. The most recent translation for it is the literal rendition\n> of \"FORCE_NOT_NULL option of COPY can only be used with COPY FROM\".\n> \n> In short, for translation convenience, I would prefer if \"force not\n> null\" were \"FORCE_NOT_NULL\".\n\nThat is a good point. I reviewed more of the messages and added\ncapitalization where appropriate, patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Mon, 30 Oct 2023 09:58:20 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> That is a good point. I reviewed more of the messages and added\n> capitalization where appropriate, patch attached.\n\nThis is starting to look pretty good. I have one more thought,\nas long as we're touching all these messages anyway: how about\ns/FOO available only in CSV mode/FOO requires CSV mode/ ?\nThat's both shorter and less telegraphic, as it's not omitting the verb.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Oct 2023 14:29:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 02:29:05PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > That is a good point. I reviewed more of the messages and added\n> > capitalization where appropriate, patch attached.\n> \n> This is starting to look pretty good. I have one more thought,\n> as long as we're touching all these messages anyway: how about\n> s/FOO available only in CSV mode/FOO requires CSV mode/ ?\n> That's both shorter and less telegraphic, as it's not omitting the verb.\n\nSure, updated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Mon, 30 Oct 2023 15:55:21 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "HI,\n\n\nZhang Mingli\nwww.hashdata.xyz\nOn Oct 31, 2023 at 03:55 +0800, Bruce Momjian <bruce@momjian.us>, wrote:\n>\n> Sure, updated patch attached.\n\nLGTM.\n\n\n\n\n\n\n\nHI,\n\n\n\n\nZhang Mingli\nwww.hashdata.xyz\n\n\n\nOn Oct 31, 2023 at 03:55 +0800, Bruce Momjian <bruce@momjian.us>, wrote:\n\nSure, updated patch attached.\n\nLGTM.",
"msg_date": "Tue, 31 Oct 2023 10:05:04 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 03:55:21PM -0400, Bruce Momjian wrote:\n> On Mon, Oct 30, 2023 at 02:29:05PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > That is a good point. I reviewed more of the messages and added\n> > > capitalization where appropriate, patch attached.\n> > \n> > This is starting to look pretty good. I have one more thought,\n> > as long as we're touching all these messages anyway: how about\n> > s/FOO available only in CSV mode/FOO requires CSV mode/ ?\n> > That's both shorter and less telegraphic, as it's not omitting the verb.\n> \n> Sure, updated patch attached.\n\nPatch applied to master.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 13 Nov 2023 12:53:27 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Patch applied to master.\n\nThe buildfarm is quite unhappy with you.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Nov 2023 13:17:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 01:17:32PM -0500, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Patch applied to master.\n> \n> The buildfarm is quite unhappy with you.\n\nWow, I never suspeced that, fixed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 13 Nov 2023 14:44:51 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO (FREEZE)?"
}
] |
[
{
"msg_contents": "Hi,\n\nereport_startup_progress infrastructure added by commit 9ce346e [1]\nwill be super-useful for reporting progress of any long-running server\noperations, not just the startup process operations. For instance,\npostmaster can use it for reporting progress of temp file and temp\nrelation file removals [2], checkpointer can use it for reporting\nprogress of snapshot or mapping file processing or even WAL file\nprocessing and so on. And I'm sure there can be many places in the\ncode where we have while or for loops which can, at times, take a long\ntime to finish and having a log message there would definitely help.\n\nHere's an attempt to generalize the ereport_startup_progress\ninfrastructure. The attached v1 patch places the code in elog.c/.h,\nrenames associated functions and variables, something like\nereport_startup_progress to ereport_progress,\nlog_startup_progress_interval to log_progress_report_interval and so\non.\n\nThoughts?\n\nThanks Robert for an offlist chat.\n\n[1]\ncommit 9ce346eabf350a130bba46be3f8c50ba28506969\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: Mon Oct 25 11:51:57 2021 -0400\n\n Report progress of startup operations that take a long time.\n\n[2] https://www.postgresql.org/message-id/CALj2ACWeUFhhnDJKm6R5YxCsF4K7aB2pmRMvqP0BVTxdyce3EA%40mail.gmail.com\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/",
"msg_date": "Tue, 2 Aug 2022 12:55:01 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Generalize ereport_startup_progress infrastructure"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 3:25 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> ereport_startup_progress infrastructure added by commit 9ce346e [1]\n> will be super-useful for reporting progress of any long-running server\n> operations, not just the startup process operations. For instance,\n> postmaster can use it for reporting progress of temp file and temp\n> relation file removals [2], checkpointer can use it for reporting\n> progress of snapshot or mapping file processing or even WAL file\n> processing and so on. And I'm sure there can be many places in the\n> code where we have while or for loops which can, at times, take a long\n> time to finish and having a log message there would definitely help.\n>\n> Here's an attempt to generalize the ereport_startup_progress\n> infrastructure. The attached v1 patch places the code in elog.c/.h,\n> renames associated functions and variables, something like\n> ereport_startup_progress to ereport_progress,\n> log_startup_progress_interval to log_progress_report_interval and so\n> on.\n\nI'm not averse to reusing this infrastructure in other places, but I\ndoubt we'd want all of those places to be controlled by a single GUC,\nespecially because that GUC is also the on/off switch for the feature.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Aug 2022 14:40:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Generalize ereport_startup_progress infrastructure"
},
{
"msg_contents": "Hi,\n\nOn 8/2/22 8:40 PM, Robert Haas wrote:\n> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>\n>\n>\n> On Tue, Aug 2, 2022 at 3:25 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> ereport_startup_progress infrastructure added by commit 9ce346e [1]\n>> will be super-useful for reporting progress of any long-running server\n>> operations, not just the startup process operations. For instance,\n>> postmaster can use it for reporting progress of temp file and temp\n>> relation file removals [2], checkpointer can use it for reporting\n>> progress of snapshot or mapping file processing or even WAL file\n>> processing and so on. And I'm sure there can be many places in the\n>> code where we have while or for loops which can, at times, take a long\n>> time to finish and having a log message there would definitely help.\n>>\n>> Here's an attempt to generalize the ereport_startup_progress\n>> infrastructure. The attached v1 patch places the code in elog.c/.h,\n>> renames associated functions and variables, something like\n>> ereport_startup_progress to ereport_progress,\n>> log_startup_progress_interval to log_progress_report_interval and so\n>> on.\n> I'm not averse to reusing this infrastructure in other places, but I\n> doubt we'd want all of those places to be controlled by a single GUC,\n> especially because that GUC is also the on/off switch for the feature.\n\n+1 on the idea to generalize this infrastructure in other places.\n\nI also doubt about having one single GUC to control all the places: What \nabout adding in the patch the calls to the new API where you think it \ncould be useful too? (and in the same time make use of dedicated GUC(s) \nwhere it makes sense?)\n\nRegards,\n\n-- \n\nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 3 Aug 2022 13:49:37 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Generalize ereport_startup_progress infrastructure"
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 12:11 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Aug 2, 2022 at 3:25 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > ereport_startup_progress infrastructure added by commit 9ce346e [1]\n> > will be super-useful for reporting progress of any long-running server\n> > operations, not just the startup process operations. For instance,\n> > postmaster can use it for reporting progress of temp file and temp\n> > relation file removals [2], checkpointer can use it for reporting\n> > progress of snapshot or mapping file processing or even WAL file\n> > processing and so on. And I'm sure there can be many places in the\n> > code where we have while or for loops which can, at times, take a long\n> > time to finish and having a log message there would definitely help.\n> >\n> > Here's an attempt to generalize the ereport_startup_progress\n> > infrastructure. The attached v1 patch places the code in elog.c/.h,\n> > renames associated functions and variables, something like\n> > ereport_startup_progress to ereport_progress,\n> > log_startup_progress_interval to log_progress_report_interval and so\n> > on.\n>\n> I'm not averse to reusing this infrastructure in other places, but I\n> doubt we'd want all of those places to be controlled by a single GUC,\n> especially because that GUC is also the on/off switch for the feature.\n\nThanks Robert! How about we tweak the function a bit -\nbegin_progress_report_phase(int timeout), so that each process can use\ntheir own timeout interval? In this case, do we want to retain\nlog_startup_progress_interval as-is specific to the startup process?\nIf yes, other processes might come up with their own GUCs (if they\ndon't want to use hard-coded timeouts) similar to\nlog_startup_progress_interval, which isn't the right way IMO.\n\nI think the notion of ereport_progress feature being disabled when the\ntimeout is 0, makes sense to me at least.\n\nOn the flip side, what if we just have a single GUC\nlog_progress_report_interval (as proposed in the v1 patch)? Do we ever\nwant different processes to emit progress report messages at different\nfrequencies? Well, I can think of the startup process during standby\nrecovery needing to emit recovery progress report messages at a much\nlower frequency than the startup process during the crash recovery.\nAgain, controlling the frequencies with different GUCs isn't the way\nforward. But we can do something like: process 1 emits messages with a\nfrequency of 2*log_progress_report_interval, process 2 with a\nfrequency 4*log_progress_report_interval and so on without needing\nadditional GUCs.\n\nThoughts?\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Thu, 4 Aug 2022 09:57:48 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Generalize ereport_startup_progress infrastructure"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 9:57 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Aug 3, 2022 at 12:11 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Tue, Aug 2, 2022 at 3:25 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > ereport_startup_progress infrastructure added by commit 9ce346e [1]\n> > > will be super-useful for reporting progress of any long-running server\n> > > operations, not just the startup process operations. For instance,\n> > > postmaster can use it for reporting progress of temp file and temp\n> > > relation file removals [2], checkpointer can use it for reporting\n> > > progress of snapshot or mapping file processing or even WAL file\n> > > processing and so on. And I'm sure there can be many places in the\n> > > code where we have while or for loops which can, at times, take a long\n> > > time to finish and having a log message there would definitely help.\n> > >\n> > > Here's an attempt to generalize the ereport_startup_progress\n> > > infrastructure. The attached v1 patch places the code in elog.c/.h,\n> > > renames associated functions and variables, something like\n> > > ereport_startup_progress to ereport_progress,\n> > > log_startup_progress_interval to log_progress_report_interval and so\n> > > on.\n> >\n> > I'm not averse to reusing this infrastructure in other places, but I\n> > doubt we'd want all of those places to be controlled by a single GUC,\n> > especially because that GUC is also the on/off switch for the feature.\n>\n> Thanks Robert! How about we tweak the function a bit -\n> begin_progress_report_phase(int timeout), so that each process can use\n> their own timeout interval? In this case, do we want to retain\n> log_startup_progress_interval as-is specific to the startup process?\n> If yes, other processes might come up with their own GUCs (if they\n> don't want to use hard-coded timeouts) similar to\n> log_startup_progress_interval, which isn't the right way IMO.\n>\n> I think the notion of ereport_progress feature being disabled when the\n> timeout is 0, makes sense to me at least.\n>\n> On the flip side, what if we just have a single GUC\n> log_progress_report_interval (as proposed in the v1 patch)? Do we ever\n> want different processes to emit progress report messages at different\n> frequencies? Well, I can think of the startup process during standby\n> recovery needing to emit recovery progress report messages at a much\n> lower frequency than the startup process during the crash recovery.\n> Again, controlling the frequencies with different GUCs isn't the way\n> forward. But we can do something like: process 1 emits messages with a\n> frequency of 2*log_progress_report_interval, process 2 with a\n> frequency 4*log_progress_report_interval and so on without needing\n> additional GUCs.\n>\n> Thoughts?\n\nHere's v2 patch, passing progress report interval as an input to\nbegin_progress_report_phase() so that the processes can use their own\nintervals(hard-coded or GUC) if they wish to not use the generic GUC\nlog_progress_report_interval.\n\nThoughts?\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/",
"msg_date": "Mon, 8 Aug 2022 09:59:09 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Generalize ereport_startup_progress infrastructure"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 12:29 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Here's v2 patch, passing progress report interval as an input to\n> begin_progress_report_phase() so that the processes can use their own\n> intervals(hard-coded or GUC) if they wish to not use the generic GUC\n> log_progress_report_interval.\n\nI don't think we should rename the GUC to be more generic. I like it\nthe way that it is.\n\nI also think you should extend this patch series with 1 or 2\nadditional patches showing where else you think we should be using\nthis infrastructure.\n\nIf no such places exist, this is pointless.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Aug 2022 08:35:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Generalize ereport_startup_progress infrastructure"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 6:05 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Aug 8, 2022 at 12:29 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Here's v2 patch, passing progress report interval as an input to\n> > begin_progress_report_phase() so that the processes can use their own\n> > intervals(hard-coded or GUC) if they wish to not use the generic GUC\n> > log_progress_report_interval.\n>\n> I don't think we should rename the GUC to be more generic. I like it\n> the way that it is.\n\nDone.\n\n> I also think you should extend this patch series with 1 or 2\n> additional patches showing where else you think we should be using\n> this infrastructure.\n>\n> If no such places exist, this is pointless.\n\nI'm attaching 0002 for reporting removal of temp files and temp\nrelation files by postmaster.\n\nIf this looks okay, I can code 0003 for reporting processing of\nsnapshot, mapping and old WAL files by checkpointer.\n\nThoughts?\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/",
"msg_date": "Tue, 9 Aug 2022 21:24:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Generalize ereport_startup_progress infrastructure"
},
{
"msg_contents": "> > > Here's an attempt to generalize the ereport_startup_progress\n> > > infrastructure. The attached v1 patch places the code in elog.c/.h,\n> > > renames associated functions and variables, something like\n> > > ereport_startup_progress to ereport_progress,\n> > > log_startup_progress_interval to log_progress_report_interval and so\n> > > on.\n> >\n> > I'm not averse to reusing this infrastructure in other places, but I\n> > doubt we'd want all of those places to be controlled by a single GUC,\n> > especially because that GUC is also the on/off switch for the feature.\n>\n> Thanks Robert! How about we tweak the function a bit -\n> begin_progress_report_phase(int timeout), so that each process can use\n> their own timeout interval? In this case, do we want to retain\n> log_startup_progress_interval as-is specific to the startup process?\n> If yes, other processes might come up with their own GUCs (if they\n> don't want to use hard-coded timeouts) similar to\n> log_startup_progress_interval, which isn't the right way IMO.\n>\n> I think the notion of ereport_progress feature being disabled when the\n> timeout is 0, makes sense to me at least.\n>\n> On the flip side, what if we just have a single GUC\n> log_progress_report_interval (as proposed in the v1 patch)? Do we ever\n> want different processes to emit progress report messages at different\n> frequencies? Well, I can think of the startup process during standby\n> recovery needing to emit recovery progress report messages at a much\n> lower frequency than the startup process during the crash recovery.\n> Again, controlling the frequencies with different GUCs isn't the way\n> forward. But we can do something like: process 1 emits messages with a\n> frequency of 2*log_progress_report_interval, process 2 with a\n> frequency 4*log_progress_report_interval and so on without needing\n> additional GUCs.\n>\n> Thoughts?\n\n+1 for the idea to generalize the infrastructure.\n\nGiven two options, option-1 is to use a single GUC across all kind of\nlog running operations and option-2 is to use multiple GUCs (one for\neach kind of long running operations), I go with option-1 because if a\nuser is interested to see a log message after every 10s for startup\noperations (or any other long running operations) then it is likely\nthat he is interested to see other long running operations after every\n10s only. It does not make sense to use different intervals for each\nkind of long running operation here. It also increases the number of\nGUCs which makes things complex. So it is a good idea to use a single\nGUC here. But I am worried about the on/off switch as Robert\nmentioned. How about using a new GUC to indicate features on/off. Say\n\"log_long_running_operations\" which contains a comma separated string\nwhich indicates the features to be enabled. For example,\n\"log_long_running_operations = startup, postmaster\" will enable\nlogging for startup and postmaster operations and disables logging of\nother long running operations. With this the number of GUCs will be\nlimited to 2 and it is simple and easy for the user.\n\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Aug 4, 2022 at 9:58 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Aug 3, 2022 at 12:11 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Tue, Aug 2, 2022 at 3:25 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > ereport_startup_progress infrastructure added by commit 9ce346e [1]\n> > > will be super-useful for reporting progress of any long-running server\n> > > operations, not just the startup process operations. For instance,\n> > > postmaster can use it for reporting progress of temp file and temp\n> > > relation file removals [2], checkpointer can use it for reporting\n> > > progress of snapshot or mapping file processing or even WAL file\n> > > processing and so on. And I'm sure there can be many places in the\n> > > code where we have while or for loops which can, at times, take a long\n> > > time to finish and having a log message there would definitely help.\n> > >\n> > > Here's an attempt to generalize the ereport_startup_progress\n> > > infrastructure. The attached v1 patch places the code in elog.c/.h,\n> > > renames associated functions and variables, something like\n> > > ereport_startup_progress to ereport_progress,\n> > > log_startup_progress_interval to log_progress_report_interval and so\n> > > on.\n> >\n> > I'm not averse to reusing this infrastructure in other places, but I\n> > doubt we'd want all of those places to be controlled by a single GUC,\n> > especially because that GUC is also the on/off switch for the feature.\n>\n> Thanks Robert! How about we tweak the function a bit -\n> begin_progress_report_phase(int timeout), so that each process can use\n> their own timeout interval? In this case, do we want to retain\n> log_startup_progress_interval as-is specific to the startup process?\n> If yes, other processes might come up with their own GUCs (if they\n> don't want to use hard-coded timeouts) similar to\n> log_startup_progress_interval, which isn't the right way IMO.\n>\n> I think the notion of ereport_progress feature being disabled when the\n> timeout is 0, makes sense to me at least.\n>\n> On the flip side, what if we just have a single GUC\n> log_progress_report_interval (as proposed in the v1 patch)? Do we ever\n> want different processes to emit progress report messages at different\n> frequencies? Well, I can think of the startup process during standby\n> recovery needing to emit recovery progress report messages at a much\n> lower frequency than the startup process during the crash recovery.\n> Again, controlling the frequencies with different GUCs isn't the way\n> forward. But we can do something like: process 1 emits messages with a\n> frequency of 2*log_progress_report_interval, process 2 with a\n> frequency 4*log_progress_report_interval and so on without needing\n> additional GUCs.\n>\n> Thoughts?\n>\n> --\n> Bharath Rupireddy\n> RDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n>\n>\n\n\n",
"msg_date": "Wed, 10 Aug 2022 18:20:54 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Generalize ereport_startup_progress infrastructure"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 11:54 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> I'm attaching 0002 for reporting removal of temp files and temp\n> relation files by postmaster.\n>\n> If this looks okay, I can code 0003 for reporting processing of\n> snapshot, mapping and old WAL files by checkpointer.\n\nI think that someone is going to complain about the changes to\ntimeout.c. Some trouble has been taken to allow things like\nSetLatch(MyLatch) to be unconditional. Aside from that, I am unsure\nhow generally safe it is to use the timeout infrastructure in the\npostmaster.\n\n From a user-interface point of view, log_postmaster_progress_interval\nseems a bit awkward. It's really quite narrow, basically just checking\nfor one thing. I'm not sure I like adding a GUC for something that\nspecific, although I also don't have another idea at the moment\neither. Hmm.\n\nMaybe the checkpointer is a better candidate, but somehow I feel that\nwe can't consider this sort of thing separate from the existing\nprogress reporting that checkpointer already does. Perhaps we need to\nthink of changing or improving that in some way rather than adding\nsomething wholly new alongside the existing system.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Aug 2022 11:00:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Generalize ereport_startup_progress infrastructure"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 11:00:20AM -0400, Robert Haas wrote:\n> Maybe the checkpointer is a better candidate, but somehow I feel that\n> we can't consider this sort of thing separate from the existing\n> progress reporting that checkpointer already does. Perhaps we need to\n> think of changing or improving that in some way rather than adding\n> something wholly new alongside the existing system.\n\nI agree that the checkpointer has a good chance of being a better\ncandidate. Are you thinking of integrating this into log_checkpoints\nsomehow? Perhaps this parameter could optionally accept an interval for\nlogging the progress of ongoing checkpoints.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 16 Aug 2022 14:15:44 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Generalize ereport_startup_progress infrastructure"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 6:21 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> Given two options, option-1 is to use a single GUC across all kind of\n> log running operations and option-2 is to use multiple GUCs (one for\n> each kind of long running operations), I go with option-1 because if a\n> user is interested to see a log message after every 10s for startup\n> operations (or any other long running operations) then it is likely\n> that he is interested to see other long running operations after every\n> 10s only. It does not make sense to use different intervals for each\n> kind of long running operation here. It also increases the number of\n> GUCs which makes things complex. So it is a good idea to use a single\n> GUC here.\n\n+1.\n\n> But I am worried about the on/off switch as Robert\n> mentioned.\n\nAre you worried that users might want to switch off the progress\nreport messages at process level, for instance, they want to log the\nstartup process' long running operations progress but not, say,\ncheckpointer or postmaster? IMO, a long running operation, if it is\nhappening in any of the processes, is a concern for the users and\nhaving progress report log messages for them would help users debug\nany issues or improve observability of the server as a whole. With\nsingle GUC, the server log might contain progress reports of all the\nlong running (wherever we use this ereport_progress()) operations in\nthe entire server's lifecycle, which isn't bad IMO.\n\nI'd still vote for a single GUC log_progress_report_interval without\nworrying much about process-level enable/disable capability. However,\nlet's hear what other hackers think about this.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Wed, 17 Aug 2022 13:31:10 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Generalize ereport_startup_progress infrastructure"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 2:45 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Aug 10, 2022 at 11:00:20AM -0400, Robert Haas wrote:\n> > Maybe the checkpointer is a better candidate, but somehow I feel that\n> > we can't consider this sort of thing separate from the existing\n> > progress reporting that checkpointer already does. Perhaps we need to\n> > think of changing or improving that in some way rather than adding\n> > something wholly new alongside the existing system.\n>\n> I agree that the checkpointer has a good chance of being a better\n> candidate. Are you thinking of integrating this into log_checkpoints\n> somehow? Perhaps this parameter could optionally accept an interval for\n> logging the progress of ongoing checkpoints.\n\nCertainly the checkpointer is an immediate candidate. For instance, I\ncan think of using ereport_progress() in CheckPointSnapBuild() for\nsnapshot files processing, CheckPointLogicalRewriteHeap() for mapping\nfiles processing, BufferSync() for checkpointing dirty buffers (?),\nProcessSyncRequests() for processing fsync() requests,\nRemoveOldXlogFiles(), RemoveNonParentXlogFiles()(?). I personally have\nseen cases where some of these checkpoint operations take a lot of\ntime in production environments and a better observability would help\na lot.\n\nHowever, I'm not sure if turning log_checkpoints to an integer type to\nuse for checkpoint progress reporting is a good idea here.\n\nAs I explained upthread [1], I'd vote for a single GUC at the entire\nserver level. If the users/customers request per-process or\nper-operation progress report GUCs, we can then consider it.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/CALj2ACUJA73nCK_Li7v4_OOkRqwQBX14Fx2ALb7GDRwUTNGK-Q%40mail.gmail.com\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Wed, 17 Aug 2022 13:59:58 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Generalize ereport_startup_progress infrastructure"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 4:30 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> As I explained upthread [1], I'd vote for a single GUC at the entire\n> server level. If the users/customers request per-process or\n> per-operation progress report GUCs, we can then consider it.\n\nWell, I don't agree that either of the proposed new uses of this\ninfrastructure are the right way to solve the problems in question, so\nworrying about how to name the GUCs when we have a bunch of uses of\nthis infrastructure seems to me to be premature. The proposed use in\nthe postmaster doesn't look very safe, so you either need to give up\non that or figure out a way to make it safe. The proposed use in the\ncheckpointer looks like it needs more design work, because it's not\nclear whether or how it should interact with log_checkpoints. While I\nagree that changing log_checkpoints into an integer value doesn't\nnecessarily make sense, having some kind of new checkpoint logging\nthat is completely unrelated to existing checkpoint logging doesn't\nnecessarily make sense to me either.\n\nI do have some sympathy with the idea that if people care about\noperations that unexpectedly run for a long time, they probably care\nabout all of them, and probably don't care about changing the timeout\nor even the enable switch for each one individually. Honestly, it's\nnot very clear to me who would want to ever turn off the startup\nprogress stuff, or why they'd want to change the interval. I added a\nGUC for it out of an abundance of caution, but I don't know why you'd\nreally want a different setting. Maybe there's some reason, but it's\nnot clear to me. At the same time, I don't think the overall picture\nhere is too clear yet. I'm reluctant to commit to a specific UI for a\nfeature whose scope we don't seem to know.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Aug 2022 11:14:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Generalize ereport_startup_progress infrastructure"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 8:44 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Well, I don't agree that either of the proposed new uses of this\n> infrastructure are the right way to solve the problems in question, so\n> worrying about how to name the GUCs when we have a bunch of uses of\n> this infrastructure seems to me to be premature.\n\nAgreed.\n\n> The proposed use in\n> the postmaster doesn't look very safe, so you either need to give up\n> on that or figure out a way to make it safe.\n\nIs registering a SIGALRM handler in postmaster not a good idea? Is\nsetting the MyLatch conditionally [1] a concern?\n\nI agree that the handle_sig_alarm() code for postmaster may not look\ngood as it holds interrupts and does a bunch of other things. But is\nit a bigger issue?\n\n> The proposed use in the\n> checkpointer looks like it needs more design work, because it's not\n> clear whether or how it should interact with log_checkpoints. While I\n> agree that changing log_checkpoints into an integer value doesn't\n> necessarily make sense, having some kind of new checkpoint logging\n> that is completely unrelated to existing checkpoint logging doesn't\n> necessarily make sense to me either.\n\nHm. Yes, we cannot forget about log_checkpoints while considering\nadding more logs and controls with other GUCs. We could say that one\nneeds to enable both log_checkpoints and the progress report GUC, but\nthat's not great from usability perspective.\n\n> I do have some sympathy with the idea that if people care about\n> operations that unexpectedly run for a long time, they probably care\n> about all of them, and probably don't care about changing the timeout\n> or even the enable switch for each one individually.\n\nI've seen the cases myself and asked by many about the server being\nunresponsive in the cases where it processes files, for instance, temp\nfiles in postmaster after a restart or snapshot or mapping or\nBufferSync() during checkpoint where this sort of progress reporting\nwould've helped.\n\nThinking of another approach for reporting file processing alone - a\nGUC log_file_processing_traffic = {none, medium, high} or {0, 1, 2,\n..... limit} that users can set to emit a file processing log after a\ncertain number of files. It doesn't require a timeout mechanism, so it\ncan be used by any process. But, it is specific to just files.\n\nSimilar to above but a bit generic, not specific to just file\nprocessing, a GUC log_processing_traffic = {none, medium, high} or {0,\n1, 2, ..... limit}.\n\nThoughts?\n\n[1]\n /*\n * SIGALRM is always cause for waking anything waiting on the process\n * latch.\n+ *\n+ * Postmaster has no latch associated with it.\n */\n- SetLatch(MyLatch);\n+ if (MyLatch)\n+ SetLatch(MyLatch);\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 29 Sep 2022 17:27:54 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Generalize ereport_startup_progress infrastructure"
}
] |
[
{
"msg_contents": "While doing the search in [1], I spotted several places where the\ncomments for the function prototypes are obsoleted. For instance,\nbtree_desc() and btree_identify() are now located in nbtdesc.c but the\ncomment in nbtxlog.h is still claiming they are in nbtxlog.c.\n\nFix these places with the attached. With high possibility there are\nother places with this kind of obsoleted comments, but I don't know how\nto find them all :-(.\n\nThanks\nRichard\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs489%2Bu6P_9qMjABsse0dNNBr36MA1SX5Ss7yZ7TD86mfKQ%40mail.gmail.com",
"msg_date": "Tue, 2 Aug 2022 17:06:44 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix obsoleted comments for function prototypes"
},
{
"msg_contents": "On Tue, Aug 02, 2022 at 05:06:44PM +0800, Richard Guo wrote:\n> While doing the search in [1], I spotted several places where the\n> comments for the function prototypes are obsoleted. For instance,\n> btree_desc() and btree_identify() are now located in nbtdesc.c but the\n> comment in nbtxlog.h is still claiming they are in nbtxlog.c.\n> \n> Fix these places with the attached. With high possibility there are\n> other places with this kind of obsoleted comments, but I don't know how\n> to find them all :-(.\n\nThese declarations are linked to comments with their file paths, so\nmaking that automated looks rather complicated to me. I have looked\nat the surroundings without noticing anything obvious, so what you\nhave caught here sounds fine to me, good catches :)\n--\nMichael",
"msg_date": "Tue, 2 Aug 2022 19:25:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix obsoleted comments for function prototypes"
},
{
"msg_contents": "On Tue, Aug 02, 2022 at 07:25:49PM +0900, Michael Paquier wrote:\n> These declarations are linked to comments with their file paths, so\n> making that automated looks rather complicated to me. I have looked\n> at the surroundings without noticing anything obvious, so what you\n> have caught here sounds fine to me, good catches :)\n\nDone as of 245e14e.\n--\nMichael",
"msg_date": "Thu, 4 Aug 2022 17:38:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix obsoleted comments for function prototypes"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 4:38 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Aug 02, 2022 at 07:25:49PM +0900, Michael Paquier wrote:\n> > These declarations are linked to comments with their file paths, so\n> > making that automated looks rather complicated to me. I have looked\n> > at the surroundings without noticing anything obvious, so what you\n> > have caught here sounds fine to me, good catches :)\n>\n> Done as of 245e14e.\n\n\nThank you Michael!\n\nThanks\nRichard",
"msg_date": "Thu, 4 Aug 2022 19:02:13 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix obsoleted comments for function prototypes"
}
] |
[
{
"msg_contents": "Recently there have been several threads where the problem at hand lends\nitself to using SSE2 SIMD intrinsics. These are convenient because on\n64-bit x86 the instructions are always present and so don't need a runtime\ncheck. To integrate them into our code base, we will need to take some\nmeasures for portability, but after looking around it seems fairly\nlightweight:\n\n1. Compiler invocation and symbols\n\nSince SSE2 is part of the AMD64 spec, gcc enables it always:\n\n$ gcc -dM -E - < /dev/null | grep SSE | sort\n$ gcc -dM -E -msse2 - < /dev/null | grep SSE | sort\n#define __MMX_WITH_SSE__ 1\n#define __SSE__ 1\n#define __SSE2__ 1\n#define __SSE2_MATH__ 1\n#define __SSE_MATH__ 1\n\nPassing -m32 discards the \"MATH\" macros but keeps the rest:\n\n$ gcc -dM -E -m32 - < /dev/null | grep SSE | sort\n#define __SSE__ 1\n#define __SSE2__ 1\n\nClang behaves similarly.\n\nMSVC doesn't define __SSE2__ (although it does define __AVX__ etc), but we\ncan just test for _M_X64 or _M_AMD64 (they are equivalent according to [1],\nand we have both in our code base already). We could test for __SSE2__ for\n32-bit gcc-alikes in the build farm, but I don't think that would tell us\nanything interesting, so we can just test for __x86_64__.\n\n2. The intrinsics header\n\n From Peter Cordes on StackOverflow [2]:\n\n```\nimmintrin.h is portable across all compilers, and includes all Intel SIMD\nintrinsics, and some scalar extensions like BMI2 _pdep_u32. (For AMD SSE4a\nand XOP (Bulldozer-family only, dropped for Zen), you need to include a\ndifferent header as well.)\n\nThe only reason I can think of for including <emmintrin.h> specifically\nwould be if you're using MSVC and want to leave intrinsics undefined for\nISA extensions you don't want to depend on.\n```\n\nIt seems then that MSVC will compile intrinsics without prompting, so to be\nsafe we'd need to take the latter advice and use <emmintrin.h>.\n\n3. 
Support for SSE2 intrinsics\n\nThis seems to be well-nigh universal AFAICT and doesn't need to be tested\nfor at configure time. A quick search doesn't turn up anything weird for\nMsys or Cygwin. From [2] again, gcc older than 4.4 can generate poor code,\nbut there is no mention that correctness is a problem.\n\n4. Helper functions\n\nIn a couple proposed patches, there has been some interest in abstracting\nsome SIMD functionality into functions to hide implementation details away.\nI agree there are cases where that would help readability and avoid\nduplication.\n\nGiven all this, the anti-climax is: it seems we can start with something\nlike src/include/port/simd.h with:\n\n#if (defined(__x86_64__) || defined(_M_AMD64))\n#include <emmintrin.h>\n#define USE_SSE2\n#endif\n\n(plus a comment summarizing the above)\n\nThat we can include into other files, and would be the place to put helper\nfunctions. Thoughts?\n\n[1] https://docs.microsoft.com/en-us/archive/blogs/reiley/macro-revisited\n[2]\nhttps://stackoverflow.com/questions/56049110/including-the-correct-intrinsic-header\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 2 Aug 2022 17:22:52 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "support for SSE2 intrinsics"
},
{
"msg_contents": "On Tue, Aug 02, 2022 at 05:22:52PM +0700, John Naylor wrote:\n> Given all this, the anti-climax is: it seems we can start with something\n> like src/include/port/simd.h with:\n> \n> #if (defined(__x86_64__) || defined(_M_AMD64))\n> #include <emmintrin.h>\n> #define USE_SSE2\n> #endif\n> \n> (plus a comment summarizing the above)\n> \n> That we can include into other files, and would be the place to put helper\n> functions. Thoughts?\n\n+1\n\nI did a bit of cross-checking, and AFAICT this is a reasonable starting\npoint. emmintrin.h appears to be sufficient for one of my patches that\nmakes use of SSE2 instructions. That being said, I imagine it'll be\nespecially important to keep an eye on the buildfarm when this change is\ncommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 2 Aug 2022 09:53:48 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: support for SSE2 intrinsics"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 11:53 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n> I did a bit of cross-checking, and AFAICT this is a reasonable starting\n> point. emmintrin.h appears to be sufficient for one of my patches that\n> makes use of SSE2 instructions. That being said, I imagine it'll be\n> especially important to keep an eye on the buildfarm when this change is\n> committed.\n\nThanks for checking! Here's a concrete patch for testing.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 3 Aug 2022 12:00:39 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: support for SSE2 intrinsics"
},
{
"msg_contents": "On Wed, Aug 03, 2022 at 12:00:39PM +0700, John Naylor wrote:\n> Thanks for checking! Here's a concrete patch for testing.\n\nLGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 3 Aug 2022 09:16:28 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: support for SSE2 intrinsics"
},
{
"msg_contents": "Hi,\n\nOn Wed, Aug 3, 2022 at 2:01 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n>\n> On Tue, Aug 2, 2022 at 11:53 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> > I did a bit of cross-checking, and AFAICT this is a reasonable starting\n> > point. emmintrin.h appears to be sufficient for one of my patches that\n> > makes use of SSE2 instructions. That being said, I imagine it'll be\n> > especially important to keep an eye on the buildfarm when this change is\n> > committed.\n>\n> Thanks for checking! Here's a concrete patch for testing.\n\nI also think it's a good start. There is a typo in the commit message:\n\ns/hepler/helper/\n\nThe rest looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 4 Aug 2022 14:37:59 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: support for SSE2 intrinsics"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 12:38 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n>\n> I also think it's a good start. There is a typo in the commit message:\n>\n> s/hepler/helper/\n>\n> The rest looks good to me.\n\nFixed, and pushed, thanks to you both! I'll polish a small patch I have\nthat actually uses this.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 4 Aug 2022 13:56:02 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: support for SSE2 intrinsics"
}
] |
[
{
"msg_contents": "abstract the logic of `scankey change attribute num to index col\nnumber` to change_sk_attno_to_index_column_num, which is a static\ninline function.\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Tue, 2 Aug 2022 19:27:30 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add a inline function to eliminate duplicate code"
},
{
"msg_contents": "Patch is looking good to me.\n\nThanks,\nMahendrakar.\n\nOn Tue, 2 Aug 2022 at 16:57, Junwang Zhao <zhjwpku@gmail.com> wrote:\n\n> abstract the logic of `scankey change attribute num to index col\n> number` to change_sk_attno_to_index_column_num, which is a static\n> inline function.\n>\n> --\n> Regards\n> Junwang Zhao\n>",
"msg_date": "Tue, 2 Aug 2022 18:53:51 +0530",
"msg_from": "mahendrakar s <mahendrakarforpg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add a inline function to eliminate duplicate code"
},
{
"msg_contents": "Any more reviews?\n\nOn Tue, Aug 2, 2022 at 9:24 PM mahendrakar s <mahendrakarforpg@gmail.com> wrote:\n>\n> Patch is looking good to me.\n>\n> Thanks,\n> Mahendrakar.\n>\n> On Tue, 2 Aug 2022 at 16:57, Junwang Zhao <zhjwpku@gmail.com> wrote:\n>>\n>> abstract the logic of `scankey change attribute num to index col\n>> number` to change_sk_attno_to_index_column_num, which is a static\n>> inline function.\n>>\n>> --\n>> Regards\n>> Junwang Zhao\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Fri, 5 Aug 2022 15:56:14 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add a inline function to eliminate duplicate code"
}
] |
[
{
"msg_contents": "Over on [1] I was complaining that I thought DEFAULT_FDW_TUPLE_COST,\nwhich is defined as 0.01 was unrealistically low.\n\nFor comparison, cpu_tuple_cost, something we probably expect to be in\na CPU cache is also 0.01. We've defined DEFAULT_PARALLEL_TUPLE_COST\nto be 0.1, which is 10x cpu_tuple_cost. That's coming from a shared\nmemory segment. So why do we think DEFAULT_FDW_TUPLE_COST should be\nthe same as cpu_tuple_cost when that's probably pulling a tuple from\nsome remote server over some (possibly slow) network?\n\nI did a little experiment in the attached .sql file and did some maths\nto try to figure out what it's really likely to be costing us. I tried\nthis with and without the attached hack to have the planner not\nconsider remote grouping just to see how much slower pulling a million\ntuples through the FDW would cost.\n\nI setup a loopback server on localhost (which has about the lowest\npossible network latency) and found the patched query to the foreign\nserver took:\n\nExecution Time: 530.000 ms\n\nThis is pulling all million tuples over and doing the aggregate locally.\n\nUnpatched, the query took:\n\nExecution Time: 35.334 ms\n\nso about 15x faster.\n\nIf I take the seqscan cost for querying the local table, which is\n14425.00 multiply that by 15 (the extra time it took to pull the 1\nmillion tuples) then divide by 1 million to get the extra cost per\ntuple, then that comes to about 0.216. So that says\nDEFAULT_FDW_TUPLE_COST is about 21x lower than it should be.\n\nI tried cranking DEFAULT_FDW_TUPLE_COST up to 0.5 to see what plans\nwould change in the postgres_fdw regression tests and quite a number\nchanged. Many seem to be pushing the sorts down to the remote server\nwhere they were being done locally before. A few others just seem\nweird. For example, the first one seems to be blindly adding a remote\nsort when it does no good. 
I think it would take quite a bit of study\nwith a debugger to figure out what's going on with many of these.\n\nDoes anyone have any ideas why DEFAULT_FDW_TUPLE_COST was set so low?\n\nDoes anyone object to it being set to something more realistic?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvpXiXLxg4TsA8P_4etnuGQqAAbHWEOM4hGe=DCaXmi_jA@mail.gmail.com",
"msg_date": "Wed, 3 Aug 2022 02:56:12 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Why is DEFAULT_FDW_TUPLE_COST so insanely low?"
},
{
"msg_contents": "\nHas anything been done about this issue?\n\n---------------------------------------------------------------------------\n\nOn Wed, Aug 3, 2022 at 02:56:12AM +1200, David Rowley wrote:\n> Over on [1] I was complaining that I thought DEFAULT_FDW_TUPLE_COST,\n> which is defined as 0.01 was unrealistically low.\n> \n> For comparison, cpu_tuple_cost, something we probably expect to be in\n> a CPU cache is also 0.01. We've defined DEFAULT_PARALLEL_TUPLE_COST\n> to be 0.1, which is 10x cpu_tuple_cost. That's coming from a shared\n> memory segment. So why do we think DEFAULT_FDW_TUPLE_COST should be\n> the same as cpu_tuple_cost when that's probably pulling a tuple from\n> some remote server over some (possibly slow) network?\n> \n> I did a little experiment in the attached .sql file and did some maths\n> to try to figure out what it's really likely to be costing us. I tried\n> this with and without the attached hack to have the planner not\n> consider remote grouping just to see how much slower pulling a million\n> tuples through the FDW would cost.\n> \n> I setup a loopback server on localhost (which has about the lowest\n> possible network latency) and found the patched query to the foreign\n> server took:\n> \n> Execution Time: 530.000 ms\n> \n> This is pulling all million tuples over and doing the aggregate locally.\n> \n> Unpatched, the query took:\n> \n> Execution Time: 35.334 ms\n> \n> so about 15x faster.\n> \n> If I take the seqscan cost for querying the local table, which is\n> 14425.00 multiply that by 15 (the extra time it took to pull the 1\n> million tuples) then divide by 1 million to get the extra cost per\n> tuple, then that comes to about 0.216. So that says\n> DEFAULT_FDW_TUPLE_COST is about 21x lower than it should be.\n> \n> I tried cranking DEFAULT_FDW_TUPLE_COST up to 0.5 to see what plans\n> would change in the postgres_fdw regression tests and quite a number\n> changed. 
Many seem to be pushing the sorts down to the remote server\n> where they were being done locally before. A few others just seem\n> weird. For example, the first one seems to be blindly adding a remote\n> sort when it does no good. I think it would take quite a bit of study\n> with a debugger to figure out what's going on with many of these.\n> \n> Does anyone have any ideas why DEFAULT_FDW_TUPLE_COST was set so low?\n> \n> Does anyone object to it being set to something more realistic?\n> \n> David\n> \n> [1] https://www.postgresql.org/message-id/CAApHDvpXiXLxg4TsA8P_4etnuGQqAAbHWEOM4hGe=DCaXmi_jA@mail.gmail.com\n\n> diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c\n> index 64632db73c..b4e3b91d7f 100644\n> --- a/src/backend/optimizer/plan/planner.c\n> +++ b/src/backend/optimizer/plan/planner.c\n> @@ -3921,7 +3921,7 @@ create_ordinary_grouping_paths(PlannerInfo *root, RelOptInfo *input_rel,\n> \t * If there is an FDW that's responsible for all baserels of the query,\n> \t * let it consider adding ForeignPaths.\n> \t */\n> -\tif (grouped_rel->fdwroutine &&\n> +\tif (0 && grouped_rel->fdwroutine &&\n> \t\tgrouped_rel->fdwroutine->GetForeignUpperPaths)\n> \t\tgrouped_rel->fdwroutine->GetForeignUpperPaths(root, UPPERREL_GROUP_AGG,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t input_rel, grouped_rel,\n\n> ALTER SYSTEM SET max_parallel_workers_per_gather = 0;\n> SELECT pg_reload_conf();\n> \n> CREATE EXTENSION postgres_fdw;\n> CREATE EXTENSION pg_prewarm;\n> \n> \n> DO $d$\n> BEGIN\n> EXECUTE $$CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw\n> OPTIONS (dbname '$$||current_database()||$$',\n> port '$$||current_setting('port')||$$'\n> )$$;\n> END;\n> $d$;\n> \n> CREATE USER MAPPING FOR CURRENT_USER SERVER loopback;\n> \n> CREATE TABLE public.t (a INT);\n> INSERT INTO t SELECT x FROM generate_series(1,1000000) x;\n> VACUUM FREEZE ANALYZE t;\n> SELECT pg_prewarm('t');\n> \n> CREATE FOREIGN TABLE ft (\n> \ta INT\n> ) SERVER loopback OPTIONS 
(schema_name 'public', table_name 't');\n> \n> EXPLAIN (ANALYZE) SELECT COUNT(*) FROM ft;\n> EXPLAIN (ANALYZE) SELECT COUNT(*) FROM t;\n> \n> DROP FOREIGN TABLE ft;\n> DROP TABLE t;\n> DROP SERVER loopback CASCADE;\n> ALTER SYSTEM RESET max_parallel_workers_per_gather;\n> SELECT pg_reload_conf();\n\n> --- \"expected\\\\postgres_fdw.out\"\t2022-08-03 01:34:42.806967000 +1200\n> +++ \"results\\\\postgres_fdw.out\"\t2022-08-03 02:33:40.719712900 +1200\n> @@ -2164,8 +2164,8 @@\n> -- unsafe conditions on one side (c8 has a UDT), not pushed down.\n> EXPLAIN (VERBOSE, COSTS OFF)\n> SELECT t1.c1, t2.c1 FROM ft1 t1 LEFT JOIN ft2 t2 ON (t1.c1 = t2.c1) WHERE t1.c8 = 'foo' ORDER BY t1.c3, t1.c1 OFFSET 100 LIMIT 10;\n> - QUERY PLAN \n> ------------------------------------------------------------------------------\n> + QUERY PLAN \n> +------------------------------------------------------------------------------------------------------------------------------\n> Limit\n> Output: t1.c1, t2.c1, t1.c3\n> -> Sort\n> @@ -2182,7 +2182,7 @@\n> -> Foreign Scan on public.ft1 t1\n> Output: t1.c1, t1.c3\n> Filter: (t1.c8 = 'foo'::user_enum)\n> - Remote SQL: SELECT \"C 1\", c3, c8 FROM \"S 1\".\"T 1\"\n> + Remote SQL: SELECT \"C 1\", c3, c8 FROM \"S 1\".\"T 1\" ORDER BY c3 ASC NULLS LAST, \"C 1\" ASC NULLS LAST\n> (17 rows)\n> \n> SELECT t1.c1, t2.c1 FROM ft1 t1 LEFT JOIN ft2 t2 ON (t1.c1 = t2.c1) WHERE t1.c8 = 'foo' ORDER BY t1.c3, t1.c1 OFFSET 100 LIMIT 10;\n> @@ -2873,13 +2873,13 @@\n> Sort\n> Output: (sum(c1)), c2\n> Sort Key: (sum(ft1.c1))\n> - -> HashAggregate\n> + -> GroupAggregate\n> Output: sum(c1), c2\n> Group Key: ft1.c2\n> Filter: (avg((ft1.c1 * ((random() <= '1'::double precision))::integer)) > '100'::numeric)\n> -> Foreign Scan on public.ft1\n> Output: c1, c2\n> - Remote SQL: SELECT \"C 1\", c2 FROM \"S 1\".\"T 1\"\n> + Remote SQL: SELECT \"C 1\", c2 FROM \"S 1\".\"T 1\" ORDER BY c2 ASC NULLS LAST\n> (10 rows)\n> \n> -- Remote aggregate in combination with a local Param (for 
the output\n> @@ -3123,12 +3123,12 @@\n> Sort\n> Output: (sum(c1) FILTER (WHERE ((((c1 / c1))::double precision * random()) <= '1'::double precision))), c2\n> Sort Key: (sum(ft1.c1) FILTER (WHERE ((((ft1.c1 / ft1.c1))::double precision * random()) <= '1'::double precision)))\n> - -> HashAggregate\n> + -> GroupAggregate\n> Output: sum(c1) FILTER (WHERE ((((c1 / c1))::double precision * random()) <= '1'::double precision)), c2\n> Group Key: ft1.c2\n> -> Foreign Scan on public.ft1\n> Output: c1, c2\n> - Remote SQL: SELECT \"C 1\", c2 FROM \"S 1\".\"T 1\"\n> + Remote SQL: SELECT \"C 1\", c2 FROM \"S 1\".\"T 1\" ORDER BY c2 ASC NULLS LAST\n> (9 rows)\n> \n> explain (verbose, costs off)\n> @@ -3885,24 +3885,21 @@\n> -- subquery using stable function (can't be sent to remote)\n> PREPARE st2(int) AS SELECT * FROM ft1 t1 WHERE t1.c1 < $2 AND t1.c3 IN (SELECT c3 FROM ft2 t2 WHERE c1 > $1 AND date(c4) = '1970-01-17'::date) ORDER BY c1;\n> EXPLAIN (VERBOSE, COSTS OFF) EXECUTE st2(10, 20);\n> - QUERY PLAN \n> -----------------------------------------------------------------------------------------------------------\n> - Sort\n> + QUERY PLAN \n> +----------------------------------------------------------------------------------------------------------------------------------\n> + Nested Loop Semi Join\n> Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n> - Sort Key: t1.c1\n> - -> Nested Loop Semi Join\n> + Join Filter: (t1.c3 = t2.c3)\n> + -> Foreign Scan on public.ft1 t1\n> Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n> - Join Filter: (t1.c3 = t2.c3)\n> - -> Foreign Scan on public.ft1 t1\n> - Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n> - Remote SQL: SELECT \"C 1\", c2, c3, c4, c5, c6, c7, c8 FROM \"S 1\".\"T 1\" WHERE ((\"C 1\" < 20))\n> - -> Materialize\n> + Remote SQL: SELECT \"C 1\", c2, c3, c4, c5, c6, c7, c8 FROM \"S 1\".\"T 1\" WHERE ((\"C 1\" < 20)) ORDER BY \"C 1\" ASC NULLS LAST\n> + -> Materialize\n> + Output: 
t2.c3\n> + -> Foreign Scan on public.ft2 t2\n> Output: t2.c3\n> - -> Foreign Scan on public.ft2 t2\n> - Output: t2.c3\n> - Filter: (date(t2.c4) = '01-17-1970'::date)\n> - Remote SQL: SELECT c3, c4 FROM \"S 1\".\"T 1\" WHERE ((\"C 1\" > 10))\n> -(15 rows)\n> + Filter: (date(t2.c4) = '01-17-1970'::date)\n> + Remote SQL: SELECT c3, c4 FROM \"S 1\".\"T 1\" WHERE ((\"C 1\" > 10))\n> +(12 rows)\n> \n> EXECUTE st2(10, 20);\n> c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8 \n> @@ -9381,21 +9378,19 @@\n> -- test FOR UPDATE; partitionwise join does not apply\n> EXPLAIN (COSTS OFF)\n> SELECT t1.a, t2.b FROM fprt1 t1 INNER JOIN fprt2 t2 ON (t1.a = t2.b) WHERE t1.a % 25 = 0 ORDER BY 1,2 FOR UPDATE OF t1;\n> - QUERY PLAN \n> ---------------------------------------------------------------\n> + QUERY PLAN \n> +--------------------------------------------------------\n> LockRows\n> - -> Sort\n> - Sort Key: t1.a\n> - -> Hash Join\n> - Hash Cond: (t2.b = t1.a)\n> + -> Nested Loop\n> + Join Filter: (t1.a = t2.b)\n> + -> Append\n> + -> Foreign Scan on ftprt1_p1 t1_1\n> + -> Foreign Scan on ftprt1_p2 t1_2\n> + -> Materialize\n> -> Append\n> -> Foreign Scan on ftprt2_p1 t2_1\n> -> Foreign Scan on ftprt2_p2 t2_2\n> - -> Hash\n> - -> Append\n> - -> Foreign Scan on ftprt1_p1 t1_1\n> - -> Foreign Scan on ftprt1_p2 t1_2\n> -(12 rows)\n> +(10 rows)\n> \n> SELECT t1.a, t2.b FROM fprt1 t1 INNER JOIN fprt2 t2 ON (t1.a = t2.b) WHERE t1.a % 25 = 0 ORDER BY 1,2 FOR UPDATE OF t1;\n> a | b \n> @@ -9430,18 +9425,16 @@\n> SET enable_partitionwise_aggregate TO false;\n> EXPLAIN (COSTS OFF)\n> SELECT a, sum(b), min(b), count(*) FROM pagg_tab GROUP BY a HAVING avg(b) < 22 ORDER BY 1;\n> - QUERY PLAN \n> ------------------------------------------------------------\n> - Sort\n> - Sort Key: pagg_tab.a\n> - -> HashAggregate\n> - Group Key: pagg_tab.a\n> - Filter: (avg(pagg_tab.b) < '22'::numeric)\n> - -> Append\n> - -> Foreign Scan on fpagg_tab_p1 pagg_tab_1\n> - -> Foreign Scan on fpagg_tab_p2 pagg_tab_2\n> - -> 
Foreign Scan on fpagg_tab_p3 pagg_tab_3\n> -(9 rows)\n> + QUERY PLAN \n> +-----------------------------------------------------\n> + GroupAggregate\n> + Group Key: pagg_tab.a\n> + Filter: (avg(pagg_tab.b) < '22'::numeric)\n> + -> Append\n> + -> Foreign Scan on fpagg_tab_p1 pagg_tab_1\n> + -> Foreign Scan on fpagg_tab_p2 pagg_tab_2\n> + -> Foreign Scan on fpagg_tab_p3 pagg_tab_3\n> +(7 rows)\n> \n> -- Plan with partitionwise aggregates is enabled\n> SET enable_partitionwise_aggregate TO true;\n> @@ -9475,34 +9468,32 @@\n> -- Should have all the columns in the target list for the given relation\n> EXPLAIN (VERBOSE, COSTS OFF)\n> SELECT a, count(t1) FROM pagg_tab t1 GROUP BY a HAVING avg(b) < 22 ORDER BY 1;\n> - QUERY PLAN \n> -------------------------------------------------------------------------\n> - Sort\n> - Output: t1.a, (count(((t1.*)::pagg_tab)))\n> + QUERY PLAN \n> +--------------------------------------------------------------------------------------------\n> + Merge Append\n> Sort Key: t1.a\n> - -> Append\n> - -> HashAggregate\n> - Output: t1.a, count(((t1.*)::pagg_tab))\n> - Group Key: t1.a\n> - Filter: (avg(t1.b) < '22'::numeric)\n> - -> Foreign Scan on public.fpagg_tab_p1 t1\n> - Output: t1.a, t1.*, t1.b\n> - Remote SQL: SELECT a, b, c FROM public.pagg_tab_p1\n> - -> HashAggregate\n> - Output: t1_1.a, count(((t1_1.*)::pagg_tab))\n> - Group Key: t1_1.a\n> - Filter: (avg(t1_1.b) < '22'::numeric)\n> - -> Foreign Scan on public.fpagg_tab_p2 t1_1\n> - Output: t1_1.a, t1_1.*, t1_1.b\n> - Remote SQL: SELECT a, b, c FROM public.pagg_tab_p2\n> - -> HashAggregate\n> - Output: t1_2.a, count(((t1_2.*)::pagg_tab))\n> - Group Key: t1_2.a\n> - Filter: (avg(t1_2.b) < '22'::numeric)\n> - -> Foreign Scan on public.fpagg_tab_p3 t1_2\n> - Output: t1_2.a, t1_2.*, t1_2.b\n> - Remote SQL: SELECT a, b, c FROM public.pagg_tab_p3\n> -(25 rows)\n> + -> GroupAggregate\n> + Output: t1.a, count(((t1.*)::pagg_tab))\n> + Group Key: t1.a\n> + Filter: (avg(t1.b) < '22'::numeric)\n> + -> 
Foreign Scan on public.fpagg_tab_p1 t1\n> + Output: t1.a, t1.*, t1.b\n> + Remote SQL: SELECT a, b, c FROM public.pagg_tab_p1 ORDER BY a ASC NULLS LAST\n> + -> GroupAggregate\n> + Output: t1_1.a, count(((t1_1.*)::pagg_tab))\n> + Group Key: t1_1.a\n> + Filter: (avg(t1_1.b) < '22'::numeric)\n> + -> Foreign Scan on public.fpagg_tab_p2 t1_1\n> + Output: t1_1.a, t1_1.*, t1_1.b\n> + Remote SQL: SELECT a, b, c FROM public.pagg_tab_p2 ORDER BY a ASC NULLS LAST\n> + -> GroupAggregate\n> + Output: t1_2.a, count(((t1_2.*)::pagg_tab))\n> + Group Key: t1_2.a\n> + Filter: (avg(t1_2.b) < '22'::numeric)\n> + -> Foreign Scan on public.fpagg_tab_p3 t1_2\n> + Output: t1_2.a, t1_2.*, t1_2.b\n> + Remote SQL: SELECT a, b, c FROM public.pagg_tab_p3 ORDER BY a ASC NULLS LAST\n> +(23 rows)\n> \n> SELECT a, count(t1) FROM pagg_tab t1 GROUP BY a HAVING avg(b) < 22 ORDER BY 1;\n> a | count \n> @@ -9518,24 +9509,23 @@\n> -- When GROUP BY clause does not match with PARTITION KEY.\n> EXPLAIN (COSTS OFF)\n> SELECT b, avg(a), max(a), count(*) FROM pagg_tab GROUP BY b HAVING sum(a) < 700 ORDER BY 1;\n> - QUERY PLAN \n> ------------------------------------------------------------------\n> - Sort\n> - Sort Key: pagg_tab.b\n> - -> Finalize HashAggregate\n> - Group Key: pagg_tab.b\n> - Filter: (sum(pagg_tab.a) < 700)\n> - -> Append\n> - -> Partial HashAggregate\n> - Group Key: pagg_tab.b\n> - -> Foreign Scan on fpagg_tab_p1 pagg_tab\n> - -> Partial HashAggregate\n> - Group Key: pagg_tab_1.b\n> - -> Foreign Scan on fpagg_tab_p2 pagg_tab_1\n> - -> Partial HashAggregate\n> - Group Key: pagg_tab_2.b\n> - -> Foreign Scan on fpagg_tab_p3 pagg_tab_2\n> -(15 rows)\n> + QUERY PLAN \n> +-----------------------------------------------------------\n> + Finalize GroupAggregate\n> + Group Key: pagg_tab.b\n> + Filter: (sum(pagg_tab.a) < 700)\n> + -> Merge Append\n> + Sort Key: pagg_tab.b\n> + -> Partial GroupAggregate\n> + Group Key: pagg_tab.b\n> + -> Foreign Scan on fpagg_tab_p1 pagg_tab\n> + -> Partial 
GroupAggregate\n> + Group Key: pagg_tab_1.b\n> + -> Foreign Scan on fpagg_tab_p2 pagg_tab_1\n> + -> Partial GroupAggregate\n> + Group Key: pagg_tab_2.b\n> + -> Foreign Scan on fpagg_tab_p3 pagg_tab_2\n> +(14 rows)\n> \n> -- ===================================================================\n> -- access rights and superuser\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Sat, 28 Oct 2023 19:45:08 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Why is DEFAULT_FDW_TUPLE_COST so insanely low?"
},
{
"msg_contents": "On Sun, 29 Oct 2023 at 12:45, Bruce Momjian <bruce@momjian.us> wrote:\n> Has anything been done about this issue?\n\nNothing has been done. I was hoping to get the attention of a few\npeople who have dealt more with postgres_fdw in the past.\n\nI've attached a patch with adjusts DEFAULT_FDW_TUPLE_COST and sets it\nto 0.2. I set it to this because my experiment in [1] showed that it\nwas about 21x lower than the actual costs (on my machine with a\nloopback fdw connecting to the same instance and database using my\nexample query). Given that we have parallel_tuple_cost set to 0.1 by\ndefault, the network cost of a tuple from an FDW of 0.2 seems\nreasonable to me. Slightly higher is probably also reasonable, but\ngiven the seeming lack of complaints, I think I'd rather err on the\nlow side.\n\nChanging it to 0.2, I see 4 plans change in postgres_fdw's regression\ntests. All of these changes are due to STD_FUZZ_FACTOR causing some\nother plan to win in add_path().\n\nFor example the query EXPLAIN (VERBOSE, ANALYZE) SELECT a, sum(b),\nmin(b), count(*) FROM pagg_tab GROUP BY a HAVING avg(b) < 22 ORDER BY\n1; the plan switches from a HashAggregate to a GroupAggregate. 
This is\nbecause after increasing the DEFAULT_FDW_TUPLE_COST to 0.2 the sorted\nappend child (fuzzily) costs the same as the unsorted seq scan path\nand the sorted path wins in add_path due to having better pathkeys.\nThe seq scan path is then thrown away and we end up doing the Group\nAggregate using the sorted append children.\n\nIf I change STD_FUZZ_FACTOR to something like 1.0000001 then the plans\nno longer change when I do:\n\nalter server loopback options (add fdw_tuple_cost '0.01');\n<run the query>\nalter server loopback options (drop fdw_tuple_cost);\n<run the query>\n\nOrdinarily, I'd not care too much about that, but I did test the\nperformance of one of the plans and the new plan came out slower than\nthe old one.\n\nI'm not exactly sure how best to proceed on this in the absence of any feedback.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvopVjjfh5c1Ed2HRvDdfom2dEpMwwiu5-f1AnmYprJngA@mail.gmail.com",
"msg_date": "Mon, 30 Oct 2023 14:22:08 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why is DEFAULT_FDW_TUPLE_COST so insanely low?"
},
{
"msg_contents": "Looks like the value goes long back to\nd0d75c402217421b691050857eb3d7af82d0c770. The comment there adds\n\"above and beyond cpu_tuple_cost\". So certainly it's expected to be\nhigher than cpu_tuple_cost. I have no memories of this. But looking at\nthe surrounding code, I think DEFAULT_FDW_STARTUP_COST takes care of\nnetwork costs and bandwidths. So DEFAULT_FDW_TUPLE_COST is just\nassembling row from bytes on network. That might have been equated to\nassembling row from heap buffer.\n\nBut I think you are right, it should be comparable to the parallel\ntuple cost which at least is IPC like socket. This will also mean that\noperations which reduce the number of rows will be favoured and pushed\ndown. That's what is desired.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Mon, Oct 30, 2023 at 6:52 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sun, 29 Oct 2023 at 12:45, Bruce Momjian <bruce@momjian.us> wrote:\n> > Has anything been done about this issue?\n>\n> Nothing has been done. I was hoping to get the attention of a few\n> people who have dealt more with postgres_fdw in the past.\n>\n> I've attached a patch with adjusts DEFAULT_FDW_TUPLE_COST and sets it\n> to 0.2. I set it to this because my experiment in [1] showed that it\n> was about 21x lower than the actual costs (on my machine with a\n> loopback fdw connecting to the same instance and database using my\n> example query). Given that we have parallel_tuple_cost set to 0.1 by\n> default, the network cost of a tuple from an FDW of 0.2 seems\n> reasonable to me. Slightly higher is probably also reasonable, but\n> given the seeming lack of complaints, I think I'd rather err on the\n> low side.\n>\n> Changing it to 0.2, I see 4 plans change in postgres_fdw's regression\n> tests. 
All of these changes are due to STD_FUZZ_FACTOR causing some\n> other plan to win in add_path().\n>\n> For example the query EXPLAIN (VERBOSE, ANALYZE) SELECT a, sum(b),\n> min(b), count(*) FROM pagg_tab GROUP BY a HAVING avg(b) < 22 ORDER BY\n> 1; the plan switches from a HashAggregate to a GroupAggregate. This is\n> because after increasing the DEFAULT_FDW_TUPLE_COST to 0.2 the sorted\n> append child (fuzzily) costs the same as the unsorted seq scan path\n> and the sorted path wins in add_path due to having better pathkeys.\n> The seq scan path is then thrown away and we end up doing the Group\n> Aggregate using the sorted append children.\n>\n> If I change STD_FUZZ_FACTOR to something like 1.0000001 then the plans\n> no longer change when I do:\n>\n> alter server loopback options (add fdw_tuple_cost '0.01');\n> <run the query>\n> alter server loopback options (drop fdw_tuple_cost);\n> <run the query>\n>\n> Ordinarily, I'd not care too much about that, but I did test the\n> performance of one of the plans and the new plan came out slower than\n> the old one.\n>\n> I'm not exactly sure how best to proceed on this in the absence of any feedback.\n>\n> David\n>\n> [1] https://postgr.es/m/CAApHDvopVjjfh5c1Ed2HRvDdfom2dEpMwwiu5-f1AnmYprJngA@mail.gmail.com\n\n\n",
"msg_date": "Mon, 30 Oct 2023 16:15:44 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is DEFAULT_FDW_TUPLE_COST so insanely low?"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 02:22:08PM +1300, David Rowley wrote:\n> If I change STD_FUZZ_FACTOR to something like 1.0000001 then the plans\n> no longer change when I do:\n> \n> alter server loopback options (add fdw_tuple_cost '0.01');\n> <run the query>\n> alter server loopback options (drop fdw_tuple_cost);\n> <run the query>\n> \n> Ordinarily, I'd not care too much about that, but I did test the\n> performance of one of the plans and the new plan came out slower than\n> the old one.\n> \n> I'm not exactly sure how best to proceed on this in the absence of any feedback.\n\nI think you just go and change it. Your number is better than what we\nhave, and if someone wants to suggest a better number, we can change it\nlater.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 30 Oct 2023 10:01:14 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Why is DEFAULT_FDW_TUPLE_COST so insanely low?"
},
{
"msg_contents": "On Tue, 31 Oct 2023 at 03:01, Bruce Momjian <bruce@momjian.us> wrote:\n> I think you just go and change it. Your number is better than what we\n> have, and if someone wants to suggest a better number, we can change it\n> later.\n\nI did some more experimentation on the actual costs of getting a tuple\nfrom a foreign server.\n\nUsing the attached setup, I did:\n\npostgres=# explain (analyze, timing off) SELECT * FROM t;\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Seq Scan on t (cost=0.00..144248.48 rows=10000048 width=4) (actual\nrows=10000000 loops=1)\n Planning Time: 0.077 ms\n Execution Time: 385.978 ms\n\npostgres=# explain (analyze, timing off) SELECT * FROM ft;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Foreign Scan on ft (cost=100.00..244348.96 rows=10000048 width=4)\n(actual rows=10000000 loops=1)\n Planning Time: 0.126 ms\n Execution Time: 8335.392 ms\n\nSo, let's take the first query and figure out the total cost per\nmillisecond of execution time. We can then multiply that by the\nexecution time of the 2nd query to calculate what we might expect the\ncosts to be for the foreign table scan based on how long it took\ncompared to the local table scan.\n\npostgres=# select 144248.48/385.978*8335.392;\n ?column?\n-----------------------------\n 3115119.5824740270171341280\n\nSo, the above number is what we expect the foreign table scan to cost\nwith the assumption that the cost per millisecond is about right for\nthe local scan. 
We can then calculate how much we'll need to charge\nfor a foreign tuple by subtracting the total cost of that query from\nour calculated value to calculate how much extra we need to charge, in\ntotal, then divide that by the number of tuples to get actual foreign\ntuple cost for this query.\n\npostgres=# select (3115119.58-244348.96)/10000000;\n ?column?\n------------------------\n 0.28707706200000000000\n\nThis is on an AMD 3990x running Linux 6.5 kernel. I tried the same on\nan Apple M2 mini and got:\n\npostgres=# select 144247.77/257.763*3052.084;\n ?column?\n-----------------------------\n 1707988.7759402241595200680\n\npostgres=# select (1707988.78-244347.54)/10000000;\n ?column?\n------------------------\n 0.14636412400000000000\n\nSo the actual foreign tuple cost on the M2 seems about half of what it\nis on the Zen 2 machine.\n\nBased on this, I agree with my original analysis that setting\nDEFAULT_FDW_TUPLE_COST to 0.2 is about right. Of course, this is a\nloopback onto localhost so remote networks likely would benefit from\nhigher values, but based on this 0.01 is far too low and we should\nchange it to at least 0.2.\n\nI'd be happy if anyone else would like to try the same experiment to\nsee if there's some other value of DEFAULT_FDW_TUPLE_COST that might\nsuit better.\n\nDavid",
"msg_date": "Tue, 31 Oct 2023 11:16:30 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why is DEFAULT_FDW_TUPLE_COST so insanely low?"
},
{
"msg_contents": "On Tue, 31 Oct 2023 at 11:16, David Rowley <dgrowleyml@gmail.com> wrote:\n> I'd be happy if anyone else would like to try the same experiment to\n> see if there's some other value of DEFAULT_FDW_TUPLE_COST that might\n> suit better.\n\nNo takers on the additional testing so I've pushed the patch that\nincreases it to 0.2.\n\nDavid\n\n\n",
"msg_date": "Thu, 2 Nov 2023 14:32:44 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why is DEFAULT_FDW_TUPLE_COST so insanely low?"
},
{
"msg_contents": "On Thu, Nov 2, 2023 at 02:32:44PM +1300, David Rowley wrote:\n> On Tue, 31 Oct 2023 at 11:16, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I'd be happy if anyone else would like to try the same experiment to\n> > see if there's some other value of DEFAULT_FDW_TUPLE_COST that might\n> > suit better.\n> \n> No takers on the additional testing so I've pushed the patch that\n> increases it to 0.2.\n\nGreat! Thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 1 Nov 2023 22:32:06 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Why is DEFAULT_FDW_TUPLE_COST so insanely low?"
},
{
"msg_contents": "On Thu, Nov 02, 2023 at 02:32:44PM +1300, David Rowley wrote:\n> No takers on the additional testing so I've pushed the patch that\n> increases it to 0.2.\n\nThe CI has been telling me that the plans of the tests introduced by\nthis patch are not that stable when building with 32b. See:\ndiff -U3 /tmp/cirrus-ci-build/contrib/postgres_fdw/expected/postgres_fdw.out /tmp/cirrus-ci-build/build-32/testrun/postgres_fdw/regress/results/postgres_fdw.out\n--- /tmp/cirrus-ci-build/contrib/postgres_fdw/expected/postgres_fdw.out\t2023-11-02 05:25:47.290268511 +0000\n+++ /tmp/cirrus-ci-build/build-32/testrun/postgres_fdw/regress/results/postgres_fdw.out\t2023-11-02 05:30:45.242316423 +0000\n@@ -4026,13 +4026,13 @@\n Sort\n Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n Sort Key: t1.c1\n- -> Nested Loop Semi Join\n+ -> Hash Semi Join\n Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n- Join Filter: (t2.c3 = t1.c3)\n+ Hash Cond: (t1.c3 = t2.c3)\n -> Foreign Scan on public.ft1 t1\n Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n Remote SQL: SELECT \"C 1\", c2, c3, c4, c5, c6, c7, c8 FROM \"S 1\".\"T 1\" WHERE ((\"C 1\" < 20))\n- -> Materialize\n+ -> Hash\n Output: t2.c3\n -> Foreign Scan on public.ft2 t2\n Output: t2.c3\n--\nMichael",
"msg_date": "Thu, 2 Nov 2023 14:39:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Why is DEFAULT_FDW_TUPLE_COST so insanely low?"
},
{
"msg_contents": "On Thu, 2 Nov 2023 at 18:39, Michael Paquier <michael@paquier.xyz> wrote:\n> The CI has been telling me that the plans of the tests introduced by\n> this patch are not that stable when building with 32b. See:\n> diff -U3 /tmp/cirrus-ci-build/contrib/postgres_fdw/expected/postgres_fdw.out /tmp/cirrus-ci-build/build-32/testrun/postgres_fdw/regress/results/postgres_fdw.out\n> --- /tmp/cirrus-ci-build/contrib/postgres_fdw/expected/postgres_fdw.out 2023-11-02 05:25:47.290268511 +0000\n> +++ /tmp/cirrus-ci-build/build-32/testrun/postgres_fdw/regress/results/postgres_fdw.out 2023-11-02 05:30:45.242316423 +0000\n> @@ -4026,13 +4026,13 @@\n> Sort\n> Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n> Sort Key: t1.c1\n> - -> Nested Loop Semi Join\n> + -> Hash Semi Join\n> Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n> - Join Filter: (t2.c3 = t1.c3)\n> + Hash Cond: (t1.c3 = t2.c3)\n> -> Foreign Scan on public.ft1 t1\n> Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n> Remote SQL: SELECT \"C 1\", c2, c3, c4, c5, c6, c7, c8 FROM \"S 1\".\"T 1\" WHERE ((\"C 1\" < 20))\n> - -> Materialize\n> + -> Hash\n> Output: t2.c3\n> -> Foreign Scan on public.ft2 t2\n> Output: t2.c3\n\nNo tests were introduced. Is this the only existing one that's\nunstable as far as you saw?\n\nI'm not yet seeing any failures in the buildfarm, so don't really want\nto push a fix for this one if there are going to be a few more\nunstable ones to fix. I may just hold off a while to see.\n\nThanks for letting me know about this.\n\nDavid\n\n\n",
"msg_date": "Thu, 2 Nov 2023 20:19:35 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why is DEFAULT_FDW_TUPLE_COST so insanely low?"
},
{
"msg_contents": "On Thu, Nov 02, 2023 at 08:19:35PM +1300, David Rowley wrote:\n> No tests were introduced. Is this the only existing one that's\n> unstable as far as you saw?\n\nThat seems to be the only one.\n\n> I'm not yet seeing any failures in the buildfarm, so don't really want\n> to push a fix for this one if there are going to be a few more\n> unstable ones to fix. I may just hold off a while to see.\n\nThe CF bot is also thinking that this is not really stable, impacting\nthe tests of the patches:\nhttps://cirrus-ci.com/task/6685074121293824\nhttps://cirrus-ci.com/task/4739402799251456\nhttps://cirrus-ci.com/task/5209803589419008\n--\nMichael",
"msg_date": "Thu, 2 Nov 2023 16:22:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Why is DEFAULT_FDW_TUPLE_COST so insanely low?"
},
{
"msg_contents": "On Thu, Nov 2, 2023 at 3:19 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> I'm not yet seeing any failures in the buildfarm, so don't really want\n> to push a fix for this one if there are going to be a few more\n> unstable ones to fix. I may just hold off a while to see.\n\n\nIt seems that the test is still not stable on 32-bit machines even after\n4b14e18714. I see the following plan diff on cfbot [1].\n\n--- /tmp/cirrus-ci-build/contrib/postgres_fdw/expected/postgres_fdw.out\n2023-11-02 11:35:12.016196978 +0000\n+++\n/tmp/cirrus-ci-build/build-32/testrun/postgres_fdw/regress/results/postgres_fdw.out\n2023-11-02 11:42:09.092242808 +0000\n@@ -4022,24 +4022,21 @@\n -- subquery using stable function (can't be sent to remote)\n PREPARE st2(int) AS SELECT * FROM ft1 t1 WHERE t1.c1 < $2 AND t1.c3 IN\n(SELECT c3 FROM ft2 t2 WHERE c1 > $1 AND date(c4) = '1970-01-17'::date)\nORDER BY c1;\n EXPLAIN (VERBOSE, COSTS OFF) EXECUTE st2(10, 20);\n- QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n- Sort\n+ QUERY PLAN\n+----------------------------------------------------------------------------------------------------------------------------------\n+ Nested Loop Semi Join\n Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n- Sort Key: t1.c1\n- -> Nested Loop Semi Join\n+ Join Filter: (t2.c3 = t1.c3)\n+ -> Foreign Scan on public.ft1 t1\n Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n- Join Filter: (t2.c3 = t1.c3)\n- -> Foreign Scan on public.ft1 t1\n- Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7,\nt1.c8\n- Remote SQL: SELECT \"C 1\", c2, c3, c4, c5, c6, c7, c8 FROM\n\"S 1\".\"T 1\" WHERE ((\"C 1\" < 20))\n- -> Materialize\n+ Remote SQL: SELECT \"C 1\", c2, c3, c4, c5, c6, c7, c8 FROM \"S\n1\".\"T 1\" WHERE ((\"C 1\" < 20)) ORDER BY \"C 1\" ASC NULLS LAST\n+ -> Materialize\n+ Output: t2.c3\n+ -> Foreign Scan on public.ft2 t2\n Output: t2.c3\n- -> 
Foreign Scan on public.ft2 t2\n- Output: t2.c3\n- Filter: (date(t2.c4) = '01-17-1970'::date)\n- Remote SQL: SELECT c3, c4 FROM \"S 1\".\"T 1\" WHERE ((\"C\n1\" > 10))\n-(15 rows)\n+ Filter: (date(t2.c4) = '01-17-1970'::date)\n+ Remote SQL: SELECT c3, c4 FROM \"S 1\".\"T 1\" WHERE ((\"C 1\" >\n10))\n+(12 rows)\n\n[1]\nhttps://api.cirrus-ci.com/v1/artifact/task/5727898984775680/testrun/build-32/testrun/postgres_fdw/regress/regression.diffs\n\nThanks\nRichard\n\nOn Thu, Nov 2, 2023 at 3:19 PM David Rowley <dgrowleyml@gmail.com> wrote:\nI'm not yet seeing any failures in the buildfarm, so don't really want\nto push a fix for this one if there are going to be a few more\nunstable ones to fix. I may just hold off a while to see.It seems that the test is still not stable on 32-bit machines even after4b14e18714. I see the following plan diff on cfbot [1].--- /tmp/cirrus-ci-build/contrib/postgres_fdw/expected/postgres_fdw.out 2023-11-02 11:35:12.016196978 +0000+++ /tmp/cirrus-ci-build/build-32/testrun/postgres_fdw/regress/results/postgres_fdw.out 2023-11-02 11:42:09.092242808 +0000@@ -4022,24 +4022,21 @@ -- subquery using stable function (can't be sent to remote) PREPARE st2(int) AS SELECT * FROM ft1 t1 WHERE t1.c1 < $2 AND t1.c3 IN (SELECT c3 FROM ft2 t2 WHERE c1 > $1 AND date(c4) = '1970-01-17'::date) ORDER BY c1; EXPLAIN (VERBOSE, COSTS OFF) EXECUTE st2(10, 20);- QUERY PLAN------------------------------------------------------------------------------------------------------------ Sort+ QUERY PLAN+----------------------------------------------------------------------------------------------------------------------------------+ Nested Loop Semi Join Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8- Sort Key: t1.c1- -> Nested Loop Semi Join+ Join Filter: (t2.c3 = t1.c3)+ -> Foreign Scan on public.ft1 t1 Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8- Join Filter: (t2.c3 = t1.c3)- -> Foreign Scan on public.ft1 t1- Output: t1.c1, t1.c2, t1.c3, 
t1.c4, t1.c5, t1.c6, t1.c7, t1.c8- Remote SQL: SELECT \"C 1\", c2, c3, c4, c5, c6, c7, c8 FROM \"S 1\".\"T 1\" WHERE ((\"C 1\" < 20))- -> Materialize+ Remote SQL: SELECT \"C 1\", c2, c3, c4, c5, c6, c7, c8 FROM \"S 1\".\"T 1\" WHERE ((\"C 1\" < 20)) ORDER BY \"C 1\" ASC NULLS LAST+ -> Materialize+ Output: t2.c3+ -> Foreign Scan on public.ft2 t2 Output: t2.c3- -> Foreign Scan on public.ft2 t2- Output: t2.c3- Filter: (date(t2.c4) = '01-17-1970'::date)- Remote SQL: SELECT c3, c4 FROM \"S 1\".\"T 1\" WHERE ((\"C 1\" > 10))-(15 rows)+ Filter: (date(t2.c4) = '01-17-1970'::date)+ Remote SQL: SELECT c3, c4 FROM \"S 1\".\"T 1\" WHERE ((\"C 1\" > 10))+(12 rows)[1] https://api.cirrus-ci.com/v1/artifact/task/5727898984775680/testrun/build-32/testrun/postgres_fdw/regress/regression.diffsThanksRichard",
"msg_date": "Thu, 2 Nov 2023 20:02:12 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is DEFAULT_FDW_TUPLE_COST so insanely low?"
},
{
"msg_contents": "On Fri, 3 Nov 2023 at 01:02, Richard Guo <guofenglinux@gmail.com> wrote:\n> It seems that the test is still not stable on 32-bit machines even after\n> 4b14e18714. I see the following plan diff on cfbot [1].\n\nI recreated that locally this time. Seems there's still flexibility\nto push or not push down the sort and the costs of each are close\nenough that it differs between 32 and 64-bit.\n\nThe fix I just pushed removes the flexibility for doing a local sort\nby turning off enable_sort.\n\nThanks for the report.\n\nDavid\n\n\n",
"msg_date": "Fri, 3 Nov 2023 12:38:26 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why is DEFAULT_FDW_TUPLE_COST so insanely low?"
}
]
[
{
"msg_contents": "I've complained before that the snapshot_too_old TAP tests seem\nridiculously slow --- close to a minute of runtime even on very fast\nmachines. Today I happened to look closer and realized that there's\nan absolutely trivial way to cut that. The core of the slow runtime\nis that there's a \"pg_sleep(6)\" in the test case; which perhaps could\nbe trimmed, but I'm not on about that right now. What I'm on about\nis that two of the three isolation tests allow the isolationtester to\ndefault to running every possible permutation of steps, one of which\ndoesn't even generate the \"snapshot too old\" failure. IMV it's\nsufficient to run just one permutation. That opinion was shared by\nwhoever wrote sto_using_hash_index.spec, but they didn't propagate\nthe idea into the other two tests.\n\nThe attached cuts the test runtime (exclusive of setup) from\napproximately 30+24+6 seconds to 6+6+6 seconds, and I don't see\nthat it loses us one iota of coverage.\n\nI cleaned up some unused tables and bad comment grammar, too.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 02 Aug 2022 11:38:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Cutting test runtime for src/test/modules/snapshot_too_old"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 11:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I've complained before that the snapshot_too_old TAP tests seem\n> ridiculously slow --- close to a minute of runtime even on very fast\n> machines. Today I happened to look closer and realized that there's\n> an absolutely trivial way to cut that. The core of the slow runtime\n> is that there's a \"pg_sleep(6)\" in the test case; which perhaps could\n> be trimmed, but I'm not on about that right now. What I'm on about\n> is that two of the three isolation tests allow the isolationtester to\n> default to running every possible permutation of steps, one of which\n> doesn't even generate the \"snapshot too old\" failure. IMV it's\n> sufficient to run just one permutation. That opinion was shared by\n> whoever wrote sto_using_hash_index.spec, but they didn't propagate\n> the idea into the other two tests.\n>\n> The attached cuts the test runtime (exclusive of setup) from\n> approximately 30+24+6 seconds to 6+6+6 seconds, and I don't see\n> that it loses us one iota of coverage.\n>\n> I cleaned up some unused tables and bad comment grammar, too.\n\nYeah, I feel like it was a mistake to allow the list of permutations\nto be unspecified. It encourages people to just run them all, which is\nalmost never a thoughtful decision. Maybe there's something to be said\nfor running these tests in one successful permutation and one failing\npermutation -- or maybe even that is overkill -- but running them all\nseems like a poor idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Aug 2022 13:28:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Cutting test runtime for src/test/modules/snapshot_too_old"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Yeah, I feel like it was a mistake to allow the list of permutations\n> to be unspecified. It encourages people to just run them all, which is\n> almost never a thoughtful decision. Maybe there's something to be said\n> for running these tests in one successful permutation and one failing\n> permutation -- or maybe even that is overkill -- but running them all\n> seems like a poor idea.\n\nYeah, I considered letting the no-error permutation survive. But\nI didn't really see what coverage it was adding at all, let alone\ncoverage that'd justify doubling the test runtime.\n\nAlso ... while doing further research I was reminded that a couple\nyears ago we were seriously discussing nuking old_snapshot_threshold\naltogether, on the grounds that it was so buggy as to be unsafe\nto use, and nobody was stepping up to fix it [1][2]. It doesn't\nappear to me that the situation has got any better, so I wonder if\nwe're prepared to pull that trigger yet.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/20200401064008.qob7bfnnbu4w5cw4%40alap3.anarazel.de\n[2] https://www.postgresql.org/message-id/flat/CA%2BTgmoY%3Daqf0zjTD%2B3dUWYkgMiNDegDLFjo%2B6ze%3DWtpik%2B3XqA%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 02 Aug 2022 13:50:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Cutting test runtime for src/test/modules/snapshot_too_old"
}
]
[
{
"msg_contents": "[Trying -hackers rather than -www this time, since the impacted users are here.]\n\nThere are occasionally patches that are dutifully rebased by a\nresponsive author, all feedback implemented... but there have been no\nreviews for a while, and there's no sign of any on the way. This case\nseems to tie us up in knots. We don't want to Reject since the patch\nis fine, it's just not a current priority; and we don't want to Return\nwith Feedback because there is no feedback to act upon. Since no one\nwants to close it, they can drag on forever, with hopeful authors\nrebasing eternally. I ended up closing several patches like this with\nRwF, but I felt the need to write a huge explanation in the\naccompanying email.\n\nThis has been discussed before (e.g. [1]); the two competing proposals\nI've seen are to add a new close state or to simply remove the \"with\nFeedback\" and treat all Returned patches the same. I can see the case\nfor minimizing the number of choices, but in this patchset, I've opted\nto implement a new state. I think it's useful to communicate to a\ncontributor that their next steps, if they still want to pursue them,\nneed to be focused on coalition building rather than code changes. And\nI think Returned with Feedback has a useful meaning already.\n\n0001 just adds the \"Returned: Needs more interest\" state to the\ndatabase and makes it available to the UI. The (optional) 0002 goes\nfarther, and puts the two Returned states along with the Next CF state\ninto a new \"Deferred\" section in the UI. (It's UI-only; the app still\ntreats them as closed patches for the purposes of the CF.) This\nemphasizes the distinction between Returned and Rejected: Rejected is\nmeant to be a full stop to the story, while Returned is more of a\ncomma.\n\nScreenshots attached. WDYT?\n\n--Jacob\n\n[1] https://www.postgresql.org/message-id/flat/3905363.1633288498%40sss.pgh.pa.us",
"msg_date": "Tue, 2 Aug 2022 15:55:28 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "Hi,\n\nOn Tue, Aug 02, 2022 at 03:55:28PM -0700, Jacob Champion wrote:\n> [Trying -hackers rather than -www this time, since the impacted users are here.]\n>\n> There are occasionally patches that are dutifully rebased by a\n> responsive author, all feedback implemented... but there have been no\n> reviews for a while, and there's no sign of any on the way. This case\n> seems to tie us up in knots. We don't want to Reject since the patch\n> is fine, it's just not a current priority; and we don't want to Return\n> with Feedback because there is no feedback to act upon. Since no one\n> wants to close it, they can drag on forever, with hopeful authors\n> rebasing eternally. I ended up closing several patches like this with\n> RwF, but I felt the need to write a huge explanation in the\n> accompanying email.\n> [...]\n> [1] https://www.postgresql.org/message-id/flat/3905363.1633288498%40sss.pgh.pa.us\n\nI'm personally fine with the current statutes, as closing a patch with RwF\nexplaining that there was no interest is still a feedback, and having a\ndifferent status won't make it any more pleasant for both the CFM and the\nauthor.\n\nMy biggest complaint here is that it doesn't really do anything to try to\nimprove the current situation (lack of review and/or lack of committer\ninterest).\n\nMaybe it would be better to discuss some clear rules and thresholds on when\naction should be taken on such patches. It doesn't have to be closing the CF\nentry directly but instead sending some email to ask for community / committer\nfeedback as in the thread you pointed, and document that in the commitfest wiki\npage.\n\n\n",
"msg_date": "Wed, 3 Aug 2022 11:00:10 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 8:00 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> I'm personally fine with the current statutes, as closing a patch with RwF\n> explaining that there was no interest is still a feedback,\n\nHi Julien,\n\nMaking that explanation each time we intend to close a patch \"needs\ninterest\" takes a lot of time and wordsmithing. \"Returned with\nfeedback\" clearly has an established meaning to the community, and\nthis is counter to that meaning, so people just avoid using it that\nway.\n\nWhen they do, miscommunications happen easily, which can lead to\nauthors reopening patches thinking that there's been some kind of\nmistake (as happened to at least one of the patches in this past CF,\nwhich I had to close again). Language and cultural differences likely\nexacerbate the problem, so the less ad hoc messaging a CFM has to do\nto explain that \"this is RwF but not actually RwF\", the better.\n\n> and having a\n> different status won't make it any more pleasant for both the CFM and the\n> author.\n\n\"More pleasant\" is not really the goal here. I don't think it should\never be pleasant for a CFM to return someone's patch when it hasn't\nreceived review, and it's certainly not going to be pleasant for the\nauthor. But we can be more honest and clear about why we're returning\nit, and hopefully make it less unpleasant.\n\n> My biggest complaint here is that it doesn't really do anything to try to\n> improve the current situation (lack of review and/or lack of committer\n> interest).\n\nIt's not really meant to improve that. 
This is just trying to move the\nneedle a little bit, in a way that's been requested several times.\n\n> Maybe it would be better to discuss some clear rules and thresholds on when\n> action should be taken on such patches.\n\nI think that's also important to discuss, and I have thoughts on that\ntoo, but I don't think the discussions for these sorts of incremental\nchanges should wait for that discussion.\n\n--Jacob\n\n\n",
"msg_date": "Wed, 3 Aug 2022 08:58:49 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "Hi,\n\nOn Wed, Aug 03, 2022 at 08:58:49AM -0700, Jacob Champion wrote:\n> On Tue, Aug 2, 2022 at 8:00 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > I'm personally fine with the current statutes, as closing a patch with RwF\n> > explaining that there was no interest is still a feedback,\n>\n> Making that explanation each time we intend to close a patch \"needs\n> interest\" takes a lot of time and wordsmithing. \"Returned with\n> feedback\" clearly has an established meaning to the community, and\n> this is counter to that meaning, so people just avoid using it that\n> way.\n>\n> When they do, miscommunications happen easily, which can lead to\n> authors reopening patches thinking that there's been some kind of\n> mistake (as happened to at least one of the patches in this past CF,\n> which I had to close again). Language and cultural differences likely\n> exacerbate the problem, so the less ad hoc messaging a CFM has to do\n> to explain that \"this is RwF but not actually RwF\", the better.\n>\n> > and having a\n> > different status won't make it any more pleasant for both the CFM and the\n> > author.\n>\n> \"More pleasant\" is not really the goal here. I don't think it should\n> ever be pleasant for a CFM to return someone's patch when it hasn't\n> received review, and it's certainly not going to be pleasant for the\n> author. But we can be more honest and clear about why we're returning\n> it, and hopefully make it less unpleasant.\n>\n> > My biggest complaint here is that it doesn't really do anything to try to\n> > improve the current situation (lack of review and/or lack of committer\n> > interest).\n>\n> It's not really meant to improve that. 
This is just trying to move the\n> needle a little bit, in a way that's been requested several times.\n>\n> > Maybe it would be better to discuss some clear rules and thresholds on when\n> > action should be taken on such patches.\n>\n> I think that's also important to discuss, and I have thoughts on that\n> too, but I don't think the discussions for these sorts of incremental\n> changes should wait for that discussion.\n\nFirst of all, I didn't want to imply that rejecting a patch should be pleasant,\nsorry if that sounded that way.\n\nIt's not that I'm opposed to adding that status, I just don't see how it's\nreally going to improve the situation on its own. Or maybe because it wouldn't\nmake any difference to me as a patch author to get my patches returned \"with\nfeedback\" or \"for any other reason\" if they are ignored. I'm afraid that\npatches will still be left alone to rot and there would still be no clear rules on\nwhat to do and when, reminders for the CFM and such, and that this new status would\nnever be used anyway. So I guess I will just stop hijacking this thread and\nwait for a discussion on that, sorry for the noise.\n\n\n",
"msg_date": "Thu, 4 Aug 2022 01:09:28 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 10:09 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> First of all, I didn't want to imply that rejecting a patch should be pleasant,\n> sorry if that sounded that way.\n\nNo worries, I don't think it really sounded that way. :D\n\n> It's not that I'm opposed to adding that status, I just don't see how it's\n> really going to improve the situation on its own.\n\nIf the situation you're referring to is the fact that we have a lot of\npatches sitting without review, it won't improve that situation, I\nagree.\n\nThe situation I'm looking at, though, is where we have a dozen patches\nfloating forward that multiple CFMs in a row feel should be returned,\nbut they won't because claiming \"they have feedback\" is clearly unfair\nto the author. I think this will improve that situation.\n\n> Or maybe because it wouldn't\n> make any difference to me as a patch author to get my patches returned \"with\n> feedback\" or \"for any other reason\" if they are ignored.\n\nSure. I think this change helps the newer contributors (and the CFMs\ntalking to them) more than it helps the established ones.\n\nI'm in your boat, where I don't personally care how a patch of mine is\nreturned (and I'm fine with Withdrawing them myself). But I'm also\npaid to do this. From some of my past experiences with other projects,\nI tend to feel more sensitive to bad communication if I've developed a\npatch using volunteer hours, on evenings or weekends.\n\n> I'm afraid that\n> patches will still be left alone to rot and there still be no clear rules on\n> what to do and when, reminder for CFM and such, and that this new status would\n> never be used anyway. So I guess I will just stop hijacking this thread and\n> wait for a discussion on that, sorry for the noise.\n\nWell, here, let's keep that conversation going too while there's\nmomentum. One sec while I switch Subjects and continue...\n\n--Jacob\n\n\n",
"msg_date": "Wed, 3 Aug 2022 10:52:36 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "[was: CF app: add \"Returned: Needs more interest\"]\n\nOn Wed, Aug 3, 2022 at 10:09 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> I'm afraid that\n> patches will still be left alone to rot and there still be no clear rules on\n> what to do and when, reminder for CFM and such, and that this new status would\n> never be used anyway.\n\nYeah, so the lack of clear rules is an issue -- maybe not because we\ncan't work without them (we have, clearly, and we can continue to do\nso) but because each of us kind of makes it up as we go along? When\ndiscussions about these \"rules\" happen on the list, it doesn't always\nhappen with the same people, and opinions can vary wildly.\n\nThere have been a couple of suggestions recently:\n- Revamp the CF Checklist on the wiki. I plan to do so later this\nmonth, but that will still need some community review.\n- Provide in-app explanations and documentation for some of the less\nobvious points. (What should the target version be? What's the\ndifference between Rejected and Returned?)\n\nIs that enough, or should we do more?\n\nMy preference, as I think Daniel also said in a recent thread, would\nbe for most of this information to be in the application somewhere.\nThat would make it more immediately accessible to everyone. (The\ntradeoff is, it gets harder to update.)\n\n--Jacob\n\n\n",
"msg_date": "Wed, 3 Aug 2022 11:03:58 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Clarifying Commitfest policies"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-03 10:52:36 -0700, Jacob Champion wrote:\n> The situation I'm looking at, though, is where we have a dozen patches\n> floating forward that multiple CFMs in a row feel should be returned,\n> but they won't because claiming \"they have feedback\" is clearly unfair\n> to the author. I think this will improve that situation.\n\nWhat patches are we concretely talking about?\n\nMy impression is that a lot of the patches floating from CF to CF have gotten\nsceptical feedback and at best a minor amount of work to address that has been\ndone.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 Aug 2022 11:41:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> My impression is that a lot of the patches floating from CF to CF have gotten\n> sceptical feedback and at best a minor amount of work to address that has been\n> done.\n\nThat I think is a distinct issue: nobody wants to take on the\nunpleasant job of saying \"no, we don't want this\" in a final way.\nWe may raise some objections but actually rejecting a patch is hard.\nSo it tends to slide forward until the author gives up.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Aug 2022 14:53:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "On 8/3/22 11:41, Andres Freund wrote:\n> What patches are we concretely talking about?\n> My impression is that a lot of the patches floating from CF to CF have gotten\n> sceptical feedback and at best a minor amount of work to address that has been\n> done.\n\n- https://commitfest.postgresql.org/38/2482/\n- https://commitfest.postgresql.org/38/3338/\n- https://commitfest.postgresql.org/38/3181/\n- https://commitfest.postgresql.org/38/2918/\n- https://commitfest.postgresql.org/38/2710/\n- https://commitfest.postgresql.org/38/2266/ (this one was particularly\nmiscommunicated during the first RwF)\n- https://commitfest.postgresql.org/38/2218/\n- https://commitfest.postgresql.org/38/3256/\n- https://commitfest.postgresql.org/38/3310/\n- https://commitfest.postgresql.org/38/3050/\n\nLooking through, some of those have received skeptical feedback as you\nsay, but certainly not all; not even a majority IMO. (Even if they'd all\nreceived skeptical feedback, if the author replies in good faith and is\nmet with silence for months, we need to not keep stringing them along.)\n\n--Jacob\n\n\n",
"msg_date": "Wed, 3 Aug 2022 12:06:03 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "Hi,\n\nOn 2022-08-03 12:06:03 -0700, Jacob Champion wrote:\n> On 8/3/22 11:41, Andres Freund wrote:\n> > What patches are we concretely talking about?\n> > My impression is that a lot of the patches floating from CF to CF have gotten\n> > sceptical feedback and at best a minor amount of work to address that has been\n> > done.\n> \n> - https://commitfest.postgresql.org/38/2482/\n\nHm - \"Returned: Needs more interest\" doesn't seem like it'd have been more\ndescriptive? It was split off a patchset that was committed at the tail end of\n15 (and which still has *severe* code quality issues). Imo having a CF entry\nbefore the rest of the jsonpath stuff made it in doesn't seem like a good\nidea.\n\n\n> - https://commitfest.postgresql.org/38/3338/\n\nHere it'd have fit.\n\n\n> - https://commitfest.postgresql.org/38/3181/\n\nFWIW, I mentioned at least once that I didn't think this was worth pursuing.\n\n\n> - https://commitfest.postgresql.org/38/2918/\n\nHm, certainly not a lot of review activity.\n\n\n> - https://commitfest.postgresql.org/38/2710/\n\nA good bit of this was committed in some form with a decent amount of review\nactivity for a while.\n\n\n> - https://commitfest.postgresql.org/38/2266/ (this one was particularly\n> miscommunicated during the first RwF)\n\nI'd say misunderstanding rather than miscommunication...\n\nIt seems partially stalled due to the potential better approach based on\nhttps://www.postgresql.org/message-id/flat/15848.1576515643%40sss.pgh.pa.us ?\nIn which case RwF doesn't seem too inappropriate.\n\n\n> - https://commitfest.postgresql.org/38/2218/\n\nYep.\n\n\n> - https://commitfest.postgresql.org/38/3256/\n\nYep.\n\n\n> - https://commitfest.postgresql.org/38/3310/\n\nI don't really understand why this has been RwF'd, doesn't seem that long\nsince the last review leading to changes.\n\n\n> - https://commitfest.postgresql.org/38/3050/\n\nGiven that a non-author did a revision of the patch, listed a number of TODO\nitems and said \"I'll create regression tests firstly.\" - I don't think \"lacks\ninterest\" would have been appropriate, and RwF is?\n\n\n> (Even if they'd all received skeptical feedback, if the author replies in\n> good faith and is met with silence for months, we need to not keep stringing\n> them along.)\n\nI agree very much with that - just am doubtful that \"lacks interest\" is a good\nway of dealing with it, unless we just want to treat it as a nicer sounding\n\"rejected\".\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 Aug 2022 12:46:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I agree very much with that - just am doubtful that \"lacks interest\" is a good\n> way of dealing with it, unless we just want to treat it as a nicer sounding\n> \"rejected\".\n\nI think there is a difference. \"Lacks interest\" suggests that there\nis a path forward for the patch, namely (as Jacob has mentioned\nrepeatedly) doing some sort of consensus-building that it's worth\npursuing. The author may or may not have the interest/skills to do\nthat, but it's possible that it could happen. \"Rejected\" says \"don't\nbother pursuing this, it's a bad idea\". Neither of these seems the\nsame as RWF, which I think we mostly understand to mean \"this patch\nhas technical problems that can probably be fixed\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Aug 2022 15:59:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "On Wed, 3 Aug 2022 at 20:04, Jacob Champion <jchampion@timescale.com> wrote:\n>\n> [was: CF app: add \"Returned: Needs more interest\"]\n>\n> On Wed, Aug 3, 2022 at 10:09 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > I'm afraid that\n> > patches will still be left alone to rot and there still be no clear rules on\n> > what to do and when, reminder for CFM and such, and that this new status would\n> > never be used anyway.\n>\n> Yeah, so the lack of clear rules is an issue -- maybe not because we\n> can't work without them (we have, clearly, and we can continue to do\n> so) but because each of us kind of makes it up as we go along? When\n> discussions about these \"rules\" happen on the list, it doesn't always\n> happen with the same people, and opinions can vary wildly.\n>\n> There have been a couple of suggestions recently:\n> - Revamp the CF Checklist on the wiki. I plan to do so later this\n> month, but that will still need some community review.\n> - Provide in-app explanations and documentation for some of the less\n> obvious points. (What should the target version be? What's the\n> difference between Rejected and Returned?)\n>\n> Is that enough, or should we do more?\n\n\"The CF Checklist\" seems to refer to only the page that is (or seems\nto be) intended for the CFM only. 
We should probably also update the\npages of \"Commitfest\", \"Submitting a patch\", \"Reviewing a Patch\", \"So,\nyou want to be a developer?\", and the \"Developer FAQ\" page, which\ndoesn't have to be more than removing outdated information and\nreferring to any (new) documentation on how to participate in the\nPostgreSQL development and/or Commitfest workflow as a non-CFM.\n\nAdditionally, we might want to add extra text to the \"developers\"\nsection of the main website [0] to refer to any new documentation.\nThis suggestion does depend on whether the new documentation has a\nhigh value for potential community members.\n\nLastly, a top-level CONTRIBUTING.md file in git repositories is also\noften used as an entry point for potential contributors. I don't\nsuggest we copy all documentation into the main repo, just that a\npointer to our existing contributor entry documentation in such a file\ncould help lower the barrier of entry.\nAs an example, the GitHub mirror of the main PostgreSQL repository\nreceives a decent amount of pull request traffic. When a project has a\nCONTRIBUTING.md -file at the top level people writing the pull request\nmessage will be pointed to those contributing guidelines. This could\n\nThank you for raising this to a topical thread.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/developer/\n\n\n",
"msg_date": "Wed, 3 Aug 2022 23:05:15 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Clarifying Commitfest policies"
},
{
"msg_contents": "Hi Andres,\n\nMy intention had not quite been for this to be a referendum on the\ndecision for every patch -- we can do that if it helps, but I don't\nthink we necessarily have to have unanimity on the bucketing for every\npatch in order for the new state to be useful.\n\nOn 8/3/22 12:46, Andres Freund wrote:\n>> - https://commitfest.postgresql.org/38/2482/\n> \n> Hm - \"Returned: Needs more interest\" doesn't seem like it'd have been more\n> descriptive? It was split off a patchset that was committed at the tail end of\n> 15 (and which still has *severe* code quality issues). Imo having a CF entry\n> before the rest of the jsonpath stuff made it in doesn't seem like a good\n> idea.\n\nThere were no comments about code quality issues on the thread that I\ncan see, and there were three people who independently said \"I don't\nknow why this isn't getting review.\" Seems like a shoo-in for \"needs\nmore interest\".\n\n>> - https://commitfest.postgresql.org/38/3338/\n> \n> Here it'd have fit.\n\nOkay. That's one.\n\n>> - https://commitfest.postgresql.org/38/3181/\n> \n> FWIW, I mentioned at least once that I didn't think this was worth pursuing.\n\n(I don't see that comment on that thread? You mentioned it needed a rebase.)\n\nIMO, mentioning that something is not worth pursuing is not actionable\nfeedback. It's a declaration of non-interest in the mildest case, and a\nRejection in the strongest case. But let's please not say \"meh\" and then\nReturn with Feedback; an author can't do anything with that.\n\n>> - https://commitfest.postgresql.org/38/2918/\n> \n> Hm, certainly not a lot of review activity.\n\nThat's two.\n\n>> - https://commitfest.postgresql.org/38/2710/\n> \n> A good bit of this was committed in some form with a decent amount of review\n> activity for a while.\n\nBut then the rest of it stalled. Something has to be done with the open\nentry.\n\n>> - https://commitfest.postgresql.org/38/2266/ (this one was particularly\n>> miscommunicated during the first RwF)\n> \n> I'd say misunderstanding than miscommunication...\n\nThe CFM sending it said, \"It seems there has been no activity since last\nversion of the patch so I don't think RwF is correct\" [1], and then the\nemail sent said \"you are encouraged to send a new patch [...] with the\nsuggested changes.\" But there were no suggested changes left to make.\n\nThis really highlights, for me, why the two states should not be\ncombined into one.\n\n> It seems partially stalled due to the potential better approach based on\n> https://www.postgresql.org/message-id/flat/15848.1576515643%40sss.pgh.pa.us ?\n> In which case RwF doesn't seem to inappropriate.\n\nThose comments are, as far as I can tell, not in the thread. (And the\nnew thread you linked is also stalled.)\n\n>> - https://commitfest.postgresql.org/38/2218/\n> \n> Yep.\n\nThat's three.\n\n>> - https://commitfest.postgresql.org/38/3256/\n> \n> Yep.\n\nThat's four.\n\n>> - https://commitfest.postgresql.org/38/3310/\n> \n> I don't really understand why this has been RwF'd, doesn't seem that long\n> since the last review leading to changes.\n\nEight months without feedback, when we expect authors to turn around a\npatch in two weeks or less to avoid being RwF'd, is a long time IMHO. I\ndon't think a patch should sit motionless in CF for eight months; it's\nnot at all fair to the author.\n\n>> - https://commitfest.postgresql.org/38/3050/\n> \n> Given that a non-author did a revision of the patch, listed a number of TODO\n> items and said \"I'll create regression tests firstly.\" - I don't think \"lacks\n> interest\" would have been appropriate, and RwF is?\n\nThat was six months ago, and prior to that there was another six month\nsilence. I'd say that lacks interest, and I don't feel like it's\ncurrently reviewable in CF.\n\n>> (Even if they'd all received skeptical feedback, if the author replies in\n>> good faith and is met with silence for months, we need to not keep stringing\n>> them along.)\n> \n> I agree very much with that - just am doubtful that \"lacks interest\" is a good\n> way of dealing with it, unless we just want to treat it as a nicer sounding\n> \"rejected\".\n\nTom summed up my position well: there's a difference between those two\nthat is both meaningful and actionable for contributors. Is there an\nalternative you'd prefer?\n\nThanks for the discussion!\n--Jacob\n\n[1] https://www.postgresql.org/message-id/20211004071249.GA6304%40ahch-to\n\n\n\n",
"msg_date": "Thu, 4 Aug 2022 11:19:28 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 2:05 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> On Wed, 3 Aug 2022 at 20:04, Jacob Champion <jchampion@timescale.com> wrote:\n> > Is that enough, or should we do more?\n>\n> \"The CF Checklist\" seems to refer to only the page that is (or seems\n> to be) intended for the CFM only. We should probably also update the\n> pages of \"Commitfest\", \"Submitting a patch\", \"Reviewing a Patch\", \"So,\n> you want to be a developer?\", and the \"Developer FAQ\" page, which\n> doesn't have to be more than removing outdated information and\n> refering to any (new) documentation on how to participate in the\n> PostgreSQL development and/or Commitfest workflow as a non-CFM.\n\nAgreed, a sweep of those materials would be helpful as well. I'm\npersonally focused on CFM tasks, since it's fresh in my brain and\ndocumentation is almost non-existent for it, but if you have ideas for\nthose areas, I definitely don't want to shut down that line of the\nconversation.\n\n> Additionally, we might want to add extra text to the \"developers\"\n> section of the main website [0] to refer to any new documentation.\n> This suggestion does depend on whether the new documentation has a\n> high value for potential community members.\n\nRight. So what kinds of info do we want to highlight in this\ndocumentation, to make it high-quality?\n\nDrawing from some of the questions I've seen recently, we could talk about\n- CF \"power\" structure (perhaps simply to highlight that the CFM has\nno additional authority to get patches in)\n- the back-and-forth process on the mailing list, maybe including\nexpected response times\n- what to do when a patch is returned (or rejected)\n\n> As an example, the GitHub mirror of the main PostgreSQL repository\n> receives a decent amount of pull request traffic. 
When a project has a\n> CONTRIBUTING.md -file at the top level people writing the pull request\n> message will be pointed to those contributing guidelines. This could\n\n(I think some text got cut here.)\n\nThe mirror bot will point you to the \"So, you want to be a developer?\"\nwiki when you open a PR, but I agree that a CONTRIBUTING doc would\nhelp prevent that small embarrassment.\n\n--Jacob\n\n\n",
"msg_date": "Thu, 4 Aug 2022 11:38:29 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: Clarifying Commitfest policies"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-04 11:19:28 -0700, Jacob Champion wrote:\n> My intention had not quite been for this to be a referendum on the\n> decision for every patch -- we can do that if it helps, but I don't\n> think we necessarily have to have unanimity on the bucketing for every\n> patch in order for the new state to be useful.\n\nSorry, I should have been clearer. It wasn't mine either! I was just trying to\nunderstand what you see as the usecase / get a better feel for it. I'm now a\nbit more convinced it's useful than before.\n\n\n> >> - https://commitfest.postgresql.org/38/3310/\n> > \n> > I don't really understand why this has been RwF'd, doesn't seem that long\n> > since the last review leading to changes.\n> \n> Eight months without feedback, when we expect authors to turn around a\n> patch in two weeks or less to avoid being RwF'd, is a long time IMHO.\n\nWhy is it better to mark it as lacks interest rather than RwF if there actually\n*has* been feedback?\n\n\n> I don't think a patch should sit motionless in CF for eight months; it's not\n> at all fair to the author.\n\nIt's not great, I agree, but wishes don't conjure up resources :(\n\n\n> >> - https://commitfest.postgresql.org/38/3050/\n> > \n> > Given that a non-author did a revision of the patch, listed a number of TODO\n> > items and said \"I'll create regression tests firstly.\" - I don't think \"lacks\n> > interest\" would have been appropriate, and RwF is?\n> \n> That was six months ago, and prior to that there was another six month\n> silence. I'd say that lacks interest, and I don't feel like it's\n> currently reviewable in CF.\n\nI don't think the entry needs more review - it needs changes:\nhttps://www.postgresql.org/message-id/CAOKkKFtc45uNFoWYOCo4St19ayxrh-_%2B4TnZtwxGZz6-3k_GSA%40mail.gmail.com\nThat contains quite a few things that should be changed.\n\nA patch that has gotten feedback, but that feedback hasn't been processed\npretty much is the definition of RwF, no?\n\n\n> >> (Even if they'd all received skeptical feedback, if the author replies in\n> >> good faith and is met with silence for months, we need to not keep stringing\n> >> them along.)\n> > \n> > I agree very much with that - just am doubtful that \"lacks interest\" is a good\n> > way of dealing with it, unless we just want to treat it as a nicer sounding\n> > \"rejected\".\n>\n> Tom summed up my position well: there's a difference between those two\n> that is both meaningful and actionable for contributors. Is there an\n> alternative you'd prefer?\n\nI agree that \"lacks interest\" could be useful. But I'm wary of it becoming\njust a renaming if we end up marking patches that should be RwF or rejected as\n\"lacks interest\".\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Aug 2022 15:00:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "On Thu, 4 Aug 2022 at 20:38, Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Wed, Aug 3, 2022 at 2:05 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > On Wed, 3 Aug 2022 at 20:04, Jacob Champion <jchampion@timescale.com> wrote:\n> > > Is that enough, or should we do more?\n> >\n> > \"The CF Checklist\" seems to refer to only the page that is (or seems\n> > to be) intended for the CFM only. We should probably also update the\n> > pages of \"Commitfest\", \"Submitting a patch\", \"Reviewing a Patch\", \"So,\n> > you want to be a developer?\", and the \"Developer FAQ\" page, which\n> > doesn't have to be more than removing outdated information and\n> > refering to any (new) documentation on how to participate in the\n> > PostgreSQL development and/or Commitfest workflow as a non-CFM.\n>\n> Agreed, a sweep of those materials would be helpful as well. I'm\n> personally focused on CFM tasks, since it's fresh in my brain and\n> documentation is almost non-existent for it, but if you have ideas for\n> those areas, I definitely don't want to shut down that line of the\n> conversation.\n\nNor would I want to hold you back on CFM documentation.\n\n> > Additionally, we might want to add extra text to the \"developers\"\n> > section of the main website [0] to refer to any new documentation.\n> > This suggestion does depend on whether the new documentation has a\n> > high value for potential community members.\n>\n> Right. 
So what kinds of info do we want to highlight in this\n> documentation, to make it high-quality?\n\nI think it would be a combined and abbreviated version of the detailed\nmanuals that we (will) have: The pages \"Submitting a patch\" and\n\"Reviewing a patch\" on the wiki, and the CommitFest manual (plus\npotentially info on CFBot).\n\nThe first part of \"So, you want to be a developer?\" seems like a very\ngood starting point for dense, high-quality entry-level documentation.\nEach section should then further refer to the relevant sections of the\n\"Developer FAQ\" and the \"Submitting / Reviewing a Patch\" pages for the\nins and outs of the specific procedure (such as \"installing development\ndependencies\", \"reviewing changes\", \"code style\", etc.).\n\n> Drawing from some of the questions I've seen recently, we could talk about\n> - CF \"power\" structure (perhaps simply to highlight that the CFM has\n> no additional authority to get patches in)\n> - the back-and-forth process on the mailing list, maybe including\n> expected response times\n> - what to do when a patch is returned (or rejected)\n>\n> > As an example, the GitHub mirror of the main PostgreSQL repository\n> > receives a decent amount of pull request traffic. When a project has a\n> > CONTRIBUTING.md -file at the top level people writing the pull request\n> > message will be pointed to those contributing guidelines. This could\n>\n> (I think some text got cut here.)\n\n... This could help reduce the amount of mis-addressed (maybe better\nword: mis-located?) contributions, and potentially help the\ncontributor get involved at -hackers. Indeed this process is much more\ninvolved than 'just' opening a pull request, but at least it is now\nslightly more visible.\n\n> The mirror bot will point you to the \"So, you want to be a developer?\"\n> wiki when you open a PR, but I agree that a CONTRIBUTING doc would\n> help prevent that small embarrassment.\n\nThat's news to me, but nice to see some improvements there. I have\npreviously noticed that there were PRs on GitHub that went unnoticed\nfor several weeks, so this bot is a significant improvement.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 8 Aug 2022 17:15:30 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Clarifying Commitfest policies"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 3:00 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-08-04 11:19:28 -0700, Jacob Champion wrote:\n> > My intention had not quite been for this to be a referendum on the\n> > decision for every patch -- we can do that if it helps, but I don't\n> > think we necessarily have to have unanimity on the bucketing for every\n> > patch in order for the new state to be useful.\n>\n> Sorry, I should have been clearer. It wasn't mine either! I was just trying to\n> understand what you see as the usecase / get a better feel for it. I'm now a\n> bit more convinced it's useful than before.\n\nGreat!\n\n> > >> - https://commitfest.postgresql.org/38/3310/\n> > >\n> > > I don't really understand why this has been RwF'd, doesn't seem that long\n> > > since the last review leading to changes.\n> >\n> > Eight months without feedback, when we expect authors to turn around a\n> > patch in two weeks or less to avoid being RwF'd, is a long time IMHO.\n>\n> Why is it better to mark it as lacks interested than RwF if there actually\n> *has* been feedback?\n\nBecause I don't think the utility of RwF is in saying \"we gave you\nfeedback once and then ghosted\"; I think it's in saying \"this patchset\nneeds work before the next round of review.\" If an author has\nresponded to the feedback and the patchset is just sitting there for\nmonths, the existence of the feedback is less relevant.\n\n> > I don't think a patch should sit motionless in CF for eight months; it's not\n> > at all fair to the author.\n>\n> It's not great, I agree, but wishes don't conjure up resources :(\n\nI see this less as a wish for resources, and more as an honest\nadmission -- we don't currently have enough resources to give each\npatch the eyes it deserves, so if an author finds themselves in this\nstate, they'll have to put in some more work to find those eyes\nsomewhere.\n\n> > >> - https://commitfest.postgresql.org/38/3050/\n> > >\n> > > Given that a non-author did a 
revision of the patch, listed a number of TODO\n> > > items and said \"I'll create regression tests firstly.\" - I don't think \"lacks\n> > > interest\" would have been appropriate, and RwF is?\n> >\n> > That was six months ago, and prior to that there was another six month\n> > silence. I'd say that lacks interest, and I don't feel like it's\n> > currently reviewable in CF.\n>\n> I don't think the entry needs more review - it needs changes:\n> https://www.postgresql.org/message-id/CAOKkKFtc45uNFoWYOCo4St19ayxrh-_%2B4TnZtwxGZz6-3k_GSA%40mail.gmail.com\n> That contains quite a few things that should be changed.\n>\n> A patch that has gotten feedback, but that feedback hasn't been processed\n> pretty much is the definition of RwF, no?\n\nLooking through again, I see now what you're saying. Yes, I agree that\nRwF would have been a fine fit there.\n\n> I agree that \"lacks interest\" could be useful. But I'm wary of it becoming\n> just a renaming if we end up marking patches that should be RwF or rejected as\n> \"lacks interest\".\n\nAgreed. This probably bleeds over into the other documentation thread\na bit -- how do we want to communicate the subtle points to people in\na CF?\n\n--Jacob\n\n\n",
"msg_date": "Mon, 8 Aug 2022 08:37:41 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "Hi,\n\nOn 2022-08-08 08:37:41 -0700, Jacob Champion wrote:\n> Agreed. This probably bleeds over into the other documentation thread\n> a bit -- how do we want to communicate the subtle points to people in\n> a CF?\n\nWe should write a docs patch for it and then reference it from a bunch of\nplaces. I started down that road a few years back [1] but unfortunately lost\nsteam.\n\nRegards,\n\nAndres\n\n[1] https://postgr.es/m/20180302224056.3fps7kc6hokqk3th%40alap3.anarazel.de\n\n\n",
"msg_date": "Mon, 8 Aug 2022 08:45:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 02:53:23PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > My impression is that a lot of the patches floating from CF to CF have gotten\n> > sceptical feedback and at best a minor amount of work to address that has been\n> > done.\n> \n> That I think is a distinct issue: nobody wants to take on the\n> unpleasant job of saying \"no, we don't want this\" in a final way.\n> We may raise some objections but actually rejecting a patch is hard.\n> So it tends to slide forward until the author gives up.\n\nAgreed. There is a sense when I look at patches in that status that\nthey seem like a good idea to someone and could be useful to someone,\nbut the overhead or complexity it would add to the software doesn't seem\nwarranted. It is complex to explain that to someone, and since it is a\njudgement call, not worth the argument.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 10 Aug 2022 13:19:01 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 8:45 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-08-08 08:37:41 -0700, Jacob Champion wrote:\n> > Agreed. This probably bleeds over into the other documentation thread\n> > a bit -- how do we want to communicate the subtle points to people in\n> > a CF?\n>\n> We should write a docs patch for it and then reference if from a bunch of\n> places. I started down that road a few years back [1] but unfortunately lost\n> steam.\n\nAs we approach a new CF, I'm reminded of this patch again.\n\nAre there any concerns preventing a consensus here, that I can help\nwith? I can draft the docs patch that Andres has suggested, if that's\nseen as a prerequisite.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 25 Oct 2022 15:55:28 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "On Wed, 26 Oct 2022 at 04:25, Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Mon, Aug 8, 2022 at 8:45 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-08-08 08:37:41 -0700, Jacob Champion wrote:\n> > > Agreed. This probably bleeds over into the other documentation thread\n> > > a bit -- how do we want to communicate the subtle points to people in\n> > > a CF?\n> >\n> > We should write a docs patch for it and then reference if from a bunch of\n> > places. I started down that road a few years back [1] but unfortunately lost\n> > steam.\n>\n> As we approach a new CF, I'm reminded of this patch again.\n>\n> Are there any concerns preventing a consensus here, that I can help\n> with? I can draft the docs patch that Andres has suggested, if that's\n> seen as a prerequisite.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\n=== Applying patches on top of PostgreSQL commit ID\ne351f85418313e97c203c73181757a007dfda6d0 ===\n=== applying patch ./0001-Add-a-Returned-Needs-more-interest-close-code.patch\npatching file pgcommitfest/commitfest/migrations/0006_alter_patchoncommitfest_status.py\ncan't find file to patch at input line 57\nPerhaps you used the wrong -p or --strip option?\nThe text leading up to this was:\n--------------------------\n|diff --git a/pgcommitfest/commitfest/models.py\nb/pgcommitfest/commitfest/models.py\n|index 28722f0..433eb4a 100644\n|--- a/pgcommitfest/commitfest/models.py\n|+++ b/pgcommitfest/commitfest/models.py\n--------------------------\nNo file to patch. Skipping patch.\n3 out of 3 hunks ignored\ncan't find file to patch at input line 85\n\n[1] - http://cfbot.cputube.org/patch_41_3991.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 3 Jan 2023 17:43:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 4:14 AM vignesh C <vignesh21@gmail.com> wrote:\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\nHi Vignesh -- this is a patch for the CF app, not the Postgres repo,\nso cfbot won't be able to apply it. Let me know if there's a better\nplace for me to put it.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 3 Jan 2023 08:30:59 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "On Tue, 3 Jan 2023 at 22:01, Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Tue, Jan 3, 2023 at 4:14 AM vignesh C <vignesh21@gmail.com> wrote:\n> > The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n>\n> Hi Vignesh -- this is a patch for the CF app, not the Postgres repo,\n> so cfbot won't be able to apply it. Let me know if there's a better\n> place for me to put it.\n\nI'm not sure if this should be included in commitfest as we generally\ninclude the postgres repository patches in the commitfest. I felt we\ncould have the discussion in the thread and remove the entry from\ncommitfest.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 4 Jan 2023 10:26:20 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 8:56 PM vignesh C <vignesh21@gmail.com> wrote:\n> I'm not sure if this should be included in commitfest as we generally\n> include the postgres repository patches in the commitfest. I felt we\n> could have the discussion in the thread and remove the entry from\n> commitfest.\n\nIs there a good way to remind people that, hey, this exists as a\npatchset? (Other than me pinging the list every so often.)\n\n--Jacob\n\n\n",
"msg_date": "Wed, 4 Jan 2023 09:33:35 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 9:33 AM Jacob Champion <jchampion@timescale.com> wrote:\n> Is there a good way to remind people that, hey, this exists as a\n> patchset? (Other than me pinging the list every so often.)\n\nI've withdrawn this patchset for now, but if anyone has any ideas on\nwhere and how I can better propose features for CF itself, I'm all\nears.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Wed, 1 Feb 2023 12:44:53 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] CF app: add \"Returned: Needs more interest\""
}
] |
[
{
"msg_contents": "The effective_multixact_freeze_max_age mechanism added by commit\n53bb309d2d forces aggressive VACUUMs to take place earlier, as\nprotection against wraparound of the MultiXact member space SLRU.\nThere was also a follow-up bugfix several years later -- commit\n6bda2af039 -- which made sure that the MXID-wise cutoff used to\ndetermine which MXIDs to freeze in vacuumlazy.c could never exceed\noldestMxact (VACUUM generally cannot freeze MultiXacts that are still\nseen as running by somebody according to oldestMxact).\n\nI would like to talk about making the\neffective_multixact_freeze_max_age stuff more robust, particularly in\nthe presence of a long held snapshot that holds things up even as SLRU\nspace for MultiXact members dwindles. I have to admit that I always\nfound this part of vacuum_set_xid_limits() confusing. I suspect that\nthe problem has something to do with how we calculate vacuumlazy.c's\nmultiXactCutoff (as well as FreezeLimit): vacuum_set_xid_limits() just\nsubtracts a freezemin value from GetOldestMultiXactId(). This is\nconfusingly similar (though different in important ways) to the\nhandling for other related cutoffs that happens nearby. In particular,\nwe start from ReadNextMultiXactId() (not from GetOldestMultiXactId())\nfor the cutoff that determines if the VACUUM is going to be\naggressive. I think that this can be fixed -- see the attached patch.\n\nOf course, it wouldn't be safe to allow vacuum_set_xid_limits() to\nhand off a multiXactCutoff to vacuumlazy.c that is (for whatever\nreason) less than GetOldestMultiXactId()/oldestMxact (the bug fixed by\n6bda2af039 involved just such a scenario). But that doesn't seem like\nmuch of a problem to me. 
We can just handle it directly, as needed.\nThe attached patch handles it as follows:\n\n /* Compute multiXactCutoff, being careful to generate a valid value */\n *multiXactCutoff = nextMXID - mxid_freezemin;\n if (*multiXactCutoff < FirstMultiXactId)\n *multiXactCutoff = FirstMultiXactId;\n /* multiXactCutoff must always be <= oldestMxact */\n if (MultiXactIdPrecedes(*oldestMxact, *multiXactCutoff))\n *multiXactCutoff = *oldestMxact;\n\nThat is, we only need to make sure that the \"multiXactCutoff must\nalways be <= oldestMxact\" invariant holds once, by checking for it\ndirectly. The same thing happens with OldestXmin/FreezeLimit. That\nseems like a simpler foundation. It's also a lot more logical. Why\nshould the cutoff for freezing be held back by a long running\ntransaction, except to the extent that it is strictly necessary to do\nso to avoid wrong answers (wrong answers seen by the long running\ntransaction)?\n\nThis allows us to simplify the code that issues a WARNING about\noldestMxact/OldestXmin inside vacuum_set_xid_limits(). Why not\nactually test oldestMxact/OldestXmin directly, without worrying about\nthe limits (multiXactCutoff/FreezeLimit)? That also seems more\nlogical; there is more to be concerned about than freezing being\nblocked when OldestXmin gets very old. Though we still rely on the\nautovacuum_freeze_max_age GUC to represent \"a wildly unreasonable\nnumber of XIDs for OldestXmin to be held back by\", just because that's\nstill convenient.\n\n-- \nPeter Geoghegan",
"msg_date": "Tue, 2 Aug 2022 16:12:18 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "effective_multixact_freeze_max_age issue"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 4:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> That is, we only need to make sure that the \"multiXactCutoff must\n> always be <= oldestMxact\" invariant holds once, by checking for it\n> directly. The same thing happens with OldestXmin/FreezeLimit. That\n> seems like a simpler foundation. It's also a lot more logical. Why\n> should the cutoff for freezing be held back by a long running\n> transaction, except to the extent that it is strictly necessary to do\n> so to avoid wrong answers (wrong answers seen by the long running\n> transaction)?\n\nAnybody have any input on this? I'm hoping that this can be committed soon.\n\nISTM that the way that we currently derive FreezeLimit (by starting\nwith OldestXmin rather than starting with the same\nReadNextTransactionId() value that gets used for the aggressiveness\ncutoffs) is just an accident of history. The \"Routine vacuuming\" docs\nalready describe this behavior in terms that sound closer to the\nbehavior with the patch than the actual current behavior:\n\n\"When VACUUM scans every page in the table that is not already\nall-frozen, it should set age(relfrozenxid) to a value just a little\nmore than the vacuum_freeze_min_age setting that was used (more by the\nnumber of transactions started since the VACUUM started)\"\n\nBesides, why should there be an idiosyncratic definition of \"age\" that\nis only used with\nvacuum_freeze_min_age/vacuum_multixact_freeze_min_age? Why would\nanyone want to do less freezing in the presence of a long running\ntransaction? It simply makes no sense (unless we're forced to do so to\npreserve basic guarantees needed for correctness, such as the\n\"FreezeLimit <= OldestXmin\" invariant).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 28 Aug 2022 11:38:09 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: effective_multixact_freeze_max_age issue"
},
{
"msg_contents": "On Sun, Aug 28, 2022 at 11:38:09AM -0700, Peter Geoghegan wrote:\n> On Tue, Aug 2, 2022 at 4:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> That is, we only need to make sure that the \"multiXactCutoff must\n>> always be <= oldestMxact\" invariant holds once, by checking for it\n>> directly. The same thing happens with OldestXmin/FreezeLimit. That\n>> seems like a simpler foundation. It's also a lot more logical. Why\n>> should the cutoff for freezing be held back by a long running\n>> transaction, except to the extent that it is strictly necessary to do\n>> so to avoid wrong answers (wrong answers seen by the long running\n>> transaction)?\n> \n> Anybody have any input on this? I'm hoping that this can be committed soon.\n> \n> ISTM that the way that we currently derive FreezeLimit (by starting\n> with OldestXmin rather than starting with the same\n> ReadNextTransactionId() value that gets used for the aggressiveness\n> cutoffs) is just an accident of history. The \"Routine vacuuming\" docs\n> already describe this behavior in terms that sound closer to the\n> behavior with the patch than the actual current behavior:\n> \n> \"When VACUUM scans every page in the table that is not already\n> all-frozen, it should set age(relfrozenxid) to a value just a little\n> more than the vacuum_freeze_min_age setting that was used (more by the\n> number of transactions started since the VACUUM started)\"\n\nThe idea seems sound to me, and IMO your patch simplifies things nicely,\nwhich might be reason enough to proceed with it. However, I'm struggling\nto understand when this change would help much in practice. IIUC it will\ncause vacuums to freeze a bit more, but outside of extreme cases (maybe\nwhen vacuum_freeze_min_age is set very high and there are long-running\ntransactions), ISTM it might not have tremendously much impact. 
Is the\nintent to create some sort of long-term behavior change for autovacuum, or\nis this mostly aimed towards consistency among the cutoff calculations?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 28 Aug 2022 16:14:05 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: effective_multixact_freeze_max_age issue"
},
{
"msg_contents": "On Sun, 28 Aug 2022 at 20:38, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Aug 2, 2022 at 4:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > That is, we only need to make sure that the \"multiXactCutoff must\n> > always be <= oldestMxact\" invariant holds once, by checking for it\n> > directly. The same thing happens with OldestXmin/FreezeLimit. That\n> > seems like a simpler foundation. It's also a lot more logical. Why\n> > should the cutoff for freezing be held back by a long running\n> > transaction, except to the extent that it is strictly necessary to do\n> > so to avoid wrong answers (wrong answers seen by the long running\n> > transaction)?\n>\n> Anybody have any input on this? I'm hoping that this can be committed soon.\n\nApart from the message that this behaviour is changing, I'd prefer\nsome more description in the commit message as to why this needs\nchanging.\n\nThen, on to the patch itself:\n\n> + * XXX We don't do push back oldestMxact here, which is not ideal\n\nDo you intend to commit this marker, or is this leftover from the\ndevelopment process?\n\n> + if (*multiXactCutoff < FirstMultiXactId)\n[...]\n> + if (safeOldestMxact < FirstMultiXactId)\n[...]\n> + if (aggressiveMXIDCutoff < FirstMultiXactId)\n\nI prefer !TransactionId/MultiXactIdIsValid() over '< First\n[MultiXact/Transaction]Id', even though it is the same in\nfunctionality, because it clarifies the problem we're trying to solve.\nI understand that the use of < is pre-existing, but since we're\ntouching this code shouldn't we try to get this new code up to current\nstandards?\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 29 Aug 2022 11:20:19 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: effective_multixact_freeze_max_age issue"
},
{
"msg_contents": "On Sun, Aug 28, 2022 at 4:14 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> The idea seems sound to me, and IMO your patch simplifies things nicely,\n> which might be reason enough to proceed with it.\n\nIt is primarily a case of making things simpler. Why would it ever\nmake sense to interpret age differently in the presence of a long\nrunning transaction, though only for the FreezeLimit/MultiXactCutoff\ncutoff calculation? And not for the closely related\nfreeze_table_age/multixact_freeze_table_age calculation? It's hard to\nimagine that that was ever a deliberate choice.\n\nvacuum_set_xid_limits() didn't contain the logic for determining if\nits caller's VACUUM should be an aggressive VACUUM until quite\nrecently. Postgres 15 commit efa4a9462a put the logic for determining\naggressiveness right next to the logic for determining FreezeLimit,\nwhich made the inconsistency much more noticeable. It is easy to\nbelieve that this was really just an oversight, all along.\n\n> However, I'm struggling\n> to understand when this change would help much in practice. IIUC it will\n> cause vacuums to freeze a bit more, but outside of extreme cases (maybe\n> when vacuum_freeze_min_age is set very high and there are long-running\n> transactions), ISTM it might not have tremendously much impact. Is the\n> intent to create some sort of long-term behavior change for autovacuum, or\n> is this mostly aimed towards consistency among the cutoff calculations?\n\nI agree that this will have only a negligible impact on the majority\n(perhaps even the vast majority) of applications. The primary\njustification for this patch (simplification) seems sufficient, all\nthings considered. Even still, it's possible that it will help in\nextreme cases. 
Cases with pathological performance issues,\nparticularly those involving MultiXacts.\n\nWe already set FreezeLimit to the most aggressive possible value of\nOldestXmin when OldestXmin has itself already crossed a quasi\narbitrary XID-age threshold of autovacuum_freeze_max_age XIDs (i.e.\nwhen OldestXmin < safeLimit), with analogous rules for\nMultiXactCutoff/OldestMxact. Consequently, the way that we set the\ncutoffs for freezing in pathological cases where (say) there is a\nleaked replication slot will see a sharp discontinuity in how\nFreezeLimit is set, within and across tables. And for what?\n\nInitially, these pathological cases will end up using exactly the same\nFreezeLimit for every VACUUM against every table (assuming that we're\nusing a system-wide min_freeze_age setting) -- every VACUUM operation\nwill use a FreezeLimit of `OldestXmin - vacuum_freeze_min_age`. At a\ncertain point that'll suddenly flip -- now every VACUUM operation will\nuse a FreezeLimit of `OldestXmin`. OTOH with the patch they'd all have\na FreezeLimit that is tied to the time that each VACUUM started --\nwhich is exactly the FreezeLimit behavior that we'd get if there was\nno leaked replication slot (at least until OldestXmin finally attains\nan age of vacuum_freeze_min_age, when it finally becomes unavoidable,\neven with the patch).\n\nThere is something to be said for preserving the \"natural diversity\"\nof the relfrozenxid values among tables, too. 
The FreezeLimit we use\nis (at least for now) almost always going to be very close to (if not\nexactly) the same value as the NewFrozenXid value that we set\nrelfrozenxid to at the end of VACUUM (at least in larger tables).\nWithout the patch, a once-off problem with a leaked replication slot\ncan accidentally result in lasting problems where all of the largest\ntables get their antiwraparound autovacuums at exactly the same time.\nThe current behavior increases the risk that we'll accidentally\n\"synchronize\" the relfrozenxid values for large tables that had an\nantiwraparound vacuum during the time when OldestXmin was held back.\n\nNeedlessly using the same FreezeLimit across each VACUUM operation\nrisks disrupting the natural ebb and flow of things. It's hard to say\nhow much of a problem that is in the real world. But why take any\nchances?\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 29 Aug 2022 10:25:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: effective_multixact_freeze_max_age issue"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 10:25:50AM -0700, Peter Geoghegan wrote:\n> On Sun, Aug 28, 2022 at 4:14 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> The idea seems sound to me, and IMO your patch simplifies things nicely,\n>> which might be reason enough to proceed with it.\n> \n> It is primarily a case of making things simpler. Why would it ever\n> make sense to interpret age differently in the presence of a long\n> running transaction, though only for the FreezeLimit/MultiXactCutoff\n> cutoff calculation? And not for the closely related\n> freeze_table_age/multixact_freeze_table_age calculation? It's hard to\n> imagine that that was ever a deliberate choice.\n> \n> vacuum_set_xid_limits() didn't contain the logic for determining if\n> its caller's VACUUM should be an aggressive VACUUM until quite\n> recently. Postgres 15 commit efa4a9462a put the logic for determining\n> aggressiveness right next to the logic for determining FreezeLimit,\n> which made the inconsistency much more noticeable. It is easy to\n> believe that this was really just an oversight, all along.\n\nAgreed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 29 Aug 2022 15:40:13 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: effective_multixact_freeze_max_age issue"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 2:20 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Apart from the message that this behaviour is changing, I'd prefer\n> some more description in the commit message as to why this needs\n> changing.\n\nI usually only write a full commit message before posting a patch when\nit's a full patch series, where it can be helpful to be very explicit\nabout how the parts fit together. The single line commit message is\njust a placeholder -- I'll definitely write a better one before\ncommit.\n\n> Then, on to the patch itself:\n>\n> > + * XXX We don't do push back oldestMxact here, which is not ideal\n>\n> Do you intend to commit this marker, or is this leftover from the\n> development process?\n\nOrdinarily I would never commit an XXX comment, and probably wouldn't\neven leave one in early revisions of patches that I post to the list.\nThis is a special case, though -- it involves the \"snapshot too old\"\nfeature, which has many similar XXX/FIXME/TODO comments. I think I\nmight leave it like that when committing.\n\nThe background here is that the snapshot too old code still has lots\nof problems -- there is a FIXME comment that gives an overview of this\nin TransactionIdLimitedForOldSnapshots(). We're going to have to live\nwith the fact that that feature isn't in good shape for the\nforeseeable future. 
I can only really work around it.\n\n> > + if (*multiXactCutoff < FirstMultiXactId)\n> [...]\n> > + if (safeOldestMxact < FirstMultiXactId)\n> [...]\n> > + if (aggressiveMXIDCutoff < FirstMultiXactId)\n>\n> I prefer !TransactionId/MultiXactIdIsValid() over '< First\n> [MultiXact/Transaction]Id', even though it is the same in\n> functionality, because it clarifies the problem we're trying to solve.\n> I understand that the use of < is pre-existing, but since we're\n> touching this code shouldn't we try to get this new code up to current\n> standards?\n\nI agree in principle, but there are already 40+ other places that use\nthe same idiom in places like multixact.c. Perhaps you can propose a\npatch to change all of them at once, together?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 29 Aug 2022 18:21:37 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: effective_multixact_freeze_max_age issue"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 3:40 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Agreed.\n\nAttached is v2, which cleans up the structure of\nvacuum_set_xid_limits() a bit more. The overall idea was to improve\nthe overall flow/readability of the function by moving the WARNINGs\ninto their own \"code stanza\", just after the logic for establishing\nfreeze cutoffs and just before the logic for deciding on\naggressiveness. That is now the more logical approach (group the\nstanzas by functionality), since we can't sensibly group the code\nbased on whether it deals with XIDs or with Multis anymore (not since\nthe function was taught to deal with the question of whether caller's\nVACUUM will be aggressive).\n\nGoing to push this in the next day or so.\n\nI also removed some local variables that seem to make the function a\nlot harder to follow in v2. Consider code like this:\n\n- freezemin = freeze_min_age;\n- if (freezemin < 0)\n- freezemin = vacuum_freeze_min_age;\n- freezemin = Min(freezemin, autovacuum_freeze_max_age / 2);\n- Assert(freezemin >= 0);\n+ if (freeze_min_age < 0)\n+ freeze_min_age = vacuum_freeze_min_age;\n+ freeze_min_age = Min(freeze_min_age, autovacuum_freeze_max_age / 2);\n+ Assert(freeze_min_age >= 0);\n\nWhy have this freezemin temp variable? Why not just use the\nvacuum_freeze_min_age function parameter directly instead? That is a\nbetter representation of what's going on at the conceptual level. We\nnow assign vacuum_freeze_min_age to the vacuum_freeze_min_age arg (not\nto the freezemin variable) when our VACUUM caller passes us a value of\n-1 for that arg. -1 effectively means \"whatever the value of the\nvacuum_freeze_min_age GUC is'', which is clearer without the\nsuperfluous freezemin variable.\n\n-- \nPeter Geoghegan",
"msg_date": "Tue, 30 Aug 2022 17:24:17 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: effective_multixact_freeze_max_age issue"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 05:24:17PM -0700, Peter Geoghegan wrote:\n> Attached is v2, which cleans up the structure of\n> vacuum_set_xid_limits() a bit more. The overall idea was to improve\n> the overall flow/readability of the function by moving the WARNINGs\n> into their own \"code stanza\", just after the logic for establishing\n> freeze cutoffs and just before the logic for deciding on\n> aggressiveness. That is now the more logical approach (group the\n> stanzas by functionality), since we can't sensibly group the code\n> based on whether it deals with XIDs or with Multis anymore (not since\n> the function was taught to deal with the question of whether caller's\n> VACUUM will be aggressive).\n> \n> Going to push this in the next day or so.\n\nLGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 30 Aug 2022 20:56:54 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: effective_multixact_freeze_max_age issue"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 8:56 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> LGTM\n\nPushed, thanks.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 31 Aug 2022 11:38:54 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: effective_multixact_freeze_max_age issue"
},
{
"msg_contents": "Hello!\n\nOn 31.08.2022 21:38, Peter Geoghegan wrote:\n> On Tue, Aug 30, 2022 at 8:56 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> LGTM\n> \n> Pushed, thanks.\n> \n\nIn this commit https://github.com/postgres/postgres/commit/c3ffa731a5f99c4361203015ce2219d209fea94c\nthere are checks that oldestXmin and oldestMxact haven't become too far in the past.\nBut the corresponding error messages also say some different things about 'cutoff for freezing tuples',\ni.e. about checks for other variables: freezeLimit and multiXactCutoff.\nSee: https://github.com/postgres/postgres/commit/c3ffa731a5f99c4361203015ce2219d209fea94c?diff=split#diff-795a3938e3bed9884d426bd010670fe505580732df7d7012fad9edeb9df54badR1075\nand\nhttps://github.com/postgres/postgres/commit/c3ffa731a5f99c4361203015ce2219d209fea94c?diff=split#diff-795a3938e3bed9884d426bd010670fe505580732df7d7012fad9edeb9df54badR1080\n\nIt's interesting that prior to this commit, checks were made for freeze limits, but the error messages were talking about oldestXmin and oldestMxact.\n\nCould you clarify this point, please? I would be very grateful.\n\nAs a variant, maybe these checks for the freeze cutoffs and the oldest xmins could be split for clarity?\nThe patch attached tries to do this.\n\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 18 Oct 2022 13:43:53 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": false,
"msg_subject": "Re: effective_multixact_freeze_max_age issue"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 3:43 AM Anton A. Melnikov <aamelnikov@inbox.ru> wrote:\n> It's interesting that prior to this commit, checks were made for freeze limits, but the error messages were talking about oldestXmin and oldestMxact.\n\nThe problem really is that oldestXmin and oldestMxact are held back,\nthough. While that can ultimately result in older FreezeLimit and\nMultiXactCutoff cutoffs in vacuumlazy.c, that's just one problem.\nUsually not the worst problem.\n\nThe term \"removable cutoff\" is how VACUUM VERBOSE reports OldestXmin.\nI think that it's good to use the same terminology here.\n\n> Could you clarify this moment please? Would be very grateful.\n\nWhile this WARNING is triggered when a threshold controlled by\nautovacuum_freeze_max_age is crossed, it's not just a problem with\nfreezing. It's convenient to use autovacuum_freeze_max_age to\nrepresent \"a wildly excessive number of XIDs for OldestXmin to be held\nback by, no matter what\". In practice it is usually already a big\nproblem when OldestXmin is held back by far fewer XIDs than that, but\nit's hard to reason about when exactly we need to consider that a\nproblem. However, we can at least be 100% sure of real problems when\nOldestXmin age reaches autovacuum_freeze_max_age. There is no longer\nany doubt that we need to warn the user, since antiwraparound\nautovacuum cannot work as designed at that point. But the WARNING is\nnevertheless not primarily (or not exclusively) about not being able\nto freeze. It's also about not being able to remove bloat.\n\nFreezing can be thought of as roughly the opposite process to removing\ndead tuples deleted by now committed transactions. OldestXmin is the\ncutoff both for removing dead tuples (which we want to get rid of\nimmediately), and freezing live all-visible tuples (which we want to\nkeep long term). 
FreezeLimit is usually 50 million XIDs before\nOldestXmin (the freeze_min_age default), but that's just how we\nimplement lazy freezing, which is just an optimization.\n\n> As variant may be split these checks for the freeze cuttoffs and the oldest xmins for clarity?\n> The patch attached tries to do this.\n\nI don't think that this is an improvement. For one thing the\nFreezeLimit cutoff will have been held back (relative to nextXID-wise\ntable age) by more than the freeze_min_age setting for a long time\nbefore this WARNING is hit -- so we're not going to show the WARNING\nin most cases where freeze_min_age has been completely ignored (it\nmust be ignored in extreme cases because FreezeLimit must always be <=\nOldestXmin). Also, the proposed new WARNING is only seen when we're\nbound to also see the existing OldestXmin WARNING already. Why have 2\nWARNINGs about exactly the same problem?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 18 Oct 2022 10:56:03 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: effective_multixact_freeze_max_age issue"
},
{
"msg_contents": "Hello!\n\nOn 18.10.2022 20:56, Peter Geoghegan wrote:\n \n> The term \"removable cutoff\" is how VACUUM VERBOSE reports OldestXmin.\n> I think that it's good to use the same terminology here.\n\nThanks for the explanation! Firstly exactly this term confused me.\nSure, the same terminology makes all easier to understand.\n\n> \n>> Could you clarify this moment please? Would be very grateful.\n> \n> While this WARNING is triggered when a threshold controlled by\n> autovacuum_freeze_max_age is crossed, it's not just a problem with\n> freezing. It's convenient to use autovacuum_freeze_max_age to\n> represent \"a wildly excessive number of XIDs for OldestXmin to be held\n> back by, no matter what\". In practice it is usually already a big\n> problem when OldestXmin is held back by far fewer XIDs than that, but\n> it's hard to reason about when exactly we need to consider that a\n> problem. However, we can at least be 100% sure of real problems when\n> OldestXmin age reaches autovacuum_freeze_max_age. There is no longer\n> any doubt that we need to warn the user, since antiwraparound\n> autovacuum cannot work as designed at that point. But the WARNING is\n> nevertheless not primarily (or not exclusively) about not being able\n> to freeze. It's also about not being able to remove bloat.> Freezing can be thought of as roughly the opposite process to removing\n> dead tuples deleted by now committed transactions. OldestXmin is the\n> cutoff both for removing dead tuples (which we want to get rid of\n> immediately), and freezing live all-visible tuples (which we want to\n> keep long term). FreezeLimit is usually 50 million XIDs before\n> OldestXmin (the freeze_min_age default), but that's just how we\n> implement lazy freezing, which is just an optimization.\n>\n\nThat's clear. Thanks a lot!\n\n>> As variant may be split these checks for the freeze cuttoffs and the oldest xmins for clarity?\n>> The patch attached tries to do this.\n> \n> I don't think that this is an improvement. For one thing the\n> FreezeLimit cutoff will have been held back (relative to nextXID-wise\n> table age) by more than the freeze_min_age setting for a long time\n> before this WARNING is hit -- so we're not going to show the WARNING\n> in most cases where freeze_min_age has been completely ignored (it\n> must be ignored in extreme cases because FreezeLimit must always be <=\n> OldestXmin).\n\n> Also, the proposed new WARNING is only seen when we're\n> bound to also see the existing OldestXmin WARNING already. Why have 2\n> WARNINGs about exactly the same problem?> \n\nI didn't understand this moment.\n\nIf the FreezeLimit is always older than OldestXmin or equal to it according to:\n\n> FreezeLimit is usually 50 million XIDs before\n> OldestXmin (the freeze_min_age default),\n \ncan't there be a situation like this?\n\n ______________________________\n | autovacuum_freeze_max_age |\npast <_______|____________|_____________|________________|> future\n FreezeLimit safeOldestXmin oldestXmin nextXID\n |___________________________________________|\n freeze_min_age\n\nIn that case the existing OldestXmin WARNING will not fire while the new one will.\nAs the FreezeLimit is only optimization it's likely not a warning but a notice message\nbefore OldestXmin WARNING and possible real problems in the future.\nMaybe it can be useful in a such kind?\n\nWith best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 24 Oct 2022 11:18:11 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": false,
"msg_subject": "Re: effective_multixact_freeze_max_age issue"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 1:18 AM Anton A. Melnikov <aamelnikov@inbox.ru> wrote:\n> > Also, the proposed new WARNING is only seen when we're\n> > bound to also see the existing OldestXmin WARNING already. Why have 2\n> > WARNINGs about exactly the same problem?>\n>\n> I didn't understand this moment.\n>\n> If the FreezeLimit is always older than OldestXmin or equal to it according to:\n>\n> > FreezeLimit is usually 50 million XIDs before\n> > OldestXmin (the freeze_min_age default),\n>\n> can't there be a situation like this?\n\nI don't understand what you mean. FreezeLimit *isn't* always exactly\n50 million XIDs before OldestXmin -- not anymore. In fact that's the\nmain benefit of this work (commit c3ffa731). That detail has changed\n(and changed for the better). Though it will only be noticeable to\nusers when an old snapshot holds back OldestXmin by a significant\namount.\n\nIt is true that we must always respect the classic \"FreezeLimit <=\nOldestXmin\" invariant. So naturally vacuum_set_xid_limits() continues\nto make sure that the invariant holds in all cases, if necessary by\nholding back FreezeLimit:\n\n+ /* freezeLimit must always be <= oldestXmin */\n+ if (TransactionIdPrecedes(*oldestXmin, *freezeLimit))\n+ *freezeLimit = *oldestXmin;\n\nWhen we *don't* have to do this (very typical when\nvacuum_freeze_min_age is set to its default of 50 million), then\nFreezeLimit won't be affected by old snapshots. Overall, FreezeLimit\nmust either be:\n\n1. *Exactly* freeze_min_age XIDs before nextXID (note that it is\nnextXID instead of OldestXmin here, as of commit c3ffa731).\n\nor:\n\n2. *Exactly* OldestXmin.\n\nFreezeLimit must always be either exactly 1 or exactly 2, regardless\nof anything else (like long running transactions/snapshots).\nImportantly, we still never interpret freeze_min_age as more than\n\"autovacuum_freeze_max_age / 2\" when determining FreezeLimit. While\nthe safeOldestXmin cutoff is \"nextXID - autovacuum_freeze_max_age\".\n\nBefore commit c3ffa731, FreezeLimit would sometimes be interpreted as\nexactly OldestXmin -- it would be set to OldestXmin directly when the\nWARNING was given. But now we get smoother behavior, without any big\ndiscontinuities in how FreezeLimit is set over time when OldestXmin is\nheld back generally.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 24 Oct 2022 07:56:24 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: effective_multixact_freeze_max_age issue"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 7:56 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I don't understand what you mean. FreezeLimit *isn't* always exactly\n> 50 million XIDs before OldestXmin -- not anymore. In fact that's the\n> main benefit of this work (commit c3ffa731). That detail has changed\n> (and changed for the better). Though it will only be noticeable to\n> users when an old snapshot holds back OldestXmin by a significant\n> amount.\n\nI meant that the new behavior will only have a noticeable impact when\nOldestXmin is significantly earlier than nextXID. Most of the time\nthere won't be any old snapshots, which means that there will only be\na negligible difference between OldestXmin and nextXID when things are\noperating normally (OldestXmin will still probably be a tiny bit\nearlier than nextXID, but not enough to matter). And so most of the\ntime the difference between the old approach and the new approach will\nbe completely negligible; 50 million XIDs is usually a huge number (it\nis usually far far larger than the difference between OldestXmin and\nnextXID).\n\nBTW, I have some sympathy for the argument that the WARNINGs that we\nhave here may not be enough -- we only warn when the situation is\nalready extremely serious. I just don't see any reason why that\nproblem should be treated as a regression caused by commit c3ffa731.\nThe WARNINGs may be inadequate, but that isn't new.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 24 Oct 2022 08:32:14 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: effective_multixact_freeze_max_age issue"
},
{
"msg_contents": "Hi, Peter!\n\nSorry! For some time I forgot about this thread and forgot to\nthank you for your answer.\n\nThereby now it's clear for me that this patch allows the autovacuum to win some\ntime between OldestXmin and nextXID that could not be used before.\nI think it may be especially useful for high-load applications.\n\nDigging deeper, I found some inconsistency between current docs and\nthe real behavior and would like to bring this to your attention.\n\nNow the doc says that an aggressive vacuum scan will occur for any\ntable whose multixact-age is greater than autovacuum_multixact_freeze_max_age.\nBut really vacuum_get_cutoffs() will return true when\nmultixact-age is greater than or equal to autovacuum_multixact_freeze_max_age\nif relminmxid is not equal to one.\nIf relminmxid is equal to one then vacuum_get_cutoffs() returns true even when\nmultixact-age is less by one than autovacuum_multixact_freeze_max_age.\nFor instance, if relminmxid = 1 and multixact_freeze_table_age\n = 100,\nvacuum will start to be aggressive from the age of 99\n(when ReadNextMultiXactId()\n= 100).\n \n\nThe patch attached attempts to fix this and tries to use the same semantics as in the doc.\nThe similar fix was made for common xacts too.\nAn additional check for relminxid allows to disable aggressive scan\nat all if it is invalid. But I'm not sure if such a check is needed here.\nPlease take it into account.\n \nWith kind regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 5 Apr 2024 09:30:02 +0300",
"msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: effective_multixact_freeze_max_age issue"
}
]
[
{
"msg_contents": "Hi, I hope supported Unicode Variation Selector on collate.\n\nD209007=# create table ivstesticu (\nD209007(# moji text\nD209007(# );\nD209007=# create table ivstest (\nD209007(# moji text collate \"ja-x-icu\" CONSTRAINT firstkey PRIMARY KEY D209007(# ); \nD209007=# insert into ivstest (moji) values ( U&'\\+003436' || U&'\\+0E0101' || U&'\\+00304D');\nD209007=# insert into ivstest (moji) values ( U&'\\+003436' || U&'\\+00304D');\nD209007=# select moji from ivstest where moji like '%' || U&'\\+00304B' || '%';\n-------------\n㐶󠄁き\n㐶き\n(2 行)\n\nexpected\n-------------\n㐶き\n(1 行)\n\n\nBest regards,\n\n\n\n\n",
"msg_date": "Wed, 3 Aug 2022 08:54:31 +0900",
"msg_from": "=?UTF-8?B?6I2S5LqV5YWD5oiQ?= <n2029@ndensan.co.jp>",
"msg_from_op": true,
"msg_subject": "collate not support Unicode Variation Selector"
},
{
"msg_contents": "Hi, I hope supported Unicode Variation Selector on collate.\n\nI will resend it because there was a typo.\n\nD209007=# create table ivstest ( moji text collate \"ja-x-icu\" CONSTRAINT firstkey PRIMARY KEY ); \nD209007=# insert into ivstest (moji) values ( U&'\\+003436' || U&'\\+0E0101' || U&'\\+00304D');\nD209007=# insert into ivstest (moji) values ( U&'\\+003436' || U&'\\+00304D');\nD209007=# select moji from ivstest where moji like '%' || U&'\\+003436' || '%';\n-------------\n㐶󠄁き\n㐶き\n(2 行)\n\nexpected\n-------------\n㐶き\n(1 行)\n\n\nBest regards,\n\n\n\n\n\n\n",
"msg_date": "Wed, 3 Aug 2022 09:09:35 +0900",
"msg_from": "=?UTF-8?B?6I2S5LqV5YWD5oiQ?= <n2029@ndensan.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: collate not support Unicode Variation Selector"
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 12:09 PM 荒井元成 <n2029@ndensan.co.jp> wrote:\n> D209007=# create table ivstest ( moji text collate \"ja-x-icu\" CONSTRAINT firstkey PRIMARY KEY );\n> D209007=# insert into ivstest (moji) values ( U&'\\+003436' || U&'\\+0E0101' || U&'\\+00304D');\n> D209007=# insert into ivstest (moji) values ( U&'\\+003436' || U&'\\+00304D');\n> D209007=# select moji from ivstest where moji like '%' || U&'\\+003436' || '%';\n> -------------\n> 㐶󠄁き\n> 㐶き\n> (2 行)\n>\n> expected\n> -------------\n> 㐶き\n> (1 行)\n\nSo you want to match only strings that contain U&'\\+003436' *not*\nfollowed by a variation selector (as we also discussed at [1]). I'm\npretty sure that everything in PostgreSQL considers variation\nselectors to be separate characters. Perhaps it is possible to write\na regular expression covering the variation selector ranges, something\nlike '\\U00003436[^\\U000E0100-\\U000E010EF]'?\n\nHere's an example using Latin characters that are easier for me, but\nshow approximately the same thing, since variation selectors are a bit\nlike \"combining\" characters:\n\npostgres=# create table t (x text);\nCREATE TABLE\npostgres=# insert into t values ('e'), ('ef'), ('e' || U&'\\0301');\nINSERT 0 3\npostgres=# select * from t;\n x\n----\n e\n ef\n é\n(3 rows)\n\npostgres=# select * from t where x ~ 'e([^\\u0300-\\u036f]|$)';\n x\n----\n e\n ef\n(2 rows)\n\n[1] https://www.postgresql.org/message-id/flat/013f01d873bb%24ff5f64b0%24fe1e2e10%24%40ndensan.co.jp\n\n\n",
"msg_date": "Wed, 3 Aug 2022 12:41:51 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: collate not support Unicode Variation Selector"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> So you want to match only strings that contain U&'\\+003436' *not*\n> followed by a variation selector (as we also discussed at [1]). I'm\n> pretty sure that everything in PostgreSQL considers variation\n> selectors to be separate characters.\n\nThere might be something that doesn't, but LIKE certainly isn't it.\nI don't believe plain LIKE is collation-aware at all, it just sees\ncharacters to match or not match. ILIKE is a little collation-aware,\nbut it's still not going to consider a combining sequence as one\ncharacter. The same for the regex operators.\n\nMaybe it would help if you run the strings through normalize() first?\nI'm not sure if that can combine combining characters.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Aug 2022 20:56:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: collate not support Unicode Variation Selector"
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 12:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Maybe it would help if you run the strings through normalize() first?\n> I'm not sure if that can combine combining characters.\n\nI think the similarity between Latin combining characters and these\nideographic variations might end there. I don't think there is a\nsingle codepoint version of U&'\\+003436' || U&'\\+0E0101', unlike é.\nThis system is for controlling small differences in rendering for the\n\"same\" character[1]. My computer doesn't even show the OP's example\nglyphs as different (to my eyes, at least; I can see on a random\npicture I found[2] that the one with the e0101 selector is supposed to\nhave a ... what do you call that ... a tiny gap :-)).\n\n[1] http://www.unicode.org/reports/tr37/tr37-14.html\n[2] https://glyphwiki.org/wiki/u3436\n\n\n",
"msg_date": "Wed, 3 Aug 2022 14:02:08 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: collate not support Unicode Variation Selector"
},
{
"msg_contents": "At Wed, 3 Aug 2022 14:02:08 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \r\n> On Wed, Aug 3, 2022 at 12:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n> > Maybe it would help if you run the strings through normalize() first?\r\n> > I'm not sure if that can combine combining characters.\r\n> \r\n> I think the similarity between Latin combining characters and these\r\n> ideographic variations might end there. I don't think there is a\r\n> single codepoint version of U&'\\+003436' || U&'\\+0E0101', unlike é.\r\n\r\nRight. At least in Japanese texts, the two \"character\"s are the same\r\nglyph. In that sense the loss of variation selectors from a text\r\ndoesn't alter its meaning and doesn't hurt correctness at all.\r\nIdeographic variation is useful in special cases where their\r\nideographic identity is crucial.\r\n\r\n> This system is for controlling small differences in rendering for the\r\n> \"same\" character[1]. My computer doesn't even show the OP's example\r\n> glyphs as different (to my eyes, at least; I can see on a random\r\n> picture I found[2] that the one with the e0101 selector is supposed to\r\n> have a ... what do you call that ... a tiny gap :-)).\r\n\r\nThey need variation-aware fonts and application support to render. So\r\nwhen even *I* see the two characters on Excel (which I believe doesn't\r\nhave that support by default), they would look exactly same. In that\r\nsense, my opinion on the behavior is that all ideographic variations\r\nrather should be treated as the same character in searching in general\r\ncontext. In other words, text matching should just drop variation\r\nselectors as the default behavior.\r\n\r\nICU:Collator [1] has the notion of \"collation strength\" and I saw in\r\nan article that only Collator::IDENTICAL among five alternatives makes\r\ndistinction between ideographic variations of a glyph.\r\n\r\n> [1] http://www.unicode.org/reports/tr37/tr37-14.html\r\n> [2] https://glyphwiki.org/wiki/u3436\r\n\r\n[1] https://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/classicu_1_1Collator.html\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Wed, 03 Aug 2022 15:25:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: collate not support Unicode Variation Selector"
},
{
"msg_contents": "Thank you for your reply.\n\nAbout 60,000 characters are registered in the IPAmj Mincho font designated by the national specifications. \nIt should be able to handle all characters.\n\nregards.\n\n\n-----Original Message-----\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com> \nSent: Wednesday, August 3, 2022 3:26 PM\nTo: thomas.munro@gmail.com\nCc: tgl@sss.pgh.pa.us; n2029@ndensan.co.jp; pgsql-hackers@lists.postgresql.org\nSubject: Re: collate not support Unicode Variation Selector\n\nAt Wed, 3 Aug 2022 14:02:08 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Wed, Aug 3, 2022 at 12:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Maybe it would help if you run the strings through normalize() first?\n> > I'm not sure if that can combine combining characters.\n> \n> I think the similarity between Latin combining characters and these \n> ideographic variations might end there. I don't think there is a \n> single codepoint version of U&'\\+003436' || U&'\\+0E0101', unlike é.\n\nRight. At least in Japanese texts, the two \"character\"s are the same glyph. In that sense the loss of variation selectors from a text doesn't alter its meaning and doesn't hurt correctness at all.\nIdeographic variation is useful in special cases where their ideographic identity is crucial.\n\n> This system is for controlling small differences in rendering for the \n> \"same\" character[1]. My computer doesn't even show the OP's example \n> glyphs as different (to my eyes, at least; I can see on a random \n> picture I found[2] that the one with the e0101 selector is supposed to \n> have a ... what do you call that ... a tiny gap :-)).\n\nThey need variation-aware fonts and application support to render. So when even *I* see the two characters on Excel (which I believe doesn't have that support by default), they would look exactly same. In that sense, my opinion on the behavior is that all ideographic variations rather should be treated as the same character in searching in general context. In other words, text matching should just drop variation selectors as the default behavior.\n\nICU:Collator [1] has the notion of \"collation strength\" and I saw in an article that only Colator::IDENTICAL among five alternatives makes distinction between ideographic variations of a glyph.\n\n> [1] http://www.unicode.org/reports/tr37/tr37-14.html\n> [2] https://glyphwiki.org/wiki/u3436\n\n[1] https://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/classicu_1_1Collator.html\n\nregards.\n\n--\nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Wed, 3 Aug 2022 20:12:53 +0900",
"msg_from": "=?UTF-8?B?6I2S5LqV5YWD5oiQ?= <n2029@ndensan.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: collate not support Unicode Variation Selector"
},
{
"msg_contents": "At Wed, 3 Aug 2022 20:12:53 +0900, 荒井元成 <n2029@ndensan.co.jp> wrote in \n> Thank you for your reply.\n> \n> About 60,000 characters are registered in the IPAmj Mincho font designated by the national specifications. \n> It should be able to handle all characters.\n\nYeah, it is one of those fonts. But I didn't know that MS-Word can\n*display* ideographic variations. But it is disappointing that input\nrequires copy-pasting from the Web.. Maybe those characters can be\ninput smoothly by using ATOK or the like..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 04 Aug 2022 17:23:48 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: collate not support Unicode Variation Selector"
},
{
"msg_contents": "Thank you for your reply.\n\nSQLServer supports Unicode Variation Selector, so I would like PostgreSQL to\nsupport them as well.\n\nRegards.\n\n--\nMotonari Arai\n\n-----Original Message-----\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nSent: Thursday, August 4, 2022 5:24 PM\nTo: n2029@ndensan.co.jp\nCc: thomas.munro@gmail.com; tgl@sss.pgh.pa.us;\npgsql-hackers@lists.postgresql.org\nSubject: Re: collate not support Unicode Variation Selector\n\nAt Wed, 3 Aug 2022 20:12:53 +0900, 荒井元成 <n2029@ndensan.co.jp> wrote in\n> Thank you for your reply.\n>\n> About 60,000 characters are registered in the IPAmj Mincho font designated\nby the national specifications.\n> It should be able to handle all characters.\n\nYeah, it is one of that fonts. But I didn't know that MS-Word can\n*display* ideographic variations. But it is dissapoinging that input\nrequires to copy-paste from the Web.. Maybe that characters can be input\nsmoothly by using ATOK or alikes..\n\nregards.\n\n--\nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n\n",
"msg_date": "Thu, 4 Aug 2022 19:01:33 +0900",
"msg_from": "=?iso-2022-jp?B?GyRCOVMwZjg1QC4bKEI=?= <n2029@ndensan.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: collate not support Unicode Variation Selector"
},
{
"msg_contents": "At Thu, 4 Aug 2022 19:01:33 +0900, 荒井元成 <n2029@ndensan.co.jp> wrote in \n> Thank you for your reply.\n> \n> SQLServer supports Unicode Variation Selector, so I would like PostgreSQL to\n> support them as well.\n\nI studied the code a bit further, then found that simple comparison\ncan ignore selectors by using nondeterministic collation.\n\nCREATE COLLATION col1 (provider=icu, locale='ja', deterministic=false);\nSELECT (U&'\\+003436' || U&'\\+0E0101' || U&'\\+00304D' collate col1) = U&'\\+003436' || U&'\\+00304D';\n ?column? \n----------\n t\n\nHowever LIKE dislikes this.\n\n> ERROR: nondeterministic collations are not supported for LIKE\n\nDeterministic collations assume text equality means bytewise\nequality. So, the \"problem\" behavior is correct in a sense. In that\nsense, those functions that do not support nondeterministic collations\ncan be implemented without considering ICU, which leads to the\n\"problem\" behavior. ICU has a regular expression function so LIKE might\nbe able to be implemented using this. If it is done, and if a\nnon-deterministic IVS-sensitive collation is available (I didn't find\nhow to get one..), LIKE would work as you expect.\n\nBut..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 05 Aug 2022 15:50:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: collate not support Unicode Variation Selector"
}
]
[
{
"msg_contents": "Hi.\n\nSuppose on master, I run a *multi-query* using PQexec and save the value\nreturned by pg_current_wal_insert_lsn:\n\nmaster_lsn = query(master, \"INSERT INTO some VALUES (...); SELECT\npg_current_wal_insert_lsn()\")\n\nThen I run a PQexec query on a replica and save the value returned by\npg_last_wal_replay_lsn:\n\nreplica_lsn = query(replica, \"SELECT pg_last_wal_replay_lsn()\")\n\nThe question to experts in PG internals: *is it guaranteed that, as long as\nreplica_lsn >= master_lsn (GREATER OR EQUAL, not just greater), then a\nsubsequent read from replica will always return me the inserted record*\n(i.e. the replica is up to date), considering noone updates/deletes in that\ntable?\n\nI'm asking, because according to some hints in the docs, this should be\ntrue. But for some reason, we have to use \"greater\" (not \"greater or\nequals\") condition in the real code, since with just \">=\" the replica\ndoesn't sometimes read the written data.\n\nHi.Suppose on master, I run a multi-query using PQexec and save the value returned by pg_current_wal_insert_lsn:master_lsn = query(master, \"INSERT INTO some VALUES (...); SELECT pg_current_wal_insert_lsn()\")Then I run a PQexec query on a replica and save the value returned by pg_last_wal_replay_lsn:replica_lsn = query(replica, \"SELECT pg_last_wal_replay_lsn()\")The question to experts in PG internals: is it guaranteed that, as long as replica_lsn >= master_lsn (GREATER OR EQUAL, not just greater), then a subsequent read from replica will always return me the inserted record (i.e. the replica is up to date), considering noone updates/deletes in that table?I'm asking, because according to some hints in the docs, this should be true. But for some reason, we have to use \"greater\" (not \"greater or equals\") condition in the real code, since with just \">=\" the replica doesn't sometimes read the written data.",
"msg_date": "Tue, 2 Aug 2022 18:57:41 -0700",
"msg_from": "Dmitry Koterov <dmitry.koterov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Does having pg_last_wal_replay_lsn[replica] >=\n pg_current_wal_insert_lsn[master]\n guarantee that the replica is caught up?"
},
{
"msg_contents": "I'm not sure this fits -hackers..\n\nAt Tue, 2 Aug 2022 18:57:41 -0700, Dmitry Koterov <dmitry.koterov@gmail.com> wrote in \n> Hi.\n> \n> Suppose on master, I run a *multi-query* using PQexec and save the value\n> returned by pg_current_wal_insert_lsn:\n> \n> master_lsn = query(master, \"INSERT INTO some VALUES (...); SELECT\n> pg_current_wal_insert_lsn()\")\n> \n> Then I run a PQexec query on a replica and save the value returned by\n> pg_last_wal_replay_lsn:\n> \n> replica_lsn = query(replica, \"SELECT pg_last_wal_replay_lsn()\")\n> \n> The question to experts in PG internals: *is it guaranteed that, as long as\n> replica_lsn >= master_lsn (GREATER OR EQUAL, not just greater), then a\n> subsequent read from replica will always return me the inserted record*\n> (i.e. the replica is up to date), considering noone updates/deletes in that\n> table?\n\nhttps://www.postgresql.org/docs/devel/libpq-exec.html\n\n> The command string can include multiple SQL commands (separated by\n> semicolons). Multiple queries sent in a single PQexec call are\n> processed in a single transaction, unless there are explicit\n> BEGIN/COMMIT commands included in the query string to divide it into\n> multiple transactions.\n\nIf the query() runs PQexec() with the same string, the call to\npg_current_wal_insert_lsn() is made before the insert is committed.\nThat behavior can be emulated on psql. (The backslash before semicolon\nis crucial. It lets the connected queries be sent in a single\nPQexec())\n\n=# select pg_current_wal_insert_lsn();\n pg_current_wal_insert_lsn \n---------------------------\n 0/68E5038\n(1 row)\n=# insert into t values(0)\\; select pg_current_wal_lsn();\nINSERT 0 1\n pg_current_wal_lsn \n--------------------\n 0/68E5038\n(1 row)\n=# select pg_current_wal_insert_lsn();\n pg_current_wal_insert_lsn \n---------------------------\n 0/68E50A0\n(1 row)\n\n$ pg_waldump -s'0/68E5038' -e'0/68E50A0' $PGDATA/pg_wal/000000010000000000000006\nrmgr: Heap len (rec/tot): 59/ 59, tx: 770, lsn: 0/068E5038, prev 0/068E5000, desc: INSERT off 15 flags 0x00, blkref #0: rel 1663/5/16424 blk 0\nrmgr: Transaction len (rec/tot): 34/ 34, tx: 770, lsn: 0/068E5078, prev 0/068E5038, desc: COMMIT 2022-08-03 15:49:43.749158 JST\n\nSo, the replica cannot show the inserted data at the LSN the function\nreturned. If you explicitly ended transaction before\npg_current_wal_insert_lsn() call, the expected LSN would be returned.\n\n=# select pg_current_wal_insert_lsn();\n pg_current_wal_insert_lsn \n---------------------------\n 0/68E75C8\n(1 row)\n=# begin\\;insert into t values(0)\\;commit\\; select pg_current_wal_lsn();\n pg_current_wal_lsn \n--------------------\n 0/68E7958\n$ pg_waldump -s'0/68E75C8' -e'0/68E7958' $PGDATA/pg_wal/000000010000000000000006\n prev 0/068E7590, desc: INSERT off 22 flags 0x00, blkref #0: rel 1663/5/16424 blk 0 FPW\nrmgr: Transaction len (rec/tot): 34/ 34, tx: 777, lsn: 0/068E7930, prev 0/068E75C8, desc: COMMIT 2022-08-03 16:09:13.516498 JST\n\n\n> I'm asking, because according to some hints in the docs, this should be\n> true. But for some reason, we have to use \"greater\" (not \"greater or\n> equals\") condition in the real code, since with just \">=\" the replica\n> doesn't sometimes read the written data.\n\nThus the wrong LSN appears to have caused the behavior.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 03 Aug 2022 16:32:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Does having pg_last_wal_replay_lsn[replica] >=\n pg_current_wal_insert_lsn[master] guarantee that the replica is caught up?"
},
{
"msg_contents": "Thank you for the detailed explanation!\n\nI doubt many people from -general would actually be able to provide such\ninfo since the spirit of that list is to find work-arounds for problems and\nquestions at user level rather than dig into the details on how something\nactually works.\n\nIt's worth adding to the documentation, with that exact example BTW:\nhttps://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-BACKUP-TABLE\n(I can try submitting a docs PR if you think it's a good idea).\n\nAlso, when I said that we use PQexec, I did it just for an illustration: in\npractice we use the node-postgres JS library which sends multi-statement\nprotocol messages. So - transaction wise - it works the same way as PQexec\nwith multiple queries, but it returns responses for ALL queries in the\nbatch, not just for the last one (very convenient BTW, saves on network\nround-trip latency). This mode is fully supported by PG wire protocol:\nhttps://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-MULTI-STATEMENT\n\n\nOn Wed, Aug 3, 2022 at 12:32 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n>\n> <snip>\n>\n\n\n=# select pg_current_wal_insert_lsn();\n> pg_current_wal_insert_lsn\n> ---------------------------\n> 0/*68E5038*\n> (1 row)\n> =# insert into t values(0)\\; select pg_current_wal_lsn();\n> INSERT 0 1\n> pg_current_wal_lsn\n> --------------------\n> 0/*68E5038*\n> (1 row)\n> =# select pg_current_wal_insert_lsn();\n> pg_current_wal_insert_lsn\n> ---------------------------\n> 0/68E50A0\n> (1 row)\n\n\n\n<snip>\n>\n\n> =# select pg_current_wal_insert_lsn();\n> pg_current_wal_insert_lsn\n> ---------------------------\n> 0/*68E75C8*\n> (1 row)\n> =# begin\\;insert into t values(0)\\;commit\\; select pg_current_wal_lsn();\n> pg_current_wal_lsn\n> --------------------\n> 0/*68E7958*\n>\n\nThank you for the detailed explanation!I doubt many people from -general would actually be able to provide such info since the spirit of that list is to find work-arounds for problems and questions at user level rather than dig into the details on how something actually works.It's worth adding to the documentation, with that exact example BTW:https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-BACKUP-TABLE(I can try submitting a docs PR if you think it's a good idea).Also, when I said that we use PQexec, I did it just for an illustration: in practice we use the node-postgres JS library which sends multi-statement protocol messages. So - transaction wise - it works the same way as PQexec with multiple queries, but it returns responses for ALL queries in the batch, not just for the last one (very convenient BTW, saves on network round-trip latency). This mode is fully supported by PG wire protocol: https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-MULTI-STATEMENTOn Wed, Aug 3, 2022 at 12:32 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:<snip> \n=# select pg_current_wal_insert_lsn();\n pg_current_wal_insert_lsn \n---------------------------\n 0/68E5038\n(1 row)\n=# insert into t values(0)\\; select pg_current_wal_lsn();\nINSERT 0 1\n pg_current_wal_lsn \n--------------------\n 0/68E5038\n(1 row)\n=# select pg_current_wal_insert_lsn();\n pg_current_wal_insert_lsn \n---------------------------\n 0/68E50A0\n(1 row) <snip>\n=# select pg_current_wal_insert_lsn();\n pg_current_wal_insert_lsn \n---------------------------\n 0/68E75C8\n(1 row)\n=# begin\\;insert into t values(0)\\;commit\\; select pg_current_wal_lsn();\n pg_current_wal_lsn \n--------------------\n 0/68E7958",
"msg_date": "Wed, 3 Aug 2022 02:10:08 -0700",
"msg_from": "Dmitry Koterov <dmitry.koterov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Does having pg_last_wal_replay_lsn[replica] >=\n pg_current_wal_insert_lsn[master] guarantee that the replica is caught up?"
}
] |
[
{
"msg_contents": "Hello,\n\nFollowing the bug report at [1], I sent the attached patch to pgsql-bugs \nmailing list. I'm starting a thread here to add it to the next commitfest.\n\nThe problem I'm trying to solve is that, contrary to btree, gist and sp-gist \nindexes, gin indexes do not charge any cpu-cost for descending the entry tree.\n\nThis can be a problem in cases where the io cost is very low. This can happen \nwith manual tuning of course, but more surprisingly when the the IO cost is \namortized over a large number of iterations in a nested loop. In that case, we \nbasically consider it free since everything should already be in the shared \nbuffers. This leads to some inefficient plans, as an equivalent btree index \nshould be picked instead. \n\nThis has been discovered in PG14, as this release makes it possible to use a \npg_trgm gin index with the equality operator. Before that, only the btree \nwould have been considered and as such the discrepancy in the way we charge \ncpu cost didn't have noticeable effects. However, I suspect users of btree_gin \ncould have the same kind of problems in prior versions.\n\nBest regards,\n\n[1]: https://www.postgresql.org/message-id/flat/\n2187702.iZASKD2KPV%40aivenronan#0c2498c6a85e31a589b3e9a6a3616c52\n\n-- \nRonan Dunklau",
"msg_date": "Wed, 03 Aug 2022 09:26:32 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Fix gin index cost estimation"
},
{
"msg_contents": "Ronan Dunklau <ronan.dunklau@aiven.io> writes:\n> Following the bug report at [1], I sent the attached patch to pgsql-bugs \n> mailing list. I'm starting a thread here to add it to the next commitfest.\n\nThat link didn't work easily for me (possibly because it got split across\nlines). Here's another one for anybody having similar issues:\n\nhttps://www.postgresql.org/message-id/flat/CABs3KGQnOkyQ42-zKQqiE7M0Ks9oWDSee%3D%2BJx3-TGq%3D68xqWYw%40mail.gmail.com\n\n> The problem I'm trying to solve is that, contrary to btree, gist and sp-gist \n> indexes, gin indexes do not charge any cpu-cost for descending the entry tree.\n\nAs I said in the bug report thread, I think we really need to take a look\nat all of our index AMs not just GIN. I extended your original reproducer\nscript to check all the AMs (attached), and suppressed memoize because it\nseemed to behave differently for different AMs. Here's what I see for the\nestimated costs of the inner indexscan, and the actual runtime, for each:\n\nbtree:\n -> Index Only Scan using t1_btree_index on t1 (cost=0.28..0.30 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=20000)\n Execution Time: 19.763 ms\n\ngin (gin_trgm_ops):\n -> Bitmap Heap Scan on t1 (cost=0.01..0.02 rows=1 width=4) (actual time=0.003..0.003 rows=1 loops=20000)\n -> Bitmap Index Scan on t1_gin_index (cost=0.00..0.01 rows=1 width=0) (actual time=0.003..0.003 rows=1 loops=20000)\n Execution Time: 75.216 ms\n\ngist:\n -> Index Only Scan using t1_gist_index on t1 (cost=0.14..0.16 rows=1 width=4) (actual time=0.014..0.014 rows=1 loops=20000)\n Execution Time: 277.799 ms\n\nspgist:\n -> Index Only Scan using t1_spgist_index on t1 (cost=0.14..0.16 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=20000)\n Execution Time: 51.407 ms\n\nhash:\n -> Index Scan using t1_hash_index on t1 (cost=0.00..0.02 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=20000)\n Execution Time: 13.090 ms\n\nbrin:\n -> Bitmap Heap Scan on t1 
(cost=0.03..18.78 rows=1 width=4) (actual time=0.049..0.093 rows=1 loops=20000)\n -> Bitmap Index Scan on t1_brin_index (cost=0.00..0.03 rows=1500 width=0) (actual time=0.003..0.003 rows=70 loops=20000)\n Execution Time: 1890.161 ms\n\nbloom:\n -> Bitmap Heap Scan on t1 (cost=11.25..11.26 rows=1 width=4) (actual time=0.004..0.004 rows=1 loops=20000)\n -> Bitmap Index Scan on t1_bloom_index (cost=0.00..11.25 rows=1 width=0) (actual time=0.003..0.003 rows=2 loops=20000)\n Execution Time: 88.703 ms\n\n(These figures shouldn't be trusted too much because I did nothing\nto suppress noise. They seem at least somewhat reproducible, though.)\n\nSo, taking btree as our reference point, gin has clearly got a problem\nbecause it's estimating less than a tenth as much cost despite actually\nbeing nearly 4X slower. gist and spgist are not as bad off, but\nnonetheless they claim to be cheaper than btree when they are not.\nThe result for hash looks suspicious as well, though at least we'd\nmake the right index choice for this particular case. brin and bloom\ncorrectly report being a lot more expensive than btree, so at least\nfor the moment I'm not worried about them.\n\nBTW, the artificially small random_page_cost doesn't really affect\nthis much. If I set it to a perfectly reasonable value like 1.0,\ngin produces a saner cost estimate but gist, spgist, and hash do\nnot change their estimates at all. btree's estimate doesn't change\neither, which seems like it might be OK for index-only scans but\nI doubt I believe it for index scans. In any case, at least one\nof gin and hash is doing it wrong.\n\nIn short, I think gist and spgist probably need a minor tweak to\nestimate more CPU cost than they do now, and hash needs a really\nhard look at whether it's sane at all.\n\nThat's all orthogonal to the merits of your patch for gin,\nso I'll respond separately about that.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 08 Sep 2022 18:12:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix gin index cost estimation"
},
{
"msg_contents": "Ronan Dunklau <ronan.dunklau@aiven.io> writes:\n> The problem I'm trying to solve is that, contrary to btree, gist and sp-gist \n> indexes, gin indexes do not charge any cpu-cost for descending the entry tree.\n\nI looked this over briefly. I think you are correct to charge an\ninitial-search cost per searchEntries count, but don't we also need to\nscale up by arrayScans, similar to the \"corrections for cache effects\"?\n\n+\t * We model index descent costs similarly to those for btree, but we also\n+\t * need an idea of the tree_height.\n+\t * We use numEntries / numEntryPages as the fanout factor.\n\nI'm not following that calculation? It seems like it'd be correct\nonly for a tree height of 1, although maybe I'm just misunderstanding\nthis (overly terse, perhaps) comment.\n\n+\t * We charge descentCost once for every entry\n+\t */\n+\tif (numTuples > 1)\n+\t{\n+\t\tdescentCost = ceil(log(numTuples) / log(2.0)) * cpu_operator_cost;\n+\t\t*indexStartupCost += descentCost * counts.searchEntries;\n+\t}\n\nI had to read this twice before absorbing the point of the numTuples\ntest. Maybe help the reader a bit:\n\n+\tif (numTuples > 1) /* ensure positive log() */\n\nPersonally I'd duplicate the comments from nbtreecostestimate rather\nthan just assuming the reader will go consult them. For that matter,\nwhy didn't you duplicate nbtree's logic for charging for SA scans?\nThis bit seems just as relevant for GIN:\n\n\t * If there are ScalarArrayOpExprs, charge this once per SA scan. The\n\t * ones after the first one are not startup cost so far as the overall\n\t * plan is concerned, so add them only to \"total\" cost.\n\nKeep in mind also that pgindent will have its own opinions about how to\nformat these comments, and it can out-stubborn you. Either run the\ncomments into single paragraphs, or if you really want them to be two\nparas then leave an empty comment line between. 
Another formatting\nnitpick is that you seem to have added a number of unnecessary blank\nlines.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Sep 2022 18:32:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix gin index cost estimation"
},
{
"msg_contents": "Thank you for looking at it.\n\n> I looked this over briefly. I think you are correct to charge an\n> initial-search cost per searchEntries count, but don't we also need to\n> scale up by arrayScans, similar to the \"corrections for cache effects\"?\n> \n> +\t * We model index descent costs similarly to those for btree, but \nwe also\n> +\t * need an idea of the tree_height.\n> +\t * We use numEntries / numEntryPages as the fanout factor.\n> \n> I'm not following that calculation? It seems like it'd be correct\n> only for a tree height of 1, although maybe I'm just misunderstanding\n> this (overly terse, perhaps) comment.\n\nI don't really understand why that would work only with a tree height of one ? \nEvery entry page contains a certain amount of entries, and as such computing \nthe average number of entries per page seems to be a good approximation for \nthe fanout. But I may have misunderstood what was done in other index types.\n\nFor consistency, maybe we should just use a hard coded value of 100 for the \nfanout factor, similarly to what we do for other index types.\n\nBut I realised that another approach might be better suited: since we want to \ncharge a cpu cost for every page visited, actually basing that on the already \nestimated entryPagesFetched and dataPagesFetched would be better, instead of \ncopying what is done for other indexes type and estimating the tree height. It \nwould be simpler, as we don't need to estimate the tree height anymore.\n\nI will submit a patch doing that.\n\n> \n> +\t * We charge descentCost once for every entry\n> +\t */\n> +\tif (numTuples > 1)\n> +\t{\n> +\t\tdescentCost = ceil(log(numTuples) / log(2.0)) * \ncpu_operator_cost;\n> +\t\t*indexStartupCost += descentCost * \ncounts.searchEntries;\n> +\t}\n> \n> I had to read this twice before absorbing the point of the numTuples\n> test. Maybe help the reader a bit:\n> \n> +\tif (numTuples > 1) /* ensure positive log() */\n> \n\nOk. 
On second read, I think that part was actually wrong: what we care about \nis not the number of tuples here, but the number of entries. \n\n> Personally I'd duplicate the comments from nbtreecostestimate rather\n> than just assuming the reader will go consult them. For that matter,\n> why didn't you duplicate nbtree's logic for charging for SA scans?\n> This bit seems just as relevant for GIN:\n> \n> \t * If there are ScalarArrayOpExprs, charge this once per SA scan. \nThe\n> \t * ones after the first one are not startup cost so far as the \noverall\n> \t * plan is concerned, so add them only to \"total\" cost.\n> \n\nYou're right. So what we need to do here is scale up whatever we charge for \nthe startup cost by the number of arrayscans for the total cost.\n\n> Keep in mind also that pgindent will have its own opinions about how to\n> format these comments, and it can out-stubborn you. Either run the\n> comments into single paragraphs, or if you really want them to be two\n> paras then leave an empty comment line between. Another formatting\n> nitpick is that you seem to have added a number of unnecessary blank\n> lines.\n\nThanks, noted.\n\nI'll submit a new patch soon, as soon as i've resolved some of the problems I \nhave when accounting for scalararrayops.\n\nBest regards,\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Mon, 12 Sep 2022 16:41:16 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: Fix gin index cost estimation"
},
{
"msg_contents": "Le lundi 12 septembre 2022, 16:41:16 CEST Ronan Dunklau a écrit :\n> But I realised that another approach might be better suited: since we want \nto\n> charge a cpu cost for every page visited, actually basing that on the \nalready\n> estimated entryPagesFetched and dataPagesFetched would be better, instead of\n> copying what is done for other indexes type and estimating the tree height. \nIt\n> would be simpler, as we don't need to estimate the tree height anymore.\n> \n> I will submit a patch doing that.\n\nThe attached does that and is much simpler. I only took into account \nentryPagesFetched, not sure if we should also charge something for data pages.\n\nInstead of trying to estimate the height of the tree, we rely on the \n(imperfect) estimation of the number of entry pages fetched, and charge 50 \ntimes cpu_operator_cost to that, in addition to the cpu_operator_cost charged \nper entry visited.\n\nI also adapted to take into accounts multiple scans induced by scalar array \noperations. \n\nAs it is, I don't understand the following calculation:\n\n/*\n* Estimate number of entry pages read. We need to do\n* counts.searchEntries searches. Use a power function as it should be,\n * but tuples on leaf pages usually is much greater. Here we include all\n * searches in entry tree, including search of first entry in partial\n * match algorithm\n */\nentryPagesFetched += ceil(counts.searchEntries * rint(pow(numEntryPages, \n0.15)));\n\nIs the power(0.15) used an approximation for a log ? If so why ? Also \nshouldn't we round that up ?\nIt seems to me it's unlikely to affect the total too much in normal cases \n(adding at worst random_page_cost) but if we start to charge cpu operator \ncosts as proposed here it makes a big difference and it is probably\nsafer to overestimate a bit than the opposite.\n\nWith those changes, the gin cost (purely cpu-wise) stays above the btree one \nas I think it should be. \n\n-- \nRonan Dunklau",
"msg_date": "Thu, 15 Sep 2022 12:25:06 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: Fix gin index cost estimation"
},
{
"msg_contents": "Ronan Dunklau <ronan.dunklau@aiven.io> writes:\n> The attached does that and is much simpler. I only took into account \n> entryPagesFetched, not sure if we should also charge something for data pages.\n\nI think this is wrong, because there is already a CPU charge based on\nthe number of tuples visited, down at the very end of the routine:\n\n\t*indexTotalCost += (numTuples * *indexSelectivity) * (cpu_index_tuple_cost + qual_op_cost);\n\nIt's certainly possible to argue that that's incorrectly computed,\nbut just adding another cost charge for the same topic can't be right.\n\nI do suspect that that calculation is bogus, because it looks like it's\nbased on the concept of \"apply the quals at each index entry\", which we\nknow is not how GIN operates. So maybe we should drop that bit in favor\nof a per-page-ish cost like you have here. Not sure. In any case it\nseems orthogonal to the question of startup/descent costs. Maybe we'd\nbetter tackle just one thing at a time.\n\n(BTW, given that that charge does exist and is not affected by\nrepeated-scan amortization, why do we have a problem in the first place?\nIs it just too small? I guess that when we're only expecting one tuple\nto be retrieved, it would only add about cpu_index_tuple_cost.)\n\n> Is the power(0.15) used an approximation for a log ? If so why ? Also \n> shouldn't we round that up ?\n\nNo idea, but I'm pretty hesitant to just randomly fool with that equation\nwhen (a) neither of us know where it came from and (b) exactly no evidence\nhas been provided that it's wrong.\n\nI note for instance that the existing logic switches from charging 1 page\nper search to 2 pages per search at numEntryPages = 15 (1.5 ^ (1/0.15)).\nYour version would switch at 2 pages, as soon as the pow() result is even\nfractionally above 1.0. Maybe the existing logic is too optimistic there,\nbut ceil() makes it too pessimistic I think. 
I'd sooner tweak the power\nconstant than change from rint() to ceil().\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Sep 2022 16:04:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix gin index cost estimation"
},
{
"msg_contents": "Le vendredi 16 septembre 2022, 22:04:59 CEST Tom Lane a écrit :\n> Ronan Dunklau <ronan.dunklau@aiven.io> writes:\n> > The attached does that and is much simpler. I only took into account\n> > entryPagesFetched, not sure if we should also charge something for data \npages.\n> \n> I think this is wrong, because there is already a CPU charge based on\n> the number of tuples visited, down at the very end of the routine:\n> \n> \t*indexTotalCost += (numTuples * *indexSelectivity) * \n(cpu_index_tuple_cost + qual_op_cost);\n> \n> It's certainly possible to argue that that's incorrectly computed,\n> but just adding another cost charge for the same topic can't be right.\n\nI don't think it's the same thing. The entryPagesFetched is computed \nindependently of the selectivity and the number of tuples. As such, I think it \nmakes sense to use it to compute the cost of descending the entry tree.\n\nAs mentioned earlier, I don't really understand the formula for computing \nentryPagesFetched. If we were to estimate the tree height to compute the \ndescent cost as I first proposed, I feel like we would use two different metrics \nfor what is essentially the same cost: something proportional to the size of \nthe entry tree.\n\n> \n> I do suspect that that calculation is bogus, because it looks like it's\n> based on the concept of \"apply the quals at each index entry\", which we\n> know is not how GIN operates. So maybe we should drop that bit in favor\n> of a per-page-ish cost like you have here. Not sure. In any case it\n> seems orthogonal to the question of startup/descent costs. Maybe we'd\n> better tackle just one thing at a time.\n\nHum, good point. Maybe that should be revisited too.\n\n> \n> (BTW, given that that charge does exist and is not affected by\n> repeated-scan amortization, why do we have a problem in the first place?\n> Is it just too small? 
I guess that when we're only expecting one tuple\n> to be retrieved, it would only add about cpu_index_tuple_cost.)\n\nBecause with a very low selectivity, we end up under-charging for the cost of \nwalking the entry tree by a significant amount. As said above, I don't see how \nthose two things are the same: that charge estimates the cost of applying \nindex quals to the visited tuples, which is not the same as charging per entry \npage visited.\n\n> \n> > Is the power(0.15) used an approximation for a log ? If so why ? Also\n> > shouldn't we round that up ?\n> \n> No idea, but I'm pretty hesitant to just randomly fool with that equation\n> when (a) neither of us know where it came from and (b) exactly no evidence\n> has been provided that it's wrong.\n> \n> I note for instance that the existing logic switches from charging 1 page\n> per search to 2 pages per search at numEntryPages = 15 (1.5 ^ (1/0.15)).\n> Your version would switch at 2 pages, as soon as the pow() result is even\n> fractionally above 1.0. Maybe the existing logic is too optimistic there,\n> but ceil() makes it too pessimistic I think. I'd sooner tweak the power\n> constant than change from rint() to ceil().\n\nYou're right, I was too eager to try to raise the CPU cost proportionnally to \nthe number of array scans (scalararrayop). I'd really like to understand where \nthis equation comes from though... \n\nBest regards,\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Mon, 19 Sep 2022 09:15:25 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: Fix gin index cost estimation"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 09:15:25AM +0200, Ronan Dunklau wrote:\n> You're right, I was too eager to try to raise the CPU cost proportionnally to \n> the number of array scans (scalararrayop). I'd really like to understand where \n> this equation comes from though... \n\nSo, what's the latest update here?\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 14:20:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix gin index cost estimation"
},
{
"msg_contents": "> > You're right, I was too eager to try to raise the CPU cost proportionnally \nto\n> > the number of array scans (scalararrayop). I'd really like to understand \nwhere\n> > this equation comes from though...\n> \n> So, what's the latest update here?\n\nThanks Michael for reviving this thread.\n\nBefore proceeding any further with this, I'd like to understand where we \nstand. Tom argued my way of charging cost per entry pages visited boils down \nto charging per tuple, which I expressed disagreement with. \n\nIf we can come to a consensus whether that's a bogus way of thinking about it \nI'll proceed with what we agree on.\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Wed, 12 Oct 2022 09:15:10 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: Fix gin index cost estimation"
},
{
"msg_contents": "Hi, Ronan!\n\nOn Wed, Oct 12, 2022 at 10:15 AM Ronan Dunklau <ronan.dunklau@aiven.io>\nwrote:\n> > > You're right, I was too eager to try to raise the CPU cost\nproportionnally\n> to\n> > > the number of array scans (scalararrayop). I'd really like to\nunderstand\n> where\n> > > this equation comes from though...\n> >\n> > So, what's the latest update here?\n>\n> Thanks Michael for reviving this thread.\n>\n> Before proceeding any further with this, I'd like to understand where we\n> stand. Tom argued my way of charging cost per entry pages visited boils\ndown\n> to charging per tuple, which I expressed disagreement with.\n>\n> If we can come to a consensus whether that's a bogus way of thinking\nabout it\n> I'll proceed with what we agree on.\n\nI briefly read the thread. I think this line is copy-paste from other index\naccess methods and trying to estimate the whole index scan CPU cost by\nbypassing all the details.\n\n*indexTotalCost += (numTuples * *indexSelectivity) * (cpu_index_tuple_cost\n+ qual_op_cost);\n\nI think Tom's point was that it's wrong to add a separate entry-tree CPU\ncost estimation to another estimation, which tries (very inadequately) to\nestimate the whole scan cost. Instead, I propose writing better estimations\nfor both entry-tree CPU cost and data-trees CPU cost and replacing existing\nCPU estimation altogether.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Tue, 25 Oct 2022 17:08:58 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix gin index cost estimation"
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> I think Tom's point was that it's wrong to add a separate entry-tree CPU\n> cost estimation to another estimation, which tries (very inadequately) to\n> estimate the whole scan cost. Instead, I propose writing better estimations\n> for both entry-tree CPU cost and data-trees CPU cost and replacing existing\n> CPU estimation altogether.\n\nGreat idea, if someone is willing to do it ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 Oct 2022 10:18:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix gin index cost estimation"
},
{
"msg_contents": "Le mardi 25 octobre 2022, 16:18:57 CET Tom Lane a écrit :\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > I think Tom's point was that it's wrong to add a separate entry-tree CPU\n> > cost estimation to another estimation, which tries (very inadequately) to\n> > estimate the whole scan cost. Instead, I propose writing better\n> > estimations\n> > for both entry-tree CPU cost and data-trees CPU cost and replacing\n> > existing\n> > CPU estimation altogether.\n> \n> Great idea, if someone is willing to do it ...\n> \n> \t\t\tregards, tom lane\n\nHello,\n\nSorry for the delay, but here is an updated patch which changes the costing in \nthe following way:\n\n- add a descent cost, similar to the btree one, charged for the initial \nentry-tree descent\n- additionally, a charge is applied per page in both the entry tree and \nposting trees / lists\n- instead of charging the quals to each tuple, charge them per entry only. We \nstill charge cpu_index_tuple_cost per tuple though.\n\nWith those changes, no need to tweak the magic number formula estimating the \nnumber of pages. Maybe we can come up with something better for estimating \nthose later on ?\n\nBest regards,\n\n--\nRonan Dunklau",
"msg_date": "Fri, 02 Dec 2022 11:19:11 +0100",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: Fix gin index cost estimation"
},
{
"msg_contents": "Hi, Ronan!\n\nOn Fri, Dec 2, 2022 at 1:19 PM Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> Sorry for the delay, but here is an updated patch which changes the costing in\n> the following way:\n>\n> - add a descent cost similar to the btree one is charged for the initial\n> entry-tree\n> - additionally, a charge is applied per page in both the entry tree and\n> posting trees / lists\n> - instead of charging the quals to each tuple, charge them per entry only. We\n> still charge cpu_index_tuple_cost per tuple though.\n>\n> With those changes, no need to tweak the magic number formula estimating the\n> number of pages. Maybe we can come up with something better for estimating\n> those later on ?\n\nThank you for your patch. Couple of quick questions.\n1) What magic number 50.0 stands for? I think we at least should make\nit a macro.\n2) \"We only charge one data page for the startup cost\" – should this\nbe dependent on number of search entries?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 2 Dec 2022 14:33:33 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix gin index cost estimation"
},
{
"msg_contents": "Le vendredi 2 décembre 2022, 12:33:33 CET Alexander Korotkov a écrit :\n> Hi, Ronan!\n> Thank you for your patch. Couple of quick questions.\n> 1) What magic number 50.0 stands for? I think we at least should make\n> it a macro.\n\nThis is what is used in other tree-descending estimation functions, so I used \nthat too. Maybe a DEFAULT_PAGE_CPU_COST macro would work for both ? If so I'll \nseparate this into two patches, one introducing the macro for the other \nestimation functions, and this patch for gin.\n\n> 2) \"We only charge one data page for the startup cost\" – should this\n> be dependent on number of search entries?\n\nGood point, multiplying it by the number of search entries would do the trick. \n\nThank you for looking at this !\n\nRegards,\n\n--\nRonan Dunklau\n\n\n\n\n\n",
"msg_date": "Fri, 02 Dec 2022 13:58:27 +0100",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: Fix gin index cost estimation"
},
{
"msg_contents": "Le vendredi 2 décembre 2022, 13:58:27 CET Ronan Dunklau a écrit :\n> Le vendredi 2 décembre 2022, 12:33:33 CET Alexander Korotkov a écrit :\n> > Hi, Ronan!\n> > Thank you for your patch. Couple of quick questions.\n> > 1) What magic number 50.0 stands for? I think we at least should make\n> > it a macro.\n> \n> This is what is used in other tree-descending estimation functions, so I\n> used that too. Maybe a DEFAULT_PAGE_CPU_COST macro would work for both ? If\n> so I'll separate this into two patches, one introducing the macro for the\n> other estimation functions, and this patch for gin.\n\nThe 0001 patch does this.\n\n> \n> > 2) \"We only charge one data page for the startup cost\" – should this\n> > be dependent on number of search entries?\n\nIn fact there was another problem. The current code estimates two different \npaths for fetching data pages: in the case of a partial match, it takes into \naccount that all the data pages will have to be fetched. So this is now \ntaken into account for the CPU cost as well. \n\nFor the regular search, we scale the number of data pages by the number of \nsearch entries.\n\nBest regards,\n\n--\nRonan Dunklau",
"msg_date": "Tue, 06 Dec 2022 11:22:20 +0100",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Re: Fix gin index cost estimation"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 1:22 PM Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> Le vendredi 2 décembre 2022, 13:58:27 CET Ronan Dunklau a écrit :\n> > Le vendredi 2 décembre 2022, 12:33:33 CET Alexander Korotkov a écrit :\n> > > Hi, Ronan!\n> > > Thank you for your patch. Couple of quick questions.\n> > > 1) What magic number 50.0 stands for? I think we at least should make\n> > > it a macro.\n> >\n> > This is what is used in other tree-descending estimation functions, so I\n> > used that too. Maybe a DEFAULT_PAGE_CPU_COST macro would work for both ? If\n> > so I'll separate this into two patches, one introducing the macro for the\n> > other estimation functions, and this patch for gin.\n>\n> The 0001 patch does this.\n>\n> >\n> > > 2) \"We only charge one data page for the startup cost\" – should this\n> > > be dependent on number of search entries?\n>\n> In fact there was another problem. The current code estimate two different\n> pathes for fetching data pages: in the case of a partial match, it takes into\n> account that all the data pages will have to be fetched. So this is is now\n> taken into account for the CPU cost as well.\n>\n> For the regular search, we scale the number of data pages by the number of\n> search entries.\n\nNow the patch looks good for me. 
I made some tests.\n\n# create extension pg_trgm;\n# create extension btree_gin;\n# create table test1 as (select random() as val from\ngenerate_series(1,1000000) i);\n# create index test1_gin_idx on test1 using gin (val);\n\n# explain select * from test1 where val between 0.1 and 0.2;\n QUERY PLAN\n---------------------------------------------------------------------------------------------\n Bitmap Heap Scan on test1 (cost=1186.21..7089.57 rows=98557 width=8)\n Recheck Cond: ((val >= '0.1'::double precision) AND (val <=\n'0.2'::double precision))\n -> Bitmap Index Scan on test1_gin_idx (cost=0.00..1161.57\nrows=98557 width=0)\n Index Cond: ((val >= '0.1'::double precision) AND (val <=\n'0.2'::double precision))\n(4 rows)\n\n# create index test1_btree_idx on test1 using btree (val);\n# explain select * from test1 where val between 0.1 and 0.2;\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Index Only Scan using test1_btree_idx on test1 (cost=0.42..3055.57\nrows=98557 width=8)\n Index Cond: ((val >= '0.1'::double precision) AND (val <=\n'0.2'::double precision))\n(2 rows)\n\nLooks reasonable. 
In this case GIN is much more expensive, because it\ncan't handle range query properly and overlaps two partial matches.\n\n# create table test2 as (select 'samplestring' || i as val from\ngenerate_series(1,1000000) i);\n# create index test2_gin_idx on test2 using gin (val);\n# explain select * from test2 where val = 'samplestring500000';\n QUERY PLAN\n-----------------------------------------------------------------------------\n Bitmap Heap Scan on test2 (cost=20.01..24.02 rows=1 width=18)\n Recheck Cond: (val = 'samplestring500000'::text)\n -> Bitmap Index Scan on test2_gin_idx (cost=0.00..20.01 rows=1 width=0)\n Index Cond: (val = 'samplestring500000'::text)\n(4 rows)\n\n# create index test2_btree_idx on test2 using btree (val);\n# explain select * from test2 where val = 'samplestring500000';\n QUERY PLAN\n-----------------------------------------------------------------------------------\n Index Only Scan using test2_btree_idx on test2 (cost=0.42..4.44\nrows=1 width=18)\n Index Cond: (val = 'samplestring500000'::text)\n(2 rows)\n\nThis also looks reasonable. GIN is not terribly bad for this case,\nbut B-tree is much cheaper.\n\nI'm going to push this and backpatch to all supported versions if no objections.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 8 Jan 2023 14:45:35 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix gin index cost estimation"
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> I'm going to push this and backpatch to all supported versions if no objections.\n\nPush yes, but I'd counsel against back-patching. People don't\ngenerally like unexpected plan changes in stable versions, and\nthat's what a costing change could produce. There's no argument\nthat we are fixing a failure or wrong answer here, so it doesn't\nseem like back-patch material.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Jan 2023 11:08:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix gin index cost estimation"
},
{
"msg_contents": "On Sun, Jan 8, 2023 at 7:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > I'm going to push this and backpatch to all supported versions if no objections.\n>\n> Push yes, but I'd counsel against back-patching. People don't\n> generally like unexpected plan changes in stable versions, and\n> that's what a costing change could produce. There's no argument\n> that we are fixing a failure or wrong answer here, so it doesn't\n> seem like back-patch material.\n\nPushed to master, thank you. I've noticed the reason for\nnon-backpatching in the commit message.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 8 Jan 2023 22:53:00 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix gin index cost estimation"
}
] |
[
{
"msg_contents": "I think in the following sentence, were should be replaced with have,\nwhat do you think?\n\n```\n /*\n- * We were just issued a SAVEPOINT inside a\ntransaction block.\n+ * We have just issued a SAVEPOINT inside a\ntransaction block.\n * Start a subtransaction. (DefineSavepoint already did\n * PushTransaction, so as to have someplace to\nput the SUBBEGIN\n * state.)\n```\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Wed, 3 Aug 2022 16:10:25 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "[doc] fix a potential grammer mistake"
},
{
"msg_contents": "> On 3 Aug 2022, at 10:10, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> \n> I think in the following sentence, were should be replaced with have,\n> what do you think?\n> \n> ```\n> /*\n> - * We were just issued a SAVEPOINT inside a\n> transaction block.\n> + * We have just issued a SAVEPOINT inside a\n> transaction block.\n> * Start a subtransaction. (DefineSavepoint already did\n> * PushTransaction, so as to have someplace to\n> put the SUBBEGIN\n> * state.)\n> ```\n\nI'm not so sure. If I read this right the intent of the sentence is to convey\nthat the user has issued a SAVEPOINT to the backend, not the backend itself. I\nthink the current wording is the correct one.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 3 Aug 2022 10:23:49 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [doc] fix a potential grammer mistake"
},
{
"msg_contents": "Op 03-08-2022 om 10:10 schreef Junwang Zhao:\n> I think in the following sentence, were should be replaced with have,\n> what do you think?\n> \n> ```\n> /*\n> - * We were just issued a SAVEPOINT inside a\n> transaction block.\n> + * We have just issued a SAVEPOINT inside a\n> transaction block.\n> * Start a subtransaction. (DefineSavepoint already did\n> * PushTransaction, so as to have someplace to\n> put the SUBBEGIN\n> * state.)\n> ```\n\nI don't think these \"were\"s are wrong but arguably changing them to \n\"have\" helps non-native speakers (like myself), as it doesn't change the \nmeaning significantly as far as I can see.\n\n'we were issued' does reflect the perspective of the receiving code a \nbit better.\n\n\nErik\n\n\n\n",
"msg_date": "Wed, 3 Aug 2022 10:27:42 +0200",
"msg_from": "Erikjan Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: [doc] fix a potential grammer mistake"
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 4:23 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 3 Aug 2022, at 10:10, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >\n> > I think in the following sentence, were should be replaced with have,\n> > what do you think?\n> >\n> > ```\n> > /*\n> > - * We were just issued a SAVEPOINT inside a\n> > transaction block.\n> > + * We have just issued a SAVEPOINT inside a\n> > transaction block.\n> > * Start a subtransaction. (DefineSavepoint already did\n> > * PushTransaction, so as to have someplace to\n> > put the SUBBEGIN\n> > * state.)\n> > ```\n>\n> I'm not so sure. If I read this right the intent of the sentence is to convey\n> that the user has issued a SAVEPOINT to the backend, not the backend itself. I\n> think the current wording is the correct one.\n>\n\nGot it, using `were` here means the backend is the receiver of the\naction, not the sender.\nThat makes sense, thanks a lot.\n\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Wed, 3 Aug 2022 16:35:52 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [doc] fix a potential grammer mistake"
},
{
"msg_contents": "yeah, not a grammar mistake at all, \"were\" should be used here, thanks\nfor pointing that out ;)\n\nOn Wed, Aug 3, 2022 at 4:27 PM Erikjan Rijkers <er@xs4all.nl> wrote:\n>\n> Op 03-08-2022 om 10:10 schreef Junwang Zhao:\n> > I think in the following sentence, were should be replaced with have,\n> > what do you think?\n> >\n> > ```\n> > /*\n> > - * We were just issued a SAVEPOINT inside a\n> > transaction block.\n> > + * We have just issued a SAVEPOINT inside a\n> > transaction block.\n> > * Start a subtransaction. (DefineSavepoint already did\n> > * PushTransaction, so as to have someplace to\n> > put the SUBBEGIN\n> > * state.)\n> > ```\n>\n> I don't think these \"were\"s are wrong but arguably changing them to\n> \"have\" helps non-native speakers (like myself), as it doesn't change the\n> meaning significantly as far as I can see.\n>\n> 'we were issued' does reflect the perspective of the receiving code a\n> bit better.\n>\n>\n> Erik\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Wed, 3 Aug 2022 16:38:01 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [doc] fix a potential grammer mistake"
},
{
"msg_contents": "Erikjan Rijkers <er@xs4all.nl> writes:\n> I don't think these \"were\"s are wrong but arguably changing them to \n> \"have\" helps non-native speakers (like myself), as it doesn't change the \n> meaning significantly as far as I can see.\n\nI think it does --- it changes the meaning from passive to active.\nI don't necessarily object to rewriting these sentences more broadly,\nbut I don't think \"have issued\" is the correct phrasing.\n\nPossibly \"The user issued ...\" would work.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Aug 2022 09:56:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [doc] fix a potential grammer mistake"
},
{
"msg_contents": "Attachment is a corrected version based on Tom's suggestion.\n\nThanks.\n\nOn Wed, Aug 3, 2022 at 9:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Erikjan Rijkers <er@xs4all.nl> writes:\n> > I don't think these \"were\"s are wrong but arguably changing them to\n> > \"have\" helps non-native speakers (like myself), as it doesn't change the\n> > meaning significantly as far as I can see.\n>\n> I think it does --- it changes the meaning from passive to active.\n> I don't necessarily object to rewriting these sentences more broadly,\n> but I don't think \"have issued\" is the correct phrasing.\n>\n> Possibly \"The user issued ...\" would work.\n>\n> regards, tom lane\n\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Wed, 3 Aug 2022 23:14:47 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [doc] fix a potential grammer mistake"
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 11:15 AM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> Attachment is a corrected version based on Tom's suggestion.\n>\n> Thanks.\n>\n> On Wed, Aug 3, 2022 at 9:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Erikjan Rijkers <er@xs4all.nl> writes:\n> > > I don't think these \"were\"s are wrong but arguably changing them to\n> > > \"have\" helps non-native speakers (like myself), as it doesn't change the\n> > > meaning significantly as far as I can see.\n> >\n> > I think it does --- it changes the meaning from passive to active.\n> > I don't necessarily object to rewriting these sentences more broadly,\n> > but I don't think \"have issued\" is the correct phrasing.\n> >\n> > Possibly \"The user issued ...\" would work.\n> >\n\nIs there a reason that the first case says \"just\" issued vs the other\ntwo cases? It seems to me that it should be removed.\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Wed, 3 Aug 2022 12:42:11 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: [doc] fix a potential grammer mistake"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 12:42 AM Robert Treat <rob@xzilla.net> wrote:\n>\n> On Wed, Aug 3, 2022 at 11:15 AM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >\n> > Attachment is a corrected version based on Tom's suggestion.\n> >\n> > Thanks.\n> >\n> > On Wed, Aug 3, 2022 at 9:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Erikjan Rijkers <er@xs4all.nl> writes:\n> > > > I don't think these \"were\"s are wrong but arguably changing them to\n> > > > \"have\" helps non-native speakers (like myself), as it doesn't change the\n> > > > meaning significantly as far as I can see.\n> > >\n> > > I think it does --- it changes the meaning from passive to active.\n> > > I don't necessarily object to rewriting these sentences more broadly,\n> > > but I don't think \"have issued\" is the correct phrasing.\n> > >\n> > > Possibly \"The user issued ...\" would work.\n> > >\n>\n> Is there a reason that the first case says \"just\" issued vs the other\n> two cases? It seems to me that it should be removed.\nAttachment is a patch with the \"just\" removed.\n\nThanks\n>\n> Robert Treat\n> https://xzilla.net\n\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Thu, 4 Aug 2022 06:44:09 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [doc] fix a potential grammer mistake"
},
{
"msg_contents": "> On 4 Aug 2022, at 00:44, Junwang Zhao <zhjwpku@gmail.com> wrote:\n\n> Attachment is a patch with the \"just\" removed.\n\nI think this is a change for better, so I've pushed it. Thanks for the\ncontribution!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 4 Aug 2022 16:32:37 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [doc] fix a potential grammer mistake"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 10:32 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 4 Aug 2022, at 00:44, Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> > Attachment is a patch with the \"just\" removed.\n>\n> I think this is a change for better, so I've pushed it. Thanks for the\n> contribution!\n>\n>\nThanks!\n\nRobert Treat\nhttps://xzilla.net",
"msg_date": "Fri, 5 Aug 2022 11:02:19 -0700",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: [doc] fix a potential grammer mistake"
}
] |
[
{
"msg_contents": "HI All,\n\nFollowing comment in RemoveNonParentXlogFiles() says that we are trying to\nremove any WAL file whose segment number is >= the segment number of the\nfirst WAL file on the new timeline. However, looking at the code, I can say\nthat we are trying to remove the WAL files from the previous timeline whose\nsegment number is just greater than (not equal to) the segment number of\nthe first WAL file in the new timeline. I think we should improve this\ncomment, thoughts?\n\n\n /*\n * Remove files that are on a timeline older than the new one we're\n * switching to, but with a segment number >= the first segment on\nthe\n * new timeline.\n */\n if (strncmp(xlde->d_name, switchseg, 8) < 0 &&\n strcmp(xlde->d_name + 8, switchseg + 8) > 0)\n\n--\nWith Regards,\nAshutosh Sharma.",
"msg_date": "Wed, 3 Aug 2022 18:16:33 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Correct comment in RemoveNonParentXlogFiles()"
},
{
"msg_contents": "At Wed, 3 Aug 2022 18:16:33 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in \n> Following comment in RemoveNonParentXlogFiles() says that we are trying to\n> remove any WAL file whose segment number is >= the segment number of the\n> first WAL file on the new timeline. However, looking at the code, I can say\n> that we are trying to remove the WAL files from the previous timeline whose\n> segment number is just greater than (not equal to) the segment number of\n> the first WAL file in the new timeline. I think we should improve this\n> comment, thoughts?\n>\n> /*\n> * Remove files that are on a timeline older than the new one we're\n> * switching to, but with a segment number >= the first segment on\n> the\n> * new timeline.\n> */\n> if (strncmp(xlde->d_name, switchseg, 8) < 0 &&\n> strcmp(xlde->d_name + 8, switchseg + 8) > 0)\n\nI'm not sure I'm fully getting your point. The current comment is\ncorrectly saying that it removes the segments \"on a timeline older\nthan the new one\". I agree about segment comparison.\n\nSo, if I changed that comment, I would finish with the following change.\n\n- * switching to, but with a segment number >= the first segment on\n+ * switching to, but with a segment number greater than the first segment on\n\nThat disagreement started at the time the code was introduced by\nb2a5545bd6. Leaving the last segment in the old timeline is correct\nsince it is renamed to .partial later. If timeline switch happened\njust at segment boundary, that segment would not not be created.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 04 Aug 2022 15:00:06 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Correct comment in RemoveNonParentXlogFiles()"
},
{
"msg_contents": "At Thu, 04 Aug 2022 15:00:06 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 3 Aug 2022 18:16:33 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in \n> > Following comment in RemoveNonParentXlogFiles() says that we are trying to\n> > remove any WAL file whose segment number is >= the segment number of the\n> > first WAL file on the new timeline. However, looking at the code, I can say\n> > that we are trying to remove the WAL files from the previous timeline whose\n> > segment number is just greater than (not equal to) the segment number of\n> > the first WAL file in the new timeline. I think we should improve this\n> > comment, thoughts?\n> >\n> > /*\n> > * Remove files that are on a timeline older than the new one we're\n> > * switching to, but with a segment number >= the first segment on\n> > the\n> > * new timeline.\n> > */\n> > if (strncmp(xlde->d_name, switchseg, 8) < 0 &&\n> > strcmp(xlde->d_name + 8, switchseg + 8) > 0)\n> \n> I'm not sure I'm fully getting your point. The current comment is\n> correctly saying that it removes the segments \"on a timeline older\n> than the new one\". I agree about segment comparison.\n> \n> So, if I changed that comment, I would finish with the following change.\n> \n> - * switching to, but with a segment number >= the first segment on\n> + * switching to, but with a segment number greater than the first segment on\n> \n> That disagreement started at the time the code was introduced by\n> b2a5545bd6. Leaving the last segment in the old timeline is correct\n> since it is renamed to .partial later. If timeline switch happened\n> just at segment boundary, that segment would not not be created.\n\n\"the last segment in the old timeline\" here means \"the segment in the\nold timeline, with the segment number == the first segment on the new\ntimeline\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 04 Aug 2022 15:05:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Correct comment in RemoveNonParentXlogFiles()"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 11:30 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Wed, 3 Aug 2022 18:16:33 +0530, Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote in\n> > Following comment in RemoveNonParentXlogFiles() says that we are trying\n> to\n> > remove any WAL file whose segment number is >= the segment number of the\n> > first WAL file on the new timeline. However, looking at the code, I can\n> say\n> > that we are trying to remove the WAL files from the previous timeline\n> whose\n> > segment number is just greater than (not equal to) the segment number of\n> > the first WAL file in the new timeline. I think we should improve this\n> > comment, thoughts?\n> >\n> > /*\n> > * Remove files that are on a timeline older than the new one\n> we're\n> > * switching to, but with a segment number >= the first segment\n> on\n> > the\n> > * new timeline.\n> > */\n> > if (strncmp(xlde->d_name, switchseg, 8) < 0 &&\n> > strcmp(xlde->d_name + 8, switchseg + 8) > 0)\n>\n> I'm not sure I'm fully getting your point. The current comment is\n> correctly saying that it removes the segments \"on a timeline older\n> than the new one\". I agree about segment comparison.\n>\n\nYeah my complaint is about the comment on segment comparison for removal.\n\n\n>\n> So, if I changed that comment, I would finish with the following change.\n>\n> - * switching to, but with a segment number >= the first segment\n> on\n> + * switching to, but with a segment number greater than the\n> first segment on\n>\n\nwhich looks correct to me and will inline it with the code.\n\n\n>\n> That disagreement started at the time the code was introduced by\n> b2a5545bd6. Leaving the last segment in the old timeline is correct\n> since it is renamed to .partial later. 
If timeline switch happened\n> just at segment boundary, that segment would not not be created.\n>\n\nYeah, that's why we keep the last segment (partially written) from the old\ntimeline, which means we're not deleting it here. So the comment should not\nbe saying that we are also removing the last wal segment from the old\ntimeline which is equal to the first segment from the new timeline.\n\n--\nWith Regards,\nAshutosh Sharma.",
"msg_date": "Thu, 4 Aug 2022 13:33:16 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Correct comment in RemoveNonParentXlogFiles()"
}
] |
[
{
"msg_contents": "Hi,\n\nIt appears that config.sgml and pg_settings have not been updated, even \nthough a new subcategory was added in 249d649. a55a984 may have been \nmissed in the cleaning.\n--\nCategory is 'CONNECTIONS AND AUTHENTICATION' and subcategory is \n'Connection Settings' at config.sgml.\nCategory is 'CONNECTIONS AND AUTHENTICATION' and subcategory is \n'Connection Settings' at pg_settings.\nCategory is 'CONNECTIONS AND AUTHENTICATION' and subcategory is 'TCP \nsettings' at postgresql.conf.sample.\n--\n\nI would like to unify the following with config.sgml as in a55a984.\n--\nCategory is 'REPORTING AND LOGGING' and subcategory is 'PROCESS TITLE' \nat config.sgml.\nCategory is 'REPORTING AND LOGGING' and subcategory is 'PROCESS TITLE' \nat pg_settings.\nCategory is 'PROCESS TITLE' and subcategory is none at \npostgresql.conf.sample.\n--\n\nTrivial changes were made to the following short_desc.\n--\nrecovery_prefetch\nenable_group_by_reordering\nstats_fetch_consistency\n--\n\nI've attached a patch.\nThoughts?\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 04 Aug 2022 20:09:51 +0900",
"msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Fix inconsistencies GUC categories"
},
{
"msg_contents": "On Thu, Aug 04, 2022 at 08:09:51PM +0900, Shinya Kato wrote:\n> I would like to unify the following with config.sgml as in a55a984.\n> --\n> Category is 'REPORTING AND LOGGING' and subcategory is 'PROCESS TITLE' at\n> config.sgml.\n> Category is 'REPORTING AND LOGGING' and subcategory is 'PROCESS TITLE' at\n> pg_settings.\n\nYep. I agree with these changes, even for\nclient_connection_check_interval.\n\n> Category is 'PROCESS TITLE' and subcategory is none at\n> postgresql.conf.sample.\n\nYep. This change sounds right as well. \n--\nMichael",
"msg_date": "Sat, 6 Aug 2022 21:54:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies GUC categories"
},
{
"msg_contents": "On Sat, Aug 06, 2022 at 09:54:36PM +0900, Michael Paquier wrote:\n> Yep. This change sounds right as well. \n\nDone as of 0b039e3. Thanks, Kato-san.\n--\nMichael",
"msg_date": "Tue, 9 Aug 2022 20:04:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies GUC categories"
}
] |
[
{
"msg_contents": "Hi,\n\nA DSS developer from my company, Julien Roze, reported me an error I cannot explained. Is it a new behavior or a bug ?\n\nOriginal query is much more complicated but here is a simplified test case with postgresql 14 and 15 beta 2 on Debian 11, packages from pgdg :\n\nVer Cluster Port Status Owner Data directory Log file\n14 main 5432 online postgres /var/lib/postgresql/14/main /var/log/postgresql/postgresql-14-main.log\n15 main 5433 online postgres /var/lib/postgresql/15/main /var/log/postgresql/postgresql-15-main.log\n\npsql -p 5432\n\nselect version();\n version \n-----------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n(1 ligne)\n\n\nwith fakedata as (\n select 'hello' word\n union all\n select 'world' word\n)\nselect *\nfrom (\n select word, count(*) over (partition by word) nb from fakedata\n) t where nb = 1;\n \n word | nb \n-------+----\n hello | 1\n world | 1\n(2 lignes)\n\n\nwith fakedata as (\n select 'hello' word\n union all\n select 'world' word\n)\nselect *\nfrom (\n select word, count(*) nb from fakedata group by word\n) t where nb = 1;\n \n word | nb \n-------+----\n hello | 1\n world | 1\n(2 lignes)\n\npsql -p 5433\n\n select version();\n version \n------------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 15beta2 (Debian 15~beta2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n(1 ligne)\n\nwith fakedata as (\n select 'hello' word\n union all\n select 'world' word\n)\nselect *\nfrom (\n select word, count(*) over (partition by word) nb from fakedata\n) t where nb = 1;\nERREUR: cache lookup failed for function 0\n\nwith fakedata as (\n select 'hello' word\n union all\n select 'world' word\n)\nselect 
*\nfrom (\n select word, count(*) nb from fakedata group by word\n) t where nb = 1;\n \n word | nb \n-------+----\n hello | 1\n world | 1\n(2 lignes)\n\n\nBest regards,\nPhil\n\n",
"msg_date": "Thu, 4 Aug 2022 13:19:59 +0000",
"msg_from": "Phil Florent <philflorent@hotmail.com>",
"msg_from_op": true,
"msg_subject": "ERREUR: cache lookup failed for function 0 with PostgreSQL 15 beta\n 2, no error with PostgreSQL 14.4"
},
{
"msg_contents": "On Thu, Aug 04, 2022 at 01:19:59PM +0000, Phil Florent wrote:\n> A DSS developer from my company, Julien Roze, reported me an error I cannot explained. Is it a new behavior or a bug ?\n> \n> Original query is much more complicated but here is a simplified test case with postgresql 14 and 15 beta 2 on Debian 11, packages from pgdg :\n\nThanks for simplifying and reporting it.\n\nIt looks like an issue with window run conditions (commit 9d9c02ccd).\n\n+David\n\n(gdb) b pg_re_throw\n(gdb) bt\n#0 pg_re_throw () at elog.c:1795\n#1 0x0000557c85645e69 in errfinish (filename=<optimized out>, filename@entry=0x557c858db7da \"fmgr.c\", lineno=lineno@entry=183, funcname=funcname@entry=0x557c858dc410 <__func__.24841> \"fmgr_info_cxt_security\") at elog.c:588\n#2 0x0000557c85650e21 in fmgr_info_cxt_security (functionId=functionId@entry=0, finfo=finfo@entry=0x557c86a05ad0, mcxt=<optimized out>, ignore_security=ignore_security@entry=false) at fmgr.c:183\n#3 0x0000557c85651284 in fmgr_info (functionId=functionId@entry=0, finfo=finfo@entry=0x557c86a05ad0) at fmgr.c:128\n#4 0x0000557c84b32c73 in ExecInitFunc (scratch=scratch@entry=0x7ffc369a9cf0, node=node@entry=0x557c869f59b8, args=0x557c869f5a68, funcid=funcid@entry=0, inputcollid=inputcollid@entry=0, state=state@entry=0x557c86a05620)\n at execExpr.c:2748\n#5 0x0000557c84b27904 in ExecInitExprRec (node=node@entry=0x557c869f59b8, state=state@entry=0x557c86a05620, resv=resv@entry=0x557c86a05628, resnull=resnull@entry=0x557c86a05625) at execExpr.c:1147\n#6 0x0000557c84b33a1d in ExecInitQual (qual=0x557c869f5b18, parent=parent@entry=0x557c86a05080) at execExpr.c:253\n#7 0x0000557c84c8eadb in ExecInitWindowAgg (node=node@entry=0x557c869f4d20, estate=estate@entry=0x557c86a04e10, eflags=eflags@entry=16) at nodeWindowAgg.c:2420\n#8 0x0000557c84b8edda in ExecInitNode (node=node@entry=0x557c869f4d20, estate=estate@entry=0x557c86a04e10, eflags=eflags@entry=16) at execProcnode.c:345\n#9 0x0000557c84b70ea2 in InitPlan 
(queryDesc=queryDesc@entry=0x557c8695af50, eflags=eflags@entry=16) at execMain.c:938\n#10 0x0000557c84b71658 in standard_ExecutorStart (queryDesc=queryDesc@entry=0x557c8695af50, eflags=16, eflags@entry=0) at execMain.c:265\n#11 0x0000557c84b71ca4 in ExecutorStart (queryDesc=queryDesc@entry=0x557c8695af50, eflags=0) at execMain.c:144\n#12 0x0000557c8525292b in PortalStart (portal=portal@entry=0x557c869a45e0, params=params@entry=0x0, eflags=eflags@entry=0, snapshot=snapshot@entry=0x0) at pquery.c:517\n#13 0x0000557c8524b2a4 in exec_simple_query (\n query_string=query_string@entry=0x557c86938af0 \"with fakedata as (\\n\", ' ' <repetidos 15 veces>, \"select 'hello' word\\n\", ' ' <repetidos 15 veces>, \"union all\\n\", ' ' <repetidos 15 veces>, \"select 'world' word\\n)\\nselect *\\nfrom (\\n\", ' ' <repetidos 15 veces>, \"select word, count(*) over (partition by word) nb fro\"...) at postgres.c:1204\n#14 0x0000557c8524e8bd in PostgresMain (dbname=<optimized out>, username=username@entry=0x557c86964298 \"pryzbyj\") at postgres.c:4505\n#15 0x0000557c85042db6 in BackendRun (port=port@entry=0x557c8695a910) at postmaster.c:4490\n#16 0x0000557c8504a79a in BackendStartup (port=port@entry=0x557c8695a910) at postmaster.c:4218\n#17 0x0000557c8504ae12 in ServerLoop () at postmaster.c:1808\n#18 0x0000557c8504c926 in PostmasterMain (argc=3, argv=<optimized out>) at postmaster.c:1480\n#19 0x0000557c84ce4209 in main (argc=3, argv=0x557c86933000) at main.c:197\n\n(gdb) fr 7 \n#7 0x0000557c84c8eadb in ExecInitWindowAgg (node=node@entry=0x557c869f4d20, estate=estate@entry=0x557c86a04e10, eflags=eflags@entry=16) at nodeWindowAgg.c:2420\n2420 winstate->runcondition = ExecInitQual(node->runCondition,\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 4 Aug 2022 08:33:06 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: ERREUR: cache lookup failed for function 0 with PostgreSQL 15\n beta 2, no error with PostgreSQL 14.4"
},
{
"msg_contents": "On Fri, 5 Aug 2022 at 01:20, Phil Florent <philflorent@hotmail.com> wrote:\n> with fakedata as (\n> select 'hello' word\n> union all\n> select 'world' word\n> )\n> select *\n> from (\n> select word, count(*) over (partition by word) nb from fakedata\n> ) t where nb = 1;\n> ERREUR: cache lookup failed for function 0\n\n> A DSS developer from my company, Julien Roze, reported me an error I cannot explained. Is it a new behavior or a bug ?\n\nThank you for the report and the minimal self-contained test case.\nThat's highly useful for us.\n\nI've now committed a fix for this ([1]). It will appear in the next\nbeta release for PG15.\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=270eb4b5d4986534f2d522ebb19f67396d13cf44\n\n\n",
"msg_date": "Fri, 5 Aug 2022 10:21:11 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERREUR: cache lookup failed for function 0 with PostgreSQL 15\n beta 2, no error with PostgreSQL 14.4"
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed that dir_open_for_write() in walmethods.c uses write() for\nWAL file initialization (note that this code is used by pg_receivewal\nand pg_basebackup) as opposed to core using pg_pwritev_with_retry() in\nXLogFileInitInternal() to avoid partial writes. Do we need to fix\nthis?\n\nThoughts?\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Fri, 5 Aug 2022 15:55:26 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write()\n to avoid partial writes?"
},
{
"msg_contents": "On Fri, Aug 05, 2022 at 03:55:26PM +0530, Bharath Rupireddy wrote:\n> I noticed that dir_open_for_write() in walmethods.c uses write() for\n> WAL file initialization (note that this code is used by pg_receivewal\n> and pg_basebackup) as opposed to core using pg_pwritev_with_retry() in\n> XLogFileInitInternal() to avoid partial writes. Do we need to fix\n> this?\n\n0d56acfb has moved pg_pwritev_with_retry to be backend-only in fd.c :/\n\n> Thoughts?\n\nMakes sense to me for the WAL segment pre-padding initialization, as\nwe still want to point to the beginning of the segment after we are\ndone with the pre-padding, and the code has an extra lseek().\n--\nMichael",
"msg_date": "Sat, 6 Aug 2022 15:41:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Sat, Aug 6, 2022 at 12:11 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Aug 05, 2022 at 03:55:26PM +0530, Bharath Rupireddy wrote:\n> > I noticed that dir_open_for_write() in walmethods.c uses write() for\n> > WAL file initialization (note that this code is used by pg_receivewal\n> > and pg_basebackup) as opposed to core using pg_pwritev_with_retry() in\n> > XLogFileInitInternal() to avoid partial writes. Do we need to fix\n> > this?\n>\n> 0d56acfb has moved pg_pwritev_with_retry to be backend-only in fd.c :/\n\nYeah. pg_pwritev_with_retry can also be part of common/file_utils.c/.h\nso that everyone can use it.\n\n> > Thoughts?\n>\n> Makes sense to me for the WAL segment pre-padding initialization, as\n> we still want to point to the beginning of the segment after we are\n> done with the pre-padding, and the code has an extra lseek().\n\nThanks. I attached the v1 patch, please review it.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/",
"msg_date": "Sun, 7 Aug 2022 06:42:11 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Sun, Aug 7, 2022 at 1:12 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Sat, Aug 6, 2022 at 12:11 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Yeah. pg_pwritev_with_retry can also be part of common/file_utils.c/.h\n> so that everyone can use it.\n>\n> > > Thoughts?\n> >\n> > Makes sense to me for the WAL segment pre-padding initialization, as\n> > we still want to point to the beginning of the segment after we are\n> > done with the pre-padding, and the code has an extra lseek().\n>\n> Thanks. I attached the v1 patch, please review it.\n\nHi Bharath,\n\n+1 for moving pg_pwritev_with_retry() to file_utils.c, but I think the\ncommit message should say that is happening. Maybe the move should\neven be in a separate patch (IMHO it's nice to separate mechanical\nchange patches from new logic/work patches).\n\n+/*\n+ * A convenience wrapper for pg_pwritev_with_retry() that zero-fills the given\n+ * file of size total_sz in batches of size block_sz.\n+ */\n+ssize_t\n+pg_pwritev_with_retry_and_init(int fd, int total_sz, int block_sz)\n\nHmm, why not give it a proper name that says it writes zeroes?\n\nSize arguments around syscall-like facilities are usually size_t.\n\n+ memset(zbuffer.data, 0, block_sz);\n\nI believe the modern fashion as of a couple of weeks ago is to tell\nthe compiler to zero-initialise. Since it's a union you'd need\ndesignated initialiser syntax, something like zbuffer = { .data = {0}\n}?\n\n+ iov[i].iov_base = zbuffer.data;\n+ iov[i].iov_len = block_sz;\n\nHow can it be OK to use caller supplied block_sz, when\nsizeof(zbuffer.data) is statically determined? What is the point of\nthis argument?\n\n- dir_data->lasterrno = errno;\n+ /* If errno isn't set, assume problem is no disk space */\n+ dir_data->lasterrno = errno ? errno : ENOSPC;\n\nThis coding pattern is used in places where we blame short writes on\nlack of disk space without bothering to retry. But the code used in\nthis patch already handles short writes: it always retries, until\neventually, if you really are out of disk space, you should get an\nactual ENOSPC from the operating system. So I don't think the\nguess-it-must-be-ENOSPC technique applies here.\n\n\n",
"msg_date": "Sun, 7 Aug 2022 14:12:56 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Sun, Aug 7, 2022 at 7:43 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Sun, Aug 7, 2022 at 1:12 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Sat, Aug 6, 2022 at 12:11 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > Yeah. pg_pwritev_with_retry can also be part of common/file_utils.c/.h\n> > so that everyone can use it.\n> >\n> > > > Thoughts?\n> > >\n> > > Makes sense to me for the WAL segment pre-padding initialization, as\n> > > we still want to point to the beginning of the segment after we are\n> > > done with the pre-padding, and the code has an extra lseek().\n> >\n> > Thanks. I attached the v1 patch, please review it.\n>\n> Hi Bharath,\n>\n> +1 for moving pg_pwritev_with_retry() to file_utils.c, but I think the\n> commit message should say that is happening. Maybe the move should\n> even be in a separate patch (IMHO it's nice to separate mechanical\n> change patches from new logic/work patches).\n\nAgree. I separated out the changes.\n\n> +/*\n> + * A convenience wrapper for pg_pwritev_with_retry() that zero-fills the given\n> + * file of size total_sz in batches of size block_sz.\n> + */\n> +ssize_t\n> +pg_pwritev_with_retry_and_init(int fd, int total_sz, int block_sz)\n>\n> Hmm, why not give it a proper name that says it writes zeroes?\n\nDone.\n\n> Size arguments around syscall-like facilities are usually size_t.\n>\n> + memset(zbuffer.data, 0, block_sz);\n>\n> I believe the modern fashion as of a couple of weeks ago is to tell\n> the compiler to zero-initialise. Since it's a union you'd need\n> designated initialiser syntax, something like zbuffer = { .data = {0}\n> }?\n\nHm, but we have many places still using memset(). If we were to change\nthese syntaxes, IMO, it must be done separately.\n\n> + iov[i].iov_base = zbuffer.data;\n> + iov[i].iov_len = block_sz;\n>\n> How can it be OK to use caller supplied block_sz, when\n> sizeof(zbuffer.data) is statically determined? What is the point of\n> this argument?\n\nYes, removed block_sz function parameter.\n\n> - dir_data->lasterrno = errno;\n> + /* If errno isn't set, assume problem is no disk space */\n> + dir_data->lasterrno = errno ? errno : ENOSPC;\n>\n> This coding pattern is used in places where we blame short writes on\n> lack of disk space without bothering to retry. But the code used in\n> this patch already handles short writes: it always retries, until\n> eventually, if you really are out of disk space, you should get an\n> actual ENOSPC from the operating system. So I don't think the\n> guess-it-must-be-ENOSPC technique applies here.\n\nDone.\n\nThanks for reviewing. PSA v2 patch-set. Also, I added a CF entry\nhttps://commitfest.postgresql.org/39/3803/ to give the patches a\nchance to get tested.\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/",
"msg_date": "Sun, 7 Aug 2022 10:41:49 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Sun, Aug 07, 2022 at 10:41:49AM +0530, Bharath Rupireddy wrote:\n> Agree. I separated out the changes.\n\n+\n+/*\n+ * A convenience wrapper for pwritev() that retries on partial write. If an\n+ * error is returned, it is unspecified how much has been written.\n+ */\n+ssize_t\n+pg_pwritev_with_retry(int fd, const struct iovec *iov, int iovcnt, off_t offset)\n\nIf moving this routine, this could use a more explicit description,\nespecially on errno, for example, that could be consumed by the caller\non failure to know what's happening. \n\n>> +/*\n>> + * A convenience wrapper for pg_pwritev_with_retry() that zero-fills the given\n>> + * file of size total_sz in batches of size block_sz.\n>> + */\n>> +ssize_t\n>> +pg_pwritev_with_retry_and_init(int fd, int total_sz, int block_sz)\n>>\n>> Hmm, why not give it a proper name that says it writes zeroes?\n> \n> Done.\n\nFWIW, when it comes to that we have a couple of routines that just use\n'0' to mean such a thing, aka palloc0(). I find 0002 confusing, as it\nintroduces in fe_utils.c a new wrapper\n(pg_pwritev_with_retry_and_write_zeros) on what's already a wrapper\n(pg_pwritev_with_retry) for pwrite().\n\nA second thing is that pg_pwritev_with_retry_and_write_zeros() is\ndesigned to work on WAL segments initialization and it uses\nXLOG_BLCKSZ and PGAlignedXLogBlock for the job, but there is nothing\nin its name that tells us so. This makes me question whether\nfile_utils.c is a good location for this second thing. Could a new\nfile be a better location? We have a xlogutils.c in the backend, and\na name similar to that in src/common/ would be one possibility.\n--\nMichael",
"msg_date": "Sun, 7 Aug 2022 16:56:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Sun, Aug 7, 2022 at 7:56 PM Michael Paquier <michael@paquier.xyz> wrote:\n> FWIW, when it comes to that we have a couple of routines that just use\n> '0' to mean such a thing, aka palloc0(). I find 0002 confusing, as it\n> introduces in fe_utils.c a new wrapper\n> (pg_pwritev_with_retry_and_write_zeros) on what's already a wrapper\n> (pg_pwritev_with_retry) for pwrite().\n\nWhat about pg_write_initial_zeros(fd, size)? How it writes zeros (ie\nusing vector I/O, and retrying) seems to be an internal detail, no?\n\"initial\" to make it clearer that it's at offset 0, or maybe\npg_pwrite_zeros(fd, size, offset).\n\n> A second thing is that pg_pwritev_with_retry_and_write_zeros() is\n> designed to work on WAL segments initialization and it uses\n> XLOG_BLCKSZ and PGAlignedXLogBlock for the job, but there is nothing\n> in its name that tells us so. This makes me question whether\n> file_utils.c is a good location for this second thing. Could a new\n> file be a better location? We have a xlogutils.c in the backend, and\n> a name similar to that in src/common/ would be one possibility.\n\nYeah, I think it should probably be disconnected from XLOG_BLCKSZ, or\nmaybe it's OK to use BLCKSZ with a comment to say that it's a bit\narbitrary, or maybe it's better to define a new zero buffer of some\narbitrary size just in this code if that is too strange. We could\nexperiment with different size buffers to see how it performs, bearing\nin mind that every time we double it you halve the number of system\ncalls, but also bearing in mind that at some point it's too much for\nthe stack. I can tell you that the way that code works today was not\nreally written with performance in mind (unlike, say, the code\nreverted from 9.4 that tried to do this with posix_fallocate()), it\nwas just finding an excuse to call pwritev(), to exercise new fallback\ncode being committed for use by later AIO stuff (more patches coming\nsoon). The retry support was added because it seemed plausible that\nsome system out there would start to do short writes as we cranked up\nthe sizes for some implementation reason other than ENOSPC, so we\nshould make a reusable retry routine.\n\nI think this should also handle the remainder after processing whole\nblocks, just for completeness. If I call the code as presented with size\n8193, I think this code will only write 8192 bytes.\n\nI think if this ever needs to work on O_DIRECT files there would be an\nalignment constraint on the buffer and size, but I don't think we have\nto worry about that for now.\n\n\n",
"msg_date": "Sun, 7 Aug 2022 21:49:19 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Sun, Aug 7, 2022 at 3:19 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> > A second thing is that pg_pwritev_with_retry_and_write_zeros() is\n> > designed to work on WAL segments initialization and it uses\n> > XLOG_BLCKSZ and PGAlignedXLogBlock for the job, but there is nothing\n> > in its name that tells us so. This makes me question whether\n> > file_utils.c is a good location for this second thing. Could a new\n> > file be a better location? We have a xlogutils.c in the backend, and\n> > a name similar to that in src/common/ would be one possibility.\n>\n> Yeah, I think it should probably be disconnected from XLOG_BLCKSZ, or\n> maybe it's OK to use BLCKSZ with a comment to say that it's a bit\n> arbitrary, or maybe it's better to define a new zero buffer of some\n> arbitrary size just in this code if that is too strange. We could\n> experiment with different size buffers to see how it performs, bearing\n> in mind that every time we double it you halve the number of system\n> calls, but also bearing in mind that at some point it's too much for\n> the stack. I can tell you that the way that code works today was not\n> really written with performance in mind (unlike, say, the code\n> reverted from 9.4 that tried to do this with posix_fallocate()), it\n> was just finding an excuse to call pwritev(), to exercise new fallback\n> code being committed for use by later AIO stuff (more patches coming\n> soon). The retry support was added because it seemed plausible that\n> some system out there would start to do short writes as we cranked up\n> the sizes for some implementation reason other than ENOSPC, so we\n> should make a reusable retry routine.\n\nYes, doubling the zerobuffer size to say 2 * XLOG_BLCKSZ or 2 * BLCKSZ\nreduces the system calls to half (right now, pg_pwritev_with_retry()\ngets called 64 times per 16MB WAL file, it writes in the batches of 32\nblocks per call).\n\nIs there a ready-to-use tool or script or specific settings for\npgbench (pgbench command line options or GUC settings) that I can play\nwith to measure the performance?\n\n> I think this should also handle the remainder after processing whole\n> blocks, just for completeness. If I call the code as presented with size\n> 8193, I think this code will only write 8192 bytes.\n\nHm, I will fix it.\n\n> I think if this ever needs to work on O_DIRECT files there would be an\n> alignment constraint on the buffer and size, but I don't think we have\n> to worry about that for now.\n\nWe can add a comment about the above limitation, if required.\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Sun, 7 Aug 2022 21:22:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Sun, Aug 7, 2022 at 9:22 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sun, Aug 7, 2022 at 3:19 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > > A second thing is that pg_pwritev_with_retry_and_write_zeros() is\n> > > designed to work on WAL segments initialization and it uses\n> > > XLOG_BLCKSZ and PGAlignedXLogBlock for the job, but there is nothing\n> > > in its name that tells us so. This makes me question whether\n> > > file_utils.c is a good location for this second thing. Could a new\n> > > file be a better location? We have a xlogutils.c in the backend, and\n> > > a name similar to that in src/common/ would be one possibility.\n> >\n> > Yeah, I think it should probably be disconnected from XLOG_BLCKSZ, or\n> > maybe it's OK to use BLCKSZ with a comment to say that it's a bit\n> > arbitrary, or maybe it's better to define a new zero buffer of some\n> > arbitrary size just in this code if that is too strange. We could\n> > experiment with different size buffers to see how it performs, bearing\n> > in mind that every time we double it you halve the number of system\n> > calls, but also bearing in mind that at some point it's too much for\n> > the stack. I can tell you that the way that code works today was not\n> > really written with performance in mind (unlike, say, the code\n> > reverted from 9.4 that tried to do this with posix_fallocate()), it\n> > was just finding an excuse to call pwritev(), to exercise new fallback\n> > code being committed for use by later AIO stuff (more patches coming\n> > soon). The retry support was added because it seemed plausible that\n> > some system out there would start to do short writes as we cranked up\n> > the sizes for some implementation reason other than ENOSPC, so we\n> > should make a reusable retry routine.\n>\n> Yes, doubling the zerobuffer size to say 2 * XLOG_BLCKSZ or 2 * BLCKSZ\n> reduces the system calls to half (right now, pg_pwritev_with_retry()\n> gets called 64 times per 16MB WAL file, it writes in the batches of 32\n> blocks per call).\n>\n> Is there a ready-to-use tool or script or specific settings for\n> pgbench (pgbench command line options or GUC settings) that I can play\n> with to measure the performance?\n\nI played with a simple insert use-case [1] that generates ~380 WAL\nfiles, with different block sizes. To my surprise, I have not seen any\nimprovement with larger block sizes. I may be doing something wrong\nhere, suggestions on to test and see the benefits are welcome.\n\n> > I think this should also handle the remainder after processing whole\n> > blocks, just for completeness. If I call the code as presented with size\n> > 8193, I think this code will only write 8192 bytes.\n> >\n> > Hm, I will fix it.\n\nFixed.\n\nI'm attaching v5 patch-set. I've addressed review comments received so\nfar and fixed a compiler warning that CF bot complained about.\n\nPlease review it further.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/",
"msg_date": "Mon, 8 Aug 2022 18:10:23 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 6:10 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I played with a simple insert use-case [1] that generates ~380 WAL\n> files, with different block sizes. To my surprise, I have not seen any\n> improvement with larger block sizes. I may be doing something wrong\n> here, suggestions on to test and see the benefits are welcome.\n>\n> > > I think this should also handle the remainder after processing whole\n> > > blocks, just for completeness. If I call the code as presented with size\n> > > 8193, I think this code will only write 8192 bytes.\n> >\n> > Hm, I will fix it.\n>\n> Fixed.\n>\n> I'm attaching v5 patch-set. I've addressed review comments received so\n> far and fixed a compiler warning that CF bot complained about.\n>\n> Please review it further.\n\nI tried to vary the zero buffer size to see if there's any huge\nbenefit for the WAL-generating queries. Unfortunately, I didn't see\nany benefit on my dev system (16 vcore, 512GB SSD, 32GB RAM) . The use\ncase I've tried is at [1] and the results are at [2].\n\nHaving said that, the use of pg_pwritev_with_retry() in walmethods.c\nwill definitely reduce number of system calls - on HEAD the\ndir_open_for_write() makes pad_to_size/XLOG_BLCKSZ i.e. 16MB/8KB =\n2,048 write() calls and with patch it makes only 64\npg_pwritev_with_retry() calls with XLOG_BLCKSZ zero buffer size. The\nproposed patches will provide straight 32x reduction in system calls\n(for pg_receivewal and pg_basebackup) apart from the safety against\npartial writes.\n\n[1]\n/* built source code with release flags */\n./configure --with-zlib --enable-depend --prefix=$PWD/inst/\n--with-openssl --with-readline --with-perl --with-libxml CFLAGS='-O2'\n> install.log && make -j 8 install > install.log 2>&1 &\n\n\\q\n./pg_ctl -D data -l logfile stop\nrm -rf data\n\n/* ensured that nothing exists in OS page cache */\nfree -m\nsudo su\nsync; echo 3 > /proc/sys/vm/drop_caches\nexit\nfree -m\n\n./initdb -D data\n./pg_ctl -D data -l logfile start\n./psql -d postgres -c 'ALTER SYSTEM SET max_wal_size = \"64GB\";'\n./psql -d postgres -c 'ALTER SYSTEM SET shared_buffers = \"8GB\";'\n./psql -d postgres -c 'ALTER SYSTEM SET work_mem = \"16MB\";'\n./psql -d postgres -c 'ALTER SYSTEM SET checkpoint_timeout = \"1d\";'\n./pg_ctl -D data -l logfile restart\n./psql -d postgres -c 'create table foo(bar int);'\n./psql -d postgres\n\\timing\ninsert into foo select * from generate_series(1, 100000000); /* this\nquery generates about 385 WAL files, no checkpoint hence no recycle of\nold WAL files, all new WAL files */\n\n[2]\nHEAD\nTime: 84249.535 ms (01:24.250)\n\nHEAD with wal_init_zero off\nTime: 75086.300 ms (01:15.086)\n\n#define PWRITEV_BLCKSZ XLOG_BLCKSZ\nTime: 85254.302 ms (01:25.254)\n\n#define PWRITEV_BLCKSZ (4 * XLOG_BLCKSZ)\nTime: 83542.885 ms (01:23.543)\n\n#define PWRITEV_BLCKSZ (16 * XLOG_BLCKSZ)\nTime: 84035.770 ms (01:24.036)\n\n#define PWRITEV_BLCKSZ (64 * XLOG_BLCKSZ)\nTime: 84749.021 ms (01:24.749)\n\n#define PWRITEV_BLCKSZ (256 * XLOG_BLCKSZ)\nTime: 84273.466 ms (01:24.273)\n\n#define PWRITEV_BLCKSZ (512 * XLOG_BLCKSZ)\nTime: 84233.576 ms (01:24.234)\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Tue, 9 Aug 2022 13:00:35 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Mon, Aug 08, 2022 at 06:10:23PM +0530, Bharath Rupireddy wrote:\n> I'm attaching v5 patch-set. I've addressed review comments received so\n> far and fixed a compiler warning that CF bot complained about.\n> \n> Please review it further.\n\n0001 looks reasonable to me.\n\n+ errno = 0;\n+ rc = pg_pwritev_zeros(fd, pad_to_size);\n\nDo we need to reset errno? pg_pwritev_zeros() claims to set errno\nappropriately.\n\n+/*\n+ * PWRITEV_BLCKSZ is same as XLOG_BLCKSZ for now, however it may change if\n+ * writing more bytes per pg_pwritev_with_retry() call is proven to be more\n+ * performant.\n+ */\n+#define PWRITEV_BLCKSZ XLOG_BLCKSZ\n\nThis seems like something we should sort out now instead of leaving as\nfuture work. Given your recent note, I think we should just use\nXLOG_BLCKSZ and PGAlignedXLogBlock and add a comment about the performance\nfindings with different buffer sizes.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 20 Sep 2022 16:00:26 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 04:00:26PM -0700, Nathan Bossart wrote:\n> On Mon, Aug 08, 2022 at 06:10:23PM +0530, Bharath Rupireddy wrote:\n>> I'm attaching v5 patch-set. I've addressed review comments received so\n>> far and fixed a compiler warning that CF bot complained about.\n>> \n>> Please review it further.\n> \n> 0001 looks reasonable to me.\n> \n> + errno = 0;\n> + rc = pg_pwritev_zeros(fd, pad_to_size);\n> \n> Do we need to reset errno? pg_pwritev_zeros() claims to set errno\n> appropriately.\n> \n> +/*\n> + * PWRITEV_BLCKSZ is same as XLOG_BLCKSZ for now, however it may change if\n> + * writing more bytes per pg_pwritev_with_retry() call is proven to be more\n> + * performant.\n> + */\n> +#define PWRITEV_BLCKSZ XLOG_BLCKSZ\n> \n> This seems like something we should sort out now instead of leaving as\n> future work. Given your recent note, I think we should just use\n> XLOG_BLCKSZ and PGAlignedXLogBlock and add a comment about the performance\n> findings with different buffer sizes.\n\nI also noticed that the latest patch set no longer applies, so I've marked\nthe commitfest entry as waiting-on-author.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 20 Sep 2022 16:06:08 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 4:30 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> 0001 looks reasonable to me.\n\nThanks for reviewing.\n\n> + errno = 0;\n> + rc = pg_pwritev_zeros(fd, pad_to_size);\n>\n> Do we need to reset errno? pg_pwritev_zeros() claims to set errno\n> appropriately.\n\nRight, pg_pwritev_zeros(), (rather pg_pwritev_with_retry() ensures\nthat pwritev() or pwrite()) sets the correct errno, please see\nThomas's comments [1] as well. Removed it.\n\n> +/*\n> + * PWRITEV_BLCKSZ is same as XLOG_BLCKSZ for now, however it may change if\n> + * writing more bytes per pg_pwritev_with_retry() call is proven to be more\n> + * performant.\n> + */\n> +#define PWRITEV_BLCKSZ XLOG_BLCKSZ\n>\n> This seems like something we should sort out now instead of leaving as\n> future work. Given your recent note, I think we should just use\n> XLOG_BLCKSZ and PGAlignedXLogBlock and add a comment about the performance\n> findings with different buffer sizes.\n\nAgreed. Removed the new structure and added a comment.\n\nAnother change that I had to do was to allow lseek() even after\npwrite() (via pg_pwritev_zeros()) on Windows in walmethods.c. Without\nthis, the regression tests start to fail on Windows. And I figured out\nthat it's not an issue with this patch, it looks like an issue with\npwrite() implementation in win32pwrite.c, see the failure here [2], I\nplan to start a separate thread to discuss this.\n\nPlease review the attached v4 patch set further.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGJKwUrpP0igOFAv5khj3dbHvfWqLzFeo7vtNzDYXv_YZw%40mail.gmail.com\n[2] https://github.com/BRupireddy/postgres/tree/use_pwrite_without_lseek_on_windiws\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 23 Sep 2022 11:46:56 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "+ PGAlignedXLogBlock zbuffer;\n+\n+ memset(zbuffer.data, 0, XLOG_BLCKSZ);\n\nThis seems excessive for only writing a single byte.\n\n+#ifdef WIN32\n+ /*\n+ * XXX: It looks like on Windows, we need an explicit lseek() call here\n+ * despite using pwrite() implementation from win32pwrite.c. Otherwise\n+ * an error occurs.\n+ */\n\nI think this comment is too vague. Can we describe the error in more\ndetail? Or better yet, can we fix it as a prerequisite to this patch set?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 23 Sep 2022 13:24:39 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Sat, Sep 24, 2022 at 8:24 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> +#ifdef WIN32\n> + /*\n> + * XXX: It looks like on Windows, we need an explicit lseek() call here\n> + * despite using pwrite() implementation from win32pwrite.c. Otherwise\n> + * an error occurs.\n> + */\n>\n> I think this comment is too vague. Can we describe the error in more\n> detail? Or better yet, can we fix it as a prerequisite to this patch set?\n\nAlthough WriteFile() with a synchronous file handle and an explicit\noffset doesn't use the current file position, it appears that it still\nchanges it. :-(\n\nYou'd think from the documentation[1] that that isn't the case, because it says:\n\n\"Considerations for working with synchronous file handles:\n\n * If lpOverlapped is NULL, the write operation starts at the current\nfile position and WriteFile does not return until the operation is\ncomplete, and the system updates the file pointer before WriteFile\nreturns.\n\n * If lpOverlapped is not NULL, the write operation starts at the\noffset that is specified in the OVERLAPPED structure and WriteFile\ndoes not return until the write operation is complete. The system\nupdates the OVERLAPPED Internal and InternalHigh fields before\nWriteFile returns.\"\n\nSo it's explicitly saying the file pointer is updated in the first\nbullet point and not the second, but src/port/win32pwrite.c is doing\nthe second thing. Yet the following assertion added to Bharath's code\nfails[2]:\n\n+++ b/src/bin/pg_basebackup/walmethods.c\n@@ -221,6 +221,10 @@ dir_open_for_write(WalWriteMethod *wwmethod,\nconst char *pathname,\n if (pad_to_size && wwmethod->compression_algorithm ==\nPG_COMPRESSION_NONE)\n {\n ssize_t rc;\n+ off_t before_offset;\n+ off_t after_offset;\n+\n+ before_offset = lseek(fd, 0, SEEK_CUR);\n\n rc = pg_pwritev_zeros(fd, pad_to_size);\n\n@@ -231,6 +235,9 @@ dir_open_for_write(WalWriteMethod *wwmethod, const\nchar *pathname,\n return NULL;\n }\n\n+ after_offset = lseek(fd, 0, SEEK_CUR);\n+ Assert(before_offset == after_offset);\n+\n\n[1] https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-writefile#synchronization-and-file-position\n[2] https://api.cirrus-ci.com/v1/artifact/task/6201051266154496/testrun/build/testrun/pg_basebackup/010_pg_basebackup/log/regress_log_010_pg_basebackup\n\n\n",
"msg_date": "Sat, 24 Sep 2022 16:14:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Sat, Sep 24, 2022 at 9:44 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Although WriteFile() with a synchronous file handle and an explicit\n> offset doesn't use the current file position, it appears that it still\n> changes it. :-(\n>\n> You'd think from the documentation[1] that that isn't the case, because it says:\n>\n> \"Considerations for working with synchronous file handles:\n>\n> * If lpOverlapped is NULL, the write operation starts at the current\n> file position and WriteFile does not return until the operation is\n> complete, and the system updates the file pointer before WriteFile\n> returns.\n>\n> * If lpOverlapped is not NULL, the write operation starts at the\n> offset that is specified in the OVERLAPPED structure and WriteFile\n> does not return until the write operation is complete. The system\n> updates the OVERLAPPED Internal and InternalHigh fields before\n> WriteFile returns.\"\n>\n> So it's explicitly saying the file pointer is updated in the first\n> bullet point and not the second, but src/port/win32pwrite.c is doing\n> the second thing.\n\nThe WriteFile() and pwrite() implementation in win32pwrite.c are\nchanging the file pointer on Windows, see the following and [4] for\nmore details.\n\n# Running: pg_basebackup --no-sync -cfast -D\nC:\\cirrus\\build\\testrun\\pg_basebackup\\010_pg_basebackup\\data\\tmp_test_sV4r/backup2\n--no-manifest --waldir\nC:\\cirrus\\build\\testrun\\pg_basebackup\\010_pg_basebackup\\data\\tmp_test_sV4r/xlog2\npg_basebackup: offset_before 0 and offset_after 16777216 aren't the same\nAssertion failed: offset_before == offset_after, file\n../src/bin/pg_basebackup/walmethods.c, line 254\n\nIrrespective of what Windows does with file pointers in WriteFile(),\nshould we add lseek(SEEK_SET) in our own pwrite()'s implementation,\nsomething like [5]? This is rather hackish without fully knowing what\nWindows does internally in WriteFile(), but this does fix inherent\nissues that our pwrite() callers (there are quite a number of places\nthat use pwrite() and presumes file pointer doesn't change on Windows)\nmay have on Windows. See the regression tests passing [6] with the fix\n[5].\n\n> Yet the following assertion added to Bharath's code\n> fails[2]:\n\nThat was a very quick patch though, here's another version adding\noffset checks and an assertion [3], see the assertion failures here\n[4].\n\nI also think that we need to discuss this issue separately.\n\nThoughts?\n\n> [1] https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-writefile#synchronization-and-file-position\n> [2] https://api.cirrus-ci.com/v1/artifact/task/6201051266154496/testrun/build/testrun/pg_basebackup/010_pg_basebackup/log/regress_log_010_pg_basebackup\n[3] - https://github.com/BRupireddy/postgres/tree/add_pwrite_and_offset_checks_in_walmethods_v2\n[4] - https://api.cirrus-ci.com/v1/artifact/task/5294264635621376/testrun/build/testrun/pg_basebackup/010_pg_basebackup/log/regress_log_010_pg_basebackup\n[5]\ndiff --git a/src/port/win32pwrite.c b/src/port/win32pwrite.c\nindex 7f2e62e8a7..542b548279 100644\n--- a/src/port/win32pwrite.c\n+++ b/src/port/win32pwrite.c\n@@ -37,5 +37,16 @@ pwrite(int fd, const void *buf, size_t size, off_t offset)\n return -1;\n }\n\n+ /*\n+ * On Windows, it is found that WriteFile() changes the file\npointer and we\n+ * want pwrite() to not change. Hence, we explicitly reset the\nfile pointer\n+ * to beginning of the file.\n+ */\n+ if (lseek(fd, 0, SEEK_SET) != 0)\n+ {\n+ _dosmaperr(GetLastError());\n+ return -1;\n+ }\n+\n return result;\n }\n[6] - https://github.com/BRupireddy/postgres/tree/add_pwrite_and_offset_checks_in_walmethods_v3\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 26 Sep 2022 20:33:53 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Mon, Sep 26, 2022 at 08:33:53PM +0530, Bharath Rupireddy wrote:\n> Irrespective of what Windows does with file pointers in WriteFile(),\n> should we add lseek(SEEK_SET) in our own pwrite()'s implementation,\n> something like [5]? This is rather hackish without fully knowing what\n> Windows does internally in WriteFile(), but this does fix inherent\n> issues that our pwrite() callers (there are quite a number of places\n> that use pwrite() and presumes file pointer doesn't change on Windows)\n> may have on Windows. See the regression tests passing [6] with the fix\n> [5].\n\nI think so. I don't see why we would rather have each caller ensure\npwrite() behaves as documented.\n\n> + /*\n> + * On Windows, it is found that WriteFile() changes the file\n> pointer and we\n> + * want pwrite() to not change. Hence, we explicitly reset the\n> file pointer\n> + * to beginning of the file.\n> + */\n> + if (lseek(fd, 0, SEEK_SET) != 0)\n> + {\n> + _dosmaperr(GetLastError());\n> + return -1;\n> + }\n> +\n> return result;\n> }\n\nWhy reset to the beginning of the file? Shouldn't we reset it to what it\nwas before the call to pwrite()?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 26 Sep 2022 14:27:04 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 10:27 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> On Mon, Sep 26, 2022 at 08:33:53PM +0530, Bharath Rupireddy wrote:\n> > Irrespective of what Windows does with file pointers in WriteFile(),\n> > should we add lseek(SEEK_SET) in our own pwrite()'s implementation,\n> > something like [5]? This is rather hackish without fully knowing what\n> > Windows does internally in WriteFile(), but this does fix inherent\n> > issues that our pwrite() callers (there are quite a number of places\n> > that use pwrite() and presumes file pointer doesn't change on Windows)\n> > may have on Windows. See the regression tests passing [6] with the fix\n> > [5].\n>\n> I think so. I don't see why we would rather have each caller ensure\n> pwrite() behaves as documented.\n\nI don't think so, that's an extra kernel call. I think I'll just have\nto revert part of my recent change that removed the pg_ prefix from\nthose function names in our code, and restore the comment that warns\nyou about the portability hazard (I thought it went away with HP-UX\n10, where we were literally calling lseek() before every write()).\nThe majority of users of these functions don't intermix them with\ncalls to read()/write(), so they don't care about the file position,\nso I think it's just something we'll have to continue to be mindful of\nin the places that do.\n\nUnless, that is, I can find a way to stop it from doing that... I've\nadded this to my Windows to-do list. I am going to have a round of\nWindows hacking quite soon.\n\n\n",
"msg_date": "Tue, 27 Sep 2022 10:37:38 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 10:37:38AM +1300, Thomas Munro wrote:\n> I don't think so, that's an extra kernel call. I think I'll just have\n> to revert part of my recent change that removed the pg_ prefix from\n> those function names in our code, and restore the comment that warns\n> you about the portability hazard (I thought it went away with HP-UX\n> 10, where we were literally calling lseek() before every write()).\n> The majority of users of these functions don't intermix them with\n> calls to read()/write(), so they don't care about the file position,\n> so I think it's just something we'll have to continue to be mindful of\n> in the places that do.\n\nAh, you're right, it's probably best to avoid the extra system call for the\nmajority of callers that don't care about the file position. I retract my\nprevious message.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 26 Sep 2022 14:55:31 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 3:08 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> I don't think so, that's an extra kernel call. I think I'll just have\n> to revert part of my recent change that removed the pg_ prefix from\n> those function names in our code, and restore the comment that warns\n> you about the portability hazard (I thought it went away with HP-UX\n> 10, where we were literally calling lseek() before every write()).\n> The majority of users of these functions don't intermix them with\n> calls to read()/write(), so they don't care about the file position,\n> so I think it's just something we'll have to continue to be mindful of\n> in the places that do.\n\nYes, all of the existing pwrite() callers don't care about the file\nposition, but the new callers such as the actual idea and patch\nproposed here in this thread cares.\n\nIs this the commit cf112c122060568aa06efe4e6e6fb9b2dd4f1090 part of\nwhich [1] you're planning to revert? If so, will we have the\npg_{pwrite, pread} back? IIUC, we don't have lseek(SEEK_SET) in\npg_{pwrite, pread} right? It is the callers responsibility to set the\nfile position correctly if they wish to, isn't it? Oftentimes,\ndevelopers miss the notes in the function comments and use these\nfunctions expecting them to not change file position which works well\non non-Windows platforms but fails on Windows.\n\nThis makes me think that we can have pwrite(), pread() introduced by\ncf112c122060568aa06efe4e6e6fb9b2dd4f1090 as-is and re-introduce\npg_{pwrite, pread} with pwrite()/pread()+lseek(SEEK_SET) in\nwin32pwrite.c and win32pread.c. These functions reduce the caller's\nefforts and reduce the duplicate code. If okay, I'm happy to lend my\nhands on this patch.\n\nThoughts?\n\n> Unless, that is, I can find a way to stop it from doing that... I've\n> added this to my Windows to-do list. I am going to have a round of\n> Windows hacking quite soon.\n\nThanks! 
I think it's time for me to start a new thread just for this\nto get more attention and opinion from other hackers and not to\nsidetrack the original idea and patch proposed in this thread.\n\n[1] https://github.com/postgres/postgres/blob/REL_14_STABLE/src/port/pwrite.c#L11\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 27 Sep 2022 11:12:59 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 6:43 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Tue, Sep 27, 2022 at 3:08 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > I don't think so, that's an extra kernel call. I think I'll just have\n> > to revert part of my recent change that removed the pg_ prefix from\n> > those function names in our code, and restore the comment that warns\n> > you about the portability hazard (I thought it went away with HP-UX\n> > 10, where we were literally calling lseek() before every write()).\n> > The majority of users of these functions don't intermix them with\n> > calls to read()/write(), so they don't care about the file position,\n> > so I think it's just something we'll have to continue to be mindful of\n> > in the places that do.\n>\n> Yes, all of the existing pwrite() callers don't care about the file\n> position, but the new callers such as the actual idea and patch\n> proposed here in this thread cares.\n>\n> Is this the commit cf112c122060568aa06efe4e6e6fb9b2dd4f1090 part of\n> which [1] you're planning to revert?\n\nYeah, just the renaming parts of that. The lseek()-based emulation is\ndefinitely not coming back. Something like the attached.",
"msg_date": "Tue, 27 Sep 2022 22:30:22 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 3:01 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Something like the attached.\n\nIsn't it also better to add notes in win32pread.c and win32pwrite.c\nabout the callers doing lseek(SEEK_SET) if they wish to and Windows\nimplementations changing file position? We can also add a TODO item\nabout replacing pg_ versions with pread and friends someday when\nWindows fixes the issue? Having it in the commit and include/port.h is\ngood, but the comments in win32pread.c and win32pwrite.c make life\neasier IMO. Otherwise, the patch LGTM.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 27 Sep 2022 17:33:33 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Wed, Sep 28, 2022 at 1:03 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:>\n> On Tue, Sep 27, 2022 at 3:01 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Something like the attached.\n>\n> Isn't it also better to add notes in win32pread.c and win32pwrite.c\n> about the callers doing lseek(SEEK_SET) if they wish to and Windows\n> implementations changing file position? We can also add a TODO item\n> about replacing pg_ versions with pread and friends someday when\n> Windows fixes the issue? Having it in the commit and include/port.h is\n> good, but the comments in win32pread.c and win32pwrite.c make life\n> easier IMO. Otherwise, the patch LGTM.\n\nThanks, will do. FWIW I doubt the OS itself will change released\nbehaviour like that, but I did complain about the straight-up\nmisleading documentation and they agreed and fixed it[1].\n\nAfter some looking around, the only way I could find to avoid this\nbehaviour is to switch to async handles, which do behave as documented\nin this respect, but then you can't use plain read/write at all unless\nyou write replacement wrappers for those too, and AFAICT that adds\nmore system calls (start write, wait for write to finish) and\ncomplexity/weirdness without any real payoff so it seems like the\nwrong direction, at least without more research that I'm not pursuing\ncurrently. (FWIW in AIO porting experiments a while back we used\nasync handles to get IOs running concurrently with CPU work and\npossibly other IOs so it was actually worth doing, but that was only\nfor smgr and the main wal writing code where there's no intermixed\nplain read/write calls as you have here, so the problem doesn't even\ncome up.)\n\n[1] https://github.com/MicrosoftDocs/sdk-api/pull/1309\n\n\n",
"msg_date": "Wed, 28 Sep 2022 08:01:26 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Wed, Sep 28, 2022 at 12:32 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Sep 28, 2022 at 1:03 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:>\n> > On Tue, Sep 27, 2022 at 3:01 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > Something like the attached.\n> >\n> > Isn't it also better to add notes in win32pread.c and win32pwrite.c\n> > about the callers doing lseek(SEEK_SET) if they wish to and Windows\n> > implementations changing file position? We can also add a TODO item\n> > about replacing pg_ versions with pread and friends someday when\n> > Windows fixes the issue? Having it in the commit and include/port.h is\n> > good, but the comments in win32pread.c and win32pwrite.c make life\n> > easier IMO. Otherwise, the patch LGTM.\n>\n> Thanks, will do.\n\nI'm looking forward to getting it in.\n\n> FWIW I doubt the OS itself will change released\n> behaviour like that, but I did complain about the straight-up\n> misleading documentation and they agreed and fixed it[1].\n>\n> [1] https://github.com/MicrosoftDocs/sdk-api/pull/1309\n\nGreat! Is there any plan to request for change in WriteFile() to not\nalter file position?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Sep 2022 10:13:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Wed, Sep 28, 2022 at 5:43 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Wed, Sep 28, 2022 at 12:32 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > FWIW I doubt the OS itself will change released\n> > behaviour like that, but I did complain about the straight-up\n> > misleading documentation and they agreed and fixed it[1].\n> >\n> > [1] https://github.com/MicrosoftDocs/sdk-api/pull/1309\n>\n> Great! Is there any plan to request for change in WriteFile() to not\n> alter file position?\n\nNot from me. I stick to open source problems. Reporting bugs in\ndocumentation is legitimate self defence, though.\n\n\n",
"msg_date": "Wed, 28 Sep 2022 19:11:52 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Sat, Sep 24, 2022 at 1:54 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> + PGAlignedXLogBlock zbuffer;\n> +\n> + memset(zbuffer.data, 0, XLOG_BLCKSZ);\n>\n> This seems excessive for only writing a single byte.\n\nYes, I removed it now, instead doing pg_pwrite(fd, \"\\0\", 1,\nwal_segment_size - 1).\n\n> +#ifdef WIN32\n> + /*\n> + * XXX: It looks like on Windows, we need an explicit lseek() call here\n> + * despite using pwrite() implementation from win32pwrite.c. Otherwise\n> + * an error occurs.\n> + */\n>\n> I think this comment is too vague. Can we describe the error in more\n> detail? Or better yet, can we fix it as a prerequisite to this patch set?\n\nThe commit b6d8a60aba322678585ebe11dab072a37ac32905 brings back\npg_pwrite() and its friends. This puts the responsibility of doing\nlseek(SEEK_SET) on the callers if they wish to.\n\nPlease see the v5 patch set and review it further.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 29 Sep 2022 11:32:32 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Thu, Sep 29, 2022 at 11:32:32AM +0530, Bharath Rupireddy wrote:\n> On Sat, Sep 24, 2022 at 1:54 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>>\n>> + PGAlignedXLogBlock zbuffer;\n>> +\n>> + memset(zbuffer.data, 0, XLOG_BLCKSZ);\n>>\n>> This seems excessive for only writing a single byte.\n> \n> Yes, I removed it now, instead doing pg_pwrite(fd, \"\\0\", 1,\n> wal_segment_size - 1).\n\nI don't think removing the use of PGAlignedXLogBlock here introduces any\nsort of alignment risk, so this should be alright.\n\n+#ifdef WIN32\n+ /*\n+ * WriteFile() on Windows changes the current file position, hence we\n+ * need an explicit lseek() here. See pg_pwrite() implementation in\n+ * win32pwrite.c for more details.\n+ */\n\nShould we really surround this with a WIN32 check, or should we just\nunconditionally lseek() here? I understand that this helps avoid an extra\nsystem call on many platforms, but in theory another platform introduced in\nthe future could have the same problem, and this seems like something that\ncould easily be missed. Presumably we could do something fancier to\nindicate pg_pwrite()'s behavior in this regard, but I don't know if that\nsort of complexity is really worth it in order to save an lseek().\n\n+ iov[0].iov_base = zbuffer.data;\n\nThis seems superfluous, but I don't think it's hurting anything.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 29 Sep 2022 10:27:11 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 6:27 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Thu, Sep 29, 2022 at 11:32:32AM +0530, Bharath Rupireddy wrote:\n> > On Sat, Sep 24, 2022 at 1:54 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >>\n> >> + PGAlignedXLogBlock zbuffer;\n> >> +\n> >> + memset(zbuffer.data, 0, XLOG_BLCKSZ);\n> >>\n> >> This seems excessive for only writing a single byte.\n> >\n> > Yes, I removed it now, instead doing pg_pwrite(fd, \"\\0\", 1,\n> > wal_segment_size - 1).\n>\n> I don't think removing the use of PGAlignedXLogBlock here introduces any\n> sort of alignment risk, so this should be alright.\n>\n> +#ifdef WIN32\n> + /*\n> + * WriteFile() on Windows changes the current file position, hence we\n> + * need an explicit lseek() here. See pg_pwrite() implementation in\n> + * win32pwrite.c for more details.\n> + */\n>\n> Should we really surround this with a WIN32 check, or should we just\n> unconditionally lseek() here? I understand that this helps avoid an extra\n> system call on many platforms, but in theory another platform introduced in\n> the future could have the same problem, and this seems like something that\n> could easily be missed. Presumably we could do something fancier to\n> indicate pg_pwrite()'s behavior in this regard, but I don't know if that\n> sort of complexity is really worth it in order to save an lseek().\n\n+1 for just doing it always, with a one-liner comment like\n\"pg_pwritev*() might move the file position\". No reason to spam the\nsource tree with more explanations of the exact reason. If someone\never comes up with another case where p- and non-p- I/O functions are\nintermixed and it's really worth saving a system call (don't get me\nwrong, we call lseek() an obscene amount elsewhere and I'd like to fix\nthat, but this case isn't hot?) 
then I like your idea of a macro to\ntell you whether you need to.\n\nEarlier I wondered why we'd want to include \"pg_pwritev\" in the name\nof this zero-filling function (pwritev being an internal\nimplementation detail), but now it seems like maybe a good idea\nbecause it highlights the file position portability problem by being a\nmember of that family of similarly-named functions.\n\n\n",
"msg_date": "Fri, 30 Sep 2022 14:15:30 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 6:46 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> +1 for just doing it always, with a one-liner comment like\n> \"pg_pwritev*() might move the file position\". No reason to spam the\n> source tree with more explanations of the exact reason.\n\n+1 for resetting the file position in a platform-independent manner.\nBut, a description there won't hurt IMO and it saves time for the\nhackers who spend time there and think why it's that way.\n\n> If someone\n> ever comes up with another case where p- and non-p- I/O functions are\n> intermixed and it's really worth saving a system call (don't get me\n> wrong, we call lseek() an obscene amount elsewhere and I'd like to fix\n> that, but this case isn't hot?) then I like your idea of a macro to\n> tell you whether you need to.\n\nI don't think we go that route as the code isn't a hot path and an\nextra system call wouldn't hurt performance much, a comment there\nshould work.\n\n> Earlier I wondered why we'd want to include \"pg_pwritev\" in the name\n> of this zero-filling function (pwritev being an internal\n> implementation detail), but now it seems like maybe a good idea\n> because it highlights the file position portability problem by being a\n> member of that family of similarly-named functions.\n\nHm.\n\nOn Thu, Sep 29, 2022 at 10:57 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> + iov[0].iov_base = zbuffer.data;\n>\n> This seems superfluous, but I don't think it's hurting anything.\n\nYes, I removed it. Adding a comment, something like [1], would make it\nmore verbose, hence I've not added.\n\nI'm attaching the v6 patch set, please review it further.\n\n[1]\n /*\n * Use the first vector buffer to write the remaining size. 
Note that\n * zero buffer was already pointed to it above, hence just specifying\n * the size is enough here.\n */\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 30 Sep 2022 08:09:04 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 08:09:04AM +0530, Bharath Rupireddy wrote:\n> I'm attaching the v6 patch set, please review it further.\n\nLooks reasonable to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 29 Sep 2022 20:09:56 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Thu, Sep 29, 2022 at 08:09:56PM -0700, Nathan Bossart wrote:\n> Looks reasonable to me.\n\n0001, to move pg_pwritev_with_retry() to a new home, seems fine, so\napplied.\n\nRegarding 0002, using pg_pwrite_zeros() as a routine name, as\nsuggested by Thomas, sounds good to me. However, I am not really a\nfan of its dependency with PGAlignedXLogBlock, because it should be\nable to work with any buffers of any sizes, as long as the input\nbuffer is aligned, shouldn't it? For example, what about\nPGAlignedBlock? So, should we make this more extensible? My guess\nwould be the addition of the block size and the block pointer to the \narguments of pg_pwrite_zeros(), in combination with a check to make\nsure that the input buffer is MAXALIGN()'d (with an Assert() rather\nthan just an elog/pg_log_error?).\n--\nMichael",
"msg_date": "Thu, 27 Oct 2022 14:54:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Thu, Oct 27, 2022 at 11:24 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Sep 29, 2022 at 08:09:56PM -0700, Nathan Bossart wrote:\n> > Looks reasonable to me.\n>\n> 0001, to move pg_pwritev_with_retry() to a new home, seems fine, so\n> applied.\n\nThanks.\n\n> Regarding 0002, using pg_pwrite_zeros() as a routine name, as\n> suggested by Thomas, sounds good to me.\n\nChanged.\n\n> However, I am not really a\n> fan of its dependency with PGAlignedXLogBlock, because it should be\n> able to work with any buffers of any sizes, as long as the input\n> buffer is aligned, shouldn't it? For example, what about\n> PGAlignedBlock? So, should we make this more extensible? My guess\n> would be the addition of the block size and the block pointer to the\n> arguments of pg_pwrite_zeros(), in combination with a check to make\n> sure that the input buffer is MAXALIGN()'d (with an Assert() rather\n> than just an elog/pg_log_error?).\n\n+1 to pass in the aligned buffer, its size and an assertion on the buffer size.\n\nPlease see the attached v7 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 27 Oct 2022 14:57:47 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "Hi,\n\nInterestingly, I also needed something like pg_pwrite_zeros() today. Exposed\nvia smgr, for more efficient relation extensions.\n\nOn 2022-10-27 14:54:00 +0900, Michael Paquier wrote:\n> Regarding 0002, using pg_pwrite_zeros() as a routine name, as\n> suggested by Thomas, sounds good to me. However, I am not really a\n> fan of its dependency with PGAlignedXLogBlock, because it should be\n> able to work with any buffers of any sizes, as long as the input\n> buffer is aligned, shouldn't it? For example, what about\n> PGAlignedBlock? So, should we make this more extensible? My guess\n> would be the addition of the block size and the block pointer to the\n> arguments of pg_pwrite_zeros(), in combination with a check to make\n> sure that the input buffer is MAXALIGN()'d (with an Assert() rather\n> than just an elog/pg_log_error?).\n\nI don't like passing in the buffer. That leads to code like in Bharat's latest\nversion, where we now zero that buffer on every invocation of\npg_pwrite_zeros() - not at all cheap. And every caller has to have provisions\nto provide that buffer.\n\nThe block sizes don't need to match, do they? As long as the block is properly\naligned, we can change the iov_len of the final iov to match whatever the size\nis being passed in, no?\n\nWhy don't we define a\n\nstatic PGAlignedBlock source_of_zeroes;\n\nin file_utils.c, and use that in pg_pwrite_zeros(), being careful to set the\niov_len arguments correctly?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 27 Oct 2022 15:58:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Thu, Oct 27, 2022 at 03:58:25PM -0700, Andres Freund wrote:\n> The block sizes don't need to match, do they? As long as the block is properly\n> aligned, we can change the iov_len of the final iov to match whatever the size\n> is being passed in, no?\n\nHmm. Based on what Bharath has written upthread, it does not seem to\nmatter if the size of the aligned block changes, either:\nhttps://www.postgresql.org/message-id/CALj2ACUccjR7KbKqWOsQmqH1ZGEDyJ7hH5Ef+DOhcv7+kOnjCQ@mail.gmail.com\n\nI am honestly not sure whether it is a good idea to make file_utils.c\ndepend on one of the compile-time page sizes in this routine, be it\nthe page size of the WAL page size, as pg_write_zeros() would be used\nfor some rather low-level operations. But we could as well just use a\nlocally-defined structure with a buffer at 4kB or 8kB and call it a\nday?\n--\nMichael",
"msg_date": "Fri, 28 Oct 2022 11:09:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Fri, Oct 28, 2022 at 7:39 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Oct 27, 2022 at 03:58:25PM -0700, Andres Freund wrote:\n> > The block sizes don't need to match, do they? As long as the block is\nproperly\n> > aligned, we can change the iov_len of the final iov to match whatever\nthe size\n> > is being passed in, no?\n>\n> Hmm. Based on what Bharath has written upthread, it does not seem to\n> matter if the size of the aligned block changes, either:\n>\nhttps://www.postgresql.org/message-id/CALj2ACUccjR7KbKqWOsQmqH1ZGEDyJ7hH5Ef+DOhcv7+kOnjCQ@mail.gmail.com\n>\n> I am honestly not sure whether it is a good idea to make file_utils.c\n> depend on one of the compile-time page sizes in this routine, be it\n> the page size of the WAL page size, as pg_write_zeros() would be used\n> for some rather low-level operations. But we could as well just use a\n> locally-defined structure with a buffer at 4kB or 8kB and call it a\n> day?\n\n+1. Please see the attached v8 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 28 Oct 2022 11:38:51 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Fri, Oct 28, 2022 at 11:38:51AM +0530, Bharath Rupireddy wrote:\n> +1. Please see the attached v8 patch.\n\n+ char data[PG_WRITE_BLCKSZ];\n+ double force_align_d;\n+ int64 force_align_i64;\n+} PGAlignedWriteBlock;\nI have not checked in details, but that should do the job.\n\nAndres, Thomas, Nathan, perhaps you have a different view on the\nmatter?\n--\nMichael",
"msg_date": "Fri, 28 Oct 2022 17:00:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
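[Editor's note: the PGAlignedWriteBlock fragment quoted above relies on the same union trick as PostgreSQL's existing PGAlignedBlock: placing a double and a 64-bit integer in a union with the char buffer forces the buffer to at least their alignment, with no compiler-specific attributes. A minimal standalone sketch — the names and the 8kB size here are illustrative, not the in-tree definitions:]

```c
#include <assert.h>
#include <stdalign.h>
#include <stdint.h>

#define DEMO_BLCKSZ 8192

/*
 * Union-based alignment trick: the unused double/int64_t members raise the
 * alignment of the whole union, and therefore of the char buffer at
 * offset 0, to at least alignof(double) / alignof(int64_t).
 */
typedef union DemoAlignedBlock
{
	char	data[DEMO_BLCKSZ];
	double	force_align_d;
	int64_t	force_align_i64;
} DemoAlignedBlock;
```

Any instance's `data` member then starts at an address aligned for the stricter member types, which is what makes such a buffer safe to hand to low-level I/O routines.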
{
"msg_contents": "Hi,\n\nOn 2022-10-28 11:09:38 +0900, Michael Paquier wrote:\n> On Thu, Oct 27, 2022 at 03:58:25PM -0700, Andres Freund wrote:\n> > The block sizes don't need to match, do they? As long as the block is properly\n> > aligned, we can change the iov_len of the final iov to match whatever the size\n> > is being passed in, no?\n> \n> Hmm. Based on what Bharath has written upthread, it does not seem to\n> matter if the size of the aligned block changes, either:\n> https://www.postgresql.org/message-id/CALj2ACUccjR7KbKqWOsQmqH1ZGEDyJ7hH5Ef+DOhcv7+kOnjCQ@mail.gmail.com\n> \n> I am honestly not sure whether it is a good idea to make file_utils.c\n> depend on one of the compile-time page sizes in this routine, be it\n> the page size of the WAL page size, as pg_write_zeros() would be used\n> for some rather low-level operations. But we could as well just use a\n> locally-defined structure with a buffer at 4kB or 8kB and call it a\n> day?\n\nShrug. I don't think we gain much by having yet another PGAlignedXYZBlock.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 28 Oct 2022 15:07:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Sat, Oct 29, 2022 at 3:37 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-10-28 11:09:38 +0900, Michael Paquier wrote:\n> > On Thu, Oct 27, 2022 at 03:58:25PM -0700, Andres Freund wrote:\n> > > The block sizes don't need to match, do they? As long as the block is properly\n> > > aligned, we can change the iov_len of the final iov to match whatever the size\n> > > is being passed in, no?\n> >\n> > Hmm. Based on what Bharath has written upthread, it does not seem to\n> > matter if the size of the aligned block changes, either:\n> > https://www.postgresql.org/message-id/CALj2ACUccjR7KbKqWOsQmqH1ZGEDyJ7hH5Ef+DOhcv7+kOnjCQ@mail.gmail.com\n> >\n> > I am honestly not sure whether it is a good idea to make file_utils.c\n> > depend on one of the compile-time page sizes in this routine, be it\n> > the page size of the WAL page size, as pg_write_zeros() would be used\n> > for some rather low-level operations. But we could as well just use a\n> > locally-defined structure with a buffer at 4kB or 8kB and call it a\n> > day?\n>\n> Shrug. I don't think we gain much by having yet another PGAlignedXYZBlock.\n\nHm. I tend to agree with Andres here, i.e. using PGAlignedBlock is\nsufficient. It seems like we are using PGAlignedBlock for heap, index,\nhistory file, fsm, visibility map file pages as well.\n\nPlease see the attached v9 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 29 Oct 2022 11:54:02 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On 2022-Oct-27, Michael Paquier wrote:\n\n> On Thu, Sep 29, 2022 at 08:09:56PM -0700, Nathan Bossart wrote:\n> > Looks reasonable to me.\n> \n> 0001, to move pg_pwritev_with_retry() to a new home, seems fine, so\n> applied.\n\nMaybe something a bit useless, but while perusing the commits I noticed\na forward struct declaration was moved from one file to another; this is\nclaimed to avoid including port/pg_iovec.h in common/file_utils.h. We\ndo that kind of thing in a few places, but in this particular case it\nseems a bit of a pointless exercise, since pg_iovec.h doesn't include\nanything else and it's a quite simple header.\n\nSo I'm kinda proposing that we only do the forward struct initialization\ndance when it really saves on things -- in particular, when it helps\navoid or reduce massive indirect header inclusion.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/",
"msg_date": "Sun, 30 Oct 2022 15:44:32 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Sun, Oct 30, 2022 at 03:44:32PM +0100, Alvaro Herrera wrote:\n> So I'm kinda proposing that we only do the forward struct initialization\n> dance when it really saves on things -- in particular, when it helps\n> avoid or reduce massive indirect header inclusion.\n\nSure.\n\n> extern ssize_t pg_pwritev_with_retry(int fd,\n> - const struct iovec *iov,\n> + const iovec *iov,\n> int iovcnt,\n> off_t offset);\n\nHowever this still needs to be defined as a struct, no?\n--\nMichael",
"msg_date": "Mon, 31 Oct 2022 08:31:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 5:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Oct 30, 2022 at 03:44:32PM +0100, Alvaro Herrera wrote:\n> > So I'm kinda proposing that we only do the forward struct initialization\n> > dance when it really saves on things -- in particular, when it helps\n> > avoid or reduce massive indirect header inclusion.\n>\n> Sure.\n\nI don't think including pg_iovec.h in file_utils.h is a good idea. I\nagree that pg_iovec.h is fairly a small header file but file_utils.h\nis included in 21 .c files, as of today and the file_utils.h footprint\nmight increase in future. Therefore, I'd vote for forward struct\ninitialization as it is on HEAD today.\n\n> > extern ssize_t pg_pwritev_with_retry(int fd,\n> > - const struct iovec *iov,\n> > + const iovec *iov,\n> > int iovcnt,\n> > off_t offset);\n>\n> However this still needs to be defined as a struct, no?\n\nYes, we need a struct there because we haven't typedef'ed struct iovec.\n\nAlso, the patch forgets to remove \"port/pg_iovec.h\" from file_utils.c\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 31 Oct 2022 11:50:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 11:50 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Oct 31, 2022 at 5:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Sun, Oct 30, 2022 at 03:44:32PM +0100, Alvaro Herrera wrote:\n> > > So I'm kinda proposing that we only do the forward struct initialization\n> > > dance when it really saves on things -- in particular, when it helps\n> > > avoid or reduce massive indirect header inclusion.\n> >\n> > Sure.\n>\n> I don't think including pg_iovec.h in file_utils.h is a good idea. I\n> agree that pg_iovec.h is fairly a small header file but file_utils.h\n> is included in 21 .c files, as of today and the file_utils.h footprint\n> might increase in future. Therefore, I'd vote for forward struct\n> initialization as it is on HEAD today.\n\nI'm attaching the v9 patch from upthread here again for further review\nand to make CF bot happy.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 1 Nov 2022 08:32:48 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Tue, Nov 01, 2022 at 08:32:48AM +0530, Bharath Rupireddy wrote:\n> I'm attaching the v9 patch from upthread here again for further review\n> and to make CF bot happy.\n\nSo, I have looked at that, and at the end concluded that Andres'\nsuggestion to use PGAlignedBlock in pg_write_zeros() will serve better\nin the long run. Thomas has mentioned upthread that some of the\ncomments don't need to be that long, so I have tweaked these to be\nminimal, and updated a few more areas. Note that this has been split\ninto two commits: one to introduce the new routine in file_utils.c and\na second for the switch in walmethods.c.\n--\nMichael",
"msg_date": "Tue, 8 Nov 2022 13:07:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Tue, Nov 8, 2022 at 11:08 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> So, I have looked at that, and at the end concluded that Andres'\n> suggestion to use PGAlignedBlock in pg_write_zeros() will serve better\n> in the long run. Thomas has mentioned upthread that some of the\n> comments don't need to be that long, so I have tweaked these to be\n> minimal, and updated a few more areas. Note that this has been split\n> into two commits: one to introduce the new routine in file_utils.c and\n> a second for the switch in walmethods.c.\n\nWas there supposed to be an attachment here?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Tue, Nov 8, 2022 at 11:08 AM Michael Paquier <michael@paquier.xyz> wrote:>> So, I have looked at that, and at the end concluded that Andres'> suggestion to use PGAlignedBlock in pg_write_zeros() will serve better> in the long run. Thomas has mentioned upthread that some of the> comments don't need to be that long, so I have tweaked these to be> minimal, and updated a few more areas. Note that this has been split> into two commits: one to introduce the new routine in file_utils.c and> a second for the switch in walmethods.c.Was there supposed to be an attachment here?--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 11 Nov 2022 12:44:14 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Fri, Nov 11, 2022 at 11:14 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> On Tue, Nov 8, 2022 at 11:08 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > So, I have looked at that, and at the end concluded that Andres'\n> > suggestion to use PGAlignedBlock in pg_write_zeros() will serve better\n> > in the long run. Thomas has mentioned upthread that some of the\n> > comments don't need to be that long, so I have tweaked these to be\n> > minimal, and updated a few more areas. Note that this has been split\n> > into two commits: one to introduce the new routine in file_utils.c and\n> > a second for the switch in walmethods.c.\n>\n> Was there supposed to be an attachment here?\n\nNope. The patches have already been committed -\n3bdbdf5d06f2179d4c17926d77ff734ea9e7d525 and\n28cc2976a9cf0ed661dbc55f49f669192cce1c89.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 11 Nov 2022 11:53:08 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Fri, Nov 11, 2022 at 11:53:08AM +0530, Bharath Rupireddy wrote:\n> On Fri, Nov 11, 2022 at 11:14 AM John Naylor <john.naylor@enterprisedb.com> wrote:\n>> Was there supposed to be an attachment here?\n> \n> Nope. The patches have already been committed -\n> 3bdbdf5d06f2179d4c17926d77ff734ea9e7d525 and\n> 28cc2976a9cf0ed661dbc55f49f669192cce1c89.\n\nThe committed patches are pretty much the same as the last version\nsent on this thread, except that the changes have been split across\nthe files they locally impact, with a few simplifications tweaks to\nthe comments. Hencem I did not see any need to send a new version for\nthis case.\n--\nMichael",
"msg_date": "Fri, 11 Nov 2022 16:11:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Fri, Nov 11, 2022 at 2:12 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Nov 11, 2022 at 11:53:08AM +0530, Bharath Rupireddy wrote:\n> > On Fri, Nov 11, 2022 at 11:14 AM John Naylor <\njohn.naylor@enterprisedb.com> wrote:\n> >> Was there supposed to be an attachment here?\n> >\n> > Nope. The patches have already been committed -\n> > 3bdbdf5d06f2179d4c17926d77ff734ea9e7d525 and\n> > 28cc2976a9cf0ed661dbc55f49f669192cce1c89.\n>\n> The committed patches are pretty much the same as the last version\n> sent on this thread, except that the changes have been split across\n> the files they locally impact, with a few simplifications tweaks to\n> the comments. Hencem I did not see any need to send a new version for\n> this case.\n\nAh, I missed that -- sorry for the noise.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Fri, Nov 11, 2022 at 2:12 PM Michael Paquier <michael@paquier.xyz> wrote:>> On Fri, Nov 11, 2022 at 11:53:08AM +0530, Bharath Rupireddy wrote:> > On Fri, Nov 11, 2022 at 11:14 AM John Naylor <john.naylor@enterprisedb.com> wrote:> >> Was there supposed to be an attachment here?> >> > Nope. The patches have already been committed -> > 3bdbdf5d06f2179d4c17926d77ff734ea9e7d525 and> > 28cc2976a9cf0ed661dbc55f49f669192cce1c89.>> The committed patches are pretty much the same as the last version> sent on this thread, except that the changes have been split across> the files they locally impact, with a few simplifications tweaks to> the comments. Hencem I did not see any need to send a new version for> this case.Ah, I missed that -- sorry for the noise.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 11 Nov 2022 16:21:31 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-01 08:32:48 +0530, Bharath Rupireddy wrote:\n> +/*\n> + * pg_pwrite_zeros\n> + *\n> + * Writes zeros to a given file. Input parameters are \"fd\" (file descriptor of\n> + * the file), \"size\" (size of the file in bytes).\n> + *\n> + * On failure, a negative value is returned and errno is set appropriately so\n> + * that the caller can use it accordingly.\n> + */\n> +ssize_t\n> +pg_pwrite_zeros(int fd, size_t size)\n> +{\n> +\tPGAlignedBlock\tzbuffer;\n> +\tsize_t\tzbuffer_sz;\n> +\tstruct iovec\tiov[PG_IOV_MAX];\n> +\tint\t\tblocks;\n> +\tsize_t\tremaining_size = 0;\n> +\tint\t\ti;\n> +\tssize_t\twritten;\n> +\tssize_t\ttotal_written = 0;\n> +\n> +\tzbuffer_sz = sizeof(zbuffer.data);\n> +\n> +\t/* Zero-fill the buffer. */\n> +\tmemset(zbuffer.data, 0, zbuffer_sz);\n\nI previously commented on this - why are we memseting a buffer on every call\nto this? That's not at all free.\n\nSomething like\n static const PGAlignedBlock zerobuf = {0};\nwould do the trick. You do need to cast the const away, to assign to\niov_base, but that's not too ugly.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 11 Feb 2023 14:44:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Sun, Feb 12, 2023 at 4:14 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> > +ssize_t\n> > +pg_pwrite_zeros(int fd, size_t size)\n> > +{\n> > + PGAlignedBlock zbuffer;\n> > + size_t zbuffer_sz;\n> > + struct iovec iov[PG_IOV_MAX];\n> > + int blocks;\n> > + size_t remaining_size = 0;\n> > + int i;\n> > + ssize_t written;\n> > + ssize_t total_written = 0;\n> > +\n> > + zbuffer_sz = sizeof(zbuffer.data);\n> > +\n> > + /* Zero-fill the buffer. */\n> > + memset(zbuffer.data, 0, zbuffer_sz);\n>\n> I previously commented on this - why are we memseting a buffer on every call\n> to this? That's not at all free.\n>\n> Something like\n> static const PGAlignedBlock zerobuf = {0};\n> would do the trick. You do need to cast the const away, to assign to\n> iov_base, but that's not too ugly.\n\nThanks for looking at it. We know that we don't change the zbuffer in\nthe function, so can we avoid static const and have just a static\nvariable, like the attached\nv1-0001-Use-static-variable-to-avoid-memset-calls-in-pg_p.patch? Do\nyou see any problem with it?\n\nFWIW, it comes out like the attached\nv1-0001-Use-static-const-variable-to-avoid-memset-calls-i.patch with\nstatic const.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 12 Feb 2023 19:59:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-12 19:59:00 +0530, Bharath Rupireddy wrote:\n> Thanks for looking at it. We know that we don't change the zbuffer in\n> the function, so can we avoid static const and have just a static\n> variable, like the attached\n> v1-0001-Use-static-variable-to-avoid-memset-calls-in-pg_p.patch? Do\n> you see any problem with it?\n\nMaking it static const is better, because it allows the memory for the\nvariable to be put in a readonly section.\n\n\n> \t/* Prepare to write out a lot of copies of our zero buffer at once. */\n> \tfor (i = 0; i < lengthof(iov); ++i)\n> \t{\n> -\t\tiov[i].iov_base = zbuffer.data;\n> +\t\tiov[i].iov_base = (void *) (unconstify(PGAlignedBlock *, &zbuffer)->data);\n> \t\tiov[i].iov_len = zbuffer_sz;\n> \t}\n\nAnother thing: I think we should either avoid iterating over all the IOVs if\nwe don't need them, or, even better, initialize the array as a constant, once.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 12 Feb 2023 09:31:36 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
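[Editor's note: Andres's two points above — a `static const` buffer that the toolchain can place in a read-only data section, so no per-call memset is needed, and casting the const away when assigning to `iov_base` (struct iovec declares it as plain `void *`) — can be sketched in isolation. Names are hypothetical; PostgreSQL itself spells the cast with its `unconstify()` macro:]

```c
#include <assert.h>
#include <stddef.h>
#include <sys/uio.h>

#define ZBUF_SZ 8192

/* Zero-initialized and const: can live in .rodata, never memset() at runtime. */
static const char zerobuf[ZBUF_SZ] = {0};

/* Point every iovec at the single shared read-only zero buffer. */
void
fill_zero_iov(struct iovec *iov, int niov)
{
	for (int i = 0; i < niov; i++)
	{
		/* cast away const, as iov_base is a plain void * */
		iov[i].iov_base = (void *) zerobuf;
		iov[i].iov_len = ZBUF_SZ;
	}
}
```

The buffer is shared by every iovec entry, so the memory cost stays one block regardless of how many entries a writev()-style call batches.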
{
"msg_contents": "On Sun, Feb 12, 2023 at 09:31:36AM -0800, Andres Freund wrote:\n> Another thing: I think we should either avoid iterating over all the IOVs if\n> we don't need them, or, even better, initialize the array as a constant, once.\n\nWhere you imply that we'd still use memset() once on iov[PG_IOV_MAX],\nright?\n--\nMichael",
"msg_date": "Mon, 13 Feb 2023 13:08:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Sun, Feb 12, 2023 at 11:01 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-02-12 19:59:00 +0530, Bharath Rupireddy wrote:\n> > Thanks for looking at it. We know that we don't change the zbuffer in\n> > the function, so can we avoid static const and have just a static\n> > variable, like the attached\n> > v1-0001-Use-static-variable-to-avoid-memset-calls-in-pg_p.patch? Do\n> > you see any problem with it?\n>\n> Making it static const is better, because it allows the memory for the\n> variable to be put in a readonly section.\n\nOkay.\n\n> > /* Prepare to write out a lot of copies of our zero buffer at once. */\n> > for (i = 0; i < lengthof(iov); ++i)\n> > {\n> > - iov[i].iov_base = zbuffer.data;\n> > + iov[i].iov_base = (void *) (unconstify(PGAlignedBlock *, &zbuffer)->data);\n> > iov[i].iov_len = zbuffer_sz;\n> > }\n>\n> Another thing: I think we should either avoid iterating over all the IOVs if\n> we don't need them, or, even better, initialize the array as a constant, once.\n\nHow about like the attached patch? It makes the iovec static variable\nand points the zero buffer only once/for the first time to iovec. This\navoids for-loop on every call.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 13 Feb 2023 10:15:03 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "At Mon, 13 Feb 2023 10:15:03 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \r\n> On Sun, Feb 12, 2023 at 11:01 PM Andres Freund <andres@anarazel.de> wrote:\r\n> >\r\n> > On 2023-02-12 19:59:00 +0530, Bharath Rupireddy wrote:\r\n> > > /* Prepare to write out a lot of copies of our zero buffer at once. */\r\n> > > for (i = 0; i < lengthof(iov); ++i)\r\n> > > {\r\n> > > - iov[i].iov_base = zbuffer.data;\r\n> > > + iov[i].iov_base = (void *) (unconstify(PGAlignedBlock *, &zbuffer)->data);\r\n> > > iov[i].iov_len = zbuffer_sz;\r\n> > > }\r\n> >\r\n> > Another thing: I think we should either avoid iterating over all the IOVs if\r\n> > we don't need them, or, even better, initialize the array as a constant, once.\r\n\r\nFWIW, I tried to use the \"{[start .. end] = {}}\" trick (GNU extension?\r\n[*1]) for constant array initialization, but individual members don't\r\naccept assigning a const value, thus I did deconstify as the follows.\r\n\r\n>\tstatic const struct iovec\tiov[PG_IOV_MAX] =\r\n>\t\t{[0 ... PG_IOV_MAX - 1] =\r\n>\t\t {\r\n>\t\t\t .iov_base = (void *)&zbuffer.data,\r\n>\t\t\t .iov_len = BLCKSZ\r\n>\t\t }\r\n>\t\t};\r\n\r\nI didn't checked the actual mapping, but if I tried an assignment\r\n\"iov[0].iov_base = NULL\", it failed as \"assignment of member\r\n‘iov_base’ in read-only object\", so is it successfully placed in a\r\nread-only segment?\r\n\r\nLater code assigns iov[0].iov_len thus we need to provide a separate\r\niov non-const variable, or can we use pwrite instead there? (I didn't\r\nfind pg_pwrite_with_retry(), though)\r\n\r\n> How about like the attached patch? It makes the iovec static variable\r\n> and points the zero buffer only once/for the first time to iovec. 
This\r\n> avoids for-loop on every call.\r\n\r\nAs the patch itself, it seems forgetting to reset iov[0].iov_len after\r\nwriting a partial block.\r\n\r\n\r\nretards.\r\n\r\n\r\n*1: https://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Designated-Inits.html\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Mon, 13 Feb 2023 18:33:34 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
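[Editor's note: the "[first ... last]" range designator Kyotaro links to can be tried standalone. It is a GNU C extension (accepted by GCC and Clang, not standard C); the names and sizes below are illustrative:]

```c
#include <assert.h>
#include <stddef.h>
#include <sys/uio.h>

#define DEMO_IOV_MAX 8
#define DEMO_BLCKSZ 8192

static const char zbuf[DEMO_BLCKSZ] = {0};

/*
 * GNU range designator: every element of the const array gets the same
 * initializer, and the whole array can go into a read-only section.  The
 * flip side, noted in the message above, is that iov[0].iov_len can no
 * longer be adjusted at runtime for a short final write.
 */
static const struct iovec iov[DEMO_IOV_MAX] = {
	[0 ... DEMO_IOV_MAX - 1] = {
		.iov_base = (void *) zbuf,	/* const cast away for void * */
		.iov_len = DEMO_BLCKSZ
	}
};
```

With this layout an assignment such as `iov[0].iov_base = NULL` is rejected at compile time, matching the "assignment of member ... in read-only object" error reported above.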
{
"msg_contents": "On 2023-02-13 18:33:34 +0900, Kyotaro Horiguchi wrote:\n> At Mon, 13 Feb 2023 10:15:03 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> > On Sun, Feb 12, 2023 at 11:01 PM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > On 2023-02-12 19:59:00 +0530, Bharath Rupireddy wrote:\n> > > > /* Prepare to write out a lot of copies of our zero buffer at once. */\n> > > > for (i = 0; i < lengthof(iov); ++i)\n> > > > {\n> > > > - iov[i].iov_base = zbuffer.data;\n> > > > + iov[i].iov_base = (void *) (unconstify(PGAlignedBlock *, &zbuffer)->data);\n> > > > iov[i].iov_len = zbuffer_sz;\n> > > > }\n> > >\n> > > Another thing: I think we should either avoid iterating over all the IOVs if\n> > > we don't need them, or, even better, initialize the array as a constant, once.\n> \n> FWIW, I tried to use the \"{[start .. end] = {}}\" trick (GNU extension?\n> [*1]) for constant array initialization, but individual members don't\n> accept assigning a const value, thus I did deconstify as the follows.\n> \n> >\tstatic const struct iovec\tiov[PG_IOV_MAX] =\n> >\t\t{[0 ... PG_IOV_MAX - 1] =\n> >\t\t {\n> >\t\t\t .iov_base = (void *)&zbuffer.data,\n> >\t\t\t .iov_len = BLCKSZ\n> >\t\t }\n> >\t\t};\n> \n> I didn't checked the actual mapping, but if I tried an assignment\n> \"iov[0].iov_base = NULL\", it failed as \"assignment of member\n> ‘iov_base’ in read-only object\", so is it successfully placed in a\n> read-only segment?\n> \n> Later code assigns iov[0].iov_len thus we need to provide a separate\n> iov non-const variable, or can we use pwrite instead there? 
(I didn't\n> find pg_pwrite_with_retry(), though)\n\nGiven that we need to do that, and given that we already need to loop to\nhandle writes that are longer than PG_IOV_MAX * BLCKSZ, it's probably not\nworth avoiding iov initialization.\n\nBut I think it's worth limiting the initialization to blocks.\n\nI'd also try to combine the first pg_writev_* with the second one.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 09:39:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-12 09:31:36 -0800, Andres Freund wrote:\n> Another thing: I think we should either avoid iterating over all the IOVs if\n> we don't need them, or, even better, initialize the array as a constant, once.\n\nI just tried to use pg_pwrite_zeros - and couldn't because it doesn't have an\noffset parameter. Huh, what lead to the function being so constrained?\n\n- Andres\n\n\n",
"msg_date": "Mon, 13 Feb 2023 17:10:56 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Mon, Feb 13, 2023 at 05:10:56PM -0800, Andres Freund wrote:\n> I just tried to use pg_pwrite_zeros - and couldn't because it doesn't have an\n> offset parameter. Huh, what lead to the function being so constrained?\n\nIts current set of uses cases, where we only use it now to initialize\nwith zeros with WAL segments. If you have a case that plans to use\nthat stuff with an offset, no problem with me. \n--\nMichael",
"msg_date": "Tue, 14 Feb 2023 16:06:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Mon, Feb 13, 2023 at 11:09 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> > Later code assigns iov[0].iov_len thus we need to provide a separate\n> > iov non-const variable, or can we use pwrite instead there? (I didn't\n> > find pg_pwrite_with_retry(), though)\n>\n> Given that we need to do that, and given that we already need to loop to\n> handle writes that are longer than PG_IOV_MAX * BLCKSZ, it's probably not\n> worth avoiding iov initialization.\n>\n> But I think it's worth limiting the initialization to blocks.\n\nWe can still optimize away the for loop by using a single iovec for\nremaining size, like the attached v2 patch.\n\n> I'd also try to combine the first pg_writev_* with the second one.\n\nDone, PSA v2 patch.\n\nOn Tue, Feb 14, 2023 at 6:40 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-02-12 09:31:36 -0800, Andres Freund wrote:\n> > Another thing: I think we should either avoid iterating over all the IOVs if\n> > we don't need them, or, even better, initialize the array as a constant, once.\n>\n> I just tried to use pg_pwrite_zeros - and couldn't because it doesn't have an\n> offset parameter. Huh, what lead to the function being so constrained?\n\nDone, PSA v2 patch.\n\nWe could do few more things, but honestly I feel they're unnecessary:\n1) An assert-only code that checks if the asked file contents are\nzeroed at the end of pg_pwrite_zeros (to be more defensive, but\nreading 16MB files and checking if it's zero-filled will surely\nslowdown the Assert builds).\n2) A small test module passing in a file with the size to write isn't\nmultiple of block size, meaning, the code we have in the function to\nwrite last remaining bytes (less than BLCKSZ) gets covered which isn't\ncovered right now -\nhttps://coverage.postgresql.org/src/common/file_utils.c.gcov.html.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 14 Feb 2023 18:00:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-14 16:06:24 +0900, Michael Paquier wrote:\n> On Mon, Feb 13, 2023 at 05:10:56PM -0800, Andres Freund wrote:\n> > I just tried to use pg_pwrite_zeros - and couldn't because it doesn't have an\n> > offset parameter. Huh, what lead to the function being so constrained?\n> \n> Its current set of uses cases, where we only use it now to initialize\n> with zeros with WAL segments. If you have a case that plans to use\n> that stuff with an offset, no problem with me.\n\nThen it really shouldn't have been named pg_pwrite_zeros(). The point of the\np{write,read}{,v} family of functions is to be able to specify the offset to\nread/write at. I assume the p is for position, but I'm not sure.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Feb 2023 16:46:07 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-14 18:00:00 +0530, Bharath Rupireddy wrote:\n> On Mon, Feb 13, 2023 at 11:09 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > > Later code assigns iov[0].iov_len thus we need to provide a separate\n> > > iov non-const variable, or can we use pwrite instead there? (I didn't\n> > > find pg_pwrite_with_retry(), though)\n> >\n> > Given that we need to do that, and given that we already need to loop to\n> > handle writes that are longer than PG_IOV_MAX * BLCKSZ, it's probably not\n> > worth avoiding iov initialization.\n> >\n> > But I think it's worth limiting the initialization to blocks.\n>\n> We can still optimize away the for loop by using a single iovec for\n> remaining size, like the attached v2 patch.\n>\n> > I'd also try to combine the first pg_writev_* with the second one.\n>\n> Done, PSA v2 patch.\n\nThis feels way too complicated to me. How about something more like the\nattached?\n\n\n> 2) A small test module passing in a file with the size to write isn't\n> multiple of block size, meaning, the code we have in the function to\n> write last remaining bytes (less than BLCKSZ) gets covered which isn't\n> covered right now -\n\nFWIW, I tested this locally by just specifying a smaller size than BLCKSZ for\nthe write size.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 14 Feb 2023 16:55:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
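[Editor's note: the shape converging in this subthread — one read-only zero buffer, an iovec batch capped at an IOV limit, an offset parameter, and an outer loop that absorbs short writes by shrinking the next batch — can be sketched independently of PostgreSQL as below. This is not the actual pg_pwrite_zeros code (which goes through pg_pwritev_with_retry() and uses PG_IOV_MAX/BLCKSZ); names and constants are illustrative:]

```c
#include <assert.h>
#include <fcntl.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

#define DEMO_BLCKSZ	8192
#define DEMO_IOV_MAX	32

static const char zblock[DEMO_BLCKSZ] = {0};	/* shared, never memset */

/* Write 'size' zero bytes at 'offset'; returns bytes written, or -1 on error. */
ssize_t
demo_pwrite_zeros(int fd, size_t size, off_t offset)
{
	ssize_t		total = 0;

	while ((size_t) total < size)
	{
		struct iovec	iov[DEMO_IOV_MAX];
		size_t		remaining = size - (size_t) total;
		int		niov = 0;

		/* Build one batch of iovecs; the last one may be short. */
		while (niov < DEMO_IOV_MAX && remaining > 0)
		{
			size_t	len = remaining < DEMO_BLCKSZ ? remaining : DEMO_BLCKSZ;

			iov[niov].iov_base = (void *) zblock;	/* const cast away */
			iov[niov].iov_len = len;
			remaining -= len;
			niov++;
		}

		ssize_t		written = pwritev(fd, iov, niov, offset + total);

		if (written < 0)
			return -1;		/* caller inspects errno */
		total += written;	/* a short write simply shrinks the next batch */
	}
	return total;
}
```

A short write from pwritev() needs no special handling here: the outer loop recomputes the remaining size and rebuilds the batch, which is the point Kyotaro raised about resetting `iov[0].iov_len` after a partial block.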
{
"msg_contents": "On Tue, Feb 14, 2023 at 04:46:07PM -0800, Andres Freund wrote:\n> Then it really shouldn't have been named pg_pwrite_zeros(). The point of the\n> p{write,read}{,v} family of functions is to be able to specify the offset to\n> read/write at. I assume the p is for position, but I'm not sure.\n\n'p' could stand for POSIX, though both read() and pread() are in it.\nAnyway, it looks that your guess may be right:\nhttps://stackoverflow.com/questions/17877556/what-does-p-stand-for-in-function-names-pwrite-and-pread\n\nEven there, people don't seem completely sure.\n--\nMichael",
"msg_date": "Wed, 15 Feb 2023 10:26:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "At Tue, 14 Feb 2023 16:55:25 -0800, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2023-02-14 18:00:00 +0530, Bharath Rupireddy wrote:\n> > Done, PSA v2 patch.\n> \n> This feels way too complicated to me. How about something more like the\n> attached?\n\nI like this one, but the parameters offset and size are in a different\norder from pwrite(fd, buf, count, offset). I perfer the arrangement\nsuggested by Bharath. And isn't it better to use Min(remaining_size,\nBLCKSZ) instead of a bare if statement?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 15 Feb 2023 10:28:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "Hi\n\nOn 2023-02-15 10:28:37 +0900, Kyotaro Horiguchi wrote:\n> At Tue, 14 Feb 2023 16:55:25 -0800, Andres Freund <andres@anarazel.de> wrote in \n> > Hi,\n> > \n> > On 2023-02-14 18:00:00 +0530, Bharath Rupireddy wrote:\n> > > Done, PSA v2 patch.\n> > \n> > This feels way too complicated to me. How about something more like the\n> > attached?\n> \n> I like this one, but the parameters offset and size are in a different\n> order from pwrite(fd, buf, count, offset). I perfer the arrangement\n> suggested by Bharath.\n\nYes, it probably is better. Not sure why I went with that order.\n\n\n> And isn't it better to use Min(remaining_size, BLCKSZ) instead of a bare if\n> statement?\n\nI really can't make myself care about which version is better :)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Feb 2023 18:00:28 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Wed, Feb 15, 2023 at 6:25 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> > Done, PSA v2 patch.\n>\n> This feels way too complicated to me. How about something more like the\n> attached?\n\nThanks. I kind of did cost analysis of v2 and v3:\n\nInput: zero-fill a file of size 256*8K.\nv2 patch:\niovec initialization with zerobuf for loop - 1 time\nzero-fill 32 blocks at once - 8 times\nstack memory - sizeof(PGAlignedBlock) + sizeof(struct iovec) * PG_IOV_MAX\n\nv3 patch:\niovec initialization with zerobuf for loop - 8 times (7 times more\nthan v2 patch)\nzero-fill 32 blocks at once - 8 times (no change from v2 patch)\nstack memory - sizeof(PGAlignedBlock) + sizeof(struct iovec) *\nPG_IOV_MAX (no change from v2 patch)\n\nThe v3 patch reduces initialization of iovec array elements which is a\nclear win when pg_pwrite_zeros is called for sizes less than BLCKSZ\nmany times (I assume this is what is needed for the relation extension\nlock improvements feature). However, it increases the number of iovec\ninitialization with zerobuf for the cases when pg_pwrite_zeros is\ncalled for sizes far greater than BLCKSZ (for instance, WAL file\ninitialization).\n\nFWIW, I attached v4 patch, a simplified version of the v2 - it\ninitializes all the iovec array elements if the total blocks to be\nwritten crosses lengthof(iovec array), otherwise it initializes only\nthe needed blocks.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 15 Feb 2023 13:00:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Wed, Feb 15, 2023 at 01:00:00PM +0530, Bharath Rupireddy wrote:\n> The v3 patch reduces initialization of iovec array elements which is a\n> clear win when pg_pwrite_zeros is called for sizes less than BLCKSZ\n> many times (I assume this is what is needed for the relation extension\n> lock improvements feature). However, it increases the number of iovec\n> initialization with zerobuf for the cases when pg_pwrite_zeros is\n> called for sizes far greater than BLCKSZ (for instance, WAL file\n> initialization).\n\nIt seems to me that v3 would do extra initializations only if\npg_pwritev_with_retry() does *not* retry its writes, but that's not\nthe case as it retries on a partial write as per its name. The number\nof iov buffers is stricly capped by remaining_size. FWIW, I find v3\nproposed more elegant.\n\n> FWIW, I attached v4 patch, a simplified version of the v2 - it\n> initializes all the iovec array elements if the total blocks to be\n> written crosses lengthof(iovec array), otherwise it initializes only\n> the needed blocks.\n\n+ static size_t zbuf_sz = BLCKSZ;\nIn v4, what's the advantage of marking that as static? It could\nactually be dangerous if this is carelessly updated. Well, that's not\nthe case, still..\n--\nMichael",
"msg_date": "Thu, 16 Feb 2023 16:58:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-16 16:58:23 +0900, Michael Paquier wrote:\n> On Wed, Feb 15, 2023 at 01:00:00PM +0530, Bharath Rupireddy wrote:\n> > The v3 patch reduces initialization of iovec array elements which is a\n> > clear win when pg_pwrite_zeros is called for sizes less than BLCKSZ\n> > many times (I assume this is what is needed for the relation extension\n> > lock improvements feature). However, it increases the number of iovec\n> > initialization with zerobuf for the cases when pg_pwrite_zeros is\n> > called for sizes far greater than BLCKSZ (for instance, WAL file\n> > initialization).\n\nIn those cases the cost of initializing the IOV doesn't matter, relative to\nthe other costs. The important point is to not initialize a lot of elements if\nthey're not even needed. Because we need to overwrite the trailing iov\nelement, it doesn't seem worth to try to \"pre-initialize\" iov.\n\nReferencing a static variable is more expensive than accessing an on-stack\nvariable. Having a first-call check is more expensive than not having it.\n\nThus making the iov and zbuf_sz static isn't helpful. Particularly the latter\nseems like a bad idea, because it's a compiler constant.\n\n\n> It seems to me that v3 would do extra initializations only if\n> pg_pwritev_with_retry() does *not* retry its writes, but that's not\n> the case as it retries on a partial write as per its name. The number\n> of iov buffers is stricly capped by remaining_size.\n\nI don't really understand this bit?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Feb 2023 11:00:20 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
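The cost argument in the preceding message — a function-scope static with a first-call check pays a guard branch and global-storage access on every call, while a stack buffer is just a frame store — can be shown in miniature. A hedged sketch only: the function names are invented, the sizes are arbitrary, and this demonstrates the two code shapes rather than benchmarking them.

```c
#include <assert.h>
#include <string.h>

#define ZBUF_SZ 64

/* Pattern 1: static buffer plus a first-call initialization check.
 * The guard branch is evaluated on every call, and the buffer lives in
 * global storage. */
static const char *
zeros_via_static(void)
{
    static char buf[ZBUF_SZ];
    static int  ready = 0;

    if (!ready)                         /* checked on every call */
    {
        memset(buf, 0, sizeof(buf));    /* redundant: statics start zeroed */
        ready = 1;
    }
    return buf;
}

/* Pattern 2: plain stack buffer; initialization is an unconditional
 * store into the current frame, with no guard. */
static void
zeros_via_stack(char *out, size_t n)
{
    char buf[ZBUF_SZ] = {0};

    memcpy(out, buf, n < ZBUF_SZ ? n : ZBUF_SZ);
}
```

Both produce identical bytes; the difference is purely in the per-call work, which is the point being made about `iov` and `zbuf_sz` above.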
{
"msg_contents": "On Thu, Feb 16, 2023 at 11:00:20AM -0800, Andres Freund wrote:\n> I don't really understand this bit?\n\nAs of this message, I saw this quote:\nhttps://www.postgresql.org/message-id/fCALj2ACXEBwY_bM3kmZEkYpcXsM+yGitpYHi4FdT6MSk6YRtKTQ@mail.gmail.com\n\"However, it increases the number of iovec initialization with zerobuf\nfor the cases when pg_pwrite_zeros is called for sizes far greater\nthan BLCKSZ (for instance, WAL file initialization).\"\n\nBut it looks like I misunderstood what this quote meant compared to\nwhat v3 does. It is true that v3 sets iov_len and iov_base more than\nneeded when writing sizes larger than BLCKSZ. Seems like you think\nthat it is not really going to matter much to track which iovecs have\nbeen already initialized during the first loop on\npg_pwritev_with_retry() to keep the code shorter?\n--\nMichael",
"msg_date": "Fri, 17 Feb 2023 16:19:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-17 16:19:46 +0900, Michael Paquier wrote:\n> But it looks like I misunderstood what this quote meant compared to\n> what v3 does. It is true that v3 sets iov_len and iov_base more than\n> needed when writing sizes larger than BLCKSZ.\n\nI don't think it does for writes larger than BLCKSZ, it just does more for\nwrites larger than PG_IKOV_MAX * BLCKSZ. But in those cases CPU time is going\nto be spent elsewhere.\n\n\n> Seems like you think that it is not really going to matter much to track\n> which iovecs have been already initialized during the first loop on\n> pg_pwritev_with_retry() to keep the code shorter?\n\nYes. I'd bet that, in the unlikely case you're going to see any difference at\nall, unconditionally initializing is going to win.\n\nRight now we memset() 8KB, and iterate over 32 IOVs, unconditionally, on every\ncall. Even if we could do some further optimizations of what I did in the\npatch, you can initialize needed IOVs repeatedly a *lot* of times, before it\nshows up...\n\nI'm inclined to go with my version, with the argument order swapped to\nBharath's order.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Feb 2023 09:31:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 09:31:14AM -0800, Andres Freund wrote:\n> On 2023-02-17 16:19:46 +0900, Michael Paquier wrote:\n>> But it looks like I misunderstood what this quote meant compared to\n>> what v3 does. It is true that v3 sets iov_len and iov_base more than\n>> needed when writing sizes larger than BLCKSZ.\n> \n> I don't think it does for writes larger than BLCKSZ, it just does more for\n> writes larger than PG_IKOV_MAX * BLCKSZ. But in those cases CPU time is going\n> to be spent elsewhere.\n\nYep.\n\n>> Seems like you think that it is not really going to matter much to track\n>> which iovecs have been already initialized during the first loop on\n>> pg_pwritev_with_retry() to keep the code shorter?\n> \n> Yes. I'd bet that, in the unlikely case you're going to see any difference at\n> all, unconditionally initializing is going to win.\n> \n> Right now we memset() 8KB, and iterate over 32 IOVs, unconditionally, on every\n> call. Even if we could do some further optimizations of what I did in the\n> patch, you can initialize needed IOVs repeatedly a *lot* of times, before it\n> shows up...\n> \n> I'm inclined to go with my version, with the argument order swapped to\n> Bharath's order.\n\nOkay. That's fine by me.\n--\nMichael",
"msg_date": "Mon, 20 Feb 2023 14:33:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 11:03 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > I'm inclined to go with my version, with the argument order swapped to\n> > Bharath's order.\n>\n> Okay. That's fine by me.\n\nI ran some tests on my dev system [1] and I don't see much difference\nbetween v3 and v4. So, +1 for v3 patch (+ argument order swap) from\nAndres to keep the code simple and elegant.\n\n[1]\nHEAD: 16MB (12.231 ms), 8190 Bytes (0.199 ms), 8192 Bytes (0.176 ms),\n1GB (603.668 ms), 10GB (21184.936 ms (00:21.185))\nv3 patch: 16MB (12.632 ms), 8190 Bytes (0.183 ms), 8192 Bytes (0.166\nms), 1GB (610.428 ms), 10GB (22647.308 ms (00:22.647))\nv4 patch: 16MB (12.044 ms), 8190 Bytes (0.167 ms), 8192 Bytes (0.139\nms), 1GB (603.848 ms), 10GB (21225.331 ms (00:21.225))\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 20 Feb 2023 13:54:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 01:54:00PM +0530, Bharath Rupireddy wrote:\n> I ran some tests on my dev system [1] and I don't see much difference\n> between v3 and v4. So, +1 for v3 patch (+ argument order swap) from\n> Andres to keep the code simple and elegant.\n\nThis thread has stalled for a couple of weeks, so I have gone back to\nit. Testing on a tmpfs I am not seeing a difference if performance\nfor any of the approaches discussed. At the end, as I am the one\nbehind the introduction of pg_pwrite_zeros(), I have applied v3 after \nswitches the size and offset parameters to be the same way as in v4.\n--\nMichael",
"msg_date": "Mon, 6 Mar 2023 13:29:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-06 13:29:50 +0900, Michael Paquier wrote:\n> On Mon, Feb 20, 2023 at 01:54:00PM +0530, Bharath Rupireddy wrote:\n> > I ran some tests on my dev system [1] and I don't see much difference\n> > between v3 and v4. So, +1 for v3 patch (+ argument order swap) from\n> > Andres to keep the code simple and elegant.\n>\n> This thread has stalled for a couple of weeks, so I have gone back to\n> it. Testing on a tmpfs I am not seeing a difference if performance\n> for any of the approaches discussed. At the end, as I am the one\n> behind the introduction of pg_pwrite_zeros(), I have applied v3 after\n> switches the size and offset parameters to be the same way as in v4.\n\nThanks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 Mar 2023 14:29:50 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 5:30 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Feb 20, 2023 at 01:54:00PM +0530, Bharath Rupireddy wrote:\n> > I ran some tests on my dev system [1] and I don't see much difference\n> > between v3 and v4. So, +1 for v3 patch (+ argument order swap) from\n> > Andres to keep the code simple and elegant.\n>\n> This thread has stalled for a couple of weeks, so I have gone back to\n> it. Testing on a tmpfs I am not seeing a difference if performance\n> for any of the approaches discussed. At the end, as I am the one\n> behind the introduction of pg_pwrite_zeros(), I have applied v3 after\n> switches the size and offset parameters to be the same way as in v4.\n\nApparently ye olde GCC 4.7 on \"lapwing\" doesn't like the way you\ninitialised that struct. I guess it wants {{0}} instead of {0}.\nApparently old GCC was wrong about that warning[1], but that system\ndoesn't have the back-patched fixes? Not sure.\n\n[1] https://stackoverflow.com/questions/63355760/how-standard-is-the-0-initializer-in-c89\n\n\n",
"msg_date": "Tue, 7 Mar 2023 15:42:03 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
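The warning described above can be reproduced in isolation: for an array whose elements are structs, `= {0}` is a valid universal zero initializer in C, but GCC releases around 4.7 raise -Wmissing-braces for it (fatal under -Werror), whereas `= {{0}}` braces the first element explicitly and keeps old compilers quiet. A minimal illustration — both spellings zero every element:

```c
#include <assert.h>
#include <stddef.h>
#include <sys/uio.h>

/* Valid C: {0} zero-initializes the whole array, but GCC 4.7-era
 * -Wmissing-braces complains because the first element is itself a
 * struct and the 0 has no braces of its own. */
static struct iovec iov_plain[4] = {0};

/* The fully braced spelling, which old compilers accept silently. */
static struct iovec iov_braced[4] = {{0}};
```

Elements not covered by the initializer list are zero-initialized in both cases, which is why the warning is cosmetic rather than a behavioral difference.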
{
"msg_contents": "On Tue, Mar 7, 2023 at 3:42 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Apparently ye olde GCC 4.7 on \"lapwing\" doesn't like the way you\n> initialised that struct. I guess it wants {{0}} instead of {0}.\n> Apparently old GCC was wrong about that warning[1], but that system\n> doesn't have the back-patched fixes? Not sure.\n\nOh, you already pushed a fix. But now I'm wondering if it's useful to\nhave old buggy compilers set to run with -Werror.\n\n\n",
"msg_date": "Tue, 7 Mar 2023 15:44:46 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Tue, Mar 07, 2023 at 03:44:46PM +1300, Thomas Munro wrote:\n> On Tue, Mar 7, 2023 at 3:42 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Apparently ye olde GCC 4.7 on \"lapwing\" doesn't like the way you\n>> initialised that struct. I guess it wants {{0}} instead of {0}.\n>> Apparently old GCC was wrong about that warning[1], but that system\n>> doesn't have the back-patched fixes? Not sure.\n\n6392f2a was one such case.\n\n> Oh, you already pushed a fix. But now I'm wondering if it's useful to\n> have old buggy compilers set to run with -Werror.\n\nYes, as far as I can see when investigating the issue, this is an old\nbug of gcc when detecting where the initialization needs to be\napplied. And at the same time the fix is deadly simple, so the\ncurrent statu-quo does not sound that bad to me. Note that lapwing is\none of the only animals testing 32b builds, and it has saved from\nquite few bugs over the years.\n--\nMichael",
"msg_date": "Tue, 7 Mar 2023 13:32:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 02:29:50PM -0800, Andres Freund wrote:\n> Thanks.\n\nSure, no problem. If there is anything else needed for this thread,\nfeel free to ping me here.\n--\nMichael",
"msg_date": "Tue, 7 Mar 2023 14:00:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Tue, Mar 7, 2023 at 5:32 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Mar 07, 2023 at 03:44:46PM +1300, Thomas Munro wrote:\n> > Oh, you already pushed a fix. But now I'm wondering if it's useful to\n> > have old buggy compilers set to run with -Werror.\n>\n> Yes, as far as I can see when investigating the issue, this is an old\n> bug of gcc when detecting where the initialization needs to be\n> applied. And at the same time the fix is deadly simple, so the\n> current statu-quo does not sound that bad to me. Note that lapwing is\n> one of the only animals testing 32b builds, and it has saved from\n> quite few bugs over the years.\n\nYeah, but I'm just wondering, why not run a current release on it[1]?\nDebian is one of the few distributions still supporting 32 bit\nkernels, and it's good to test rare things, but AFAIK the primary\nreason we finish up with EOL'd OSes in the 'farm is because they have\nbeen forgotten (the secondary reason is because they couldn't be\nupgraded because the OS dropped the [micro]architecture). Unlike\nvintage SPARC, actual users might plausibly be running a current\nrelease on a 32 bit Intel system, I guess (maybe on a Quark\nmicrocontroller?)?\n\nBTW CI also tests 32 bit with -m32 on Debian, but with a 64 bit\nkernel, which probably doesn't change much at the level we care about,\nso maybe this doesn't matter much... just sharing an observation that\nwe're wasting time thinking about an OS release that gave up the ghost\nin 2016, because it is running with -Werror. *shrug*\n\n[1] https://wiki.debian.org/DebianReleases\n\n\n",
"msg_date": "Tue, 7 Mar 2023 19:14:51 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Tue, Mar 07, 2023 at 07:14:51PM +1300, Thomas Munro wrote:\n> On Tue, Mar 7, 2023 at 5:32 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Tue, Mar 07, 2023 at 03:44:46PM +1300, Thomas Munro wrote:\n> > > Oh, you already pushed a fix. But now I'm wondering if it's useful to\n> > > have old buggy compilers set to run with -Werror.\n> >\n> > Yes, as far as I can see when investigating the issue, this is an old\n> > bug of gcc when detecting where the initialization needs to be\n> > applied. And at the same time the fix is deadly simple, so the\n> > current statu-quo does not sound that bad to me. Note that lapwing is\n> > one of the only animals testing 32b builds, and it has saved from\n> > quite few bugs over the years.\n>\n> Yeah, but I'm just wondering, why not run a current release on it[1]?\n> Debian is one of the few distributions still supporting 32 bit\n> kernels, and it's good to test rare things, but AFAIK the primary\n> reason we finish up with EOL'd OSes in the 'farm is because they have\n> been forgotten (the secondary reason is because they couldn't be\n> upgraded because the OS dropped the [micro]architecture).\n\nI registered lapwing as a 32b Debian 7 so I thought it would be expected to\nkeep it as-is rather than upgrading to all newer major Debian versions,\nespecially since there were newer debian animal registered (no 32b though\nAFAICS). I'm not opposed to upgrading it but I think there's still value in\nhaving somewhat old packages versions being tested, especially since there\nisn't much 32b coverage of those. I would be happy to register a newer 32b\nversion, or even sid, if needed but the -m32 part on the CI makes me think\nthere isn't much value doing that now.\n\nNow about the -Werror:\n\n> BTW CI also tests 32 bit with -m32 on Debian, but with a 64 bit\n> kernel, which probably doesn't change much at the level we care about,\n> so maybe this doesn't matter much... 
just sharing an observation that\n> we're wasting time thinking about an OS release that gave up the ghost\n> in 2016, because it is running with -Werror. *shrug*\n\nI think this is the first time that a problem raised by -Werror on that old\nanimal is actually a false positive, while there were many times it reported\nuseful stuff. Now this has been up for years before we got better CI tooling,\nespecially with -m32 support, so there might not be any value to have it\nanymore. As I mentioned at [1] I don't mind removing it or just work on\nupgrading any dependency (or removing known buggy compiler flags) to keep it\nwithout being annoying. In any case I'm usually quite fast at reacting to any\nproblem/complaint on that animal, so you don't have to worry about the\nbuildfarm being red too long if it came to that.\n\n[1] https://www.postgresql.org/message-id/20220921155025.wdixzbrt2uzbi6vz%40jrouhaud\n\n\n",
"msg_date": "Tue, 7 Mar 2023 14:47:05 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On Tue, Mar 7, 2023 at 7:47 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> I registered lapwing as a 32b Debian 7 so I thought it would be expected to\n> keep it as-is rather than upgrading to all newer major Debian versions,\n> especially since there were newer debian animal registered (no 32b though\n> AFAICS).\n\nAnimals do get upgraded: see the \"w. e. f.\" (\"with effect from\") line\nin https://buildfarm.postgresql.org/cgi-bin/show_members.pl which\ncomes from people running something like ./update_personality.pl\n--os-version \"11\" so that it shows up on the website.\n\n> I'm not opposed to upgrading it but I think there's still value in\n> having somewhat old packages versions being tested, especially since there\n> isn't much 32b coverage of those. I would be happy to register a newer 32b\n> version, or even sid, if needed but the -m32 part on the CI makes me think\n> there isn't much value doing that now.\n\nTotally up to you as an animal zoo keeper but in my humble opinion the\ninteresting range of Debian releases currently is 11-13, or maybe 10\nif you really want to test the LTS/old-stable release (and CI is\ntesting 11).\n\n> I think this is the first time that a problem raised by -Werror on that old\n> animal is actually a false positive, while there were many times it reported\n> useful stuff. Now this has been up for years before we got better CI tooling,\n> especially with -m32 support, so there might not be any value to have it\n> anymore. As I mentioned at [1] I don't mind removing it or just work on\n> upgrading any dependency (or removing known buggy compiler flags) to keep it\n> without being annoying. In any case I'm usually quite fast at reacting to any\n> problem/complaint on that animal, so you don't have to worry about the\n> buildfarm being red too long if it came to that.\n\nYeah, it's given us lots of useful data, thanks. 
Personally I would\nupgrade it so it keeps telling us useful things but I feel like I've\nsaid enough about that so I'll shut up now :-) Re: being red too\nlong... yeah that reminds me, I really need to fix seawasp ASAP...\n\n\n",
"msg_date": "Tue, 7 Mar 2023 20:33:11 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "On 2023-Mar-07, Julien Rouhaud wrote:\n\n> I registered lapwing as a 32b Debian 7 so I thought it would be expected to\n> keep it as-is rather than upgrading to all newer major Debian versions,\n> especially since there were newer debian animal registered (no 32b though\n> AFAICS). I'm not opposed to upgrading it but I think there's still value in\n> having somewhat old packages versions being tested, especially since there\n> isn't much 32b coverage of those. I would be happy to register a newer 32b\n> version, or even sid, if needed but the -m32 part on the CI makes me think\n> there isn't much value doing that now.\n\nI think a pure 32bit animal running contemporary Debian would be better\nthan just ditching the animal completely, as would appear to be the\nalternative, precisely because we have no other 32bit machine running\nx86 Linux.\n\nMaybe you can have *two* animals on the same machine: one running the\nold Debian without -Werror, and the one with new Debian and that flag\nkept.\n\nI think CI is not a replacement for the buildfarm. It helps catche some\nproblems earlier, but we shouldn't think that we no longer need some\nbuildfarm animals because CI runs those configs.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 7 Mar 2023 11:09:20 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-07 15:44:46 +1300, Thomas Munro wrote:\n> On Tue, Mar 7, 2023 at 3:42 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Apparently ye olde GCC 4.7 on \"lapwing\" doesn't like the way you\n> > initialised that struct. I guess it wants {{0}} instead of {0}.\n> > Apparently old GCC was wrong about that warning[1], but that system\n> > doesn't have the back-patched fixes? Not sure.\n> \n> Oh, you already pushed a fix. But now I'm wondering if it's useful to\n> have old buggy compilers set to run with -Werror.\n\nI think it's actively harmful to do so. Avoiding warnings on a > 10 year old\ncompiler a) is a waste of time b) unnecessarily requires making our code\nuglier.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Mar 2023 17:00:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Use pg_pwritev_with_retry() instead of write() in\n dir_open_for_write() to avoid partial writes?"
}
] |
[
{
"msg_contents": "Hey all !\n\nI'm on a quest to help the planner (on pg14) use the best of several\npartial, expressional indices we have on some large tables (few TBs in\nsize, billions of records).\n\nAs we know, stats for expressions in partial indices aren't gathered by\ndefault - so I'm tinkering with expressional extended stats to cover for\nthose.\n\nI've tackled two interesting points there:\n1. Seems like expressional stats involving the equality operator are\nskipped or mismatched (fiddle\n<https://www.db-fiddle.com/f/4jyoMCicNSZpjMt4jFYoz5/5379>)\nLet's take the following naive example:\n\n\n\n\n*create table t1 (x integer[]);insert into t1 select array[1]::integer[]\nfrom generate_series(1, 100000, 1);create statistics s1 on (x[1] = 1) from\nt1;analyze t1;*\n*explain analyze select * from t1 where x[1] = 1;*\n*> Seq Scan on t1 (cost=0.00..1986.00 rows=500 width=29) (actual\ntime=0.009..36.035 rows=100000 loops=1)*\n\nNow, of course one can just create the stat on x[1] directly in this case,\nbut I have a more complex use case where an equality operator is\nbeneficial;\n\nAfter debugging it a bit - it seems that the root cause here is that we go\nthrough a flow where we only ever\nconsider statistics for the lhs of the expression, and not the entire\nexpression:\nclause_selectivity_ext -> restriction_selectivity -> eqsel_internal ->\nvar_eq_const, where the vardata holds info about x[1].\n\nThe case expression goes through a slightly different flow\n(clause_selectivity_ext -> boolvarsel -> ...) 
and is matched on the entire\nexpression.\n\nI wonder if it would make sense to first check for if there's a valid stat\ndata on the expression in its entirety before\njumping to the restriction selectivity on the variable itself, as there's\nnothing preventing users from defining such an extended statistic.\n\nThe below naive implementation works, for instance (clearly, I'm not versed\nin the source code, this is for demonstration purposes only):\n\n---\n src/backend/optimizer/path/clausesel.c | 20 +++++++++++++++-----\n 1 file changed, 15 insertions(+), 5 deletions(-)\n\ndiff --git a/src/backend/optimizer/path/clausesel.c\nb/src/backend/optimizer/path/clausesel.c\nindex 06f836308d..5e03d21dc0 100644\n--- a/src/backend/optimizer/path/clausesel.c\n+++ b/src/backend/optimizer/path/clausesel.c\n@@ -871,11 +871,21 @@ clause_selectivity_ext(PlannerInfo *root,\n }\n else\n {\n- /* Estimate selectivity for a restriction clause. */\n- s1 = restriction_selectivity(root, opno,\n- opclause->args,\n- opclause->inputcollid,\n- varRelid);\n+ VariableStatData vardata;\n+\n+ examine_variable(root, clause, varRelid, &vardata);\n+ if (HeapTupleIsValid(vardata.statsTuple))\n+ {\n+ /* Try estimating selectivity based on the entire\nexpression first */\n+ s1 = boolvarsel(root, clause, varRelid);\n+ } else {\n+ /* There's no expressional statistic on the restriction\nclause - fallback to estimating restriction selectivity for the given node\n*/\n+ s1 = restriction_selectivity(root, opno,\n+ opclause->args,\n+ opclause->inputcollid,\n+ varRelid);\n+ }\n+ ReleaseVariableStats(vardata);\n }\n\n /*\n--\n\n2. Less important, just a minor note - feel free to ignore - although the\neq. operator above seems to be skipped when matching the ext. 
stats, I can\nwork around this by using a CASE expression (fiddle\n<https://www.db-fiddle.com/f/wJZNH1rNwJSo3D5aByQiWX/1>);\nBuilding on the above example, we can:\n*create statistics s2 on (case x[1] when 1 then true else false end) from\nt1;*\n*explain analyze select * from t1 where (case x[1] when 1 then true else\nfalse end*\n*> Seq Scan on t1 (cost=0.00..1986.00 rows=100000 width=25) (actual\ntime=0.011..33.721 rows=100000 loops=1)*\n\nWhat's a bit problematic here, though, is that if we mix other dependent\ncolumns to the extended stat, and specifically if we create an mcv,\nqueries involving the CASE expression throw with `error: unknown clause\ntype 130`, where clause type == T_CaseExpr.\n\nThe second point for me would be that I've found it a bit non intuitive\nthat creating an extended statistic can fail queries at query time; it\nmakes sense that the mcv wouldn't work for case expressions, but it\nmight've been a bit clearer to:\n\na. Fail this at statistic creation time, potentially, or\nb. Convert the type numeric in the above error to its text representation,\nif we can extract it out at runtime somehow -\nI couldn't find a mapping of clause type numerics to their names, and as\nthe node tags are generated at compile time, it could be build-dependent\nand a bit hard to track down if one doesn't control the build flags\n\n\nThanks a ton for your help - appreciate your time,\nDanny\n\nHey all !I'm on a quest to help the planner (on pg14) use the best of several partial, expressional indices we have on some large tables (few TBs in size, billions of records).As we know, stats for expressions in partial indices aren't gathered by default - so I'm tinkering with expressional extended stats to cover for those.I've tackled two interesting points there:1. 
Seems like expressional stats involving the equality operator are skipped or mismatched (fiddle)Let's take the following naive example:create table t1 (x integer[]);insert into t1 select array[1]::integer[] from generate_series(1, 100000, 1);create statistics s1 on (x[1] = 1) from t1;analyze t1;explain analyze select * from t1 where x[1] = 1;> Seq Scan on t1 (cost=0.00..1986.00 rows=500 width=29) (actual time=0.009..36.035 rows=100000 loops=1)Now, of course one can just create the stat on x[1] directly in this case, but I have a more complex use case where an equality operator is beneficial; After debugging it a bit - it seems that the root cause here is that we go through a flow where we only everconsider statistics for the lhs of the expression, and not the entire expression:clause_selectivity_ext -> restriction_selectivity -> eqsel_internal -> var_eq_const, where the vardata holds info about x[1].The case expression goes through a slightly different flow (clause_selectivity_ext -> boolvarsel -> ...) and is matched on the entire expression.I wonder if it would make sense to first check for if there's a valid stat data on the expression in its entirety beforejumping to the restriction selectivity on the variable itself, as there's nothing preventing users from defining such an extended statistic.The below naive implementation works, for instance (clearly, I'm not versed in the source code, this is for demonstration purposes only):--- src/backend/optimizer/path/clausesel.c | 20 +++++++++++++++----- 1 file changed, 15 insertions(+), 5 deletions(-)diff --git a/src/backend/optimizer/path/clausesel.c b/src/backend/optimizer/path/clausesel.cindex 06f836308d..5e03d21dc0 100644--- a/src/backend/optimizer/path/clausesel.c+++ b/src/backend/optimizer/path/clausesel.c@@ -871,11 +871,21 @@ clause_selectivity_ext(PlannerInfo *root, } else {- /* Estimate selectivity for a restriction clause. 
*/- s1 = restriction_selectivity(root, opno,- opclause->args,- opclause->inputcollid,- varRelid);+ VariableStatData vardata;++ examine_variable(root, clause, varRelid, &vardata);+ if (HeapTupleIsValid(vardata.statsTuple))+ {+ /* Try estimating selectivity based on the entire expression first */+ s1 = boolvarsel(root, clause, varRelid);+ } else {+ /* There's no expressional statistic on the restriction clause - fallback to estimating restriction selectivity for the given node */+ s1 = restriction_selectivity(root, opno,+ opclause->args,+ opclause->inputcollid,+ varRelid);+ }+ ReleaseVariableStats(vardata); } /*--2. Less important, just a minor note - feel free to ignore - although the eq. operator above seems to be skipped when matching the ext. stats, I can work around this by using a CASE expression (fiddle);Building on the above example, we can:create statistics s2 on (case x[1] when 1 then true else false end) from t1;explain analyze select * from t1 where (case x[1] when 1 then true else false end> Seq Scan on t1 (cost=0.00..1986.00 rows=100000 width=25) (actual time=0.011..33.721 rows=100000 loops=1)What's a bit problematic here, though, is that if we mix other dependent columns to the extended stat, and specifically if we create an mcv, queries involving the CASE expression throw with `error: unknown clause type 130`, where clause type == T_CaseExpr.The second point for me would be that I've found it a bit non intuitive that creating an extended statistic can fail queries at query time; it makes sense that the mcv wouldn't work for case expressions, but it might've been a bit clearer to:a. Fail this at statistic creation time, potentially, or b. 
Convert the type numeric in the above error to its text representation, if we can extract it out at runtime somehow - I couldn't find a mapping of clause type numerics to their names, and as the node tags are generated at compile time, it could be build-dependent and a bit hard to track down if one doesn't control the build flagsThanks a ton for your help - appreciate your time,Danny",
"msg_date": "Fri, 5 Aug 2022 16:43:36 +0300",
"msg_from": "Danny Shemesh <dany74q@gmail.com>",
"msg_from_op": true,
"msg_subject": "Expr. extended stats are skipped with equality operator"
},
{
"msg_contents": "On Fri, Aug 05, 2022 at 04:43:36PM +0300, Danny Shemesh wrote:\n> 2. Less important, just a minor note - feel free to ignore - although the\n> eq. operator above seems to be skipped when matching the ext. stats, I can\n> work around this by using a CASE expression (fiddle\n> <https://www.db-fiddle.com/f/wJZNH1rNwJSo3D5aByQiWX/1>);\n> Building on the above example, we can:\n> *create statistics s2 on (case x[1] when 1 then true else false end) from\n> t1;*\n> *explain analyze select * from t1 where (case x[1] when 1 then true else\n> false end*\n> *> Seq Scan on t1 (cost=0.00..1986.00 rows=100000 width=25) (actual\n> time=0.011..33.721 rows=100000 loops=1)*\n> \n> What's a bit problematic here, though, is that if we mix other dependent\n> columns to the extended stat, and specifically if we create an mcv,\n> queries involving the CASE expression throw with `error: unknown clause\n> type 130`, where clause type == T_CaseExpr.\n\n> The second point for me would be that I've found it a bit non intuitive\n> that creating an extended statistic can fail queries at query time; it\n\nA reproducer for this:\n\nCREATE TABLE t1(x int[], y float);\nINSERT INTO t1 SELECT array[1], a FROM generate_series(1,99)a;\nCREATE STATISTICS s2 ON (CASE x[1] WHEN 1 THEN true ELSE false END), y FROM t1;\nANALYZE t1; \n\nexplain analyze SELECT * FROM t1 WHERE CASE x[1] WHEN 1 THEN true ELSE false END AND y=1;\nERROR: unknown clause type: 134\n\\errverbose \nERROR: XX000: unknown clause type: 134\nLOCATION: mcv_get_match_bitmap, mcv.c:1950\n\nI'm not sure what Tomas will say, but XX000 errors from elog() are internal and\nnot intended to be user-facing, which is why there's no attempt to output a\nfriendly clause name. It might be that this wasn't reachable until statistics\non expressions were added in v14.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 5 Aug 2022 10:16:45 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Expr. extended stats are skipped with equality operator"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> A reproducer for this:\n\n> CREATE TABLE t1(x int[], y float);\n> INSERT INTO t1 SELECT array[1], a FROM generate_series(1,99)a;\n> CREATE STATISTICS s2 ON (CASE x[1] WHEN 1 THEN true ELSE false END), y FROM t1;\n> ANALYZE t1; \n\n> explain analyze SELECT * FROM t1 WHERE CASE x[1] WHEN 1 THEN true ELSE false END AND y=1;\n> ERROR: unknown clause type: 134\n\nSigh ... this is just horrid. I think I see what to do about it though,\nand since Tomas seems to have been AWOL for awhile now, I don't think\nwe'll get a fix by Monday if we wait for him. I'll take a shot at\nfixing it; it seems unlikely that I can make it worse.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Aug 2022 14:08:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Expr. extended stats are skipped with equality operator"
}
] |
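The flow Danny proposes in the thread above — prefer an extended statistic that covers the expression in its entirety, and only fall back to per-variable restriction selectivity when no such statistic exists — can be sketched roughly as follows. This is an illustrative model only; the function and the stats dictionary are hypothetical, not PostgreSQL's actual API.

```python
# Illustrative model (not PostgreSQL source): prefer a statistic that covers
# the whole expression; otherwise fall back to a generic per-variable
# estimate, which is what the restriction_selectivity/eqsel path effectively
# does when it skips the expression statistic.

DEFAULT_EQ_SEL = 0.005  # generic fallback used when no statistic applies

def clause_selectivity(expr_stats, clause):
    """Return a selectivity estimate in [0, 1] for a boolean clause."""
    if clause in expr_stats:
        # An extended statistic exists on the expression in its entirety:
        # use the observed fraction of rows for which the clause is true.
        return expr_stats[clause]
    # No whole-expression statistic: fall back to the generic estimate,
    # mirroring the current behavior Danny describes.
    return DEFAULT_EQ_SEL

# The repro above: every one of 100,000 rows satisfies x[1] = 1, and the
# extended statistic s1 recorded the expression as true 100% of the time.
stats = {"x[1] = 1": 1.0}
print(clause_selectivity(stats, "x[1] = 1"))  # 1.0, from the expression stat
print(clause_selectivity(stats, "x[1] = 2"))  # 0.005, no matching statistic
```

The point of the sketch is only the ordering of the two checks: consulting the whole-expression statistic first is what makes the `x[1] = 1` estimate come out at 100,000 rows instead of the 500-row default.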
[
{
"msg_contents": "BRIN: mask BRIN_EVACUATE_PAGE for WAL consistency checking\n\nThat bit is unlogged and therefore it's wrong to consider it in WAL page\ncomparison.\n\nAdd a test that tickles the case, as branch testing technology allows.\n\nThis has been a problem ever since wal consistency checking was\nintroduced (commit a507b86900f6 for pg10), so backpatch to all supported\nbranches.\n\nAuthor: 王海洋 (Haiyang Wang) <wanghaiyang.001@bytedance.com>\nReviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDiscussion: https://postgr.es/m/CACciXAD2UvLMOhc4jX9VvOKt7DtYLr3OYRBhvOZ-jRxtzc_7Jg@mail.gmail.com\nDiscussion: https://postgr.es/m/CACciXADOfErX9Bx0nzE_SkdfXr6Bbpo5R=v_B6MUTEYW4ya+cg@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/e44dae07f931383151e2eb34ed9b4cbf4bf14482\n\nModified Files\n--------------\nsrc/backend/access/brin/brin_pageops.c | 7 ++-\nsrc/backend/access/brin/brin_xlog.c | 6 +++\nsrc/test/modules/brin/Makefile | 2 +-\nsrc/test/modules/brin/t/02_wal_consistency.pl | 75 +++++++++++++++++++++++++++\n4 files changed, 88 insertions(+), 2 deletions(-)",
"msg_date": "Fri, 05 Aug 2022 16:04:43 +0000",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "pgsql: BRIN: mask BRIN_EVACUATE_PAGE for WAL consistency checking"
},
{
"msg_contents": "On 2022-Aug-05, Alvaro Herrera wrote:\n\n> Add a test that tickles the case, as branch testing technology allows.\n\nOne point here is that this confirms that the backpatched renaming alias\nfor PostgreSQL::Test::Cluster is working well.\n\nAnother is that, as far as I know, this is the going to be the only case\nof any code being run under wal_consistency_checking=[not off]\nregularly. 027_stream_regress.pl is equipped to do so, but as far as I\nknow we have no buildfarm animal with PG_EXTRA_TESTS set it so. I did\nconsider to make this new test conditional on having that flag be on,\nbut I disregarded it because of that.\n\nA third point is that in branches 15+ I made it use pg_walinspect to\nensure that the desired WAL record is being emitted.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 5 Aug 2022 18:10:43 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: BRIN: mask BRIN_EVACUATE_PAGE for WAL consistency checking"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> BRIN: mask BRIN_EVACUATE_PAGE for WAL consistency checking\n\nsnapper doesn't like this too much, because\n\nerror running SQL: 'psql:<stdin>:17: ERROR: time zone \"america/punta_arenas\" not recognized\nCONTEXT: PL/pgSQL function inline_code_block line 3 during statement block local variable initialization'\n\nIs there a particular reason why you used that zone, rather than say UTC?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Aug 2022 15:01:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: BRIN: mask BRIN_EVACUATE_PAGE for WAL consistency checking"
},
{
"msg_contents": "On Sat, Aug 6, 2022, at 9:01 PM, Tom Lane wrote:\n> snapper doesn't like this too much, because\n> \n> error running SQL: 'psql:<stdin>:17: ERROR: time zone \"america/punta_arenas\" not recognized\n> CONTEXT: PL/pgSQL function inline_code_block line 3 during statement block local variable initialization'\n> \n> Is there a particular reason why you used that zone, rather than say UTC?\n\nNone very good — I just wanted it to be not Moscow, which it was in the OP. I'll change it — to UTC, I suppose.\nOn Sat, Aug 6, 2022, at 9:01 PM, Tom Lane wrote:snapper doesn't like this too much, becauseerror running SQL: 'psql:<stdin>:17: ERROR: time zone \"america/punta_arenas\" not recognizedCONTEXT: PL/pgSQL function inline_code_block line 3 during statement block local variable initialization'Is there a particular reason why you used that zone, rather than say UTC?None very good — I just wanted it to be not Moscow, which it was in the OP. I'll change it — to UTC, I suppose.",
"msg_date": "Sat, 06 Aug 2022 22:03:55 +0200",
"msg_from": "=?UTF-8?Q?=C3=81lvaro_Herrera?= <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: BRIN: mask BRIN_EVACUATE_PAGE for WAL consistency checking"
},
{
"msg_contents": "On 2022-Aug-06, Álvaro Herrera wrote:\n\n> On Sat, Aug 6, 2022, at 9:01 PM, Tom Lane wrote:\n> > snapper doesn't like this too much, because\n> > \n> > error running SQL: 'psql:<stdin>:17: ERROR: time zone \"america/punta_arenas\" not recognized\n> > CONTEXT: PL/pgSQL function inline_code_block line 3 during statement block local variable initialization'\n> > \n> > Is there a particular reason why you used that zone, rather than say UTC?\n> \n> None very good — I just wanted it to be not Moscow, which it was in\n> the OP. I'll change it — to UTC, I suppose.\n\nDone.\n\n-- \nÁlvaro Herrera\n\n\n",
"msg_date": "Sun, 7 Aug 2022 10:21:23 +0200",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: BRIN: mask BRIN_EVACUATE_PAGE for WAL consistency checking"
},
{
"msg_contents": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Aug-06, Álvaro Herrera wrote:\n>> None very good — I just wanted it to be not Moscow, which it was in\n>> the OP. I'll change it — to UTC, I suppose.\n\n> Done.\n\nThanks. I wondered why this was a problem, when we have various\nother dependencies on specific zone names in the tests. The\nanswer seems to be that America/Punta_Arenas is a fairly new\nzone name: it was introduced in tzdata 2017a [1]. So snapper's\ntzdata must be older than that. I see it is using the system\ntzdata not our own:\n\n '--with-system-tzdata=/usr/share/zoneinfo',\n\nYou would've been fine with America/Santiago, likely :-(\n\n\t\t\tregards, tom lane\n\n[1] http://mm.icann.org/pipermail/tz-announce/2017-February/000045.html\n\n\n",
"msg_date": "Sun, 07 Aug 2022 09:44:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: BRIN: mask BRIN_EVACUATE_PAGE for WAL consistency checking"
}
] |
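The masking fix in this thread can be illustrated with a small sketch: because BRIN_EVACUATE_PAGE is set without being WAL-logged, a primary page and its WAL full-page image may legitimately differ in that one bit, so the consistency check must clear it on both sides before comparing. The helper below is a simplified model, not the actual brin_mask() code; the flag value assumes the `(1 << 0)` definition used in the BRIN headers.

```python
# Simplified model of the WAL consistency check fix (not the real brin_mask
# code). BRIN_EVACUATE_PAGE is an unlogged bit in the BRIN page flags, so it
# is cleared on both images before comparison; otherwise a page whose bit was
# set only on the primary would fail the comparison spuriously.

BRIN_EVACUATE_PAGE = 1 << 0  # assumed flag value, per PostgreSQL's brin_page.h

def flags_consistent(primary_flags: int, wal_flags: int) -> bool:
    """Compare page flag words, ignoring the unlogged evacuate bit."""
    mask = ~BRIN_EVACUATE_PAGE
    return (primary_flags & mask) == (wal_flags & mask)

# Evacuate bit set on the primary but absent from the WAL image: consistent.
print(flags_consistent(0b11, 0b10))  # True
# A genuinely logged bit differs: still reported as inconsistent.
print(flags_consistent(0b11, 0b00))  # False
```

Masking narrows the comparison to the bits WAL replay is actually responsible for reproducing, which is exactly why unlogged hint-style bits must be excluded.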
[
{
"msg_contents": "... at\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=aab05919a685449826db986a921c1d8632d673e0\n\nPlease send corrections and comments by Sunday.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Aug 2022 17:40:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Draft back-branch release notes are up"
}
] |
[
{
"msg_contents": "My bugfix commit 74388a1a (which was pushed back in February) added\nheuristics to VACUUM's reltuples calculation/estimate. This prevented\nVACUUM from distorting our estimate of reltuples over time, across\nsuccessive VACUUM operations run against the same table. The problem\nwas that VACUUM could scan the same single heap page again and again,\nwhile believing it saw a random sample each time. This eventually\nleads to a pg_class.reltuples value that is based on the assumption\nthat every single heap page in the table is just like the heap page\nthat gets \"sampled\" again and again. This was always the last heap\npage (due to implementation details related to the work done by commit\ne8429082), which in practice tend to be particularly poor\nrepresentations of the overall reltuples density of tables.\n\nI have discovered a gap in these heuristics: there are remaining cases\nwhere its percentage threshold doesn't prevent reltuples distortion as\nintended. It can still happen with tables that are small enough that a\ncutoff of 2% of rel_pages is less than a single page, yet still large\nenough that vacuumlazy.c will consider it worth its while to skip some\npages using the visibility map. It will typically skip all but the\nfinal heap page from the relation (same as the first time around).\n\nHere is a test case that shows how this can still happen on HEAD (and\nin Postgres 15):\n\nregression=# create table foo(bar int);insert into foo select i from\ngenerate_series(1, 10000) i;\nCREATE TABLE\nINSERT 0 10000\n\nNow run vacuum verbose against the table several times:\n\nregression=# vacuum verbose foo;\n*** SNIP ***\nregression=# vacuum verbose foo;\n\nThe first vacuum shows \"tuples: 0 removed, 10000 remain...\", which is\ncorrect. 
However, each subsequent vacuum verbose revises the estimate\ndownwards, eventually making pg_class.reltuples significantly\nunderestimate tuple density (same as the first time around).\n\nAttached patch closes the remaining gap. With the patch applied,\nthe second and subsequent vacuum verbose operations from the test case\nwill show that reltuples is still 10000 (it won't ever change). The\npatch just extends an old behavior that was applied when scanned_pages\n== 0 to cases where scanned_pages <= 1 (unless we happened to scan all\nof the relation's pages, of course). It doesn't remove the original\ntest from commit 74388a1a, which still seems like a good idea to me.\n\n-- \nPeter Geoghegan",
"msg_date": "Fri, 5 Aug 2022 17:39:56 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Remaining case where reltuples can become distorted across multiple\n VACUUM operations"
},
{
"msg_contents": "On Fri, Aug 5, 2022 at 5:39 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached patch fixes closes the remaining gap. With the patch applied,\n> the second and subsequent vacuum verbose operations from the test case\n> will show that reltuples is still 10000 (it won't ever change). The\n> patch just extends an old behavior that was applied when scanned_pages\n> == 0 to cases where scanned_pages <= 1 (unless we happened to scan all\n> of the relation's tables, of course).\n\nMy plan is to commit this later in the week, barring objections. Maybe\non Thursday.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 8 Aug 2022 07:51:51 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Remaining case where reltuples can become distorted across\n multiple VACUUM operations"
},
{
"msg_contents": "On Mon, 8 Aug 2022 at 16:52, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Aug 5, 2022 at 5:39 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Attached patch fixes closes the remaining gap. With the patch applied,\n> > the second and subsequent vacuum verbose operations from the test case\n> > will show that reltuples is still 10000 (it won't ever change). The\n> > patch just extends an old behavior that was applied when scanned_pages\n> > == 0 to cases where scanned_pages <= 1 (unless we happened to scan all\n> > of the relation's tables, of course).\n>\n> My plan is to commit this later in the week, barring objections. Maybe\n> on Thursday.\n\nI do not have intimate knowledge of this code, but shouldn't we also\nadd some sefety guarantees like the following in these blocks? Right\nnow, we'll keep underestimating the table size even when we know that\nthe count is incorrect.\n\nif (scanned_tuples > old_rel_tuples)\n return some_weighted_scanned_tuples;\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 8 Aug 2022 17:14:08 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remaining case where reltuples can become distorted across\n multiple VACUUM operations"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 8:14 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I do not have intimate knowledge of this code, but shouldn't we also\n> add some sefety guarantees like the following in these blocks? Right\n> now, we'll keep underestimating the table size even when we know that\n> the count is incorrect.\n>\n> if (scanned_tuples > old_rel_tuples)\n> return some_weighted_scanned_tuples;\n\nNot sure what you mean -- we do something very much like that already.\n\nWe take the existing tuple density, and assume that that hasn't\nchanged for any unscanned pages -- that is used to build a total\nnumber of tuples for the unscanned pages. Then we add the number of\nlive tuples/scanned_tuples that the vacuumlazy.c caller just\nencountered on scanned_pages. That's often where the final reltuples\nvalue comes from.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 8 Aug 2022 08:25:54 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Remaining case where reltuples can become distorted across\n multiple VACUUM operations"
},
{
"msg_contents": "On Mon, 8 Aug 2022 at 17:26, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Aug 8, 2022 at 8:14 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > I do not have intimate knowledge of this code, but shouldn't we also\n> > add some sefety guarantees like the following in these blocks? Right\n> > now, we'll keep underestimating the table size even when we know that\n> > the count is incorrect.\n> >\n> > if (scanned_tuples > old_rel_tuples)\n> > return some_weighted_scanned_tuples;\n>\n> Not sure what you mean -- we do something very much like that already.\n>\n> We take the existing tuple density, and assume that that hasn't\n> changed for any unscanned pages -- that is used to build a total\n> number of tuples for the unscanned pages. Then we add the number of\n> live tuples/scanned_tuples that the vacuumlazy.c caller just\n> encountered on scanned_pages. That's often where the final reltuples\n> value comes from.\n\nIndeed we often apply this, but not always. It is the default case,\nbut never applied in the special cases.\n\nFor example, if currently the measured 2% of the pages contains more\nthan 100% of the previous count of tuples, or with your patch the last\npage contains more than 100% of the previous count of the tuples, that\nnew count is ignored, which seems silly considering that the vacuum\ncount is supposed to be authorative.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 8 Aug 2022 17:33:44 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remaining case where reltuples can become distorted across\n multiple VACUUM operations"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 8:33 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> For example, if currently the measured 2% of the pages contains more\n> than 100% of the previous count of tuples, or with your patch the last\n> page contains more than 100% of the previous count of the tuples, that\n> new count is ignored, which seems silly considering that the vacuum\n> count is supposed to be authorative.\n\nThe 2% thing is conditioned on the new relpages value precisely\nmatching the existing relpages from pg_class -- which makes it very\ntargeted. I don't see why scanned_tuples greatly exceeding the\nexisting reltuples from pg_class is interesting (any more interesting\nthan the other way around).\n\nWe'll always accept scanned_tuples as authoritative when VACUUM\nactually scans all pages, no matter what. Currently it isn't possible\nfor VACUUM to skip pages in a table that is 32 pages or less in size.\nSo even the new \"single page\" thing from the patch cannot matter\nthere.\n\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 8 Aug 2022 08:49:16 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Remaining case where reltuples can become distorted across\n multiple VACUUM operations"
},
{
"msg_contents": "On Mon, 8 Aug 2022 at 17:49, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Aug 8, 2022 at 8:33 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > For example, if currently the measured 2% of the pages contains more\n> > than 100% of the previous count of tuples, or with your patch the last\n> > page contains more than 100% of the previous count of the tuples, that\n> > new count is ignored, which seems silly considering that the vacuum\n> > count is supposed to be authorative.\n>\n> The 2% thing is conditioned on the new relpages value precisely\n> matching the existing relpages from pg_class -- which makes it very\n> targeted. I don't see why scanned_tuples greatly exceeding the\n> existing reltuples from pg_class is interesting (any more interesting\n> than the other way around).\n\nBecause if a subset of the pages of a relation contains more tuples\nthan your current total expected tuples in the table, you should\nupdate your expectations regardless of which blocks or which number of\nblocks you've scanned - the previous stored value is a strictly worse\nestimation than your last measurement.\n\n> We'll always accept scanned_tuples as authoritative when VACUUM\n> actually scans all pages, no matter what. Currently it isn't possible\n> for VACUUM to skip pages in a table that is 32 pages or less in size.\n> So even the new \"single page\" thing from the patch cannot matter\n> there.\n\nA 33-block relation with first 32 1-tuple pages is still enough to\nhave a last page with 250 tuples, which would be ignored in that\nscheme and have a total tuple count of 33 or so. Sure, this is an\nartificial sample, but you can construct similarly wrong vacuum\nsamples: Two classes of tuples that have distinct update regimes, one\nwith 32B-tuples and one with MaxFreeSpaceRequestSize-d tuples. 
When\nyou start running VACUUM against these separate classes of updated\nblocks you'll see that the relation tuple count will also tend to\n1*nblocks, due to the disjoint nature of these updates and the\ntendency to ignore all updates to densely packed blocks.\n\nWith current code, we ignore the high counts of those often-updated\nblocks and expect low density in the relation, precisely because we\nignore areas that are extremely dense and updated in VACUUM cycles\nthat are different from bloated blocks.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 8 Aug 2022 18:17:37 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remaining case where reltuples can become distorted across\n multiple VACUUM operations"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 9:17 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Because if a subset of the pages of a relation contains more tuples\n> than your current total expected tuples in the table, you should\n> update your expectations regardless of which blocks or which number of\n> blocks you've scanned - the previous stored value is a strictly worse\n> estimation than your last measurement.\n\nThe previous stored value could be -1, which represents the idea that\nwe don't know the tuple density yet. So it doesn't necessarily follow\nthat the new estimate is strictly better, even in this exact scenario.\n\n> A 33-block relation with first 32 1-tuple pages is still enough to\n> have a last page with 250 tuples, which would be ignored in that\n> scheme and have a total tuple count of 33 or so.\n\nThe simple fact is that there is only so much we can do with the\nlimited information/context that we have. Heuristics are not usually\nfree of all bias. Often the bias is the whole point -- the goal can be\nto make sure that we have the bias that we know we can live with, and\nnot the opposite bias, which is much worse. Details of which are\nusually very domain specific.\n\nI presented my patch with a very simple test case -- a very clear\nproblem. Can you do the same for this scenario?\n\nI accept that it is possible that we'll keep an old reltuples which is\nprovably less accurate than doing something with the latest\ninformation from vacuumlazy.c. But the conditions under which this can\nhappen are *very* narrow. I am not inclined to do anything about it\nfor that reason.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 8 Aug 2022 09:48:18 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Remaining case where reltuples can become distorted across\n multiple VACUUM operations"
},
{
"msg_contents": "On Mon, 8 Aug 2022 at 18:48, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Aug 8, 2022 at 9:17 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > Because if a subset of the pages of a relation contains more tuples\n> > than your current total expected tuples in the table, you should\n> > update your expectations regardless of which blocks or which number of\n> > blocks you've scanned - the previous stored value is a strictly worse\n> > estimation than your last measurement.\n>\n> The previous stored value could be -1, which represents the idea that\n> we don't know the tuple density yet. So it doesn't necessarily follow\n> that the new estimate is strictly better, even in this exact scenario.\n>\n> > A 33-block relation with first 32 1-tuple pages is still enough to\n> > have a last page with 250 tuples, which would be ignored in that\n> > scheme and have a total tuple count of 33 or so.\n>\n> The simple fact is that there is only so much we can do with the\n> limited information/context that we have. Heuristics are not usually\n> free of all bias. Often the bias is the whole point -- the goal can be\n> to make sure that we have the bias that we know we can live with, and\n> not the opposite bias, which is much worse. Details of which are\n> usually very domain specific.\n>\n> I presented my patch with a very simple test case -- a very clear\n> problem. 
Can you do the same for this scenario?\n\nCREATE TABLE tst (id int primary key generated by default as identity,\npayload text) with (fillfactor = 50); -- fillfactor to make pages fill\nup fast\nINSERT INTO tst (payload) select repeat('a', 5000) from\ngenerate_series(1, 32); -- 32 pages filled with large tuples\nINSERT INTO tst (payload) select repeat('a', 4); -- small tuple at last page\nvacuum (verbose, freeze) tst; -- 33 tuples on 33 pages, with lots of\nspace left on last page\nINSERT INTO tst(payload) select repeat('a', 4) from\ngenerate_series(1,63); -- now, we have 64 tuples on the last page\nvacuum verbose tst; -- with your patch it reports only 33 tuples\ntotal, while the single page that was scanned contains 64 tuples, and\nthe table contains 96 tuples.\n\n> I accept that it is possible that we'll keep an old reltuples which is\n> provably less accurate than doing something with the latest\n> information from vacuumlazy.c. But the conditions under which this can\n> happen are *very* narrow. I am not inclined to do anything about it\n> for that reason.\n\nI think I understand your reasoning, but I don't agree with the\nconclusion. The attached patch 0002 does fix that skew too, at what I\nconsider negligible cost. 0001 is your patch with a new version\nnumber.\n\nI'm fine with your patch as is, but would appreciate it if known\nestimate mistakes would also be fixed.\n\nAn alternative solution could be doing double-vetting, where we ignore\ntuples_scanned if <2% of pages AND <2% of previous estimated tuples\nwas scanned.\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Thu, 11 Aug 2022 10:47:54 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remaining case where reltuples can become distorted across\n multiple VACUUM operations"
},
{
"msg_contents": "On Thu, Aug 11, 2022 at 1:48 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I think I understand your reasoning, but I don't agree with the\n> conclusion. The attached patch 0002 does fix that skew too, at what I\n> consider negligible cost. 0001 is your patch with a new version\n> number.\n\nYour patch added allowSystemTableMods to one of the tests. I guess\nthat this was an oversight?\n\n> I'm fine with your patch as is, but would appreciate it if known\n> estimate mistakes would also be fixed.\n\nWhy do you think that this particular scenario/example deserves\nspecial attention? As I've acknowledged already, it is true that your\nscenario is one in which we provably give a less accurate estimate,\nbased on already-available information. But other than that, I don't\nsee any underlying principle that would be violated by my original\npatch (any kind of principle, held by anybody). reltuples is just an\nestimate.\n\nI was thinking of going your way on this, purely because it didn't\nseem like there'd be much harm in it (why not just handle your case\nand be done with it?). But I don't think that it's a good idea now.\nreltuples is usually derived by ANALYZE using a random sample, so the\nidea that tuple density can be derived accurately enough from a random\nsample is pretty baked in. You're talking about a case where ignoring\njust one page (\"sampling\" all but one of the pages) *isn't* good\nenough. 
It just doesn't seem like something that needs to be addressed\n-- it's quite awkward to do so.\n\nBarring any further objections, I plan on committing the original\nversion tomorrow.\n\n> An alternative solution could be doing double-vetting, where we ignore\n> tuples_scanned if <2% of pages AND <2% of previous estimated tuples\n> was scanned.\n\nI'm not sure that I've understood you, but I think that you're talking\nabout remembering more information (in pg_class), which is surely out\nof scope for a bug fix.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 18 Aug 2022 16:50:37 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Remaining case where reltuples can become distorted across\n multiple VACUUM operations"
}
] |
[
{
"msg_contents": "Hi,\n\nEnum COPY_NEW_FE is removed in commit 3174d69fb9.\n\nShould use COPY_FRONTEND instead.\n\nIssue exists on 15 and master.\n\n```\ntypedef struct CopyFromStateData\n\n- StringInfo fe_msgbuf; /* used if copy_src == COPY_NEW_FE */\n+ StringInfo fe_msgbuf; /* used if copy_src == COPY_FRONTEND */\n\n```\n\nRegards,\nZhang Mingli",
"msg_date": "Sat, 6 Aug 2022 19:20:25 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "[Code Comments]enum COPY_NEW_FE is removed"
},
{
"msg_contents": "On Sat, Aug 06, 2022 at 07:20:25PM +0800, Zhang Mingli wrote:\n> Enum COPY_NEW_FE is removed in commit 3174d69fb9.\n> \n> Should use COPY_FRONTEND instead.\n> \n> Issue exists on 15 and master.\n\nThis also exists in REL_14_STABLE. I have fixed that on HEAD, as\nthat's just a comment.\n--\nMichael",
"msg_date": "Sat, 6 Aug 2022 21:17:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Code Comments]enum COPY_NEW_FE is removed"
},
{
"msg_contents": "Ok, thanks.\n\nMichael Paquier <michael@paquier.xyz>于2022年8月6日 周六20:17写道:\n\n> On Sat, Aug 06, 2022 at 07:20:25PM +0800, Zhang Mingli wrote:\n> > Enum COPY_NEW_FE is removed in commit 3174d69fb9.\n> >\n> > Should use COPY_FRONTEND instead.\n> >\n> > Issue exists on 15 and master.\n>\n> This also exists in REL_14_STABLE. I have fixed that on HEAD, as\n> that's just a comment.\n> --\n> Michael\n>",
"msg_date": "Sat, 6 Aug 2022 21:01:27 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Code Comments]enum COPY_NEW_FE is removed"
}
] |
[
{
"msg_contents": "Hi,\n\nAbout the error:\nResult of 'malloc' is converted to a pointer of type 'char', which is\nincompatible with sizeof operand type 'struct guts'\n\nThe patch attached tries to fix this.\n\nregards,\nRanier Vilela",
"msg_date": "Sat, 6 Aug 2022 09:12:53 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Allocator sizeof operand mismatch (src/backend/regex/regcomp.c)"
},
{
"msg_contents": "I think it’s ok, re_guts is converted when used\n\n(struct guts *) re->re_guts;\n\nAnd there is comments in regex.h\n\n\n\tchar *re_guts; /* `char *' is more portable than `void *' */\n\nRegards,\nZhang Mingli\nOn Aug 6, 2022, 20:13 +0800, Ranier Vilela <ranier.vf@gmail.com>, wrote:\n> Hi,\n>\n> About the error:\n> Result of 'malloc' is converted to a pointer of type 'char', which is incompatible with sizeof operand type 'struct guts'\n>\n> The patch attached tries to fix this.\n>\n> regards,\n> Ranier Vilela",
"msg_date": "Sat, 6 Aug 2022 22:05:41 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allocator sizeof operand mismatch\n (src/backend/regex/regcomp.c)"
},
{
"msg_contents": "Zhang Mingli <zmlpostgres@gmail.com> writes:\n> I think it’s ok, re_guts is converted when used\n> (struct guts *) re->re_guts;\n> And there is comments in regex.h\n> \tchar *re_guts; /* `char *' is more portable than `void *' */\n\nBoy, that comment is showing its age isn't it? If we were to do\nanything about this, I'd be more inclined to change re_guts to void*.\nBut, never having seen any compiler warnings about this code,\nI don't feel a strong need to do something.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Aug 2022 10:47:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allocator sizeof operand mismatch (src/backend/regex/regcomp.c)"
},
{
"msg_contents": "On Aug 6, 2022, 22:47 +0800, Tom Lane <tgl@sss.pgh.pa.us>, wrote:\n> Zhang Mingli <zmlpostgres@gmail.com> writes:\n> > I think it’s ok, re_guts is converted when used\n> > (struct guts *) re->re_guts;\n> > And there is comments in regex.h\n> > char *re_guts; /* `char *' is more portable than `void *' */\n>\n> Boy, that comment is showing its age isn't it? If we were to do\n> anything about this, I'd be more inclined to change re_guts to void*.\nGot it, thanks.",
"msg_date": "Sat, 6 Aug 2022 23:02:53 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allocator sizeof operand mismatch\n (src/backend/regex/regcomp.c)"
}
] |
[
{
"msg_contents": "Hello,\n\nPostgres seems to always optimize ORDER BY + LIMIT as top-k sort.\nRecently I happened to notice\nthat in this scenario the output tuple number of the sort node is not\nthe same as the LIMIT tuple number.\n\nSee below,\n\npostgres=# explain analyze verbose select * from t1 order by a limit 10;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------\n------------------------------\n Limit (cost=69446.17..69446.20 rows=10 width=4) (actual\ntime=282.896..282.923 rows=10 loops=1)\n Output: a\n -> Sort (cost=69446.17..71925.83 rows=991862 width=4) (actual\ntime=282.894..282.896 rows=10 l\noops=1)\n Output: a\n Sort Key: t1.a\n Sort Method: top-N heapsort Memory: 25kB\n -> Seq Scan on public.t1 (cost=0.00..44649.62 rows=991862\nwidth=4) (actual time=0.026..\n195.438 rows=1001000 loops=1)\n Output: a\n Planning Time: 0.553 ms\n Execution Time: 282.961 ms\n(10 rows)\n\nWe can see from the output that the LIMIT cost is wrong also due to\nthis since it assumes the input_rows\nfrom the sort node is 991862 (per gdb debugging).\n\nThis could be easily fixed by below patch,\n\ndiff --git a/src/backend/optimizer/path/costsize.c\nb/src/backend/optimizer/path/costsize.c\nindex fb28e6411a..800cf0b256 100644\n--- a/src/backend/optimizer/path/costsize.c\n+++ b/src/backend/optimizer/path/costsize.c\n@@ -2429,7 +2429,11 @@ cost_sort(Path *path, PlannerInfo *root,\n\n startup_cost += input_cost;\n\n- path->rows = tuples;\n+ if (limit_tuples > 0 && limit_tuples < tuples)\n+ path->rows = limit_tuples;\n+ else\n+ path->rows = tuples;\n+\n path->startup_cost = startup_cost;\n path->total_cost = startup_cost + run_cost;\n }\n\nWith the patch the explain output looks like this.\n\npostgres=# explain analyze verbose select * from t1 order by a limit 10;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------\n------------------------------\n Limit 
(cost=69446.17..71925.83 rows=10 width=4) (actual\ntime=256.204..256.207 rows=10 loops=1)\n Output: a\n -> Sort (cost=69446.17..71925.83 rows=10 width=4) (actual\ntime=256.202..256.203 rows=10 loops\n=1)\n Output: a\n Sort Key: t1.a\n Sort Method: top-N heapsort Memory: 25kB\n -> Seq Scan on public.t1 (cost=0.00..44649.62 rows=991862\nwidth=4) (actual time=1.014..\n169.509 rows=1001000 loops=1)\n Output: a\n Planning Time: 0.076 ms\n Execution Time: 256.232 ms\n(10 rows)\n\nRegards,\nPaul\n\n\n",
"msg_date": "Sat, 6 Aug 2022 23:38:17 +0800",
"msg_from": "Paul Guo <paulguo@gmail.com>",
"msg_from_op": true,
"msg_subject": "A cost issue in ORDER BY + LIMIT"
},
{
"msg_contents": "Hi,\n\nWhat if the rows of t1 are less than the limit number (ex: t1 has 5 rows, limit 10)?\nDoes it matter?\n\n\nRegards,\nZhang Mingli\nOn Aug 6, 2022, 23:38 +0800, Paul Guo <paulguo@gmail.com>, wrote:\n>\n> limit_tuples",
"msg_date": "Sat, 6 Aug 2022 23:48:55 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A cost issue in ORDER BY + LIMIT"
},
{
"msg_contents": "Paul Guo <paulguo@gmail.com> writes:\n> Postgres seems to always optimize ORDER BY + LIMIT as top-k sort.\n> Recently I happened to notice\n> that in this scenario the output tuple number of the sort node is not\n> the same as the LIMIT tuple number.\n\nNo, it isn't, and your proposed patch is completely misguided.\nThe cost and rowcount estimates for a plan node are always written\non the assumption that the node is run to completion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Aug 2022 12:12:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A cost issue in ORDER BY + LIMIT"
}
] |
[
{
"msg_contents": "Buildfarm animal conchuela recently started spitting a lot of warnings\nlike this one:\n\n conchuela | 2022-08-06 12:35:46 | /home/pgbf/buildroot/HEAD/pgsql.build/../pgsql/src/include/port.h:208:70: warning: 'format' attribute argument not supported: gnu_printf [-Wignored-attributes]\n\nI first thought we'd broken something, but upon digging through the\nbuildfarm history, the oldest build showing these warnings is\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2022-07-18%2020%3A20%3A18\n\nThe new commits in that build don't look related, but what does look\nrelated is that the choice of C++ compiler changed:\n\nconfigure: using compiler=gcc 8.3 [DragonFly] Release/2019-02-22\nconfigure: using CXX=ccache clang++14\n\nvs\n\nconfigure: using compiler=gcc 8.3 [DragonFly] Release/2019-02-22\nconfigure: using CXX=g++\n\nThis is seemingly an intentional configuration change, because the\nanimal is reporting different config_env than before. However,\nwe decide what to set PG_PRINTF_ATTRIBUTE to based on what CC\nlikes, and if CXX doesn't like it then you'll get these warnings.\n(The warnings only appear in C++ compiles, else there'd REALLY\nbe a lot of them.)\n\nIs it worth the trouble to try to set PG_PRINTF_ATTRIBUTE differently\nin C and C++ builds? I doubt it. Probably the right fix for this\nis to use matched C and C++ compilers, either both clang or both gcc.\nI fear that inconsistency could lead to bigger problems than some\nwarnings.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Aug 2022 12:59:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "conchuela doesn't like gnu_printf anymore"
},
{
"msg_contents": "\n\nOn 2022-08-06 18:59, Tom Lane wrote:\n\n> This is seemingly an intentional configuration change, because the\n> animal is reporting different config_env than before. However,\n> we decide what to set PG_PRINTF_ATTRIBUTE to based on what CC\n> likes, and if CXX doesn't like it then you'll get these warnings.\n> (The warnings only appear in C++ compiles, else there'd REALLY\n> be a lot of them.)\n\nYes, when I upgraded to the latest DragonFly BSD 6.2.2 I was meaning to \nswitch to CLANG14 for both C and C++. I guess I fat fingered the \nconfiguration somehow.\n\nI have switched to CLANG14 for both C and C++ now.\n\nLet's see if the warnings disappear now.\n\n/Mikael",
"msg_date": "Mon, 8 Aug 2022 21:45:15 +0200",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: conchuela doesn't like gnu_printf anymore"
},
{
"msg_contents": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n> Yes, when I upgraded to the lastest DragonFly BSD 6.2.2 I was meaning to \n> switch to CLANG14 for both C and C++. I guess I fat fingered the \n> configuration somehow.\n> I have switch to CLANG14 for both C and C++ now.\n> Let's see if the warnings disappears now.\n\nYup, looks clean now. Thanks!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Aug 2022 19:13:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: conchuela doesn't like gnu_printf anymore"
}
] |
[
{
"msg_contents": "It's quite possible (and probably very common) for certain tables to\nhave autovacuum scheduling trigger autovacuums based on both the\n\"regular\" bloat-orientated thresholds, and the newer insert-based\nthresholds. It may be far from obvious which triggering condition\nautovacuum.c must have applied to trigger any given autovacuum, since\nthat information isn't currently passed down to lazyvacuum.c. This\nseems like a problem to me; how are users supposed to tune\nautovacuum's thresholds without even basic feedback about how the\nthresholds get applied?\n\nAttached patch teaches autovacuum.c to pass the information down to\nlazyvacuum.c, which includes the information in the autovacuum log.\nThe approach I've taken is very similar to the existing approach with\nanti-wraparound autovacuum. It's pretty straightforward. Note that a\nVACUUM that is an \"automatic vacuum for inserted tuples\" cannot also\nbe an antiwraparound autovacuum, nor can it also be a \"regular\"\nautovacuum/VACUUM -- there are now 3 distinct \"triggering conditions\"\nfor autovacuum.\n\nAlthough this patch doesn't touch antiwraparound autovacuums at all, I\nwill note in passing that I think that anti-wraparound autovacuums\nshould become just another triggering condition for autovacuum -- IMV\nthey shouldn't be special in *any* way. We'd still need to keep\nantiwraparound's \"cannot automatically cancel autovacuum worker\"\nbehavior in some form, but that would become dynamic, a little like\nthe failsafe is today, and would trigger on its own timeline (probably\n*much* later than we first trigger antiwraparound autovacuum). We'd\nalso need to decouple \"aggressiveness\" (the behaviors that we\nassociate with aggressive mode in vacuumlazy.c) from the condition of\nthe table/system when VACUUM first began -- those could all become\ndynamic, too.\n\n-- \nPeter Geoghegan",
"msg_date": "Sat, 6 Aug 2022 13:03:57 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Making autovacuum logs indicate if insert-based threshold was the\n triggering condition"
},
{
"msg_contents": "On Sat, Aug 06, 2022 at 01:03:57PM -0700, Peter Geoghegan wrote:\n> thresholds. It may be far from obvious which triggering condition\n> autovacuum.c must have applied to trigger any given autovacuum, since\n> that information isn't currently passed down to lazyvacuum.c. This\n> seems like a problem to me; how are users supposed to tune\n> autovacuum's thresholds without even basic feedback about how the\n> thresholds get applied?\n\n+1\n\nThis sounded familiar, and it seems like I anticipated that it might be an\nissue. Here, I was advocating for the new insert-based GUCs to default to -1,\nto have insert-based autovacuum fall back to the thresholds specified by the\npre-existing GUCs (20% + 50), which would (in my proposal) remain the normal\nway to tune any type of vacuum.\n\nhttps://www.postgresql.org/message-id/20200317233218.GD26184@telsasoft.com\n\nI haven't heard of anyone who had trouble setting the necessary GUC, but I'm\nnot surprised if most postgres installations are running versions before 13.\n\n> Note that a VACUUM that is an \"automatic vacuum for inserted tuples\" cannot\n> [...] also be a \"regular\" autovacuum/VACUUM\n\nWhy not ?\n\n$ ./tmp_install/usr/local/pgsql/bin/postgres -D src/test/regress/tmp_check/data -c log_min_messages=debug3 -c autovacuum_naptime=9s&\nDROP TABLE t; CREATE TABLE t (i int); INSERT INTO t SELECT generate_series(1,99999); DELETE FROM t; INSERT INTO t SELECT generate_series(1,99999);\n\n2022-08-06 16:47:47.674 CDT autovacuum worker[12707] DEBUG: t: vac: 99999 (threshold 50), ins: 99999 (threshold 1000), anl: 199998 (threshold 50)\n\n-- \nJustin",
"msg_date": "Sat, 6 Aug 2022 16:50:33 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Making autovacuum logs indicate if insert-based threshold was\n the triggering condition"
},
{
"msg_contents": "On Sat, Aug 6, 2022 at 2:50 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> This sounded familiar, and it seems like I anticipated that it might be an\n> issue. Here, I was advocating for the new insert-based GUCs to default to -1,\n> to have insert-based autovacuum fall back to the thresholds specified by the\n> pre-existing GUCs (20% + 50), which would (in my proposal) remain be the normal\n> way to tune any type of vacuum.\n>\n> https://www.postgresql.org/message-id/20200317233218.GD26184@telsasoft.com\n>\n> I haven't heard of anyone who had trouble setting the necessary GUC, but I'm\n> not surprised if most postgres installations are running versions before 13.\n\nISTM that having insert-based triggering conditions is definitely a\ngood idea, but what we have right now still has problems. It currently\nwon't work very well unless the user goes out of their way to tune\nfreezing to do the right thing. Typically we miss out on the\nopportunity to freeze early, because without sophisticated\nintervention from the user there is only a slim chance of *any*\nfreezing taking place outside of the inevitable antiwraparound\nautovacuum.\n\n> > Note that a VACUUM that is an \"automatic vacuum for inserted tuples\" cannot\n> > [...] also be a \"regular\" autovacuum/VACUUM\n>\n> Why not ?\n\nWell, autovacuum.c should have (and/or kind of already has) 3\ndifferent triggering conditions. These are mutually exclusive\nconditions -- technically autovacuum.c always launches an autovacuum\nagainst a table because exactly 1 of the 3 thresholds were crossed. 
My\npatch makes sure that it always gives exactly one reason why\nautovacuum.c decided to VACUUM, so by definition there is only one\nrelevant piece of information for vacuumlazy.c to report in the log.\nThat's fairly simple and high level, and presumably something that\nusers won't have much trouble understanding.\n\nRight now antiwraparound autovacuum \"implies aggressive\", in that it\nalmost always makes vacuumlazy.c use aggressive mode, but this seems\ntotally arbitrary to me -- they don't have to be virtually synonymous.\nI think that antiwraparound autovacuum could even be rebranded as \"an\nautovacuum that takes place because the table hasn't had one in a long\ntime\". This is much less scary, and makes it clearer that autovacuum.c\nshouldn't be expected to really understand what will turn out to be\nimportant \"at runtime\". That's the time to make important decisions\nabout what work to do -- when we actually have accurate information.\n\nMy antiwraparound example is just that: an example. There is a broader\nidea: we shouldn't be too confident that the exact triggering\ncondition autovacuum.c applied to launch an autovacuum worker turns\nout to be the best reason to VACUUM, or even a good reason --\nvacuumlazy.c should be able to cope with that. The user is kept in the\nloop about both, by reporting the triggering condition and the details\nof what really happened at runtime. Maybe lazyvacuum.c can be taught\nto speed up and slow down based on the conditions it observes as it\nscans the heap -- there are many possibilities.\n\nThis broader idea is pretty much what you were getting at with your\nexample, I think.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 6 Aug 2022 15:41:57 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Making autovacuum logs indicate if insert-based threshold was the\n triggering condition"
},
{
"msg_contents": "On Sat, Aug 06, 2022 at 03:41:57PM -0700, Peter Geoghegan wrote:\n> > > Note that a VACUUM that is an \"automatic vacuum for inserted tuples\" cannot\n> > > [...] also be a \"regular\" autovacuum/VACUUM\n> >\n> > Why not ?\n\nI think maybe you missed my intent in trimming the \"anti-wraparound\" part of\nyour text.\n\nMy point was concerning your statement that \"autovacuum for inserted tuples ..\ncannot also be a regular autovacuum\" (meaning triggered by dead tuples).\n\n> Well, autovacuum.c should have (and/or kind of already has) 3\n> different triggering conditions. These are mutually exclusive\n> conditions -- technically autovacuum.c always launches an autovacuum\n> against a table because exactly 1 of the 3 thresholds were crossed.\n\nThe issue being that both thresholds can be crossed:\n\n>> 2022-08-06 16:47:47.674 CDT autovacuum worker[12707] DEBUG: t: VAC: 99999 (THRESHOLD 50), INS: 99999 (THRESHOLD 1000), anl: 199998 (threshold 50)\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 6 Aug 2022 17:51:55 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Making autovacuum logs indicate if insert-based threshold was\n the triggering condition"
},
{
"msg_contents": "On Sat, Aug 6, 2022 at 3:51 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Well, autovacuum.c should have (and/or kind of already has) 3\n> > different triggering conditions. These are mutually exclusive\n> > conditions -- technically autovacuum.c always launches an autovacuum\n> > against a table because exactly 1 of the 3 thresholds were crossed.\n>\n> The issue being that both thresholds can be crossed:\n>\n> >> 2022-08-06 16:47:47.674 CDT autovacuum worker[12707] DEBUG: t: VAC: 99999 (THRESHOLD 50), INS: 99999 (THRESHOLD 1000), anl: 199998 (threshold 50)\n\nWhat are the chances that both thresholds will be crossed at *exactly*\n(not approximately) the same time in a real world case, where the\ntable isn't tiny (tiny relative to the \"autovacuum_naptime quantum\")?\nThis is a very narrow case.\n\nBesides, the same can already be said with how autovacuum.c crosses\nthe XID-based antiwraparound threshold. Yet we still arbitrarily\nreport that it's antiwraparound in the logs, which (at least right\nnow) is generally assumed to mostly be about advancing relfrozenxid.\n(Or maybe it's the other way around; it doesn't matter.)\n\nIt might make sense to *always* show how close we were to hitting each\nof the thresholds, including the ones that we didn't end up hitting\n(we may come pretty close quite often, which seems like it might\nmatter). But showing multiple conditions together just because the\nplanets aligned (we hit multiple thresholds together) emphasizes the\nlow-level mechanism, which is pretty far removed from anything that\nmatters. You might as well pick either threshold at random once this\nhappens -- even an expert won't be able to tell the difference.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 6 Aug 2022 16:09:28 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Making autovacuum logs indicate if insert-based threshold was the\n triggering condition"
},
{
"msg_contents": "On Sat, Aug 06, 2022 at 04:09:28PM -0700, Peter Geoghegan wrote:\n> What are the chances that both thresholds will be crossed at *exactly*\n> (not approximately) the same time in a real world case, where the\n> table isn't tiny (tiny relative to the \"autovacuum_naptime quantum\")?\n> This is a very narrow case.\n\nThe threshold wouldn't need to be crossed within autovacuum_naptime, since all\nthe workers might be busy. Consider autovacuum delay, analyze on long/wide\ntables, multiple extended stats objects, or autovacuum parameters which are\nchanged off-hours by a cronjob.\n\n> It might make sense to *always* show how close we were to hitting each\n> of the thresholds, including the ones that we didn't end up hitting\n> (we may come pretty close quite often, which seems like it might\n> matter). But showing multiple conditions together just because the\n> planets aligned (we hit multiple thresholds together) emphasizes the\n> low-level mechanism, which is pretty far removed from anything that\n> matters. You might as well pick either threshold at random once this\n> happens -- even an expert won't be able to tell the difference.\n\nI don't have strong feelings about it; I'm just pointing out that the\ntwo of the conditions aren't actually exclusive.\n\nIt seems like it could be less confusing to show both. Consider someone who is\ntrying to *reduce* how often autovacuum runs, or give priority to some tables\nby raising the thresholds for other tables.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 9 Sep 2022 13:11:24 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Making autovacuum logs indicate if insert-based threshold was\n the triggering condition"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 11:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > It might make sense to *always* show how close we were to hitting each\n> > of the thresholds, including the ones that we didn't end up hitting\n> > (we may come pretty close quite often, which seems like it might\n> > matter). But showing multiple conditions together just because the\n> > planets aligned (we hit multiple thresholds together) emphasizes the\n> > low-level mechanism, which is pretty far removed from anything that\n> > matters. You might as well pick either threshold at random once this\n> > happens -- even an expert won't be able to tell the difference.\n>\n> I don't have strong feelings about it; I'm just pointing out that the\n> two of the conditions aren't actually exclusive.\n\nFair enough. I'm just pointing out that the cutoffs are continuous for\nall practical purposes, even if there are cases where they seem kind\nof discrete, due only to implementation details (e.g.\nautovacuum_naptime stuff). Displaying only one reason for triggering\nan autovacuum is consistent with the idea that the cutoffs are\ncontinuous. It's not literally true that they're continuous, but it\nmight as well be.\n\nI think of it as similar to how it's not literally true that a coin\ntoss is always either heads or tails, though it might as well be true.\nSure, even a standard fair coin toss might result in the coin landing\non its side. That'll probably never happen even once, but if does:\njust flip the coin again! The physical coin toss was never the\nimportant part.\n\n> It seems like it could be less confusing to show both. 
Consider someone who is\n> trying to *reduce* how often autovacuum runs, or give priority to some tables\n> by raising the thresholds for other tables.\n\nMy objection to that sort of approach is that it suggests a difference\nin what each VACUUM actually does -- as if autovacuum.c actually\ndecided on a particular runtime behavior for the VACUUM up front,\nbased on its own considerations that come from statistics about the\nworkload. I definitely want to avoid creating that false impression in\nthe minds of users.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 9 Sep 2022 12:10:02 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Making autovacuum logs indicate if insert-based threshold was the\n triggering condition"
},
{
"msg_contents": "On Sat, Aug 6, 2022 at 1:03 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached patch teaches autovacuum.c to pass the information down to\n> lazyvacuum.c, which includes the information in the autovacuum log.\n> The approach I've taken is very similar to the existing approach with\n> anti-wraparound autovacuum. It's pretty straightforward.\n\nI have formally withdrawn this patch. I still think it's a good idea,\nand I'm not abandoning the idea. The patch has just been superseded by\nanother patch of mine:\n\nhttps://postgr.es/m/CAH2-Wz=hj-RCr6fOj_L3_0J1Ws8fOoxTQLmtM57gPc19beZz=Q@mail.gmail.com\n\nThis other patch has a much broader goal: it decouples\n\"antiwraparound-ness vs regular-ness\" from the criteria that triggered\nautovacuum (which includes a \"table age\" trigger criteria). Since the\nother patch has to invent the whole concept of an autovacuum trigger\ncriteria (which it reports on via a line in autovacuum server log\nreports), it seemed best to do everything together.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 25 Nov 2022 15:06:15 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Making autovacuum logs indicate if insert-based threshold was the\n triggering condition"
}
] |
[
{
"msg_contents": "Hi,\n\nI tried PG on the gcc compile farm solaris 11.31 host. When compiling with sun\nstudio I can build the backend etc, but preproc.c fails to compile:\n\nccache /opt/developerstudio12.6/bin/cc -m64 -Xa -g -v -O0 -D_POSIX_PTHREAD_SEMANTICS -mt -D_REENTRANT -D_THREAD_SAFE -I../include -I../../../../src/interfaces/ecpg/include -I. -I. -I../../../../src/interfaces/ecpg/ecpglib -I../../../../src/interfaces/libpq -I../../../../src/include -D_POSIX_PTHREAD_SEMANTICS -c -o preproc.o preproc.c\nAssertion failed: hmap_size (phdl->fb.map) == 0, file ../src/line_num_internal.c, line 230, function twolist_proc_clear\nAssertion failed: hmap_size (phdl->fb.map) == 0, file ../src/line_num_internal.c, line 230, function twolist_proc_clear\ncc: Fatal error in /opt/developerstudio12.6/lib/compilers/bin/acomp\ncc: Status 134\n\nthe assertion is just a consequence of running out of memory, I believe, acomp\nis well over 20GB at that point.\n\nHowever I see that wrasse doesn't seem to have that problem. Which leaves me a\nbit confused, because I think that's the same machine and compiler binary.\n\nNoah, did you encounter this before / do anything to avoid this?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 6 Aug 2022 14:07:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On Sat, Aug 06, 2022 at 02:07:24PM -0700, Andres Freund wrote:\n> I tried PG on the gcc compile farm solaris 11.31 host. When compiling with sun\n> studio I can build the backend etc, but preproc.c fails to compile:\n> \n> ccache /opt/developerstudio12.6/bin/cc -m64 -Xa -g -v -O0 -D_POSIX_PTHREAD_SEMANTICS -mt -D_REENTRANT -D_THREAD_SAFE -I../include -I../../../../src/interfaces/ecpg/include -I. -I. -I../../../../src/interfaces/ecpg/ecpglib -I../../../../src/interfaces/libpq -I../../../../src/include -D_POSIX_PTHREAD_SEMANTICS -c -o preproc.o preproc.c\n> Assertion failed: hmap_size (phdl->fb.map) == 0, file ../src/line_num_internal.c, line 230, function twolist_proc_clear\n> Assertion failed: hmap_size (phdl->fb.map) == 0, file ../src/line_num_internal.c, line 230, function twolist_proc_clear\n> cc: Fatal error in /opt/developerstudio12.6/lib/compilers/bin/acomp\n> cc: Status 134\n> \n> the assertion is just a consequence of running out of memory, I believe, acomp\n> is well over 20GB at that point.\n> \n> However I see that wrasse doesn't seem to have that problem. Which leaves me a\n> bit confused, because I think that's the same machine and compiler binary.\n> \n> Noah, did you encounter this before / do anything to avoid this?\n\nYes. Drop --enable-debug, and override TMPDIR to some disk-backed location.\n\n From the earliest days of wrasse, the compiler used too much RAM to build\npreproc.o with --enable-debug. As of 2021-04, the compiler's \"acomp\" phase\nneeded 10G in one process, and later phases needed 11.6G across two processes.\nCompilation wrote 3.7G into TMPDIR. Since /tmp consumes RAM+swap, overriding\nTMPDIR relieved 3.7G of RAM pressure. Even with those protections, wrasse\nintermittently reaches the 14G limit I impose (via \"ulimit -v 14680064\"). I\nhad experimented with different optimization levels, but that didn't help.\n\n\n",
"msg_date": "Sat, 6 Aug 2022 16:09:24 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-06 16:09:24 -0700, Noah Misch wrote:\n> On Sat, Aug 06, 2022 at 02:07:24PM -0700, Andres Freund wrote:\n> > I tried PG on the gcc compile farm solaris 11.31 host. When compiling with sun\n> > studio I can build the backend etc, but preproc.c fails to compile:\n> > \n> > ccache /opt/developerstudio12.6/bin/cc -m64 -Xa -g -v -O0 -D_POSIX_PTHREAD_SEMANTICS -mt -D_REENTRANT -D_THREAD_SAFE -I../include -I../../../../src/interfaces/ecpg/include -I. -I. -I../../../../src/interfaces/ecpg/ecpglib -I../../../../src/interfaces/libpq -I../../../../src/include -D_POSIX_PTHREAD_SEMANTICS -c -o preproc.o preproc.c\n> > Assertion failed: hmap_size (phdl->fb.map) == 0, file ../src/line_num_internal.c, line 230, function twolist_proc_clear\n> > Assertion failed: hmap_size (phdl->fb.map) == 0, file ../src/line_num_internal.c, line 230, function twolist_proc_clear\n> > cc: Fatal error in /opt/developerstudio12.6/lib/compilers/bin/acomp\n> > cc: Status 134\n> > \n> > the assertion is just a consequence of running out of memory, I believe, acomp\n> > is well over 20GB at that point.\n> > \n> > However I see that wrasse doesn't seem to have that problem. Which leaves me a\n> > bit confused, because I think that's the same machine and compiler binary.\n> > \n> > Noah, did you encounter this before / do anything to avoid this?\n> \n> Yes. Drop --enable-debug, and override TMPDIR to some disk-backed location.\n\nThanks - that indeed helped...\n\n\n> From the earliest days of wrasse, the compiler used too much RAM to build\n> preproc.o with --enable-debug. As of 2021-04, the compiler's \"acomp\" phase\n> needed 10G in one process, and later phases needed 11.6G across two processes.\n> Compilation wrote 3.7G into TMPDIR. Since /tmp consumes RAM+swap, overriding\n> TMPDIR relieved 3.7G of RAM pressure. Even with those protections, wrasse\n> intermittently reaches the 14G limit I impose (via \"ulimit -v 14680064\"). I\n> had experimented with different optimization levels, but that didn't help.\n\nYikes. And it's not like newer compiler versions are likely to be forthcoming\n(12.6 is newest and is from 2017...). Wonder if we should just require gcc on\nsolaris... There's a decent amount of stuff we could rip out in that case.\n\nI was trying to build on solaris because I was seeing if we could get rid of\nwith_gnu_ld, motivated by making the meson build generate a working\nMakefile.global for pgxs' benefit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 6 Aug 2022 16:52:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-06 16:09:24 -0700, Noah Misch wrote:\n>> From the earliest days of wrasse, the compiler used too much RAM to build\n>> preproc.o with --enable-debug. As of 2021-04, the compiler's \"acomp\" phase\n>> needed 10G in one process, and later phases needed 11.6G across two processes.\n>> Compilation wrote 3.7G into TMPDIR. Since /tmp consumes RAM+swap, overriding\n>> TMPDIR relieved 3.7G of RAM pressure. Even with those protections, wrasse\n>> intermittently reaches the 14G limit I impose (via \"ulimit -v 14680064\"). I\n>> had experimented with different optimization levels, but that didn't help.\n\n> Yikes. And it's not like newer compiler versions are likely to be forthcoming\n> (12.6 is newest and is from 2017...). Wonder if we should just require gcc on\n> solaris... There's a decent amount of stuff we could rip out in that case.\n\nSeems like it's only a matter of time before we add enough stuff to\nthe grammar that the build fails, period.\n\nHowever, I wonder why exactly it's so large, and why the backend's gram.o\nisn't an even bigger problem. Maybe an effort to cut preproc.o's code\nsize could yield dividends?\n\nFWIW, my late and unlamented animal gaur was also showing unhappiness with\nthe size of preproc.o, manifested as a boatload of warnings like\n/var/tmp//cc0MHZPD.s:11594: Warning: .stabn: description field '109d3' too big, try a different debug format\nwhich did not happen with gram.o.\n\nEven on a modern Linux:\n\n$ size src/backend/parser/gram.o\n text data bss dec hex filename\n 656568 0 0 656568 a04b8 src/backend/parser/gram.o\n$ size src/interfaces/ecpg/preproc/preproc.o\n text data bss dec hex filename\n 912005 188 7348 919541 e07f5 src/interfaces/ecpg/preproc/preproc.o\n\nSo there's something pretty bloated there. It doesn't seem like\necpg's additional productions should justify a nigh 50% code\nsize increase.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Aug 2022 20:05:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On Sun, Aug 7, 2022 at 11:52 AM Andres Freund <andres@anarazel.de> wrote:\n> Yikes. And it's not like newer compiler versions are likely to be forthcoming\n> (12.6 is newest and is from 2017...). Wonder if we should just require gcc on\n> solaris... There's a decent amount of stuff we could rip out in that case.\n\nIndependently of the RAM requirements topic, I totally agree that\ndoing extra work to support a compiler that hasn't had a release in 5\nyears doesn't seem like time well spent.\n\n\n",
"msg_date": "Sun, 7 Aug 2022 12:22:02 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On Sat, Aug 06, 2022 at 08:05:14PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-08-06 16:09:24 -0700, Noah Misch wrote:\n> >> From the earliest days of wrasse, the compiler used too much RAM to build\n> >> preproc.o with --enable-debug. As of 2021-04, the compiler's \"acomp\" phase\n> >> needed 10G in one process, and later phases needed 11.6G across two processes.\n> >> Compilation wrote 3.7G into TMPDIR. Since /tmp consumes RAM+swap, overriding\n> >> TMPDIR relieved 3.7G of RAM pressure. Even with those protections, wrasse\n> >> intermittently reaches the 14G limit I impose (via \"ulimit -v 14680064\"). I\n> >> had experimented with different optimization levels, but that didn't help.\n> \n> > Yikes. And it's not like newer compiler versions are likely to be forthcoming\n> > (12.6 is newest and is from 2017...). Wonder if we should just require gcc on\n> > solaris... There's a decent amount of stuff we could rip out in that case.\n> \n> Seems like it's only a matter of time before we add enough stuff to\n> the grammar that the build fails, period.\n\nI wouldn't worry about that enough to work hard in advance. The RAM usage can\ngrow by about 55% before that's a problem. (The 14G ulimit can tolerate a\nraise.) By then, the machine may be gone or have more RAM. Perhaps even\nBison will have changed its code generation. If none of those happen, I could\nswitch to gcc, hack things to use gcc for just preproc.o, etc.\n\n> So there's something pretty bloated there. It doesn't seem like\n> ecpg's additional productions should justify a nigh 50% code\n> size increase.\n\nTrue.\n\n\n",
"msg_date": "Sat, 6 Aug 2022 17:25:52 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-06 20:05:14 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-08-06 16:09:24 -0700, Noah Misch wrote:\n> >> From the earliest days of wrasse, the compiler used too much RAM to build\n> >> preproc.o with --enable-debug. As of 2021-04, the compiler's \"acomp\" phase\n> >> needed 10G in one process, and later phases needed 11.6G across two processes.\n> >> Compilation wrote 3.7G into TMPDIR. Since /tmp consumes RAM+swap, overriding\n> >> TMPDIR relieved 3.7G of RAM pressure. Even with those protections, wrasse\n> >> intermittently reaches the 14G limit I impose (via \"ulimit -v 14680064\"). I\n> >> had experimented with different optimization levels, but that didn't help.\n>\n> > Yikes. And it's not like newer compiler versions are likely to be forthcoming\n> > (12.6 is newest and is from 2017...). Wonder if we should just require gcc on\n> > solaris... There's a decent amount of stuff we could rip out in that case.\n>\n> Seems like it's only a matter of time before we add enough stuff to\n> the grammar that the build fails, period.\n\nYea, it doesn't look too far off.\n\n\n> However, I wonder why exactly it's so large, and why the backend's gram.o\n> isn't an even bigger problem. Maybe an effort to cut preproc.o's code\n> size could yield dividends?\n\ngram.c also compiles slowly and uses a lot of memory. Roughly ~8GB memory at\nthe peak (just watching top) and 1m40s (with debugging disabled, temp files on\ndisk etc).\n\nI don't entirely know what parse.pl actually tries to achieve. The generated\noutput looks more different from gram.y than I'd have imagined.\n\nIt's certainly interesting that it ends up roughly 30% larger .c bison\noutput. Which roughly matches the difference in memory usage.\n\n\n> FWIW, my late and unlamented animal gaur was also showing unhappiness with\n> the size of preproc.o, manifested as a boatload of warnings like\n> /var/tmp//cc0MHZPD.s:11594: Warning: .stabn: description field '109d3' too big, try a different debug format\n> which did not happen with gram.o.\n\nI suspect we're going to have to do something about the gram.c size on its\nown. It's already the slowest compilation step by a lot, even on modern\ncompilers.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 6 Aug 2022 17:42:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-06 17:25:52 -0700, Noah Misch wrote:\n> On Sat, Aug 06, 2022 at 08:05:14PM -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > Yikes. And it's not like newer compiler versions are likely to be forthcoming\n> > > (12.6 is newest and is from 2017...). Wonder if we should just require gcc on\n> > > solaris... There's a decent amount of stuff we could rip out in that case.\n> > \n> > Seems like it's only a matter of time before we add enough stuff to\n> > the grammar that the build fails, period.\n> \n> I wouldn't worry about that enough to work hard in advance. The RAM usage can\n> grow by about 55% before that's a problem. (The 14G ulimit can tolerate a\n> raise.) By then, the machine may be gone or have more RAM. Perhaps even\n> Bison will have changed its code generation. If none of those happen, I could\n> switch to gcc, hack things to use gcc for just preproc.o, etc.\n\nSure, we can hack around it in some way. But if we need such hackery to\ncompile postgres with a compiler, what's the point of supporting that\ncompiler? It's not like sunpro provides with awesome static analysis or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 6 Aug 2022 17:43:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On Sat, Aug 06, 2022 at 05:43:50PM -0700, Andres Freund wrote:\n> On 2022-08-06 17:25:52 -0700, Noah Misch wrote:\n> > On Sat, Aug 06, 2022 at 08:05:14PM -0400, Tom Lane wrote:\n> > > Andres Freund <andres@anarazel.de> writes:\n> > > > Yikes. And it's not like newer compiler versions are likely to be forthcoming\n> > > > (12.6 is newest and is from 2017...). Wonder if we should just require gcc on\n> > > > solaris... There's a decent amount of stuff we could rip out in that case.\n> > > \n> > > Seems like it's only a matter of time before we add enough stuff to\n> > > the grammar that the build fails, period.\n> > \n> > I wouldn't worry about that enough to work hard in advance. The RAM usage can\n> > grow by about 55% before that's a problem. (The 14G ulimit can tolerate a\n> > raise.) By then, the machine may be gone or have more RAM. Perhaps even\n> > Bison will have changed its code generation. If none of those happen, I could\n> > switch to gcc, hack things to use gcc for just preproc.o, etc.\n> \n> Sure, we can hack around it in some way. But if we need such hackery to\n> compile postgres with a compiler, what's the point of supporting that\n> compiler? It's not like sunpro provides with awesome static analysis or such.\n\nTo have a need to decide that, PostgreSQL would need to grow preproc.o such\nthat it causes 55% higher RAM usage, and the sunpro buildfarm members extant\nat that time would need to have <= 32 GiB RAM. I give a 15% chance of\nreaching such conditions, and we don't gain much by deciding in advance. I'd\nprefer to focus on decisions affecting more-probable outcomes.\n\n\n",
"msg_date": "Sat, 6 Aug 2022 17:59:54 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On 2022-08-06 17:59:54 -0700, Noah Misch wrote:\n> On Sat, Aug 06, 2022 at 05:43:50PM -0700, Andres Freund wrote:\n> > Sure, we can hack around it in some way. But if we need such hackery to\n> > compile postgres with a compiler, what's the point of supporting that\n> > compiler? It's not like sunpro provides with awesome static analysis or such.\n> \n> To have a need to decide that, PostgreSQL would need to grow preproc.o such\n> that it causes 55% higher RAM usage, and the sunpro buildfarm members extant\n> at that time would need to have <= 32 GiB RAM. I give a 15% chance of\n> reaching such conditions, and we don't gain much by deciding in advance. I'd\n> prefer to focus on decisions affecting more-probable outcomes.\n\nMy point wasn't about the future - *today* a compile with normal settings\ndoesn't work, on a machine with a reasonable amount of ram. Who does it help\nif one person can get postgres to compile with some applied magic - normal\nusers won't.\n\nAnd it's not a cost free thing to support, e.g. I tried to build because\nsolaris with suncc forces me to generate with_gnu_ld when generating a\ncompatible Makefile.global for pgxs with meson.\n\n\n",
"msg_date": "Sat, 6 Aug 2022 18:09:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sat, Aug 06, 2022 at 05:43:50PM -0700, Andres Freund wrote:\n>> Sure, we can hack around it in some way. But if we need such hackery to\n>> compile postgres with a compiler, what's the point of supporting that\n>> compiler? It's not like sunpro provides with awesome static analysis or such.\n\n> To have a need to decide that, PostgreSQL would need to grow preproc.o such\n> that it causes 55% higher RAM usage, and the sunpro buildfarm members extant\n> at that time would need to have <= 32 GiB RAM. I give a 15% chance of\n> reaching such conditions, and we don't gain much by deciding in advance. I'd\n> prefer to focus on decisions affecting more-probable outcomes.\n\nI think it's the same rationale as with other buildfarm animals\nrepresenting niche systems: we make the effort to support them\nin order to avoid becoming locked into a software monoculture.\nThere's not that many compilers in the farm besides gcc/clang/MSVC,\nso I feel anyplace we can find one is valuable.\n\nAs per previous discussion, it may well be that gcc/clang are\ndominating the field so thoroughly that nobody wants to develop\ncompetitors anymore. So in the long run this may be a dead end.\nBut it's hard to be sure about that. For now, as long as\nsomebody's willing to do the work to support a compiler that's\nnot gcc/clang, we should welcome it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Aug 2022 21:10:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On Sat, Aug 06, 2022 at 06:09:27PM -0700, Andres Freund wrote:\n> On 2022-08-06 17:59:54 -0700, Noah Misch wrote:\n> > On Sat, Aug 06, 2022 at 05:43:50PM -0700, Andres Freund wrote:\n> > > Sure, we can hack around it in some way. But if we need such hackery to\n> > > compile postgres with a compiler, what's the point of supporting that\n> > > compiler? It's not like sunpro provides with awesome static analysis or such.\n> > \n> > To have a need to decide that, PostgreSQL would need to grow preproc.o such\n> > that it causes 55% higher RAM usage, and the sunpro buildfarm members extant\n> > at that time would need to have <= 32 GiB RAM. I give a 15% chance of\n> > reaching such conditions, and we don't gain much by deciding in advance. I'd\n> > prefer to focus on decisions affecting more-probable outcomes.\n> \n> My point wasn't about the future - *today* a compile with normal settings\n> doesn't work, on a machine with a reasonable amount of ram. Who does it help\n> if one person can get postgres to compile with some applied magic - normal\n> users won't.\n\nTo me, 32G is on the low side of reasonable, and omitting --enable-debug isn't\nthat magical. (The TMPDIR hack is optional, but I did it to lessen harm to\nother users of that shared machine.)\n\n> And it's not a cost free thing to support, e.g. I tried to build because\n> solaris with suncc forces me to generate with_gnu_ld when generating a\n> compatible Makefile.global for pgxs with meson.\n\nThere may be a strong argument along those lines. Let's suppose you were to\nwrite that revoking sunpro support would save four weeks of Andres Freund time\nin the meson adoption project. I bet a critical mass of people would like\nthat trade. That's orthogonal to preproc.o compilation RAM usage.\n\n\n",
"msg_date": "Sat, 6 Aug 2022 18:26:00 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sat, Aug 06, 2022 at 06:09:27PM -0700, Andres Freund wrote:\n>> And it's not a cost free thing to support, e.g. I tried to build because\n>> solaris with suncc forces me to generate with_gnu_ld when generating a\n>> compatible Makefile.global for pgxs with meson.\n\n> There may be a strong argument along those lines. Let's suppose you were to\n> write that revoking sunpro support would save four weeks of Andres Freund time\n> in the meson adoption project. I bet a critical mass of people would like\n> that trade. That's orthogonal to preproc.o compilation RAM usage.\n\nIMO, it'd be entirely reasonable for Andres to say that *he* doesn't\nwant to fix the meson build scripts for niche platform X. Then\nit'd be up to people who care about platform X to make that happen.\nGiven the current plan of supporting the Makefiles for some years\nmore, there wouldn't even be any great urgency in that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Aug 2022 22:55:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-06 22:55:14 -0400, Tom Lane wrote:\n> IMO, it'd be entirely reasonable for Andres to say that *he* doesn't\n> want to fix the meson build scripts for niche platform X. Then\n> it'd be up to people who care about platform X to make that happen.\n> Given the current plan of supporting the Makefiles for some years\n> more, there wouldn't even be any great urgency in that.\n\nThe \"problem\" in this case is that maintaining pgxs compatibility, as we'd\ndiscussed at pgcon, requires emitting stuff for all the @whatever@ things in\nMakefile.global.in, including with_gnu_ld. Which led me down the rabbithole\nof trying to build on solaris, with sun studio, to see if we could just remove\nwith_gnu_ld (and some others).\n\nThere's a lot of replacements that really aren't needed for pgxs, including\nwith_gnu_ld (after the patch I just sent on the \"baggage\" thread). I tried to\nthink of a way to have a 'missing' equivalent for variables filled with bogus\ncontents, to trigger an error when they're used. But I don't think there's\nsuch a thing?\n\n\nI haven't \"really\" tried because recent-ish python fails to configure on\nsolaris without modifications, and patching python's configure was further\nthan I wanted to go, but I don't foresee many issues supporting building on\nsolaris with gcc.\n\n\nBarring minor adjustments (for e.g. dragonflybsd vs freebsd), there's two\ncurrently \"supported\" OS that require some work:\n- AIX, due to the symbol import / export & linking differences\n- cygwin, although calling that supported right now is a stretch... I don't\n think it'd be too hard, but ...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 6 Aug 2022 20:12:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On Sat, Aug 06, 2022 at 08:12:54PM -0700, Andres Freund wrote:\n> The \"problem\" in this case is that maintaining pgxs compatibility, as we'd\n> discussed at pgcon, requires emitting stuff for all the @whatever@ things in\n> Makefile.global.in, including with_gnu_ld. Which lead me down the rabbithole\n> of trying to build on solaris, with sun studio, to see if we could just remove\n> with_gnu_ld (and some others).\n> \n> There's a lot of replacements that really aren't needed for pgxs, including\n> with_gnu_ld (after the patch I just sent on the \"baggage\" thread). I tried to\n> think of a way to have a 'missing' equivalent for variables filled with bogus\n> contents, to trigger an error when they're used. But I don't think there's\n> such a thing?\n\nFor some patterns of variable use, this works:\n\nbadvar = $(error do not use badvar)\nok:\n\techo hello\nbad:\n\techo $(badvar)\n\n\n",
"msg_date": "Sat, 6 Aug 2022 20:23:40 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-06 22:55:14 -0400, Tom Lane wrote:\n>> IMO, it'd be entirely reasonable for Andres to say that *he* doesn't\n>> want to fix the meson build scripts for niche platform X. Then\n>> it'd be up to people who care about platform X to make that happen.\n>> Given the current plan of supporting the Makefiles for some years\n>> more, there wouldn't even be any great urgency in that.\n\n> The \"problem\" in this case is that maintaining pgxs compatibility, as we'd\n> discussed at pgcon, requires emitting stuff for all the @whatever@ things in\n> Makefile.global.in, including with_gnu_ld.\n\nSure, but why can't you just leave that for later by hard-wiring it\nto false in the meson build? As long as you don't break the Makefile\nbuild, no one is worse off.\n\nI think if we want to get this past the finish line, we need to\nacknowledge that the initial commit isn't going to be perfect.\nThe whole point of continuing to maintain the Makefiles is to\ngive us breathing room to fix remaining issues in a leisurely\nfashion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Aug 2022 01:17:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-07 01:17:22 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-08-06 22:55:14 -0400, Tom Lane wrote:\n> >> IMO, it'd be entirely reasonable for Andres to say that *he* doesn't\n> >> want to fix the meson build scripts for niche platform X. Then\n> >> it'd be up to people who care about platform X to make that happen.\n> >> Given the current plan of supporting the Makefiles for some years\n> >> more, there wouldn't even be any great urgency in that.\n>\n> > The \"problem\" in this case is that maintaining pgxs compatibility, as we'd\n> > discussed at pgcon, requires emitting stuff for all the @whatever@ things in\n> > Makefile.global.in, including with_gnu_ld.\n>\n> Sure, but why can't you just leave that for later by hard-wiring it\n> to false in the meson build? As long as you don't break the Makefile\n> build, no one is worse off.\n\nYea, that's what I am doing now. But it's a fair bit of work figuring out\nwhich values need at least approximately correct values and which not.\n\nIt'd be nice if we had an automated way of building a lot of the extensions\nout there...\n\n\n> I think if we want to get this past the finish line, we need to\n> acknowledge that the initial commit isn't going to be perfect.\n> The whole point of continuing to maintain the Makefiles is to\n> give us breathing room to fix remaining issues in a leisurely\n> fashion.\n\nWholeheartedly agreed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 6 Aug 2022 23:46:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On Sun, Aug 7, 2022 at 7:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Even on a modern Linux:\n>\n> $ size src/backend/parser/gram.o\n> text data bss dec hex filename\n> 656568 0 0 656568 a04b8 src/backend/parser/gram.o\n> $ size src/interfaces/ecpg/preproc/preproc.o\n> text data bss dec hex filename\n> 912005 188 7348 919541 e07f5 src/interfaces/ecpg/preproc/preproc.o\n>\n> So there's something pretty bloated there. It doesn't seem like\n> ecpg's additional productions should justify a nigh 50% code\n> size increase.\n\nComparing gram.o with preproc.o:\n\n$ objdump -t src/backend/parser/gram.o | grep yy | grep -v\nUND | awk '{print $5, $6}' | sort -r | head -n3\n000000000003a24a yytable\n000000000003a24a yycheck\n0000000000013672 base_yyparse\n\n$ objdump -t src/interfaces/ecpg/preproc/preproc.o | grep yy | grep -v\nUND | awk '{print $5, $6}' | sort -r | head -n3\n000000000004d8e2 yytable\n000000000004d8e2 yycheck\n000000000002841e base_yyparse\n\nThe largest lookup tables are ~25% bigger (other tables are trivial in\ncomparison), and the function base_yyparse is about double the size,\nmost of which is a giant switch statement with 2510 / 3912 cases,\nrespectively. That difference does seem excessive. I've long wondered\nif it would be possible / feasible to have more strict separation for\neach C, ECPG commands, and SQL. That sounds like a huge amount of\nwork, though.\n\nPlaying around with the compiler flags on preproc.c, I get these\ncompile times, gcc memory usage as reported by /usr/bin/time -v , and\nsymbol sizes (non-debug build):\n\n-O2:\ntime 8.0s\nMaximum resident set size (kbytes): 255884\n\n-O1:\ntime 6.3s\nMaximum resident set size (kbytes): 170636\n000000000004d8e2 yytable\n000000000004d8e2 yycheck\n00000000000292de base_yyparse\n\n-O0:\ntime 2.9s\nMaximum resident set size (kbytes): 153148\n000000000004d8e2 yytable\n000000000004d8e2 yycheck\n000000000003585e base_yyparse\n\nNote that -O0 bloats the binary probably because it's not using a jump\ntable anymore. O1 might be worth it just to reduce build times for\nslower animals, even if Noah reported this didn't help the issue\nupthread. I suspect it wouldn't slow down production use much since\nthe output needs to be compiled anyway.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 7 Aug 2022 14:47:36 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "\nOn 2022-08-07 Su 02:46, Andres Freund wrote:\n>\n>> I think if we want to get this past the finish line, we need to\n>> acknowledge that the initial commit isn't going to be perfect.\n>> The whole point of continuing to maintain the Makefiles is to\n>> give us breathing room to fix remaining issues in a leisurely\n>> fashion.\n> Wholeheartedly agreed.\n>\n\nI'm waiting for that first commit so I can start working on the\nbuildfarm client changes. Ideally (from my POV) this would happen by\nearly Sept when I will be leaving on a trip for some weeks, and this\nwould be a good project to take with me. Is that possible?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 8 Aug 2022 11:14:58 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-08 11:14:58 -0400, Andrew Dunstan wrote:\n> I'm waiting for that first commit so I can start working on the\n> buildfarm client changes. Ideally (from my POV) this would happen by\n> early Sept when I will be leaving on a trip for some weeks, and this\n> would be a good project to take with me. Is that possible?\n\nYes, I think that should be possible. I think what's required before then is\n1) a minimal docs patch 2) a discussion about where to store tests results\netc. It'll clearly not be finished, but we agreed that a project like this can\nonly be done incrementally after a certain stage...\n\nI've been doing a lot of cleanup over the last few days, and I'll send a new\nversion soon and then kick off the discussion for 2).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 Aug 2022 08:56:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-07 14:47:36 +0700, John Naylor wrote:\n> Playing around with the compiler flags on preproc.c, I get these\n> compile times, gcc memory usage as reported by /usr/bin/time -v , and\n> symbol sizes (non-debug build):\n>\n> -O2:\n> time 8.0s\n> Maximum resident set size (kbytes): 255884\n>\n> -O1:\n> time 6.3s\n> Maximum resident set size (kbytes): 170636\n> 000000000004d8e2 yytable\n> 000000000004d8e2 yycheck\n> 00000000000292de base_yyparse\n>\n> -O0:\n> time 2.9s\n> Maximum resident set size (kbytes): 153148\n> 000000000004d8e2 yytable\n> 000000000004d8e2 yycheck\n> 000000000003585e base_yyparse\n>\n> Note that -O0 bloats the binary probably because it's not using a jump\n> table anymore. O1 might be worth it just to reduce build times for\n> slower animals, even if Noah reported this didn't help the issue\n> upthread. I suspect it wouldn't slow down production use much since\n> the output needs to be compiled anyway.\n\nFWIW, I noticed that the build was much slower on gcc 12 than 11, and reported\nthat as a bug:\nhttps://gcc.gnu.org/bugzilla/show_bug.cgi?id=106809\n\nWhich, impressively promptly, got a workaround in the development branch, and\nwill (based on past experience) likely be backported to the 12 branch\nsoon. Looks like the next set of minor releases will have at least a\nworkaround for that slowdown.\n\nIt's less clear to me if they're going to backport anything about the -On\nregression starting in gcc 9.\n\nIf I understand correctly the problem is due to basic blocks reached from a\nlot of different places. Not too hard to see how that's a problem particularly\nfor preproc.c.\n\n\nIt's worth noting that clang is also very slow, starting at -O1. Albeit in a\nvery different place:\n===-------------------------------------------------------------------------===\n ... Pass execution timing report ...\n===-------------------------------------------------------------------------===\n Total Execution Time: 9.8708 seconds (9.8716 wall clock)\n\n ---User Time--- --System Time-- --User+System-- ---Wall Time--- --- Name ---\n...\n 7.1019 ( 72.7%) 0.0435 ( 40.8%) 7.1454 ( 72.4%) 7.1462 ( 72.4%) Greedy Register Allocator\n\n\n\nThere's lots of code in ecpg like the following:\n\nc_anything: ecpg_ident\t\t\t\t{ $$ = $1; }\n\t\t| Iconst\t\t\t{ $$ = $1; }\n\t\t| ecpg_fconst\t\t\t{ $$ = $1; }\n\t\t| ecpg_sconst\t\t\t{ $$ = $1; }\n\t\t| '*'\t\t\t\t{ $$ = mm_strdup(\"*\"); }\n\t\t| '+'\t\t\t\t{ $$ = mm_strdup(\"+\"); }\n\t\t| '-'\t\t\t\t{ $$ = mm_strdup(\"-\"); }\n\t\t| '/'\t\t\t\t{ $$ = mm_strdup(\"/\"); }\n...\n\t\t| UNION\t\t\t\t{ $$ = mm_strdup(\"union\"); }\n\t\t| VARCHAR\t\t\t{ $$ = mm_strdup(\"varchar\"); }\n\t\t| '['\t\t\t\t{ $$ = mm_strdup(\"[\"); }\n\t\t| ']'\t\t\t\t{ $$ = mm_strdup(\"]\"); }\n\t\t| '='\t\t\t\t{ $$ = mm_strdup(\"=\"); }\n\t\t| ':'\t\t\t\t{ $$ = mm_strdup(\":\"); }\n\t\t;\n\nI wonder if that couldn't be done smarter pretty easily. Not immediately sure\nif we can just get the string matching a keyword from the lexer? But even if\nnot, replacing all the branches with a single lookup table of the\nkeyword->string. Seems that could reduce the number of switch cases and parser\nstates a decent amount.\n\n\nI also wonder if we shouldn't just make ecpg optional at some point. Or even\nmove it out of the tree.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Sep 2022 10:40:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
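The lookup-table idea in the message above can be sketched quickly. This is Python purely for illustration (the real change would be C code in the generated ecpg parser), and the token names in the table are illustrative, not the actual ecpg token set: the point is one shared action plus a token -> spelling table instead of one grammar action per keyword.

```python
# Sketch of the "single lookup table of keyword->string" suggestion:
# literal-valued tokens pass their text through, keywords come from a table.
# The entries below are illustrative, not the real ecpg grammar's token set.
KEYWORD_SPELLING = {
    'UNION': 'union',
    'VARCHAR': 'varchar',
    "'*'": '*',
    "'['": '[',
    "']'": ']',
}

def c_anything_spelling(token, literal_text=None):
    """One shared action for the c_anything-style rules: tokens that carry
    their own text (ecpg_ident, Iconst, ...) return it directly, keywords
    are looked up once instead of each having a dedicated branch."""
    if literal_text is not None:
        return literal_text
    return KEYWORD_SPELLING[token]
```

Whether this actually reduces parser states depends on how the rule is restructured in the grammar itself; the table only replaces the per-keyword actions.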
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I also wonder if we shouldn't just make ecpg optional at some point. Or even\n> move it out of the tree.\n\nThe reason it's in the tree is to ensure its grammar stays in sync\nwith the core grammar, and perhaps more to the point, that it's\npossible to build its grammar at all. If it were at arm's length,\nwe'd probably not have noticed the conflict over STRING in the JSON\npatches until unpleasantly far down the road (to mention just the\nmost recent example). However, those aren't arguments against\nmaking it optional-to-build like the PLs are.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Sep 2022 13:56:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "\nOn 2022-09-02 Fr 13:56, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> I also wonder if we shouldn't just make ecpg optional at some point. Or even\n>> move it out of the tree.\n> The reason it's in the tree is to ensure its grammar stays in sync\n> with the core grammar, and perhaps more to the point, that it's\n> possible to build its grammar at all. If it were at arm's length,\n> we'd probably not have noticed the conflict over STRING in the JSON\n> patches until unpleasantly far down the road (to mention just the\n> most recent example). However, those aren't arguments against\n> making it optional-to-build like the PLs are.\n>\n> \t\t\t\n\n\nThat seems reasonable. Note that the buildfarm client would then need an\nextra build step.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 4 Sep 2022 09:46:31 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-09-02 Fr 13:56, Tom Lane wrote:\n>> ... However, those aren't arguments against\n>> making it optional-to-build like the PLs are.\n\n> That seems reasonable. Note that the buildfarm client would then need an\n> extra build step.\n\nNot sure why there'd be an extra build step; I'd envision it more\nlike \"configure ... --with-ecpg\" and the main build step either\ndoes it or not. You would need to make the ecpg-check step\nconditional, though, so it's moot: we'd have to fix the buildfarm\nfirst in any case, unless it's default-enabled which would seem\na bit odd.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Sep 2022 09:56:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "\nOn 2022-09-04 Su 09:56, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 2022-09-02 Fr 13:56, Tom Lane wrote:\n>>> ... However, those aren't arguments against\n>>> making it optional-to-build like the PLs are.\n>> That seems reasonable. Note that the buildfarm client would then need an\n>> extra build step.\n> Not sure why there'd be an extra build step; I'd envision it more\n> like \"configure ... --with-ecpg\" and the main build step either\n> does it or not. \n\n\nAh, ok, makes sense.\n\n\n> You would need to make the ecpg-check step\n> conditional, though, so it's moot: we'd have to fix the buildfarm\n> first in any case, unless it's default-enabled which would seem\n> a bit odd.\n>\n> \t\t\t\n\n\n*nod*\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 4 Sep 2022 10:11:48 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-09-04 Su 09:56, Tom Lane wrote:\n>> You would need to make the ecpg-check step\n>> conditional, though, so it's moot: we'd have to fix the buildfarm\n>> first in any case, unless it's default-enabled which would seem\n>> a bit odd.\n\n> *nod*\n\nI guess we could proceed like this:\n\n1. Invent the --with option. Temporarily make \"make check\" in ecpg\nprint a message but not fail if the option wasn't selected.\n\n2. Update buildfarm client to recognize the option and skip ecpg-check\nif not selected.\n\n3. Sometime down the road, after everyone's updated their buildfarm\nanimals, flip ecpg \"make check\" to throw an error reporting that\necpg wasn't built.\n\n\nThere'd need to be a separate discussion around how much to\nencourage buildfarm owners to add --with-ecpg to their\nconfigurations. One thing that would make that easier is\nadding --with-ecpg as a no-op option to the back branches,\nso that if you do want it on it doesn't have to be done\nwith a branch-specific test. (I guess packagers might\nappreciate that too.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Sep 2022 10:55:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On Sun, Sep 04, 2022 at 10:55:43AM -0400, Tom Lane wrote:\n> There'd need to be a separate discussion around how much to\n> encourage buildfarm owners to add --with-ecpg to their\n> configurations. One thing that would make that easier is\n> adding --with-ecpg as a no-op option to the back branches,\n> so that if you do want it on it doesn't have to be done\n> with a branch-specific test.\n\nThat would not make it easier. \"configure\" doesn't fail when given unknown\noptions, so there's already no need for a branch-specific test. For example,\ntopminnow has no problem passing --with-llvm on branches lacking that option:\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=topminnow&dt=2022-08-27%2005%3A57%3A45&stg=configure\n\n\n",
"msg_date": "Sun, 4 Sep 2022 08:06:14 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On 04.09.22 16:55, Tom Lane wrote:\n> I guess we could proceed like this:\n> \n> 1. Invent the --with option. Temporarily make \"make check\" in ecpg\n> print a message but not fail if the option wasn't selected.\n> \n> 2. Update buildfarm client to recognize the option and skip ecpg-check\n> if not selected.\n> \n> 3. Sometime down the road, after everyone's updated their buildfarm\n> animals, flip ecpg \"make check\" to throw an error reporting that\n> ecpg wasn't built.\n\nWhy is this being proposed?\n\n\n",
"msg_date": "Mon, 5 Sep 2022 22:52:03 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-05 22:52:03 +0200, Peter Eisentraut wrote:\n> On 04.09.22 16:55, Tom Lane wrote:\n> > I guess we could proceed like this:\n> > \n> > 1. Invent the --with option. Temporarily make \"make check\" in ecpg\n> > print a message but not fail if the option wasn't selected.\n> > \n> > 2. Update buildfarm client to recognize the option and skip ecpg-check\n> > if not selected.\n> > \n> > 3. Sometime down the road, after everyone's updated their buildfarm\n> > animals, flip ecpg \"make check\" to throw an error reporting that\n> > ecpg wasn't built.\n> \n> Why is this being proposed?\n\nOn slower machines / certain platforms it's the bottleneck during compilation\n(as e.g. evidenced in this thread). There's no proper way to run check-world\nexempting ecpg. Most changes don't involve ecpg in any way, so having every\ndeveloper build preproc.o etc isn't necessary.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Sep 2022 14:30:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Why is this being proposed?\n\nAndres is annoyed by the long build time of ecpg, which he has to\nwait for whether he wants to test it or not. I could imagine that\nI might disable ecpg testing on my slowest buildfarm animals, too.\n\nI suppose maybe we could compromise on inventing --with-ecpg but\nhaving it default to \"on\", so that you have to take positive\naction if you don't want it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Sep 2022 17:34:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On 05.09.22 23:34, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> Why is this being proposed?\n> \n> Andres is annoyed by the long build time of ecpg, which he has to\n> wait for whether he wants to test it or not. I could imagine that\n> I might disable ecpg testing on my slowest buildfarm animals, too.\n> \n> I suppose maybe we could compromise on inventing --with-ecpg but\n> having it default to \"on\", so that you have to take positive\n> action if you don't want it.\n\nWe already have \"make all\" vs. \"make world\" to build just the important \nstuff versus everything. And we have \"make world-bin\" to build, \napproximately, everything except the slow stuff. Let's try to work \nwithin the existing mechanisms. For example, removing ecpg from \"make \nall\" might be sensible.\n\n(Obviously, \"all\" is then increasingly becoming a lie. Maybe a renaming \nlike \"all\" -> \"core\" and \"world\" -> \"all\" could be in order.)\n\nThe approach with the make targets is better than a configure option, \nbecause it allows you to build a narrow set of things during development \nand then build everything for final confirmation, without having to \nre-run configure. Also, it's less confusing for packagers.\n\n\n",
"msg_date": "Wed, 7 Sep 2022 08:45:27 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On Wed, Sep 7, 2022 at 1:45 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 05.09.22 23:34, Tom Lane wrote:\n> > Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> >> Why is this being proposed?\n> >\n> > Andres is annoyed by the long build time of ecpg, which he has to\n> > wait for whether he wants to test it or not. I could imagine that\n> > I might disable ecpg testing on my slowest buildfarm animals, too.\n> >\n> > I suppose maybe we could compromise on inventing --with-ecpg but\n> > having it default to \"on\", so that you have to take positive\n> > action if you don't want it.\n>\n> We already have \"make all\" vs. \"make world\" to build just the important\n> stuff versus everything. And we have \"make world-bin\" to build,\n> approximately, everything except the slow stuff. Let's try to work\n> within the existing mechanisms. For example, removing ecpg from \"make\n> all\" might be sensible.\n>\n> (Obviously, \"all\" is then increasingly becoming a lie. Maybe a renaming\n> like \"all\" -> \"core\" and \"world\" -> \"all\" could be in order.)\n>\n> The approach with the make targets is better than a configure option,\n> because it allows you to build a narrow set of things during development\n> and then build everything for final confirmation, without having to\n> re-run configure. Also, it's less confusing for packagers.\n\nAnother point is that the --with-FOO options are intended for building\nand linking with external library FOO.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Sep 2022 10:01:47 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On Tue, Sep 6, 2022 at 9:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > Why is this being proposed?\n>\n> Andres is annoyed by the long build time of ecpg, which he has to\n> wait for whether he wants to test it or not. I could imagine that\n> I might disable ecpg testing on my slowest buildfarm animals, too.\n\nThis message triggered me to try to teach ccache how to cache\npreproc.y -> preproc.{c,h}, and I got that basically working[1], but\nupstream doesn't want it (yet). I'll try again if the proposed\nrefactoring to allow more kinds of compiler-like-things goes\nsomewhere. I think that started with people's struggles with GCC vs\nMSVC. Given the simplicity of this case, though, I suppose we could\nhave a little not-very-general shell/python/whatever wrapper script --\njust compute a checksum of the input and keep the output files around.\n\n[1] https://github.com/ccache/ccache/pull/1156\n\n\n",
"msg_date": "Wed, 14 Sep 2022 10:23:46 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On Wed, Sep 14, 2022 at 10:23 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Given the simplicity of this case, though, I suppose we could\n> have a little not-very-general shell/python/whatever wrapper script --\n> just compute a checksum of the input and keep the output files around.\n\nSomething as dumb as this perhaps...",
"msg_date": "Wed, 14 Sep 2022 15:08:06 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On Wed, Sep 14, 2022 at 03:08:06PM +1200, Thomas Munro wrote:\n> On Wed, Sep 14, 2022 at 10:23 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Given the simplicity of this case, though, I suppose we could\n> > have a little not-very-general shell/python/whatever wrapper script --\n> > just compute a checksum of the input and keep the output files around.\n> \n> Something as dumb as this perhaps...\n\n> if [ -z \"$c_file\" ] ; then\n>\tc_file=\"(echo \"$y_file\" | sed 's/\\.y/.tab.c/')\"\n> fi\n\nThis looks wrong. I guess you mean to use $() and missing \"$\" ?\n\nIt could be:\n[ -z \"$c_file\" ] &&\n\tc_file=${y_file%.y}.tab.c\n\n> if [ -z \"$SIMPLE_BISON_CACHE_PATH\" ] ; then\n> \tSIMPLE_BISON_CACHE_PATH=\"/tmp/simple-bison-cache\"\n> fi\n\nShould this default to CCACHE_DIR? Then it would work under cirrusci...\n\n> h_file=\"$(echo $c_file | sed 's/\\.c/.h/')\"\n\nCould be ${c_file%.c}.h\n\n> if [ ! -e \"$cached_c_file\" -o ! -e \"$cached_h_file\" ] ; then\n\nYou could write the easy case first (I forget whether it's considered to\nbe more portable to write && outside of []).\n\n> if [ -e \"$cached_c_file\" ] && [ -e \"$cached_h_file\" ] ; then\n\nI can't see what part of this would fail to handle filenames with spaces\n(?)\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 13 Sep 2022 23:34:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On Wed, Sep 14, 2022 at 5:24 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Tue, Sep 6, 2022 at 9:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > > Why is this being proposed?\n> >\n> > Andres is annoyed by the long build time of ecpg, which he has to\n> > wait for whether he wants to test it or not. I could imagine that\n> > I might disable ecpg testing on my slowest buildfarm animals, too.\n>\n> This message triggered me to try to teach ccache how to cache\n> preproc.y -> preproc.{c,h}, and I got that basically working[1], but\n> upstream doesn't want it (yet). I'll try again if the proposed\n> refactoring to allow more kinds of compiler-like-things goes\n> somewhere. I think that started with people's struggles with GCC vs\n> MSVC. Given the simplicity of this case, though, I suppose we could\n> have a little not-very-general shell/python/whatever wrapper script --\n> just compute a checksum of the input and keep the output files around.\n\nIf we're going to go to this length, it seems more straightforward to\njust check the .c/.h files into version control, like every other\nproject that I have such knowledge of.\n\nTo be fair, our grammar changes much more often. One other possible\ndeal-breaker of that is that it makes it more painful for forks to\nmaintain additional syntax.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Sep 2022 11:51:21 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> If we're going to go to this length, it seems more straightforward to\n> just check the .c/.h files into version control, like every other\n> project that I have such knowledge of.\n\nStrong -1 on that, because then we'd have to mandate that every\ncommitter use exactly the same version of bison. It's been\npainful enough to require that for autoconf (and I'm pleased that\nit looks like meson will let us drop that nonsense).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Sep 2022 01:02:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "On Wed, Sep 14, 2022 at 4:34 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Wed, Sep 14, 2022 at 03:08:06PM +1200, Thomas Munro wrote:\n> > On Wed, Sep 14, 2022 at 10:23 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > Given the simplicity of this case, though, I suppose we could\n> > > have a little not-very-general shell/python/whatever wrapper script --\n> > > just compute a checksum of the input and keep the output files around.\n> >\n> > Something as dumb as this perhaps...\n>\n> > if [ -z \"$c_file\" ] ; then\n> > c_file=\"(echo \"$y_file\" | sed 's/\\.y/.tab.c/')\"\n> > fi\n>\n> This looks wrong. I guess you mean to use $() and missing \"$\" ?\n\nYeah, but your %.y style is much nicer. Fixed that way. (I was\ntrying to avoid what I thought were non-standard extensions but I see\nthat's in POSIX sh. Cool.)\n\n> It could be:\n> [ -z \"$c_file\" ] &&\n> c_file=${y_file%.y}.tab.c\n\nMeh.\n\n> > if [ -z \"$SIMPLE_BISON_CACHE_PATH\" ] ; then\n> > SIMPLE_BISON_CACHE_PATH=\"/tmp/simple-bison-cache\"\n> > fi\n>\n> Should this default to CCACHE_DIR? Then it would work under cirrusci...\n\nNot sure it's OK to put random junk in ccache's directory, and in any\ncase we'd certainly want to teach it to trim itself before doing that\non CI... On the other hand, adding another registered cache dir would\nlikely add several seconds to CI, more than what can be saved with\nthis trick! The amount of time we can save is only a few seconds, or\nless on a fast machine.\n\nSo... I guess the target audience of this script is extremely\nimpatient people working locally, since with Meson our clean builds\nare cleaner, and will lead to re-execution this step. 
I just tried\nAndres's current meson branch on my fast-ish 16 core desktop, and\nthen, after priming caches, \"ninja clean && time ninja\" tells me:\n\nreal 0m3.133s\n\nAfter doing 'meson configure\n-DBISON=\"/path/to/simple-bison-cache.sh\"', I get it down to:\n\nreal 0m2.440s\n\nHowever, in doing that I realised that you need an executable name,\nnot a hairy shell command fragment, so you can't use\n\"simple-bison-cache.sh bison\", so I had to modify the script to be a\nwrapper that knows how to find bison. Bleugh.\n\n> > h_file=\"$(echo $c_file | sed 's/\\.c/.h/')\"\n>\n> Could be ${c_file%.c}.h\n\nMuch nicer.\n\n> > if [ ! -e \"$cached_c_file\" -o ! -e \"$cached_h_file\" ] ; then\n>\n> You could write the easy case first (I forget whether it's considered to\n> be more portable to write && outside of []).\n\nAgreed, it's nicer that way around.\n\n> I can't see what part of this would fail to handle filenames with spaces\n> (?)\n\nYeah, seems OK. I also fixed the uncertainty about -d, and made a\nsmall tweak after testing on Debian, MacOS and FreeBSD. BTW this\nisn't a proposal for src/tools yet, I'm just sharing for curiosity...\nI suppose a version good enough to live in src/tools would need to\ntrim the cache, and I don't enjoy writing code that deletes files in\nshell script, so maybe this'd need to be written in Python...",
"msg_date": "Thu, 15 Sep 2022 16:53:09 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
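The message above closes by suggesting the cache wrapper would need to trim itself and might be better written in Python. A minimal sketch of that idea follows: key the cache on a SHA-256 of the grammar file, reuse the generated .c/.h on a hit, and trim old entries by mtime. The function names, the key scheme, and the `keep` limit are my assumptions, not anything from the attached shell script; `run_bison` is injected so the sketch doesn't depend on a bison binary being present.

```python
import hashlib
import os
import shutil

def cached_generate(y_file, c_file, h_file, cache_dir, run_bison):
    """Return 'hit' if cached output was reused, 'miss' if bison ran.

    run_bison(y_file, c_file, h_file) would wrap e.g.
    subprocess.run(['bison', '-d', '-o', c_file, y_file], check=True).
    """
    os.makedirs(cache_dir, exist_ok=True)
    with open(y_file, 'rb') as f:
        key = hashlib.sha256(f.read()).hexdigest()
    cached_c = os.path.join(cache_dir, key + '.c')
    cached_h = os.path.join(cache_dir, key + '.h')
    if os.path.exists(cached_c) and os.path.exists(cached_h):
        # Cache hit: skip running bison, just restore the generated files.
        shutil.copyfile(cached_c, c_file)
        shutil.copyfile(cached_h, h_file)
        return 'hit'
    run_bison(y_file, c_file, h_file)
    shutil.copyfile(c_file, cached_c)
    shutil.copyfile(h_file, cached_h)
    return 'miss'

def trim_cache(cache_dir, keep=32):
    """Drop all but the `keep` newest entries (the self-trimming Thomas
    didn't want to write in shell)."""
    paths = [os.path.join(cache_dir, n) for n in os.listdir(cache_dir)]
    paths.sort(key=os.path.getmtime)
    for p in paths[:-keep]:
        os.remove(p)
```

A real wrapper would also have to hash the bison version and flags into the key (so switching bison releases invalidates the cache), and parse `-d`/`-o` from the command line the way the shell version does.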
{
"msg_contents": "On Thu, Sep 15, 2022 at 04:53:09PM +1200, Thomas Munro wrote:\n> Not sure it's OK to put random junk in ccache's directory, and in any\n> case we'd certainly want to teach it to trim itself before doing that\n> on CI...\n\n> I suppose a version good enough to live in src/tools would need to\n> trim the cache, and I don't enjoy writing code that deletes files in\n> shell script, so maybe this'd need to be written in Python...\n\nMaybe it'd be maybe better in python (for portability?).\n\nBut also, maybe the cache should be a single file with hash content in\nit, plus the two cached files. Then, rather than storing and pruning N\nfiles with dynamic names, you'd be overwriting the same two files, and\navoid the need to prune.\n\nAs I understand, the utility of this cache is rebuilding when the\ngrammar hasn't changed; but it doesn't seem important to be able to use\ncached output when switching branches (for example).\n\n> bison=\"bison\"\n\nMaybe this should do:\n: ${BISON:=bison} # assign default value\n\n(and then change to refer to $BISON everywhere)\n\n>\t\t\"--version\")\n>\t\t\"-d\")\n>\t\t\"-o\")\n>\t\t\"-\"*)\n\nThese don't need to be quoted\n\n>\techo \"could not find .y file in command line arguments: $@\"\n\nCould add >&2 to write to stderr\n\n> if [ -z \"$SIMPLE_BISON_CACHE_PATH\" ] ; then\n> \tSIMPLE_BISON_CACHE_PATH=\"/tmp/simple-bison-cache\"\n> fi\n\nMaybe\n: ${SIMPLE_BISON_CACHE_PATH:=/tmp/simple-bison-cache} # assign default value\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 15 Sep 2022 04:21:54 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-14 01:02:39 -0400, Tom Lane wrote:\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > If we're going to go to this length, it seems more straightforward to\n> > just check the .c/.h files into version control, like every other\n> > project that I have such knowledge of.\n> \n> Strong -1 on that, because then we'd have to mandate that every\n> committer use exactly the same version of bison.\n\nYea, I don't think we want to go there either.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 15 Sep 2022 12:35:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: failing to build preproc.c on solaris with sun studio"
}
] |
[
{
"msg_contents": "I have to fix log files because its content is not properly formatted, I´m\nusing version 14.4 but that happened when I was using version 11 too. It\nhappens only when that statement is huge, or because it is a long sequence\nof updates in a WITH or DO statements, or because i´m updating a bytea\nfield. This kind of statement occurs on log dozens of times a day, but just\n2 or 3 are wrongly formatted.\n\none example is a WITH statement with dozens of inserts and updates on it\n2022-08-06 10:05:10.822\n-03,\"user_0566\",\"db\",1956432,\"10.158.0.17:56098\",62ee66ee.1dda50,1,\"SELECT\",2022-08-06\n10:04:46 -03,266/383801,229907216,LOG,00000,\"duration: 1735.107 ms execute\nPRSTMTST155939968154/PORTALST155939968154: /*Cobranca.ImportaCobranca*/\nWITH Receber1 AS (UPDATE fin_Re>\nNNLiquidado1 AS (UPDATE fin_ReceberNossoNumero SET Status = 7 WHERE\nNossoNumero = any($${90062164}$$)),\n--statement continues, with some more dozens of update/inserts\nNNDesconsiderado48 AS (UPDATE fin_recebernossonumero SET status = 9 WHERE\nreceber_id = 104201 AND status = 1 AND nossonumero <> 90086321),\nNNExcluir48 AS (UPDATE fin_recebernossonumero SET status = 4 WHERE\nreceber_id = 104201 AND status = any($IN${2,3}$IN$) AND nossonumero <>\n90086321 RETURNING recebernossonumero_id),\nBaixa48 AS (INSERT INTO fin_ReceberBaixa(Historico, Desconto, Valor,\nLancamento) VALUES ($vs$Paga2022-08-06 10:07:07.505\n-03,\"user_0566\",\"db\",1956432,\"10.158.0.17:56098\",62ee66ee.1dda50,2,\"idle\",2022-08-06\n10:04:46 -03,266/0,0,FATAL,57P01,\"terminating connection due to\nadministrator command\",,,,,,,,,\"2022062701adriana.aguiar\",\"client\nbackend\",,0\n2022-08-06 10:07:07.507\n-03,\"user_0328\",\"db\",1957035,\"10.158.0.17:57194\",62ee6730.1ddcab,1,\"idle\",2022-08-06\n10:05:52 -03,410/0,0,FATAL,57P01,\"terminating connection due to\nadministrator command\",,,,,,,,,\"2022062701tmk06.madureira\",\"client\nbackend\",,0\n\nif you search for \"$vs$Paga2022-08-06 10:07:07.505\" 
you´ll see that\n\"$vs$Paga\" is still part of first statement but \"2022-08-06 10:07:07.505\"\nis the starting of next statement, but there are some missing chars of\nprevious statement.\n\nanother example is just one update with a large bytea field on it\n2022-08-06 15:57:46.955\n-03,\"user_0591\",\"db\",2103042,\"10.158.0.17:43662\",62eeb9aa.201702,1,\"INSERT\",2022-08-06\n15:57:46 -03,49/639939,230013197,LOG,00000,\"duration: 11.012 ms execute\nPRSTMTST1612483842/PORTALST1612483842: WITH upsert AS (\nUPDATE sys_var SET varBlob = $1 WHERE name = $2 RETURNING *) INSERT\nINTO sys_var (Name, varBlob) SELECT $3 , $4 WHERE NOT EXISTS (SELECT *\nFROM upsert)\",\"parameters: $1 =\n'\\x3c3f786d6 --field content continues\n$2 = '/users/39818/XMLConfig', $3 = '/users/39818/XMLConfig', $4 =\n'\\x3c3f786d6 --field content continues\ne4445583e2d313c2f47524f555045445f494e4445583e32022-08-06 15:58:42.436\n-03,\"user_0591\",\"db\",2103042,\"10.158.0.17:43662\",62eeb9aa.201702,2,\"idle\",2022-08-06\n15:57:46 -03,49/0,0,FATAL,57P01,\"terminating connection due to\nadministrator command\",,,,,,,,,\"\",\"client backend\",,-4199849316459484872\n2022-08-06 15:58:42.436\n-03,\"user_0143\",\"db\",2103112,\"10.158.0.17:43794\",62eeb9bf.201748,1,\"idle\",2022-08-06\n15:58:07 -03,44/0,0,FATAL,57P01,\"terminating connection due to\nadministrator command\",,,,,,,,,\"2022062701joyceb.l@hotmail.com\",\"client\nbackend\",,0\n\nHere \"4445583e32022-08-06 15:58:42.436\", where bytea content \"4445583e\" was\nbeing displayed and the next statement started with \"32022-08-06\n15:58:42.436\".\n\nObviously because that the previous line is not finished correctly and I\ncannot import log files properly, so I have to edit those log files to\nproperly import them.\n\nthanks\nMarcos\n\nI have to fix log files because its content is not properly formatted, I´m using version 14.4 but that happened when I was using version 11 too. 
It happens only when that statement is huge, or because it is a long sequence of updates in a WITH or DO statements, or because i´m updating a bytea field. This kind of statement occurs on log dozens of times a day, but just 2 or 3 are wrongly formatted.one example is a WITH statement with dozens of inserts and updates on it2022-08-06 10:05:10.822 -03,\"user_0566\",\"db\",1956432,\"10.158.0.17:56098\",62ee66ee.1dda50,1,\"SELECT\",2022-08-06 10:04:46 -03,266/383801,229907216,LOG,00000,\"duration: 1735.107 ms execute PRSTMTST155939968154/PORTALST155939968154: /*Cobranca.ImportaCobranca*/ WITH Receber1 AS (UPDATE fin_Re>NNLiquidado1 AS (UPDATE fin_ReceberNossoNumero SET Status = 7 WHERE NossoNumero = any($${90062164}$$)),--statement continues, with some more dozens of update/insertsNNDesconsiderado48 AS (UPDATE fin_recebernossonumero SET status = 9 WHERE receber_id = 104201 AND status = 1 AND nossonumero <> 90086321),NNExcluir48 AS (UPDATE fin_recebernossonumero SET status = 4 WHERE receber_id = 104201 AND status = any($IN${2,3}$IN$) AND nossonumero <> 90086321 RETURNING recebernossonumero_id),Baixa48 AS (INSERT INTO fin_ReceberBaixa(Historico, Desconto, Valor, Lancamento) VALUES ($vs$Paga2022-08-06 10:07:07.505 -03,\"user_0566\",\"db\",1956432,\"10.158.0.17:56098\",62ee66ee.1dda50,2,\"idle\",2022-08-06 10:04:46 -03,266/0,0,FATAL,57P01,\"terminating connection due to administrator command\",,,,,,,,,\"2022062701adriana.aguiar\",\"client backend\",,02022-08-06 10:07:07.507 -03,\"user_0328\",\"db\",1957035,\"10.158.0.17:57194\",62ee6730.1ddcab,1,\"idle\",2022-08-06 10:05:52 -03,410/0,0,FATAL,57P01,\"terminating connection due to administrator command\",,,,,,,,,\"2022062701tmk06.madureira\",\"client backend\",,0if you search for \"$vs$Paga2022-08-06 10:07:07.505\" you´ll see that \"$vs$Paga\" is still part of first statement but \"2022-08-06 10:07:07.505\" is the starting of next statement, but there are some missing chars of previous statement.another example is just one 
update with a large bytea field on it2022-08-06 15:57:46.955 -03,\"user_0591\",\"db\",2103042,\"10.158.0.17:43662\",62eeb9aa.201702,1,\"INSERT\",2022-08-06 15:57:46 -03,49/639939,230013197,LOG,00000,\"duration: 11.012 ms execute PRSTMTST1612483842/PORTALST1612483842: WITH upsert AS (UPDATE sys_var SET varBlob = $1 WHERE name = $2 RETURNING *) INSERT INTO sys_var (Name, varBlob) SELECT $3 , $4 WHERE NOT EXISTS (SELECT * FROM upsert)\",\"parameters: $1 ='\\x3c3f786d6 --field content continues$2 = '/users/39818/XMLConfig', $3 = '/users/39818/XMLConfig', $4 = '\\x3c3f786d6 --field content continuese4445583e2d313c2f47524f555045445f494e4445583e32022-08-06 15:58:42.436 -03,\"user_0591\",\"db\",2103042,\"10.158.0.17:43662\",62eeb9aa.201702,2,\"idle\",2022-08-06 15:57:46 -03,49/0,0,FATAL,57P01,\"terminating connection due to administrator command\",,,,,,,,,\"\",\"client backend\",,-41998493164594848722022-08-06 15:58:42.436 -03,\"user_0143\",\"db\",2103112,\"10.158.0.17:43794\",62eeb9bf.201748,1,\"idle\",2022-08-06 15:58:07 -03,44/0,0,FATAL,57P01,\"terminating connection due to administrator command\",,,,,,,,,\"2022062701joyceb.l@hotmail.com\",\"client backend\",,0Here \"4445583e32022-08-06 15:58:42.436\", where bytea content \"4445583e\" was being displayed and the next statement started with \"32022-08-06 15:58:42.436\".Obviously because that the previous line is not finished correctly and I cannot import log files properly, so I have to edit those log files to properly import them.thanksMarcos",
"msg_date": "Sun, 7 Aug 2022 10:56:10 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "bug on log generation ?"
},
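Since the complaint above is that truncated csvlog records break the import, the symptom is detectable mechanically: a damaged record no longer has the expected number of columns. The helper below is a hypothetical sketch using Python's csv module; the 26-column count matches the v14 csvlog format but should be treated as an assumption and checked against your server version.

```python
import csv

# Expected column count for the csvlog format; 26 is the PostgreSQL 14
# layout (log_time ... query_id). Adjust for other server versions.
CSVLOG_COLUMNS = 26

def find_malformed_records(lines, expected=CSVLOG_COLUMNS):
    """Return [(record_number, field_count), ...] for suspicious records.

    csv.reader handles quoted fields containing commas and newlines, so
    well-formed multi-line statements parse as one record; only genuinely
    damaged records (like the split statements shown above) come out with
    the wrong width and can then be fixed or dropped before import.
    """
    bad = []
    for i, row in enumerate(csv.reader(lines), start=1):
        if len(row) != expected:
            bad.append((i, len(row)))
    return bad
```

Running this over a suspect log file (`find_malformed_records(open('postgresql.csv'))`) narrows the manual editing down to the flagged records instead of scanning the whole file by hand.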
{
"msg_contents": "Marcos Pegoraro <marcos@f10.com.br> writes:\n> I have to fix log files because its content is not properly formatted,\n\nWhat mechanism are you using to store the log? If syslog is involved,\nit's reputed to drop data under load.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Aug 2022 10:12:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bug on log generation ?"
},
{
"msg_contents": "it´s csvlog only\n\nAtenciosamente,\n\n\n\n\nEm dom., 7 de ago. de 2022 às 11:12, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Marcos Pegoraro <marcos@f10.com.br> writes:\n> > I have to fix log files because its content is not properly formatted,\n>\n> What mechanism are you using to store the log? If syslog is involved,\n> it's reputed to drop data under load.\n>\n> regards, tom lane\n>\n\nit´s csvlog onlyAtenciosamente, Em dom., 7 de ago. de 2022 às 11:12, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Marcos Pegoraro <marcos@f10.com.br> writes:\n> I have to fix log files because its content is not properly formatted,\n\nWhat mechanism are you using to store the log? If syslog is involved,\nit's reputed to drop data under load.\n\n regards, tom lane",
"msg_date": "Sun, 7 Aug 2022 11:56:44 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: bug on log generation ?"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-07 11:56:44 -0300, Marcos Pegoraro wrote:\n> it�s csvlog only\n\nHow are you running postgres? If the logger process runs into trouble it might\nwrite to stderr.\n\nIs there a chance your huge statements would make you run out of space?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Aug 2022 09:47:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: bug on log generation ?"
},
{
"msg_contents": ">\n>\n> How are you running postgres? If the logger process runs into trouble it\n> might\n> write to stderr.\n>\n> Is there a chance your huge statements would make you run out of space?\n>\n> Well, I don't think it is a out of space problem, because it\ndoesn´t stop logging, it just splits that message. As you can see, the next\nmessage is logged properly. And that statement is not so huge, these\nstatements have not more than 10 or 20kb. And as I said these statements\noccur dozens of times a day, but only once or twice is not correctly logged\nAn additional info, that splitted message has an out of order log time. At\nthat time the log file was having 2 or 3 logs per second, and that message was\n1 or 2 minutes later. It seems like it occurs now but it's stored a minute\nor two later.\n\nthanks\nMarcos\n\n\nHow are you running postgres? If the logger process runs into trouble it might\nwrite to stderr.\n\nIs there a chance your huge statements would make you run out of space?\nWell, I don't think it is a out of space problem, because it doesn´t stop logging, it just splits that message. As you can see, the next message is logged properly. And that statement is not so huge, these statements have not more than 10 or 20kb. And as I said these statements occur dozens of times a day, but only once or twice is not correctly logged An additional info, that splitted message has an out of order log time. At that time the log file was having 2 or 3 logs per second, and that message was 1 or 2 minutes later. It seems like it occurs now but it's stored a minute or two later.thanksMarcos",
"msg_date": "Mon, 8 Aug 2022 08:34:08 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: bug on log generation ?"
},
{
"msg_contents": "\nOn 2022-08-08 Mo 07:34, Marcos Pegoraro wrote:\n>\n>\n> How are you running postgres? If the logger process runs into\n> trouble it might\n> write to stderr.\n>\n> Is there a chance your huge statements would make you run out of\n> space?\n>\n> Well, I don't think it is a out of space problem, because it\n> doesn´t stop logging, it just splits that message. As you can see, the\n> next message is logged properly. And that statement is not so huge,\n> these statements have not more than 10 or 20kb. And as I said these\n> statements occur dozens of times a day, but only once or twice is\n> not correctly logged \n> An additional info, that splitted message has an out of order log\n> time. At that time the log file was having 2 or 3 logs per second, and\n> that message was 1 or 2 minutes later. It seems like it occurs now but\n> it's stored a minute or two later.\n>\n>\n\nIt looks like a failure of the log chunking protocol, with long messages\nbeing improperly interleaved. I don't think we've had reports of such a\nfailure since commit c17e863bc7 back in 2012, but maybe my memory is\nfailing.\n\nWhat platform is this on? Is it possible that on some platform the chunk\nsize we're using is not doing an atomic write?\n\n\nsyslogger.h says:\n\n\n #ifdef PIPE_BUF\n /* Are there any systems with PIPE_BUF > 64K? Unlikely, but ... */\n #if PIPE_BUF > 65536\n #define PIPE_CHUNK_SIZE 65536\n #else\n #define PIPE_CHUNK_SIZE ((int) PIPE_BUF)\n #endif\n #else /* not defined */\n /* POSIX says the value of PIPE_BUF must be at least 512, so use that */\n #define PIPE_CHUNK_SIZE 512\n #endif\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 8 Aug 2022 09:59:10 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: bug on log generation ?"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> What platform is this on? Is it possible that on some platform the chunk\n> size we're using is not doing an atomic write?\n\nAnother idea is that some of the write() calls are failing --- elog.c\ndoesn't check for that. Eyeing the POSIX spec for write(), I wonder\nif somehow the pipe has gotten set into O_NONBLOCK mode and we're\nnot retrying EAGAIN failures.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Aug 2022 10:32:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bug on log generation ?"
},
{
"msg_contents": ">\n> What platform is this on? Is it possible that on some platform the chunk\n> size we're using is not doing an atomic write?\n>\n\nUntil last year we were Ubuntu 16.04 and Postgres 11 with the latest minor\nupdate.\nThis January we changed to Ubuntu 20.04 and Postgres 14, now updated to\n14.4.\n\nBut the problem occured on both old and new SO and Postgres versions.\nRight now I opened the current log file and there are 20 or 30 of these\nstatements and all of them are fine, maybe tomorrow the problem comes back,\nmaybe this afternoon.\n\nthanks\nMarcos\n\nWhat platform is this on? Is it possible that on some platform the chunk\nsize we're using is not doing an atomic write? Until last year we were Ubuntu 16.04 and Postgres 11 with the latest minor update.This January we changed to Ubuntu 20.04 and Postgres 14, now updated to 14.4.But the problem occured on both old and new SO and Postgres versions.Right now I opened the current log file and there are 20 or 30 of these statements and all of them are fine, maybe tomorrow the problem comes back, maybe this afternoon.thanksMarcos",
"msg_date": "Mon, 8 Aug 2022 12:07:59 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: bug on log generation ?"
},
{
"msg_contents": "\nOn 2022-08-08 Mo 11:07, Marcos Pegoraro wrote:\n>\n> What platform is this on? Is it possible that on some platform the\n> chunk\n> size we're using is not doing an atomic write?\n>\n> \n> Until last year we were Ubuntu 16.04 and Postgres 11 with the latest\n> minor update.\n> This January we changed to Ubuntu 20.04 and Postgres 14, now updated\n> to 14.4.\n>\n> But the problem occured on both old and new SO and Postgres versions.\n> Right now I opened the current log file and there are 20 or 30 of\n> these statements and all of them are fine, maybe tomorrow the problem\n> comes back, maybe this afternoon.\n>\n>\n\nOK, we really need a repeatable test if possible. Perhaps a pgbench run\nwith lots of concurrent runs of a some very long query would do the trick.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 8 Aug 2022 11:36:17 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: bug on log generation ?"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-08 10:32:22 -0400, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > What platform is this on? Is it possible that on some platform the chunk\n> > size we're using is not doing an atomic write?\n> \n> Another idea is that some of the write() calls are failing --- elog.c\n> doesn't check for that.\n\nI was suspicious of those as well. It might be a good idea to at least write\nsuch failures to stderr, otherwise it's just about impossible to debug. Not\nthat stderr will always point anywhere useful...\n\nI can imagine that a system under heavy memory pressure might fail writing, if\nthere's a lot of writes in a row or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 Aug 2022 09:02:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: bug on log generation ?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-08 10:32:22 -0400, Tom Lane wrote:\n>> Another idea is that some of the write() calls are failing --- elog.c\n>> doesn't check for that.\n\n> I was suspicious of those as well. It might be a good idea to at least write\n> such failures to stderr, otherwise it's just about impossible to debug. Not\n> that stderr will always point anywhere useful...\n\nUh ... what we are talking about is a failure to write to stderr.\nIt's not likely that adding more output will help.\n\nHaving said that, looping on EAGAIN seems like a reasonably harmless\nchange. Whether it will help here is really hard to say, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Aug 2022 12:19:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bug on log generation ?"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-08 12:19:22 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-08-08 10:32:22 -0400, Tom Lane wrote:\n> >> Another idea is that some of the write() calls are failing --- elog.c\n> >> doesn't check for that.\n> \n> > I was suspicious of those as well. It might be a good idea to at least write\n> > such failures to stderr, otherwise it's just about impossible to debug. Not\n> > that stderr will always point anywhere useful...\n> \n> Uh ... what we are talking about is a failure to write to stderr.\n> It's not likely that adding more output will help.\n\nI forgot that we don't preserve the original stderr in some other fd, likely\nbecause the logger itself still has it open and can use write_stderr().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 Aug 2022 09:38:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: bug on log generation ?"
},
{
"msg_contents": ">\n> OK, we really need a repeatable test if possible. Perhaps a pgbench run\n> with lots of concurrent runs of a some very long query would do the trick.\n>\n\nOK, I can do it but ... strangely that error usually occurs at random\ntimes, sometimes at 08:00, sometimes at 19:00, and it's busier between\n10:00 and 16:00. If I cron some of those queries to run every second is\nenough ? What exactly do you expect to see on log files ?\n\nOK, we really need a repeatable test if possible. Perhaps a pgbench run\nwith lots of concurrent runs of a some very long query would do the trick.\nOK, I can do it but ... strangely that error usually occurs at random times, sometimes at 08:00, sometimes at 19:00, and it's busier between 10:00 and 16:00. If I cron some of those queries to run every second is enough ? What exactly do you expect to see on log files ?",
"msg_date": "Mon, 8 Aug 2022 15:34:27 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: bug on log generation ?"
}
] |
[
{
"msg_contents": "Hi hackers,\nI wrote a test of the old_snapshot extension for coverage.\nI hope that this is written correctly.\n\nbefore:\n 0%\nafter:\n 100%\n---\nregards,\nLee Dong Wook.",
"msg_date": "Mon, 8 Aug 2022 12:54:36 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "old_snapshot: add test for coverage"
},
{
"msg_contents": "Dong Wook Lee <sh95119@gmail.com> writes:\n> I wrote a test of the old_snapshot extension for coverage.\n\nHmm, does this really provide any meaningful coverage? The test\nsure looks like it's not doing much.\n\nI spent some time a week or so ago trying to graft testing of\ncontrib/old_snapshot into src/test/modules/snapshot_too_old.\nI didn't come up with anything I liked, but I still think\nthat that might lead to a more thorough test than a standalone\nexercise.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Aug 2022 01:37:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: old_snapshot: add test for coverage"
},
{
"msg_contents": "> On 8 Aug 2022, at 07:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Dong Wook Lee <sh95119@gmail.com> writes:\n\n>> I wrote a test of the old_snapshot extension for coverage.\n> \n> Hmm, does this really provide any meaningful coverage? The test\n> sure looks like it's not doing much.\n\nLooking at this I agree, this test doesn't provide enough to be of value and\nthe LIMIT 0 might even hide bugs under a postive test result. I think we\nshould mark this entry RwF.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 17 Nov 2022 13:41:49 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: old_snapshot: add test for coverage"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 2:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dong Wook Lee <sh95119@gmail.com> writes:\n> > I wrote a test of the old_snapshot extension for coverage.\n>\n> Hmm, does this really provide any meaningful coverage? The test\n> sure looks like it's not doing much.\n\nPreviously written tests were simply test codes to increase coverage.\nTherefore, we can make a better test.\nI'll think about it by this week. If this work exceeds my ability, I\nwill let you know by reply.\nIt's okay that the issue should be closed unless I write a meaningful test.\n\n---\nRegards,\nDongWook Lee.\n\n\n",
"msg_date": "Wed, 7 Dec 2022 00:00:26 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: old_snapshot: add test for coverage"
}
] |
[
{
"msg_contents": "Dear hackers,\n\nI'm Yedil. I'm working on the project \"Postgres Performance Farm\" during\nGsoc. Pgperffarm is a project like Postgres build farm but focuses on the\nperformance of the database. Now it has 2 types of benchmarks, pgbench and\ntpc-h. The website is online here <http://140.211.168.145/>, and the repo\nis here <https://github.com/PGPerfFarm/pgperffarm_server>.\n\nI would like you to take a look at our website and, if possible, give some\nfeedback on, for example, what other data should be collected or what other\nmetrics could be used to compare performance.\n\nThanks for your time in advance!\n\nBest regards\nYedil\n\nDear hackers,I'm Yedil. I'm working on the project \"Postgres Performance Farm\" during Gsoc. Pgperffarm is a project like Postgres build farm but focuses on the performance of the database. Now it has 2 types of benchmarks, pgbench and tpc-h. The website is online here, and the repo is here. I would like you to take a look at our website and, if possible, give some feedback on, for example, what other data should be collected or what other metrics could be used to compare performance.Thanks for your time in advance!Best regardsYedil",
"msg_date": "Mon, 8 Aug 2022 14:50:17 +0200",
"msg_from": "Yedil Serzhan <edilserjan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Asking for feedback on Pgperffarm"
},
{
"msg_contents": "Hi Yedil,\n\nOn Mon, Aug 08, 2022 at 02:50:17PM +0200, Yedil Serzhan wrote:\n> Dear hackers,\n> \n> I'm Yedil. I'm working on the project \"Postgres Performance Farm\" during\n> Gsoc. Pgperffarm is a project like Postgres build farm but focuses on the\n> performance of the database. Now it has 2 types of benchmarks, pgbench and\n> tpc-h. The website is online here <http://140.211.168.145/>, and the repo\n> is here <https://github.com/PGPerfFarm/pgperffarm_server>.\n> \n> I would like you to take a look at our website and, if possible, give some\n> feedback on, for example, what other data should be collected or what other\n> metrics could be used to compare performance.\n\nNice work!\n\nWe need to be careful with how results based on the TPC-H specification\nare presented. It needs to be changed, but maybe not dramatically.\nSomething like \"Fair use derivation of TPC-H\". It needs to be clear\nthat it's not an official TPC-H result.\n\nI think I've hinted at it in the #perffarm slack channel, that I think\nit would be better if you leveraged one of the already existing TPC-H\nderived kits. While I'm partial to dbt-3, because I'm trying to\nmaintain it and because it sounded like you were starting to do\nsomething similar to that, I think you can save a good amount of effort\nfrom reimplementing another kit from scratch.\n\nRegards,\nMark\n\n--\nMark Wong\nEDB https://enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 8 Aug 2022 10:06:14 -0700",
"msg_from": "Mark Wong <markwkm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asking for feedback on Pgperffarm"
},
{
"msg_contents": "Hi, Mark, really thank you for your feedback.\n\n\nOn Mon, Aug 8, 2022 at 7:06 PM Mark Wong <markwkm@gmail.com> wrote:\n\n> Hi Yedil,\n>\n> On Mon, Aug 08, 2022 at 02:50:17PM +0200, Yedil Serzhan wrote:\n> > Dear hackers,\n> >\n> > I'm Yedil. I'm working on the project \"Postgres Performance Farm\" during\n> > Gsoc. Pgperffarm is a project like Postgres build farm but focuses on the\n> > performance of the database. Now it has 2 types of benchmarks, pgbench\n> and\n> > tpc-h. The website is online here <http://140.211.168.145/>, and the\n> repo\n> > is here <https://github.com/PGPerfFarm/pgperffarm_server>.\n> >\n> > I would like you to take a look at our website and, if possible, give\n> some\n> > feedback on, for example, what other data should be collected or what\n> other\n> > metrics could be used to compare performance.\n>\n> Nice work!\n>\n> We need to be careful with how results based on the TPC-H specification\n> are presented. It needs to be changed, but maybe not dramatically.\n> Something like \"Fair use derivation of TPC-H\". It needs to be clear\n> that it's not an official TPC-H result.\n>\n> I think I've hinted at it in the #perffarm slack channel, that I think\n> it would be better if you leveraged one of the already existing TPC-H\n> derived kits. While I'm partial to dbt-3, because I'm trying to\n> maintain it and because it sounded like you were starting to do\n> something similar to that, I think you can save a good amount of effort\n> from reimplementing another kit from scratch.\n>\n> Regards,\n> Mark\n>\n\n\nIt makes sense to put it as a \"fair use derivation of TPC-H\". 
I also used\nthe term \"composite score\" because of your previous feedback on it.\n\nI'll also check out the dbt-3 tool and if the effort is worth it, and if\nit's necessary, I'll try to switch to it.\n\nThese are very valuable feedback, thank you again.\n\nBest,\nYedil\n\nHi, Mark, really thank you for your feedback.On Mon, Aug 8, 2022 at 7:06 PM Mark Wong <markwkm@gmail.com> wrote:Hi Yedil,\n\nOn Mon, Aug 08, 2022 at 02:50:17PM +0200, Yedil Serzhan wrote:\n> Dear hackers,\n> \n> I'm Yedil. I'm working on the project \"Postgres Performance Farm\" during\n> Gsoc. Pgperffarm is a project like Postgres build farm but focuses on the\n> performance of the database. Now it has 2 types of benchmarks, pgbench and\n> tpc-h. The website is online here <http://140.211.168.145/>, and the repo\n> is here <https://github.com/PGPerfFarm/pgperffarm_server>.\n> \n> I would like you to take a look at our website and, if possible, give some\n> feedback on, for example, what other data should be collected or what other\n> metrics could be used to compare performance.\n\nNice work!\n\nWe need to be careful with how results based on the TPC-H specification\nare presented. It needs to be changed, but maybe not dramatically.\nSomething like \"Fair use derivation of TPC-H\". It needs to be clear\nthat it's not an official TPC-H result.\n\nI think I've hinted at it in the #perffarm slack channel, that I think\nit would be better if you leveraged one of the already existing TPC-H\nderived kits. While I'm partial to dbt-3, because I'm trying to\nmaintain it and because it sounded like you were starting to do\nsomething similar to that, I think you can save a good amount of effort\nfrom reimplementing another kit from scratch.\n\nRegards,\nMark It makes sense to put it as a \"fair use derivation of TPC-H\". I also used the term \"composite score\" because of your previous feedback on it. 
I'll also check out the dbt-3 tool and if the effort is worth it, and if it's necessary, I'll try to switch to it.These are very valuable feedback, thank you again.Best,Yedil",
"msg_date": "Tue, 9 Aug 2022 16:19:55 +0200",
"msg_from": "Yedil Serzhan <edilserjan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Asking for feedback on Pgperffarm"
}
] |
[
{
"msg_contents": "On Thu, 4 Aug 2022 at 13:11, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Wed, 3 Aug 2022 at 20:18, Andres Freund <andres@anarazel.de> wrote:\n>\n> > I think we should consider redesigning subtrans more substantially - even with\n> > the changes you propose here, there's still plenty ways to hit really bad\n> > performance. And there's only so much we can do about that without more\n> > fundamental design changes.\n>\n> I completely agree - you will be glad to hear that I've been working\n> on a redesign of the subtrans module.\n...\n> I will post my patch, when complete, in a different thread.\n\nThe attached patch reduces the overhead of SUBTRANS by minimizing the\nnumber of times SubTransSetParent() is called, to below 1% of the\ncurrent rate in common cases.\n\nInstead of blindly calling SubTransSetParent() for every subxid, this\nproposal only calls SubTransSetParent() when that information will be\nrequired for later use. It does this by analyzing all of the callers\nof SubTransGetParent() and uses these pre-conditions to filter out\ncalls/subxids that will never be required, for various reasons. It\nredesigns the way XactLockTableWait() calls\nSubTransGetTopmostTransactionId() to allow this.\n\nThis short patchset compiles and passes make check-world, with lengthy comments.\n\nThis might then make viable a simple rewrite of SUBTRANS using a hash\ntable, as proposed by Andres. But in any case, it will allow us to\ndesign a densely packed SUBTRANS replacement that does not generate as\nmuch contention and I/O.\n\nNOTE that this patchset does not touch SUBTRANS at all, it just\nminimizes the calls in preparation for a later redesign in a later\npatch. 
If this patch/later versions of it is committed in Sept CF,\nthen we should be in good shape to post a subtrans redesign patch by\nmajor patch deadline at end of year.\n\nPatches 001 and 002 are common elements of a different patch,\n\"Smoothing the subtrans performance catastrophe\", but other than that,\nthe two patches are otherwise independent of each other.\n\nWhere does this come from? I learnt a lot about subxids when coding\nHot Standby, specifically commit 06da3c570f21394003. This patch just\nbuilds upon that earlier understanding.\n\nComments please.\n\n--\nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Mon, 8 Aug 2022 14:11:36 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 6:41 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Thu, 4 Aug 2022 at 13:11, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >\n> > On Wed, 3 Aug 2022 at 20:18, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > > I think we should consider redesigning subtrans more substantially - even with\n> > > the changes you propose here, there's still plenty ways to hit really bad\n> > > performance. And there's only so much we can do about that without more\n> > > fundamental design changes.\n> >\n> > I completely agree - you will be glad to hear that I've been working\n> > on a redesign of the subtrans module.\n> ...\n> > I will post my patch, when complete, in a different thread.\n>\n> The attached patch reduces the overhead of SUBTRANS by minimizing the\n> number of times SubTransSetParent() is called, to below 1% of the\n> current rate in common cases.\n>\n> Instead of blindly calling SubTransSetParent() for every subxid, this\n> proposal only calls SubTransSetParent() when that information will be\n> required for later use. It does this by analyzing all of the callers\n> of SubTransGetParent() and uses these pre-conditions to filter out\n> calls/subxids that will never be required, for various reasons. It\n> redesigns the way XactLockTableWait() calls\n> SubTransGetTopmostTransactionId() to allow this.\n>\n> This short patchset compiles and passes make check-world, with lengthy comments.\n\nDoes this patch set work independently or it has dependency on the\npatches on the other thread \"Smoothing the subtrans performance\ncatastrophe\"? Because in this patch I see no code where we are\nchanging anything to control the access of SubTransGetParent() from\nSubTransGetTopmostTransactionId()?\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Aug 2022 17:09:22 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Tue, 9 Aug 2022 at 12:39, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> > This short patchset compiles and passes make check-world, with lengthy comments.\n>\n> Does this patch set work independently or it has dependency on the\n> patches on the other thread \"Smoothing the subtrans performance\n> catastrophe\"?\n\nPatches 001 and 002 are common elements of a different patch,\n\"Smoothing the subtrans performance catastrophe\", but other than that,\nthe two patches are otherwise independent of each other.\n\ni.e. there are common elements in both patches\n001 puts all subxid data in a snapshot (up to a limit of 64 xids per\ntopxid), even if one or more xids overflows.\n\n> Because in this patch I see no code where we are\n> changing anything to control the access of SubTransGetParent() from\n> SubTransGetTopmostTransactionId()?\n\nThose calls are unaffected, i.e. they both still work.\n\nRight now, we register all subxids in subtrans. But not all xids are\nsubxids, so in fact, subtrans has many \"holes\" in it, where if you\nlook up the parent for an xid it will just return\nInvalidTransactionId. There is a protection against that causing a\nproblem because if you call TransactionIdDidCommit/Abort you can get a\nWARNING, or if you call SubTransGetTopmostTransaction() you can get an\nERROR, but it is possible if you do a lookup for an inappropriate xid.\ni.e. if you call TransactionIdDidCommit() without first calling\nTransactionIdIsInProgress() as you are supposed to do.\n\nWhat this patch does is increase the number of \"holes\" in subtrans,\nreducing the overhead and making the subtrans data structure more\namenable to using a dense structure rather than a sparse structure as\nwe do now, which then leads to I/O overheads. 
But in this patch, we\nonly have holes when we can prove that the subxid's parent will never\nbe requested.\n\nSpecifically, with this patch, running PL/pgSQL with a few\nsubtransactions in will cause those subxids to be logged in subtrans\nabout 1% as often as they are now, so greatly reducing the number of\nsubtrans calls.\n\nHappy to provide more detailed review thoughts, so please keep asking questions.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 9 Aug 2022 17:16:33 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 9:46 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> Those calls are unaffected, i.e. they both still work.\n>\n> Right now, we register all subxids in subtrans. But not all xids are\n> subxids, so in fact, subtrans has many \"holes\" in it, where if you\n> look up the parent for an xid it will just return\n> c. There is a protection against that causing a\n> problem because if you call TransactionIdDidCommit/Abort you can get a\n> WARNING, or if you call SubTransGetTopmostTransaction() you can get an\n> ERROR, but it is possible if you do a lookup for an inappropriate xid.\n> i.e. if you call TransactionIdDidCommit() without first calling\n> TransactionIdIsInProgress() as you are supposed to do.\n\nIIUC, if SubTransGetParent SubTransGetParent then\nSubTransGetTopmostTransaction() loop will break and return the\npreviousxid. So if we pass any topxid to\nSubTransGetTopmostTransaction() it will return back the same xid and\nthat's fine as next we are going to search in the snapshot->xip array.\n\nBut if we are calling this function with the subxid which might be\nthere in the snapshot->subxip array but if we are first calling\nSubTransGetTopmostTransaction() then it will just return the same xid\nif the parent is not set for it. And now if we search this in the\nsnapshot->xip array then we will get the wrong answer?\n\nSo I still think some adjustment is required in XidInMVCCSnapdhot()\nsuch that we first search the snapshot->subxip array.\n\nAm I still missing something?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Aug 2022 13:03:45 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Wed, 10 Aug 2022 at 08:34, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> Am I still missing something?\n\nNo, you have found a dependency between the patches that I was unaware\nof. So there is no bug if you apply both patches.\n\nThanks for looking.\n\n\n> So I still think some adjustment is required in XidInMVCCSnapdhot()\n\nThat is one way to resolve the issue, but not the only one. I can also\nchange AssignTransactionId() to recursively register parent xids for\nall of a subxid's parents.\n\nI will add in a test case and resolve the dependency in my next patch.\n\nThanks again.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 10 Aug 2022 14:01:30 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 6:31 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Wed, 10 Aug 2022 at 08:34, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > Am I still missing something?\n>\n> No, you have found a dependency between the patches that I was unaware\n> of. So there is no bug if you apply both patches.\n\nRight\n\n>\n> > So I still think some adjustment is required in XidInMVCCSnapdhot()\n>\n> That is one way to resolve the issue, but not the only one. I can also\n> change AssignTransactionId() to recursively register parent xids for\n> all of a subxid's parents.\n>\n> I will add in a test case and resolve the dependency in my next patch.\n\nOkay, thanks, I will look into the updated patch after you submit that.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Aug 2022 11:01:51 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Thu, 11 Aug 2022 at 06:32, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> > > So I still think some adjustment is required in XidInMVCCSnapdhot()\n> >\n> > That is one way to resolve the issue, but not the only one. I can also\n> > change AssignTransactionId() to recursively register parent xids for\n> > all of a subxid's parents.\n> >\n> > I will add in a test case and resolve the dependency in my next patch.\n>\n> Okay, thanks, I will look into the updated patch after you submit that.\n\nPFA two patches, replacing earlier work\n001_new_isolation_tests_for_subxids.v3.patch\n002_minimize_calls_to_SubTransSetParent.v8.patch\n\n001_new_isolation_tests_for_subxids.v3.patch\nAdds new test cases to master without adding any new code, specifically\naddressing the two areas of code that are not tested by existing tests.\nThis gives us a baseline from which we can do test driven development.\nI'm hoping this can be reviewed and committed fairly smoothly.\n\n002_minimize_calls_to_SubTransSetParent.v8.patch\nReduces the number of calls to subtrans below 1% for the first 64 subxids,\nso overall will substantially reduce subtrans contention on master for the\ntypical case, as well as smoothing the overflow case.\nSome discussion needed on this; there are various options.\nThis combines the work originally posted here with another patch posted on the\nthread \"Smoothing the subtrans performance catastrophe\".\n\nI will do some performance testing also, but more welcome.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 30 Aug 2022 17:45:54 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 10:16 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n\n> PFA two patches, replacing earlier work\n> 001_new_isolation_tests_for_subxids.v3.patch\n> 002_minimize_calls_to_SubTransSetParent.v8.patch\n>\n> 001_new_isolation_tests_for_subxids.v3.patch\n> Adds new test cases to master without adding any new code, specifically\n> addressing the two areas of code that are not tested by existing tests.\n> This gives us a baseline from which we can do test driven development.\n> I'm hoping this can be reviewed and committed fairly smoothly.\n>\n> 002_minimize_calls_to_SubTransSetParent.v8.patch\n> Reduces the number of calls to subtrans below 1% for the first 64 subxids,\n> so overall will substantially reduce subtrans contention on master for the\n> typical case, as well as smoothing the overflow case.\n> Some discussion needed on this; there are various options.\n> This combines the work originally posted here with another patch posted on the\n> thread \"Smoothing the subtrans performance catastrophe\".\n>\n> I will do some performance testing also, but more welcome.\n\nThanks for the updated patch, I have some questions/comments.\n\n1.\n+ * This has the downside that anyone waiting for a lock on aborted\n+ * subtransactions would not be released immediately; that may or\n+ * may not be an acceptable compromise. If not acceptable, this\n+ * simple call needs to be replaced with a loop to register the\n+ * parent for the current subxid stack, so we can walk\nback up it to\n+ * the topxid.\n+ */\n+ SubTransSetParent(subxid, GetTopTransactionId());\n\nI do not understand in which situation we will see this downside. 
I\nmean if we see the logic of XactLockTableWait() then in the current\nsituation also if the subtransaction is committed we directly wait on\nthe top transaction by calling SubTransGetTopmostTransaction(xid);\n\nSo if the lock-taking subtransaction is committed then we will wait\ndirectly for the top-level transaction and after that, it doesn't\nmatter if we abort any of the parent subtransactions, because it will\nwait for the topmost transaction to complete. And if the lock-taking\nsubtransaction is aborted then it will anyway stop waiting because\nTransactionIdIsInProgress() should return false.\n\n2.\n /*\n * Notice that we update pg_subtrans with the top-level xid, rather than\n * the parent xid. This is a difference between normal processing and\n * recovery, yet is still correct in all cases. The reason is that\n * subtransaction commit is not marked in clog until commit processing, so\n * all aborted subtransactions have already been clearly marked in clog.\n * As a result we are able to refer directly to the top-level\n * transaction's state rather than skipping through all the intermediate\n * states in the subtransaction tree. This should be the first time we\n * have attempted to SubTransSetParent().\n */\n for (i = 0; i < nsubxids; i++)\n SubTransSetParent(subxids[i], topxid);\n\nI think this comment needs some modification because in this patch now\nin normal processing also we are setting the topxid as a parent right?\n\n3.\n+ while (TransactionIdIsValid(parentXid))\n+ {\n+ previousXid = parentXid;\n+\n+ /*\n+ * Stop as soon as we are earlier than the cutoff. 
This saves multiple\n+ * lookups against subtrans when we have a deeply nested subxid with\n+ * a later snapshot with an xmin much higher than TransactionXmin.\n+ */\n+ if (TransactionIdPrecedes(parentXid, cutoff_xid))\n+ {\n+ *xid = previousXid;\n+ return true;\n+ }\n+ parentXid = SubTransGetParent(parentXid);\n\nDo we need this while loop if we are directly setting topxid as a\nparent, so with that, we do not need multiple iterations to go to the\ntop xid?\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Sep 2022 17:06:57 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Tue, 6 Sept 2022 at 12:37, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Aug 30, 2022 at 10:16 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n>\n> > PFA two patches, replacing earlier work\n> > 001_new_isolation_tests_for_subxids.v3.patch\n> > 002_minimize_calls_to_SubTransSetParent.v8.patch\n> >\n> > 001_new_isolation_tests_for_subxids.v3.patch\n> > Adds new test cases to master without adding any new code, specifically\n> > addressing the two areas of code that are not tested by existing tests.\n> > This gives us a baseline from which we can do test driven development.\n> > I'm hoping this can be reviewed and committed fairly smoothly.\n> >\n> > 002_minimize_calls_to_SubTransSetParent.v8.patch\n> > Reduces the number of calls to subtrans below 1% for the first 64 subxids,\n> > so overall will substantially reduce subtrans contention on master for the\n> > typical case, as well as smoothing the overflow case.\n> > Some discussion needed on this; there are various options.\n> > This combines the work originally posted here with another patch posted on the\n> > thread \"Smoothing the subtrans performance catastrophe\".\n> >\n> > I will do some performance testing also, but more welcome.\n>\n> Thanks for the updated patch, I have some questions/comments.\n\nThanks for the review.\n\n> 1.\n> + * This has the downside that anyone waiting for a lock on aborted\n> + * subtransactions would not be released immediately; that may or\n> + * may not be an acceptable compromise. If not acceptable, this\n> + * simple call needs to be replaced with a loop to register the\n> + * parent for the current subxid stack, so we can walk\n> back up it to\n> + * the topxid.\n> + */\n> + SubTransSetParent(subxid, GetTopTransactionId());\n>\n> I do not understand in which situation we will see this downside. 
I\n> mean if we see the logic of XactLockTableWait() then in the current\n> situation also if the subtransaction is committed we directly wait on\n> the top transaction by calling SubTransGetTopmostTransaction(xid);\n>\n> So if the lock-taking subtransaction is committed then we will wait\n> directly for the top-level transaction and after that, it doesn't\n> matter if we abort any of the parent subtransactions, because it will\n> wait for the topmost transaction to complete. And if the lock-taking\n> subtransaction is aborted then it will anyway stop waiting because\n> TransactionIdIsInProgress() should return false.\n\nYes, correct.\n\n> 2.\n> /*\n> * Notice that we update pg_subtrans with the top-level xid, rather than\n> * the parent xid. This is a difference between normal processing and\n> * recovery, yet is still correct in all cases. The reason is that\n> * subtransaction commit is not marked in clog until commit processing, so\n> * all aborted subtransactions have already been clearly marked in clog.\n> * As a result we are able to refer directly to the top-level\n> * transaction's state rather than skipping through all the intermediate\n> * states in the subtransaction tree. This should be the first time we\n> * have attempted to SubTransSetParent().\n> */\n> for (i = 0; i < nsubxids; i++)\n> SubTransSetParent(subxids[i], topxid);\n>\n> I think this comment needs some modification because in this patch now\n> in normal processing also we are setting the topxid as a parent right?\n\nCorrect\n\n> 3.\n> + while (TransactionIdIsValid(parentXid))\n> + {\n> + previousXid = parentXid;\n> +\n> + /*\n> + * Stop as soon as we are earlier than the cutoff. 
This saves multiple\n> + * lookups against subtrans when we have a deeply nested subxid with\n> + * a later snapshot with an xmin much higher than TransactionXmin.\n> + */\n> + if (TransactionIdPrecedes(parentXid, cutoff_xid))\n> + {\n> + *xid = previousXid;\n> + return true;\n> + }\n> + parentXid = SubTransGetParent(parentXid);\n>\n> Do we need this while loop if we are directly setting topxid as a\n> parent, so with that, we do not need multiple iterations to go to the\n> top xid?\n\nCorrect. I think we can dispense with\nSubTransGetTopmostTransactionPrecedes() entirely.\n\nI was initially trying to leave options open but that is confusing and\nas a result, some parts are misleading after I merged the two patches.\n\nI will update the patch, thanks for your scrutiny.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 6 Sep 2022 13:14:04 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Tue, 6 Sept 2022 at 13:14, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> I will update the patch, thanks for your scrutiny.\n\nI attach a diff showing what has changed between v8 and v9, and will\nreattach a full set of new patches in the next post, so patchtester\ndoesn't squeal.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 13 Sep 2022 11:56:12 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Tue, 13 Sept 2022 at 11:56, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Tue, 6 Sept 2022 at 13:14, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> > I will update the patch, thanks for your scrutiny.\n>\n> I attach a diff showing what has changed between v8 and v9, and will\n> reattach a full set of new patches in the next post, so patchtester\n> doesn't squeal.\n\nFull set of v9 patches\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 13 Sep 2022 11:56:48 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On 2022-Aug-30, Simon Riggs wrote:\n\n> 001_new_isolation_tests_for_subxids.v3.patch\n> Adds new test cases to master without adding any new code, specifically\n> addressing the two areas of code that are not tested by existing tests.\n> This gives us a baseline from which we can do test driven development.\n> I'm hoping this can be reviewed and committed fairly smoothly.\n\nI gave this a quick run to confirm the claimed increase of coverage. It\nchecks out, so pushed.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 14 Sep 2022 16:21:06 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Wed, 14 Sept 2022 at 15:21, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Aug-30, Simon Riggs wrote:\n>\n> > 001_new_isolation_tests_for_subxids.v3.patch\n> > Adds new test cases to master without adding any new code, specifically\n> > addressing the two areas of code that are not tested by existing tests.\n> > This gives us a baseline from which we can do test driven development.\n> > I'm hoping this can be reviewed and committed fairly smoothly.\n>\n> I gave this a quick run to confirm the claimed increase of coverage. It\n> checks out, so pushed.\n\nThank you.\n\nSo now we just have the main part of the patch, reattached here for\nthe auto patch tester's benefit.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Thu, 15 Sep 2022 11:04:14 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "\nOn Thu, 15 Sep 2022 at 18:04, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> On Wed, 14 Sept 2022 at 15:21, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>\n>> On 2022-Aug-30, Simon Riggs wrote:\n>>\n>> > 001_new_isolation_tests_for_subxids.v3.patch\n>> > Adds new test cases to master without adding any new code, specifically\n>> > addressing the two areas of code that are not tested by existing tests.\n>> > This gives us a baseline from which we can do test driven development.\n>> > I'm hoping this can be reviewed and committed fairly smoothly.\n>>\n>> I gave this a quick run to confirm the claimed increase of coverage. It\n>> checks out, so pushed.\n>\n> Thank you.\n>\n> So now we just have the main part of the patch, reattached here for\n> the auto patch tester's benefit.\n\nHi Simon,\n\nThanks for the updated patch, here are some comments.\n\nThere is a typo, s/SubTransGetTopMostTransaction/SubTransGetTopmostTransaction/g.\n\n+\t\t * call SubTransGetTopMostTransaction() if that xact overflowed;\n\n\nIs there a punctuation mark missing on the following first line?\n\n+\t\t * 2. When IsolationIsSerializable() we sometimes need to access topxid\n+\t\t * This occurs only when SERIALIZABLE is requested by app user.\n\n\nWhen we use function name in comments, some places we use parentheses,\nbut others do not use it. Why? I think, we should keep them consistent,\nat least in the same commit.\n\n+\t\t * 3. When TransactionIdSetTreeStatus will use a status of SUB_COMMITTED,\n+\t\t * which then requires us to consult subtrans to find parent, which\n+\t\t * is needed to avoid race condition. In this case we ask Clog/Xact\n+\t\t * module if TransactionIdsAreOnSameXactPage(). 
Since we start a new\n+\t\t * clog page every 32000 xids, this is usually <<1% of subxids.\n\nMaybe we declaration a topxid to avoid calling GetTopTransactionId()\ntwice when we should set subtrans parent?\n\n+\t\tTransactionId subxid = XidFromFullTransactionId(s->fullTransactionId);\n+\t\tTransactionId topxid = GetTopTransactionId();\n ...\n+\t\tif (MyProc->subxidStatus.overflowed ||\n+\t\t\tIsolationIsSerializable() ||\n+\t\t\t!TransactionIdsAreOnSameXactPage(topxid, subxid))\n+\t\t{\n ...\n+\t\t\tSubTransSetParent(subxid, topxid);\n+\t\t}\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 15 Sep 2022 19:36:40 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Thu, 15 Sept 2022 at 12:36, Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> On Thu, 15 Sep 2022 at 18:04, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > On Wed, 14 Sept 2022 at 15:21, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >>\n> >> On 2022-Aug-30, Simon Riggs wrote:\n> >>\n> >> > 001_new_isolation_tests_for_subxids.v3.patch\n> >> > Adds new test cases to master without adding any new code, specifically\n> >> > addressing the two areas of code that are not tested by existing tests.\n> >> > This gives us a baseline from which we can do test driven development.\n> >> > I'm hoping this can be reviewed and committed fairly smoothly.\n> >>\n> >> I gave this a quick run to confirm the claimed increase of coverage. It\n> >> checks out, so pushed.\n> >\n> > Thank you.\n> >\n> > So now we just have the main part of the patch, reattached here for\n> > the auto patch tester's benefit.\n>\n> Hi Simon,\n>\n> Thanks for the updated patch, here are some comments.\n\nThanks for your comments.\n\n> There is a typo, s/SubTransGetTopMostTransaction/SubTransGetTopmostTransaction/g.\n>\n> + * call SubTransGetTopMostTransaction() if that xact overflowed;\n>\n>\n> Is there a punctuation mark missing on the following first line?\n>\n> + * 2. When IsolationIsSerializable() we sometimes need to access topxid\n> + * This occurs only when SERIALIZABLE is requested by app user.\n>\n>\n> When we use function name in comments, some places we use parentheses,\n> but others do not use it. Why? I think, we should keep them consistent,\n> at least in the same commit.\n>\n> + * 3. When TransactionIdSetTreeStatus will use a status of SUB_COMMITTED,\n> + * which then requires us to consult subtrans to find parent, which\n> + * is needed to avoid race condition. In this case we ask Clog/Xact\n> + * module if TransactionIdsAreOnSameXactPage(). 
Since we start a new\n> + * clog page every 32000 xids, this is usually <<1% of subxids.\n\nI've reworded those comments, hoping to address all of your above points.\n\n> Maybe we declaration a topxid to avoid calling GetTopTransactionId()\n> twice when we should set subtrans parent?\n>\n> + TransactionId subxid = XidFromFullTransactionId(s->fullTransactionId);\n> + TransactionId topxid = GetTopTransactionId();\n> ...\n> + if (MyProc->subxidStatus.overflowed ||\n> + IsolationIsSerializable() ||\n> + !TransactionIdsAreOnSameXactPage(topxid, subxid))\n> + {\n> ...\n> + SubTransSetParent(subxid, topxid);\n> + }\n\nSeems a minor point, but I've done this anyway.\n\nThanks for the review.\n\nv10 attached\n\n--\nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Fri, 16 Sep 2022 13:20:20 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Fri, 16 Sept 2022 at 13:20, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> Thanks for the review.\n>\n> v10 attached\n\nv11 attached, corrected for recent commit\n14ff44f80c09718d43d853363941457f5468cc03.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Mon, 26 Sep 2022 14:57:04 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "Hi,\n\nLe lun. 26 sept. 2022 à 15:57, Simon Riggs\n<simon.riggs@enterprisedb.com> a écrit :\n>\n> On Fri, 16 Sept 2022 at 13:20, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >\n> > Thanks for the review.\n> >\n> > v10 attached\n>\n> v11 attached, corrected for recent commit\n> 14ff44f80c09718d43d853363941457f5468cc03.\n\nPlease find below the performance tests results I have produced for this patch.\nAttaching some charts and the scripts used to reproduce these tests.\n\n1. Assumption\n\nThe number of sub-transaction issued by only one long running\ntransaction may affect global TPS throughput if the number of\nsub-transaction exceeds 64 (sub-overflow)\n\n2. Testing scenario\n\nBased on pgbench, 2 different types of DB activity are applied concurrently:\n- 1 long running transaction, including N sub-transactions\n- X pgbench clients running read-only workload\n\nTests are executed with a varying number of sub-transactions: from 0 to 128\nKey metric is the TPS rate reported by pgbench runs in read-only mode\n\nTests are executed against\n- HEAD (14a737)\n- HEAD (14a737) + 002_minimize_calls_to_SubTransSetParent.v11.patch\n\n3. 
Long transaction anatomy\n\nTwo different long transactions are tested because they don't have the\nexact same impact on performance.\n\nTransaction number 1 includes one UPDATE affecting each row of\npgbench_accounts, plus an additional UPDATE affecting only one row but\nexecuted in its own rollbacked sub-transaction:\nBEGIN;\nSAVEPOINT s1;\nSAVEPOINT s2;\n-- ...\nSAVEPOINT sN - 1;\nUPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid > 0;\nSAVEPOINT sN;\nUPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = 12345;\nROLLBACK TO SAVEPOINT sN;\n-- sleeping until the end of the test\nROLLBACK;\n\nTransaction 2 includes one UPDATE affecting each row of pgbench_accounts:\nBEGIN;\nSAVEPOINT s1;\nSAVEPOINT s2;\n-- ...\nSAVEPOINT sN;\nUPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid > 0;\n-- sleeping until the end of the test\nROLLBACK;\n\n4. Test results with transaction 1\n\nTPS vs number of sub-transaction\n\nnsubx HEAD patched\n--------------------\n 0 441109 439474\n 8 439045 438103\n 16 439123 436993\n 24 436269 434194\n 32 439707 437429\n 40 439997 437220\n 48 439388 437422\n 56 439409 437210\n 64 439748 437366\n 72 92869 434448\n 80 66577 434100\n 88 61243 434255\n 96 57016 434419\n104 52132 434917\n112 49181 433755\n120 46581 434044\n128 44067 434268\n\nPerf profiling on HEAD with 80 sub-transactions:\nOverhead Symbol\n 51.26% [.] LWLockAttemptLock\n 24.59% [.] LWLockRelease\n 0.36% [.] base_yyparse\n 0.35% [.] PinBuffer\n 0.34% [.] AllocSetAlloc\n 0.33% [.] hash_search_with_hash_value\n 0.22% [.] LWLockAcquire\n 0.20% [.] UnpinBuffer\n 0.15% [.] SimpleLruReadPage_ReadOnly\n 0.15% [.] _bt_compare\n\nPerf profiling on patched with 80 sub-transactions:\nOverhead Symbol\n 2.64% [.] AllocSetAlloc\n 2.09% [.] base_yyparse\n 1.76% [.] hash_search_with_hash_value\n 1.62% [.] LWLockAttemptLock\n 1.26% [.] MemoryContextAllocZeroAligned\n 0.93% [.] _bt_compare\n 0.92% [.] expression_tree_walker_impl.part.4\n 0.84% [.] 
SearchCatCache1\n 0.79% [.] palloc\n 0.64% [.] core_yylex\n\n5. Test results with transaction 2\n\nnsubx HEAD patched\n--------------------\n 0 440145 443816\n 8 438867 443081\n 16 438634 441786\n 24 436406 440187\n 32 439203 442447\n 40 439819 443574\n 48 439314 442941\n 56 439801 443736\n 64 439074 441970\n 72 439833 444132\n 80 148737 439941\n 88 413714 443343\n 96 251098 442021\n104 70190 443488\n112 405507 438866\n120 177827 443202\n128 399431 441842\n\n From the performance point of view, this patch clearly fixes the\ndramatic TPS collapse shown in these tests.\n\nRegards,\n\n-- \nJulien Tachoires\nEDB",
"msg_date": "Fri, 28 Oct 2022 19:24:47 +0200",
"msg_from": "Julien Tachoires <julmon@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Fri, Oct 28, 2022 at 10:55 PM Julien Tachoires <julmon@gmail.com> wrote:\n>\n> Hi,\n>\n> Le lun. 26 sept. 2022 à 15:57, Simon Riggs\n> <simon.riggs@enterprisedb.com> a écrit :\n> >\n> > On Fri, 16 Sept 2022 at 13:20, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > >\n> > > Thanks for the review.\n> > >\n> > > v10 attached\n> >\n> > v11 attached, corrected for recent commit\n> > 14ff44f80c09718d43d853363941457f5468cc03.\n>\n> Please find below the performance tests results I have produced for this patch.\n> Attaching some charts and the scripts used to reproduce these tests.\n>\n> 1. Assumption\n>\n> The number of sub-transaction issued by only one long running\n> transaction may affect global TPS throughput if the number of\n> sub-transaction exceeds 64 (sub-overflow)\n>\n> 2. Testing scenario\n>\n> Based on pgbench, 2 different types of DB activity are applied concurrently:\n> - 1 long running transaction, including N sub-transactions\n> - X pgbench clients running read-only workload\n>\n> Tests are executed with a varying number of sub-transactions: from 0 to 128\n> Key metric is the TPS rate reported by pgbench runs in read-only mode\n>\n> Tests are executed against\n> - HEAD (14a737)\n> - HEAD (14a737) + 002_minimize_calls_to_SubTransSetParent.v11.patch\n>\n> 3. 
Long transaction anatomy\n>\n> Two different long transactions are tested because they don't have the\n> exact same impact on performance.\n>\n> Transaction number 1 includes one UPDATE affecting each row of\n> pgbench_accounts, plus an additional UPDATE affecting only one row but\n> executed in its own rollbacked sub-transaction:\n> BEGIN;\n> SAVEPOINT s1;\n> SAVEPOINT s2;\n> -- ...\n> SAVEPOINT sN - 1;\n> UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid > 0;\n> SAVEPOINT sN;\n> UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = 12345;\n> ROLLBACK TO SAVEPOINT sN;\n> -- sleeping until the end of the test\n> ROLLBACK;\n>\n> Transaction 2 includes one UPDATE affecting each row of pgbench_accounts:\n> BEGIN;\n> SAVEPOINT s1;\n> SAVEPOINT s2;\n> -- ...\n> SAVEPOINT sN;\n> UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid > 0;\n> -- sleeping until the end of the test\n> ROLLBACK;\n>\n> 4. Test results with transaction 1\n>\n> TPS vs number of sub-transaction\n>\n> nsubx HEAD patched\n> --------------------\n> 0 441109 439474\n> 8 439045 438103\n> 16 439123 436993\n> 24 436269 434194\n> 32 439707 437429\n> 40 439997 437220\n> 48 439388 437422\n> 56 439409 437210\n> 64 439748 437366\n> 72 92869 434448\n> 80 66577 434100\n> 88 61243 434255\n> 96 57016 434419\n> 104 52132 434917\n> 112 49181 433755\n> 120 46581 434044\n> 128 44067 434268\n>\n> Perf profiling on HEAD with 80 sub-transactions:\n> Overhead Symbol\n> 51.26% [.] LWLockAttemptLock\n> 24.59% [.] LWLockRelease\n> 0.36% [.] base_yyparse\n> 0.35% [.] PinBuffer\n> 0.34% [.] AllocSetAlloc\n> 0.33% [.] hash_search_with_hash_value\n> 0.22% [.] LWLockAcquire\n> 0.20% [.] UnpinBuffer\n> 0.15% [.] SimpleLruReadPage_ReadOnly\n> 0.15% [.] _bt_compare\n>\n> Perf profiling on patched with 80 sub-transactions:\n> Overhead Symbol\n> 2.64% [.] AllocSetAlloc\n> 2.09% [.] base_yyparse\n> 1.76% [.] hash_search_with_hash_value\n> 1.62% [.] LWLockAttemptLock\n> 1.26% [.] 
MemoryContextAllocZeroAligned\n> 0.93% [.] _bt_compare\n> 0.92% [.] expression_tree_walker_impl.part.4\n> 0.84% [.] SearchCatCache1\n> 0.79% [.] palloc\n> 0.64% [.] core_yylex\n>\n> 5. Test results with transaction 2\n>\n> nsubx HEAD patched\n> --------------------\n> 0 440145 443816\n> 8 438867 443081\n> 16 438634 441786\n> 24 436406 440187\n> 32 439203 442447\n> 40 439819 443574\n> 48 439314 442941\n> 56 439801 443736\n> 64 439074 441970\n> 72 439833 444132\n> 80 148737 439941\n> 88 413714 443343\n> 96 251098 442021\n> 104 70190 443488\n> 112 405507 438866\n> 120 177827 443202\n> 128 399431 441842\n>\n> From the performance point of view, this patch clearly fixes the\n> dramatic TPS collapse shown in these tests.\n\nI think these are really promising results. Although the perf result\nshows that the bottleneck on the SLRU is no more there with the patch,\nI think it would be nice to see the wait event as well.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 1 Nov 2022 14:08:45 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "Hi,\n\nLe mar. 1 nov. 2022 à 09:39, Dilip Kumar <dilipbalaut@gmail.com> a écrit :\n>\n> On Fri, Oct 28, 2022 at 10:55 PM Julien Tachoires <julmon@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Le lun. 26 sept. 2022 à 15:57, Simon Riggs\n> > <simon.riggs@enterprisedb.com> a écrit :\n> > >\n> > > On Fri, 16 Sept 2022 at 13:20, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > > >\n> > > > Thanks for the review.\n> > > >\n> > > > v10 attached\n> > >\n> > > v11 attached, corrected for recent commit\n> > > 14ff44f80c09718d43d853363941457f5468cc03.\n> >\n> > Please find below the performance tests results I have produced for this patch.\n> > Attaching some charts and the scripts used to reproduce these tests.\n> >\n> > 1. Assumption\n> >\n> > The number of sub-transaction issued by only one long running\n> > transaction may affect global TPS throughput if the number of\n> > sub-transaction exceeds 64 (sub-overflow)\n> >\n> > 2. Testing scenario\n> >\n> > Based on pgbench, 2 different types of DB activity are applied concurrently:\n> > - 1 long running transaction, including N sub-transactions\n> > - X pgbench clients running read-only workload\n> >\n> > Tests are executed with a varying number of sub-transactions: from 0 to 128\n> > Key metric is the TPS rate reported by pgbench runs in read-only mode\n> >\n> > Tests are executed against\n> > - HEAD (14a737)\n> > - HEAD (14a737) + 002_minimize_calls_to_SubTransSetParent.v11.patch\n> >\n> > 3. 
Long transaction anatomy\n> >\n> > Two different long transactions are tested because they don't have the\n> > exact same impact on performance.\n> >\n> > Transaction number 1 includes one UPDATE affecting each row of\n> > pgbench_accounts, plus an additional UPDATE affecting only one row but\n> > executed in its own rollbacked sub-transaction:\n> > BEGIN;\n> > SAVEPOINT s1;\n> > SAVEPOINT s2;\n> > -- ...\n> > SAVEPOINT sN - 1;\n> > UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid > 0;\n> > SAVEPOINT sN;\n> > UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = 12345;\n> > ROLLBACK TO SAVEPOINT sN;\n> > -- sleeping until the end of the test\n> > ROLLBACK;\n> >\n> > Transaction 2 includes one UPDATE affecting each row of pgbench_accounts:\n> > BEGIN;\n> > SAVEPOINT s1;\n> > SAVEPOINT s2;\n> > -- ...\n> > SAVEPOINT sN;\n> > UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid > 0;\n> > -- sleeping until the end of the test\n> > ROLLBACK;\n> >\n> > 4. Test results with transaction 1\n> >\n> > TPS vs number of sub-transaction\n> >\n> > nsubx HEAD patched\n> > --------------------\n> > 0 441109 439474\n> > 8 439045 438103\n> > 16 439123 436993\n> > 24 436269 434194\n> > 32 439707 437429\n> > 40 439997 437220\n> > 48 439388 437422\n> > 56 439409 437210\n> > 64 439748 437366\n> > 72 92869 434448\n> > 80 66577 434100\n> > 88 61243 434255\n> > 96 57016 434419\n> > 104 52132 434917\n> > 112 49181 433755\n> > 120 46581 434044\n> > 128 44067 434268\n> >\n> > Perf profiling on HEAD with 80 sub-transactions:\n> > Overhead Symbol\n> > 51.26% [.] LWLockAttemptLock\n> > 24.59% [.] LWLockRelease\n> > 0.36% [.] base_yyparse\n> > 0.35% [.] PinBuffer\n> > 0.34% [.] AllocSetAlloc\n> > 0.33% [.] hash_search_with_hash_value\n> > 0.22% [.] LWLockAcquire\n> > 0.20% [.] UnpinBuffer\n> > 0.15% [.] SimpleLruReadPage_ReadOnly\n> > 0.15% [.] _bt_compare\n> >\n> > Perf profiling on patched with 80 sub-transactions:\n> > Overhead Symbol\n> > 2.64% [.] 
AllocSetAlloc\n> > 2.09% [.] base_yyparse\n> > 1.76% [.] hash_search_with_hash_value\n> > 1.62% [.] LWLockAttemptLock\n> > 1.26% [.] MemoryContextAllocZeroAligned\n> > 0.93% [.] _bt_compare\n> > 0.92% [.] expression_tree_walker_impl.part.4\n> > 0.84% [.] SearchCatCache1\n> > 0.79% [.] palloc\n> > 0.64% [.] core_yylex\n> >\n> > 5. Test results with transaction 2\n> >\n> > nsubx HEAD patched\n> > --------------------\n> > 0 440145 443816\n> > 8 438867 443081\n> > 16 438634 441786\n> > 24 436406 440187\n> > 32 439203 442447\n> > 40 439819 443574\n> > 48 439314 442941\n> > 56 439801 443736\n> > 64 439074 441970\n> > 72 439833 444132\n> > 80 148737 439941\n> > 88 413714 443343\n> > 96 251098 442021\n> > 104 70190 443488\n> > 112 405507 438866\n> > 120 177827 443202\n> > 128 399431 441842\n> >\n> > From the performance point of view, this patch clearly fixes the\n> > dramatic TPS collapse shown in these tests.\n>\n> I think these are really promising results. Although the perf result\n> shows that the bottleneck on the SLRU is no more there with the patch,\n> I think it would be nice to see the wait event as well.\n\nPlease find attached samples returned by the following query when\ntesting transaction 1 with 80 subxacts:\nSELECT wait_event_type, wait_event, locktype, mode, database,\nrelation, COUNT(*) from pg_stat_activity AS psa JOIN pg_locks AS pl ON\n(psa.pid = pl.pid) GROUP BY 1, 2, 3, 4, 5, 6 ORDER BY 7 DESC;\n\nRegards,\n\n\n-- \nJulien Tachoires\nEDB",
"msg_date": "Tue, 1 Nov 2022 09:55:10 +0100",
"msg_from": "Julien Tachoires <julmon@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Tue, 1 Nov 2022 at 08:55, Julien Tachoires <julmon@gmail.com> wrote:\n>\n> > > 4. Test results with transaction 1\n> > >\n> > > TPS vs number of sub-transaction\n> > >\n> > > nsubx HEAD patched\n> > > --------------------\n> > > 0 441109 439474\n> > > 8 439045 438103\n> > > 16 439123 436993\n> > > 24 436269 434194\n> > > 32 439707 437429\n> > > 40 439997 437220\n> > > 48 439388 437422\n> > > 56 439409 437210\n> > > 64 439748 437366\n> > > 72 92869 434448\n> > > 80 66577 434100\n> > > 88 61243 434255\n> > > 96 57016 434419\n> > > 104 52132 434917\n> > > 112 49181 433755\n> > > 120 46581 434044\n> > > 128 44067 434268\n> > >\n> > > Perf profiling on HEAD with 80 sub-transactions:\n> > > Overhead Symbol\n> > > 51.26% [.] LWLockAttemptLock\n> > > 24.59% [.] LWLockRelease\n> > > 0.36% [.] base_yyparse\n> > > 0.35% [.] PinBuffer\n> > > 0.34% [.] AllocSetAlloc\n> > > 0.33% [.] hash_search_with_hash_value\n> > > 0.22% [.] LWLockAcquire\n> > > 0.20% [.] UnpinBuffer\n> > > 0.15% [.] SimpleLruReadPage_ReadOnly\n> > > 0.15% [.] _bt_compare\n> > >\n> > > Perf profiling on patched with 80 sub-transactions:\n> > > Overhead Symbol\n> > > 2.64% [.] AllocSetAlloc\n> > > 2.09% [.] base_yyparse\n> > > 1.76% [.] hash_search_with_hash_value\n> > > 1.62% [.] LWLockAttemptLock\n> > > 1.26% [.] MemoryContextAllocZeroAligned\n> > > 0.93% [.] _bt_compare\n> > > 0.92% [.] expression_tree_walker_impl.part.4\n> > > 0.84% [.] SearchCatCache1\n> > > 0.79% [.] palloc\n> > > 0.64% [.] core_yylex\n> > >\n> > > 5. 
Test results with transaction 2\n> > >\n> > > nsubx HEAD patched\n> > > --------------------\n> > > 0 440145 443816\n> > > 8 438867 443081\n> > > 16 438634 441786\n> > > 24 436406 440187\n> > > 32 439203 442447\n> > > 40 439819 443574\n> > > 48 439314 442941\n> > > 56 439801 443736\n> > > 64 439074 441970\n> > > 72 439833 444132\n> > > 80 148737 439941\n> > > 88 413714 443343\n> > > 96 251098 442021\n> > > 104 70190 443488\n> > > 112 405507 438866\n> > > 120 177827 443202\n> > > 128 399431 441842\n> > >\n> > > From the performance point of view, this patch clearly fixes the\n> > > dramatic TPS collapse shown in these tests.\n> >\n> > I think these are really promising results. Although the perf result\n> > shows that the bottleneck on the SLRU is no more there with the patch,\n> > I think it would be nice to see the wait event as well.\n>\n> Please find attached samples returned by the following query when\n> testing transaction 1 with 80 subxacts:\n> SELECT wait_event_type, wait_event, locktype, mode, database,\n> relation, COUNT(*) from pg_stat_activity AS psa JOIN pg_locks AS pl ON\n> (psa.pid = pl.pid) GROUP BY 1, 2, 3, 4, 5, 6 ORDER BY 7 DESC;\n\nThese results are compelling, thank you.\n\nSetting this to Ready for Committer.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 7 Nov 2022 21:14:47 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Mon, 7 Nov 2022 at 21:14, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> These results are compelling, thank you.\n>\n> Setting this to Ready for Committer.\n\nNew version attached.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 15 Nov 2022 09:34:29 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "\nOn Tue, 15 Nov 2022 at 17:34, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> On Mon, 7 Nov 2022 at 21:14, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n>> These results are compelling, thank you.\n>>\n>> Setting this to Ready for Committer.\n>\n> New version attached.\n\nTake a quick look, I think it should be PGPROC instead of PG_PROC, right?\n\n+\t\t * 1. When there's no room in PG_PROC, as mentioned above.\n+\t\t * During XactLockTableWait() we sometimes need to know the topxid.\n+\t\t * If there is room in PG_PROC we can get a subxid's topxid direct\n+\t\t * from the procarray if the topxid is still running, using\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Tue, 15 Nov 2022 21:31:14 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> New version attached.\n\nI looked at this patch and I do not see how it can possibly be safe.\n\nThe most direct counterexample arises from the fact that\nHeapCheckForSerializableConflictOut checks SubTransGetTopmostTransaction\nin some cases. You haven't tried to analyze when, but just disabled\nthe optimization in serializable mode:\n\n+ * 2. When IsolationIsSerializable() we sometimes need to access topxid.\n+ * This occurs only when SERIALIZABLE is requested by app user.\n+...\n+ if (MyProc->subxidStatus.overflowed ||\n+ IsolationIsSerializable() ||\n\nHowever, what this is checking is whether *our current transaction*\nis serializable. If we're not serializable, but other transactions\nin the system are, then this fails to store information that they'll\nneed for correctness. You'd have to have some way of knowing that\nno transaction in the system is using serializable mode, and that\nnone will start to do so while this transaction is still in-doubt.\n\nI fear that's already enough to kill the idea; but there's more.\nThe subxidStatus.overflowed check quoted above has a similar sort\nof myopia: it's checking whether our current transaction has\nalready suboverflowed. But (a) that doesn't prove it won't suboverflow\nlater, and (b) the relevant logic in XidInMVCCSnapshot needs to run\nSubTransGetTopmostTransaction if *any* proc in the snapshot has\nsuboverflowed.\n\nLastly, I don't see what the \"transaction on same page\" business\nhas got to do with anything. The comment is certainly failing\nto make the case that it's safe to skip filling subtrans when that\nis true.\n\nI think we could salvage this small idea:\n\n+ * Insert entries into subtrans for this xid, noting that the entry\n+ * points directly to the topxid, not the immediate parent. 
This is\n+ * done for two reasons:\n+ * (1) so it is faster in a long chain of subxids, because the\n+ * algorithm is then O(1), no matter how many subxids are assigned.\n\nbut some work would be needed to update the comments around\nSubTransGetParent and SubTransGetTopmostTransaction to explain that\nthey're no longer reliably different. I think that that is okay for\nthe existing use-cases, but they'd better be documented. In fact,\ncouldn't we simplify them down to one function? Given the restriction\nthat we don't look back in pg_subtrans further than TransactionXmin,\nI don't think that updated code would ever need to resolve cases\nwritten by older code. So we could remove the recursive checks\nentirely, or at least be confident that they don't recurse more\nthan once.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 15 Nov 2022 16:03:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Tue, 15 Nov 2022 at 21:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > New version attached.\n>\n> I looked at this patch and I do not see how it can possibly be safe.\n\nI grant you it is complex, so please bear with me.\n\n\n> The most direct counterexample arises from the fact that\n> HeapCheckForSerializableConflictOut checks SubTransGetTopmostTransaction\n> in some cases. You haven't tried to analyze when, but just disabled\n> the optimization in serializable mode:\n>\n> + * 2. When IsolationIsSerializable() we sometimes need to access topxid.\n> + * This occurs only when SERIALIZABLE is requested by app user.\n> +...\n> + if (MyProc->subxidStatus.overflowed ||\n> + IsolationIsSerializable() ||\n>\n> However, what this is checking is whether *our current transaction*\n> is serializable. If we're not serializable, but other transactions\n> in the system are, then this fails to store information that they'll\n> need for correctness. You'd have to have some way of knowing that\n> no transaction in the system is using serializable mode, and that\n> none will start to do so while this transaction is still in-doubt.\n\nNot true.\n\nsrc/backend/storage/lmgr/README-SSI says this...\n\n* Any transaction which is run at a transaction isolation level\nother than SERIALIZABLE will not be affected by SSI. 
If you want to\nenforce business rules through SSI, all transactions should be run at\nthe SERIALIZABLE transaction isolation level, and that should\nprobably be set as the default.\n\nIf HeapCheckForSerializableConflictOut() cannot find a subxid's parent\nthen it will not be involved in serialization errors.\n\nSo skipping the storage of subxids in subtrans for non-serializable\nxacts is valid for both SSI and non-SSI xacts.\n\nThus the owning transaction can decide to skip the insert into\nsubtrans if it is not serializable.\n\n> I fear that's already enough to kill the idea; but there's more.\n> The subxidStatus.overflowed check quoted above has a similar sort\n> of myopia: it's checking whether our current transaction has\n> already suboverflowed. But (a) that doesn't prove it won't suboverflow\n> later, and (b) the relevant logic in XidInMVCCSnapshot needs to run\n> SubTransGetTopmostTransaction if *any* proc in the snapshot has\n> suboverflowed.\n\nNot the way it is coded now.\n\nFirst, we search the subxid cache in snapshot->subxip.\nThen, and only if the snapshot overflowed (i.e. ANY xact overflowed),\ndo we check subtrans.\n\nThus, the owning xact knows that anyone else will find the first 64\nxids in the subxid cache, so it need not insert them into subtrans,\neven if someone else overflowed. When the owning xact overflows, it\nknows it must now insert the subxid into subtrans before the xid is\nused anywhere in storage, which the patch does. This allows each\nowning xact to decide what to do, independent of the actions of\nothers.\n\n> Lastly, I don't see what the \"transaction on same page\" business\n> has got to do with anything. The comment is certainly failing\n> to make the case that it's safe to skip filling subtrans when that\n> is true.\n\nThat seems strange, I grant you. It's the same logic that is used in\nTransactionIdSetTreeStatus(), in reverse. 
I understand it 'cos I wrote\nit.\n\nTRANSACTION_STATUS_SUB_COMMITTED is only ever used if the topxid and\nsubxid are on different pages. Therefore TransactionIdDidCommit()\nwon't ever see a value of TRANSACTION_STATUS_SUB_COMMITTED unless they\nare on separate pages. So the owning transaction can predict in\nadvance whether anyone will ever call SubTransGetParent() for one of\nits xids. If they might, then we record the values just in case. If\nthey NEVER will, then we can skip recording them.\n\n\nAnd just to be clear, all 3 of the above preconditions must be true\nbefore the owning xact decides to skip writing a subxid to subtrans.\n\n> I think we could salvage this small idea:\n>\n> + * Insert entries into subtrans for this xid, noting that the entry\n> + * points directly to the topxid, not the immediate parent. This is\n> + * done for two reasons:\n> + * (1) so it is faster in a long chain of subxids, because the\n> + * algorithm is then O(1), no matter how many subxids are assigned.\n>\n> but some work would be needed to update the comments around\n> SubTransGetParent and SubTransGetTopmostTransaction to explain that\n> they're no longer reliably different. I think that that is okay for\n> the existing use-cases, but they'd better be documented. In fact,\n> couldn't we simplify them down to one function? Given the restriction\n> that we don't look back in pg_subtrans further than TransactionXmin,\n> I don't think that updated code would ever need to resolve cases\n> written by older code. 
So we could remove the recursive checks\n> entirely, or at least be confident that they don't recurse more\n> than once.\n\nHappy to do so, I'd left it that way to soften the blow of the\nabsorbing the earlier thoughts.\n\n(Since we know all subxids point directly to parent we know we only\never need to do one lookup).\n\n\nI know that if there is a flaw in the above logic then you will find it.\n\nHappy to make any comments changes needed to record the above thoughts\nmore permanently. I tried, but clearly didn't get everything down\nclearly.\n\nThanks for your detailed thoughts.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 15 Nov 2022 22:59:34 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> On Tue, 15 Nov 2022 at 21:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The subxidStatus.overflowed check quoted above has a similar sort\n>> of myopia: it's checking whether our current transaction has\n>> already suboverflowed. But (a) that doesn't prove it won't suboverflow\n>> later, and (b) the relevant logic in XidInMVCCSnapshot needs to run\n>> SubTransGetTopmostTransaction if *any* proc in the snapshot has\n>> suboverflowed.\n\n> Not the way it is coded now.\n\n> First, we search the subxid cache in snapshot->subxip.\n> Then, and only if the snapshot overflowed (i.e. ANY xact overflowed),\n> do we check subtrans.\n\nNo, that's not what XidInMVCCSnapshot does. If snapshot->suboverflowed\nis set (ie, somebody somewhere/somewhen overflowed), then it does\nSubTransGetTopmostTransaction and searches only the xips with the result.\nThis behavior requires that all live subxids be correctly mapped by\nSubTransGetTopmostTransaction, or we'll draw false conclusions.\n\nWe could perhaps make it do what you suggest, but that would require\na complete performance analysis to make sure we're not giving up\nmore than we would gain.\n\nAlso, both GetSnapshotData and CopySnapshot assume that the subxips\narray is not used if suboverflowed is set, and don't bother\n(continuing to) populate it. So we would need code changes and\nadditional cycles in those areas too.\n\nI'm not sure about your other claims, but I'm pretty sure this one\npoint is enough to kill the patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 15 Nov 2022 19:09:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Wed, 16 Nov 2022 at 00:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > On Tue, 15 Nov 2022 at 21:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> The subxidStatus.overflowed check quoted above has a similar sort\n> >> of myopia: it's checking whether our current transaction has\n> >> already suboverflowed. But (a) that doesn't prove it won't suboverflow\n> >> later, and (b) the relevant logic in XidInMVCCSnapshot needs to run\n> >> SubTransGetTopmostTransaction if *any* proc in the snapshot has\n> >> suboverflowed.\n>\n> > Not the way it is coded now.\n>\n> > First, we search the subxid cache in snapshot->subxip.\n> > Then, and only if the snapshot overflowed (i.e. ANY xact overflowed),\n> > do we check subtrans.\n>\n> No, that's not what XidInMVCCSnapshot does. If snapshot->suboverflowed\n> is set (ie, somebody somewhere/somewhen overflowed), then it does\n> SubTransGetTopmostTransaction and searches only the xips with the result.\n> This behavior requires that all live subxids be correctly mapped by\n> SubTransGetTopmostTransaction, or we'll draw false conclusions.\n\nYour comments are correct wrt to the existing coding, but not to the\npatch, which is coded as described and does not suffer those issues.\n\n\n> We could perhaps make it do what you suggest,\n\nAlready done in the patch since v5.\n\n\n> but that would require\n> a complete performance analysis to make sure we're not giving up\n> more than we would gain.\n\nI agree that a full performance analysis is sensible and an objective\nanalysis has been performed by Julien Tachoires. This is described\nhere, along with other explanations:\nhttps://docs.google.com/presentation/d/1A7Ar8_LM5EdC2OHL_j3U9J-QwjMiGw9mmXeBLJOmFlg/edit?usp=sharing\n\nIt is important to understand the context here: there is already a\nwell documented LOSS of performance with the current coding. 
The patch\nalleviates that, and I have not been able to find a performance case\nwhere there is any negative impact.\n\nFurther tests welcome.\n\n\n> Also, both GetSnapshotData and CopySnapshot assume that the subxips\n> array is not used if suboverflowed is set, and don't bother\n> (continuing to) populate it. So we would need code changes and\n> additional cycles in those areas too.\n\nAlready done in the patch since v5.\n\nAny additional cycles apply only to the case of snapshot overflow,\nwhich currently performs very badly.\n\n\n> I'm not sure about your other claims, but I'm pretty sure this one\n> point is enough to kill the patch.\n\nThen please look again because there are misunderstandings above.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 16 Nov 2022 03:10:50 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 8:41 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> > No, that's not what XidInMVCCSnapshot does. If snapshot->suboverflowed\n> > is set (ie, somebody somewhere/somewhen overflowed), then it does\n> > SubTransGetTopmostTransaction and searches only the xips with the result.\n> > This behavior requires that all live subxids be correctly mapped by\n> > SubTransGetTopmostTransaction, or we'll draw false conclusions.\n>\n> Your comments are correct wrt to the existing coding, but not to the\n> patch, which is coded as described and does not suffer those issues.\n>\n\nThis will work because of these two changes in patch 1) even though\nthe snapshot is marked \"overflow\" we will include all the\nsubtransactions information in snapshot->subxip. 2) As Simon mentioned\nin XidInMVCCSnapshot(), first, we search the subxip cache in\nsnapshot->subxip, and only if it is not found in that we will look\ninto the SLRU. So now because of 1) we will always find any\nconcurrent subtransaction in \"snapshot->subxip\".\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 16 Nov 2022 09:56:15 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> On Wed, 16 Nov 2022 at 00:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> No, that's not what XidInMVCCSnapshot does. If snapshot->suboverflowed\n>> is set (ie, somebody somewhere/somewhen overflowed), then it does\n>> SubTransGetTopmostTransaction and searches only the xips with the result.\n>> This behavior requires that all live subxids be correctly mapped by\n>> SubTransGetTopmostTransaction, or we'll draw false conclusions.\n\n> Your comments are correct wrt to the existing coding, but not to the\n> patch, which is coded as described and does not suffer those issues.\n\nAh, OK.\n\nStill ... I really do not like this patch. It introduces a number of\nextremely fragile assumptions, and I think those would come back to\nbite us someday, even if they hold now which I'm still unsure about.\nIt doesn't help that you've chosen to document them only at the place\nmaking them and not at the place(s) likely to break them.\n\nAlso, to be blunt, this is not Ready For Committer. It's more WIP,\nbecause even if the code is okay there are comments all over the system\nthat you've invalidated. (At the very least, the file header comments\nin subtrans.c and the comments in struct SnapshotData need work; I've\nnot looked hard but I'm sure there are more places with comments\nbearing on these data structures.)\n\nPerhaps it would be a good idea to split up the patch. The business\nabout making pg_subtrans flat rather than a tree seems like a good\nidea in any event, although as I said it doesn't seem like we've got\na fleshed-out version of that here. We could push forward on getting\nthat done and then separately consider the rest of it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 16 Nov 2022 10:44:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Wed, 16 Nov 2022 at 15:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > On Wed, 16 Nov 2022 at 00:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> No, that's not what XidInMVCCSnapshot does. If snapshot->suboverflowed\n> >> is set (ie, somebody somewhere/somewhen overflowed), then it does\n> >> SubTransGetTopmostTransaction and searches only the xips with the result.\n> >> This behavior requires that all live subxids be correctly mapped by\n> >> SubTransGetTopmostTransaction, or we'll draw false conclusions.\n>\n> > Your comments are correct wrt to the existing coding, but not to the\n> > patch, which is coded as described and does not suffer those issues.\n>\n> Ah, OK.\n>\n> Still ... I really do not like this patch. It introduces a number of\n> extremely fragile assumptions, and I think those would come back to\n> bite us someday, even if they hold now which I'm still unsure about.\n\nCompletely understand. It took me months to think this through.\n\n> It doesn't help that you've chosen to document them only at the place\n> making them and not at the place(s) likely to break them.\n\nYes, apologies for that, I focused on the holistic explanation in the slides.\n\n> Also, to be blunt, this is not Ready For Committer. It's more WIP,\n> because even if the code is okay there are comments all over the system\n> that you've invalidated. (At the very least, the file header comments\n> in subtrans.c and the comments in struct SnapshotData need work; I've\n> not looked hard but I'm sure there are more places with comments\n> bearing on these data structures.)\n\nNew version with greatly improved comments coming very soon.\n\n> Perhaps it would be a good idea to split up the patch. The business\n> about making pg_subtrans flat rather than a tree seems like a good\n> idea in any event, although as I said it doesn't seem like we've got\n> a fleshed-out version of that here. 
We could push forward on getting\n> that done and then separately consider the rest of it.\n\nYes, I thought you might ask that so, after some thought, have found a\nclean way to do that and have split this into two parts.\n\nJulien has agreed to do further perf tests and is working on that now.\n\nI will post new versions soon, earliest tomorrow.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 17 Nov 2022 17:04:39 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Thu, 17 Nov 2022 at 17:04, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> New version with greatly improved comments coming very soon.\n\n> > Perhaps it would be a good idea to split up the patch. The business\n> > about making pg_subtrans flat rather than a tree seems like a good\n> > idea in any event, although as I said it doesn't seem like we've got\n> > a fleshed-out version of that here. We could push forward on getting\n> > that done and then separately consider the rest of it.\n>\n> Yes, I thought you might ask that so, after some thought, have found a\n> clean way to do that and have split this into two parts.\n\nAttached.\n\n002 includes many comment revisions, as well as flattening the loops\nin SubTransGetTopmostTransaction and TransactionIdDidCommit/Abort\n003 includes the idea to not-always do SubTransSetParent()\n\n> Julien has agreed to do further perf tests and is working on that now.\n>\n> I will post new versions soon, earliest tomorrow.\n\nJulien's results show that 002 patch on its own is probably all we\nneed, but I'm posting 003 also in case that situation changes based on\nother later results with different test cases.\n\nDetailed numbers shown here, plus graph derived from them - thanks Julien!\n\nnsubxacts HEAD (3d0c95) patched 002-v13 patched 002+003-v13\n0 434161 436778 437287\n8 432619 434718 435381\n16 432856 434710 435092\n24 429954 431835 431974\n32 434643 436134 436793\n40 433939 436121 435622\n48 434503 434368 435662\n56 432965 434229 436182\n64 433672 433951 436192\n72 93555 431626 433551\n80 66642 431421 434305\n88 61349 432776 433664\n96 55892 432306 434212\n104 52270 432571 434133\n112 49166 433655 434754\n120 46477 432817 434104\n128 43226 432258 432611\n(yes, the last line shows x10 performance patched, that is not a typo)\n\n--\nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Thu, 17 Nov 2022 17:29:13 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "\n\nOn 11/17/22 18:29, Simon Riggs wrote:\n> On Thu, 17 Nov 2022 at 17:04, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>>\n>> New version with greatly improved comments coming very soon.\n> \n>>> Perhaps it would be a good idea to split up the patch. The business\n>>> about making pg_subtrans flat rather than a tree seems like a good\n>>> idea in any event, although as I said it doesn't seem like we've got\n>>> a fleshed-out version of that here. We could push forward on getting\n>>> that done and then separately consider the rest of it.\n>>\n>> Yes, I thought you might ask that so, after some thought, have found a\n>> clean way to do that and have split this into two parts.\n> \n> Attached.\n> \n> 002 includes many comment revisions, as well as flattening the loops\n> in SubTransGetTopmostTransaction and TransactionIdDidCommit/Abort\n> 003 includes the idea to not-always do SubTransSetParent()\n> \n\nI'm a bit confused by the TransactionIdsAreOnSameXactPage naming. Isn't\nthis really checking clog pages?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 17 Nov 2022 21:29:53 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Thu, 17 Nov 2022 at 20:29, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 11/17/22 18:29, Simon Riggs wrote:\n> > On Thu, 17 Nov 2022 at 17:04, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >>\n> > 003 includes the idea to not-always do SubTransSetParent()\n> >\n> I'm a bit confused by the TransactionIdsAreOnSameXactPage naming. Isn't\n> this really checking clog pages?\n\nYes, clog page. I named it to match the new name of pg_xact\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 18 Nov 2022 08:57:22 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Thu, 17 Nov 2022 at 17:29, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> (yes, the last line shows x10 performance patched, that is not a typo)\n\nNew version of patch, now just a one-line patch!\n\nResults show it's still a good win for XidInMVCCSnapshot().\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 29 Nov 2022 12:49:35 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On 11/29/22 13:49, Simon Riggs wrote:\n> On Thu, 17 Nov 2022 at 17:29, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> \n>> (yes, the last line shows x10 performance patched, that is not a typo)\n> \n> New version of patch, now just a one-line patch!\n> \n> Results show it's still a good win for XidInMVCCSnapshot().\n> \n\nI'm a bit confused - for which workload/benchmark are there results?\nIt's generally a good idea to share the scripts used to run the test and\nnot just a chart.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 29 Nov 2022 14:05:58 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "Hi Tomas,\n\nOn Tue, 29 Nov 2022 at 14:06, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 11/29/22 13:49, Simon Riggs wrote:\n> > On Thu, 17 Nov 2022 at 17:29, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >\n> >> (yes, the last line shows x10 performance patched, that is not a typo)\n> >\n> > New version of patch, now just a one-line patch!\n> >\n> > Results show it's still a good win for XidInMVCCSnapshot().\n> >\n>\n> I'm a bit confused - for which workload/benchmark are there results?\n> It's generally a good idea to share the scripts used to run the test and\n> not just a chart.\n\nThe scripts have been attached to this thread with the initial\nperformance results.\nAnyway, re-sending those (including a minor fix).\n\n-- \nJulien Tachoires\nEDB",
"msg_date": "Tue, 29 Nov 2022 14:18:01 +0100",
"msg_from": "Julien Tachoires <julmon@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> New version of patch, now just a one-line patch!\n\nOf course, there's all the documentation and comments that you falsified.\nAlso, what of the SubTransSetParent call in ProcessTwoPhaseBuffer?\n\n(The one in ProcArrayApplyXidAssignment is actually okay, though the\ncomment making excuses for it no longer is.)\n\nAlso, if we're going to go over to a one-level structure in pg_subtrans,\nwe really ought to simplify the code in subtrans.c accordingly, and\nget rid of the extra lookup currently done for the top parent's parent.\n\nI still wonder whether we'll regret losing information about the\nsubtransaction tree structure, as discussed in the other thread [1].\nThat seems like the main barrier to proceeding with this.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CANbhV-HYfP0ebZRERkpt84ZCDsNX-UYJGYsjfS88jtbYzY%2BKcQ%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 29 Nov 2022 13:30:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-29 13:30:02 -0500, Tom Lane wrote:\n> I still wonder whether we'll regret losing information about the\n> subtransaction tree structure, as discussed in the other thread [1].\n> That seems like the main barrier to proceeding with this.\n\nYea, this has me worried. I suspect that we have a bunch of places relying on\nthis. I'm not at all convinced that optimizing XidInMVCCSnapshot() is a good\nreason for this structural change, given that there's other possible ways to\noptimize (e.g. my proposal to add overflowed subxids to the Snapshot during\nlookup when there's space).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 29 Nov 2022 10:35:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
},
{
"msg_contents": "On Tue, Nov 29, 2022 at 1:35 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-11-29 13:30:02 -0500, Tom Lane wrote:\n> > I still wonder whether we'll regret losing information about the\n> > subtransaction tree structure, as discussed in the other thread [1].\n> > That seems like the main barrier to proceeding with this.\n>\n> Yea, this has me worried.\n\nMe, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 29 Nov 2022 13:44:03 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SUBTRANS: Minimizing calls to SubTransSetParent()"
}
] |
[
{
"msg_contents": "While working on pg_stat_stements, I got some questions from customers to\nhave statistics by application and IP address. I know that we are\ncollecting the\nstatistics by query id, user id, database id and top-level query. There is\nno way to\ncollect the statistics based on IP address and application\nname. That's possible that\nmultiple applications issue the same queries with the same user on the same\ndatabase. We\ncannot segregate those queries from which application this query comes. I\nknow we can\nthis in the log file with log_line_prefix, but I want to see that\naggregates like call count based on IP and application\nname. I did some POC and had a patch. But before sharing the patch.\n\nI need to know if there has been any previous discussion about this topic;\nby the way,\nI did some Googling to find that but failed.\n\nThoughts?\n\n\n-- \n\nIbrar Ahmed.\nSenior Software Engineer, PostgreSQL Consultant.",
"msg_date": "Mon, 8 Aug 2022 20:21:06 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmed@percona.com>",
"msg_from_op": true,
"msg_subject": "Get the statistics based on the application name and IP address"
},
{
"msg_contents": "Hi,\n\nOn Mon, Aug 08, 2022 at 08:21:06PM +0500, Ibrar Ahmed wrote:\n> While working on pg_stat_stements, I got some questions from customers to\n> have statistics by application and IP address.\n> [...]\n> name. I did some POC and had a patch. But before sharing the patch.\n>\n> I need to know if there has been any previous discussion about this topic;\n> by the way,\n\nI don't think there was any discussion on this exactly, but there have been\nsome related discussions.\n\nThis would likely bring 2 problems. First, for now each entry contains its own\nquery text in the query file. There can already be some duplication, which\nisn't great, but adding the application_name and/or IP address will make things\nway worse, so you would probably need to fix that first. There has been some\ndiscussion about it recently (1) but more work and benchmarking are needed.\n\nThe other problem is the multiplication of entries. It's a well known\nlimitation that pg_stat_statements eviction are so costly that it makes it\nunusable. The last numbers I saw about it was ~55% overhead (2). Adding\napplication_name or ip address to the key would probably make\npg_stat_statements unusable for anyone who would actually need those metrics.\n\n[1]: https://www.postgresql.org/message-id/flat/604E3199-2DD2-47DD-AC47-774A6F97DCA9%40amazon.com\n[2]: https://twitter.com/AndresFreundTec/status/1105585237772263424\n\n\n",
"msg_date": "Tue, 9 Aug 2022 01:11:40 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Get the statistics based on the application name and IP address"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 10:11 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n>\n> On Mon, Aug 08, 2022 at 08:21:06PM +0500, Ibrar Ahmed wrote:\n> > While working on pg_stat_stements, I got some questions from customers to\n> > have statistics by application and IP address.\n> > [...]\n> > name. I did some POC and had a patch. But before sharing the patch.\n> >\n> > I need to know if there has been any previous discussion about this\n> topic;\n> > by the way,\n>\n> Thanks for the input.\n\n> I don't think there was any discussion on this exactly, but there have been\n> some related discussions.\n>\n> This would likely bring 2 problems.\n\n\n\n> First, for now each entry contains its own\n> query text in the query file. There can already be some duplication, which\n> isn't great, but adding the application_name and/or IP address will make\n> things\n> way worse, so you would probably need to fix that first.\n\nI doubt that makes it worst because these (IP and Application) will be part\nof\nthe key, not the query text. But yes, I agree that it will increase the\nfootprint of rows,\nexcluding query text.\n\nI am not 100% sure about the query text duplication but will look at that\nin detail,\nif you have more insight, then it will help to solve that.\n\n\n\n> There has been some\n> discussion about it recently (1) but more work and benchmarking are needed.\n>\n> The other problem is the multiplication of entries. It's a well known\n> limitation that pg_stat_statements eviction are so costly that it makes it\n> unusable. The last numbers I saw about it was ~55% overhead (2). Adding\n> application_name or ip address to the key would probably make\n> pg_stat_statements unusable for anyone who would actually need those\n> metrics.\n>\n\nI am sure adding a new item in the key does not affect the performance of\nevictions of the row,\nas it will not be part of that area. 
I am doing some benchmarking and\nhacking to reduce that and will\nsend results with the patch.\n\n\n> [1]:\n> https://www.postgresql.org/message-id/flat/604E3199-2DD2-47DD-AC47-774A6F97DCA9%40amazon.com\n> [2]: https://twitter.com/AndresFreundTec/status/1105585237772263424\n>\n\n\n-- \n\nIbrar Ahmed.\nSenior Software Engineer, PostgreSQL Consultant.\n\nOn Mon, Aug 8, 2022 at 10:11 PM Julien Rouhaud <rjuju123@gmail.com> wrote:Hi,\n\nOn Mon, Aug 08, 2022 at 08:21:06PM +0500, Ibrar Ahmed wrote:\n> While working on pg_stat_stements, I got some questions from customers to\n> have statistics by application and IP address.\n> [...]\n> name. I did some POC and had a patch. But before sharing the patch.\n>\n> I need to know if there has been any previous discussion about this topic;\n> by the way,\nThanks for the input. \nI don't think there was any discussion on this exactly, but there have been\nsome related discussions.\n\nThis would likely bring 2 problems. First, for now each entry contains its own\nquery text in the query file. There can already be some duplication, which\nisn't great, but adding the application_name and/or IP address will make things\nway worse, so you would probably need to fix that first. I doubt that makes it worst because these (IP and Application) will be part ofthe key, not the query text. But yes, I agree that it will increase the footprint of rows, excluding query text.I am not 100% sure about the query text duplication but will look at that in detail,if you have more insight, then it will help to solve that. There has been some\ndiscussion about it recently (1) but more work and benchmarking are needed.\n\nThe other problem is the multiplication of entries. It's a well known\nlimitation that pg_stat_statements eviction are so costly that it makes it\nunusable. The last numbers I saw about it was ~55% overhead (2). 
Adding\napplication_name or ip address to the key would probably make\npg_stat_statements unusable for anyone who would actually need those metrics.I am sure adding a new item in the key does not affect the performance of evictions of the row, as it will not be part of that area. I am doing some benchmarking and hacking to reduce that and will send results with the patch.\n\n[1]: https://www.postgresql.org/message-id/flat/604E3199-2DD2-47DD-AC47-774A6F97DCA9%40amazon.com\n[2]: https://twitter.com/AndresFreundTec/status/1105585237772263424\n-- Ibrar Ahmed. Senior Software Engineer, PostgreSQL Consultant.",
"msg_date": "Wed, 10 Aug 2022 22:42:31 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmed@percona.com>",
"msg_from_op": true,
"msg_subject": "Re: Get the statistics based on the application name and IP address"
},
{
"msg_contents": "Hi,\n\nOn Wed, Aug 10, 2022 at 10:42:31PM +0500, Ibrar Ahmed wrote:\n> On Mon, Aug 8, 2022 at 10:11 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> > First, for now each entry contains its own\n> > query text in the query file. There can already be some duplication, which\n> > isn't great, but adding the application_name and/or IP address will make\n> > things\n> > way worse, so you would probably need to fix that first.\n>\n> I doubt that makes it worst because these (IP and Application) will be part\n> of\n> the key, not the query text.\n\nIt's because you want to add new elements to the key that it would make it\nworse, as the exact same query text will be stored much more often.\n\n> But yes, I agree that it will increase the\n> footprint of rows,\n> excluding query text.\n\nI don't think that this part should be a concern.\n\n> I am not 100% sure about the query text duplication but will look at that\n> in detail,\n> if you have more insight, then it will help to solve that.\n\nYou can refer to the mentioned thread for the (only?) discussion about that.\n\n\n> > There has been some\n> > discussion about it recently (1) but more work and benchmarking are needed.\n> >\n> > The other problem is the multiplication of entries. It's a well known\n> > limitation that pg_stat_statements eviction are so costly that it makes it\n> > unusable. The last numbers I saw about it was ~55% overhead (2). Adding\n> > application_name or ip address to the key would probably make\n> > pg_stat_statements unusable for anyone who would actually need those\n> > metrics.\n> >\n>\n> I am sure adding a new item in the key does not affect the performance of\n> evictions of the row,\n> as it will not be part of that area. I am doing some benchmarking and\n> hacking to reduce that and will\n> send results with the patch.\n\nSorry if that was unclear. 
I didn't meant that adding new items to the key\nwould make evictions way costlier (although it would have some impact), but\nmuch more frequent. Adding application_name and the IP to the key can very\neasily amplify the number of entries so much that you will either need an\nunreasonable value for pg_stat_statements.max (which would likely bring its own\nset of problems) if possible at all, or evict entries frequently.\n\n\n",
"msg_date": "Thu, 11 Aug 2022 09:12:40 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Get the statistics based on the application name and IP address"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nPlease see attached draft of the 2022-08-11 release announcement.\r\n\r\nPlease provide feedback on {technical accuracy, omissions, any other \r\nerrors} no later than 2022-08-11 0:00 AoE[1].\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth",
"msg_date": "Mon, 8 Aug 2022 11:50:16 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "2022-08-11 release announcement draft"
},
{
"msg_contents": "On Mon, Aug 08, 2022 at 11:50:16AM -0400, Jonathan S. Katz wrote:\n> * Fix [`pg_upgrade`](https://www.postgresql.org/docs/current/pgupgrade.html) to\n> detect non-upgradable usages of functions accepting `anyarray` parameters.\n\nuse or usage\n\n> and regressions before the general availability of PostgreSQL 15. As\n> this is a beta release , changes to database behaviors, feature details,\n\nExtraneous space before comma\n\n> APIs are still possible. Your feedback and testing will help determine the final\n\nand APIs\n\n> tweaks on the new features, so please test in the near future. The quality of\n\nremove \"new\" before features ?\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 8 Aug 2022 11:44:31 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: 2022-08-11 release announcement draft"
},
{
"msg_contents": "On 8/8/22 12:44 PM, Justin Pryzby wrote:\r\n> On Mon, Aug 08, 2022 at 11:50:16AM -0400, Jonathan S. Katz wrote:\r\n>> * Fix [`pg_upgrade`](https://www.postgresql.org/docs/current/pgupgrade.html) to\r\n>> detect non-upgradable usages of functions accepting `anyarray` parameters.\r\n> \r\n> use or usage\r\n\r\nThis line comes directly from the release notes and appears to be \r\ncorrect as is.\r\n\r\n>> and regressions before the general availability of PostgreSQL 15. As\r\n>> this is a beta release , changes to database behaviors, feature details,\r\n> \r\n> Extraneous space before comma\r\n\r\nFixed.\r\n\r\n>> APIs are still possible. Your feedback and testing will help determine the final\r\n> \r\n> and APIs\r\n\r\nFixed.\r\n\r\n>> tweaks on the new features, so please test in the near future. The quality of\r\n> \r\n> remove \"new\" before features ?\r\n\r\nI believe this is personal preference. I would like to leave in \"new\" as \r\nwe are releasing new features in PostgreSQL 15.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Mon, 8 Aug 2022 14:35:25 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: 2022-08-11 release announcement draft"
}
] |
[
{
"msg_contents": "I'm trying to wrap my head around the shared memory stats collector\ninfrastructure from\n<20220406030008.2qxipjxo776dwnqs@alap3.anarazel.de> committed in\n5891c7a8ed8f2d3d577e7eea34dacff12d7b6bbd.\n\nI have one question about locking -- afaics there's nothing protecting\nreading the shared memory stats. There is an lwlock protecting\nconcurrent updates of the shared memory stats, but that lock isn't\ntaken when we read the stats. Are we ok relying on atomic 64-bit reads\nor is there something else going on that I'm missing?\n\nIn particular I'm looking at pgstat.c:847 in pgstat_fetch_entry()\nwhich does this:\n\nmemcpy(stats_data,\n pgstat_get_entry_data(kind, entry_ref->shared_stats),\n kind_info->shared_data_len);\n\nstats_data is the returned copy of the stats entry with all the\nstatistics in it. But it's copied from the shared memory location\ndirectly using memcpy and there's no locking or change counter or\nanything protecting this memcpy that I can see.\n\n-- \ngreg\n\n\n",
"msg_date": "Mon, 8 Aug 2022 11:53:19 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: shared-memory based stats collector - v70"
},
{
"msg_contents": "At Mon, 8 Aug 2022 11:53:19 -0400, Greg Stark <stark@mit.edu> wrote in \n> I'm trying to wrap my head around the shared memory stats collector\n> infrastructure from\n> <20220406030008.2qxipjxo776dwnqs@alap3.anarazel.de> committed in\n> 5891c7a8ed8f2d3d577e7eea34dacff12d7b6bbd.\n> \n> I have one question about locking -- afaics there's nothing protecting\n> reading the shared memory stats. There is an lwlock protecting\n> concurrent updates of the shared memory stats, but that lock isn't\n> taken when we read the stats. Are we ok relying on atomic 64-bit reads\n> or is there something else going on that I'm missing?\n> \n> In particular I'm looking at pgstat.c:847 in pgstat_fetch_entry()\n> which does this:\n> \n> memcpy(stats_data,\n> pgstat_get_entry_data(kind, entry_ref->shared_stats),\n> kind_info->shared_data_len);\n> \n> stats_data is the returned copy of the stats entry with all the\n> statistics in it. But it's copied from the shared memory location\n> directly using memcpy and there's no locking or change counter or\n> anything protecting this memcpy that I can see.\n\nWe take LW_SHARED while creating a snapshot of fixed-numbered\nstats. On the other hand we don't for variable-numbered stats. I\nagree to you, that we need that also for variable-numbered stats.\n\nIf I'm not missing something, it's strange that pgstat_lock_entry()\nonly takes LW_EXCLUSIVE. The atached changes the interface of\npgstat_lock_entry() but there's only one user since another read of\nshared stats entry is not using reference. Thus the interface change\nmight be too much. If I just add bare LWLockAcquire/Release() to\npgstat_fetch_entry,the amount of the patch will be reduced.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 09 Aug 2022 17:24:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shared-memory based stats collector - v70"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-09 17:24:35 +0900, Kyotaro Horiguchi wrote:\n> At Mon, 8 Aug 2022 11:53:19 -0400, Greg Stark <stark@mit.edu> wrote in\n> > I'm trying to wrap my head around the shared memory stats collector\n> > infrastructure from\n> > <20220406030008.2qxipjxo776dwnqs@alap3.anarazel.de> committed in\n> > 5891c7a8ed8f2d3d577e7eea34dacff12d7b6bbd.\n> >\n> > I have one question about locking -- afaics there's nothing protecting\n> > reading the shared memory stats. There is an lwlock protecting\n> > concurrent updates of the shared memory stats, but that lock isn't\n> > taken when we read the stats. Are we ok relying on atomic 64-bit reads\n> > or is there something else going on that I'm missing?\n\nYes, that's not right. Not sure how it ended up that way. There was a lot of\nrefactoring and pushing down the locking into different places, I guess it got\nlost somewhere on the way :(. It's unlikely to be a large problem, but we\nshould fix it.\n\n\n> If I'm not missing something, it's strange that pgstat_lock_entry()\n> only takes LW_EXCLUSIVE.\n\nI think it makes some sense, given that there's a larger number of callers for\nthat in various stats-emitting code. Perhaps we could just add a separate\nfunction with a _shared() suffix?\n\n\n> The atached changes the interface of\n> pgstat_lock_entry() but there's only one user since another read of\n> shared stats entry is not using reference. Thus the interface change\n> might be too much. If I just add bare LWLockAcquire/Release() to\n> pgstat_fetch_entry,the amount of the patch will be reduced.\n\nCould you try the pgstat_lock_entry_shared() approach?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 9 Aug 2022 09:53:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: shared-memory based stats collector - v70"
},
{
"msg_contents": "At Tue, 9 Aug 2022 09:53:19 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2022-08-09 17:24:35 +0900, Kyotaro Horiguchi wrote:\n> > If I'm not missing something, it's strange that pgstat_lock_entry()\n> > only takes LW_EXCLUSIVE.\n> \n> I think it makes some sense, given that there's a larger number of callers for\n> that in various stats-emitting code. Perhaps we could just add a separate\n> function with a _shared() suffix?\n\nSure. That was an alternative I had in my mind.\n\n> > The atached changes the interface of\n> > pgstat_lock_entry() but there's only one user since another read of\n> > shared stats entry is not using reference. Thus the interface change\n> > might be too much. If I just add bare LWLockAcquire/Release() to\n> > pgstat_fetch_entry,the amount of the patch will be reduced.\n> \n> Could you try the pgstat_lock_entry_shared() approach?\n\nOf course. Please find the attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 10 Aug 2022 11:39:10 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shared-memory based stats collector - v70"
},
{
"msg_contents": "Hi,\n\nOn 8/10/22 4:39 AM, Kyotaro Horiguchi wrote:\n> At Tue, 9 Aug 2022 09:53:19 -0700, Andres Freund <andres@anarazel.de> wrote in\n>> Could you try the pgstat_lock_entry_shared() approach?\n> Of course. Please find the attached.\n\nThanks for the patch!\n\nIt looks good to me.\n\nOne nit comment though, instead of:\n\n+ /*\n+ * Take lwlock directly instead of using \npg_stat_lock_entry_shared()\n+ * which requires a reference.\n+ */\n\nwhat about?\n\n+ /*\n+ * Acquire the LWLock directly instead of using \npg_stat_lock_entry_shared()\n+ * which requires a reference.\n+ */\n\n\nI think that's more consistent with other comments mentioning LWLock \nacquisition.\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 10 Aug 2022 14:02:34 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: shared-memory based stats collector - v70"
},
{
"msg_contents": "At Wed, 10 Aug 2022 14:02:34 +0200, \"Drouvot, Bertrand\" <bdrouvot@amazon.com> wrote in \n> what about?\n> \n> + /*\n> + * Acquire the LWLock directly instead of using\n> pg_stat_lock_entry_shared()\n> + * which requires a reference.\n> + */\n> \n> \n> I think that's more consistent with other comments mentioning LWLock\n> acquisition.\n\nSure. Thaks!. I did that in the attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 22 Aug 2022 11:32:14 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shared-memory based stats collector - v70"
},
{
"msg_contents": "Hi,\n\nOn 8/22/22 4:32 AM, Kyotaro Horiguchi wrote:\n> At Wed, 10 Aug 2022 14:02:34 +0200, \"Drouvot, Bertrand\" <bdrouvot@amazon.com> wrote in\n>> what about?\n>>\n>> + /*\n>> + * Acquire the LWLock directly instead of using\n>> pg_stat_lock_entry_shared()\n>> + * which requires a reference.\n>> + */\n>>\n>>\n>> I think that's more consistent with other comments mentioning LWLock\n>> acquisition.\n> Sure. Thaks!. I did that in the attached.\n\nThank you!\n\nThe patch looks good to me.\n\nRegards,\n\nBertrand\n\n\n\n",
"msg_date": "Mon, 22 Aug 2022 07:51:39 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: shared-memory based stats collector - v70"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-22 11:32:14 +0900, Kyotaro Horiguchi wrote:\n> At Wed, 10 Aug 2022 14:02:34 +0200, \"Drouvot, Bertrand\" <bdrouvot@amazon.com> wrote in \n> > what about?\n> > \n> > +�������������� /*\n> > +��������������� * Acquire the LWLock directly instead of using\n> > pg_stat_lock_entry_shared()\n> > +��������������� * which requires a reference.\n> > +��������������� */\n> > \n> > \n> > I think that's more consistent with other comments mentioning LWLock\n> > acquisition.\n> \n> Sure. Thaks!. I did that in the attached.\n\nPushed, thanks!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Aug 2022 20:20:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: shared-memory based stats collector - v70"
}
] |
[
{
"msg_contents": "I couldn't find a clear document which showed how this was done.\nThe example would help.\n\n\nDave Cramer",
"msg_date": "Mon, 8 Aug 2022 13:49:55 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Patch to provide example for ssl certification authentication"
}
] |
[
{
"msg_contents": "I'm building a Postgres index access method. For a variety of reasons it's\nmore efficient to store the index data in multiple physical files on disk\nrather in the index's main fork.\n\nI'm trying to create separate rels that can be created and destroyed by the\nparent index access method. I've succeeded in creating them with\nheap.c/heap_create() and also with heap_create_with_catalog(). Where I'm\nstruggling is in getting the rels to be dropped when the parent index is\ndropped.\n\nWhen I use heap_create(), followed by recordDependencyOn(parent_oid,\nchild_oid), the child doesn't get dropped when the parent index is dropped.\n\nWhen I use heap_create_with_catalog(), followed by\nrecordDependencyOn(parent_oid, child_oid), I get a \"cache lookup failed for\nindex <my_child_oid>\" triggered from within CommandCounterIncrement(), and\nno amount of spelunking in the code turns up which cache entry is the\nproblem.\n\nBacking up just a minute, am I going about this the right way?\n\nShould I be using heap_create() at all?\n\nIs there some other way to do this?\n\n\n***********************************************\nHere is my code:\n\n Oid namespaceId = get_namespace_oid(\"relevantdb\", false);\n Oid tablespaceId = parent_index_rel->rd_rel->reltablespace;\n Oid new_seg_oid = get_new_filenode(tablespaceId);\n char *new_seg_name = make_seg_name(parent_index_rel, new_seg_oid);\n Oid new_seg_filenode = InvalidOid; // heap_create() will create it\n\n // this is required, unfortunately. It goes in the relcache.\n // it creates a totally empty tupdesc. 
The natts=1 arg is to avoid an\nerror.\n TupleDesc segTupleDesc = CreateTemplateTupleDesc(1);\n TupleDescInitEntry(segTupleDesc, (AttrNumber) 1,\n \"dummy\",\n INT4OID,\n -1, 0);\n bool shared_relation = parent_index_rel->rd_rel->relisshared;\n bool mapped_relation = RelationIsMapped(parent_index_rel);\n bool allow_system_table_mods = false;\n\n // not used\n TransactionId relfrozenxid;\n MultiXactId relminmxid;\n\n Relation seg_rel = heap_create(\n new_seg_name,\n namespaceId,\n tablespaceId,\n new_seg_oid,\n new_seg_filenode,\n 0, // access method oid\n segTupleDesc,\n RELKIND_INDEX, // or RELKIND_RELATION or\nRELKIND_PARTITIONED_INDEX?\n RELPERSISTENCE,\n shared_relation,\n mapped_relation,\n allow_system_table_mods,\n &relfrozenxid,\n &relminmxid\n );\n\n Assert(relfrozenxid == InvalidTransactionId);\n Assert(relminmxid == InvalidMultiXactId);\n Assert(new_seg_oid == RelationGetRelid(seg_rel));\n\n // make changes visible\n CommandCounterIncrement();\n\n // record dependency so seg gets dropped when index dropped\n Oid parent_oid = parent_index_rel->rd_id;\n record_dependency(parent_oid, new_seg_oid);\n\n table_close(seg_rel, NoLock); /* do not unlock till end of xact */\n\n...\n\nvoid record_dependency(Oid parent_oid, Oid child_oid) {\n ObjectAddress baseobject;\n ObjectAddress segobject;\n baseobject.classId = IndexRelationId;\n baseobject.objectId = parent_oid;\n baseobject.objectSubId = 0;\n segobject.classId = IndexRelationId;\n segobject.objectId = child_oid;\n segobject.objectSubId = 0;\n recordDependencyOn(&segobject, &baseobject, DEPENDENCY_INTERNAL);\n}\n\n\nThe code where I use heap_create_with_catalog() is substantially the same.\n\nI'm building a Postgres index access method. For a variety of reasons it's more efficient to store the index data in multiple physical files on disk rather in the index's main fork.I'm trying to create separate rels that can be created and destroyed by the parent index access method. 
I've succeeded in creating them with heap.c/heap_create() and also with heap_create_with_catalog(). Where I'm struggling is in getting the rels to be dropped when the parent index is dropped.When I use heap_create(), followed by recordDependencyOn(parent_oid, child_oid), the child doesn't get dropped when the parent index is dropped.When I use heap_create_with_catalog(), followed by recordDependencyOn(parent_oid, child_oid), I get a \"cache lookup failed for index <my_child_oid>\" triggered from within CommandCounterIncrement(), and no amount of spelunking in the code turns up which cache entry is the problem.Backing up just a minute, am I going about this the right way? Should I be using heap_create() at all?Is there some other way to do this?***********************************************Here is my code: Oid namespaceId = get_namespace_oid(\"relevantdb\", false); Oid tablespaceId = parent_index_rel->rd_rel->reltablespace; Oid new_seg_oid = get_new_filenode(tablespaceId); char *new_seg_name = make_seg_name(parent_index_rel, new_seg_oid); Oid new_seg_filenode = InvalidOid; // heap_create() will create it // this is required, unfortunately. It goes in the relcache. // it creates a totally empty tupdesc. The natts=1 arg is to avoid an error. TupleDesc segTupleDesc = CreateTemplateTupleDesc(1); TupleDescInitEntry(segTupleDesc, (AttrNumber) 1, \"dummy\", INT4OID, -1, 0); bool shared_relation = parent_index_rel->rd_rel->relisshared; bool mapped_relation = RelationIsMapped(parent_index_rel); bool allow_system_table_mods = false; // not used TransactionId relfrozenxid; MultiXactId relminmxid; Relation seg_rel = heap_create( new_seg_name, namespaceId, tablespaceId, new_seg_oid, new_seg_filenode, 0, // access method oid segTupleDesc, RELKIND_INDEX, // or RELKIND_RELATION or RELKIND_PARTITIONED_INDEX? 
RELPERSISTENCE, shared_relation, mapped_relation, allow_system_table_mods, &relfrozenxid, &relminmxid ); Assert(relfrozenxid == InvalidTransactionId); Assert(relminmxid == InvalidMultiXactId); Assert(new_seg_oid == RelationGetRelid(seg_rel)); // make changes visible CommandCounterIncrement(); // record dependency so seg gets dropped when index dropped Oid parent_oid = parent_index_rel->rd_id; record_dependency(parent_oid, new_seg_oid); table_close(seg_rel, NoLock); /* do not unlock till end of xact */...void record_dependency(Oid parent_oid, Oid child_oid) { ObjectAddress baseobject; ObjectAddress segobject; baseobject.classId = IndexRelationId; baseobject.objectId = parent_oid; baseobject.objectSubId = 0; segobject.classId = IndexRelationId; segobject.objectId = child_oid; segobject.objectSubId = 0; recordDependencyOn(&segobject, &baseobject, DEPENDENCY_INTERNAL);}The code where I use heap_create_with_catalog() is substantially the same.",
"msg_date": "Mon, 8 Aug 2022 13:40:00 -0500",
"msg_from": "Chris Cleveland <ccleve+github@dieselpoint.com>",
"msg_from_op": true,
"msg_subject": "Managing my own index partitions"
}
] |
[
{
"msg_contents": "(resending because I sent the original from the wrong email address...)\n\nI'm building a Postgres index access method. For a variety of reasons it's\nmore efficient to store the index data in multiple physical files on disk\nrather than in the index's main fork.\n\nI'm trying to create separate rels that can be created and destroyed by the\nparent index access method. I've succeeded in creating them with\nheap.c/heap_create() and also with heap_create_with_catalog(). Where I'm\nstruggling is in getting the rels to be dropped when the parent index is\ndropped.\n\nWhen I use heap_create(), followed by recordDependencyOn(parent_oid,\nchild_oid), the child doesn't get dropped when the parent index is dropped.\n\nWhen I use heap_create_with_catalog(), followed by\nrecordDependencyOn(parent_oid, child_oid), I get a \"cache lookup failed for\nindex <my_child_oid>\" triggered from within CommandCounterIncrement(), and\nno amount of spelunking in the code turns up which cache entry is the\nproblem.\n\nBacking up just a minute, am I going about this the right way?\n\nShould I be using heap_create() at all?\n\nIs there some other way to do this?\n\n\n***********************************************\nHere is my code:\n\n Oid namespaceId = get_namespace_oid(\"relevantdb\", false);\n Oid tablespaceId = parent_index_rel->rd_rel->reltablespace;\n Oid new_seg_oid = get_new_filenode(tablespaceId);\n char *new_seg_name = make_seg_name(parent_index_rel, new_seg_oid);\n Oid new_seg_filenode = InvalidOid; // heap_create() will create it\n\n // this is required, unfortunately. It goes in the relcache.\n // it creates a totally empty tupdesc. 
The natts=1 arg is to avoid an\nerror.\n TupleDesc segTupleDesc = CreateTemplateTupleDesc(1);\n TupleDescInitEntry(segTupleDesc, (AttrNumber) 1,\n \"dummy\",\n INT4OID,\n -1, 0);\n bool shared_relation = parent_index_rel->rd_rel->relisshared;\n bool mapped_relation = RelationIsMapped(parent_index_rel);\n bool allow_system_table_mods = false;\n\n // not used\n TransactionId relfrozenxid;\n MultiXactId relminmxid;\n\n Relation seg_rel = heap_create(\n new_seg_name,\n namespaceId,\n tablespaceId,\n new_seg_oid,\n new_seg_filenode,\n 0, // access method oid\n segTupleDesc,\n RELKIND_INDEX, // or RELKIND_RELATION or\nRELKIND_PARTITIONED_INDEX?\n RELPERSISTENCE,\n shared_relation,\n mapped_relation,\n allow_system_table_mods,\n &relfrozenxid,\n &relminmxid\n );\n\n Assert(relfrozenxid == InvalidTransactionId);\n Assert(relminmxid == InvalidMultiXactId);\n Assert(new_seg_oid == RelationGetRelid(seg_rel));\n\n // make changes visible\n CommandCounterIncrement();\n\n // record dependency so seg gets dropped when index dropped\n Oid parent_oid = parent_index_rel->rd_id;\n record_dependency(parent_oid, new_seg_oid);\n\n table_close(seg_rel, NoLock); /* do not unlock till end of xact */\n\n...\n\nvoid record_dependency(Oid parent_oid, Oid child_oid) {\n ObjectAddress baseobject;\n ObjectAddress segobject;\n baseobject.classId = IndexRelationId;\n baseobject.objectId = parent_oid;\n baseobject.objectSubId = 0;\n segobject.classId = IndexRelationId;\n segobject.objectId = child_oid;\n segobject.objectSubId = 0;\n recordDependencyOn(&segobject, &baseobject, DEPENDENCY_INTERNAL);\n}\n\n\nThe code where I use heap_create_with_catalog() is substantially the same.\n\n\n\n\n-- \nChris Cleveland\n312-339-2677 mobile\n\n",
"msg_date": "Mon, 8 Aug 2022 14:04:01 -0500",
"msg_from": "Chris Cleveland <ccleveland@dieselpoint.com>",
"msg_from_op": true,
"msg_subject": "Managing my own index partitions"
}
] |
[
{
"msg_contents": "Hello,\n\nLogical replication of DDL commands support is being worked on in [1].\nHowever, global object commands are quite different from other\nnon-global object DDL commands and need to be handled differently. For\nexample, global object commands include ROLE statements, DATABASE\nstatements, TABLESPACE statements and a subset of GRANT/REVOKE\nstatements if the object being modified is a global object. These\ncommands are different from other DDL commands in that:\n\n1. Global object commands can be executed in any database.\n2. Global objects are not schema qualified.\n3. Global object commands are not captured by event triggers.\n\nI’ve put together a prototype to support logical replication of global\nobject commands in the attached patch. This patch builds on the DDL\nreplication patch from ZJ in [2] and must be applied on top of it.\nHere is a list of global object commands that the patch replicates; you\ncan find more details in function LogGlobalObjectCommand:\n\n/* ROLE statements */\nCreateRoleStmt\nAlterRoleStmt\nAlterRoleSetStmt\nDropRoleStmt\nReassignOwnedStmt\nGrantRoleStmt\n\n/* Database statements */\nCreatedbStmt\nAlterDatabaseStmt\nAlterDatabaseRefreshCollStmt\nAlterDatabaseSetStmt\nDropdbStmt\n\n/* TableSpace statements */\nCreateTableSpaceStmt\nDropTableSpaceStmt\nAlterTableSpaceOptionsStmt\n\n/* GrantStmt and RevokeStmt if objtype is a global object determined\nby EventTriggerSupportsObjectType() */\nGrantStmt\nRevokeStmt\n\nThe idea with this patch is to support global object command\nreplication by WAL logging the command using the same function for DDL\nlogging - LogLogicalDDLMessage towards the end of\nstandard_ProcessUtility. 
Because global objects are not schema\nqualified, we can skip the deparser invocation and directly log the\noriginal command string for replay on the subscriber.\n\nA key problem to address is that global objects can become\ninconsistent between the publisher and the subscriber if a command\nmodifying the global object gets executed in a database (on the source\nside) that doesn't replicate the global object commands. I think we\ncan work on the following two aspects in order to avoid such\ninconsistency:\n\n1. Introduce a publication option for global object commands\nreplication and document that logical replication of global object\ncommands is preferred to be enabled on all databases. Otherwise\ninconsistency can happen if a command modifies the global object in a\ndatabase that doesn't replicate global object commands.\n\nFor example, we could introduce the following publication option\npublish_global_object_command :\nCREATE PUBLICATION mypub\nFOR ALL TABLES\nWITH (publish = 'insert, delete, update', publish_global_object_command = true);\n\nWe may consider other fine tuned global command options such as\n“publish_role_statements”, “publish_database_statements”,\n“publish_tablespace_statements” and \"publish_grant_statements\", i.e.\nyou pick which global commands you want replicated. For example, you\ncan do this if you need a permission or tablespace to be set up\ndifferently on the target cluster. In addition, we may need to adjust\nthe syntax once the DDL replication syntax finalizes.\n\n2. 
Introduce the following database cluster level logical replication\ncommands to avoid such inconsistency, this is especially handy when\nthere is a large number of databases to configure for logical\nreplication.\n\nCREATE PUBLICATION GROUP mypub_\nFOR ALL DATABASES\nWITH (publish = 'insert, delete, update', publish_global_object_command = true);\n\nCREATE SUBSCRIPTION GROUP mysub_\nCONNECTION 'dbnames = \\“path to file\\” host=hostname user=username port=5432'\nPUBLICATION GROUP mypub_;\n\nUnder the hood, the CREATE PUBLICATION GROUP command generates one\nCREATE PUBLICATION mypub_n sub-command for each database in the\ncluster where n is a monotonically increasing integer from 1. The\ncommand outputs the (dbname, publication name) pairs which can be\nsaved in a file and then used on the subscription side.\n\nSimilarly, the CREATE SUBSCRIPTION GROUP command will generate one\nCREATE SUBSCRIPTION mysub_n sub-command for each database in the\ndbnames file. The dbnames file contains the (dbname, publication name)\npairs which come from the output of the CREATE PUBLICATION GROUP\ncommand. Notice the connection string doesn’t have the dbname field,\nDuring execution the connection string will be appended the dbname\nretrieved from the dbnames file. By default the target DB name is the\nsame as the source DB name, optionally user can specify the source_db\nto target_db mapping in the dbnames file.\n\nIn addition, we might want to create dependencies for the\npublications/subscriptions created by the above commands in order to\nguarantee the group consistency. Also we need to enforce that there is\nonly one group of publications/subscriptions for database cluster\nlevel replication.\n\nLogical replication of all commands across an entire cluster (instead\nof on a per-database basis) is a separate topic. 
We can start another\nthread after implementing a prototype.\n\nPlease let me know your thoughts.\n\n[1] https://www.postgresql.org/message-id/flat/CAAD30U%2BpVmfKwUKy8cbZOnUXyguJ-uBNejwD75Kyo%3DOjdQGJ9g%40mail.gmail.com\n[2]https://www.postgresql.org/message-id/OS0PR01MB5716009FDCCC0B50BCB14A99949D9%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nWith Regards,\nZheng Li\nAmazon RDS/Aurora for PostgreSQL",
"msg_date": "Mon, 8 Aug 2022 16:01:33 -0400",
"msg_from": "Zheng Li <zhengli10@gmail.com>",
"msg_from_op": true,
"msg_subject": "Support logical replication of global object commands"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 1:31 AM Zheng Li <zhengli10@gmail.com> wrote:\n>\n> Hello,\n>\n> Logical replication of DDL commands support is being worked on in [1].\n> However, global object commands are quite different from other\n> non-global object DDL commands and need to be handled differently. For\n> example, global object commands include ROLE statements, DATABASE\n> statements, TABLESPACE statements and a subset of GRANT/REVOKE\n> statements if the object being modified is a global object. These\n> commands are different from other DDL commands in that:\n>\n> 1. Global object commands can be executed in any database.\n> 2. Global objects are not schema qualified.\n> 3. Global object commands are not captured by event triggers.\n>\n> I’ve put together a prototype to support logical replication of global\n> object commands in the attached patch. This patch builds on the DDL\n> replication patch from ZJ in [2] and must be applied on top of it.\n> Here is a list of global object commands that the patch replicate, you\n> can find more details in function LogGlobalObjectCommand:\n>\n> /* ROLE statements */\n> CreateRoleStmt\n> AlterRoleStmt\n> AlterRoleSetStmt\n> DropRoleStmt\n> ReassignOwnedStmt\n> GrantRoleStmt\n>\n> /* Database statements */\n> CreatedbStmt\n> AlterDatabaseStmt\n> AlterDatabaseRefreshCollStmt\n> AlterDatabaseSetStmt\n> DropdbStmt\n>\n> /* TableSpace statements */\n> CreateTableSpaceStmt\n> DropTableSpaceStmt\n> AlterTableSpaceOptionsStmt\n>\n> /* GrantStmt and RevokeStmt if objtype is a global object determined\n> by EventTriggerSupportsObjectType() */\n> GrantStmt\n> RevokeStmt\n>\n> The idea with this patch is to support global objects commands\n> replication by WAL logging the command using the same function for DDL\n> logging - LogLogicalDDLMessage towards the end of\n> standard_ProcessUtility. 
Because global objects are not schema\n> qualified, we can skip the deparser invocation and directly log the\n> original command string for replay on the subscriber.\n>\n> A key problem to address is that global objects can become\n> inconsistent between the publisher and the subscriber if a command\n> modifying the global object gets executed in a database (on the source\n> side) that doesn't replicate the global object commands. I think we\n> can work on the following two aspects in order to avoid such\n> inconsistency:\n>\n> 1. Introduce a publication option for global object commands\n> replication and document that logical replication of global object\n> commands is preferred to be enabled on all databases. Otherwise\n> inconsistency can happen if a command modifies the global object in a\n> database that doesn't replicate global object commands.\n>\n> For example, we could introduce the following publication option\n> publish_global_object_command :\n> CREATE PUBLICATION mypub\n> FOR ALL TABLES\n> WITH (publish = 'insert, delete, update', publish_global_object_command = true);\n>\n\nTying global objects with FOR ALL TABLES seems odd to me. One possible\nidea could be to introduce publications FOR ALL OBJECTS. However, I am\nnot completely sure whether tying global objects with\ndatabase-specific publications is what users would expect but OTOH I\ndon't have any better ideas here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 12 Aug 2022 17:41:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support logical replication of global object commands"
},
{
"msg_contents": "On Fri, Aug 12, 2022 at 5:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 9, 2022 at 1:31 AM Zheng Li <zhengli10@gmail.com> wrote:\n> >\n> > Hello,\n> >\n> > Logical replication of DDL commands support is being worked on in [1].\n> > However, global object commands are quite different from other\n> > non-global object DDL commands and need to be handled differently. For\n> > example, global object commands include ROLE statements, DATABASE\n> > statements, TABLESPACE statements and a subset of GRANT/REVOKE\n> > statements if the object being modified is a global object. These\n> > commands are different from other DDL commands in that:\n> >\n> > 1. Global object commands can be executed in any database.\n> > 2. Global objects are not schema qualified.\n> > 3. Global object commands are not captured by event triggers.\n> >\n> > I’ve put together a prototype to support logical replication of global\n> > object commands in the attached patch. This patch builds on the DDL\n> > replication patch from ZJ in [2] and must be applied on top of it.\n> > Here is a list of global object commands that the patch replicate, you\n> > can find more details in function LogGlobalObjectCommand:\n> >\n> > /* ROLE statements */\n> > CreateRoleStmt\n> > AlterRoleStmt\n> > AlterRoleSetStmt\n> > DropRoleStmt\n> > ReassignOwnedStmt\n> > GrantRoleStmt\n> >\n> > /* Database statements */\n> > CreatedbStmt\n> > AlterDatabaseStmt\n> > AlterDatabaseRefreshCollStmt\n> > AlterDatabaseSetStmt\n> > DropdbStmt\n> >\n> > /* TableSpace statements */\n> > CreateTableSpaceStmt\n> > DropTableSpaceStmt\n> > AlterTableSpaceOptionsStmt\n> >\n> > /* GrantStmt and RevokeStmt if objtype is a global object determined\n> > by EventTriggerSupportsObjectType() */\n> > GrantStmt\n> > RevokeStmt\n> >\n> > The idea with this patch is to support global objects commands\n> > replication by WAL logging the command using the same function for DDL\n> > logging - LogLogicalDDLMessage 
towards the end of\n> > standard_ProcessUtility. Because global objects are not schema\n> > qualified, we can skip the deparser invocation and directly log the\n> > original command string for replay on the subscriber.\n> >\n> > A key problem to address is that global objects can become\n> > inconsistent between the publisher and the subscriber if a command\n> > modifying the global object gets executed in a database (on the source\n> > side) that doesn't replicate the global object commands. I think we\n> > can work on the following two aspects in order to avoid such\n> > inconsistency:\n> >\n> > 1. Introduce a publication option for global object commands\n> > replication and document that logical replication of global object\n> > commands is preferred to be enabled on all databases. Otherwise\n> > inconsistency can happen if a command modifies the global object in a\n> > database that doesn't replicate global object commands.\n> >\n> > For example, we could introduce the following publication option\n> > publish_global_object_command :\n> > CREATE PUBLICATION mypub\n> > FOR ALL TABLES\n> > WITH (publish = 'insert, delete, update', publish_global_object_command = true);\n> >\n>\n> Tying global objects with FOR ALL TABLES seems odd to me. One possible\n> idea could be to introduce publications FOR ALL OBJECTS. However, I am\n> not completely sure whether tying global objects with\n> database-specific publications is what users would expect but OTOH I\n> don't have any better ideas here.\n>\n\nCan we think of relying to send WAL of such DDLs just based on whether\nthere is a corresponding publication (ex. 
publication of ALL OBJECTS)?\nI mean avoid database-specific filtering in decoding for such DDL\ncommands but not sure how much better it is than the current proposal?\nThe other idea that occurred to me is to have separate event triggers\nfor global objects that we can store in the shared catalog but again\nit is not clear how to specify the corresponding function as functions\nare database specific.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 16 Aug 2022 10:32:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support logical replication of global object commands"
},
{
"msg_contents": "> Can we think of relying to send WAL of such DDLs just based on whether\n> there is a corresponding publication (ex. publication of ALL OBJECTS)?\n> I mean avoid database-specific filtering in decoding for such DDL\n> commands but not sure how much better it is than the current proposal?\n\nI think a publication of ALL OBJECTS sounds intuitive. Does it mean we'll\npublish all DDL commands, all commit and abort operations in every\ndatabase if there is such publication of ALL OBJECTS?\n\nBest,\nZheng\n\n\n",
"msg_date": "Tue, 16 Aug 2022 02:05:16 -0400",
"msg_from": "Zheng Li <zhengli10@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Support logical replication of global object commands"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 11:35 AM Zheng Li <zhengli10@gmail.com> wrote:\n>\n> > Can we think of relying to send WAL of such DDLs just based on whether\n> > there is a corresponding publication (ex. publication of ALL OBJECTS)?\n> > I mean avoid database-specific filtering in decoding for such DDL\n> > commands but not sure how much better it is than the current proposal?\n>\n> I think a publication of ALL OBJECTS sounds intuitive. Does it mean we'll\n> publish all DDL commands, all commit and abort operations in every\n> database if there is such publication of ALL OBJECTS?\n>\n\nActually, I intend something for global objects. But the main thing\nthat is worrying me about this is that we don't have a clean way to\nuntie global object replication from database-specific object\nreplication.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 16 Aug 2022 17:17:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support logical replication of global object commands"
},
{
"msg_contents": "> > I think a publication of ALL OBJECTS sounds intuitive. Does it mean we'll\n> > publish all DDL commands, all commit and abort operations in every\n> > database if there is such publication of ALL OBJECTS?\n> >\n>\n> Actually, I intend something for global objects. But the main thing\n> that is worrying me about this is that we don't have a clean way to\n> untie global object replication from database-specific object\n> replication.\n\nI think ultimately we need a clean and efficient way to publish (and\nsubscribe to) any changes in all databases, preferably in one logical\nreplication slot.\n\n--\nRegards,\nZheng\n\n\n",
"msg_date": "Mon, 29 Aug 2022 22:39:02 -0400",
"msg_from": "Zheng Li <zhengli10@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Support logical replication of global object commands"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 8:09 AM Zheng Li <zhengli10@gmail.com> wrote:\n>\n> > > I think a publication of ALL OBJECTS sounds intuitive. Does it mean we'll\n> > > publish all DDL commands, all commit and abort operations in every\n> > > database if there is such publication of ALL OBJECTS?\n> > >\n> >\n> > Actually, I intend something for global objects. But the main thing\n> > that is worrying me about this is that we don't have a clean way to\n> > untie global object replication from database-specific object\n> > replication.\n>\n> I think ultimately we need a clean and efficient way to publish (and\n> subscribe to) any changes in all databases, preferably in one logical\n> replication slot.\n>\n\nAgreed. I was thinking currently for logical replication both\nwalsender and slot are database-specific. So we need a way to\ndistinguish the WAL for global objects and then avoid filtering based\non the slot's database during decoding. I also thought about whether\nwe want to have a WALSender that is not connected to a database for\nthe replication of global objects but I couldn't come up with a reason\nfor doing so. Do you have any thoughts on this matter?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Feb 2023 12:02:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support logical replication of global object commands"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 12:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 30, 2022 at 8:09 AM Zheng Li <zhengli10@gmail.com> wrote:\n> >\n> > > > I think a publication of ALL OBJECTS sounds intuitive. Does it mean we'll\n> > > > publish all DDL commands, all commit and abort operations in every\n> > > > database if there is such publication of ALL OBJECTS?\n> > > >\n> > >\n> > > Actually, I intend something for global objects. But the main thing\n> > > that is worrying me about this is that we don't have a clean way to\n> > > untie global object replication from database-specific object\n> > > replication.\n> >\n> > I think ultimately we need a clean and efficient way to publish (and\n> > subscribe to) any changes in all databases, preferably in one logical\n> > replication slot.\n> >\n>\n> Agreed. I was thinking currently for logical replication both\n> walsender and slot are database-specific. So we need a way to\n> distinguish the WAL for global objects and then avoid filtering based\n> on the slot's database during decoding. I also thought about whether\n> we want to have a WALSender that is not connected to a database for\n> the replication of global objects but I couldn't come up with a reason\n> for doing so. Do you have any thoughts on this matter?\n>\n\nAnother thing about the patch proposed here is that it LOGs the DDL\nfor global objects without any consideration of whether that is\nrequired for logical replication. This is quite unlike what we are\nplanning to do for other DDLs where it will be logged only when the\npublication has defined an event trigger for it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Feb 2023 13:58:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support logical replication of global object commands"
},
{
"msg_contents": "> > > Actually, I intend something for global objects. But the main thing\n> > > that is worrying me about this is that we don't have a clean way to\n> > > untie global object replication from database-specific object\n> > > replication.\n> >\n> > I think ultimately we need a clean and efficient way to publish (and\n> > subscribe to) any changes in all databases, preferably in one logical\n> > replication slot.\n> >\n>\n> Agreed. I was thinking currently for logical replication both\n> walsender and slot are database-specific. So we need a way to\n> distinguish the WAL for global objects and then avoid filtering based\n> on the slot's database during decoding.\n\nBut which WALSender should handle the WAL for global objects if we\ndon't filter by database? Is there any specific problem you see for\ndecoding global objects commands in a database specific WALSender?\n\n> I also thought about whether\n> we want to have a WALSender that is not connected to a database for\n> the replication of global objects but I couldn't come up with a reason\n> for doing so. Do you have any thoughts on this matter?\n\nRegards,\nZane\n\n\n",
"msg_date": "Fri, 17 Feb 2023 00:28:06 -0500",
"msg_from": "Zheng Li <zhengli10@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Support logical replication of global object commands"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 10:58 AM Zheng Li <zhengli10@gmail.com> wrote:\n>\n> > > > Actually, I intend something for global objects. But the main thing\n> > > > that is worrying me about this is that we don't have a clean way to\n> > > > untie global object replication from database-specific object\n> > > > replication.\n> > >\n> > > I think ultimately we need a clean and efficient way to publish (and\n> > > subscribe to) any changes in all databases, preferably in one logical\n> > > replication slot.\n> > >\n> >\n> > Agreed. I was thinking currently for logical replication both\n> > walsender and slot are database-specific. So we need a way to\n> > distinguish the WAL for global objects and then avoid filtering based\n> > on the slot's database during decoding.\n>\n> But which WALSender should handle the WAL for global objects if we\n> don't filter by database? Is there any specific problem you see for\n> decoding global objects commands in a database specific WALSender?\n>\n\nI haven't verified but I was concerned about the below check:\nlogicalddl_decode\n{\n...\n+\n+ if (message->dbId != ctx->slot->data.database ||\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 17 Feb 2023 15:17:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support logical replication of global object commands"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 4:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Feb 17, 2023 at 10:58 AM Zheng Li <zhengli10@gmail.com> wrote:\n> >\n> > > > > Actually, I intend something for global objects. But the main thing\n> > > > > that is worrying me about this is that we don't have a clean way to\n> > > > > untie global object replication from database-specific object\n> > > > > replication.\n> > > >\n> > > > I think ultimately we need a clean and efficient way to publish (and\n> > > > subscribe to) any changes in all databases, preferably in one logical\n> > > > replication slot.\n> > > >\n> > >\n> > > Agreed. I was thinking currently for logical replication both\n> > > walsender and slot are database-specific. So we need a way to\n> > > distinguish the WAL for global objects and then avoid filtering based\n> > > on the slot's database during decoding.\n> >\n> > But which WALSender should handle the WAL for global objects if we\n> > don't filter by database? Is there any specific problem you see for\n> > decoding global objects commands in a database specific WALSender?\n> >\n>\n> I haven't verified but I was concerned about the below check:\n> logicalddl_decode\n> {\n> ...\n> +\n> + if (message->dbId != ctx->slot->data.database ||\n\nOK, let's suppose we don't filter by database for global commands when\ndecoding ddl records, roughly what the following code does:\n logicalddl_decode\n {\n ...\n\n if (message->dbId != ctx->slot->data.database ||\n + message->cmdtype != DCT_GlobalObjectCmd\n\nBut this is not enough, we also need the subsequent commit record of\nthe txn to be decoded in order to replicate the global command. So I\nthink we also need to make DecodeCommit bypass the filter by database\nif global object replication is turned on and we have decoded a global\ncommand in the txn.\n\nRegards,\nZane\n\n\n",
"msg_date": "Wed, 1 Mar 2023 00:19:50 -0500",
"msg_from": "Zheng Li <zhengli10@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Support logical replication of global object commands"
},
{
"msg_contents": "Hi,\n\nOn Tue, Aug 9, 2022 at 5:01 AM Zheng Li <zhengli10@gmail.com> wrote:\n>\n> Hello,\n>\n> Logical replication of DDL commands support is being worked on in [1].\n> However, global object commands are quite different from other\n> non-global object DDL commands and need to be handled differently. For\n> example, global object commands include ROLE statements, DATABASE\n> statements, TABLESPACE statements and a subset of GRANT/REVOKE\n> statements if the object being modified is a global object. These\n> commands are different from other DDL commands in that:\n>\n> 1. Global object commands can be executed in any database.\n> 2. Global objects are not schema qualified.\n> 3. Global object commands are not captured by event triggers.\n>\n> I’ve put together a prototype to support logical replication of global\n> object commands in the attached patch. This patch builds on the DDL\n> replication patch from ZJ in [2] and must be applied on top of it.\n> Here is a list of global object commands that the patch replicate, you\n> can find more details in function LogGlobalObjectCommand:\n>\n> /* ROLE statements */\n> CreateRoleStmt\n> AlterRoleStmt\n> AlterRoleSetStmt\n> DropRoleStmt\n> ReassignOwnedStmt\n> GrantRoleStmt\n>\n> /* Database statements */\n> CreatedbStmt\n> AlterDatabaseStmt\n> AlterDatabaseRefreshCollStmt\n> AlterDatabaseSetStmt\n> DropdbStmt\n>\n> /* TableSpace statements */\n> CreateTableSpaceStmt\n> DropTableSpaceStmt\n> AlterTableSpaceOptionsStmt\n>\n> /* GrantStmt and RevokeStmt if objtype is a global object determined\n> by EventTriggerSupportsObjectType() */\n> GrantStmt\n> RevokeStmt\n>\n> The idea with this patch is to support global objects commands\n> replication by WAL logging the command using the same function for DDL\n> logging - LogLogicalDDLMessage towards the end of\n> standard_ProcessUtility. 
Because global objects are not schema\n> qualified, we can skip the deparser invocation and directly log the\n> original command string for replay on the subscriber.\n>\n> A key problem to address is that global objects can become\n> inconsistent between the publisher and the subscriber if a command\n> modifying the global object gets executed in a database (on the source\n> side) that doesn't replicate the global object commands. I think we\n> can work on the following two aspects in order to avoid such\n> inconsistency:\n>\n> 1. Introduce a publication option for global object commands\n> replication and document that logical replication of global object\n> commands is preferred to be enabled on all databases. Otherwise\n> inconsistency can happen if a command modifies the global object in a\n> database that doesn't replicate global object commands.\n>\n> For example, we could introduce the following publication option\n> publish_global_object_command :\n> CREATE PUBLICATION mypub\n> FOR ALL TABLES\n> WITH (publish = 'insert, delete, update', publish_global_object_command = true);\n>\n> We may consider other fine tuned global command options such as\n> “publish_role_statements”, “publish_database_statements”,\n> “publish_tablespace_statements” and \"publish_grant_statements\", i.e.\n> you pick which global commands you want replicated. For example, you\n> can do this if you need a permission or tablespace to be set up\n> differently on the target cluster. In addition, we may need to adjust\n> the syntax once the DDL replication syntax finalizes.\n>\n> 2. 
Introduce the following database cluster level logical replication\n> commands to avoid such inconsistency, this is especially handy when\n> there is a large number of databases to configure for logical\n> replication.\n>\n> CREATE PUBLICATION GROUP mypub_\n> FOR ALL DATABASES\n> WITH (publish = 'insert, delete, update', publish_global_object_command = true);\n>\n> CREATE SUBSCRIPTION GROUP mysub_\n> CONNECTION 'dbnames = \\“path to file\\” host=hostname user=username port=5432'\n> PUBLICATION GROUP mypub_;\n>\n> Under the hood, the CREATE PUBLICATION GROUP command generates one\n> CREATE PUBLICATION mypub_n sub-command for each database in the\n> cluster where n is a monotonically increasing integer from 1. The\n> command outputs the (dbname, publication name) pairs which can be\n> saved in a file and then used on the subscription side.\n>\n> Similarly, the CREATE SUBSCRIPTION GROUP command will generate one\n> CREATE SUBSCRIPTION mysub_n sub-command for each database in the\n> dbnames file. The dbnames file contains the (dbname, publication name)\n> pairs which come from the output of the CREATE PUBLICATION GROUP\n> command. Notice the connection string doesn’t have the dbname field,\n> During execution the connection string will be appended the dbname\n> retrieved from the dbnames file. By default the target DB name is the\n> same as the source DB name, optionally user can specify the source_db\n> to target_db mapping in the dbnames file.\n>\n> In addition, we might want to create dependencies for the\n> publications/subscriptions created by the above commands in order to\n> guarantee the group consistency. Also we need to enforce that there is\n> only one group of publications/subscriptions for database cluster\n> level replication.\n>\n> Logical replication of all commands across an entire cluster (instead\n> of on a per-database basis) is a separate topic. 
We can start another\n> thread after implementing a prototype.\n>\n> Please let me know your thoughts.\n\nThank you for working on this item.\n\nI think that there are some (possibly) tricky challenges that haven't\nbeen discussed yet to support replicating global objects.\n\nFirst, publications covering global objects (roles, databases,\nand tablespaces) cannot be stored in database-specific tables like\npg_publication; they belong in some shared place that all\ndatabases can access. Maybe we need to have\na shared catalog like pg_shpublication or pg_publication_role to store\npublications related to global objects or the relationship between\nsuch publications and global objects. Second, we might need to change\nthe logical decoding infrastructure so that it's aware of shared\ncatalog changes. Currently we need to scan only db-specific catalogs.\nFinally, since we process CREATE DATABASE in a different way than\nother DDLs (by cloning another database such as template1), simply\nreplicating the CREATE DATABASE statement would not produce the same\nresults as on the publisher. Also, since event triggers are not fired on\nDDLs for global objects, always WAL-logging such DDL statements like\nthe proposed patch does is not a good idea.\n\nGiven that there seem to be some tricky problems and there is a\ndiscussion about cutting the scope to make the initial patch small[1], I\nthink it's better to do this work after the first version.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAA4eK1K3VXfTWXbLADcH81J%3D%3D7ussvNdqLFHN68sEokDPueu7w%40mail.gmail.com\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 28 Mar 2023 15:41:15 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support logical replication of global object commands"
},
{
"msg_contents": "> I think that there are some (possibly) tricky challenges that haven't\n> been discussed yet to support replicating global objects.\n>\n> First, as for publications having global objects (roles, databases,\n> and tablespaces), but storing them in database specific tables like\n> pg_publication doesn't make sense, because it should be at some shared\n> place where all databases can have access to it. Maybe we need to have\n> a shared catalog like pg_shpublication or pg_publication_role to store\n> publications related to global objects or the relationship between\n> such publications and global objects. Second, we might need to change\n> the logical decoding infrastructure so that it's aware of shared\n> catalog changes.\n\nThanks for the feedback. This is insightful.\n\n> Currently we need to scan only db-specific catalogs.\n> Finally, since we process CREATE DATABASE in a different way than\n> other DDLs (by cloning another database such as template1), simply\n> replicating the CREATE DATABASE statement would not produce the same\n> results as the publisher. Also, since event triggers are not fired on\n> DDLs for global objects, always WAL-logging such DDL statements like\n> the proposed patch does is not a good idea.\n\n> Given that there seems to be some tricky problems and there is a\n> discussion for cutting the scope to make the initial patch small[1], I\n> think it's better to do this work after the first version.\n\nAgreed.\n\nRegards,\nZane\n\n\n",
"msg_date": "Tue, 28 Mar 2023 10:29:49 -0400",
"msg_from": "Zheng Li <zhengli10@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Support logical replication of global object commands"
}
] |
[
{
"msg_contents": "Hi,\nI was looking at ExecSort() w.r.t. datum sort.\n\nI wonder if the datumSort field can be dropped.\nHere is a patch illustrating the potential simplification.\n\nPlease take a look.\n\nThanks",
"msg_date": "Mon, 8 Aug 2022 14:57:59 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "dropping datumSort field"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 5:51 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> Hi,\n> I was looking at ExecSort() w.r.t. datum sort.\n>\n> I wonder if the datumSort field can be dropped.\n> Here is a patch illustrating the potential simplification.\n>\n> Please take a look.\n\nOne problem with this patch is that, if I apply it, PostgreSQL does not compile:\n\nnodeSort.c:197:6: error: use of undeclared identifier 'tupDesc'\n if (tupDesc->natts == 1)\n ^\n1 error generated.\n\nLeaving that aside, I don't really see any advantage in this sort of change.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Aug 2022 11:01:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: dropping datumSort field"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 8:01 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Aug 8, 2022 at 5:51 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > Hi,\n> > I was looking at ExecSort() w.r.t. datum sort.\n> >\n> > I wonder if the datumSort field can be dropped.\n> > Here is a patch illustrating the potential simplification.\n> >\n> > Please take a look.\n>\n> One problem with this patch is that, if I apply it, PostgreSQL does not\n> compile:\n>\n> nodeSort.c:197:6: error: use of undeclared identifier 'tupDesc'\n> if (tupDesc->natts == 1)\n> ^\n> 1 error generated.\n>\n> Leaving that aside, I don't really see any advantage in this sort of\n> change.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\nThanks for trying out the patch.\n\n TupleDesc tupDesc;\n\ntupDesc is declared inside `if (!node->sort_Done)` block whereas the last\nreference to tupDesc is outside the if block.\n\nI take your review comment and will go back to do more homework.\n\nCheers",
"msg_date": "Tue, 9 Aug 2022 08:23:27 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: dropping datumSort field"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 11:16 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> tupDesc is declared inside `if (!node->sort_Done)` block whereas the last reference to tupDesc is outside the if block.\n\nYep.\n\n> I take your review comment and will go back to do more homework.\n\nThe real point for me here is you haven't offered any reason to make\nthis change. The structure member in question is basically free.\nBecause of alignment padding it uses no more memory, and it makes the\nintent of the code clearer.\n\nLet's not change things just because we could.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Aug 2022 11:24:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: dropping datumSort field"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 8:24 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Aug 9, 2022 at 11:16 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > tupDesc is declared inside `if (!node->sort_Done)` block whereas the\n> last reference to tupDesc is outside the if block.\n>\n> Yep.\n>\n> > I take your review comment and will go back to do more homework.\n>\n> The real point for me here is you haven't offered any reason to make\n> this change. The structure member in question is basically free.\n> Because of alignment padding it uses no more memory, and it makes the\n> intent of the code clearer.\n>\n> Let's not change things just because we could.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\nI should have provided motivation for the patch.\n\nI was aware of recent changes around ExprEvalStep. e.g.\n\nRemove unused fields from ExprEvalStep\n\nThough the datumSort field may be harmless for now, in the future, the\n(auto) alignment padding may not work if more fields are added to the\nstruct.\nI think we should always leave some room in struct's for future expansion.\n\nAs for making the intent of the code clearer, the datumSort field is only\nused in one method.\nMy previous patch removed some comment which should have been shifted into\nthis method.\nIn my opinion, localizing the check in single method is easier to\nunderstand than resorting to additional struct field.\n\nCheers",
"msg_date": "Tue, 9 Aug 2022 08:48:44 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: dropping datumSort field"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 11:42 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> Though the datumSort field may be harmless for now, in the future, the (auto) alignment padding may not work if more fields are added to the struct.\n> I think we should always leave some room in struct's for future expansion.\n\nI doubt the size of this struct is particularly important, unlike\nExprEvalStep which needs to be small. But if it turns out in the\nfuture that we need to try to squeeze this struct into fewer bytes, we\ncan always do something like this then. Right now there's no obvious\npoint to it.\n\nSure, it might be valuable *if* we add more fields to the struct and\n*if* that means that the byte taken up by this flag actually makes the\nstruct bigger and *if* the size of the struct is demonstrated to be a\nperformance problem. But right now none of that has happened, and\nmaybe none of it will ever happen.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Aug 2022 13:43:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: dropping datumSort field"
},
{
"msg_contents": "On Wed, 10 Aug 2022 at 03:16, Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Tue, Aug 9, 2022 at 8:01 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>>\n>> One problem with this patch is that, if I apply it, PostgreSQL does not compile:\n>>\n>> nodeSort.c:197:6: error: use of undeclared identifier 'tupDesc'\n>> if (tupDesc->natts == 1)\n>> ^\n>> 1 error generated.\n>>\n>> Leaving that aside, I don't really see any advantage in this sort of change.\n\nI'm with Robert on this one. If you look at the discussion for that\ncommit, you'll find quite a bit of benchmarking was done around this\n[1]. The \"if\" test here is in quite a hot code path, so we want to\nensure whatever is being referenced here causes the least amount of\nCPU cache misses. Volcano executors which process a single row at a\ntime are not exactly an ideal workload for a modern processor due to\nthe bad L1i and L1d hit ratios. This becomes worse with more plan\nnodes as these caches are more likely to get flushed of useful cache\nlines when more nodes are visited. Various other fields in 'node' have\njust been accessed in the code leading up to the \"if\n(node->datumSort)\" check, so we're probably not going to encounter any\nCPU pipeline stalls waiting for cache lines to be loaded into L1d. It\nseems you're proposing to change this and have offered no evidence of\nno performance regressions from doing so. Going by the compilation\nerror above, it seems unlikely that you've given performance any\nconsideration at all.\n\nI mean this in the best way possible; for the future, I really\nrecommend arriving with ideas that are well researched and tested.\nWhen you can, arrive with evidence to prove your change is good. For\nthis case, evidence would be benchmark results. The problem is if you\nwere to arrive with patches such as this too often then you'll start\nto struggle to get attention from anyone, let alone a committer. 
You\ndon't want to build a reputation for bad quality work as it's likely\nmost committers will steer clear of your work. If you want a good\nrecent example of a good proposal, have a look at Yuya Watari's\nwrite-up at [2] and [3]. There was a clear problem statement there and\na patch that was clearly a proof of concept only, so the author was\nunder no illusion that what he had was ideal. Now, some other ideas\nwere suggested on that thread to hopefully simplify the task and\nprovide even better performance. Yuya went off and did that and\narrived back armed with further benchmark results. I was quite\nimpressed with this considering he's not posted to -hackers very\noften. Now, if you were a committer, would you be looking at the\npatches from the person who sends in half-thought-through ideas, or\npatches from someone that has clearly put a great deal of effort into\nfirst clearly stating the problem and then proving that the problem is\nsolved by the given patch?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvrWV%3Dv0qKsC9_BHqhCn9TusrNvCaZDz77StCO--fmgbKA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAJ2pMkZNCgoUKSE+_5LthD+KbXKvq6h2hQN8Esxpxd+cxmgomg@mail.gmail.com\n[3] https://www.postgresql.org/message-id/CAJ2pMkZKFVmPHovyyueBpwb_nYYVk2+GaDqgzxZVnjkvxgtXog@mail.gmail.com\n\n\n",
"msg_date": "Wed, 10 Aug 2022 16:04:22 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: dropping datumSort field"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 9:04 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 10 Aug 2022 at 03:16, Zhihong Yu <zyu@yugabyte.com> wrote:\n> > On Tue, Aug 9, 2022 at 8:01 AM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> >>\n> >> One problem with this patch is that, if I apply it, PostgreSQL does not\n> compile:\n> >>\n> >> nodeSort.c:197:6: error: use of undeclared identifier 'tupDesc'\n> >> if (tupDesc->natts == 1)\n> >> ^\n> >> 1 error generated.\n> >>\n> >> Leaving that aside, I don't really see any advantage in this sort of\n> change.\n>\n> I'm with Robert on this one. If you look at the discussion for that\n> commit, you'll find quite a bit of benchmarking was done around this\n> [1]. The \"if\" test here is in quite a hot code path, so we want to\n> ensure whatever is being referenced here causes the least amount of\n> CPU cache misses. Volcano executors which process a single row at a\n> time are not exactly an ideal workload for a modern processor due to\n> the bad L1i and L1d hit ratios. This becomes worse with more plan\n> nodes as these caches are more likely to get flushed of useful cache\n> lines when mode notes are visited. Various other fields in 'node' have\n> just been accessed in the code leading up to the \"if\n> (node->datumSort)\" check, so we're probably not going to encounter any\n> CPU pipeline stalls waiting for cache lines to be loaded into L1d. It\n> seems you're proposing to change this and have offered no evidence of\n> no performance regressions from doing so. Going by the compilation\n> error above, it seems unlikely that you've given performance any\n> consideration at all.\n>\n> I mean this in the best way possible; for the future, I really\n> recommend arriving with ideas that are well researched and tested.\n> When you can, arrive with evidence to prove your change is good. For\n> this case, evidence would be benchmark results. 
The problem is if you\n> were to arrive with patches such as this too often then you'll start\n> to struggle to get attention from anyone, let alone a committer. You\n> don't want to build a reputation for bad quality work as it's likely\n> most committers will steer clear of your work. If you want a good\n> recent example of a good proposal, have a look at Yuya Watari's\n> write-up at [2] and [3]. There was a clear problem statement there and\n> a patch that was clearly a proof of concept only, so the author was\n> under no illusion that what he had was ideal. Now, some other ideas\n> were suggested on that thread to hopefully simplify the task and\n> provide even better performance. Yuya went off and did that and\n> arrived back armed with further benchmark results. I was quite\n> impressed with this considering he's not posted to -hackers very\n> often. Now, if you were a committer, would you be looking at the\n> patches from the person who sends in half-thought-through ideas, or\n> patches from someone that has clearly put a great deal of effort into\n> first clearly stating the problem and then proving that the problem is\n> solved by the given patch?\n>\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/CAApHDvrWV%3Dv0qKsC9_BHqhCn9TusrNvCaZDz77StCO--fmgbKA%40mail.gmail.com\n> [2]\n> https://www.postgresql.org/message-id/CAJ2pMkZNCgoUKSE+_5LthD+KbXKvq6h2hQN8Esxpxd+cxmgomg@mail.gmail.com\n> [3]\n> https://www.postgresql.org/message-id/CAJ2pMkZKFVmPHovyyueBpwb_nYYVk2+GaDqgzxZVnjkvxgtXog@mail.gmail.com\n\n\nHi, David:\nThanks for your detailed response.",
"msg_date": "Tue, 9 Aug 2022 21:26:28 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: dropping datumSort field"
}
] |
[
{
"msg_contents": "Hi,\n\nI found that there are two .c and .h files whose identification in the\nheader comment doesn't match its actual path.\n\nsrc/include/common/compression.h has:\n\n * IDENTIFICATION\n * src/common/compression.h\n *-------------------------------------------------------------------------\n\nsrc/fe_utils/cancel.c has:\n\n * src/fe-utils/cancel.c\n *\n *------------------------------------------------------------------------\n\nThe attached small patch fixes them.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 9 Aug 2022 10:57:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix unmatched file identifications"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 8:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I found that there are two .c and .h files whose identification in the\n> header comment doesn't match its actual path.\n\n> The attached small patch fixes them.\n\nPushed, thanks!\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Aug 2022 09:24:46 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix unmatched file identifications"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 11:24 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> On Tue, Aug 9, 2022 at 8:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I found that there are two .c and .h files whose identification in the\n> > header comment doesn't match its actual path.\n>\n> > The attached small patch fixes them.\n>\n> Pushed, thanks!\n\nThank you!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 9 Aug 2022 12:05:00 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix unmatched file identifications"
}
] |
[
{
"msg_contents": "Hi,\n\nAttached a patch to fix as well. If the patch looks good to you, can you\nconsider getting this to PG 15?\n\nSteps to repro:\n-- some basic examples from src/test/regress/sql/create_am.sql\nCREATE TABLE heaptable USING heap AS\nSELECT a, repeat(a::text, 100) FROM generate_series(1,9) AS a;\nCREATE ACCESS METHOD heap2 TYPE TABLE HANDLER heap_tableam_handler;\nCREATE MATERIALIZED VIEW heapmv USING heap AS SELECT * FROM heaptable;\n\n-- altering MATERIALIZED\nALTER MATERIALIZED VIEW heapmv SET ACCESS METHOD heap2;\nALTER MATERIALIZED VIEW heapmv SET ACCESS METHOD heap;\n\n-- setup event trigger\nCREATE OR REPLACE FUNCTION empty_event_trigger()\n RETURNS event_trigger AS $$\nDECLARE\nBEGIN\nEND;\n$$ LANGUAGE plpgsql;\nCREATE EVENT TRIGGER empty_triggger ON sql_drop EXECUTE PROCEDURE\nempty_event_trigger();\n\n-- now, after creating an event trigger, ALTER MATERIALIZED VIEW fails\nunexpectedly\nALTER MATERIALIZED VIEW heapmv SET ACCESS METHOD heap2;\nERROR: unexpected command tag \"ALTER MATERIALIZED VIEW\"\n\nThanks,\nOnder Kalaci",
"msg_date": "Tue, 9 Aug 2022 14:55:06 +0200",
"msg_from": "Önder Kalacı <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Materialized view rewrite is broken when there is an event trigger"
},
{
"msg_contents": "On Tue, Aug 09, 2022 at 02:55:06PM +0200, Önder Kalacı wrote:\n> Attached a patch to fix as well. If the patch looks good to you, can you\n> consider getting this to PG 15?\n\nThanks, this one is on me so I have added an open item. I will\nunfortunately not be able to address that this week because of life,\nbut I should be able to grab a little bit of time next week to look at\nwhat you have.\n\nPlease note that we should not add an event in create_am.sql even if\nit is empty, as it gets run in parallel of other tests so there could\nbe interferences. I think that this had better live in\nsql/event_trigger.sql, even if it requires an extra table AM to check\nthis specific case.\n--\nMichael",
"msg_date": "Tue, 9 Aug 2022 22:13:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Materialized view rewrite is broken when there is an event trigger"
},
{
"msg_contents": "Hi,\n\nI should be able to grab a little bit of time next week to look at\n> what you have.\n>\n\nThanks!\n\n\n>\n> Please note that we should not add an event in create_am.sql even if\n> it is empty, as it gets run in parallel of other tests so there could\n> be interferences. I think that this had better live in\n> sql/event_trigger.sql, even if it requires an extra table AM to check\n> this specific case.\n> --\n>\n\n\nMoved the test to event_trigger.sql.\n\n> parallel group (2 tests, in groups of 1): event_trigger oidjoins\n\nThough, it also seems to run in parallel, but I assume the parallel test\nalready works fine with concurrent event triggers?\n\nThanks,\nOnder",
"msg_date": "Tue, 9 Aug 2022 16:29:37 +0200",
"msg_from": "Önder Kalacı <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Materialized view rewrite is broken when there is an event trigger"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Please note that we should not add an event in create_am.sql even if\n> it is empty, as it gets run in parallel of other tests so there could\n> be interferences. I think that this had better live in\n> sql/event_trigger.sql, even if it requires an extra table AM to check\n> this specific case.\n\nAgreed this is a bug, but I do not think we should add the proposed\nregression test, regardless of where exactly. It looks like expending\na lot of cycles forevermore to watch for an extremely unlikely thing,\nie that we break this for ALTER MATERIALIZED VIEW and not anything\nelse.\n\nI think the real problem here is that we don't have any mechanism\nfor verifying that table_rewrite_ok is correct. The \"cross-check\"\nin EventTriggerCommonSetup is utterly worthless, as this failure\nshows. Does it give any confidence at all that there are no other\nmislabelings? I sure have none now. What can we do to verify that\nmore rigorously? Or maybe we should find a way to get rid of the\ntable_rewrite_ok flag altogether?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Aug 2022 10:35:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Materialized view rewrite is broken when there is an event trigger"
},
{
"msg_contents": "On Tue, Aug 09, 2022 at 10:35:01AM -0400, Tom Lane wrote:\n> Agreed this is a bug, but I do not think we should add the proposed\n> regression test, regardless of where exactly. It looks like expending\n> a lot of cycles forevermore to watch for an extremely unlikely thing,\n> ie that we break this for ALTER MATERIALIZED VIEW and not anything\n> else.\n\nHmm. I see a second point in keeping a test in this area, because we\nhave nothing that directly checks after AT_REWRITE_ACCESS_METHOD as\nintroduced by b048326. It makes me wonder whether we should have a\nsecond test for a plain table with SET ACCESS METHOD, actually, but\nwe have already cases for rewrites there, so..\n\n> I think the real problem here is that we don't have any mechanism\n> for verifying that table_rewrite_ok is correct. The \"cross-check\"\n> in EventTriggerCommonSetup is utterly worthless, as this failure\n> shows. Does it give any confidence at all that there are no other\n> mislabelings? I sure have none now. What can we do to verify that\n> more rigorously? Or maybe we should find a way to get rid of the\n> table_rewrite_ok flag altogether?\n\nThis comes down to the dependency between the event trigger paths in\nutility.c and tablecmds.c, which gets rather trickier with the ALTERs\non various relkinds. I don't really know about if we could cut this\nspecific flag, perhaps we could manage a list of command tags\nsupported for it as that's rather short. I can also see that\nsomething could be done for the firing matrix in the docs, as well\n(the proposed patch has forgotten the docs). That's not something\nthat should be done for v15 anyway, so I have fixed the issue at hand\nto take care of this open item.\n--\nMichael",
"msg_date": "Wed, 17 Aug 2022 14:56:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Materialized view rewrite is broken when there is an event trigger"
},
{
"msg_contents": "On Tue, Aug 09, 2022 at 04:29:37PM +0200, Önder Kalacı wrote:\n> Though, it also seems to run in parallel, but I assume the parallel test\n> already works fine with concurrent event triggers?\n\nWe've had issues with such assumptions in the past as event triggers\nare global, see for example 676858b or c219cbf, so I would rather\navoid more of that.\n--\nMichael",
"msg_date": "Wed, 17 Aug 2022 15:00:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Materialized view rewrite is broken when there is an event trigger"
}
] |
[
{
"msg_contents": "The comments considering checking share/ directory was there for\nalmost 13 years, yet nobody ever trying to add the checking, and\nthere seems never any trouble for not checking it, then I think\nwe should remove those comments.\n\ndiff --git a/src/backend/postmaster/postmaster.c\nb/src/backend/postmaster/postmaster.c\nindex 81cb585891..ecdc59ce5e 100644\n--- a/src/backend/postmaster/postmaster.c\n+++ b/src/backend/postmaster/postmaster.c\n@@ -1581,11 +1581,6 @@ getInstallationPaths(const char *argv0)\n errhint(\"This may indicate an\nincomplete PostgreSQL installation, or that the file \\\"%s\\\" has been\nmoved away from its proper location.\",\n my_exec_path)));\n FreeDir(pdir);\n-\n- /*\n- * XXX is it worth similarly checking the share/ directory? If the lib/\n- * directory is there, then share/ probably is too.\n- */\n }\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Tue, 9 Aug 2022 23:42:51 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "remove useless comments"
},
{
"msg_contents": "Junwang Zhao <zhjwpku@gmail.com> writes:\n> The comments considering checking share/ directory was there for\n> almost 13 years, yet nobody ever trying to add the checking, and\n> there seems never any trouble for not checking it, then I think\n> we should remove those comments.\n\nI think that comment is valuable. It shows that checking the\nsibling directories was considered and didn't seem worthwhile.\nPerhaps it should be rephrased in a more positive way (without XXX),\nbut merely deleting it is a net negative because future hackers\nwould have to reconstruct that reasoning.\n\nBTW, we're working in a 30+-year-old code base, so the mere fact\nthat a comment has been there a long time does not make it bad.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Aug 2022 11:50:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: remove useless comments"
},
{
"msg_contents": "Fair enough, the rephrased version of the comments is in the attachment,\nplease take a look.\n\n--- a/src/backend/postmaster/postmaster.c\n+++ b/src/backend/postmaster/postmaster.c\n@@ -1583,8 +1583,8 @@ getInstallationPaths(const char *argv0)\n FreeDir(pdir);\n\n /*\n- * XXX is it worth similarly checking the share/ directory? If the lib/\n- * directory is there, then share/ probably is too.\n+ * It's not worth checking the share/ directory. If the lib/ directory\n+ * is there, then share/ probably is too.\n */\n }\n\nOn Tue, Aug 9, 2022 at 11:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Junwang Zhao <zhjwpku@gmail.com> writes:\n> > The comments considering checking share/ directory was there for\n> > almost 13 years, yet nobody ever trying to add the checking, and\n> > there seems never any trouble for not checking it, then I think\n> > we should remove those comments.\n>\n> I think that comment is valuable. It shows that checking the\n> sibling directories was considered and didn't seem worthwhile.\n> Perhaps it should be rephrased in a more positive way (without XXX),\n> but merely deleting it is a net negative because future hackers\n> would have to reconstruct that reasoning.\n>\n> BTW, we're working in a 30+-year-old code base, so the mere fact\n> that a comment has been there a long time does not make it bad.\n>\n> regards, tom lane\n\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Wed, 10 Aug 2022 00:24:02 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: remove useless comments"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 12:24:02AM +0800, Junwang Zhao wrote:\n> Fair enough, the rephrased version of the comments is in the attachment,\n> please take a look.\n> \n> --- a/src/backend/postmaster/postmaster.c\n> +++ b/src/backend/postmaster/postmaster.c\n> @@ -1583,8 +1583,8 @@ getInstallationPaths(const char *argv0)\n> FreeDir(pdir);\n> \n> /*\n> - * XXX is it worth similarly checking the share/ directory? If the lib/\n> - * directory is there, then share/ probably is too.\n> + * It's not worth checking the share/ directory. If the lib/ directory\n> + * is there, then share/ probably is too.\n> */\n> }\n\nPatch applied to master.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Sat, 28 Oct 2023 12:58:47 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: remove useless comments"
}
] |
[
{
"msg_contents": "Hi,\n\nI was thinking that it might make sense, to reduce clutter, to move\n*backup*.c from src/backend/replication to a new directory, perhaps\nsrc/backend/replication/backup or src/backend/backup.\n\nThere's no particular reason we *have* to do this, but there are 21 C\nfiles in that directory and 11 of them are basebackup-related, so\nmaybe it's time, especially because I think we might end up adding\nmore basebackup-related stuff.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Aug 2022 12:08:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "moving basebackup code to its own directory"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 6:08 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Hi,\n>\n> I was thinking that it might make sense, to reduce clutter, to move\n> *backup*.c from src/backend/replication to a new directory, perhaps\n> src/backend/replication/backup or src/backend/backup.\n>\n> There's no particular reason we *have* to do this, but there are 21 C\n> files in that directory and 11 of them are basebackup-related, so\n> maybe it's time, especially because I think we might end up adding\n> more basebackup-related stuff.\n>\n> Thoughts?\n>\n>\nThose 11 files are mostly your fault, of course ;)\n\nAnyway, I have no objection. If there'd been that many files, or plans to\nhave it, in the beginning we probably would've put them in\nreplication/basebackup or something like that from the beginning. I'm not\nsure how much it's worth doing wrt effects on backpatching etc, but if\nwe're planning to add even more files in the future, the pain will just\nbecome bigger once we eventually do it...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 9 Aug 2022 18:12:16 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: moving basebackup code to its own directory"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 12:12 PM Magnus Hagander <magnus@hagander.net> wrote:\n> Those 11 files are mostly your fault, of course ;)\n\nThey are. I tend to prefer smaller source files than many developers,\nbecause I find them easier to understand and maintain. If you only\ninclude <zlib.h> in basebackup_gzip.c, then you can be pretty sure\nnothing else involved with basebackup is accidentally depending on it.\nSimilarly with static variables. If you just have one giant file, it's\nharder to be sure about that sort of thing.\n\n> Anyway, I have no objection. If there'd been that many files, or plans to have it, in the beginning we probably would've put them in replication/basebackup or something like that from the beginning. I'm not sure how much it's worth doing wrt effects on backpatching etc, but if we're planning to add even more files in the future, the pain will just become bigger once we eventually do it...\n\nRight.\n\nIt's not exactly clear to me what the optimal source code layout is\nhere. I think the placement here is under src/backend/replication\nbecause the functionality is accessed via the replication protocol,\nbut I'm not sure if all backup-related code we ever add will be\nrelated to the replication protocol. As a thought experiment, imagine\na background worker that triggers a backup periodically, or a\nmonitoring view that tells you about the status of your last 10 backup\nattempts, or an in-memory hash table that tracks which files have been\nmodified since the last backup. I'm not planning on implementing any\nof those things specifically, but I guess I'm a little concerned that\nif we just do the obvious thing of src/backend/replication/backup it's\ngoing to end up being a little awkward if I or anyone else want to\nadd backup-related code that isn't specifically about the replication\nprotocol.\n\nSo maybe src/backend/backup? Or is that too grandiose for the amount\nof stuff we have here?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Aug 2022 12:34:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: moving basebackup code to its own directory"
},
{
"msg_contents": "On 8/9/22 12:12, Magnus Hagander wrote:\n> On Tue, Aug 9, 2022 at 6:08 PM Robert Haas <robertmhaas@gmail.com \n> <mailto:robertmhaas@gmail.com>> wrote:\n> \n> Hi,\n> \n> I was thinking that it might make sense, to reduce clutter, to move\n> *backup*.c from src/backend/replication to a new directory, perhaps\n> src/backend/replication/backup or src/backend/backup.\n> \n> There's no particular reason we *have* to do this, but there are 21 C\n> files in that directory and 11 of them are basebackup-related, so\n> maybe it's time, especially because I think we might end up adding\n> more basebackup-related stuff.\n> \n> Thoughts?\n> \n> \n> Those 11 files are mostly your fault, of course ;)\n> \n> Anyway, I have no objection. If there'd been that many files, or plans \n> to have it, in the beginning we probably would've put them in \n> replication/basebackup or something like that from the beginning. I'm \n> not sure how much it's worth doing wrt effects on backpatching etc, but \n> if we're planning to add even more files in the future, the pain will \n> just become bigger once we eventually do it...\n\nThere are big changes all around for PG15 so back-patching will be \ncomplicated no matter what.\n\n+1 from me and it would be great if we can get this into the PG15 branch \nas well.\n\nRegards,\n-David\n\n\n",
"msg_date": "Tue, 9 Aug 2022 12:35:58 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: moving basebackup code to its own directory"
},
{
"msg_contents": "On 8/9/22 12:34, Robert Haas wrote:\n> On Tue, Aug 9, 2022 at 12:12 PM Magnus Hagander <magnus@hagander.net> wrote:\n> \n>> Anyway, I have no objection. If there'd been that many files, or plans to have it, in the beginning we probably would've put them in replication/basebackup or something like that from the beginning. I'm not sure how much it's worth doing wrt effects on backpatching etc, but if we're planning to add even more files in the future, the pain will just become bigger once we eventually do it...\n> \n> Right.\n> \n> It's not exactly clear to me what the optimal source code layout is\n> here. I think the placement here is under src/backend/replication\n> because the functionality is accessed via the replication protocol,\n> but I'm not sure if all backup-related code we ever add will be\n> related to the replication protocol. As a thought experiment, imagine\n> a background worker that triggers a backup periodically, or a\n> monitoring view that tells you about the status of your last 10 backup\n> attempts, or an in-memory hash table that tracks which files have been\n> modified since the last backup. I'm not planning on implementing any\n> of those things specifically, but I guess I'm a little concerned that\n> if we just do the obvious thing of src/backend/replication/backup it's\n> going to be end up being a little awkward if I or anyone else want to\n> add backup-related code that isn't specifically about the replication\n> protocol.\n> \n> So maybe src/backend/backup? Or is that too grandiose for the amount\n> of stuff we have here?\n\n+1 for src/backend/backup. I'd also be happy to see the start/stop code \nmove here at some point.\n\nRegards,\n-David\n\n\n",
"msg_date": "Tue, 9 Aug 2022 12:41:28 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: moving basebackup code to its own directory"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 6:41 PM David Steele <david@pgmasters.net> wrote:\n\n> On 8/9/22 12:34, Robert Haas wrote:\n> > On Tue, Aug 9, 2022 at 12:12 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >\n> >> Anyway, I have no objection. If there'd been that many files, or plans\n> to have it, in the beginning we probably would've put them in\n> replication/basebackup or something like that from the beginning. I'm not\n> sure how much it's worth doing wrt effects on backpatching etc, but if\n> we're planning to add even more files in the future, the pain will just\n> become bigger once we eventually do it...\n> >\n> > Right.\n> >\n> > It's not exactly clear to me what the optimal source code layout is\n> > here. I think the placement here is under src/backend/replication\n> > because the functionality is accessed via the replication protocol,\n> > but I'm not sure if all backup-related code we ever add will be\n> > related to the replication protocol. As a thought experiment, imagine\n> > a background worker that triggers a backup periodically, or a\n> > monitoring view that tells you about the status of your last 10 backup\n> > attempts, or an in-memory hash table that tracks which files have been\n> > modified since the last backup. I'm not planning on implementing any\n> > of those things specifically, but I guess I'm a little concerned that\n> > if we just do the obvious thing of src/backend/replication/backup it's\n> > going to be end up being a little awkward if I or anyone else want to\n> > add backup-related code that isn't specifically about the replication\n> > protocol.\n> >\n> > So maybe src/backend/backup? Or is that too grandiose for the amount\n> > of stuff we have here?\n>\n> +1 for src/backend/backup. I'd also be happy to see the start/stop code\n> move here at some point.\n>\n\nYeah, sounds reasonable. There's never an optimal source code layout, but I\nagree this one is better than putting it under replication.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 9 Aug 2022 18:43:28 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: moving basebackup code to its own directory"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 12:43 PM Magnus Hagander <magnus@hagander.net> wrote:\n>> > So maybe src/backend/backup? Or is that too grandiose for the amount\n>> > of stuff we have here?\n>>\n>> +1 for src/backend/backup. I'd also be happy to see the start/stop code\n>> move here at some point.\n>\n> Yeah, sounds reasonable. There's never an optimal source code layout, but I agree this one is better than putting it under replication.\n\nOK, here's a patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 9 Aug 2022 13:32:49 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: moving basebackup code to its own directory"
},
{
"msg_contents": "On 8/9/22 13:32, Robert Haas wrote:\n> On Tue, Aug 9, 2022 at 12:43 PM Magnus Hagander <magnus@hagander.net> wrote:\n>>>> So maybe src/backend/backup? Or is that too grandiose for the amount\n>>>> of stuff we have here?\n>>>\n>>> +1 for src/backend/backup. I'd also be happy to see the start/stop code\n>>> move here at some point.\n>>\n>> Yeah, sounds reasonable. There's never an optimal source code layout, but I agree this one is better than putting it under replication.\n> \n> OK, here's a patch.\n\nThis looks good to me.\n\nRegards,\n-David\n\n\n",
"msg_date": "Tue, 9 Aug 2022 13:49:34 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: moving basebackup code to its own directory"
},
{
"msg_contents": "On Tue, Aug 09, 2022 at 01:32:49PM -0400, Robert Haas wrote:\n> On Tue, Aug 9, 2022 at 12:43 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >> > So maybe src/backend/backup? Or is that too grandiose for the amount\n> >> > of stuff we have here?\n> >>\n> >> +1 for src/backend/backup. I'd also be happy to see the start/stop code\n> >> move here at some point.\n> >\n> > Yeah, sounds reasonable. There's never an optimal source code layout, but I agree this one is better than putting it under replication.\n> \n> OK, here's a patch.\n\nIt looks like this updates the header comments in the .h files but not the .c\nfiles.\n\nPersonally, I find these to be silly boilerplate ..\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 9 Aug 2022 13:40:16 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: moving basebackup code to its own directory"
},
{
"msg_contents": "On 8/9/22 14:40, Justin Pryzby wrote:\n> On Tue, Aug 09, 2022 at 01:32:49PM -0400, Robert Haas wrote:\n>> On Tue, Aug 9, 2022 at 12:43 PM Magnus Hagander <magnus@hagander.net> wrote:\n>>>>> So maybe src/backend/backup? Or is that too grandiose for the amount\n>>>>> of stuff we have here?\n>>>>\n>>>> +1 for src/backend/backup. I'd also be happy to see the start/stop code\n>>>> move here at some point.\n>>>\n>>> Yeah, sounds reasonable. There's never an optimal source code layout, but I agree this one is better than putting it under replication.\n>>\n>> OK, here's a patch.\n> \n> It looks like this updates the header comments in the .h files but not the .c\n> files.\n> \n> Personally, I find these to be silly boilerplate ..\n\nGood catch. I did not notice that just looking at the diff.\n\nDefinitely agree that repeating the filename in the top comment is \nmostly useless, but that seems like a separate conversation.\n\n-David\n\n\n",
"msg_date": "Tue, 9 Aug 2022 14:49:39 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: moving basebackup code to its own directory"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 2:40 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> It looks like this updates the header comments in the .h files but not the .c\n> files.\n>\n> Personally, I find these to be silly boilerplate ..\n\nHere is a version with some updates to the silly boilerplate.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 9 Aug 2022 15:28:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: moving basebackup code to its own directory"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 3:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Aug 9, 2022 at 2:40 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > It looks like this updates the header comments in the .h files but not the .c\n> > files.\n> >\n> > Personally, I find these to be silly boilerplate ..\n>\n> Here is a version with some updates to the silly boilerplate.\n\nIf there are no further comments on this I will go ahead and commit it.\n\nDavid Steele voted for back-patching this on the grounds that it would\nmake future back-patching easier, which is an argument that seems to\nme to have some merit, although on the other hand, we are already into\nAugust so it's quite late in the day. Anyone else want to vote?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Aug 2022 10:08:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: moving basebackup code to its own directory"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 10:08:02AM -0400, Robert Haas wrote:\n> On Tue, Aug 9, 2022 at 3:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Tue, Aug 9, 2022 at 2:40 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > It looks like this updates the header comments in the .h files but not the .c\n> > > files.\n> > >\n> > > Personally, I find these to be silly boilerplate ..\n> >\n> > Here is a version with some updates to the silly boilerplate.\n> \n> If there are no further comments on this I will go ahead and commit it.\n> \n> David Steele voted for back-patching this on the grounds that it would\n> make future back-patching easier, which is an argument that seems to\n> me to have some merit, although on the other hand, we are already into\n> August so it's quite late in the day. Anyone else want to vote?\n\nNo objection to backpatching to v15, but if you don't, git ought to handle\nrenamed files just fine.\n\nThese look like similar precedent for \"late\" renaming+backpatching: 41dae3553,\n47ca48364\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 10 Aug 2022 09:25:03 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: moving basebackup code to its own directory"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> David Steele voted for back-patching this on the grounds that it would\n> make future back-patching easier, which is an argument that seems to\n> me to have some merit, although on the other hand, we are already into\n> August so it's quite late in the day. Anyone else want to vote?\n\nSeems like low-risk refactoring, so +1 for keeping v15 close to HEAD.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Aug 2022 12:20:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: moving basebackup code to its own directory"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 6:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > David Steele voted for back-patching this on the grounds that it would\n> > make future back-patching easier, which is an argument that seems to\n> > me to have some merit, although on the other hand, we are already into\n> > August so it's quite late in the day. Anyone else want to vote?\n>\n> Seems like low-risk refactoring, so +1 for keeping v15 close to HEAD.\n\n+1, but I suggest also getting a hat-tip from the RMT on it.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 10 Aug 2022 18:32:10 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: moving basebackup code to its own directory"
},
{
"msg_contents": "On 8/10/22 12:32 PM, Magnus Hagander wrote:\r\n> On Wed, Aug 10, 2022 at 6:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>>\r\n>> Robert Haas <robertmhaas@gmail.com> writes:\r\n>>> David Steele voted for back-patching this on the grounds that it would\r\n>>> make future back-patching easier, which is an argument that seems to\r\n>>> me to have some merit, although on the other hand, we are already into\r\n>>> August so it's quite late in the day. Anyone else want to vote?\r\n>>\r\n>> Seems like low-risk refactoring, so +1 for keeping v15 close to HEAD.\r\n> \r\n> +1, but I suggest also getting a hat-tip from the RMT on it.\r\n\r\nWith RMT hat on, given a few folks who maintain backup utilities seem to \r\nbe in favor of backpatching to v15 and they are the ones to be most \r\naffected by this, it seems to me that this is an acceptable, \r\nnoncontroversial course of action.\r\n\r\nJonathan",
"msg_date": "Wed, 10 Aug 2022 12:41:25 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: moving basebackup code to its own directory"
},
{
"msg_contents": "On 2022-Aug-10, Robert Haas wrote:\n\n> David Steele voted for back-patching this on the grounds that it would\n> make future back-patching easier, which is an argument that seems to\n> me to have some merit, although on the other hand, we are already into\n> August so it's quite late in the day. Anyone else want to vote?\n\nGiven that 10 of these 11 files are new in 15, I definitely agree with\nbackpatching the move.\n\nMoving the include/ files is going to cause some pain for any\nthird-party code #including those files. I don't think this is a\nproblem.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 10 Aug 2022 18:57:34 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: moving basebackup code to its own directory"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 12:57 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Given that 10 of these 11 files are new in 15, I definitely agree with\n> backpatching the move.\n\nOK, done.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Aug 2022 14:04:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: moving basebackup code to its own directory"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\n(Personal hat, not RMT hat unless otherwise noted).\r\n\r\nThis thread[1] raised some concerns around the implementation of the \r\nSQL/JSON features that are slated for v15, which includes an outstanding \r\nopen item[2]. Given the current state of the discussion, when the RMT \r\nmet on Aug 8, they discussed several options, readable here[3]. Given we are now \r\ninto the later part of the release cycle, we need to make some decisions \r\non how to proceed with this feature given the concerns raised.\r\n\r\nPer additional discussion on the thread, the group wanted to provide \r\nmore visibility into the discussion to get opinions on how to proceed \r\nfor the v15 release.\r\n\r\nWithout rehashing the thread, the options presented were:\r\n\r\n1. Fixing the concerns addressed in the thread around the v15 SQL/JSON \r\nfeatures implementation, noting that this would likely entail at least \r\none more beta release and would push the GA date past our normal timeframe.\r\n\r\n2. Try to commit a subset of the features that caused less debate. This \r\nwas ruled out.\r\n\r\n3. Revert the v15 SQL/JSON features work.\r\n\r\n<RMT hat>\r\nBased on the current release timing and the open issues presented on the \r\nthread, the RMT recommended reverting, but preferred to drive \r\nconsensus on next steps.\r\n</RMT hat>\r\n\r\n From a release advocacy standpoint, I need about 6 weeks lead time to \r\nput together the GA launch. We're at the point where I typically deliver \r\na draft release announcement. From this, given this involves a high \r\nvisibility feature, I would want some clarity on what option we would \r\nlike to pursue. Once the announcement translation process has begun (and \r\nthis is when we have consensus on the release announcement), it becomes \r\nmore challenging to change things out.\r\n\r\n From a personal standpoint (restating from[3]), I would like to see \r\nwhat we could do to include support for this batch of the SQL/JSON \r\nfeatures in v15. What is included looks like it closes most of the gap \r\non what we've been missing syntactically since the standard was adopted, \r\nand the JSON_TABLE work is very convenient for converting JSON data into \r\na relational format. I believe having this feature set is important for \r\nmaintaining standards compliance, interoperability, tooling support, and \r\ngeneral usability. Plus, JSON still seems to be pretty popular.\r\n\r\nWe're looking for additional input on what makes sense as a best course \r\nof action, given what is presented in[3].\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/flat/20220616233130.rparivafipt6doj3%40alap3.anarazel.de\r\n[2] https://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items\r\n[3] \r\nhttps://www.postgresql.org/message-id/787cef45-15de-8f1d-ed58-a1c1435bfc0e%40postgresql.org",
"msg_date": "Tue, 9 Aug 2022 16:58:56 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "SQL/JSON features for v15"
},
{
"msg_contents": "On 8/9/22 4:58 PM, Jonathan S. Katz wrote:\r\n\r\n> We're looking for additional input on what makes sense as a best course \r\n> of action, given what is presented in[3].\r\n\r\nMissed adding Amit on the CC.\r\n\r\nJonathan",
"msg_date": "Tue, 9 Aug 2022 16:59:46 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nOn 2022-08-09 Tu 16:58, Jonathan S. Katz wrote:\n> Hi,\n>\n> (Personal hat, not RMT hat unless otherwise noted).\n>\n> This thread[1] raised some concerns around the implementation of the\n> SQL/JSON features that are slated for v15, which includes an\n> outstanding open item[2]. Given the current state of the discussion,\n> when the RMT met on Aug 8, they several options, readable here[3].\n> Given we are now into the later part of the release cycle, we need to\n> make some decisions on how to proceed with this feature given the\n> concerns raised.\n>\n> Per additional discussion on the thread, the group wanted to provide\n> more visibility into the discussion to get opinions on how to proceed\n> for the v15 release.\n>\n> Without rehashing the thread, the options presented were:\n>\n> 1. Fixing the concerns addressed in the thread around the v15 SQL/JSON\n> features implementation, noting that this would likely entail at least\n> one more beta release and would push the GA date past our normal\n> timeframe.\n>\n> 2. Try to commit a subset of the features that caused less debate.\n> This was ruled out.\n>\n> 3. Revert the v15 SQL/JSON features work.\n>\n> <RMT hat>\n> Based on the current release timing and the open issues presented on\n> the thread, and the RMT had recommended reverting, but preferred to\n> drive consensus on next steps.\n> </RMT hat>\n>\n> From a release advocacy standpoint, I need about 6 weeks lead time to\n> put together the GA launch. We're at the point where I typically\n> deliver a draft release announcement. From this, given this involves a\n> high visibility feature, I would want some clarity on what option we\n> would like to pursue. 
Once the announcement translation process has\n> begun (and this is when we have consensus on the release\n> announcement), it becomes more challenging to change things out.\n>\n> From a personal standpoint (restating from[3]), I would like to see\n> what we could do to include support for this batch of the SQL/JSON\n> features in v15. What is included looks like it closes most of the gap\n> on what we've been missing syntactically since the standard was\n> adopted, and the JSON_TABLE work is very convenient for converting\n> JSON data into a relational format. I believe having this feature set\n> is important for maintaining standards compliance, interoperability,\n> tooling support, and general usability. Plus, JSON still seems to be\n> pretty popular.\n>\n> We're looking for additional input on what makes sense as a best\n> course of action, given what is presented in[3].\n>\n> Thanks,\n>\n> Jonathan\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/20220616233130.rparivafipt6doj3%40alap3.anarazel.de\n> [2] https://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items\n> [3]\n> https://www.postgresql.org/message-id/787cef45-15de-8f1d-ed58-a1c1435bfc0e%40postgresql.org\n\n\nTo preserve options I will start preparing reversion patches. Given\nthere are I think more than 20 commits all told that could be fun, and\nwill probably take me a little while. The sad part is that to the best\nof my knowledge this code is producing correct results, and not\ndisturbing the stability or performance of anything else. There was a\nperformance issue but it's been dealt with AIUI.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 10 Aug 2022 11:50:42 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 8/10/22 11:50 AM, Andrew Dunstan wrote:\r\n> \r\n> On 2022-08-09 Tu 16:58, Jonathan S. Katz wrote:\r\n>> Hi,\r\n>>\r\n>> (Personal hat, not RMT hat unless otherwise noted).\r\n>>\r\n>> This thread[1] raised some concerns around the implementation of the\r\n>> SQL/JSON features that are slated for v15, which includes an\r\n>> outstanding open item[2]. Given the current state of the discussion,\r\n>> when the RMT met on Aug 8, they several options, readable here[3].\r\n>> Given we are now into the later part of the release cycle, we need to\r\n>> make some decisions on how to proceed with this feature given the\r\n>> concerns raised.\r\n>>\r\n>> Per additional discussion on the thread, the group wanted to provide\r\n>> more visibility into the discussion to get opinions on how to proceed\r\n>> for the v15 release.\r\n>>\r\n>> Without rehashing the thread, the options presented were:\r\n>>\r\n>> 1. Fixing the concerns addressed in the thread around the v15 SQL/JSON\r\n>> features implementation, noting that this would likely entail at least\r\n>> one more beta release and would push the GA date past our normal\r\n>> timeframe.\r\n>>\r\n>> 2. Try to commit a subset of the features that caused less debate.\r\n>> This was ruled out.\r\n>>\r\n>> 3. Revert the v15 SQL/JSON features work.\r\n>>\r\n>> <RMT hat>\r\n>> Based on the current release timing and the open issues presented on\r\n>> the thread, and the RMT had recommended reverting, but preferred to\r\n>> drive consensus on next steps.\r\n>> </RMT hat>\r\n>>\r\n>> From a release advocacy standpoint, I need about 6 weeks lead time to\r\n>> put together the GA launch. We're at the point where I typically\r\n>> deliver a draft release announcement. From this, given this involves a\r\n>> high visibility feature, I would want some clarity on what option we\r\n>> would like to pursue. 
Once the announcement translation process has\r\n>> begun (and this is when we have consensus on the release\r\n>> announcement), it becomes more challenging to change things out.\r\n>>\r\n>> From a personal standpoint (restating from[3]), I would like to see\r\n>> what we could do to include support for this batch of the SQL/JSON\r\n>> features in v15. What is included looks like it closes most of the gap\r\n>> on what we've been missing syntactically since the standard was\r\n>> adopted, and the JSON_TABLE work is very convenient for converting\r\n>> JSON data into a relational format. I believe having this feature set\r\n>> is important for maintaining standards compliance, interoperability,\r\n>> tooling support, and general usability. Plus, JSON still seems to be\r\n>> pretty popular.\r\n>>\r\n>> We're looking for additional input on what makes sense as a best\r\n>> course of action, given what is presented in[3].\r\n\r\n> To preserve options I will start preparing reversion patches. Given\r\n> there are I think more than 20 commits all told that could be fun, and\r\n> will probably take me a little while. The sad part is that to the best\r\n> of my knowledge this code is producing correct results, and not\r\n> disturbing the stability or performance of anything else. There was a\r\n> performance issue but it's been dealt with AIUI.\r\n\r\nPersonally, I hope we don't need to revert. If everything from the open \r\nitem standpoint is addressed, I want to ensure we capture and complete \r\nthe remaining issues that were raised on the other thread, i.e.\r\n\r\n* adding design docs\r\n* simplifying the type-coercion code\r\n* any other design concerns that were presented\r\n\r\nWe switched this discussion out to a different thread to get some more \r\nvisibility on the issue and see if other folks would weigh in. Thus far, \r\nthere has not been much additional say either way. It would be good if \r\nother folks chimed in.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Thu, 11 Aug 2022 13:08:03 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nHi,\n\nContinuation from the thread at\nhttps://postgr.es/m/20220811171740.m5b4h7x63g4lzgrk%40awork3.anarazel.de\n\n\nOn 2022-08-11 10:17:40 -0700, Andres Freund wrote:\n> On 2022-08-11 13:08:27 -0400, Jonathan S. Katz wrote:\n> > With RMT hat on, Andres do you have any thoughts on this?\n>\n> I think I need to prototype how it'd look like to give a more detailed\n> answer. I have a bunch of meetings over the next few hours, but after that I\n> can give it a shot.\n\nI started hacking on this Friday. I think there's some relatively easy\nimprovements that make the code substantially more understandable, at least\nfor me, without even addressing the structural stuff.\n\n\nOne thing I could use help understanding is the logic behind\nExecEvalJsonNeedsSubTransaction() - there's no useful comments, so it's hard\nto follow.\n\nbool\nExecEvalJsonNeedsSubTransaction(JsonExpr *jsexpr,\n\t\t\t\t\t\t\t\tstruct JsonCoercionsState *coercions)\n{\n\t/* want error to be raised, so clearly no subtrans needed */\n\tif (jsexpr->on_error->btype == JSON_BEHAVIOR_ERROR)\n\t\treturn false;\n\n\tif (jsexpr->op == JSON_EXISTS_OP && !jsexpr->result_coercion)\n\t\treturn false;\n\n\tif (!coercions)\n\t\treturn true;\n\n\treturn false;\n}\n\nI guess the !coercions bit is just about the planner, where we want to be\npessimistic about when subtransactions are used, for the purpose of\nparallelism? Because that's the only place that passes in NULL.\n\n\nWhat really baffles me is that last 'return false' - it seems to indicate that\nthere's no paths during query execution where\nExecEvalJsonNeedsSubTransaction() returns true. And indeed, tests pass with an\nAssert(!needSubtrans) added to ExecEvalJson() (and then unsurprisingly also\nafter removing the ExecEvalJsonExprSubtrans() indirection).\n\nWhat's going on here?\n\n\nWe, somewhat confusingly, still rely on subtransactions, heavily\nso. 
Responsible for that is this hunk of code:\n\n bool throwErrors = jexpr->on_error->btype == JSON_BEHAVIOR_ERROR;\n [...]\n cxt.error = throwErrors ? NULL : &error;\n cxt.coercionInSubtrans = !needSubtrans && !throwErrors;\n Assert(!needSubtrans || cxt.error);\n\nSo basically we start a subtransaction inside ExecEvalJsonExpr(), to coerce\nthe result type, whenever !needSubtrans (which is always!), unless ERROR ON\nERROR is used.\n\n\nWhich then also explains the theory behind the EXISTS_OP check in\nExecEvalJsonNeedsSubTransaction(). In that case ExecEvalJsonExpr() returns\nearly, before doing a return value coercion, thus not starting a\nsubtransaction.\n\n\n\nI don't think it's sane from a performance view to start a subtransaction for\nevery coercion, particularly because most coercion paths will never trigger an\nerror, leaving things like out-of-memory or interrupts aside. And those are\nre-thrown by ExecEvalJsonExprSubtrans(). A quick and dirty benchmark shows\nERROR ON ERROR nearly 2xing speed. 
I'm worried about the system impact of\nusing subtransactions this heavily, it's not exactly the best performing\nsystem - the only reason it's kind of ok here is that it's going to be very\nrare to allocate a subxid, I think.\n\n\n\nNext question:\n\n\t/*\n\t * We should catch exceptions of category ERRCODE_DATA_EXCEPTION and\n\t * execute the corresponding ON ERROR behavior then.\n\t */\n\toldcontext = CurrentMemoryContext;\n\toldowner = CurrentResourceOwner;\n\n\tAssert(error);\n\n\tBeginInternalSubTransaction(NULL);\n\t/* Want to execute expressions inside function's memory context */\n\tMemoryContextSwitchTo(oldcontext);\n\n\tPG_TRY();\n\t{\n\t\tres = func(op, econtext, res, resnull, p, error);\n\n\t\t/* Commit the inner transaction, return to outer xact context */\n\t\tReleaseCurrentSubTransaction();\n\t\tMemoryContextSwitchTo(oldcontext);\n\t\tCurrentResourceOwner = oldowner;\n\t}\n\tPG_CATCH();\n\t{\n\t\tErrorData *edata;\n\t\tint\t\t\tecategory;\n\n\t\t/* Save error info in oldcontext */\n\t\tMemoryContextSwitchTo(oldcontext);\n\t\tedata = CopyErrorData();\n\t\tFlushErrorState();\n\n\t\t/* Abort the inner transaction */\n\t\tRollbackAndReleaseCurrentSubTransaction();\n\t\tMemoryContextSwitchTo(oldcontext);\n\t\tCurrentResourceOwner = oldowner;\n\n\nTwo points:\n\n1) I suspect it's not safe to switch to oldcontext before calling func().\n\nOn error we'll have leaked memory into oldcontext and we'll just continue\non. It might not be very consequential here, because the calling context\npresumably isn't very long lived, but that's probably not something we should\nrely on.\n\nAlso, are we sure that the context will be in a clean state when it's used\nwithin an erroring subtransaction?\n\n\nI think the right thing here would be to stay in the subtransaction context\nand then copy the datum out to the surrounding context in the success case.\n\n\n2) If there was an out-of-memory error, it'll have been in oldcontext. 
So\nswitching back to it before calling CopyErrorData() doesn't seem good - we'll\njust hit OOM issues again.\n\n\nI realize that both of these issues are present in plenty other code (see\ne.g. plperl_spi_exec()). So I'm curious why they are ok?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Aug 2022 15:38:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\n\nOn 16.08.2022 01:38, Andres Freund wrote:\n> Continuation from the thread at\n> https://postgr.es/m/20220811171740.m5b4h7x63g4lzgrk%40awork3.anarazel.de\n>\n>\n> I started hacking on this Friday. I think there's some relatively easy\n> improvements that make the code substantially more understandable, at least\n> for me, without even addressing the structural stuff.\n\nI also started hacking Friday, hacked all weekend, and now have a new\nversion of the patch.\n\nI received your message when I finished writing of mine, so I will\ntry answer your new questions only in next message. But in short, I\ncan say that some things like ExecEvalJsonExprSubtrans() were fixed.\n\n\nI took Amit's patch and tried to simplify execution further.\nExplanation of the patches is at the very end of message.\n\nNext, I try to answer some of previous questions.\n\nOn Aug 2, 2022 at 9:39 AM Andres Freund<andres@anarazel.de> wrote:\n> The whole coercion stuff just seems incredibly clunky (in a\n> slightly different shape before this patch).\n> ExecEvalJsonExprItemCoercion() calls ExecPrepareJsonItemCoercion(),\n> which gets a pointer to one of the per-type elements in\n> JsonItemCoercionsState, dispatching on the type of the json\n> object. Then we later call ExecGetJsonItemCoercion() (via a\n> convoluted path), which again will dispatch on the type\n> (extracting the json object again afaics!), to then somehow\n> eventually get the coerced value. I think it might be possible\n> to make this a bit simpler, by not leaving anything\n> coercion-related in ExecEvalJsonExpr().\n\nOn 2022-08-02 12:05:55 +0900, Amit Langote wrote:\n> I left some pieces there, because I thought the error of not finding an\n> appropriate coercion must be thrown right away as the code in\n> ExecEvalJsonExpr() does after calling ExecGetJsonItemCoercion().\n\n> ExecPrepareJsonItemCoercion() is called later when it's time to\n> actually evaluate the coercion. 
If we move the error path to\n> ExecPrepareJsonItemCoercion(), both ExecGetJsonItemCoercion() and the\n> error path code in ExecEvalJsonExpr() will be unnecessary. I will\n> give that a try.\n\nThe first dispatch is done only for throwing error about missing cast\nwithout starting subtransaction in which second dispatch is executed.\nI agree, this is bad that result of first dispatch is not used later,\nand I have removed second dispatch.\n\n\n> I don't understand the design of what needs to have error handling,\n> and what not.\n\n> I don't think subtransactions per-se are a fundamental problem.\n> I think the error handling implementation is ridiculously complicated,\n> and while I started to hack on improving it, I stopped when I really\n> couldn't understand what errors it actually needs to handle when and\n> why.\n\nHere is the diagram that may help to understand error handling in\nSQL/JSON functions (I hope it will be displayed correctly):\n\n\n JSON path -------\n expression \\\n ->+-----------+ SQL/JSON +----------+ Result\n PASSING args ------->| JSON path |--> item or --->| Output |-> SQL\n ->| executor | JSONB .->| Coercion | value\n / +-----------+ datum | +----------+\n JSON + - - - -+ | | | |\n Context ->: FORMAT : v v | v\n item : JSON : error? empty? | error?\n + - - - -+ | | | |\n | | +----------+ | /\n v | | ON EMPTY |--> SQL --' /\n error? | +----------+ value /\n | | | /\n \\ | v /\n \\ \\ error? /\n \\ \\ | /\n \\______ \\ | _____________/\n \\ \\ | /\n v v v v +----------+\n +----------+ | Output | Result\n | ON ERROR |--->| Coercion |--> SQL\n +----------+ +----------+ value\n | |\n V V\n EXCEPTION EXCEPTION\n\n\nThe first dashed box \"FORMAT JSON\" used for parsing JSON is absent in\nour implementation, because we support only jsonb type which is\npre-parsed. 
This could be used in queries like that:\nJSON_VALUE('invalid json', '$' DEFAULT 'error' ON ERROR) => 'error'\n\nJSON path executor already has error handling and does not need\nsubtransactions. We had to add functions like numeric_add_opt_error()\nwhich return error flag instead of throwing exceptions.\n\n\nOn Aug 10, 2022 at 3:57 AM Andres Freund<andres@anarazel.de> wrote:\n>>> One way this code could be drastically simplified is to force all\n>>> type-coercions to go through the \"io coercion\" path, which could be\n>>> implemented as a single execution step (which thus could trivially\n>>> start/finish a subtransaction) and would remove a lot of the\n>>> complicated code around coercions.\n\n>> Could you please clarify how you think we might do the io coercion\n>> wrapped with a subtransaction all as a single execution step? I\n>> would've thought that we couldn't do the sub-transaction without\n>> leaving ExecInterpExpr() anyway, so maybe you meant the io coercion\n>> itself was done using some code outside ExecInterpExpr()?\n\n>> The current JsonExpr code does it by recursively calling\n>> ExecInterpExpr() using the nested ExprState expressly for the\n>> coercion.\n\n> The basic idea is to rip out all the type-dependent stuff out and\n> replace it with a single JSON_IOCERCE step, which has a parameter\n> about whether to wrap things in a subtransaction or not. 
That step\n> would always perform the coercion by calling the text output function\n> of the input and the text input function of the output.\n\nOn Aug 3, 2022 at 12:00 AM Andres Freund<andres@anarazel.de> wrote:\n> But we don't need to wrap arbitrary evaluation in a subtransaction -\n> afaics the coercion calls a single function, not an arbitrary\n> expression?\n\nSQL standard says that scalar SQL/JSON items are converted to SQL type\nthrough CAST(corresponding_SQL_type_for_item AS returning_type).\nOur JSON_VALUE implementation supports arbitrary output types that can\nhave specific CASTs from numeric, bool, datetime, which we can't\nemulate with simple I/O coercion. But supporting of arbitrary types\nmay be dangerous, because SQL standard denotes only a limited set of\ntypes:\n\n The <data type> contained in the explicit or implicit\n <JSON returning clause> shall be a <predefined type> that identifies\n a character string data type, numeric data type, boolean data type,\n or datetime data type.\n\nI/O coercion will not even work in the following simple case:\n JSON_VALUE('1.23', '$' RETURNING int)\nIt is expected to return 1::int, like ordinary cast 1.23::numeric::int.\n\nExceptions may come not only from coercions. Expressions in DEFAULT ON\nEMPTY can also throw exceptions, which also must be handled.\n\nHere is excerpt from ISO/IEC 19075-6:2021(E) \"Part 6: Support for JSON\",\nwhich explains SQL standard features in human-readable manner:\n\n 6.4.3 JSON_VALUE:\n\n <JSON value error behavior> specifies what to do if there is an\n unhandled error. Unhandled errors can arise if there is an input\n conversion error (for example, if the context item cannot be parsed),\n an error returned by the SQL/JSON path engine, or an output\n conversion error. 
The choices are the same as for\n <JSON value empty behavior>.\n\n When using DEFAULT <value expression> for either the empty or error\n behavior, what happens if the <value expression> raises an exception?\n The answer is that an error during empty behavior \"falls through\"\n to the error behavior. If the error behavior itself has an error,\n there is no further recourse but to raise the exception.\n\nSo, we need to support execution of arbitrary expressions inside a\nsubtransaction, and not try to somehow simplify coercions.\n\n\nIn Amit's fix, wrapping DEFAULT ON EMPTY into subtransactions was\nlost, mainly because there were no tests for this case. The following\ntest should not fail on the second row:\n\n SELECT JSON_VALUE(jsonb '1', '$.a' RETURNING int\n DEFAULT 1 / x ON EMPTY\n DEFAULT 2 ON ERROR)\n FROM (VALUES (1::int), (0)) x(x);\n\n json_value\n ------------\n 1\n 2\n\nI have added this test in 0003.\n\n\nOn Aug 3, 2022 at 12:00 AM Andres Freund<andres@anarazel.de> wrote:\n> On 2022-08-02 12:05:55 +0900, Amit Langote wrote:\n>> I am not really sure if different coercions may be used\n>> in the same query over multiple evaluations of the same JSON path\n>> expression, but maybe that's also possible.\n\n> Even if the type can change, I don't think that means we need to have\n> space for multiple types at the same time - there can't be multiple\n> coercions happening at the same time, otherwise there could be two\n> coercions of the same type as well. So we don't need memory for\n> every coercion type.\n\nOnly the one item coercion is used in execution of JSON_VALUE().\nMultiple coercions could be executed, if we supported quite useful SRF\nJSON_QUERY() using \"RETURNING SETOF type\" (I had this idea for a long\ntime, but I didn't dare to implement it).\n\nI don't understand what \"memory\" you mean. If we will not emit all\npossible expressions statically, we will need to generate them\ndynamically at run-time, and this could be hardly acceptable. 
In the\nlast version of the fix there are only 4 bytes (int jump) of additional\nstate space per coercion.\n\n\nOn Aug 6, 2022 at 5:37 Andres Freund<andres@anarazel.de> wrote:\n> There's one layer of subtransactions in one of the paths in\n> ExecEvalJsonExpr(), another in ExecEvalJson(). Some paths of\n> ExecEvalJsonExpr() go through subtransactions, others don't.\n\nReally, there is only one level of subtransactions. The outer subtransaction\nmay be used for FORMAT JSON handling, which always requires a\nsubtransaction at the beginning of expression execution.\nInner subtransactions are conditional; they are started only when\nthere is no outer subtransaction.\n\nNow, outer subtransactions are not used at all,\nExecEvalJsonNeedsSubtransaction(NULL) always returns false. (AFAIR,\nFORMAT JSON was present in older version of SQL/JSON patches, then it\nwas removed, but outer subtransactions were not). In the last version\nof the fix I have removed them completely and moved inner\nsubtransactions into a separate executor step (see below).\n\n\n\nThe description of the patches:\n\n0001 - Fix returning of json[b] domains in JSON_VALUE()\n\n(This may require a separate thread.)\n\nI found a bug in returning json[b] domains in JSON_VALUE(). json[b]\nhas special processing in JSON_VALUE, bypassing ordinary\nSQL/JSON item type => SQL type coercions. 
But json[b] domains miss\nthis processing:\n\n CREATE DOMAIN jsonb_not_null AS jsonb NOT NULL;\n\n SELECT JSON_VALUE('\"123\"', '$' RETURNING jsonb);\n \"123\"\n\n SELECT JSON_VALUE( '123', '$' RETURNING jsonb);\n 123\n\n SELECT JSON_VALUE('\"123\"', '$' RETURNING jsonb_not_null);\n 123\n\n SELECT JSON_VALUE( '123', '$' RETURNING jsonb_not_null ERROR ON ERROR);\n ERROR: SQL/JSON item cannot be cast to target type\n\nFixed by examinating output base type in parse_expr.c and skipping\nallocation of item coercions, what later will be a signal for special\nprocessing in ExecEvalJsonExpr().\n\n\n0002 - Add EEOP_SUBTRANS executor step\n\nOn 2022-08-02 12:05:55 +0900, Amit Langote wrote:\n> So, the problem with inlining coercion evaluation into the main parent\n> JsonExpr's is that it needs to be wrapped in a sub-transaction to\n> catch any errors and return NULL instead. I don't know a way to wrap\n> ExprEvalStep evaluation in a sub-transaction to achieve that effect.\n\nI also don't know way to run subtransactions without recursion in\nexecutor, but I still managed to elimiate subsidary ExprStates.\n\nI have introduced new EEOP_SUBTRANS step which executes its subsequent\nsteps in a subtransaction. It recursively calls a new variant of\nExecInterpExpr() in which starting stepno is passed. The return from\nsubtransaction is done with EEOP_DONE step that emitted after\nsubexpression. 
This step can be reused for other future expressions,\nthat's why it has no JSON prefix in its name (you can see a recent\nmessage in the thread about casts with default values, which are\nmissing in PostgreSQL).\n\nBut for JIT I still had to construct an additional ExprState with a\nfunction compiled from subexpression steps.\n\n\n0003 - Simplify JsonExpr execution:\n\n - New EEOP_SUBTRANS was used to wrap individual coercion expressions:\n after execution it jumps to \"done\" or \"onerror\" step\n - JSONEXPR_ITEM_COERCE step was removed\n - JSONEXPR_COERCE split into JSONEXPR_IOCOERCE and JSONEXPR_POPULATE\n - Removed all JsonExprPostEvalState\n - JSONEXPR step simply returns the jump address to one of its possible\n continuations: done, onempty, onerror, coercion, coercion_subtrans,\n io_coercion or one of item_coercions\n - Fixed JsonExprNeedsSubTransaction(): considered more cases\n - Eliminated transactions on Const expressions\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 16 Aug 2022 04:02:17 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-15 15:38:53 -0700, Andres Freund wrote:\n> Next question:\n> \n> \t/*\n> \t * We should catch exceptions of category ERRCODE_DATA_EXCEPTION and\n> \t * execute the corresponding ON ERROR behavior then.\n> \t */\n> \toldcontext = CurrentMemoryContext;\n> \toldowner = CurrentResourceOwner;\n> \n> \tAssert(error);\n> \n> \tBeginInternalSubTransaction(NULL);\n> \t/* Want to execute expressions inside function's memory context */\n> \tMemoryContextSwitchTo(oldcontext);\n> \n> \tPG_TRY();\n> \t{\n> \t\tres = func(op, econtext, res, resnull, p, error);\n> \n> \t\t/* Commit the inner transaction, return to outer xact context */\n> \t\tReleaseCurrentSubTransaction();\n> \t\tMemoryContextSwitchTo(oldcontext);\n> \t\tCurrentResourceOwner = oldowner;\n> \t}\n> \tPG_CATCH();\n> \t{\n> \t\tErrorData *edata;\n> \t\tint\t\t\tecategory;\n> \n> \t\t/* Save error info in oldcontext */\n> \t\tMemoryContextSwitchTo(oldcontext);\n> \t\tedata = CopyErrorData();\n> \t\tFlushErrorState();\n> \n> \t\t/* Abort the inner transaction */\n> \t\tRollbackAndReleaseCurrentSubTransaction();\n> \t\tMemoryContextSwitchTo(oldcontext);\n> \t\tCurrentResourceOwner = oldowner;\n> \n> \n> Two points:\n> \n> 1) I suspect it's not safe to switch to oldcontext before calling func().\n> \n> On error we'll have leaked memory into oldcontext and we'll just continue\n> on. It might not be very consequential here, because the calling context\n> presumably isn't very long lived, but that's probably not something we should\n> rely on.\n> \n> Also, are we sure that the context will be in a clean state when it's used\n> within an erroring subtransaction?\n> \n> \n> I think the right thing here would be to stay in the subtransaction context\n> and then copy the datum out to the surrounding context in the success case.\n> \n> \n> 2) If there was an out-of-memory error, it'll have been in oldcontext. 
So\n> switching back to it before calling CopyErrorData() doesn't seem good - we'll\n> just hit OOM issues again.\n> \n> \n> I realize that both of these issues are present in plenty other code (see\n> e.g. plperl_spi_exec()). So I'm curious why they are ok?\n\nCertainly seems to be missing a FreeErrorData() for the happy path?\n\n\nIt'd be nicer if we didn't copy the error. In the case we rethrow we don't\nneed it, because we can just PG_RE_THROW(). And in the other path we just want\nto get the error code. It just risks additional errors to CopyErrorData(). But\nit's not entirely obvious that geterrcode() is intended for this:\n\n * This is only intended for use in error callback subroutines, since there\n * is no other place outside elog.c where the concept is meaningful.\n */\n\na PG_CATCH() block isn't really an error callback subroutine. But it should be\nfine.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Aug 2022 18:04:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-16 04:02:17 +0300, Nikita Glukhov wrote:\n> Hi,\n>\n>\n> On 16.08.2022 01:38, Andres Freund wrote:\n> > Continuation from the thread at\n> > https://postgr.es/m/20220811171740.m5b4h7x63g4lzgrk%40awork3.anarazel.de\n> >\n> >\n> > I started hacking on this Friday. I think there's some relatively easy\n> > improvements that make the code substantially more understandable, at least\n> > for me, without even addressing the structural stuff.\n>\n> I also started hacking Friday, hacked all weekend, and now have a new\n> version of the patch.\n\nCool.\n\n\n> > I don't understand the design of what needs to have error handling,\n> > and what not.\n>\n> > I don't think subtransactions per-se are a fundamental problem.\n> > I think the error handling implementation is ridiculously complicated,\n> > and while I started to hack on improving it, I stopped when I really\n> > couldn't understand what errors it actually needs to handle when and\n> > why.\n>\n> Here is the diagram that may help to understand error handling in\n> SQL/JSON functions (I hope it will be displayed correctly):\n\nI think that is helpful.\n\n\n> JSON path -------\n> expression \\\n> ->+-----------+ SQL/JSON +----------+ Result\n> PASSING args ------->| JSON path |--> item or --->| Output |-> SQL\n> ->| executor | JSONB .->| Coercion | value\n> / +-----------+ datum | +----------+\n> JSON + - - - -+ | | | |\n> Context ->: FORMAT : v v | v\n> item : JSON : error? empty? | error?\n> + - - - -+ | | | |\n> | | +----------+ | /\n> v | | ON EMPTY |--> SQL --' /\n> error? | +----------+ value /\n> | | | /\n> \\ | v /\n> \\ \\ error? 
/\n> \\ \\ | /\n> \\______ \\ | _____________/\n> \\ \\ | /\n> v v v v +----------+\n> +----------+ | Output | Result\n> | ON ERROR |--->| Coercion |--> SQL\n> +----------+ +----------+ value\n> | |\n> V V\n> EXCEPTION EXCEPTION\n>\n>\n> The first dashed box \"FORMAT JSON\" used for parsing JSON is absent in\n> our implementation, because we support only jsonb type which is\n> pre-parsed. This could be used in queries like that:\n> JSON_VALUE('invalid json', '$' DEFAULT 'error' ON ERROR) => 'error'\n\n\n> On Aug 3, 2022 at 12:00 AM Andres Freund<andres@anarazel.de> wrote:\n> > But we don't need to wrap arbitrary evaluation in a subtransaction -\n> > afaics the coercion calls a single function, not an arbitrary\n> > expression?\n>\n> SQL standard says that scalar SQL/JSON items are converted to SQL type\n> through CAST(corresponding_SQL_type_for_item AS returning_type).\n> Our JSON_VALUE implementation supports arbitrary output types that can\n> have specific CASTs from numeric, bool, datetime, which we can't\n> emulate with simple I/O coercion. But supporting of arbitrary types\n> may be dangerous, because SQL standard denotes only a limited set of\n> types:\n>\n> The <data type> contained in the explicit or implicit\n> <JSON returning clause> shall be a <predefined type> that identifies\n> a character string data type, numeric data type, boolean data type,\n> or datetime data type.\n>\n> I/O coercion will not even work in the following simple case:\n> JSON_VALUE('1.23', '$' RETURNING int)\n> It is expected to return 1::int, like ordinary cast 1.23::numeric::int.\n\nWhether it's just IO coercions or also coercions through function calls\ndoesn't matter terribly, as long as both can be wrapped as a single\ninterpretation step. You can have a EEOP_JSON_COERCE_IO,\nEEOP_JSON_COERCE_FUNC that respectively call input/output function and the\ntransformation routine within a subtransaction. 
On error they can jump to some\non_error execution step.\n\nThe difficulty is likely just dealing with the intermediary nodes like\nRelabelType.\n\n\n> Exceptions may come not only from coercions. Expressions in DEFAULT ON\n> EMPTY can also throw exceptions, which also must be handled.\n\nAre there other cases?\n\n\n> Only the one item coercion is used in execution of JSON_VALUE().\n> Multiple coercions could be executed, if we supported quite useful SRF\n> JSON_QUERY() using \"RETURNING SETOF type\" (I had this idea for a long\n> time, but I didn't dare to implement it).\n>\n> I don't understand what \"memory\" you mean.\n\nI'm not entirely sure what I meant at that time either. Understanding this\ncode involves a lot of guessing since there's practically no explanatory\ncomments.\n\n\n> If we will not emit all possible expressions statically, we will need to\n> generate them dynamically at run-time, and this could be hardly acceptable.\n\nI'm not convinced that that's true. We spend a fair amount of memory\ngenerating expression paths for the per-type elements in JsonItemCoercions,\nmost of which will never be used. 
Even trivial stuff ends up with ~2kB.\n\nThen there's of course the executor side, where the various ExprStates really\nadd up:\nMemoryContextStats(CurrentMemoryContext) in ExecInitExprRec(), just before\nif (jext->coercions)\n\nExecutorState: 8192 total in 1 blocks; 4464 free (0 chunks); 3728 used\n ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\nGrand total: 16384 bytes in 2 blocks; 12392 free (0 chunks); 3992 used\n\njust after:\n\nExecutorState: 32768 total in 3 blocks; 15032 free (2 chunks); 17736 used\n ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\nGrand total: 40960 bytes in 4 blocks; 22960 free (2 chunks); 18000 used\n\nfor SELECT JSON_VALUE(NULL::jsonb, '$');\n\n\n> In the last version of the fix there is only 4 bytes (int jump) of\n> additional state space per coercion.\n\nThat's certainly a *lot* better.\n\n\n\n> On Aug 6, 2022 at 5:37 Andres Freund<andres@anarazel.de> wrote:\n> > There's one layer of subtransactions in one of the paths in\n> > ExecEvalJsonExpr(), another in ExecEvalJson(). Some paths of\n> > ExecEvalJsonExpr() go through subtransactions, others don't.\n>\n> Really, there is only one level of subtransactions. Outer subtransaction\n> may be used for FORMAT JSON handling which always requires\n> subtransaction at the beginning of expression execution.\n> Inner subtransactions are conditional, they are started only and when\n> there is no outer subtransaction.\n\nYea, I realized that by now as well. But the code doesn't make that\nunderstandable. E.g.:\n\n> Now, outer subtransactions are not used at all,\n> ExecEvalJsonNeedsSubtransaction(NULL) always returns false. 
(AFAIR,\n> FORMAT JSON was present in older version of SQL/JSON patches, then it\n> was removed, but outer subtransactions were not).\n\nis very misleading.\n\n\n> 0002 - Add EEOP_SUBTRANS executor step\n>\n> On 2022-08-02 12:05:55 +0900, Amit Langote wrote:\n> > So, the problem with inlining coercion evaluation into the main parent\n> > JsonExpr's is that it needs to be wrapped in a sub-transaction to\n> > catch any errors and return NULL instead. I don't know a way to wrap\n> > ExprEvalStep evaluation in a sub-transaction to achieve that effect.\n>\n> I also don't know a way to run subtransactions without recursion in the\n> executor, but I still managed to eliminate subsidiary ExprStates.\n>\n> I have introduced a new EEOP_SUBTRANS step which executes its subsequent\n> steps in a subtransaction. It recursively calls a new variant of\n> ExecInterpExpr() in which the starting stepno is passed. The return from\n> the subtransaction is done with an EEOP_DONE step that is emitted after the\n> subexpression. This step can be reused for other future expressions,\n> that's why it has no JSON prefix in its name (you can see a recent\n> message in the thread about casts with default values, which are\n> missing in PostgreSQL).\n\nI've wondered about this as well, but I think it'd require quite careful work\nto be safe. 
And certainly isn't something we can do at this point in the cycle\n- it'll potentially impact every query, not just ones with json in, if we\nscrew up something (or introduce overhead).\n\n\n> But for JIT I still had to construct an additional ExprState with a\n> function compiled from subexpression steps.\n\nJIT is one of the reasons *not* to want to construct subsidiary ExprState's, since\nthey will trigger separate code generation (and thus overhead).\n\nWhy did you have to do this?\n\n\n\n> 0003 - Simplify JsonExpr execution:\n>\n> - New EEOP_SUBTRANS was used to wrap individual coercion expressions:\n> after execution it jumps to \"done\" or \"onerror\" step\n> - JSONEXPR_ITEM_COERCE step was removed\n> - JSONEXPR_COERCE split into JSONEXPR_IOCOERCE and JSONEXPR_POPULATE\n> - Removed all JsonExprPostEvalState\n> - JSONEXPR step simply returns the jump address to one of its possible\n> continuations: done, onempty, onerror, coercion, coercion_subtrans,\n> io_coercion or one of item_coercions\n> - Fixed JsonExprNeedsSubTransaction(): considered more cases\n> - Eliminated transactions on Const expressions\n\n\nI pushed a few cleanups to https://github.com/anarazel/postgres/commits/json\nwhile I was hacking on this (ignore that it's based on the meson tree, that's\njust faster for me). Some of them might not be applicable anymore, but it\nmight still make sense for you to look at.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Aug 2022 19:14:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Mon, Aug 15, 2022 at 6:39 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't think it's sane from a performance view to start a subtransaction for\n> every coercion, particularly because most coercion paths will never trigger an\n> error, leaving things like out-of-memory or interrupts aside. And those are\n> re-thrown by ExecEvalJsonExprSubtrans(). A quick and dirty benchmark shows\n> ERROR ON ERROR nearly 2xing speed. I'm worried about the system impact of\n> using subtransactions this heavily, it's not exactly the best performing\n> system - the only reason it's kind of ok here is that it's going to be very\n> rare to allocate a subxid, I think.\n\nI agree. It kinda surprises me that we thought it was OK to commit\nsomething that uses that many subtransactions. I feel like that's\ngoing to cause people to hose themselves in ways that we can't really\ndo anything about. Like they'll test it out, it will work, and then\nwhen they put it into production, they'll have constant wraparound\nissues for which the only real solution is to not use the feature they\nrelied on to build the application.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Aug 2022 09:55:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\r\n\r\nOn 8/15/22 10:14 PM, Andres Freund wrote:\r\n\r\n> I pushed a few cleanups to https://github.com/anarazel/postgres/commits/json\r\n> while I was hacking on this (ignore that it's based on the meson tree, that's\r\n> just faster for me). Some of them might not be applicable anymore, but it\r\n> might still make sense for you to look at.\r\n\r\nWith RMT hat on, this appears to be making progress. A few questions / \r\ncomments for the group:\r\n\r\n1. Nikita: Did you have a chance to review Andres's changes as well?\r\n\r\n2. There seems to be some, though limited, progress on design docs. \r\nAndres keeps making a point on adding additional comments to the code to \r\nmake it easier to follow. Please do not lose sight of this.\r\n\r\n3. Robert raised a point about the use of subtransactions and the \r\nincreased risk of wraparound on busy systems using the SQL/JSON \r\nfeatures. Do these patches help reduce this risk? I read some clarity on \r\nthe use of subtransactions within the patchset, but want to better \r\nunderstand if the risks pointed out are a concern.\r\n\r\nThanks everyone for your work on this so far!\r\n\r\nJonathan",
"msg_date": "Tue, 16 Aug 2022 21:45:06 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\nOn 17.08.2022 04:45, Jonathan S. Katz wrote:\n>\n> On 8/15/22 10:14 PM, Andres Freund wrote:\n>\n>> I pushed a few cleanups to \n>> https://github.com/anarazel/postgres/commits/json\n>> while I was hacking on this (ignore that it's based on the meson \n>> tree, that's\n>> just faster for me). Some of them might not be applicable anymore, \n>> but it\n>> might still make sense for you to look at.\n>\n> With RMT hat on, this appears to be making progress. A few questions / \n> comments for the group:\n>\n> 1. Nikita: Did you have a chance to review Andres's changes as well?\n\nYes, I have reviewed Andres's changes, they all are ok.\n\nThen I started to do on the top of it other fixes that help to avoid\nsubtransactions when they are not needed. And it ended in the new\nrefactoring of coercion code. Also I moved here from v6-0003 fix of\nExecEvalJsonNeedSubtransaction() which considers more cases.\n\n\n\nOn 16.08.2022 05:14, Andres Freund wrote:\n>> But for JIT I still had to construct additional ExprState with a\n>> function compiled from subexpression steps.\n\n> Why did you have to do this?\n\nI simply did not dare to implement compilation of recursively-callable\nfunction with additional parameter stepno. In the v8 patch I did it\nby adding a switch with all possible jump addresses of EEOP_SUBTRANS\nsteps in the beginning of the function. And it really seems to work\nfaster, but needs more exploration. 
See patch 0003, where both\nvariants are preserved using #ifdef.\n\n\nThe description of the v7 patches:\n\n0001 Simplify JsonExpr execution\n Andres's changes + mine:\n - Added JsonCoercionType enum, fields like via_io replaced with it\n - Emit only context item steps in JSON_TABLE_OP case\n - Skip coercion of NULLs to non-domain types (is it correct?)\n\n0002 Fix returning of json[b] domains in JSON_VALUE:\n simply rebase of v6 onto 0001\n\n0003 Add EEOP_SUBTRANS executor step\n v6 + new recursive JIT\n\n0004 Split JsonExpr execution into steps\n simply rebase of v6 + used LLVMBuildSwitch() in EEOP_JSONEXPR\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 18 Aug 2022 06:45:56 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\r\n\r\nOn 8/17/22 11:45 PM, Nikita Glukhov wrote:\r\n> Hi,\r\n> \r\n> On 17.08.2022 04:45, Jonathan S. Katz wrote:\r\n>>\r\n>> On 8/15/22 10:14 PM, Andres Freund wrote:\r\n>>\r\n>>> I pushed a few cleanups to \r\n>>> https://github.com/anarazel/postgres/commits/json\r\n>>> while I was hacking on this (ignore that it's based on the meson \r\n>>> tree, that's\r\n>>> just faster for me). Some of them might not be applicable anymore, \r\n>>> but it\r\n>>> might still make sense for you to look at.\r\n>>\r\n>> With RMT hat on, this appears to be making progress. A few questions / \r\n>> comments for the group:\r\n>>\r\n>> 1. Nikita: Did you have a chance to review Andres's changes as well?\r\n> \r\n> Yes, I have reviewed Andres's changes, they all are ok.\r\n\r\nThank you!\r\n\r\n> Then I started to do on the top of it other fixes that help to avoid\r\n> subtransactions when they are not needed. And it ended in the new\r\n> refactoring of coercion code. Also I moved here from v6-0003 fix of\r\n> ExecEvalJsonNeedSubtransaction() which considers more cases.\r\n\r\nGreat.\r\n\r\nAndres, Robert: Do these changes address your concerns about the use of \r\nsubstransactions and reduce the risk of xid wraparound?\r\n\r\n> On 16.08.2022 05:14, Andres Freund wrote:\r\n>>> But for JIT I still had to construct additional ExprState with a\r\n>>> function compiled from subexpression steps.\r\n> \r\n>> Why did you have to do this?\r\n> \r\n> I simply did not dare to implement compilation of recursively-callable\r\n> function with additional parameter stepno. In the v8 patch I did it\r\n> by adding a switch with all possible jump addresses of EEOP_SUBTRANS\r\n> steps in the beginning of the function. And it really seems to work\r\n> faster, but needs more exploration. 
See patch 0003, where both\r\n> variants preserved using #ifdef.\r\n> \r\n> \r\n> The desciprion of the v7 patches:\r\n> \r\n> 0001 Simplify JsonExpr execution\r\n> Andres's changes + mine:\r\n> - Added JsonCoercionType enum, fields like via_io replaced with it\r\n> - Emit only context item steps in JSON_TABLE_OP case\r\n> - Skip coercion of NULLs to non-domain types (is it correct?)\r\n> \r\n> 0002 Fix returning of json[b] domains in JSON_VALUE:\r\n> simply rebase of v6 onto 0001\r\n> \r\n> 0003 Add EEOP_SUBTRANS executor step\r\n> v6 + new recursive JIT\r\n> \r\n> 0004 Split JsonExpr execution into steps\r\n> simply rebase of v6 + used LLMBuildSwitch() in EEOP_JSONEXPR\r\n\r\nWhat do folks think of these patches?\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Fri, 19 Aug 2022 10:11:01 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 8/19/22 10:11 AM, Jonathan S. Katz wrote:\r\n> Hi,\r\n> \r\n> On 8/17/22 11:45 PM, Nikita Glukhov wrote:\r\n>> Hi,\r\n>>\r\n>> On 17.08.2022 04:45, Jonathan S. Katz wrote:\r\n>>>\r\n>>> On 8/15/22 10:14 PM, Andres Freund wrote:\r\n>>>\r\n>>>> I pushed a few cleanups to \r\n>>>> https://github.com/anarazel/postgres/commits/json\r\n>>>> while I was hacking on this (ignore that it's based on the meson \r\n>>>> tree, that's\r\n>>>> just faster for me). Some of them might not be applicable anymore, \r\n>>>> but it\r\n>>>> might still make sense for you to look at.\r\n>>>\r\n>>> With RMT hat on, this appears to be making progress. A few questions \r\n>>> / comments for the group:\r\n>>>\r\n>>> 1. Nikita: Did you have a chance to review Andres's changes as well?\r\n>>\r\n>> Yes, I have reviewed Andres's changes, they all are ok.\r\n> \r\n> Thank you!\r\n> \r\n>> Then I started to do on the top of it other fixes that help to avoid\r\n>> subtransactions when they are not needed. And it ended in the new\r\n>> refactoring of coercion code. Also I moved here from v6-0003 fix of\r\n>> ExecEvalJsonNeedSubtransaction() which considers more cases.\r\n> \r\n> Great.\r\n> \r\n> Andres, Robert: Do these changes address your concerns about the use of \r\n> substransactions and reduce the risk of xid wraparound?\r\n> \r\n>> On 16.08.2022 05:14, Andres Freund wrote:\r\n>>>> But for JIT I still had to construct additional ExprState with a\r\n>>>> function compiled from subexpression steps.\r\n>>\r\n>>> Why did you have to do this?\r\n>>\r\n>> I simply did not dare to implement compilation of recursively-callable\r\n>> function with additional parameter stepno. In the v8 patch I did it\r\n>> by adding a switch with all possible jump addresses of EEOP_SUBTRANS\r\n>> steps in the beginning of the function. And it really seems to work\r\n>> faster, but needs more exploration. 
See patch 0003, where both\r\n>> variants preserved using #ifdef.\r\n>>\r\n>>\r\n>> The desciprion of the v7 patches:\r\n>>\r\n>> 0001 Simplify JsonExpr execution\r\n>> Andres's changes + mine:\r\n>> - Added JsonCoercionType enum, fields like via_io replaced with it\r\n>> - Emit only context item steps in JSON_TABLE_OP case\r\n>> - Skip coercion of NULLs to non-domain types (is it correct?)\r\n>>\r\n>> 0002 Fix returning of json[b] domains in JSON_VALUE:\r\n>> simply rebase of v6 onto 0001\r\n>>\r\n>> 0003 Add EEOP_SUBTRANS executor step\r\n>> v6 + new recursive JIT\r\n>>\r\n>> 0004 Split JsonExpr execution into steps\r\n>> simply rebase of v6 + used LLMBuildSwitch() in EEOP_JSONEXPR\r\n> \r\n> What do folks think of these patches?\r\n\r\nAndres, Andrew, Amit, Robert -- as you have either worked on this or \r\nexpressed opinions -- any thoughts on this current patch set?\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Mon, 22 Aug 2022 21:52:01 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 10:52 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> Andres, Andrew, Amit, Robert -- as you have either worked on this or\n> expressed opinions -- any thoughts on this current patch set?\n\nFWIW, I've started looking at these patches.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 11:35:11 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-22 21:52:01 -0400, Jonathan S. Katz wrote:\n> Andres, Andrew, Amit, Robert -- as you have either worked on this or\n> expressed opinions -- any thoughts on this current patch set?\n\nTo me it feels like there's probably too much work here to cram in at this\npoint. If several other committers shared the load of working on this it'd\nperhaps be doable, but I've not seen many volunteers.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Aug 2022 19:57:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Mon, Aug 22, 2022 at 07:57:29PM -0700, Andres Freund wrote:\n> To me it feels like there's a probably too much work here to cram it at this\n> point. If several other committers shared the load of working on this it'd\n> perhaps be doable, but I've not seen many volunteers.\n\nWhile 0002 is dead simple, I am worried about the complexity created\nby 0001, 0003 (particularly tightening subtransactions with a\nCASE_EEOP) and 0004 at this late stage of the release process:\nhttps://www.postgresql.org/message-id/7d83684b-7932-9f29-400b-0beedfafcdd4@postgrespro.ru\n\nThis is not a good sign after three betas for a feature as complex as\nthis one.\n--\nMichael",
"msg_date": "Tue, 23 Aug 2022 13:13:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi Nikita,\n\nOn Thu, Aug 18, 2022 at 12:46 PM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> The desciprion of the v7 patches:\n>\n> 0001 Simplify JsonExpr execution\n> Andres's changes + mine:\n> - Added JsonCoercionType enum, fields like via_io replaced with it\n> - Emit only context item steps in JSON_TABLE_OP case\n> - Skip coercion of NULLs to non-domain types (is it correct?)\n\nI like the parser changes to add JsonCoercionType, because that makes\nExecEvalJsonExprCoercion() so much simpler to follow.\n\nIn coerceJsonExpr():\n\n+ if (!allow_io_coercion)\n+ return NULL;\n+\n\nMight it make more sense to create a JsonCoercion even in this case\nand assign it the type JSON_COERCION_ERROR, rather than allow the\ncoercion to be NULL and doing the following in ExecInitExprRec():\n\n+ if (!*coercion)\n+ /* Missing coercion here means missing cast */\n+ cstate->type = JSON_COERCION_ERROR;\n\nLikewise in transformJsonFuncExpr():\n\n+ if (coercion_expr != (Node *) placeholder)\n+ {\n+ jsexpr->result_coercion = makeNode(JsonCoercion);\n+ jsexpr->result_coercion->expr = coercion_expr;\n+ jsexpr->result_coercion->ctype = JSON_COERCION_VIA_EXPR;\n+ }\n\nHow about creating a JSON_COERCION_NONE coercion in the else block of\nthis, just like coerceJsonExpr() does?\n\nRelated to that, the JSON_EXISTS_OP block in\nExecEvalJsonExprInternal() sounds to assume that result_coercion would\nalways be non-NULL, per the comment in the last line:\n\n case JSON_EXISTS_OP:\n {\n bool exists = JsonPathExists(item, path,\n jsestate->args,\n error);\n\n *resnull = error && *error;\n res = BoolGetDatum(exists);\n break; /* always use result coercion */\n }\n\n...but it won't be if the above condition is false?\n\n> 0002 Fix returning of json[b] domains in JSON_VALUE:\n> simply rebase of v6 onto 0001\n\nEspecially after seeing the new comments in this one, I'm wondering if\nit makes sense to rename result_coercion to, say, default_coercion?\n\n> 0003 Add EEOP_SUBTRANS 
executor step\n> v6 + new recursive JIT\n>\n> 0004 Split JsonExpr execution into steps\n> simply rebase of v6 + used LLMBuildSwitch() in EEOP_JSONEXPR\n\nWill need to spend more time looking at these.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 16:48:44 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 4:48 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Aug 18, 2022 at 12:46 PM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> > The desciprion of the v7 patches:\n> >\n> > 0003 Add EEOP_SUBTRANS executor step\n> > v6 + new recursive JIT\n> >\n> > 0004 Split JsonExpr execution into steps\n> > simply rebase of v6 + used LLMBuildSwitch() in EEOP_JSONEXPR\n>\n> Will need to spend more time looking at these.\n\n0004 adds the following to initJsonItemCoercions():\n\n+ /* When returning JSON types, no need to initialize coercions */\n+ /* XXX domain types on json/jsonb */\n+ if (returning->typid == JSONBOID || returning->typid == JSONOID)\n+ return NULL;\n\nBut maybe it's dead code, because 0001 has this:\n\n+ if (jsexpr->returning->typid != JSONOID &&\n+ jsexpr->returning->typid != JSONBOID)\n+ jsexpr->coercions =\n+ initJsonItemCoercions(pstate, jsexpr->returning,\n+ exprType(contextItemExpr));\n\n+ /* We need to handle RETURNING int etc. */\n\nIs this a TODO and what does it mean?\n\n+ * \"JsonCoercion == NULL\" means no cast is available.\n+ * \"JsonCoercion.expr == NULL\" means no coercion is needed.\n\nAs said in my previous email, I wonder if these cases are better\nhandled by adding JSON_COERCION_ERROR and JSON_COERCION_NONE\ncoercions?\n\n+/* Skip calling ExecEvalJson() on a JsonExpr? */\n\nExecEvalJsonExpr()\n\nWill look more.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 17:21:59 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 8/23/22 12:13 AM, Michael Paquier wrote:\r\n> On Mon, Aug 22, 2022 at 07:57:29PM -0700, Andres Freund wrote:\r\n>> To me it feels like there's a probably too much work here to cram it at this\r\n>> point. If several other committers shared the load of working on this it'd\r\n>> perhaps be doable, but I've not seen many volunteers.\r\n> \r\n> While 0002 is dead simple, I am worried about the complexity created\r\n> by 0001, 0003 (particularly tightening subtransactions with a\r\n> CASE_EEOP) and 0004 at this late stage of the release process:\r\n> https://www.postgresql.org/message-id/7d83684b-7932-9f29-400b-0beedfafcdd4@postgrespro.ru\r\n> \r\n> This is not a good sign after three betas for a feature as complex as\r\n> this one.\r\n\r\nI see Amit is taking a closer look at the patches.\r\n\r\nThe RMT had its regular meeting today and discussed the state of \r\nprogress on this and how it reflects release timing. Our feeling is that \r\nregardless if the patchset is included/reverted, it would necessitate a \r\nBeta 4 (to be discussed with release team). While no Beta 4 date is set, \r\ngiven where we are this would probably push the release into early \r\nOctober to allow for adequate testing time.\r\n\r\nTo say it another way, if we want to ensure we can have a 15.1 in the \r\nNovember update releases, we need to make a decision soon on how we want \r\nto proceed.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 23 Aug 2022 10:47:28 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Mon, Aug 22, 2022 at 9:52 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> > Andres, Robert: Do these changes address your concerns about the use of\n> > substransactions and reduce the risk of xid wraparound?\n>\n> Andres, Andrew, Amit, Robert -- as you have either worked on this or\n> expressed opinions -- any thoughts on this current patch set?\n\nI do not think that using subtransactions as part of the expression\nevaluation process is a sound idea pretty much under any\ncircumstances. Maybe if the subtransations aren't commonly created and\ndon't usually get XIDs there wouldn't be a big problem in practice,\nbut it's an awfully heavyweight operation to be done inside expression\nevaluation even in corner cases. I think that if we need to make\ncertain operations that would throw errors not throw errors, we need\nto refactor interfaces until it's possible to return an error\nindicator up to the appropriate level, not just let the error be\nthrown and catch it.\n\nThe patches in question are thousands of lines of new code that I\nsimply do not have time or interest to review in detail. I didn't\ncommit this feature, or write this feature, or review this feature.\nI'm not familiar with any of the code. To really know what's going on\nhere, I would need to review not only the new patches but also all the\ncode in the original commits, and probably some of the preexisting\ncode from before those commits that I have never examined in the past.\nThat would take me quite a few months even if I had absolutely nothing\nelse to do. And because I haven't been involved in this patch set in\nany way, I don't think it's really my responsibility.\n\nAt the end of the day, the RMT is going to have to take a call here.\nIt seems to me that Andres's concerns about code quality and lack of\ncomments are probably somewhat legitimate, and in particular I do not\nthink the use of subtransactions is a good idea. 
I also don't think\nthat trying to fix those problems or generally improve the code by\ncommitting thousands of lines of new code in August when we're\ntargeting a release in September or October is necessarily a good\nidea. But I'm also not in a position to say that the project is going\nto be irreparably damaged if we just ship what we've got, perhaps\nafter fixing the most acute problems that we currently know about.\nThis is after all relatively isolated from the rest of the system.\nFixing the stuff that touches the core executor is probably pretty\nimportant, but beyond that, the worst thing that happens is the\nfeature sucks and people who try to use it have bad experiences. That\nwould be bad, and might be a sufficient reason to revert, but it's not\nnearly as bad as, say, the whole system being slow, or data loss for\nevery user, or something like that. And we do have other bad code in\nthe system. Is this a lot worse? I'm not in a position to say one way\nor the other.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 10:51:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> At the end of the day, the RMT is going to have to take a call here.\n> It seems to me that Andres's concerns about code quality and lack of\n> comments are probably somewhat legitimate, and in particular I do not\n> think the use of subtransactions is a good idea. I also don't think\n> that trying to fix those problems or generally improve the code by\n> committing thousands of lines of new code in August when we're\n> targeting a release in September or October is necessarily a good\n> idea. But I'm also not in a position to say that the project is going\n> to be irreparably damaged if we just ship what we've got, perhaps\n> after fixing the most acute problems that we currently know about.\n\nThe problem here is that this was going to be a headline new feature\nfor v15. Shipping what apparently is only an alpha-quality implementation\nseems pretty problematic unless we advertise it as such, and that's\nnot something we've done very much in the past. I also wonder how\nmuch any attempts at fixing it later would be constrained by concerns\nabout compatibility with the v15 version.\n\n> ... And we do have other bad code in the system.\n\nCan't deny that, but a lot of it is legacy code that we wish we could\nrip out and can't because backwards compatibility. This is not legacy\ncode ... not yet anyway.\n\nAs you say, we've delegated this sort of decision to the RMT, but\nif I were on the RMT I'd be voting to revert.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Aug 2022 11:08:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-23 11:08:31 -0400, Tom Lane wrote:\n> As you say, we've delegated this sort of decision to the RMT, but\n> if I were on the RMT I'd be voting to revert.\n\nYea, I don't really see an alternative at this point. If we really wanted we\ncould try to cut the more complicated pieces out, e.g., by only supporting\nERROR ON ERROR, but I'm not sure it'd get us far enough.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Aug 2022 08:27:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nOn 2022-08-23 Tu 11:08, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> At the end of the day, the RMT is going to have to take a call here.\n>> It seems to me that Andres's concerns about code quality and lack of\n>> comments are probably somewhat legitimate, and in particular I do not\n>> think the use of subtransactions is a good idea. I also don't think\n>> that trying to fix those problems or generally improve the code by\n>> committing thousands of lines of new code in August when we're\n>> targeting a release in September or October is necessarily a good\n>> idea. But I'm also not in a position to say that the project is going\n>> to be irreparably damaged if we just ship what we've got, perhaps\n>> after fixing the most acute problems that we currently know about.\n> The problem here is that this was going to be a headline new feature\n> for v15. Shipping what apparently is only an alpha-quality implementation\n> seems pretty problematic unless we advertise it as such, and that's\n> not something we've done very much in the past. I also wonder how\n> much any attempts at fixing it later would be constrained by concerns\n> about compatibility with the v15 version.\n>\n>> ... And we do have other bad code in the system.\n> Can't deny that, but a lot of it is legacy code that we wish we could\n> rip out and can't because backwards compatibility. This is not legacy\n> code ... not yet anyway.\n>\n> As you say, we've delegated this sort of decision to the RMT, but\n> if I were on the RMT I'd be voting to revert.\n>\n> \t\t\t\n\n\n\nI know I previously said that this was not really severable, but I've\nstarted having second thoughts about that. If we disabled as Not\nImplemented the DEFAULT form of the ON ERROR and ON EMPTY clauses, and\npossibly the RETURNING clause in some cases, it's possible we could get\nrid of most of what's been controversial. 
That could still leave us a\ngood deal of what we want, including JSON_TABLE, which is by far the\nmost interesting of these features. I haven't looked closely yet at how\npossible this is, it only occurred to me today, but I think it's worth\nexploring.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 23 Aug 2022 11:29:39 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-23 10:51:04 -0400, Robert Haas wrote:\n> I do not think that using subtransactions as part of the expression\n> evaluation process is a sound idea pretty much under any\n> circumstances. Maybe if the subtransations aren't commonly created and\n> don't usually get XIDs there wouldn't be a big problem in practice,\n> but it's an awfully heavyweight operation to be done inside expression\n> evaluation even in corner cases. I think that if we need to make\n> certain operations that would throw errors not throw errors, we need\n> to refactor interfaces until it's possible to return an error\n> indicator up to the appropriate level, not just let the error be\n> thrown and catch it.\n\nI don't think that's quite realistic - that's the input/output functions for\nall types, basically. I'd be somewhat content if we'd a small list of very\ncommon coercion paths we knew wouldn't error out, leaving things like OOM\naside. Even just knowing that for ->text conversions would be a huge deal in\nthe context of this patch. One problem here is that the whole type coercion\ninfrastructure doesn't make it easy to know what \"happened inside\" atm, one\nhas to reconstruct it from the emitted expressions, where there can be\nmultiple layers of things to poke through.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Aug 2022 08:55:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi\n\nút 23. 8. 2022 v 17:55 odesílatel Andres Freund <andres@anarazel.de> napsal:\n\n> Hi,\n>\n> On 2022-08-23 10:51:04 -0400, Robert Haas wrote:\n> > I do not think that using subtransactions as part of the expression\n> > evaluation process is a sound idea pretty much under any\n> > circumstances. Maybe if the subtransations aren't commonly created and\n> > don't usually get XIDs there wouldn't be a big problem in practice,\n> > but it's an awfully heavyweight operation to be done inside expression\n> > evaluation even in corner cases. I think that if we need to make\n> > certain operations that would throw errors not throw errors, we need\n> > to refactor interfaces until it's possible to return an error\n> > indicator up to the appropriate level, not just let the error be\n> > thrown and catch it.\n>\n> I don't think that's quite realistic - that's the input/output functions\n> for\n> all types, basically. I'd be somewhat content if we'd a small list of very\n> common coercion paths we knew wouldn't error out, leaving things like OOM\n> aside. Even just knowing that for ->text conversions would be a huge deal\n> in\n> the context of this patch. One problem here is that the whole type\n> coercion\n> infrastructure doesn't make it easy to know what \"happened inside\" atm, one\n> has to reconstruct it from the emitted expressions, where there can be\n> multiple layers of things to poke through.\n>\n\nThe errors that should be handled are related to json structure errors. I\ndon't think so we have to handle all errors and all conversions.\n\nThe JSON knows only three types - and these conversions can be written\nspecially for this case - or we can write json io routines to be able to\nsignal error\nwithout an exception.\n\nRegards\n\nPavel\n\n\n\n\n\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n>\n",
"msg_date": "Tue, 23 Aug 2022 18:06:22 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 11:55 AM Andres Freund <andres@anarazel.de> wrote:\n> I don't think that's quite realistic - that's the input/output functions for\n> all types, basically. I'd be somewhat content if we'd a small list of very\n> common coercion paths we knew wouldn't error out, leaving things like OOM\n> aside. Even just knowing that for ->text conversions would be a huge deal in\n> the context of this patch. One problem here is that the whole type coercion\n> infrastructure doesn't make it easy to know what \"happened inside\" atm, one\n> has to reconstruct it from the emitted expressions, where there can be\n> multiple layers of things to poke through.\n\nBut that's exactly what I'm complaining about. Catching an error that\nunwound a bunch of stack frames where complicated things are happening\nis fraught with peril. There's probably a bunch of errors that could\nbe thrown from somewhere in that code - out of memory being a great\nexample - that should not be caught. What you (probably) want is to\nknow whether one specific error happened or not, and catch only that\none. And the error machinery isn't designed for that. It's not\ndesigned to let you catch specific errors for specific call sites, and\nit's also not designed to be particularly efficient if lots of errors\nneed to be caught over and over again. If you decide to ignore all\nthat and do it anyway, you'll end up with, at best, code that is\ncomplicated, hard to maintain, and probably slow when a lot of errors\nare trapped, and at worst, code that is fragile or outright buggy.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 12:26:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 23.08.2022 19:06, Pavel Stehule wrote:\n> Hi\n>\n> út 23. 8. 2022 v 17:55 odesílatel Andres Freund <andres@anarazel.de> \n> napsal:\n>\n> Hi,\n>\n> On 2022-08-23 10:51:04 -0400, Robert Haas wrote:\n> > I do not think that using subtransactions as part of the expression\n> > evaluation process is a sound idea pretty much under any\n> > circumstances. Maybe if the subtransations aren't commonly\n> created and\n> > don't usually get XIDs there wouldn't be a big problem in practice,\n> > but it's an awfully heavyweight operation to be done inside\n> expression\n> > evaluation even in corner cases. I think that if we need to make\n> > certain operations that would throw errors not throw errors, we need\n> > to refactor interfaces until it's possible to return an error\n> > indicator up to the appropriate level, not just let the error be\n> > thrown and catch it.\n>\n> I don't think that's quite realistic - that's the input/output\n> functions for\n> all types, basically. I'd be somewhat content if we'd a small\n> list of very\n> common coercion paths we knew wouldn't error out, leaving things\n> like OOM\n> aside. Even just knowing that for ->text conversions would be a\n> huge deal in\n> the context of this patch. One problem here is that the whole\n> type coercion\n> infrastructure doesn't make it easy to know what \"happened inside\"\n> atm, one\n> has to reconstruct it from the emitted expressions, where there can be\n> multiple layers of things to poke through.\n>\n>\n> The errors that should be handled are related to json structure \n> errors. 
I don't think so we have to handle all errors and all conversions.\n>\n> The JSON knows only three types - and these conversions can be written \n> specially for this case - or we can write json io routines to be able \n> to signal error\n> without an exception.\n\n\nI also wanted to suggest to limit the set of returning types to the\npredefined set of JSON-compatible types for which can write safe\nconversion functions: character types (text, char), boolean, number\ntypes (integers, floats types, numeric), datetime types. The SQL\nstandard even does not require support of other returning types.\n\nFor the float8 and datetime types we already have safe input functions\nlike float8in_internal_opt_error() and parse_datetime() which are used\ninside jsonpath and return error code instead of throwing errors.\nWe need to implement numeric_intN_safe() and maybe a few other trivial\nfunctions like that.\n\nThe set of returning types, for which we do not need any special\ncoercions, is very limited: json, jsonb, text. More precisely,\neven RETURNING json[b] can throw errors in JSON_QUERY(OMIT QUOTES),\nand we also need safe json parsing, but it can be easily done\nwith pg_parse_json(), which returns error code.",
"msg_date": "Tue, 23 Aug 2022 19:36:11 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 8/23/22 11:08 AM, Tom Lane wrote:\r\n> Robert Haas <robertmhaas@gmail.com> writes:\r\n>> At the end of the day, the RMT is going to have to take a call here.\r\n>> It seems to me that Andres's concerns about code quality and lack of\r\n>> comments are probably somewhat legitimate, and in particular I do not\r\n>> think the use of subtransactions is a good idea. I also don't think\r\n>> that trying to fix those problems or generally improve the code by\r\n>> committing thousands of lines of new code in August when we're\r\n>> targeting a release in September or October is necessarily a good\r\n>> idea. But I'm also not in a position to say that the project is going\r\n>> to be irreparably damaged if we just ship what we've got, perhaps\r\n>> after fixing the most acute problems that we currently know about.\r\n> \r\n> The problem here is that this was going to be a headline new feature\r\n> for v15. Shipping what apparently is only an alpha-quality implementation\r\n> seems pretty problematic unless we advertise it as such, and that's\r\n> not something we've done very much in the past. \r\n\r\nWith my user hat on, we have done this before -- if inadvertently -- but \r\nagree it's not recommended nor a habit we should get into.\r\n\r\n> As you say, we've delegated this sort of decision to the RMT, but\r\n> if I were on the RMT I'd be voting to revert.\r\n\r\nWith RMT hat on, the RMT does have power of forced commit/revert in \r\nabsence of consensus through regular community processes[1]. We did \r\nexplicitly discuss at our meeting today if we were going to make the \r\ndecision right now. We decided that we would come back and set a \r\ndeadline on letting the community processes play out, otherwise we will \r\nmake the decision.\r\n\r\nFor decision deadline: if there is no community consensus by end of Aug \r\n28, 2022 AoE, the RMT will make the decision. 
I know Andrew has been \r\nprepping for the outcome of a revert -- this should give enough for \r\nreview and merge prior to a Beta 4 release (targeted for Sep 8). If \r\nthere is concern about that, the RMT can move up the decision timeframe.\r\n\r\nTaking RMT hat off, if the outcome is \"revert\", I do want to ensure we \r\ndon't lose momentum on getting this into v16. I know a lot of time and \r\neffort has gone into this featureset and it seems to be trending in the \r\nright direction. We have a mixed history on reverts in terms of if/when \r\nthey are committed and I don't want to see that happen to these \r\nfeatures. I do think this will remain a headline feature even if we \r\ndelay it for v16.\r\n\r\nI saw Andrew suggest that the controversial parts of the patchset may be \r\nseverable from some of the new functionality, so I would like to see \r\nthat proposal and if it is enough to overcome concerns.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://wiki.postgresql.org/wiki/Release_Management_Team",
"msg_date": "Tue, 23 Aug 2022 13:18:49 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-23 12:26:55 -0400, Robert Haas wrote:\n> On Tue, Aug 23, 2022 at 11:55 AM Andres Freund <andres@anarazel.de> wrote:\n> > I don't think that's quite realistic - that's the input/output functions for\n> > all types, basically. I'd be somewhat content if we'd a small list of very\n> > common coercion paths we knew wouldn't error out, leaving things like OOM\n> > aside. Even just knowing that for ->text conversions would be a huge deal in\n> > the context of this patch. One problem here is that the whole type coercion\n> > infrastructure doesn't make it easy to know what \"happened inside\" atm, one\n> > has to reconstruct it from the emitted expressions, where there can be\n> > multiple layers of things to poke through.\n> \n> But that's exactly what I'm complaining about. Catching an error that\n> unwound a bunch of stack frames where complicated things are happening\n> is fraught with peril. There's probably a bunch of errors that could\n> be thrown from somewhere in that code - out of memory being a great\n> example - that should not be caught.\n\nThe code as is handles this to some degree. Only ERRCODE_DATA_EXCEPTION,\nERRCODE_INTEGRITY_CONSTRAINT_VIOLATION are caught, the rest is immediately\nrethrown.\n\n\n> What you (probably) want is to know whether one specific error happened or\n> not, and catch only that one. And the error machinery isn't designed for\n> that. It's not designed to let you catch specific errors for specific call\n> sites, and it's also not designed to be particularly efficient if lots of\n> errors need to be caught over and over again. If you decide to ignore all\n> that and do it anyway, you'll end up with, at best, code that is\n> complicated, hard to maintain, and probably slow when a lot of errors are\n> trapped, and at worst, code that is fragile or outright buggy.\n\nI'm not sure what the general alternative is though. 
Part of the feature is\ngenerating a composite type from json - there's just no way we can make all\npossible coercion pathways not error out. That'd necessitate requiring all\nbuiltin types and extensions types out there to provide input functions that\ndon't throw on invalid input and all coercions to not throw either. That just\nseems unrealistic.\n\nI think the best we could without subtransactions do perhaps is to add\nmetadata to pg_cast, pg_type telling us whether certain types of errors are\npossible, and requiring ERROR ON ERROR when coercion paths are required that\ndon't have those options.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Aug 2022 10:23:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> I saw Andrew suggest that the controversial parts of the patchset may be \n> severable from some of the new functionality, so I would like to see \n> that proposal and if it is enough to overcome concerns.\n\nIt's an interesting suggestion. Do people have the cycles available\nto make it happen in the next few days?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Aug 2022 13:24:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-23 13:18:49 -0400, Jonathan S. Katz wrote:\n> Taking RMT hat off, if the outcome is \"revert\", I do want to ensure we don't\n> lose momentum on getting this into v16. I know a lot of time and effort has\n> gone into this featureset and it seems to be trending in the right\n> direction. We have a mixed history on reverts in terms of if/when they are\n> committed and I don't want to see that happen to these features. I do think\n> this will remain a headline feature even if we delay it for v16.\n\nWe could decide to revert this for 15, but leave it in tree for HEAD.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Aug 2022 10:26:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-23 18:06:22 +0200, Pavel Stehule wrote:\n> The errors that should be handled are related to json structure errors. I\n> don't think so we have to handle all errors and all conversions.\n> \n> The JSON knows only three types - and these conversions can be written\n> specially for this case - or we can write json io routines to be able to\n> signal error\n> without an exception.\n\nI think that's not true unfortunately. You can specify return types, and\ncomposite types can be populated. Which essentially requires arbitrary\ncoercions.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Aug 2022 10:27:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-23 12:26:55 -0400, Robert Haas wrote:\n>> But that's exactly what I'm complaining about. Catching an error that\n>> unwound a bunch of stack frames where complicated things are happening\n>> is fraught with peril. There's probably a bunch of errors that could\n>> be thrown from somewhere in that code - out of memory being a great\n>> example - that should not be caught.\n\n> The code as is handles this to some degree. Only ERRCODE_DATA_EXCEPTION,\n> ERRCODE_INTEGRITY_CONSTRAINT_VIOLATION are caught, the rest is immediately\n> rethrown.\n\nThat's still a lot of territory, considering how nonspecific most\nSQLSTATEs are. Even if you can prove that only the intended cases\nare caught today, somebody could inadvertently break it next week\nby using one of those codes somewhere else.\n\nI agree with the upthread comments that we only need/want to catch\nforeseeable incorrect-input errors, and that the way to make that\nhappen is to refactor the related type input functions, and that\na lot of the heavy lifting for that has been done already.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Aug 2022 13:28:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 1:23 PM Andres Freund <andres@anarazel.de> wrote:\n> > But that's exactly what I'm complaining about. Catching an error that\n> > unwound a bunch of stack frames where complicated things are happening\n> > is fraught with peril. There's probably a bunch of errors that could\n> > be thrown from somewhere in that code - out of memory being a great\n> > example - that should not be caught.\n>\n> The code as is handles this to some degree. Only ERRCODE_DATA_EXCEPTION,\n> ERRCODE_INTEGRITY_CONSTRAINT_VIOLATION are caught, the rest is immediately\n> rethrown.\n\nAFAIK, Tom has rejected every previous effort to introduce this type\nof coding into the tree rather forcefully. What makes it OK now?\n\n> I'm not sure what the general alternative is though. Part of the feature is\n> generating a composite type from json - there's just no way we can make all\n> possible coercion pathways not error out. That'd necessitate requiring all\n> builtin types and extensions types out there to provide input functions that\n> don't throw on invalid input and all coercions to not throw either. That just\n> seems unrealistic.\n\nWell, I think that having input functions report input that is not\nvalid for the data type in some way other than just chucking an error\nas they'd also do for a missing TOAST chunk would be a pretty sensible\nplan. I'd support doing that if we forced a hard compatibility break,\nand I'd support that if we provided some way for old code to continue\nrunning in degraded mode. I haven't thought too much about the\ncoercion case, but I suppose the issues are similar. What I don't\nsupport is saying -- well, upgrading our infrastructure is hard, so\nlet's just kludge it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 13:33:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "út 23. 8. 2022 v 19:27 odesílatel Andres Freund <andres@anarazel.de> napsal:\n\n> Hi,\n>\n> On 2022-08-23 18:06:22 +0200, Pavel Stehule wrote:\n> > The errors that should be handled are related to json structure errors. I\n> > don't think so we have to handle all errors and all conversions.\n> >\n> > The JSON knows only three types - and these conversions can be written\n> > specially for this case - or we can write json io routines to be able to\n> > signal error\n> > without an exception.\n>\n> I think that's not true unfortunately. You can specify return types, and\n> composite types can be populated. Which essentially requires arbitrary\n> coercions.\n>\n\nPlease, can you send an example? Maybe we try to fix a feature that is not\nrequired by standard.\n\nRegards\n\nPavel\n\n\n\n\n>\n> Greetings,\n>\n> Andres Freund\n>",
"msg_date": "Tue, 23 Aug 2022 19:38:35 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> út 23. 8. 2022 v 19:27 odesílatel Andres Freund <andres@anarazel.de> napsal:\n>> I think that's not true unfortunately. You can specify return types, and\n>> composite types can be populated. Which essentially requires arbitrary\n>> coercions.\n\n> Please, can you send an example? Maybe we try to fix a feature that is not\n> required by standard.\n\nEven if it is required by spec, I'd have zero hesitation about tossing\nthat case overboard if that's what we need to do to get to a shippable\nfeature.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Aug 2022 13:45:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nOn 2022-08-23 Tu 13:24, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> I saw Andrew suggest that the controversial parts of the patchset may be \n>> severable from some of the new functionality, so I would like to see \n>> that proposal and if it is enough to overcome concerns.\n> It's an interesting suggestion. Do people have the cycles available\n> to make it happen in the next few days?\n>\n> \t\t\t\n\n\nI will make time although probably Nikita and/or Amit would be quicker\nthan I would be.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 23 Aug 2022 14:10:59 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 23.08.2022 20:38, Pavel Stehule wrote:\n> út 23. 8. 2022 v 19:27 odesílatel Andres Freund <andres@anarazel.de> \n> napsal:\n>\n> Hi,\n>\n> On 2022-08-23 18:06:22 +0200, Pavel Stehule wrote:\n> > The errors that should be handled are related to json structure\n> errors. I\n> > don't think so we have to handle all errors and all conversions.\n> >\n> > The JSON knows only three types - and these conversions can be\n> written\n> > specially for this case - or we can write json io routines to be\n> able to\n> > signal error\n> > without an exception.\n>\n> I think that's not true unfortunately. You can specify return\n> types, and\n> composite types can be populated. Which essentially requires arbitrary\n> coercions.\n>\n>\n> Please, can you send an example? Maybe we try to fix a feature that is \n> not required by standard.\n\n- Returning arbitrary types in JSON_VALUE using I/O coercion\nfrom JSON string (more precisely, text::arbitrary_type cast):\n\nSELECT JSON_QUERY(jsonb '\"1, 2\"', '$' RETURNING point);\n json_query\n------------\n (1,2)\n(1 row)\n\n\n- Returning composite and array types in JSON_QUERY, which is implemented\nreusing the code of our json[b]_populate_record[set]():\n\nSELECT JSON_QUERY(jsonb '[1, \"2\", null]', '$' RETURNING int[]);\n json_query\n------------\n {1,2,NULL}\n(1 row)\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 23 Aug 2022 21:16:24 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-23 13:33:42 -0400, Robert Haas wrote:\n> On Tue, Aug 23, 2022 at 1:23 PM Andres Freund <andres@anarazel.de> wrote:\n> > > But that's exactly what I'm complaining about. Catching an error that\n> > > unwound a bunch of stack frames where complicated things are happening\n> > > is fraught with peril. There's probably a bunch of errors that could\n> > > be thrown from somewhere in that code - out of memory being a great\n> > > example - that should not be caught.\n> >\n> > The code as is handles this to some degree. Only ERRCODE_DATA_EXCEPTION,\n> > ERRCODE_INTEGRITY_CONSTRAINT_VIOLATION are caught, the rest is immediately\n> > rethrown.\n> \n> AFAIK, Tom has rejected every previous effort to introduce this type\n> of coding into the tree rather forcefully. What makes it OK now?\n\nI didn't say it was! I don't like it much - I was just saying that it handles\nthat case to some degree.\n\n\n> > I'm not sure what the general alternative is though. Part of the feature is\n> > generating a composite type from json - there's just no way we can make all\n> > possible coercion pathways not error out. That'd necessitate requiring all\n> > builtin types and extensions types out there to provide input functions that\n> > don't throw on invalid input and all coercions to not throw either. That just\n> > seems unrealistic.\n> \n> Well, I think that having input functions report input that is not\n> valid for the data type in some way other than just chucking an error\n> as they'd also do for a missing TOAST chunk would be a pretty sensible\n> plan. I'd support doing that if we forced a hard compatibility break,\n> and I'd support that if we provided some way for old code to continue\n> running in degraded mode. I haven't thought too much about the\n> coercion case, but I suppose the issues are similar. 
What I don't\n> support is saying -- well, upgrading our infrastructure is hard, so\n> let's just kludge it.\n\nI guess the 'degraded mode' approach is kind of what I was trying to describe\nwith:\n\n> I think the best we could without subtransactions do perhaps is to add\n> metadata to pg_cast, pg_type telling us whether certain types of errors are\n> possible, and requiring ERROR ON ERROR when coercion paths are required that\n> don't have those options.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Aug 2022 11:26:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-23 13:28:50 -0400, Tom Lane wrote:\n> I agree with the upthread comments that we only need/want to catch\n> foreseeable incorrect-input errors, and that the way to make that\n> happen is to refactor the related type input functions, and that\n> a lot of the heavy lifting for that has been done already.\n\nI think it's a good direction to go in. What of the heavy lifting for that has\nbeen done already? I'd have guessed that the hard part is to add different,\noptional, type input, type coercion signatures, and then converting a lot of\ntypes to that?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Aug 2022 11:30:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 8/23/22 2:10 PM, Andrew Dunstan wrote:\r\n> \r\n> On 2022-08-23 Tu 13:24, Tom Lane wrote:\r\n>> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>>> I saw Andrew suggest that the controversial parts of the patchset may be\r\n>>> severable from some of the new functionality, so I would like to see\r\n>>> that proposal and if it is enough to overcome concerns.\r\n>> It's an interesting suggestion. Do people have the cycles available\r\n>> to make it happen in the next few days?\r\n>>\r\n> I will make time although probably Nikita and/or Amit would be quicker\r\n> than I would be.\r\n\r\nIf you all can, you have my +1 to try it and see what folks think.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 23 Aug 2022 15:31:06 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 8/23/22 1:26 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2022-08-23 13:18:49 -0400, Jonathan S. Katz wrote:\r\n>> Taking RMT hat off, if the outcome is \"revert\", I do want to ensure we don't\r\n>> lose momentum on getting this into v16. I know a lot of time and effort has\r\n>> gone into this featureset and it seems to be trending in the right\r\n>> direction. We have a mixed history on reverts in terms of if/when they are\r\n>> committed and I don't want to see that happen to these features. I do think\r\n>> this will remain a headline feature even if we delay it for v16.\r\n> \r\n> We could decide to revert this for 15, but leave it in tree for HEAD.\r\n\r\nIf it comes to that, I think that is a reasonable suggestion so long as \r\nwe're committed to making the requisite changes.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 23 Aug 2022 15:32:14 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nOn 2022-08-23 Tu 15:32, Jonathan S. Katz wrote:\n> On 8/23/22 1:26 PM, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2022-08-23 13:18:49 -0400, Jonathan S. Katz wrote:\n>>> Taking RMT hat off, if the outcome is \"revert\", I do want to ensure\n>>> we don't\n>>> lose momentum on getting this into v16. I know a lot of time and\n>>> effort has\n>>> gone into this featureset and it seems to be trending in the right\n>>> direction. We have a mixed history on reverts in terms of if/when\n>>> they are\n>>> committed and I don't want to see that happen to these features. I\n>>> do think\n>>> this will remain a headline feature even if we delay it for v16.\n>>\n>> We could decide to revert this for 15, but leave it in tree for HEAD.\n>\n> If it comes to that, I think that is a reasonable suggestion so long\n> as we're committed to making the requisite changes.\n>\n>\n\nOne good reason for this is that way we're not fighting against the node\nchanges, which complicate any reversion significantly.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 23 Aug 2022 15:45:01 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-23 13:28:50 -0400, Tom Lane wrote:\n>> I agree with the upthread comments that we only need/want to catch\n>> foreseeable incorrect-input errors, and that the way to make that\n>> happen is to refactor the related type input functions, and that\n>> a lot of the heavy lifting for that has been done already.\n\n> I think it's a good direction to go in. What of the heavy lifting for that has\n> been done already? I'd have guessed that the hard part is to add different,\n> optional, type input, type coercion signatures, and then converting a lot of\n> types to that?\n\nI was assuming that we would only bother to do this for a few core types.\nOf those, at least the datetime types were already done for previous\nJSON-related features. If we want extensibility, then as Robert said\nthere's going to have to be work done to create a common API that type\ninput functions can implement, which seems like a pretty heavy lift.\nWe could get it done for v16 if we start now, I imagine.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Aug 2022 15:54:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "út 23. 8. 2022 v 21:54 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-08-23 13:28:50 -0400, Tom Lane wrote:\n> >> I agree with the upthread comments that we only need/want to catch\n> >> foreseeable incorrect-input errors, and that the way to make that\n> >> happen is to refactor the related type input functions, and that\n> >> a lot of the heavy lifting for that has been done already.\n>\n> > I think it's a good direction to go in. What of the heavy lifting for\n> that has\n> > been done already? I'd have guessed that the hard part is to add\n> different,\n> > optional, type input, type coercion signatures, and then converting a\n> lot of\n> > types to that?\n>\n> I was assuming that we would only bother to do this for a few core types.\n> Of those, at least the datetime types were already done for previous\n> JSON-related features. If we want extensibility, then as Robert said\n> there's going to have to be work done to create a common API that type\n> input functions can implement, which seems like a pretty heavy lift.\n> We could get it done for v16 if we start now, I imagine.\n>\n\n+1\n\nPavel\n\n\n> regards, tom lane\n>\n>\n>",
"msg_date": "Tue, 23 Aug 2022 21:57:37 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-08-23 Tu 15:32, Jonathan S. Katz wrote:\n>> On 8/23/22 1:26 PM, Andres Freund wrote:\n>>> We could decide to revert this for 15, but leave it in tree for HEAD.\n\n>> If it comes to that, I think that is a reasonable suggestion so long\n>> as we're committed to making the requisite changes.\n\nI'm not particularly on board with that. In the first place, I'm\nunconvinced that very much of the current code will survive, and\nI don't want people contorting the rewrite in order to salvage\ncommitted code that would be better off junked. In the second\nplace, if we still don't have a shippable feature in a year, then\nundoing it again is going to be just that much harder.\n\n> One good reason for this is that way we're not fighting against the node\n> changes, which complicate any reversion significantly.\n\nHaving said that, I'm prepared to believe that a lot of the node\ninfrastructure won't change because it's dictated by the SQL-spec\ngrammar. So we could leave that part alone in HEAD; at worst\nit adds some dead code in backend/nodes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Aug 2022 16:00:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 23.08.2022 22:31, Jonathan S. Katz wrote:\n> On 8/23/22 2:10 PM, Andrew Dunstan wrote:\n>>\n>> On 2022-08-23 Tu 13:24, Tom Lane wrote:\n>>> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>>>> I saw Andrew suggest that the controversial parts of the patchset \n>>>> may be\n>>>> severable from some of the new functionality, so I would like to see\n>>>> that proposal and if it is enough to overcome concerns.\n>>> It's an interesting suggestion. Do people have the cycles available\n>>> to make it happen in the next few days?\n>>>\n>> I will make time although probably Nikita and/or Amit would be quicker\n>> than I would be.\n>\n> If you all can, you have my +1 to try it and see what folks think.\n\nI am ready to start hacking now, but it's already night in Moscow, so\nany result will be only tomorrow.\n\nHere is my plan:\n\n0. Take my last v7-0001 patch as a base. It already contains refactoring\nof JsonCoercion code. (Fix 0002 is not needed anymore, because it is for\njson[b] domains, which simply will not be supported.)\n\n1. Replace JSON_COERCION_VIA_EXPR in JsonCoercion with new\nJsonCoercionType(s) for hardcoded coercions.\n\n2. Disable all non-JSON-compatible output types in coerceJsonFuncExpr().\n\n3. Add missing safe type input functions for integers, numerics, and\nmaybe others.\n\n4. Implement hardcoded coercions using these functions in\nExecEvalJsonExprCoercion().\n\n5. Try to allow only constants (and also maybe column/parameter\nreferences) in JSON_VALUE's DEFAULT expressions. This should be enough\nfor the most of practical cases. 
JSON_QUERY even does not have DEFAULT\nexpressions -- it has only EMPTY ARRAY and EMPTY OBJECT, which can be\ntreated as simple JSON constants.\n\nBut it is possible to allow all other expressions in ERROR ON ERROR\ncase, and I don't know if it will be consistent enough to allow some\nexpressions in one case and deny in other.\n\nAnd there is another problem: expressions can be only checked for\nConst-ness only after expression simplification. AFAIU, at the\nparsing stage they look like 'string'::type. So, it's unclear if it\nis correct to check expressions in ExecInitExpr().\n\n6. Remove subtransactions.\n\n-- \nNikita Glukhov\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\nOn 23.08.2022 22:31, Jonathan S. Katz\n wrote:\n\nOn\n 8/23/22 2:10 PM, Andrew Dunstan wrote:\n \n\n\n On 2022-08-23 Tu 13:24, Tom Lane wrote:\n \n\"Jonathan S. Katz\"\n <jkatz@postgresql.org> writes:\n \nI saw Andrew suggest that the\n controversial parts of the patchset may be\n \n severable from some of the new functionality, so I would\n like to see\n \n that proposal and if it is enough to overcome concerns.\n \n\n It's an interesting suggestion. Do people have the cycles\n available\n \n to make it happen in the next few days?\n \n\n\n I will make time although probably Nikita and/or Amit would be\n quicker\n \n than I would be.\n \n\n\n If you all can, you have my +1 to try it and see what folks think.\n \n\nI am ready to start hacking now, but it's already night in Moscow, so\nany result will be only tomorrow.\n\nHere is my plan:\n\n0. Take my last v7-0001 patch as a base. It already contains refactoring \nof JsonCoercion code. (Fix 0002 is not needed anymore, because it is for \njson[b] domains, which simply will not be supported.)\n\n1. Replace JSON_COERCION_VIA_EXPR in JsonCoercion with new\nJsonCoercionType(s) for hardcoded coercions.\n\n2. Disable all non-JSON-compatible output types in coerceJsonFuncExpr().\n\n3. 
Add missing safe type input functions for integers, numerics, and\nmaybe others.\n\n4. Implement hardcoded coercions using these functions in\nExecEvalJsonExprCoercion().\n\n5. Try to allow only constants (and also maybe column/parameter\nreferences) in JSON_VALUE's DEFAULT expressions. This should be enough\nfor the most of practical cases. JSON_QUERY even does not have DEFAULT\nexpressions -- it has only EMPTY ARRAY and EMPTY OBJECT, which can be\ntreated as simple JSON constants. \n\nBut it is possible to allow all other expressions in ERROR ON ERROR \ncase, and I don't know if it will be consistent enough to allow some \nexpressions in one case and deny in other. \n\nAnd there is another problem: expressions can be only checked for \nConst-ness only after expression simplification. AFAIU, at the\nparsing stage they look like 'string'::type. So, it's unclear if it \nis correct to check expressions in ExecInitExpr().\n\n6. Remove subtransactions.\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 24 Aug 2022 00:29:00 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nOn 2022-08-23 Tu 17:29, Nikita Glukhov wrote:\n>\n>\n> On 23.08.2022 22:31, Jonathan S. Katz wrote:\n>> On 8/23/22 2:10 PM, Andrew Dunstan wrote:\n>>>\n>>> On 2022-08-23 Tu 13:24, Tom Lane wrote:\n>>>> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>>>>> I saw Andrew suggest that the controversial parts of the patchset\n>>>>> may be\n>>>>> severable from some of the new functionality, so I would like to see\n>>>>> that proposal and if it is enough to overcome concerns.\n>>>> It's an interesting suggestion. Do people have the cycles available\n>>>> to make it happen in the next few days?\n>>>>\n>>> I will make time although probably Nikita and/or Amit would be quicker\n>>> than I would be.\n>>\n>> If you all can, you have my +1 to try it and see what folks think.\n> I am ready to start hacking now, but it's already night in Moscow, so\n> any result will be only tomorrow.\n>\n> Here is my plan:\n>\n> 0. Take my last v7-0001 patch as a base. It already contains refactoring \n> of JsonCoercion code. (Fix 0002 is not needed anymore, because it is for \n> json[b] domains, which simply will not be supported.)\n>\n> 1. Replace JSON_COERCION_VIA_EXPR in JsonCoercion with new\n> JsonCoercionType(s) for hardcoded coercions.\n>\n> 2. Disable all non-JSON-compatible output types in coerceJsonFuncExpr().\n>\n> 3. Add missing safe type input functions for integers, numerics, and\n> maybe others.\n>\n> 4. Implement hardcoded coercions using these functions in\n> ExecEvalJsonExprCoercion().\n>\n> 5. Try to allow only constants (and also maybe column/parameter\n> references) in JSON_VALUE's DEFAULT expressions. This should be enough\n> for the most of practical cases. JSON_QUERY even does not have DEFAULT\n> expressions -- it has only EMPTY ARRAY and EMPTY OBJECT, which can be\n> treated as simple JSON constants. \n\n\n\ner, really? 
This is from the regression output:\n\n\nSELECT JSON_QUERY(jsonb '[]', '$[*]' DEFAULT '\"empty\"' ON EMPTY);\n json_query\n------------\n \"empty\"\n(1 row)\n\nSELECT JSON_QUERY(jsonb '[1,2]', '$[*]' DEFAULT '\"empty\"' ON ERROR);\n json_query\n------------\n \"empty\"\n(1 row)\n\n\n\n>\n> But it is possible to allow all other expressions in ERROR ON ERROR \n> case, and I don't know if it will be consistent enough to allow some \n> expressions in one case and deny in other. \n>\n> And there is another problem: expressions can be only checked for \n> Const-ness only after expression simplification. AFAIU, at the\n> parsing stage they look like 'string'::type. So, it's unclear if it \n> is correct to check expressions in ExecInitExpr().\n>\n> 6. Remove subtransactions.\n>\n\n\nSounds like a good plan, modulo the issues in item 5. I would rather\nlose some features temporarily than try to turn handsprings to make them\nwork and jeopardize the rest.\n\n\nI'll look forward to seeing your patch in the morning :-)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 23 Aug 2022 18:12:49 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi Nikita,\n\nOn Wed, Aug 24, 2022 at 6:29 AM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> Here is my plan:\n>\n> 0. Take my last v7-0001 patch as a base. It already contains refactoring\n> of JsonCoercion code. (Fix 0002 is not needed anymore, because it is for\n> json[b] domains, which simply will not be supported.)\n>\n> 1. Replace JSON_COERCION_VIA_EXPR in JsonCoercion with new\n> JsonCoercionType(s) for hardcoded coercions.\n>\n> 2. Disable all non-JSON-compatible output types in coerceJsonFuncExpr().\n>\n> 3. Add missing safe type input functions for integers, numerics, and\n> maybe others.\n>\n> 4. Implement hardcoded coercions using these functions in\n> ExecEvalJsonExprCoercion().\n>\n> 5. Try to allow only constants (and also maybe column/parameter\n> references) in JSON_VALUE's DEFAULT expressions. This should be enough\n> for the most of practical cases. JSON_QUERY even does not have DEFAULT\n> expressions -- it has only EMPTY ARRAY and EMPTY OBJECT, which can be\n> treated as simple JSON constants.\n>\n> But it is possible to allow all other expressions in ERROR ON ERROR\n> case, and I don't know if it will be consistent enough to allow some\n> expressions in one case and deny in other.\n>\n> And there is another problem: expressions can be only checked for\n> Const-ness only after expression simplification. AFAIU, at the\n> parsing stage they look like 'string'::type. 
So, it's unclear if it\n> is correct to check expressions in ExecInitExpr().\n\nIIUC, the idea is to remove the support for `DEFAULT expression` in\nthe following, no?\n\njson_value ( context_item, path_expression\n...\n[ { ERROR | NULL | DEFAULT expression } ON EMPTY ]\n[ { ERROR | NULL | DEFAULT expression } ON ERROR ])\n\njson_query ( context_item, path_expression\n...\n[ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\nON EMPTY ]\n[ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\nON ERROR ])\n\nIf that's the case, I'd imagine that `default_expr` in the following\nwill be NULL for now:\n\n/*\n * JsonBehavior -\n * representation of JSON ON ... BEHAVIOR clause\n */\ntypedef struct JsonBehavior\n{\n NodeTag type;\n JsonBehaviorType btype; /* behavior type */\n Node *default_expr; /* default expression, if any */\n} JsonBehavior;\n\nAnd if so, no expression left to check the Const-ness of?\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Aug 2022 11:55:37 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Wed, Aug 24, 2022 at 11:55 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Aug 24, 2022 at 6:29 AM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> > Here is my plan:\n> >\n> > 0. Take my last v7-0001 patch as a base. It already contains refactoring\n> > of JsonCoercion code. (Fix 0002 is not needed anymore, because it is for\n> > json[b] domains, which simply will not be supported.)\n> >\n> > 1. Replace JSON_COERCION_VIA_EXPR in JsonCoercion with new\n> > JsonCoercionType(s) for hardcoded coercions.\n> >\n> > 2. Disable all non-JSON-compatible output types in coerceJsonFuncExpr().\n> >\n> > 3. Add missing safe type input functions for integers, numerics, and\n> > maybe others.\n> >\n> > 4. Implement hardcoded coercions using these functions in\n> > ExecEvalJsonExprCoercion().\n> >\n> > 5. Try to allow only constants (and also maybe column/parameter\n> > references) in JSON_VALUE's DEFAULT expressions. This should be enough\n> > for the most of practical cases. JSON_QUERY even does not have DEFAULT\n> > expressions -- it has only EMPTY ARRAY and EMPTY OBJECT, which can be\n> > treated as simple JSON constants.\n> >\n> > But it is possible to allow all other expressions in ERROR ON ERROR\n> > case, and I don't know if it will be consistent enough to allow some\n> > expressions in one case and deny in other.\n> >\n> > And there is another problem: expressions can be only checked for\n> > Const-ness only after expression simplification. AFAIU, at the\n> > parsing stage they look like 'string'::type. 
So, it's unclear if it\n> > is correct to check expressions in ExecInitExpr().\n>\n> IIUC, the idea is to remove the support for `DEFAULT expression` in\n> the following, no?\n>\n> json_value ( context_item, path_expression\n> ...\n> [ { ERROR | NULL | DEFAULT expression } ON EMPTY ]\n> [ { ERROR | NULL | DEFAULT expression } ON ERROR ])\n>\n> json_query ( context_item, path_expression\n> ...\n> [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> ON EMPTY ]\n> [ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }\n> ON ERROR ])\n\nOr is the idea rather to restrict the set of data types we allow in `[\nRETURNING data_type ]`?\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Aug 2022 13:17:55 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\nOn 24.08.2022 01:12, Andrew Dunstan wrote:\n> On 2022-08-23 Tu 17:29, Nikita Glukhov wrote:\n>\n>> Here is my plan:\n>>\n>> 0. Take my last v7-0001 patch as a base. It already contains refactoring\n>> of JsonCoercion code. (Fix 0002 is not needed anymore, because it is for\n>> json[b] domains, which simply will not be supported.)\n>>\n>> 1. Replace JSON_COERCION_VIA_EXPR in JsonCoercion with new\n>> JsonCoercionType(s) for hardcoded coercions.\n\nJsonCoerion node were completely removed because they are not needed\nanymore (see p. 4).\n\n>> 2. Disable all non-JSON-compatible output types in coerceJsonFuncExpr().\n>>\n>> 3. Add missing safe type input functions for integers, numerics, and\n>> maybe others.\n\nWe need to write much more functions, than I expected. And I still didn't\nimplemented safe input functions for numeric and datetime types.\nI will start to do it tomorrow.\n\n>> 4. Implement hardcoded coercions using these functions in\n>> ExecEvalJsonExprCoercion().\n\nThat was done using simple `switch (returning_typid) { .. }`,\nwhich can be nested into `switch (jbv->type)`.\n\n>> 5. Try to allow only constants (and also maybe column/parameter\n>> references) in JSON_VALUE's DEFAULT expressions. This should be enough\n>> for the most of practical cases. JSON_QUERY even does not have DEFAULT\n>> expressions -- it has only EMPTY ARRAY and EMPTY OBJECT, which can be\n>> treated as simple JSON constants.\n\nI have not tried to implement this yet.\n\n\n\n> er, really? This is from the regression output:\n>\n>\n> SELECT JSON_QUERY(jsonb '[]', '$[*]' DEFAULT '\"empty\"' ON EMPTY);\n> json_query\n> ------------\n> \"empty\"\n> (1 row)\n>\n> SELECT JSON_QUERY(jsonb '[1,2]', '$[*]' DEFAULT '\"empty\"' ON ERROR);\n> json_query\n> ------------\n> \"empty\"\n> (1 row)\n>\nThis is another extension. 
SQL standard defines only\nEMPTY ARRAY and EMPTY OBJECT behavior for JSON_QUERY:\n\n<JSON query empty behavior> ::=\nERROR\n| NULL\n| EMPTY ARRAY\n| EMPTY OBJECT\n\n<JSON query error behavior> ::=\nERROR\n| NULL\n| EMPTY ARRAY\n| EMPTY OBJECT\n\n\n>> But it is possible to allow all other expressions in ERROR ON ERROR\n>> case, and I don't know if it will be consistent enough to allow some\n>> expressions in one case and deny in other.\n>>\n>> And there is another problem: expressions can be only checked for\n>> Const-ness only after expression simplification. AFAIU, at the\n>> parsing stage they look like 'string'::type. So, it's unclear if it\n>> is correct to check expressions in ExecInitExpr().\n>>\n>> 6. Remove subtransactions.\n\nThey were completely removed. Only DEFAULT expression needs to be fixed now.\n\n\n> Sounds like a good plan, modulo the issues in item 5. I would rather\n> lose some features temporarily than try to turn handsprings to make them\n> work and jeopardize the rest.\n>\n> I'll look forward to seeing your patch in the morning :-)\n>\nv8 - is a highly WIP patch, which I failed to finish today.\nEven some test cases fail now, and they simply show unfinished\nthings like casts to bytea (they can be simply removed) and missing\nsafe input functions.\n\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 25 Aug 2022 03:05:15 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nOn 2022-08-24 We 20:05, Nikita Glukhov wrote:\n>\n>\n> v8 - is a highly WIP patch, which I failed to finish today.\n> Even some test cases fail now, and they simply show unfinished\n> things like casts to bytea (they can be simply removed) and missing\n> safe input functions.\n>\n\nThanks for your work, please keep going.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 24 Aug 2022 20:16:31 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 8/24/22 8:16 PM, Andrew Dunstan wrote:\r\n> \r\n> On 2022-08-24 We 20:05, Nikita Glukhov wrote:\r\n>>\r\n>>\r\n>> v8 - is a highly WIP patch, which I failed to finish today.\r\n>> Even some test cases fail now, and they simply show unfinished\r\n>> things like casts to bytea (they can be simply removed) and missing\r\n>> safe input functions.\r\n>>\r\n> \r\n> Thanks for your work, please keep going.\r\n\r\nThanks for the efforts Nikita.\r\n\r\nWith RMT hat on, I want to point out that it's nearing the end of the \r\nweek, and if we are going to go forward with this path, we do need to \r\nreview soon. The Beta 4 release date is set to 9/8, and if we are going \r\nto commit or revert, we should leave enough time to ensure that we have \r\nenough time to review and the patches are able to successfully get \r\nthrough the buildfarm.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Fri, 26 Aug 2022 12:36:11 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nOn 2022-08-26 Fr 12:36, Jonathan S. Katz wrote:\n> On 8/24/22 8:16 PM, Andrew Dunstan wrote:\n>>\n>> On 2022-08-24 We 20:05, Nikita Glukhov wrote:\n>>>\n>>>\n>>> v8 - is a highly WIP patch, which I failed to finish today.\n>>> Even some test cases fail now, and they simply show unfinished\n>>> things like casts to bytea (they can be simply removed) and missing\n>>> safe input functions.\n>>>\n>>\n>> Thanks for your work, please keep going.\n>\n> Thanks for the efforts Nikita.\n>\n> With RMT hat on, I want to point out that it's nearing the end of the\n> week, and if we are going to go forward with this path, we do need to\n> review soon. The Beta 4 release date is set to 9/8, and if we are\n> going to commit or revert, we should leave enough time to ensure that\n> we have enough time to review and the patches are able to successfully\n> get through the buildfarm.\n>\n>\n\nAlso I'm going to be traveling and more or less offline from Sept 5th,\nso if I'm going to be involved we'd need a decision by Sept 1st or 2nd,\nI think, so time is running very short. Of course, others could do the\nrequired commit work either way a bit later, but not much.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 26 Aug 2022 15:25:03 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\nOn 26.08.2022 22:25, Andrew Dunstan wrote:\n> On 2022-08-24 We 20:05, Nikita Glukhov wrote:\n>> v8 - is a highly WIP patch, which I failed to finish today.\n>> Even some test cases fail now, and they simply show unfinished\n>> things like casts to bytea (they can be simply removed) and missing\n>> safe input functions.\n> Thanks for your work, please keep going.\n\nI have completed in v9 all the things I previously planned:\n\n - Added missing safe I/O and type conversion functions for\n datetime, float4, varchar, bpchar. This introduces a lot\n of boilerplate code for returning errors and also maybe\n adds some overhead.\n\n - Added JSON_QUERY coercion to UTF8 bytea using pg_convert_to().\n\n - Added immutability checks that were missed with elimination\n of coercion expressions.\n Coercions text::datetime, datetime1::datetime2 and even\n datetime::text for some datetime types are mutable.\n datetime::text can be made immutable by passing ISO date\n style into output functions (like in jsonpath).\n\n - Disabled non-Const expressions in DEFAULT ON EMPTY in non\n ERROR ON ERROR case. Non-constant expressions are tried to\n evaluate into Const directly inside transformExpr().\n Maybe it would be better to simply remove DEFAULT ON EMPTY.\n\n\nIt is possible to easily split this patch into several subpatches,\nI will do it if needed.\n\n-- \nNikita Glukhov\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 26 Aug 2022 23:11:14 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nOn 2022-08-26 Fr 16:11, Nikita Glukhov wrote:\n>\n> Hi,\n>\n> On 26.08.2022 22:25, Andrew Dunstan wrote:\n>> On 2022-08-24 We 20:05, Nikita Glukhov wrote:\n>>> v8 - is a highly WIP patch, which I failed to finish today.\n>>> Even some test cases fail now, and they simply show unfinished\n>>> things like casts to bytea (they can be simply removed) and missing\n>>> safe input functions.\n>> Thanks for your work, please keep going.\n> I have completed in v9 all the things I previously planned:\n>\n> - Added missing safe I/O and type conversion functions for \n> datetime, float4, varchar, bpchar. This introduces a lot \n> of boilerplate code for returning errors and also maybe \n> adds some overhead.\n>\n> - Added JSON_QUERY coercion to UTF8 bytea using pg_convert_to().\n>\n> - Added immutability checks that were missed with elimination \n> of coercion expressions. \n> Coercions text::datetime, datetime1::datetime2 and even \n> datetime::text for some datetime types are mutable.\n> datetime::text can be made immutable by passing ISO date \n> style into output functions (like in jsonpath).\n>\n> - Disabled non-Const expressions in DEFAULT ON EMPTY in non \n> ERROR ON ERROR case. Non-constant expressions are tried to \n> evaluate into Const directly inside transformExpr().\n> Maybe it would be better to simply remove DEFAULT ON EMPTY.\n\n\nYes, I think that's what I suggested upthread. I don't think DEFAULT ON\nEMPTY matters that much, and we can revisit it for release 16. If it's\nsimpler please do it that way.\n\n\n>\n>\n> It is possible to easily split this patch into several subpatches, \n> I will do it if needed.\n\n\nThanks, probably a good idea but I will start reviewing what you have\nnow. Andres and others please chime in if you can.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 26 Aug 2022 16:36:34 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 8/26/22 4:36 PM, Andrew Dunstan wrote:\r\n> \r\n> On 2022-08-26 Fr 16:11, Nikita Glukhov wrote:\r\n>>\r\n>> Hi,\r\n>>\r\n>> On 26.08.2022 22:25, Andrew Dunstan wrote:\r\n>>> On 2022-08-24 We 20:05, Nikita Glukhov wrote:\r\n>>>> v8 - is a highly WIP patch, which I failed to finish today.\r\n>>>> Even some test cases fail now, and they simply show unfinished\r\n>>>> things like casts to bytea (they can be simply removed) and missing\r\n>>>> safe input functions.\r\n>>> Thanks for your work, please keep going.\r\n>> I have completed in v9 all the things I previously planned:\r\n>>\r\n>> - Added missing safe I/O and type conversion functions for\r\n>> datetime, float4, varchar, bpchar. This introduces a lot\r\n>> of boilerplate code for returning errors and also maybe\r\n>> adds some overhead.\r\n>>\r\n>> - Added JSON_QUERY coercion to UTF8 bytea using pg_convert_to().\r\n>>\r\n>> - Added immutability checks that were missed with elimination\r\n>> of coercion expressions.\r\n>> Coercions text::datetime, datetime1::datetime2 and even\r\n>> datetime::text for some datetime types are mutable.\r\n>> datetime::text can be made immutable by passing ISO date\r\n>> style into output functions (like in jsonpath).\r\n>>\r\n>> - Disabled non-Const expressions in DEFAULT ON EMPTY in non\r\n>> ERROR ON ERROR case. Non-constant expressions are tried to\r\n>> evaluate into Const directly inside transformExpr().\r\n>> Maybe it would be better to simply remove DEFAULT ON EMPTY.\r\n> \r\n> \r\n> Yes, I think that's what I suggested upthread. I don't think DEFAULT ON\r\n> EMPTY matters that much, and we can revisit it for release 16. If it's\r\n> simpler please do it that way.\r\n> \r\n> \r\n>> It is possible to easily split this patch into several subpatches,\r\n>> I will do it if needed.\r\n> \r\n> \r\n> Thanks, probably a good idea but I will start reviewing what you have\r\n> now. 
Andres and others please chime in if you can.\r\n\r\nThanks Nikita!\r\n\r\nI looked through the tests to see if we would need any doc changes, e.g. \r\nin [1]. I noticed that this hint:\r\n\r\n\"HINT: Use ERROR ON ERROR clause or try to simplify expression into \r\nconstant-like form\"\r\n\r\nlacks a period on the end, which is convention.\r\n\r\nI don't know if the SQL/JSON standard calls out if domains should be \r\ncastable, but if it does, we should document in [1] that we are not \r\ncurrently supporting them as return types, so that we're only supporting \r\n\"constant-like\" expressions with examples.\r\n\r\nLooking forward to hearing other feedback.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://www.postgresql.org/docs/15/functions-json.html#FUNCTIONS-SQLJSON",
"msg_date": "Sat, 27 Aug 2022 12:30:43 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Sat, Aug 27, 2022 at 5:11 AM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> On 26.08.2022 22:25, Andrew Dunstan wrote:\n>\n> On 2022-08-24 We 20:05, Nikita Glukhov wrote:\n>\n> v8 - is a highly WIP patch, which I failed to finish today.\n> Even some test cases fail now, and they simply show unfinished\n> things like casts to bytea (they can be simply removed) and missing\n> safe input functions.\n>\n> Thanks for your work, please keep going.\n>\n> I have completed in v9 all the things I previously planned:\n>\n> - Added missing safe I/O and type conversion functions for\n> datetime, float4, varchar, bpchar. This introduces a lot\n> of boilerplate code for returning errors and also maybe\n> adds some overhead.\n\nDidn't know that we have done similar things in the past for jsonpath, as in:\n\ncommit 16d489b0fe058e527619f5e9d92fd7ca3c6c2994\nAuthor: Alexander Korotkov <akorotkov@postgresql.org>\nDate: Sat Mar 16 12:21:19 2019 +0300\n\n Numeric error suppression in jsonpath\n\nBTW, maybe the following hunk in boolin_opt_error() is unnecessary?\n\n- len = strlen(str);\n+ len -= str - in_str;\n\n> - Added JSON_QUERY coercion to UTF8 bytea using pg_convert_to().\n>\n> - Added immutability checks that were missed with elimination\n> of coercion expressions.\n> Coercions text::datetime, datetime1::datetime2 and even\n> datetime::text for some datetime types are mutable.\n> datetime::text can be made immutable by passing ISO date\n> style into output functions (like in jsonpath).\n>\n> - Disabled non-Const expressions in DEFAULT ON EMPTY in non\n> ERROR ON ERROR case. Non-constant expressions are tried to\n> evaluate into Const directly inside transformExpr().\n\nI am not sure if it's OK to eval_const_expressions() on a Query\nsub-expression during parse-analysis. 
IIUC, it is only correct to\napply it to after the rewriting phase.\n\n> Maybe it would be better to simply remove DEFAULT ON EMPTY.\n\nSo +1 to this for now.\n\n> It is possible to easily split this patch into several subpatches,\n> I will do it if needed.\n\nThat would be nice indeed.\n\nI'm wondering if you're going to change the PASSING values\ninitialization to add the steps into the parent JsonExpr's ExprState,\nlike the previous patch was doing?\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Aug 2022 21:56:40 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 8/29/22 8:56 AM, Amit Langote wrote:\r\n> On Sat, Aug 27, 2022 at 5:11 AM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\r\n\r\n> I am not sure if it's OK to eval_const_expressions() on a Query\r\n> sub-expression during parse-analysis. IIUC, it is only correct to\r\n> apply it to after the rewriting phase.\r\n> \r\n>> Maybe it would be better to simply remove DEFAULT ON EMPTY.\r\n> \r\n> So +1 to this for now.\r\n\r\n+1, if this simplifies the patch and makes it acceptable for v15\r\n\r\n>> It is possible to easily split this patch into several subpatches,\r\n>> I will do it if needed.\r\n> \r\n> That would be nice indeed.\r\n\r\nWith RMT hat on, the RMT has its weekly meetings on Tuesdays. Based on \r\nthe timing of the Beta 4 commit freeze[1] and how both \r\nincluding/reverting are nontrivial operations (e.g. we should ensure \r\nwe're confident in both and that they pass through the buildfarm), we \r\nare going to have to make a decision on how to proceed at the next meeting.\r\n\r\nCan folks please chime in on what they think of the current patchset and \r\nif this is acceptable for v15?\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/9d251aec-cea2-bc1a-5ed8-46ef0bcf6c69@postgresql.org",
"msg_date": "Mon, 29 Aug 2022 09:35:41 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nOn 2022-08-29 Mo 09:35, Jonathan S. Katz wrote:\n> On 8/29/22 8:56 AM, Amit Langote wrote:\n>> On Sat, Aug 27, 2022 at 5:11 AM Nikita Glukhov\n>> <n.gluhov@postgrespro.ru> wrote:\n>\n>> I am not sure if it's OK to eval_const_expressions() on a Query\n>> sub-expression during parse-analysis. IIUC, it is only correct to\n>> apply it to after the rewriting phase.\n>>\n>>> Maybe it would be better to simply remove DEFAULT ON EMPTY.\n>>\n>> So +1 to this for now.\n>\n> +1, if this simplifies the patch and makes it acceptable for v15\n>\n>>> It is possible to easily split this patch into several subpatches,\n>>> I will do it if needed.\n>>\n>> That would be nice indeed.\n>\n> With RMT hat on, the RMT has its weekly meetings on Tuesdays. Based on\n> the timing of the Beta 4 commit freeze[1] and how both\n> including/reverting are nontrivial operations (e.g. we should ensure\n> we're confident in both and that they pass through the buildfarm), we\n> are going to have to make a decision on how to proceed at the next\n> meeting.\n>\n> Can folks please chime in on what they think of the current patchset\n> and if this is acceptable for v15?\n>\n>\n\nI think at a pinch we could probably go with it, but it's a close call.\nI think it deals with the most pressing issues that have been raised. If\npeople are still worried I think it would be trivial to add in calls\nthat error out of the DEFAULT clauses are used at all.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 29 Aug 2022 17:48:26 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\nOn 29.08.2022 15:56, Amit Langote wrote:\n> On Sat, Aug 27, 2022 at 5:11 AM Nikita Glukhov<n.gluhov@postgrespro.ru> wrote:\n>> On 26.08.2022 22:25, Andrew Dunstan wrote:\n>>\n>> On 2022-08-24 We 20:05, Nikita Glukhov wrote:\n>>\n>> I have completed in v9 all the things I previously planned:\n>>\n>> - Added missing safe I/O and type conversion functions for\n>> datetime, float4, varchar, bpchar. This introduces a lot\n>> of boilerplate code for returning errors and also maybe\n>> adds some overhead.\n> Didn't know that we have done similar things in the past for jsonpath, as in:\n>\n> commit 16d489b0fe058e527619f5e9d92fd7ca3c6c2994\n> Author: Alexander Korotkov<akorotkov@postgresql.org>\n> Date: Sat Mar 16 12:21:19 2019 +0300\n>\n> Numeric error suppression in jsonpath\n\nThis was necessary for handling errors in arithmetic operations.\n\n\n> BTW, maybe the following hunk in boolin_opt_error() is unnecessary?\n>\n> - len = strlen(str);\n> + len -= str - in_str;\n>\nThis is really not necessary, but helps to avoid extra strlen() call.\nI have replaced it with more intuitive\n\n+ {\n\n str++;\n+ len--;\n+ }\n \n- len = strlen(str);\n\n>> - Added JSON_QUERY coercion to UTF8 bytea using pg_convert_to().\n>>\n>> - Added immutability checks that were missed with elimination\n>> of coercion expressions.\n>> Coercions text::datetime, datetime1::datetime2 and even\n>> datetime::text for some datetime types are mutable.\n>> datetime::text can be made immutable by passing ISO date\n>> style into output functions (like in jsonpath).\n>>\n>> - Disabled non-Const expressions in DEFAULT ON EMPTY in non\n>> ERROR ON ERROR case. Non-constant expressions are tried to\n>> evaluate into Const directly inside transformExpr().\n> I am not sure if it's OK to eval_const_expressions() on a Query\n> sub-expression during parse-analysis. IIUC, it is only correct to\n> apply it to after the rewriting phase.\n\nI also was not sure. 
Maybe it can be moved to rewriting phase or\neven to execution phase.\n\n\n>> Maybe it would be better to simply remove DEFAULT ON EMPTY.\n> So +1 to this for now.\n\nSee last patch #9.\n\n\n>> It is possible to easily split this patch into several subpatches,\n>> I will do it if needed.\n> That would be nice indeed.\n\nI have extracted patches #1-6 with numerous safe input and type conversion\nfunctions.\n\n\n> I'm wondering if you're going to change the PASSING values\n> initialization to add the steps into the parent JsonExpr's ExprState,\n> like the previous patch was doing?\n\nI forget to incorporate your changes for subsidary ExprStates elimination.\nSee patch #8.\n\n-- \nNikita Glukhov\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 30 Aug 2022 00:49:08 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 6:49 AM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> On 29.08.2022 15:56, Amit Langote wrote:\n> On Sat, Aug 27, 2022 at 5:11 AM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> I have completed in v9 all the things I previously planned:\n>\n> BTW, maybe the following hunk in boolin_opt_error() is unnecessary?\n>\n> - len = strlen(str);\n> + len -= str - in_str;\n>\n> This is really not necessary, but helps to avoid extra strlen() call.\n> I have replaced it with more intuitive\n>\n> + {\n>\n> str++;\n> + len--;\n> + }\n>\n> - len = strlen(str);\n\n+1\n\n> - Added JSON_QUERY coercion to UTF8 bytea using pg_convert_to().\n>\n> - Added immutability checks that were missed with elimination\n> of coercion expressions.\n> Coercions text::datetime, datetime1::datetime2 and even\n> datetime::text for some datetime types are mutable.\n> datetime::text can be made immutable by passing ISO date\n> style into output functions (like in jsonpath).\n>\n> - Disabled non-Const expressions in DEFAULT ON EMPTY in non\n> ERROR ON ERROR case. Non-constant expressions are tried to\n> evaluate into Const directly inside transformExpr().\n>\n> I am not sure if it's OK to eval_const_expressions() on a Query\n> sub-expression during parse-analysis. IIUC, it is only correct to\n> apply it to after the rewriting phase.\n>\n> I also was not sure. 
Maybe it can be moved to rewriting phase or\n> even to execution phase.\n\nI suppose we wouldn't need to bother with doing this when we\neventually move to supporting the DEFAULT expressions.\n\n> Maybe it would be better to simply remove DEFAULT ON EMPTY.\n>\n> So +1 to this for now.\n>\n> See last patch #9.\n>\n>\n> It is possible to easily split this patch into several subpatches,\n> I will do it if needed.\n>\n> That would be nice indeed.\n>\n> I have extracted patches #1-6 with numerous safe input and type conversion\n> functions.\n>\n>\n> I'm wondering if you're going to change the PASSING values\n> initialization to add the steps into the parent JsonExpr's ExprState,\n> like the previous patch was doing?\n>\n> I forget to incorporate your changes for subsidary ExprStates elimination.\n> See patch #8.\n\nThanks. Here are some comments.\n\nFirst of all, regarding 0009, my understanding was that we should\ndisallow DEFAULT expression ON ERROR too for now, so something like\nthe following does not occur:\n\nSELECT JSON_VALUE(jsonb '\"err\"', '$' RETURNING numeric DEFAULT ('{\"'\n|| -1+a || '\"}')::text ON ERROR) from foo;\nERROR: invalid input syntax for type numeric: \"{\"0\"}\"\n\nPatches 0001-0006:\n\nYeah, these add the overhead of an extra function call (typin() ->\ntypin_opt_error()) in possibly very common paths. Other than\nrefactoring *all* places that call typin() to use the new API, the\nonly other option seems to be to leave the typin() functions alone and\nduplicate their code in typin_opt_error() versions for all the types\nthat this patch cares about. 
Though maybe, that's not necessarily a\nbetter compromise than accepting the extra function call overhead.\n\nPatch 0007:\n\n+\n+ /* Override default coercion in OMIT QUOTES case */\n+ if (ExecJsonQueryNeedsIOCoercion(jexpr, res, *resnull))\n+ {\n+ char *str = JsonbUnquote(DatumGetJsonbP(res));\n...\n+ else if (ret_typid == VARCHAROID || ret_typid == BPCHAROID ||\n+ ret_typid == BYTEAOID)\n+ {\n+ Jsonb *jb = DatumGetJsonbP(res);\n+ char *str = JsonbToCString(NULL, &jb->root, VARSIZE(jb));\n+\n+ return ExecJsonStringCoercion(str, strlen(str),\nret_typid, ret_typmod);\n+ }\n\nI think it might be better to create ExecJsonQueryCoercion() similar\nto ExecJsonValueCoercion() and put the above block in that function\nrather than inlining it in ExecEvalJsonExprInternal().\n\n+ ExecJsonStringCoercion(const char *str, int32 len, Oid typid, int32 typmod)\n\nI'd suggest renaming this one to ExecJsonConvertCStringToText().\n\n+ ExecJsonCoercionToText(PGFunction outfunc, Datum value, Oid typid,\nint32 typmod)\n+ ExecJsonDatetimeCoercion(Datum val, Oid val_typid, Oid typid, int32 typmod,\n+ ExecJsonBoolCoercion(bool val, Oid typid, int32 typmod, Datum *res)\n\nAnd also rename these to sound like verbs:\n\nExecJsonCoerceToText\nExecJsonCoerceDatetime[ToType]\nExecJsonCoerceBool[ToType]\n\n+ /*\n+ * XXX coercion to text is done using output functions, and they\n+ * are mutable for non-time[tz] types due to using of DateStyle.\n+ * We can pass USE_ISO_DATES, which is used inside jsonpath, to\n+ * make these coercions and JSON_VALUE(RETURNING text) immutable.\n+ *\n+ * XXX Also timestamp[tz] output functions can throw \"out of range\"\n+ * error, but this error seem to be not possible.\n+ */\n\nAre we planning to fix these before committing?\n\n+static Datum\n+JsonbPGetTextDatum(Jsonb *jb)\n\nMaybe static inline?\n\n- coercion = &coercions->composite;\n- res = JsonbPGetDatum(JsonbValueToJsonb(item));\n+ Assert(0); /* non-scalars must be rejected by JsonPathValue() */\n\nI didn't 
notice any changes to JsonPathValue(). Is the new comment\nreferring to an existing behavior of JsonPathValue() or something that\nmust be done by the patch?\n\n@@ -411,6 +411,26 @@ contain_mutable_functions_walker(Node *node, void *context)\n {\n JsonExpr *jexpr = castNode(JsonExpr, node);\n Const *cnst;\n+ bool returns_datetime;\n+\n+ /*\n+ * Input fuctions for datetime types are stable. They can be\n+ * called in JSON_VALUE(), when the resulting SQL/JSON is a\n+ * string.\n+ */\n...\n\nSorry if you've mentioned it before, but are these hunks changing\ncontain_mutable_functions_walker() fixing a bug? That is, did the\noriginal SQL/JSON patch miss doing this?\n\n+ Oid collation; /* OID of collation, or InvalidOid if none */\n\nI think the comment should rather say: /* Collation of <what>, ... */\n\n+\n+bool\n+expr_can_throw_errors(Node *expr)\n+{\n+ if (!expr)\n+ return false;\n+\n+ if (IsA(expr, Const))\n+ return false;\n+\n+ /* TODO consider more cases */\n+ return true;\n+}\n\n+extern bool expr_can_throw_errors(Node *expr);\n+\n\nNot used anymore.\n\nPatch 0008:\n\nThanks for re-introducing this.\n\n+bool\n+ExecEvalJsonExprSkip(ExprState *state, ExprEvalStep *op)\n+{\n+ JsonExprState *jsestate = op->d.jsonexpr_skip.jsestate;\n+\n+ /*\n+ * Skip if either of the input expressions has turned out to be\n+ * NULL, though do execute domain checks for NULLs, which are\n+ * handled by the coercion step.\n+ */\n\nI think the part starting with \", though\" is no longer necessary.\n\n+ * Return value:\n+ * 1 - Ok, jump to the end of JsonExpr\n+ * 0 - empty result, need to execute DEFAULT ON EMPTY expression\n+ * -1 - error occured, need to execute DEFAULT ON ERROR expression\n\n...need to execute ON EMPTY/ERROR behavior\n\n+ return 0; /* jump to ON EMPTY expression */\n...\n+ return -1; /* jump to ON ERROR expression */\n\nLikewise:\n\n/* jump to handle ON EMPTY/ERROR behavior */\n\n+ * Jump to coercion step if true was returned,\n+ * which signifies skipping of JSON 
path evaluation,\n...\n\nJump to \"end\" if true was returned.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Aug 2022 17:09:44 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 2022-Aug-30, Amit Langote wrote:\n\n> Patches 0001-0006:\n> \n> Yeah, these add the overhead of an extra function call (typin() ->\n> typin_opt_error()) in possibly very common paths. Other than\n> refactoring *all* places that call typin() to use the new API, the\n> only other option seems to be to leave the typin() functions alone and\n> duplicate their code in typin_opt_error() versions for all the types\n> that this patch cares about. Though maybe, that's not necessarily a\n> better compromise than accepting the extra function call overhead.\n\nI think another possibility is to create a static inline function in the\ncorresponding .c module (say boolin_impl() in bool.c), which is called\nby both the opt_error variant as well as the regular one. This would\navoid the duplicate code as well as the added function-call overhead.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nSyntax error: function hell() needs an argument.\nPlease choose what hell you want to involve.\n\n\n",
"msg_date": "Tue, 30 Aug 2022 11:20:08 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 6:19 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Aug-30, Amit Langote wrote:\n>\n> > Patches 0001-0006:\n> >\n> > Yeah, these add the overhead of an extra function call (typin() ->\n> > typin_opt_error()) in possibly very common paths. Other than\n> > refactoring *all* places that call typin() to use the new API, the\n> > only other option seems to be to leave the typin() functions alone and\n> > duplicate their code in typin_opt_error() versions for all the types\n> > that this patch cares about. Though maybe, that's not necessarily a\n> > better compromise than accepting the extra function call overhead.\n>\n> I think another possibility is to create a static inline function in the\n> corresponding .c module (say boolin_impl() in bool.c), which is called\n> by both the opt_error variant as well as the regular one. This would\n> avoid the duplicate code as well as the added function-call overhead.\n\n+1\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Aug 2022 19:29:26 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nOn 2022-08-30 Tu 06:29, Amit Langote wrote:\n> On Tue, Aug 30, 2022 at 6:19 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> On 2022-Aug-30, Amit Langote wrote:\n>>\n>>> Patches 0001-0006:\n>>>\n>>> Yeah, these add the overhead of an extra function call (typin() ->\n>>> typin_opt_error()) in possibly very common paths. Other than\n>>> refactoring *all* places that call typin() to use the new API, the\n>>> only other option seems to be to leave the typin() functions alone and\n>>> duplicate their code in typin_opt_error() versions for all the types\n>>> that this patch cares about. Though maybe, that's not necessarily a\n>>> better compromise than accepting the extra function call overhead.\n>> I think another possibility is to create a static inline function in the\n>> corresponding .c module (say boolin_impl() in bool.c), which is called\n>> by both the opt_error variant as well as the regular one. This would\n>> avoid the duplicate code as well as the added function-call overhead.\n> +1\n>\n\n\nMakes plenty of sense, I'll try to come up with replacements for these\nforthwith.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 30 Aug 2022 09:16:32 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 8/30/22 9:16 AM, Andrew Dunstan wrote:\r\n> \r\n> On 2022-08-30 Tu 06:29, Amit Langote wrote:\r\n>> On Tue, Aug 30, 2022 at 6:19 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\r\n>>> On 2022-Aug-30, Amit Langote wrote:\r\n>>>\r\n>>>> Patches 0001-0006:\r\n>>>>\r\n>>>> Yeah, these add the overhead of an extra function call (typin() ->\r\n>>>> typin_opt_error()) in possibly very common paths. Other than\r\n>>>> refactoring *all* places that call typin() to use the new API, the\r\n>>>> only other option seems to be to leave the typin() functions alone and\r\n>>>> duplicate their code in typin_opt_error() versions for all the types\r\n>>>> that this patch cares about. Though maybe, that's not necessarily a\r\n>>>> better compromise than accepting the extra function call overhead.\r\n>>> I think another possibility is to create a static inline function in the\r\n>>> corresponding .c module (say boolin_impl() in bool.c), which is called\r\n>>> by both the opt_error variant as well as the regular one. This would\r\n>>> avoid the duplicate code as well as the added function-call overhead.\r\n>> +1\r\n>>\r\n> \r\n> Makes plenty of sense, I'll try to come up with replacements for these\r\n> forthwith.\r\n\r\nThe RMT had its weekly meeting today to discuss open items. As stated \r\nlast week, to keep the v15 release within a late Sept / early Oct \r\ntimeframe, we need to make a decision about the inclusion of SQL/JSON \r\nthis week.\r\n\r\nFirst, we appreciate all of the effort and work that has gone into \r\nincorporating community feedback into the patches. We did note that \r\nfolks working on this made a lot of progress over the past week.\r\n\r\nThe RMT still has a few concerns, summarized as:\r\n\r\n1. There is not yet consensus on the current patch proposals as we \r\napproach the end of the major release cycle\r\n2. 
There is a lack of general feedback from folks who raised concerns \r\nabout the implementation\r\n\r\nThe RMT is still inclined to revert, but will give folks until Sep 1 \r\n0:00 AoE[1] to reach consensus on if SQL/JSON can be included in v15. \r\nThis matches up to Andrew's availability timeline for a revert, and \r\ngives enough time to get through the buildfarm prior to the Beta 4 \r\nrelease[2].\r\n\r\nAfter the deadline, if there is no consensus on how to proceed, the RMT \r\nwill request that the patches are reverted.\r\n\r\nWhile noting that this RMT has no decision making over v16, in the event \r\nof a revert we do hope this recent work can be the basis of the feature \r\nin v16.\r\n\r\nAgain, we appreciate the efforts that have gone into addressing the \r\ncommunity feedback.\r\n\r\nSincerely,\r\n\r\nJohn, Jonathan, Michael\r\n\r\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth\r\n[2] \r\nhttps://www.postgresql.org/message-id/9d251aec-cea2-bc1a-5ed8-46ef0bcf6c69@postgresql.org",
"msg_date": "Tue, 30 Aug 2022 10:33:45 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 30.08.2022 11:09, Amit Langote wrote:\n>> - Added JSON_QUERY coercion to UTF8 bytea using pg_convert_to().\n>>\n>> - Added immutability checks that were missed with elimination\n>> of coercion expressions.\n>> Coercions text::datetime, datetime1::datetime2 and even\n>> datetime::text for some datetime types are mutable.\n>> datetime::text can be made immutable by passing ISO date\n>> style into output functions (like in jsonpath).\n>>\n>> - Disabled non-Const expressions in DEFAULT ON EMPTY in non\n>> ERROR ON ERROR case. Non-constant expressions are tried to\n>> evaluate into Const directly inside transformExpr().\n>>\n>> I am not sure if it's OK to eval_const_expressions() on a Query\n>> sub-expression during parse-analysis. IIUC, it is only correct to\n>> apply it to after the rewriting phase.\n>>\n>> I also was not sure. Maybe it can be moved to rewriting phase or\n>> even to execution phase.\n> I suppose we wouldn't need to bother with doing this when we\n> eventually move to supporting the DEFAULT expressions.\n>> Maybe it would be better to simply remove DEFAULT ON EMPTY.\n>>\n>> So +1 to this for now.\n>>\n>> See last patch #9.\n>>\n>>\n>> It is possible to easily split this patch into several subpatches,\n>> I will do it if needed.\n>>\n>> That would be nice indeed.\n>>\n>> I have extracted patches #1-6 with numerous safe input and type conversion\n>> functions.\n>>\n>>\n>> I'm wondering if you're going to change the PASSING values\n>> initialization to add the steps into the parent JsonExpr's ExprState,\n>> like the previous patch was doing?\n>>\n>> I forget to incorporate your changes for subsidary ExprStates elimination.\n>> See patch #8.\n> Thanks. 
Here are some comments.\n>\n> First of all, regarding 0009, my understanding was that we should\n> disallow DEFAULT expression ON ERROR too for now, so something like\n> the following does not occur:\n>\n> SELECT JSON_VALUE(jsonb '\"err\"', '$' RETURNING numeric DEFAULT ('{\"'\n> || -1+a || '\"}')::text ON ERROR) from foo;\n> ERROR: invalid input syntax for type numeric: \"{\"0\"}\"\n\nPersonally, I don't like complete removal of DEFAULT behaviors, but\nI've done it in patch #10 (JsonBehavior node removed, grammar fixed).\n\n> Patches 0001-0006:\n\nOn 30.08.2022 13:29, Amit Langote wrote:\n> On Tue, Aug 30, 2022 at 6:19 PM Alvaro Herrera<alvherre@alvh.no-ip.org> wrote:\n>> On 2022-Aug-30, Amit Langote wrote:\n>>\n>>> Patches 0001-0006:\n>>>\n>>> Yeah, these add the overhead of an extra function call (typin() ->\n>>> typin_opt_error()) in possibly very common paths. Other than\n>>> refactoring *all* places that call typin() to use the new API, the\n>>> only other option seems to be to leave the typin() functions alone and\n>>> duplicate their code in typin_opt_error() versions for all the types\n>>> that this patch cares about. Though maybe, that's not necessarily a\n>>> better compromise than accepting the extra function call overhead.\n>> I think another possibility is to create a static inline function in the\n>> corresponding .c module (say boolin_impl() in bool.c), which is called\n>> by both the opt_error variant as well as the regular one. 
This would\n>> avoid the duplicate code as well as the added function-call overhead.\n> +1\n\nI always thought about such internal inline functions, I 've added them in v10.\n\n> Patch 0007:\n>\n> +\n> + /* Override default coercion in OMIT QUOTES case */\n> + if (ExecJsonQueryNeedsIOCoercion(jexpr, res, *resnull))\n> + {\n> + char *str = JsonbUnquote(DatumGetJsonbP(res));\n> ...\n> + else if (ret_typid == VARCHAROID || ret_typid == BPCHAROID ||\n> + ret_typid == BYTEAOID)\n> + {\n> + Jsonb *jb = DatumGetJsonbP(res);\n> + char *str = JsonbToCString(NULL, &jb->root, VARSIZE(jb));\n> +\n> + return ExecJsonStringCoercion(str, strlen(str),\n> ret_typid, ret_typmod);\n> + }\n>\n> I think it might be better to create ExecJsonQueryCoercion() similar\n> to ExecJsonValueCoercion() and put the above block in that function\n> rather than inlining it in ExecEvalJsonExprInternal().\n\nExtracted ExecJsonQueryCoercion().\n\n> + ExecJsonStringCoercion(const char *str, int32 len, Oid typid, int32 typmod)\n>\n> I'd suggest renaming this one to ExecJsonConvertCStringToText().\n>\n> + ExecJsonCoercionToText(PGFunction outfunc, Datum value, Oid typid,\n> int32 typmod)\n> + ExecJsonDatetimeCoercion(Datum val, Oid val_typid, Oid typid, int32 typmod,\n> + ExecJsonBoolCoercion(bool val, Oid typid, int32 typmod, Datum *res)\n>\n> And also rename these to sound like verbs:\n>\n> ExecJsonCoerceToText\n> ExecJsonCoerceDatetime[ToType]\n> ExecJsonCoerceBool[ToType]\n\nFixed.\n\n> + /*\n> + * XXX coercion to text is done using output functions, and they\n> + * are mutable for non-time[tz] types due to using of DateStyle.\n> + * We can pass USE_ISO_DATES, which is used inside jsonpath, to\n> + * make these coercions and JSON_VALUE(RETURNING text) immutable.\n> + *\n> + * XXX Also timestamp[tz] output functions can throw \"out of range\"\n> + * error, but this error seem to be not possible.\n> + */\n>\n> Are we planning to fix these before committing?\n\nI don't know, but the first issue is 
critical for building functional indexes\non JSON_VALUE().\n\n\n> +static Datum\n> +JsonbPGetTextDatum(Jsonb *jb)\n>\n> Maybe static inline?\n\nFixed.\n\n> - coercion = &coercions->composite;\n> - res = JsonbPGetDatum(JsonbValueToJsonb(item));\n> + Assert(0); /* non-scalars must be rejected by JsonPathValue() */\n>\n> I didn't notice any changes to JsonPathValue(). Is the new comment\n> referring to an existing behavior of JsonPathValue() or something that\n> must be done by the patch?\n\nJsonPathValue() has a check for non-scalars items, this is simply a new comment.\n\n\n> @@ -411,6 +411,26 @@ contain_mutable_functions_walker(Node *node, void *context)\n> {\n> JsonExpr *jexpr = castNode(JsonExpr, node);\n> Const *cnst;\n> + bool returns_datetime;\n> +\n> + /*\n> + * Input fuctions for datetime types are stable. They can be\n> + * called in JSON_VALUE(), when the resulting SQL/JSON is a\n> + * string.\n> + */\n> ...\n>\n>\n> Sorry if you've mentioned it before, but are these hunks changing\n> contain_mutable_functions_walker() fixing a bug? That is, did the\n> original SQL/JSON patch miss doing this?\n\nIn the original patch there were checks for mutability of expressions contained\nin JsonCoercion nodes. After their removal, we need to use hardcoded checks.\n\n> + Oid collation; /* OID of collation, or InvalidOid if none */\n>\n> I think the comment should rather say: /* Collation of <what>, ... 
*/\n\nFixed.\n\n> +\n> +bool\n> +expr_can_throw_errors(Node *expr)\n> +{\n> + if (!expr)\n> + return false;\n> +\n> + if (IsA(expr, Const))\n> + return false;\n> +\n> + /* TODO consider more cases */\n> + return true;\n> +}\n>\n> +extern bool expr_can_throw_errors(Node *expr);\n> +\n>\n> Not used anymore.\n\nexpr_can_throw_errors() removed.\n\n\n> Patch 0008:\n>\n> Thanks for re-introducing this.\n>\n> +bool\n> +ExecEvalJsonExprSkip(ExprState *state, ExprEvalStep *op)\n> +{\n> + JsonExprState *jsestate = op->d.jsonexpr_skip.jsestate;\n> +\n> + /*\n> + * Skip if either of the input expressions has turned out to be\n> + * NULL, though do execute domain checks for NULLs, which are\n> + * handled by the coercion step.\n> + */\n>\n> I think the part starting with \", though\" is no longer necessary.\n\nFixed.\n\n> + * Return value:\n> + * 1 - Ok, jump to the end of JsonExpr\n> + * 0 - empty result, need to execute DEFAULT ON EMPTY expression\n> + * -1 - error occured, need to execute DEFAULT ON ERROR expression\n>\n> ...need to execute ON EMPTY/ERROR behavior\n>\n> + return 0; /* jump to ON EMPTY expression */\n> ...\n> + return -1; /* jump to ON ERROR expression */\n>\n> Likewise:\n>\n> /* jump to handle ON EMPTY/ERROR behavior */\n>\n> + * Jump to coercion step if true was returned,\n> + * which signifies skipping of JSON path evaluation,\n> ...\n>\n> Jump to \"end\" if true was returned.\n\nFixed, but I leaved \"expression\" instead of \"behavior\" because\nthese jumps are needed only for execution of DEFAULT expressions.\n\n\n\n-- \nNikita Glukhov\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 31 Aug 2022 00:25:01 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 6:25 AM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> On 30.08.2022 11:09, Amit Langote wrote:\n> First of all, regarding 0009, my understanding was that we should\n> disallow DEFAULT expression ON ERROR too for now, so something like\n> the following does not occur:\n>\n> SELECT JSON_VALUE(jsonb '\"err\"', '$' RETURNING numeric DEFAULT ('{\"'\n> || -1+a || '\"}')::text ON ERROR) from foo;\n> ERROR: invalid input syntax for type numeric: \"{\"0\"}\"\n>\n> Personally, I don't like complete removal of DEFAULT behaviors, but\n> I've done it in patch #10 (JsonBehavior node removed, grammar fixed).\n\nTo clarify, I had meant to ask if the standard specifies how to handle\nthe errors of evaluating the DEFAULT ON ERROR expressions themselves?\nMy understanding is that the sub-transaction that is being removed\nwould have caught and suppressed the above error too, so along with\nremoving the sub-transactions, we should also remove anything that\nmight cause such errors.\n\n> On 30.08.2022 13:29, Amit Langote wrote:\n> On Tue, Aug 30, 2022 at 6:19 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Aug-30, Amit Langote wrote:\n>\n> Patches 0001-0006:\n>\n> Yeah, these add the overhead of an extra function call (typin() ->\n> typin_opt_error()) in possibly very common paths. Other than\n> refactoring *all* places that call typin() to use the new API, the\n> only other option seems to be to leave the typin() functions alone and\n> duplicate their code in typin_opt_error() versions for all the types\n> that this patch cares about. Though maybe, that's not necessarily a\n> better compromise than accepting the extra function call overhead.\n>\n> I think another possibility is to create a static inline function in the\n> corresponding .c module (say boolin_impl() in bool.c), which is called\n> by both the opt_error variant as well as the regular one. 
This would\n> avoid the duplicate code as well as the added function-call overhead.\n>\n> +1\n>\n> I always thought about such internal inline functions, I 've added them in v10.\n\nThanks.\n\nIn 0003:\n\n-Datum\n-float4in(PG_FUNCTION_ARGS)\n+static float\n+float4in_internal(char *num, bool *have_error)\n\nLooks like you forgot the inline marker?\n\nIn 0006:\n\n-static inline Datum jsonb_from_cstring(char *json, int len, bool unique_keys);\n...\n+extern Datum jsonb_from_cstring(char *json, int len, bool unique_keys,\n+ bool *error);\n\nDid you intentionally remove the inline marker from\njsonb_from_cstring() as opposed to the other cases?\n\n> Patch 0007:\n>\n> +\n> + /* Override default coercion in OMIT QUOTES case */\n> + if (ExecJsonQueryNeedsIOCoercion(jexpr, res, *resnull))\n> + {\n> + char *str = JsonbUnquote(DatumGetJsonbP(res));\n> ...\n> + else if (ret_typid == VARCHAROID || ret_typid == BPCHAROID ||\n> + ret_typid == BYTEAOID)\n> + {\n> + Jsonb *jb = DatumGetJsonbP(res);\n> + char *str = JsonbToCString(NULL, &jb->root, VARSIZE(jb));\n> +\n> + return ExecJsonStringCoercion(str, strlen(str),\n> ret_typid, ret_typmod);\n> + }\n>\n> I think it might be better to create ExecJsonQueryCoercion() similar\n> to ExecJsonValueCoercion() and put the above block in that function\n> rather than inlining it in ExecEvalJsonExprInternal().\n>\n> Extracted ExecJsonQueryCoercion().\n\nThanks.\n\n+/* Coerce JSONB datum to the output typid(typmod) */\n static Datum\n+ExecJsonQueryCoercion(JsonExpr *jexpr, Oid typid, int32 typmod,\n+ Datum jb, bool *error)\n\nMight make sense to expand to comment to mention JSON_QUERY, say as:\n\n/* Coerce JSONB datum returned by JSON_QUERY() to the output typid(typmod) */\n\n+/* Coerce SQL/JSON item to the output typid */\n+static Datum\n+ExecJsonValueCoercion(JsonbValue *item, Oid typid, int32 typmod,\n+ bool *isnull, bool *error)\n\nWhile at it, also update the comment of ExecJsonValueCoercion() as:\n\n/* Coerce SQL/JSON item returned by 
JSON_VALUE() to the output typid */\n\n> + /*\n> + * XXX coercion to text is done using output functions, and they\n> + * are mutable for non-time[tz] types due to using of DateStyle.\n> + * We can pass USE_ISO_DATES, which is used inside jsonpath, to\n> + * make these coercions and JSON_VALUE(RETURNING text) immutable.\n> + *\n> + * XXX Also timestamp[tz] output functions can throw \"out of range\"\n> + * error, but this error seem to be not possible.\n> + */\n>\n> Are we planning to fix these before committing?\n>\n> I don't know, but the first issue is critical for building functional indexes\n> on JSON_VALUE().\n\nOk.\n\n> - coercion = &coercions->composite;\n> - res = JsonbPGetDatum(JsonbValueToJsonb(item));\n> + Assert(0); /* non-scalars must be rejected by JsonPathValue() */\n>\n> I didn't notice any changes to JsonPathValue(). Is the new comment\n> referring to an existing behavior of JsonPathValue() or something that\n> must be done by the patch?\n>\n> JsonPathValue() has a check for non-scalars items, this is simply a new comment.\n\nOk.\n\n> @@ -411,6 +411,26 @@ contain_mutable_functions_walker(Node *node, void *context)\n> {\n> JsonExpr *jexpr = castNode(JsonExpr, node);\n> Const *cnst;\n> + bool returns_datetime;\n> +\n> + /*\n> + * Input fuctions for datetime types are stable. They can be\n> + * called in JSON_VALUE(), when the resulting SQL/JSON is a\n> + * string.\n> + */\n> ...\n>\n>\n> Sorry if you've mentioned it before, but are these hunks changing\n> contain_mutable_functions_walker() fixing a bug? That is, did the\n> original SQL/JSON patch miss doing this?\n>\n> In the original patch there were checks for mutability of expressions contained\n> in JsonCoercion nodes. After their removal, we need to use hardcoded checks.\n\nAh, okay, makes sense. Though I do wonder why list the individual\ntype OIDs here rather than checking the mutability markings on their\ninput/output functions? 
For example, we could do what the following\nblob in check_funcs_in_node() that is called by\ncontain_mutable_functions_walker() does:\n\n case T_CoerceViaIO:\n {\n CoerceViaIO *expr = (CoerceViaIO *) node;\n Oid iofunc;\n Oid typioparam;\n bool typisvarlena;\n\n /* check the result type's input function */\n getTypeInputInfo(expr->resulttype,\n &iofunc, &typioparam);\n if (checker(iofunc, context))\n return true;\n /* check the input type's output function */\n getTypeOutputInfo(exprType((Node *) expr->arg),\n &iofunc, &typisvarlena);\n if (checker(iofunc, context))\n return true;\n }\n\nI guess that's what would get used when the JsonCoercion nodes were present.\n\nOn 0010:\n\n@@ -5402,7 +5401,7 @@ ExecEvalJsonExprSkip(ExprState *state, ExprEvalStep *op)\n * true - Ok, jump to the end of JsonExpr\n * false - error occured, need to execute DEFAULT ON ERROR expression\n */\n-bool\n+void\n\nLooks like you forgot to update the comment.\n\n SELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING int DEFAULT 111 ON ERROR);\n- json_value\n-------------\n- 111\n-(1 row)\n-\n+ERROR: syntax error at or near \"DEFAULT\"\n+LINE 1: ...ELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING int DEFAULT 11...\n\nIs it intentional that you left many instances of the regression test\noutput changes like the above?\n\nFinally, I get this warning:\n\nexecExprInterp.c: In function ‘ExecJsonCoerceCStringToText’:\nexecExprInterp.c:4765:3: warning: missing braces around initializer\n[-Wmissing-braces]\n NameData encoding = {0};\n ^\nexecExprInterp.c:4765:3: warning: (near initialization for\n‘encoding.data’) [-Wmissing-braces]\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Aug 2022 15:51:18 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
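The shared-internal-function pattern the reviewers settle on above (a `static inline` worker like `float4in_internal()` called by both the classic throwing function and the new `_opt_error` variant) can be illustrated outside PostgreSQL. This is a hedged, standalone sketch: the names `bool_parse_internal`, `boolin`, and `boolin_opt_error` are simplified stand-ins, not the real Datum-based functions, and "throwing" is mocked with `exit()` rather than `ereport()`.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Shared worker: one copy of the parsing logic, success via return value. */
static inline bool
bool_parse_internal(const char *s, bool *result)
{
    if (strcmp(s, "true") == 0 || strcmp(s, "t") == 0)
    {
        *result = true;
        return true;
    }
    if (strcmp(s, "false") == 0 || strcmp(s, "f") == 0)
    {
        *result = false;
        return true;
    }
    return false;
}

/* Classic input function: hard error on bad input (stands in for ereport). */
static bool
boolin(const char *s)
{
    bool v;

    if (!bool_parse_internal(s, &v))
    {
        fprintf(stderr, "invalid input syntax for type boolean: \"%s\"\n", s);
        exit(1);
    }
    return v;
}

/* Error-returning variant: never throws, caller inspects *have_error. */
static bool
boolin_opt_error(const char *s, bool *have_error)
{
    bool v = false;

    *have_error = !bool_parse_internal(s, &v);
    return v;
}
```

Because the worker is `static inline` in the same translation unit, neither caller pays an extra function call, and the parsing logic exists exactly once.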
{
"msg_contents": "On Wed, Aug 31, 2022 at 3:51 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Aug 31, 2022 at 6:25 AM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> > v10 patches\n>\n> Finally, I get this warning:\n>\n> execExprInterp.c: In function ‘ExecJsonCoerceCStringToText’:\n> execExprInterp.c:4765:3: warning: missing braces around initializer\n> [-Wmissing-braces]\n> NameData encoding = {0};\n> ^\n> execExprInterp.c:4765:3: warning: (near initialization for\n> ‘encoding.data’) [-Wmissing-braces]\n\nGiven the time constraints on making a decision on this, I'd like to\nalso mention that other than the things mentioned in my last email,\nwhich don't sound like a big deal for Nikita to take care of, I don't\nhave any further comments on the patches.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Aug 2022 16:48:08 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 3:51 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> SELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING int DEFAULT 111 ON ERROR);\n> - json_value\n> -------------\n> - 111\n> -(1 row)\n> -\n> +ERROR: syntax error at or near \"DEFAULT\"\n> +LINE 1: ...ELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING int DEFAULT 11...\n>\n> Is it intentional that you left many instances of the regression test\n> output changes like the above?\n\nActually, thinking more about this, I am wondering if we should not\nremove the DEFAULT expression productions in gram.y. Maybe we can\nkeep the syntax and give an unsupported error during parse-analysis,\nlike the last version of the patch did for DEFAULT ON EMPTY. Which\nalso means to also leave JsonBehavior alone but with default_expr\nalways NULL for now.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Aug 2022 20:01:13 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nOn 2022-08-31 We 07:01, Amit Langote wrote:\n> On Wed, Aug 31, 2022 at 3:51 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> SELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING int DEFAULT 111 ON ERROR);\n>> - json_value\n>> -------------\n>> - 111\n>> -(1 row)\n>> -\n>> +ERROR: syntax error at or near \"DEFAULT\"\n>> +LINE 1: ...ELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING int DEFAULT 11...\n>>\n>> Is it intentional that you left many instances of the regression test\n>> output changes like the above?\n> Actually, thinking more about this, I am wondering if we should not\n> remove the DEFAULT expression productions in gram.y. Maybe we can\n> keep the syntax and give an unsupported error during parse-analysis,\n> like the last version of the patch did for DEFAULT ON EMPTY. Which\n> also means to also leave JsonBehavior alone but with default_expr\n> always NULL for now.\n>\n\nProducing an error in the parse analysis phase seems best to me.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 08:38:01 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 8/31/22 8:38 AM, Andrew Dunstan wrote:\r\n> \r\n> On 2022-08-31 We 07:01, Amit Langote wrote:\r\n>> On Wed, Aug 31, 2022 at 3:51 PM Amit Langote <amitlangote09@gmail.com> wrote:\r\n>>> SELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING int DEFAULT 111 ON ERROR);\r\n>>> - json_value\r\n>>> -------------\r\n>>> - 111\r\n>>> -(1 row)\r\n>>> -\r\n>>> +ERROR: syntax error at or near \"DEFAULT\"\r\n>>> +LINE 1: ...ELECT JSON_VALUE(jsonb '\"aaa\"', '$' RETURNING int DEFAULT 11...\r\n>>>\r\n>>> Is it intentional that you left many instances of the regression test\r\n>>> output changes like the above?\r\n>> Actually, thinking more about this, I am wondering if we should not\r\n>> remove the DEFAULT expression productions in gram.y. Maybe we can\r\n>> keep the syntax and give an unsupported error during parse-analysis,\r\n>> like the last version of the patch did for DEFAULT ON EMPTY. Which\r\n>> also means to also leave JsonBehavior alone but with default_expr\r\n>> always NULL for now.\r\n>>\r\n> \r\n> Producing an error in the parse analysis phase seems best to me.\r\n\r\nAndres, Robert, Tom: With this recent work, have any of your opinions \r\nchanged on including SQL/JSON in v15?\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Wed, 31 Aug 2022 10:20:24 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-31 10:20:24 -0400, Jonathan S. Katz wrote:\n> Andres, Robert, Tom: With this recent work, have any of your opinions\n> changed on including SQL/JSON in v15?\n\nI don't really know what to do here. It feels blatantly obvious that this code\nisn't even remotely close to being releasable. I'm worried about the impact of\nthe big revert at this stage of the release cycle, and that's not getting\nbetter by delaying further. And I'm getting weary of being asked to make the\nobvious call that the authors of this feature as well as the RMT should have\nmade a while ago.\n\n From my POV the only real discussion is whether we'd want to revert this in 15\nand HEAD or just 15. There's imo a decent point to be made to just revert in\n15 and aggressively press forward with the changes posted in this thread.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Aug 2022 08:49:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nOn 2022-08-30 Tu 17:25, Nikita Glukhov wrote:\n>\n>\n>\n>>>> Patches 0001-0006:\n>>>>\n>>>> Yeah, these add the overhead of an extra function call (typin() ->\n>>>> typin_opt_error()) in possibly very common paths. Other than\n>>>> refactoring *all* places that call typin() to use the new API, the\n>>>> only other option seems to be to leave the typin() functions alone and\n>>>> duplicate their code in typin_opt_error() versions for all the types\n>>>> that this patch cares about. Though maybe, that's not necessarily a\n>>>> better compromise than accepting the extra function call overhead.\n>>> I think another possibility is to create a static inline function in the\n>>> corresponding .c module (say boolin_impl() in bool.c), which is called\n>>> by both the opt_error variant as well as the regular one. This would\n>>> avoid the duplicate code as well as the added function-call overhead.\n>> +1\n> I always thought about such internal inline functions, I 've added them in v10.\n>\n>\n\nA couple of questions about these:\n\n\n1. Patch 5 changes the API of DecodeDateTime() and DecodeTimeOnly() by\nadding an extra parameter bool *error. Would it be better to provide\n_opt_error flavors of these?\n\n2. Patch 6 changes jsonb_from_cstring so that it's no longer static\ninline. Shouldn't we have a static inline function that can be called\nfrom inside jsonb.c and is called by the extern function?\n\n\nchanging both of these things would be quite trivial and should not hold\nanything up.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 11:59:38 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
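Andrew's first question — should DecodeDateTime()/DecodeTimeOnly() grow an extra `bool *error` parameter, or should there be separate `_opt_error` flavors — is really about whether existing callers must change. A hypothetical standalone sketch of the `_opt_error` layering (the `decode_time*` names and the "HH:MM" format are invented for illustration; the real DecodeTimeOnly() has a very different signature):

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Internal worker, kept static in the module like the jsonb.c case. */
static inline bool
decode_time_internal(const char *str, int *hour, int *min)
{
    return sscanf(str, "%d:%d", hour, min) == 2;
}

/* New flavor: soft failure reported through *error. */
static bool
decode_time_opt_error(const char *str, int *hour, int *min, bool *error)
{
    *error = !decode_time_internal(str, hour, min);
    return !*error;
}

/* Existing API keeps its signature and hard-error behavior,
 * so no caller outside this file needs to be touched. */
static void
decode_time(const char *str, int *hour, int *min)
{
    if (!decode_time_internal(str, hour, min))
    {
        fprintf(stderr, "invalid time value: \"%s\"\n", str);
        exit(1);
    }
}
```

The trade-off is the same one discussed for jsonb_from_cstring(): the `static inline` worker stays private to the module, and only the thin extern wrappers are exposed.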
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-31 10:20:24 -0400, Jonathan S. Katz wrote:\n>> Andres, Robert, Tom: With this recent work, have any of your opinions\n>> changed on including SQL/JSON in v15?\n\n> I don't really know what to do here. It feels blatantly obvious that this code\n> isn't even remotely close to being releasable. I'm worried about the impact of\n> the big revert at this stage of the release cycle, and that's not getting\n> better by delaying further. And I'm getting weary of being asked to make the\n> obvious call that the authors of this feature as well as the RMT should have\n> made a while ago.\n\nI have to agree. There is a large amount of code at stake here.\nWe're being asked to review a bunch of hastily-produced patches\nto that code on an even more hasty schedule (and personally\nI have other things I need to do today...) I think the odds\nof a favorable end result are small.\n\n> From my POV the only real discussion is whether we'd want to revert this in 15\n> and HEAD or just 15. There's imo a decent point to be made to just revert in\n> 15 and aggressively press forward with the changes posted in this thread.\n\nI'm not for that. Code that we don't think is ready to ship\nhas no business being in the common tree, nor does it make\nreview any easier to be looking at one bulky set of\nalready-committed patches and another bulky set of deltas.\n\nI'm okay with making an exception for the include/nodes/ and\nbackend/nodes/ files in HEAD, since the recent changes in that\narea mean it'd be a lot of error-prone work to produce a reverting\npatch there. We can leave those in as dead code temporarily, I think.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Aug 2022 12:04:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 10:20 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> Andres, Robert, Tom: With this recent work, have any of your opinions\n> changed on including SQL/JSON in v15?\n\nNo. Nothing's been committed, and there's no time to review anything\nin detail, and there was never going to be. Nikita said he was ready\nto start hacking in mid-August. That's good of him, but feature freeze\nwas in April. We don't start hacking on a feature 4 months after the\nfreeze. I'm unwilling to drop everything I'm working on to review\npatches that were written in a last minute rush. Even if these patches\nwere more important to me than my own work, which they are not, I\ncouldn't possibly do a good job reviewing complex patches on top of\nother complex patches in an area that I haven't studied in years. And\nif I could do a good job, no doubt I'd find a bunch of problems -\nwhether they would be large or small, I don't know - and then that\nwould lead to more changes even closer to the intended release date.\n\nI just don't understand what the RMT thinks it is doing here. When a\nconcern is raised about whether a feature is anywhere close to being\nin a releasable state in August, \"hack on it some more and then see\nwhere we're at\" seems like an obviously impractical way forward. It\nseemed clear to me from the moment that Andres raised his concerns\nthat the only two viable strategies were (1) revert the feature and be\nsad or (2) decide to ship it anyway and hope that Andres is incorrect\nin thinking that it will become an embarrassment to the project. The\nRMT has chosen neither of these, and in fact, really seems to want\nsomeone else to make the decision. But that's not how it works. The\nRMT concept was invented precisely to solve problems like this one,\nwhere the patch authors don't really want to revert it but other\npeople think it's pretty busted. 
If such problems were best addressed\nby waiting for a long time to see whether anything changes, we\nwouldn't need an RMT. That's exactly how we used to handle these kinds\nof problems, and it sucked.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Aug 2022 12:26:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-31 12:26:29 -0400, Robert Haas wrote:\n> On Wed, Aug 31, 2022 at 10:20 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> > Andres, Robert, Tom: With this recent work, have any of your opinions\n> > changed on including SQL/JSON in v15?\n> \n> No. Nothing's been committed, and there's no time to review anything\n> in detail, and there was never going to be. Nikita said he was ready\n> to start hacking in mid-August. That's good of him, but feature freeze\n> was in April.\n\nAs additional context: I had started raising those concerns mid June.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Aug 2022 09:37:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 8/31/22 12:26 PM, Robert Haas wrote:\r\n> On Wed, Aug 31, 2022 at 10:20 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\r\n>> Andres, Robert, Tom: With this recent work, have any of your opinions\r\n>> changed on including SQL/JSON in v15?\r\n> \r\n> No. Nothing's been committed, and there's no time to review anything\r\n> in detail, and there was never going to be.\r\n\r\nOK. Based on this feedback, the RMT is going to request that this is \r\nreverted.\r\n\r\nWith RMT hat on -- Andrew can you please revert the patchset?\r\n\r\n> Nikita said he was ready\r\n> to start hacking in mid-August. That's good of him, but feature freeze\r\n> was in April. We don't start hacking on a feature 4 months after the\r\n> freeze. I'm unwilling to drop everything I'm working on to review\r\n> patches that were written in a last minute rush. Even if these patches\r\n> were more important to me than my own work, which they are not, I\r\n> couldn't possibly do a good job reviewing complex patches on top of\r\n> other complex patches in an area that I haven't studied in years. And\r\n> if I could do a good job, no doubt I'd find a bunch of problems -\r\n> whether they would be large or small, I don't know - and then that\r\n> would lead to more changes even closer to the intended release date.\r\n> \r\n> I just don't understand what the RMT thinks it is doing here. When a\r\n> concern is raised about whether a feature is anywhere close to being\r\n> in a releasable state in August, \"hack on it some more and then see\r\n> where we're at\" seems like an obviously impractical way forward. It\r\n> seemed clear to me from the moment that Andres raised his concerns\r\n> that the only two viable strategies were (1) revert the feature and be\r\n> sad or (2) decide to ship it anyway and hope that Andres is incorrect\r\n> in thinking that it will become an embarrassment to the project. 
The\r\n> RMT has chosen neither of these, and in fact, really seems to want\r\n> someone else to make the decision. But that's not how it works. The\r\n> RMT concept was invented precisely to solve problems like this one,\r\n> where the patch authors don't really want to revert it but other\r\n> people think it's pretty busted. If such problems were best addressed\r\n> by waiting for a long time to see whether anything changes, we\r\n> wouldn't need an RMT. That's exactly how we used to handle these kinds\r\n> of problems, and it sucked.\r\n\r\nThis is fair feedback. However, there are a few things to consider here:\r\n\r\n1. When Andres raised his initial concerns, the RMT did recommend to \r\nrevert but did not force it. Part of the RMT charter is to try to get \r\nconsensus before doing so and after we've exhausted the community \r\nprocess. As we moved closer, the patch authors proposed some suggestions \r\nwhich other folks were amenable to trying.\r\n\r\nUnfortunately, time has run out. However,\r\n\r\n2. One of the other main goals of the RMT is to ensure the release ships \r\n\"on time\" which we define to be late Q3/early Q4. We factored that into \r\nthe decision making process around this. We are still on time for the \r\nrelease.\r\n\r\nI take responsibility for the decision making. I would be open to \r\ndiscussing this further around what worked / what didn't with the RMT and \r\nwhere we can improve in the future.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Wed, 31 Aug 2022 12:48:52 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> From my POV the only real discussion is whether we'd want to revert this in 15\n>> and HEAD or just 15. There's imo a decent point to be made to just revert in\n>> 15 and aggressively press forward with the changes posted in this thread.\n\n> I'm not for that. Code that we don't think is ready to ship\n> has no business being in the common tree, nor does it make\n> review any easier to be looking at one bulky set of\n> already-committed patches and another bulky set of deltas.\n\nTo enlarge on that a bit: it seems to me that the really fundamental\nissue here is how to catch datatype-specific input and conversion\nerrors without using subtransactions, because those are too expensive\nand can mask errors we'd rather not be masking, such as OOM. (Andres\nhad some additional, more localized concerns, but I think this is the\none with big-picture implications.)\n\nThe currently proposed patchset hacks up a relatively small number\nof core datatypes to be able to do that. But it's just a hack\nand there's no prospect of extension types being able to join\nin the fun. I think where we need to start, for v16, is making\nan API design that will let any datatype have this functionality.\n(I don't say that we'd convert every datatype to do so right away;\nin the long run we should, but I'm content to start with just the\nsame core types touched here.) Beside the JSON stuff, there is\nanother even more pressing application for such behavior, namely\nthe often-requested COPY functionality to be able to shunt bad data\noff somewhere without losing the entire transfer. 
In the COPY case\nI think we'd want to be able to capture the error message that\nwould have been issued, which means the current patches are not\nat all appropriate as a basis for that API design: they're just\nreturning a bool without any details.\n\nSo that's why I'm in favor of reverting and starting over.\nThere are probably big chunks of what's been done that can be\nre-used, but it all needs to be re-examined with this sort of\ndesign in mind.\n\nAs a really quick sketch of what such an API might look like:\nwe could invent a new node type, say IOCallContext, which is\nintended to be passed as FunctionCallInfo.context to type\ninput functions and perhaps type conversion functions.\nCall sites wishing to have no-thrown-error functionality would\ninitialize one of these to show \"no error\" and then pass it\nto the data type's usual input function. Old-style input\nfunctions would ignore this and just throw errors as usual;\nsorry, you don't get the no-error functionality you wanted.\nBut I/O functions that had been updated would know to store the\nreport of a relevant error into that node and then return NULL.\n(Although I think there may be assumptions somewhere that\nI/O functions don't return NULL, so maybe \"just return any\ndummy value\" is a better idea? Although likely it wouldn't\nbe hard to remove such assumptions from callers using this\nfunctionality.) The caller would detect the presence of an error\nby examining the node contents and then do whatever it needs to do.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Aug 2022 13:06:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
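Tom's IOCallContext sketch above can be mocked up in plain C to show the intended control flow. Everything here is illustrative, not PostgreSQL code: in the real API the context would travel as a Node through FunctionCallInfo.context, and `int8in_soft` is an invented name. The point is the dual behavior — an updated input function records the error (message included, which the bool-only scheme cannot do) and returns a dummy value, while a call without a context keeps the old hard-error behavior.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the proposed context node: filled in instead of throwing. */
typedef struct IOCallContext
{
    bool error_occurred;
    char errmsg[128];       /* captured message, usable by e.g. COPY */
} IOCallContext;

/*
 * An "updated" input function: with a context, record the error and return
 * a dummy value; without one, fall back to the old-style hard error.
 */
static long
int8in_soft(const char *str, IOCallContext *cxt)
{
    char *end;
    long val = strtol(str, &end, 10);

    if (end == str || *end != '\0')
    {
        if (cxt != NULL)
        {
            cxt->error_occurred = true;
            snprintf(cxt->errmsg, sizeof(cxt->errmsg),
                     "invalid input syntax for type bigint: \"%s\"", str);
            return 0;       /* dummy value; caller must inspect cxt */
        }
        fprintf(stderr, "invalid input syntax for type bigint: \"%s\"\n", str);
        exit(1);            /* old-style behavior: throw */
    }
    return val;
}
```

An old-style input function would simply never look at the context, which is exactly the "sorry, you don't get the no-error functionality" degradation Tom describes.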
{
"msg_contents": "On Wed, Aug 31, 2022 at 1:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The currently proposed patchset hacks up a relatively small number\n> of core datatypes to be able to do that. But it's just a hack\n> and there's no prospect of extension types being able to join\n> in the fun. I think where we need to start, for v16, is making\n> an API design that will let any datatype have this functionality.\n\nThis would be really nice to have.\n\n> (I don't say that we'd convert every datatype to do so right away;\n> in the long run we should, but I'm content to start with just the\n> same core types touched here.)\n\nI would be in favor of making more of an effort than just a few token\ndata types. The initial patch could just touch a few, but once the\ninfrastructure is in place we should really make a sweep through the\ntree and tidy up.\n\n> Beside the JSON stuff, there is\n> another even more pressing application for such behavior, namely\n> the often-requested COPY functionality to be able to shunt bad data\n> off somewhere without losing the entire transfer. In the COPY case\n> I think we'd want to be able to capture the error message that\n> would have been issued, which means the current patches are not\n> at all appropriate as a basis for that API design: they're just\n> returning a bool without any details.\n\nFully agreed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Aug 2022 13:09:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Aug 31, 2022 at 1:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> (I don't say that we'd convert every datatype to do so right away;\n>> in the long run we should, but I'm content to start with just the\n>> same core types touched here.)\n\n> I would be in favor of making more of an effort than just a few token\n> data types. The initial patch could just touch a few, but once the\n> infrastructure is in place we should really make a sweep through the\n> tree and tidy up.\n\nSure, but my point is that we can do that in a time-extended fashion\nrather than having a flag day where everything must be updated.\nThe initial patch just needs to update a few types as proof of concept.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Aug 2022 13:14:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nOn 2022-08-31 We 12:48, Jonathan S. Katz wrote:\n>\n>\n> With RMT hat on -- Andrew can you please revert the patchset?\n\n\n:-(\n\n\nYes, I'll do it, starting with the v15 branch. Might take a day or so.\n\n\ncheers (kinda)\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 14:22:54 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 12:26:29PM -0400, Robert Haas wrote:\n> someone else to make the decision. But that's not how it works. The\n> RMT concept was invented precisely to solve problems like this one,\n> where the patch authors don't really want to revert it but other\n> people think it's pretty busted. If such problems were best addressed\n> by waiting for a long time to see whether anything changes, we\n> wouldn't need an RMT. That's exactly how we used to handle these kinds\n> of problems, and it sucked.\n\nI saw the RMT/Jonathan stated August 28 as the cut-off date for a\ndecision, which was later changed to September 1:\n\n> The RMT is still inclined to revert, but will give folks until Sep 1 0:00\n> AoE[1] to reach consensus on if SQL/JSON can be included in v15. This matches\n> up to Andrew's availability timeline for a revert, and gives enough time to\n> get through the buildfarm prior to the Beta 4 release[2].\n \nI guess you are saying that setting a cut-off was a bad idea, or that\nthe cut-off was too close to the final release date. For me, I think\nthere were three questions:\n\n1. Were subtransactions acceptable, consensus no\n2. Could trapping errors work for PG 15, consensus no\n3. Could the feature be trimmed back for PG 15 to avoid these, consensus ?\n\nI don't think our community works well when there are three issues in\nplay at once.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 14:23:33 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 12:04:44PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > From my POV the only real discussion is whether we'd want to revert this in 15\n> > and HEAD or just 15. There's imo a decent point to be made to just revert in\n> > 15 and aggressively press forward with the changes posted in this thread.\n> \n> I'm not for that. Code that we don't think is ready to ship\n> has no business being in the common tree, nor does it make\n> review any easier to be looking at one bulky set of\n> already-committed patches and another bulky set of deltas.\n\nAgreed on removing from PG 15 and master --- it would be confusing to\nhave lots of incomplete code in master that is not in PG 15.\n\n> I'm okay with making an exception for the include/nodes/ and\n> backend/nodes/ files in HEAD, since the recent changes in that\n> area mean it'd be a lot of error-prone work to produce a reverting\n> patch there. We can leave those in as dead code temporarily, I think.\n\nI don't have an opinion on this.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 14:25:28 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I guess you are saying that setting a cut-off was a bad idea, or that\n> the cut-off was too close to the final release date. For me, I think\n> there were three questions:\n\n> 1. Were subtransactions acceptable, consensus no\n> 2. Could trapping errors work for PG 15, consensus no\n> 3. Could the feature be trimmed back for PG 15 to avoid these, consensus ?\n\nWe could probably have accomplished #3 if there was more time,\nbut we're out of time. (I'm not entirely convinced that spending\neffort towards #3 was productive anyway, given that we're now thinking\nabout a much differently-scoped patch with API changes.)\n\n> I don't think our community works well when there are three issues in\n> play at once.\n\nTo the extent that there was a management failure here, it was that\nwe didn't press for a resolution sooner. Given the scale of the\nconcerns raised in June, I kind of agree with Andres' opinion that\nfixing them post-freeze was doomed to failure. It was definitely\ndoomed once we reached August with no real work done towards it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Aug 2022 14:45:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nOn 2022-08-31 We 14:45, Tom Lane wrote:\n> To the extent that there was a management failure here, it was that\n> we didn't press for a resolution sooner. Given the scale of the\n> concerns raised in June, I kind of agree with Andres' opinion that\n> fixing them post-freeze was doomed to failure. It was definitely\n> doomed once we reached August with no real work done towards it.\n\n\nI'm not going to comment publicly in general about this, you might\nimagine what my reaction is. The decision is the RMT's to make and I\nhave no quarrel with that.\n\nBut I do want it understood that there was work being done right from\nthe time in June when Andres' complaints were published. These were\ndifficult issues, and we didn't let the grass grow looking for a fix. I\nconcede that might not have been visible until later.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 15:08:46 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 8/31/22 3:08 PM, Andrew Dunstan wrote:\r\n> \r\n> On 2022-08-31 We 14:45, Tom Lane wrote:\r\n>> To the extent that there was a management failure here, it was that\r\n>> we didn't press for a resolution sooner. Given the scale of the\r\n>> concerns raised in June, I kind of agree with Andres' opinion that\r\n>> fixing them post-freeze was doomed to failure. It was definitely\r\n>> doomed once we reached August with no real work done towards it.\r\n> \r\n> \r\n> I'm not going to comment publicly in general about this, you might\r\n> imagine what my reaction is. The decision is the RMT's to make and I\r\n> have no quarrel with that.\r\n> \r\n> But I do want it understood that there was work being done right from\r\n> the time in June when Andres' complaints were published. These were\r\n> difficult issues, and we didn't let the grass grow looking for a fix. I\r\n> concede that might not have been visible until later.\r\n\r\nJune was a bit of a rough month too -- we had the issues that spawned \r\nthe out-of-cycle release at the top of the month, which started almost \r\nright after Beta 1, and then almost immediately into Beta 2 after 14.4. \r\nI know that consumed a lot of my cycles. At that point in time for the \r\nv15 release process I was primarily focused on monitoring open items at \r\nthat point, so I missed the June comments.\r\n\r\nJonathan",
"msg_date": "Wed, 31 Aug 2022 16:18:00 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 31.08.2022 20:14, Tom Lane wrote:\n> Robert Haas<robertmhaas@gmail.com> writes:\n>> On Wed, Aug 31, 2022 at 1:06 PM Tom Lane<tgl@sss.pgh.pa.us> wrote:\n>>> The currently proposed patchset hacks up a relatively small number\n>>> of core datatypes to be able to do that. But it's just a hack\n>>> and there's no prospect of extension types being able to join\n>>> in the fun. I think where we need to start, for v16, is making\n>>> an API design that will let any datatype have this functionality.\n>>>\n>>> (I don't say that we'd convert every datatype to do so right away;\n>>> in the long run we should, but I'm content to start with just the\n>>> same core types touched here.) Beside the JSON stuff, there is\n>>> another even more pressing application for such behavior, namely\n>>> the often-requested COPY functionality to be able to shunt bad data\n>>> off somewhere without losing the entire transfer. In the COPY case\n>>> I think we'd want to be able to capture the error message that\n>>> would have been issued, which means the current patches are not\n>>> at all appropriate as a basis for that API design: they're just\n>>> returning a bool without any details.\n>>>\n>> I would be in favor of making more of an effort than just a few token\n>> data types. 
The initial patch could just touch a few, but once the\n>> infrastructure is in place we should really make a sweep through the\n>> tree and tidy up.\n> Sure, but my point is that we can do that in a time-extended fashion\n> rather than having a flag day where everything must be updated.\n> The initial patch just needs to update a few types as proof of concept.\n>\nAnd here is a quick POC patch with an example for COPY and float4:\n\n=# CREATE TABLE test (i int, f float4);\nCREATE TABLE\n\n=# COPY test (f) FROM stdin WITH (null_on_error (f));\n1\nerr\n2\n\\.\n\nCOPY 3\n\n=# SELECT f FROM test;\n f\n---\n 1\n \n 2\n(3 rows)\n\n=# COPY test (i) FROM stdin WITH (null_on_error (i));\nERROR: input function for datatype \"integer\" does not support error handling\n\n\n\nPG_RETURN_ERROR() is a reincarnation of ereport_safe() macro for returning\nErrorData, which was present in older versions (~v18) of SQL/JSON patches.\nLater it was replaced with `bool *have_error` and less magical\n`if (have_error) ... else ereport(...)`.\n\n\nObviously, this needs a separate thread.\n\n-- \nNikita Glukhov\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 31 Aug 2022 23:39:31 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
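The COPY ... null_on_error POC in the message above depends on input functions that can report failure "softly" instead of throwing. A minimal sketch of that calling convention, in Python for brevity — all names here (float4_in_safe, ErrorData, copy_column) are illustrative, not PostgreSQL's actual C API, which was still under design in this thread:

```python
# Sketch of the "soft error" input-function convention discussed above:
# instead of raising on bad input, the parser fills an error struct and
# returns None, letting a COPY-like loader substitute NULL for bad values.
# All names are illustrative, not PostgreSQL's real API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ErrorData:
    message: str = ""


def float4_in_safe(s: str, edata: ErrorData) -> Optional[float]:
    """Error-safe input function: report failure via edata, never raise."""
    try:
        return float(s)
    except ValueError:
        edata.message = f'invalid input syntax for type real: "{s}"'
        return None


def copy_column(values, null_on_error=False):
    """Load a column, turning parse errors into NULL when requested."""
    out = []
    for v in values:
        edata = ErrorData()
        parsed = float4_in_safe(v, edata)
        if parsed is None and not null_on_error:
            raise ValueError(edata.message)  # old behavior: abort the load
        out.append(parsed)
    return out


print(copy_column(["1", "err", "2"], null_on_error=True))  # [1.0, None, 2.0]
```

This mirrors the POC's example, where `COPY test (f) FROM stdin WITH (null_on_error (f))` loads `1`, `err`, `2` as `1`, NULL, `2` instead of failing the whole transfer.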
{
"msg_contents": "On 31.08.2022 23:39, Nikita Glukhov wrote:\n\n> And here is a quick POC patch with an example for COPY and float4\n\nI decided to go further and use new API in SQL/JSON functions\n(even if it does not make real sense now).\n\nI have added function for checking expressions trees, special\nexecutor steps for handling errors in FuncExpr, CoerceViaIO,\nCoerceToDomain which are passed through ExprState.edata.\n\nOf course, there is still a lot of work:\n 1. JIT for new expression steps\n 2. Removal of subsidary ExprStates (needs another solution for\n ErrorData passing)\n 3. Checking of domain constraint expressions\n 4. Error handling in coercion to bytea\n 5. Error handling in json_populate_type()\n 6. Error handling in jsonb::type casts\n 7. ...\n\n\nAlso I have added lazy creation of JSON_VALUE coercions, which was\nnot present in previous patches. It really greatly speeds up JIT\nand reduces memory consumption. But it requires using of subsidary\nExprStates.\n\n\njsonb_sqljson test now fails because of points 4, 5, 6.\n\n--\nNikita Glukhov\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 1 Sep 2022 16:54:42 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nOn 2022-08-31 We 14:22, Andrew Dunstan wrote:\n> On 2022-08-31 We 12:48, Jonathan S. Katz wrote:\n>>\n>> With RMT hat on -- Andrew can you please revert the patchset?\n>\n> :-(\n>\n>\n> Yes, I'll do it, starting with the v15 branch. Might take a day or so.\n>\n>\n\ndone\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 1 Sep 2022 17:13:50 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On 9/1/22 5:13 PM, Andrew Dunstan wrote:\r\n> \r\n> On 2022-08-31 We 14:22, Andrew Dunstan wrote:\r\n>> On 2022-08-31 We 12:48, Jonathan S. Katz wrote:\r\n>>>\r\n>>> With RMT hat on -- Andrew can you please revert the patchset?\r\n>>\r\n>> :-(\r\n>>\r\n>>\r\n>> Yes, I'll do it, starting with the v15 branch. Might take a day or so.\r\n>>\r\n>>\r\n> \r\n> done\r\n\r\nThank you Andrew.\r\n\r\nJonathan",
"msg_date": "Thu, 1 Sep 2022 17:55:04 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 03:51:18PM +0900, Amit Langote wrote:\n> Finally, I get this warning:\n> \n> execExprInterp.c: In function ‘ExecJsonCoerceCStringToText’:\n> execExprInterp.c:4765:3: warning: missing braces around initializer\n> [-Wmissing-braces]\n> NameData encoding = {0};\n> ^\n> execExprInterp.c:4765:3: warning: (near initialization for\n> ‘encoding.data’) [-Wmissing-braces]\n\nWith what compiler ?\n\nThis has came up before:\n20211202033145.GK17618@telsasoft.com\n20220716115932.GV18011@telsasoft.com\n\n\n",
"msg_date": "Fri, 2 Sep 2022 06:56:29 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 8:56 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Wed, Aug 31, 2022 at 03:51:18PM +0900, Amit Langote wrote:\n> > Finally, I get this warning:\n> >\n> > execExprInterp.c: In function ‘ExecJsonCoerceCStringToText’:\n> > execExprInterp.c:4765:3: warning: missing braces around initializer\n> > [-Wmissing-braces]\n> > NameData encoding = {0};\n> > ^\n> > execExprInterp.c:4765:3: warning: (near initialization for\n> > ‘encoding.data’) [-Wmissing-braces]\n>\n> With what compiler ?\n>\n> This has came up before:\n> 20211202033145.GK17618@telsasoft.com\n> 20220716115932.GV18011@telsasoft.com\n\nDidn't realize it when I was reviewing the patch but somehow my build\nscript had started using gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44),\nwhich I know is old.\n\n- Amit\n\n\n",
"msg_date": "Mon, 5 Sep 2022 15:17:35 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "\nOn 2022-09-01 Th 09:54, Nikita Glukhov wrote:\n>\n> On 31.08.2022 23:39, Nikita Glukhov wrote:\n>\n>> And here is a quick POC patch with an example for COPY and float4\n> I decided to go further and use new API in SQL/JSON functions \n> (even if it does not make real sense now).\n>\n> I have added function for checking expressions trees, special\n> executor steps for handling errors in FuncExpr, CoerceViaIO, \n> CoerceToDomain which are passed through ExprState.edata.\n>\n> Of course, there is still a lot of work:\n> 1. JIT for new expression steps\n> 2. Removal of subsidary ExprStates (needs another solution for \n> ErrorData passing)\n> 3. Checking of domain constraint expressions\n> 4. Error handling in coercion to bytea\n> 5. Error handling in json_populate_type()\n> 6. Error handling in jsonb::type casts\n> 7. ...\n>\n>\n> Also I have added lazy creation of JSON_VALUE coercions, which was \n> not present in previous patches. It really greatly speeds up JIT \n> and reduces memory consumption. But it requires using of subsidary \n> ExprStates.\n>\n>\n> jsonb_sqljson test now fails because of points 4, 5, 6.\n\n\n\n\nIt looks like this needs to be rebased anyway.\n\nI suggest just submitting the Input function stuff on its own, I think\nthat means not patches 3,4,15 at this stage. Maybe we would also need a\nsmall test module to call the functions, or at least some of them.\n\nThe earlier we can get this in the earlier SQL/JSON patches based on it\ncan be considered.\n\nA few comments:\n\n\n. proissafe isn't really a very informative name. Safe for what? maybe\nproerrorsafe or something would be better?\n\n. I don't think we need the if test or else clause here:\n\n+ if (edata)\n+ return InputFunctionCallInternal(flinfo, str, typioparam,\ntypmod, edata);\n+ else\n+ return InputFunctionCall(flinfo, str, typioparam, typmod);\n\n. 
I think we should probably cover float8 as well as float4, and there\nmight be some other odd gaps.\n\n\nAs mentioned previously, this should really go in a new thread, so\nplease don't reply to this but start a completely new thread.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 29 Sep 2022 23:05:07 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I suggest just submitting the Input function stuff on its own, I think\n> that means not patches 3,4,15 at this stage. Maybe we would also need a\n> small test module to call the functions, or at least some of them.\n> The earlier we can get this in the earlier SQL/JSON patches based on it\n> can be considered.\n\n+1\n\n> . proissafe isn't really a very informative name. Safe for what? maybe\n> proerrorsafe or something would be better?\n\nI strongly recommend against having a new pg_proc column at all.\nI doubt that you really need it, and having one will create\nenormous mechanical burdens to making the conversion. (For example,\nneeding a catversion bump every time we convert one more function,\nor an extension version bump to convert extensions.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Sep 2022 23:28:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON features for v15"
}
]
[
{
"msg_contents": "Hi hackers,\n\nWhilst debugging an issue with the output of pg_get_constraintdef, we've\ndiscovered that pg_get_constraintdef doesn't schema qualify foreign tables\nmentioned in the REFERENCES clause, even if pretty printing\n(PRETTYFLAG_SCHEMA) is turned off.\n\nThis is a problem because it means there is no way to get a constraint\ndefinition that can be recreated on another system when multiple schemas\nare in use, but a different search_path is set. It's also different from\npg_get_indexdef, where this flag is correctly respected.\n\nI assume this is an oversight, since the fix is pretty straightforward, see\nattached patch. I'll register the patch for the next commitfest.\n\nHere is a test case from my colleague Maciek showing this difference:\n\ncreate schema s;\ncreate table s.foo(a int primary key);\ncreate table s.bar(a int primary key, b int references s.foo(a));\n\nselect pg_get_indexdef(indexrelid, 0, false) from pg_index order by\nindexrelid desc limit 3;\n\n pg_get_indexdef\n\n-------------------------------------------------------------------------------------------------------\n CREATE UNIQUE INDEX bar_pkey ON s.bar USING btree (a)\n CREATE UNIQUE INDEX foo_pkey ON s.foo USING btree (a)\n CREATE UNIQUE INDEX pg_toast_13593_index ON pg_toast.pg_toast_13593 USING\nbtree (chunk_id, chunk_seq)\n(3 rows)\n\nselect pg_get_constraintdef(oid, false) from pg_constraint order by oid\ndesc limit 3;\n pg_get_constraintdef\n-----------------------------------\n FOREIGN KEY (b) REFERENCES foo(a)\n PRIMARY KEY (a)\n PRIMARY KEY (a)\n(3 rows)\n\nThanks,\nLukas\n\n-- \nLukas Fittl",
"msg_date": "Tue, 9 Aug 2022 17:10:35 -0700",
"msg_from": "Lukas Fittl <lukas@fittl.com>",
"msg_from_op": true,
"msg_subject": "pg_get_constraintdef: Schema qualify foreign tables unless pretty\n printing is enabled"
},
{
"msg_contents": "Lukas Fittl <lukas@fittl.com> writes:\n> Whilst debugging an issue with the output of pg_get_constraintdef, we've\n> discovered that pg_get_constraintdef doesn't schema qualify foreign tables\n> mentioned in the REFERENCES clause, even if pretty printing\n> (PRETTYFLAG_SCHEMA) is turned off.\n\n> This is a problem because it means there is no way to get a constraint\n> definition that can be recreated on another system when multiple schemas\n> are in use, but a different search_path is set. It's also different from\n> pg_get_indexdef, where this flag is correctly respected.\n\nI would say that pg_get_indexdef is the one that's out of step.\nI count 11 calls of generate_relation_name in ruleutils.c,\nof which only three have this business of being overridden\nwhen not-pretty. What is the rationale for that, and why\nwould we move pg_get_constraintdef from one category to the\nother?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Aug 2022 20:33:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_constraintdef: Schema qualify foreign tables unless pretty\n printing is enabled"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 5:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I would say that pg_get_indexdef is the one that's out of step.\n> I count 11 calls of generate_relation_name in ruleutils.c,\n> of which only three have this business of being overridden\n> when not-pretty. What is the rationale for that, and why\n> would we move pg_get_constraintdef from one category to the\n> other?\n>\n\nThe overall motivation here is to make it easy to recreate the schema\nwithout having to match the search_path on the importing side to be\nidentical to the exporting side. There is a workaround, which is to do a\nSET search_path before calling these functions that excludes the referenced\nschemas (which I guess is what pg_dump does?).\n\nBut I wonder, why do we have an explicit pretty printing flag on these\nfunctions, and PRETTYFLAG_SCHEMA in the code to represent this behavior. If\nwe don't want pretty printing to affect schema qualification, why does that\nflag exist?\n\nOf the other call sites, in terms of using generate_relation_name vs\ngenerate_qualified_relation_name:\n\n* pg_get_triggerdef_worker makes it conditional on pretty=true, but only\nfor ON, not the FROM (not clear why that difference exists?)\n* pg_get_indexdef_worker makes it conditional on prettyFlags &\nPRETTYFLAG_SCHEMA for the ON\n* pg_get_statisticsobj_worker does not handle pretty printing (always uses\ngenerate_relation_name)\n* make_ruledef makes it conditional on prettyFlags & PRETTYFLAG_SCHEMA for\nthe TO\n* get_insert_query_def does not handle pretty printing (always uses\ngenerate_relation_name)\n* get_update_query_def does not handle pretty printing (always uses\ngenerate_relation_name)\n* get_delete_query_def does not handle pretty printing (always uses\ngenerate_relation_name)\n* get_rule_expr does not handle pretty printing (always uses\ngenerate_relation_name)\n* get_from_clause_item does not handle pretty printing (always uses\ngenerate_relation_name)\n\nLooking at 
that, it seems we didn't make the effort for the view related\ncode with all its complexity, and didn't do it for pg_get_statisticsobjdef\nsince it doesn't have a pretty flag. Why we didn't do it in\npg_get_triggerdef_worker for FROM isn't clear to me.\n\nIf we want to be entirely consistent (and keep supporting\nPRETTYFLAG_SCHEMA), that probably means:\n\n* Adding a pretty flag to pg_get_statisticsobjdef\n* Teaching get_query_def to pass down prettyFlags to get_*_query_def\nfunctions\n* Update pg_get_triggerdef_worker to handle pretty for FROM as well\n\nIf that seems like a sensible direction I'd be happy to work on a patch.\n\nThanks,\nLukas\n\n-- \nLukas Fittl\n\nOn Tue, Aug 9, 2022 at 5:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nI would say that pg_get_indexdef is the one that's out of step.\nI count 11 calls of generate_relation_name in ruleutils.c,\nof which only three have this business of being overridden\nwhen not-pretty. What is the rationale for that, and why\nwould we move pg_get_constraintdef from one category to the\nother?The overall motivation here is to make it easy to recreate the schema without having to match the search_path on the importing side to be identical to the exporting side. There is a workaround, which is to do a SET search_path before calling these functions that excludes the referenced schemas (which I guess is what pg_dump does?).But I wonder, why do we have an explicit pretty printing flag on these functions, and PRETTYFLAG_SCHEMA in the code to represent this behavior. 
If we don't want pretty printing to affect schema qualification, why does that flag exist?Of the other call sites, in terms of using generate_relation_name vs generate_qualified_relation_name:* pg_get_triggerdef_worker makes it conditional on pretty=true, but only for ON, not the FROM (not clear why that difference exists?)* pg_get_indexdef_worker makes it conditional on prettyFlags & PRETTYFLAG_SCHEMA for the ON* pg_get_statisticsobj_worker does not handle pretty printing (always uses generate_relation_name)* make_ruledef makes it conditional on prettyFlags & PRETTYFLAG_SCHEMA for the TO* get_insert_query_def does not handle pretty printing (always uses generate_relation_name)* get_update_query_def does not handle pretty printing (always uses generate_relation_name)* get_delete_query_def does not handle pretty printing (always uses generate_relation_name)* get_rule_expr does not handle pretty printing (always uses generate_relation_name)* get_from_clause_item does not handle pretty printing (always uses generate_relation_name)Looking at that, it seems we didn't make the effort for the view related code with all its complexity, and didn't do it for pg_get_statisticsobjdef since it doesn't have a pretty flag. Why we didn't do it in pg_get_triggerdef_worker for FROM isn't clear to me.If we want to be entirely consistent (and keep supporting PRETTYFLAG_SCHEMA), that probably means:* Adding a pretty flag to pg_get_statisticsobjdef* Teaching get_query_def to pass down prettyFlags to get_*_query_def functions* Update pg_get_triggerdef_worker to handle pretty for FROM as wellIf that seems like a sensible direction I'd be happy to work on a patch.Thanks,Lukas-- Lukas Fittl",
"msg_date": "Tue, 9 Aug 2022 18:07:30 -0700",
"msg_from": "Lukas Fittl <lukas@fittl.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_get_constraintdef: Schema qualify foreign tables unless pretty\n printing is enabled"
},
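The distinction Lukas catalogs above — generate_relation_name vs generate_qualified_relation_name — comes down to whether a name is schema-qualified only when an unqualified reference would not resolve to the right table under the current search_path. A toy model of that rule (illustrative Python, not the actual ruleutils.c logic), using the s.foo example from the start of the thread:

```python
# Toy model of the qualification rule discussed above: the "pretty" form
# omits the schema when the relation is what an unqualified name would
# resolve to via search_path; the dump-safe form always qualifies.
# Names and structure are simplified, not the real C implementation.

def first_visible(name, search_path, catalog):
    """Return the schema whose table of this name search_path finds first."""
    for schema in search_path:
        if (schema, name) in catalog:
            return schema
    return None


def generate_relation_name(schema, name, search_path, catalog):
    # Omit the schema only if the unqualified name resolves to this table.
    if first_visible(name, search_path, catalog) == schema:
        return name
    return f"{schema}.{name}"


def generate_qualified_relation_name(schema, name):
    return f"{schema}.{name}"


catalog = {("s", "foo"), ("s", "bar"), ("public", "foo")}

# With s first in search_path, the pretty form is unambiguous...
print(generate_relation_name("s", "foo", ["s", "public"], catalog))  # foo
# ...but replayed under a different search_path the bare name would
# resolve to public.foo, which is why restoring elsewhere wants:
print(generate_relation_name("s", "foo", ["public"], catalog))       # s.foo
print(generate_qualified_relation_name("s", "foo"))                  # s.foo
```

This is the failure mode in the original report: `FOREIGN KEY (b) REFERENCES foo(a)` is only correct on a system whose search_path resolves `foo` to `s.foo`.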
{
"msg_contents": "On 2022-Aug-09, Lukas Fittl wrote:\n\n> But I wonder, why do we have an explicit pretty printing flag on these\n> functions, and PRETTYFLAG_SCHEMA in the code to represent this behavior.\n> If we don't want pretty printing to affect schema qualification, why\n> does that flag exist?\n\nBecause of CVE-2018-1058. See commit 815172ba8068.\n\nI imagine that that commit only touched the minimum necessary to solve\nthe immediate security problem, but that further work is needed to make\nPRETTYFLAG_SCHEMA become a fully functional gadget; but that would\nrequire that the whole of ruleutils.c (and everything downstream from\nit) behaves sanely. In other words, I think your patch is too small.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 10 Aug 2022 10:58:50 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_constraintdef: Schema qualify foreign tables unless\n pretty printing is enabled"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Aug-09, Lukas Fittl wrote:\n>> But I wonder, why do we have an explicit pretty printing flag on these\n>> functions, and PRETTYFLAG_SCHEMA in the code to represent this behavior.\n>> If we don't want pretty printing to affect schema qualification, why\n>> does that flag exist?\n\n> Because of CVE-2018-1058. See commit 815172ba8068.\n\n> I imagine that that commit only touched the minimum necessary to solve\n> the immediate security problem, but that further work is needed to make\n> PRETTYFLAG_SCHEMA become a fully functional gadget; but that would\n> require that the whole of ruleutils.c (and everything downstream from\n> it) behaves sanely. In other words, I think your patch is too small.\n\nWhat I'm inclined to do, rather than repeat the same finicky &\nundocumented coding pattern in one more place, is write a convenience\nfunction for it that can be named and documented to reflect the coding\nrule about which call sites should use it (rather than calling plain\ngenerate_relation_name). However, the first requirement for that\nis to have a clearly defined rule. I think the intent of 815172ba8068\nwas to convert all uses that would determine the object-creation schema\nin commands issued by pg_dump. Do we want to widen that, and if so\nby how much? I'd be on board I think with adjusting other ruleutils.c\nfunctions that could plausibly be used for building creation commands,\nbut happen not to be called by pg_dump. I'm not on board with\nconverting every single generate_relation_name call --- mainly because\nit'd be pointless unless you also qualify every single function name,\noperator name, etc; and that would be unreadable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Aug 2022 09:48:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_constraintdef: Schema qualify foreign tables unless pretty\n printing is enabled"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 09:48:08AM -0400, Tom Lane wrote:\n> What I'm inclined to do, rather than repeat the same finicky &\n> undocumented coding pattern in one more place, is write a convenience\n> function for it that can be named and documented to reflect the coding\n> rule about which call sites should use it (rather than calling plain\n> generate_relation_name). However, the first requirement for that\n> is to have a clearly defined rule. I think the intent of 815172ba8068\n> was to convert all uses that would determine the object-creation schema\n> in commands issued by pg_dump. Do we want to widen that, and if so\n> by how much? I'd be on board I think with adjusting other ruleutils.c\n> functions that could plausibly be used for building creation commands,\n> but happen not to be called by pg_dump. I'm not on board with\n> converting every single generate_relation_name call --- mainly because\n> it'd be pointless unless you also qualify every single function name,\n> operator name, etc; and that would be unreadable.\n\nLukas, please note that this patch is waiting for your input for a few\nweeks now. Could you reply to the reviews provided?\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 14:19:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_constraintdef: Schema qualify foreign tables unless\n pretty printing is enabled"
},
{
"msg_contents": "On Wed, Oct 12, 2022 at 02:19:25PM +0900, Michael Paquier wrote:\n> Lukas, please note that this patch is waiting for your input for a few\n> weeks now. Could you reply to the reviews provided?\n\nThis has stalled for six weeks, so I have marked the patch as returned\nwith feedback.\n--\nMichael",
"msg_date": "Wed, 30 Nov 2022 15:56:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_constraintdef: Schema qualify foreign tables unless\n pretty printing is enabled"
}
]
[
{
"msg_contents": "Hi,\n\nOne CI run for the meson branch just failed in a way I hadn't seen before on\nwindows, when nothing had changed on windows\n\nhttps://cirrus-ci.com/task/6111743586861056\n\n027_stream_regress.pl ended up failing due to a timeout. Which in turn was\ncaused by the standby crashing.\n\n2022-08-10 01:46:20.731 GMT [2212][startup] PANIC: hash_xlog_split_allocate_page: failed to acquire cleanup lock\n2022-08-10 01:46:20.731 GMT [2212][startup] CONTEXT: WAL redo at 0/7A6EED8 for Hash/SPLIT_ALLOCATE_PAGE: new_bucket 31, meta_page_masks_updated F, issplitpoint_changed F; blkref #0: rel 1663/16384/24210, blk 23; blkref #1: rel 1663/16384/24210, blk 45; blkref #2: rel 1663/16384/24210, blk 0\nabort() has been called2022-08-10 01:46:31.919 GMT [7560][checkpointer] LOG: restartpoint starting: time\n2022-08-10 01:46:32.430 GMT [8304][postmaster] LOG: startup process (PID 2212) was terminated by exception 0xC0000354\n\nstack dump:\nhttps://api.cirrus-ci.com/v1/artifact/task/6111743586861056/crashlog/crashlog-postgres.exe_21c8_2022-08-10_01-46-28-215.txt\n\nThe relevant code triggering it:\n\n\tnewbuf = XLogInitBufferForRedo(record, 1);\n\t_hash_initbuf(newbuf, xlrec->new_bucket, xlrec->new_bucket,\n\t\t\t\t xlrec->new_bucket_flag, true);\n\tif (!IsBufferCleanupOK(newbuf))\n\t\telog(PANIC, \"hash_xlog_split_allocate_page: failed to acquire cleanup lock\");\n\nWhy do we just crash if we don't already have a cleanup lock? That can't be\nright. Or is there supposed to be a guarantee this can't happen?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 9 Aug 2022 19:26:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "\n\n> On Aug 9, 2022, at 7:26 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> The relevant code triggering it:\n> \n> \tnewbuf = XLogInitBufferForRedo(record, 1);\n> \t_hash_initbuf(newbuf, xlrec->new_bucket, xlrec->new_bucket,\n> \t\t\t\t xlrec->new_bucket_flag, true);\n> \tif (!IsBufferCleanupOK(newbuf))\n> \t\telog(PANIC, \"hash_xlog_split_allocate_page: failed to acquire cleanup lock\");\n> \n> Why do we just crash if we don't already have a cleanup lock? That can't be\n> right. Or is there supposed to be a guarantee this can't happen?\n\nPerhaps the code assumes that when xl_hash_split_allocate_page record was written, the new_bucket field referred to an unused page, and so during replay it should also refer to an unused page, and being unused, that nobody will have it pinned. But at least in heap we sometimes pin unused pages just long enough to examine them and to see that they are unused. Maybe something like that is happening here?\n\nI'd be curious to see the count returned by BUF_STATE_GET_REFCOUNT(LockBufHdr(newbuf)) right before this panic. If it's just 1, then it's not another backend, but our own, and we'd want to debug why we're pinning the same page twice (or more) while replaying wal. Otherwise, maybe it's a race condition with some other process that transiently pins a buffer and occasionally causes this code to panic?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 9 Aug 2022 20:21:19 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 3:21 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On Aug 9, 2022, at 7:26 PM, Andres Freund <andres@anarazel.de> wrote:\n> > The relevant code triggering it:\n> >\n> > newbuf = XLogInitBufferForRedo(record, 1);\n> > _hash_initbuf(newbuf, xlrec->new_bucket, xlrec->new_bucket,\n> > xlrec->new_bucket_flag, true);\n> > if (!IsBufferCleanupOK(newbuf))\n> > elog(PANIC, \"hash_xlog_split_allocate_page: failed to acquire cleanup lock\");\n> >\n> > Why do we just crash if we don't already have a cleanup lock? That can't be\n> > right. Or is there supposed to be a guarantee this can't happen?\n>\n> Perhaps the code assumes that when xl_hash_split_allocate_page record was written, the new_bucket field referred to an unused page, and so during replay it should also refer to an unused page, and being unused, that nobody will have it pinned. But at least in heap we sometimes pin unused pages just long enough to examine them and to see that they are unused. Maybe something like that is happening here?\n\nHere's an email about that:\n\nhttps://www.postgresql.org/message-id/CAE9k0P=OXww6RQCGrmDNa8=L3EeB01SGbYuP23y-qZJ=4td38Q@mail.gmail.com\n\n> I'd be curious to see the count returned by BUF_STATE_GET_REFCOUNT(LockBufHdr(newbuf)) right before this panic. If it's just 1, then it's not another backend, but our own, and we'd want to debug why we're pinning the same page twice (or more) while replaying wal. Otherwise, maybe it's a race condition with some other process that transiently pins a buffer and occasionally causes this code to panic?\n\nBut which backend could that be? We aren't starting any at that point\nin the test.\n\nSomeone might wonder if it's the startup process itself via the new\nWAL prefetching machinery, but that doesn't pin pages, it only probes\nthe buffer mapping table to see if future pages are cached already\n(see bufmgr.c PrefetchSharedBuffer()). 
(This is a topic I've thought\nabout a bit because I have another installment of recovery prefetching\nin development using real AIO that *does* pin pages in advance, and\nhas to deal with code that wants cleanup locks like this...)\n\nIt's possible that git log src/backend/access/hash/ can explain a\nbehaviour change, as there were some recent changes there, but it's\nnot jumping out at me. Maybe 4f1f5a7f \"Remove fls(), use\npg_leftmost_one_pos32() instead.\" has a maths error, but I don't see\nit. Maybe e09d7a12 \"Improve speed of hash index build.\" accidentally\nreaches a new state and triggers a latent bug. Maybe a latent bug\nshowed up now just because we started testing recovery not too long\nago... but all of that still needs another backend involved. We can\nsee which blocks the startup process has pinned, 23 != 45. Hmmm.\n\n\n",
"msg_date": "Wed, 10 Aug 2022 16:38:45 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-09 20:21:19 -0700, Mark Dilger wrote:\n> > On Aug 9, 2022, at 7:26 PM, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > The relevant code triggering it:\n> >\n> > \tnewbuf = XLogInitBufferForRedo(record, 1);\n> > \t_hash_initbuf(newbuf, xlrec->new_bucket, xlrec->new_bucket,\n> > \t\t\t\t xlrec->new_bucket_flag, true);\n> > \tif (!IsBufferCleanupOK(newbuf))\n> > \t\telog(PANIC, \"hash_xlog_split_allocate_page: failed to acquire cleanup lock\");\n> >\n> > Why do we just crash if we don't already have a cleanup lock? That can't be\n> > right. Or is there supposed to be a guarantee this can't happen?\n>\n> Perhaps the code assumes that when xl_hash_split_allocate_page record was\n> written, the new_bucket field referred to an unused page, and so during\n> replay it should also refer to an unused page, and being unused, that nobody\n> will have it pinned. But at least in heap we sometimes pin unused pages\n> just long enough to examine them and to see that they are unused. Maybe\n> something like that is happening here?\n\nI don't think it's a safe assumption that nobody would hold a pin on such a\npage during recovery. While not the case here, somebody else could have used\npg_prewarm to read it in.\n\nBut also, the checkpointer or bgwriter could have it temporarily pinned, to\nwrite it out, or another backend could try to write it out as a victim buffer\nand have it temporarily pinned.\n\n\nstatic int\nSyncOneBuffer(int buf_id, bool skip_recently_used, WritebackContext *wb_context)\n{\n...\n\t/*\n\t * Pin it, share-lock it, write it. 
(FlushBuffer will do nothing if the\n\t * buffer is clean by the time we've locked it.)\n\t */\n\tPinBuffer_Locked(bufHdr);\n\tLWLockAcquire(BufferDescriptorGetContentLock(bufHdr), LW_SHARED);\n\n\nAs you can see we acquire a pin without holding a lock on the page (and that\ncan't be changed!).\n\n\nI assume this is trying to defend against some sort of deadlock by not\nactually getting a cleanup lock (by passing get_cleanup_lock = true to\nXLogReadBufferForRedoExtended()).\n\nI don't think it's possible to rely on a dirty page to never be pinned by\nanother backend. All you can rely on with a cleanup lock is that there's no\n*prior* references to the buffer, and thus it's safe to reorganize the buffer,\nbecause the pin-holder hasn't yet gotten a lock on the page.\n\n\n> I'd be curious to see the count returned by\n> BUF_STATE_GET_REFCOUNT(LockBufHdr(newbuf)) right before this panic. If it's\n> just 1, then it's not another backend, but our own, and we'd want to debug\n> why we're pinning the same page twice (or more) while replaying wal.\n\nThis was the first time in a couple hundred runs on that I have seen this, so\nI don't think it's that easily debuggable for me.\n\n\n> Otherwise, maybe it's a race condition with some other process that\n> transiently pins a buffer and occasionally causes this code to panic?\n\nAs pointed out above, it's legal to have a transient pin on a page, so this\njust looks like a bad assumption in the hash code to me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 9 Aug 2022 22:28:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 5:28 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't think it's a safe assumption that nobody would hold a pin on such a\n> page during recovery. While not the case here, somebody else could have used\n> pg_prewarm to read it in.\n>\n> But also, the checkpointer or bgwriter could have it temporarily pinned, to\n> write it out, or another backend could try to write it out as a victim buffer\n> and have it temporarily pinned.\n\nRight, of course. So it's just that hash indexes didn't get xlog'd\nuntil 2017, and still aren't very popular, and then recovery didn't\nget regression tested until 2021, so nobody ever hit it.\n\n\n",
"msg_date": "Wed, 10 Aug 2022 17:35:09 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 5:35 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> 2021\n\nOr, rather, 14 days into 2022 :-)\n\n\n",
"msg_date": "Wed, 10 Aug 2022 17:36:41 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 10:58 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-08-09 20:21:19 -0700, Mark Dilger wrote:\n> > > On Aug 9, 2022, at 7:26 PM, Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > The relevant code triggering it:\n> > >\n> > > newbuf = XLogInitBufferForRedo(record, 1);\n> > > _hash_initbuf(newbuf, xlrec->new_bucket, xlrec->new_bucket,\n> > > xlrec->new_bucket_flag, true);\n> > > if (!IsBufferCleanupOK(newbuf))\n> > > elog(PANIC, \"hash_xlog_split_allocate_page: failed to acquire cleanup lock\");\n> > >\n> > > Why do we just crash if we don't already have a cleanup lock? That can't be\n> > > right. Or is there supposed to be a guarantee this can't happen?\n> >\n> > Perhaps the code assumes that when xl_hash_split_allocate_page record was\n> > written, the new_bucket field referred to an unused page, and so during\n> > replay it should also refer to an unused page, and being unused, that nobody\n> > will have it pinned. But at least in heap we sometimes pin unused pages\n> > just long enough to examine them and to see that they are unused. Maybe\n> > something like that is happening here?\n>\n> I don't think it's a safe assumption that nobody would hold a pin on such a\n> page during recovery. While not the case here, somebody else could have used\n> pg_prewarm to read it in.\n>\n> But also, the checkpointer or bgwriter could have it temporarily pinned, to\n> write it out, or another backend could try to write it out as a victim buffer\n> and have it temporarily pinned.\n>\n>\n> static int\n> SyncOneBuffer(int buf_id, bool skip_recently_used, WritebackContext *wb_context)\n> {\n> ...\n> /*\n> * Pin it, share-lock it, write it. 
(FlushBuffer will do nothing if the\n> * buffer is clean by the time we've locked it.)\n> */\n> PinBuffer_Locked(bufHdr);\n> LWLockAcquire(BufferDescriptorGetContentLock(bufHdr), LW_SHARED);\n>\n>\n> As you can see we acquire a pin without holding a lock on the page (and that\n> can't be changed!).\n>\n\nI think this could be the probable reason for failure though I didn't\ntry to debug/reproduce this yet. AFAIU, this is possible during\nrecovery/replay of WAL record XLOG_HASH_SPLIT_ALLOCATE_PAGE as via\nXLogReadBufferForRedoExtended, we can mark the buffer dirty while\nrestoring from full page image. OTOH, because during normal operation\nwe didn't mark the page dirty SyncOneBuffer would have skipped it due\nto check (if (!(buf_state & BM_VALID) || !(buf_state & BM_DIRTY))).\n\n>\n> I assume this is trying to defend against some sort of deadlock by not\n> actually getting a cleanup lock (by passing get_cleanup_lock = true to\n> XLogReadBufferForRedoExtended()).\n>\n\nIIRC, this is just following what we do during normal operation and\nbased on the theory that the meta-page is not updated yet so no\nbackend will access it. I think we can do what you wrote unless there\nis some other reason behind this failure.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 10 Aug 2022 14:52:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 12:39 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here's an email about that:\n>\n> https://www.postgresql.org/message-id/CAE9k0P=OXww6RQCGrmDNa8=L3EeB01SGbYuP23y-qZJ=4td38Q@mail.gmail.com\n\nHmm. If I'm reading that email correctly, it indicates that I noticed\nthis problem before commit and asked for it to be changed, but then\nfor some reason it wasn't changed and I still committed it.\n\nI can't immediately think of a reason why it wouldn't be safe to\ninsist on acquiring a cleanup lock there.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Aug 2022 09:09:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 2:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 10, 2022 at 10:58 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2022-08-09 20:21:19 -0700, Mark Dilger wrote:\n> > > > On Aug 9, 2022, at 7:26 PM, Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > The relevant code triggering it:\n> > > >\n> > > > newbuf = XLogInitBufferForRedo(record, 1);\n> > > > _hash_initbuf(newbuf, xlrec->new_bucket, xlrec->new_bucket,\n> > > > xlrec->new_bucket_flag, true);\n> > > > if (!IsBufferCleanupOK(newbuf))\n> > > > elog(PANIC, \"hash_xlog_split_allocate_page: failed to acquire cleanup lock\");\n> > > >\n> > > > Why do we just crash if we don't already have a cleanup lock? That can't be\n> > > > right. Or is there supposed to be a guarantee this can't happen?\n> > >\n> > > Perhaps the code assumes that when xl_hash_split_allocate_page record was\n> > > written, the new_bucket field referred to an unused page, and so during\n> > > replay it should also refer to an unused page, and being unused, that nobody\n> > > will have it pinned. But at least in heap we sometimes pin unused pages\n> > > just long enough to examine them and to see that they are unused. Maybe\n> > > something like that is happening here?\n> >\n> > I don't think it's a safe assumption that nobody would hold a pin on such a\n> > page during recovery. While not the case here, somebody else could have used\n> > pg_prewarm to read it in.\n> >\n> > But also, the checkpointer or bgwriter could have it temporarily pinned, to\n> > write it out, or another backend could try to write it out as a victim buffer\n> > and have it temporarily pinned.\n> >\n> >\n> > static int\n> > SyncOneBuffer(int buf_id, bool skip_recently_used, WritebackContext *wb_context)\n> > {\n> > ...\n> > /*\n> > * Pin it, share-lock it, write it. 
(FlushBuffer will do nothing if the\n> > * buffer is clean by the time we've locked it.)\n> > */\n> > PinBuffer_Locked(bufHdr);\n> > LWLockAcquire(BufferDescriptorGetContentLock(bufHdr), LW_SHARED);\n> >\n> >\n> > As you can see we acquire a pin without holding a lock on the page (and that\n> > can't be changed!).\n> >\n>\n> I think this could be the probable reason for failure though I didn't\n> try to debug/reproduce this yet. AFAIU, this is possible during\n> recovery/replay of WAL record XLOG_HASH_SPLIT_ALLOCATE_PAGE as via\n> XLogReadBufferForRedoExtended, we can mark the buffer dirty while\n> restoring from full page image. OTOH, because during normal operation\n> we didn't mark the page dirty SyncOneBuffer would have skipped it due\n> to check (if (!(buf_state & BM_VALID) || !(buf_state & BM_DIRTY))).\n\nI'm trying to simulate the scenario in streaming replication using the below:\nCREATE TABLE pvactst (i INT, a INT[], p POINT) with (autovacuum_enabled = off);\nCREATE INDEX hash_pvactst ON pvactst USING hash (i);\nINSERT INTO pvactst SELECT i, array[1,2,3], point(i, i+1) FROM\ngenerate_series(1,1000) i;\n\nWith the above scenario, it will be able to replay allocation of page\nfor split operation. I will slightly change the above statements and\ntry to debug and see if we can make the background writer process to\npin this buffer and simulate the scenario. I will post my findings\nonce I'm done with the analysis.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 11 Aug 2022 22:06:38 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-10 14:52:36 +0530, Amit Kapila wrote:\n> I think this could be the probable reason for failure though I didn't\n> try to debug/reproduce this yet. AFAIU, this is possible during\n> recovery/replay of WAL record XLOG_HASH_SPLIT_ALLOCATE_PAGE as via\n> XLogReadBufferForRedoExtended, we can mark the buffer dirty while\n> restoring from full page image. OTOH, because during normal operation\n> we didn't mark the page dirty SyncOneBuffer would have skipped it due\n> to check (if (!(buf_state & BM_VALID) || !(buf_state & BM_DIRTY))).\n\nI think there might still be short-lived references from other paths, even if\nnot marked dirty, but it isn't realy important.\n\n\n> > I assume this is trying to defend against some sort of deadlock by not\n> > actually getting a cleanup lock (by passing get_cleanup_lock = true to\n> > XLogReadBufferForRedoExtended()).\n> >\n> \n> IIRC, this is just following what we do during normal operation and\n> based on the theory that the meta-page is not updated yet so no\n> backend will access it. I think we can do what you wrote unless there\n> is some other reason behind this failure.\n\nWell, it's not really the same if you silently continue in normal operation\nand PANIC during recovery... If it's an optional operation the tiny race\naround not getting the cleanup lock is fine, but it's a totally different\nstory during recovery.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Aug 2022 14:12:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 1:28 AM Andres Freund <andres@anarazel.de> wrote:\n> I assume this is trying to defend against some sort of deadlock by not\n> actually getting a cleanup lock (by passing get_cleanup_lock = true to\n> XLogReadBufferForRedoExtended()).\n\nI had that thought too, but I don't *think* it's the case. This\nfunction acquires a lock on the oldest bucket page, then on the new\nbucket page. We could deadlock if someone who holds a pin on the new\nbucket page tries to take a content lock on the old bucket page. But\nwho would do that? The new bucket page isn't yet linked from the\nmetapage at this point, so no scan should do that. There can be no\nconcurrent writers during replay. I think that if someone else has the\nnew page pinned they probably should not be taking content locks on\nother buffers at the same time.\n\nSo maybe we can just apply something like the attached.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 16 Aug 2022 16:46:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I had that thought too, but I don't *think* it's the case. This\n> function acquires a lock on the oldest bucket page, then on the new\n> bucket page. We could deadlock if someone who holds a pin on the new\n> bucket page tries to take a content lock on the old bucket page. But\n> who would do that? The new bucket page isn't yet linked from the\n> metapage at this point, so no scan should do that. There can be no\n> concurrent writers during replay. I think that if someone else has the\n> new page pinned they probably should not be taking content locks on\n> other buffers at the same time.\n\nAgreed, the core code shouldn't do that, but somebody doing random stuff\nwith pageinspect functions could probably make a query do this.\nSee [1]; unless we're going to reject that bug with \"don't do that\",\nI'm not too comfortable with this line of reasoning.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/17568-ef121b956ec1559c%40postgresql.org\n\n\n",
"msg_date": "Tue, 16 Aug 2022 17:02:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 5:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I had that thought too, but I don't *think* it's the case. This\n> > function acquires a lock on the oldest bucket page, then on the new\n> > bucket page. We could deadlock if someone who holds a pin on the new\n> > bucket page tries to take a content lock on the old bucket page. But\n> > who would do that? The new bucket page isn't yet linked from the\n> > metapage at this point, so no scan should do that. There can be no\n> > concurrent writers during replay. I think that if someone else has the\n> > new page pinned they probably should not be taking content locks on\n> > other buffers at the same time.\n>\n> Agreed, the core code shouldn't do that, but somebody doing random stuff\n> with pageinspect functions could probably make a query do this.\n> See [1]; unless we're going to reject that bug with \"don't do that\",\n> I'm not too comfortable with this line of reasoning.\n\nI don't see the connection. The problem there has to do with bypassing\nshared buffers, but this operation isn't bypassing shared buffers.\n\nWhat sort of random things would someone do with pageinspect functions\nthat would hold buffer pins on one buffer while locking another one?\nThe functions in hashfuncs.c don't even seem like they would access\nmultiple buffers in total, let alone at overlapping times. And I don't\nthink that a query pageinspect could realistically be suspended while\nholding a buffer pin either. If you wrapped it in a cursor it'd be\nsuspended before or after accessing any given buffer, not right in the\nmiddle of that operation.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Aug 2022 19:44:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> What sort of random things would someone do with pageinspect functions\n> that would hold buffer pins on one buffer while locking another one?\n> The functions in hashfuncs.c don't even seem like they would access\n> multiple buffers in total, let alone at overlapping times. And I don't\n> think that a query pageinspect could realistically be suspended while\n> holding a buffer pin either. If you wrapped it in a cursor it'd be\n> suspended before or after accessing any given buffer, not right in the\n> middle of that operation.\n\npin != access. Unless things have changed really drastically since\nI last looked, a seqscan will sit on a buffer pin throughout the\nseries of fetches from a single page.\n\nAdmittedly, that's about *heap* page pins while indexscans have\ndifferent rules. But I recall that btrees at least use persistent\npins as well.\n\nIt may be that there is indeed no way to make this happen with\navailable SQL tools. But I wouldn't put a lot of money on that,\nand even less that it'll stay true in the future.\n\nHaving said that, you're right that this is qualitatively different\nfrom the other bug, in that this is a deadlock not apparent data\ncorruption. However, IIUC it's an LWLock deadlock, which we don't\nhandle all that nicely.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Aug 2022 19:55:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-16 17:02:27 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I had that thought too, but I don't *think* it's the case. This\n> > function acquires a lock on the oldest bucket page, then on the new\n> > bucket page. We could deadlock if someone who holds a pin on the new\n> > bucket page tries to take a content lock on the old bucket page. But\n> > who would do that? The new bucket page isn't yet linked from the\n> > metapage at this point, so no scan should do that. There can be no\n> > concurrent writers during replay. I think that if someone else has the\n> > new page pinned they probably should not be taking content locks on\n> > other buffers at the same time.\n> \n> Agreed, the core code shouldn't do that, but somebody doing random stuff\n> with pageinspect functions could probably make a query do this.\n> See [1]; unless we're going to reject that bug with \"don't do that\",\n> I'm not too comfortable with this line of reasoning.\n\nI don't think we can defend against lwlock deadlocks where somebody doesn't\nfollow the AM's deadlock avoidance strategy. I.e. it's fine to pin and lock\npages from some AM without knowing that AM's rules, as long as you only block\nwhile holding a pin/lock of a single page. But it is *not* ok to block waiting\nfor an lwlock / pin while already holding an lwlock / pin on some other\nbuffer. If we were concerned about this we'd have to basically throw many of\nour multi-page operations that rely on lock order logic out.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Aug 2022 17:38:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-16 19:55:18 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > What sort of random things would someone do with pageinspect functions\n> > that would hold buffer pins on one buffer while locking another one?\n> > The functions in hashfuncs.c don't even seem like they would access\n> > multiple buffers in total, let alone at overlapping times. And I don't\n> > think that a query pageinspect could realistically be suspended while\n> > holding a buffer pin either. If you wrapped it in a cursor it'd be\n> > suspended before or after accessing any given buffer, not right in the\n> > middle of that operation.\n> \n> pin != access. Unless things have changed really drastically since\n> I last looked, a seqscan will sit on a buffer pin throughout the\n> series of fetches from a single page.\n\nThat's still the case. But for heap that shouldn't be a problem, because we'll\nnever try to take a cleanup lock while holding another page locked (nor even\nanother heap page pinned, I think).\n\n\nI find it *highly* suspect that hash needs to acquire a cleanup lock while\nholding another buffer locked. The recovery aspect alone makes that seem quite\nunwise. Even if there's possibly no deadlock here for some reason or another.\n\n\nLooking at the non-recovery code makes me even more suspicious:\n\n\t/*\n\t * Physically allocate the new bucket's primary page. We want to do this\n\t * before changing the metapage's mapping info, in case we can't get the\n\t * disk space. Ideally, we don't need to check for cleanup lock on new\n\t * bucket as no other backend could find this bucket unless meta page is\n\t * updated. 
However, it is good to be consistent with old bucket locking.\n\t */\n\tbuf_nblkno = _hash_getnewbuf(rel, start_nblkno, MAIN_FORKNUM);\n\tif (!IsBufferCleanupOK(buf_nblkno))\n\t{\n\t\t_hash_relbuf(rel, buf_oblkno);\n\t\t_hash_relbuf(rel, buf_nblkno);\n\t\tgoto fail;\n\t}\n\n\n_hash_getnewbuf() calls _hash_pageinit() which calls PageInit(), which\nmemset(0)s the whole page. What does it even mean to check whether you\neffectively have a cleanup lock after you zeroed out the page?\n\nReading the README and the comment above makes me wonder if this whole cleanup\nlock business here is just cargo culting and could be dropped?\n\n\n\n> Admittedly, that's about *heap* page pins while indexscans have\n> different rules. But I recall that btrees at least use persistent\n> pins as well.\n\nI think that's been changed, although not in an unproblematic way.\n\n\n> Having said that, you're right that this is qualitatively different\n> from the other bug, in that this is a deadlock not apparent data\n> corruption. However, IIUC it's an LWLock deadlock, which we don't\n> handle all that nicely.\n\nTheoretically the startup side could be interrupted. Except that we don't\naccept the startup process dying...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Aug 2022 17:57:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 6:27 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-08-16 19:55:18 -0400, Tom Lane wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > What sort of random things would someone do with pageinspect functions\n> > > that would hold buffer pins on one buffer while locking another one?\n> > > The functions in hashfuncs.c don't even seem like they would access\n> > > multiple buffers in total, let alone at overlapping times. And I don't\n> > > think that a query pageinspect could realistically be suspended while\n> > > holding a buffer pin either. If you wrapped it in a cursor it'd be\n> > > suspended before or after accessing any given buffer, not right in the\n> > > middle of that operation.\n> >\n> > pin != access. Unless things have changed really drastically since\n> > I last looked, a seqscan will sit on a buffer pin throughout the\n> > series of fetches from a single page.\n>\n> That's still the case. But for heap that shouldn't be a problem, because we'll\n> never try to take a cleanup lock while holding another page locked (nor even\n> another heap page pinned, I think).\n>\n>\n> I find it *highly* suspect that hash needs to acquire a cleanup lock while\n> holding another buffer locked. The recovery aspect alone makes that seem quite\n> unwise. Even if there's possibly no deadlock here for some reason or another.\n>\n>\n> Looking at the non-recovery code makes me even more suspicious:\n>\n> /*\n> * Physically allocate the new bucket's primary page. We want to do this\n> * before changing the metapage's mapping info, in case we can't get the\n> * disk space. Ideally, we don't need to check for cleanup lock on new\n> * bucket as no other backend could find this bucket unless meta page is\n> * updated. 
However, it is good to be consistent with old bucket locking.\n> */\n> buf_nblkno = _hash_getnewbuf(rel, start_nblkno, MAIN_FORKNUM);\n> if (!IsBufferCleanupOK(buf_nblkno))\n> {\n> _hash_relbuf(rel, buf_oblkno);\n> _hash_relbuf(rel, buf_nblkno);\n> goto fail;\n> }\n>\n>\n> _hash_getnewbuf() calls _hash_pageinit() which calls PageInit(), which\n> memset(0)s the whole page. What does it even mean to check whether you\n> effectively have a cleanup lock after you zeroed out the page?\n>\n> Reading the README and the comment above makes me wonder if this whole cleanup\n> lock business here is just cargo culting and could be dropped?\n>\n\nI think it is okay to not acquire a clean-up lock on the new bucket\npage both in recovery and non-recovery paths. It is primarily required\non the old bucket page to avoid concurrent scans/inserts. As mentioned\nin the comments and as per my memory serves, it is mainly for keeping\nit consistent with old bucket locking.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 17 Aug 2022 10:18:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 8:38 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't think we can defend against lwlock deadlocks where somebody doesn't\n> follow the AM's deadlock avoidance strategy.\n\nThat's a good way of putting it. Tom seems to be postulating that\nmaybe someone can use random tools that exist to take buffer locks and\npins in arbitrary order, and if that is true then you can make any AM\ndeadlock. I think it isn't true, though, and I think if it were true\nthe right fix would be to remove the tools that are letting people do\nthat.\n\nThere's also zero evidence that this was ever intended as a deadlock\navoidance maneuver. I think that we are only hypothesizing that it was\nintended that way because the code looks weird. But I think the email\ndiscussion shows that I thought it was wrong at the time it was\ncommitted, and just missed the fact that the final version of the\npatch hadn't fixed it. And if it *were* a deadlock avoidance maneuver\nit would still be pretty broken, because it would make the startup\nprocess error out and the whole system go down.\n\nRegarding the question of whether we need a cleanup lock on the new\nbucket I am not really seeing the advantage of going down that path.\nSimply fixing this code to take a cleanup lock instead of hoping that\nit always gets one by accident is low risk and should fix the observed\nproblem. Getting rid of the cleanup lock will be more invasive and I'd\nlike to see some evidence that it's a necessary step before we take\nthe risk of breaking things.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Aug 2022 08:25:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-17 10:18:14 +0530, Amit Kapila wrote:\n> > Looking at the non-recovery code makes me even more suspicious:\n> >\n> > /*\n> > * Physically allocate the new bucket's primary page. We want to do this\n> > * before changing the metapage's mapping info, in case we can't get the\n> > * disk space. Ideally, we don't need to check for cleanup lock on new\n> > * bucket as no other backend could find this bucket unless meta page is\n> > * updated. However, it is good to be consistent with old bucket locking.\n> > */\n> > buf_nblkno = _hash_getnewbuf(rel, start_nblkno, MAIN_FORKNUM);\n> > if (!IsBufferCleanupOK(buf_nblkno))\n> > {\n> > _hash_relbuf(rel, buf_oblkno);\n> > _hash_relbuf(rel, buf_nblkno);\n> > goto fail;\n> > }\n> >\n> >\n> > _hash_getnewbuf() calls _hash_pageinit() which calls PageInit(), which\n> > memset(0)s the whole page. What does it even mean to check whether you\n> > effectively have a cleanup lock after you zeroed out the page?\n> >\n> > Reading the README and the comment above makes me wonder if this whole cleanup\n> > lock business here is just cargo culting and could be dropped?\n> >\n> \n> I think it is okay to not acquire a clean-up lock on the new bucket\n> page both in recovery and non-recovery paths. It is primarily required\n> on the old bucket page to avoid concurrent scans/inserts. As mentioned\n> in the comments and as per my memory serves, it is mainly for keeping\n> it consistent with old bucket locking.\n\nIt's not keeping it consistent with bucket locking to zero out a page before\ngetting a cleanup lock, hopefully at least. This code is just broken on\nmultiple fronts, and consistency isn't a defense.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Aug 2022 11:36:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-17 08:25:06 -0400, Robert Haas wrote:\n> Regarding the question of whether we need a cleanup lock on the new\n> bucket I am not really seeing the advantage of going down that path.\n> Simply fixing this code to take a cleanup lock instead of hoping that\n> it always gets one by accident is low risk and should fix the observed\n> problem. Getting rid of the cleanup lock will be more invasive and I'd\n> like to see some evidence that it's a necessary step before we take\n> the risk of breaking things.\n\nGiven that the cleanup locks in question are \"taken\" *after* re-initializing\nthe page, I'm doubtful that's a sane path forward. It seems quite likely to\nmislead somebody to rely on it working as a cleanup lock in the future.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Aug 2022 11:45:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 2:45 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-08-17 08:25:06 -0400, Robert Haas wrote:\n> > Regarding the question of whether we need a cleanup lock on the new\n> > bucket I am not really seeing the advantage of going down that path.\n> > Simply fixing this code to take a cleanup lock instead of hoping that\n> > it always gets one by accident is low risk and should fix the observed\n> > problem. Getting rid of the cleanup lock will be more invasive and I'd\n> > like to see some evidence that it's a necessary step before we take\n> > the risk of breaking things.\n>\n> Given that the cleanup locks in question are \"taken\" *after* re-initializing\n> the page, I'm doubtful that's a sane path forward. It seems quite likely to\n> mislead somebody to rely on it working as a cleanup lock in the future.\n\nThere's not a horde of people lining up to work on the hash index\ncode, but if you feel like writing and testing the more invasive fix,\nI'm not really going to fight you over it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Aug 2022 15:21:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-17 15:21:55 -0400, Robert Haas wrote:\n> On Wed, Aug 17, 2022 at 2:45 PM Andres Freund <andres@anarazel.de> wrote:\n> > Given that the cleanup locks in question are \"taken\" *after* re-initializing\n> > the page, I'm doubtful that's a sane path forward. It seems quite likely to\n> > mislead somebody to rely on it working as a cleanup lock in the future.\n>\n> There's not a horde of people lining up to work on the hash index\n> code, but if you feel like writing and testing the more invasive fix,\n> I'm not really going to fight you over it.\n\nMy problem is that the code right now is an outright lie. At the absolute very\nleast this code needs a big honking \"we check if we have a cleanup lock here,\nbut that's just for show, because WE ALREADY OVERWROTE THE WHOLE PAGE\".\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Aug 2022 12:30:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 5:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Regarding the question of whether we need a cleanup lock on the new\n> bucket I am not really seeing the advantage of going down that path.\n> Simply fixing this code to take a cleanup lock instead of hoping that\n> it always gets one by accident is low risk and should fix the observed\n> problem. Getting rid of the cleanup lock will be more invasive and I'd\n> like to see some evidence that it's a necessary step before we take\n> the risk of breaking things.\n>\n\nThe patch proposed by you is sufficient to fix the observed issue.\nBTW, we are able to reproduce the issue and your patch fixed it. The\nidea is to ensure that checkpointer only tries to sync the buffer for\nthe new bucket, otherwise, it will block while acquiring the lock on\nthe old bucket buffer in SyncOneBuffer because the replay process\nwould already have it and we won't be able to hit required condition.\n\nTo simulate it, we need to stop the replay before we acquire the lock\nfor the old bucket. Then, let checkpointer advance the buf_id beyond\nthe buffer which we will get for the old bucket (in the place where it\nloops over all buffers, and mark the ones that need to be written with\nBM_CHECKPOINT_NEEDED.). After that let the replay process proceed till\nthe point where it checks for the clean-up lock on the new bucket.\nNext, let the checkpointer advance to sync the buffer corresponding to\nthe new bucket buffer. This will reproduce the required condition.\n\nWe have tried many other combinations but couldn't able to hit it. For\nexample, we were not able to generate it via bgwriter because it\nexpects the buffer to have zero usage and ref count which is not\npossible during the replay in hash_xlog_split_allocate_page() as we\nalready have increased the usage count for the new bucket buffer\nbefore checking the cleanup lock on it.\n\nI agree with you that getting rid of the clean-up lock on the new\nbucket is a more invasive patch and should be done separately if\nrequired. Yesterday, I have done a brief analysis and I think that is\npossible but it doesn't seem to be a good idea to backpatch it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 18 Aug 2022 15:17:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
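The failure mode Amit reproduces above — a transient pin held by the checkpointer (via SyncOneBuffer) making the *conditional* cleanup-lock acquisition on the new bucket fail during replay — and the proposed fix of simply waiting instead of PANICing can be sketched in a self-contained C model. Everything here is invented for illustration: `ToyBuf`, the functions, and the single-threaded "ticks" stand-in for the checkpointer eventually dropping its pin are not the actual buffer-manager API.

```c
/* Toy model: replay wants a cleanup lock on the new bucket while the
 * checkpointer transiently holds a second pin on it. */
typedef struct ToyBuf {
    int refcount;   /* 1 = only our pin; >1 = someone else, e.g. checkpointer */
} ToyBuf;

/* Old behaviour: conditional acquisition; -1 stands in for the
 * "failed to acquire cleanup lock" PANIC. */
static int conditional_cleanup(ToyBuf *b) {
    return (b->refcount == 1) ? 0 : -1;
}

/* Fixed behaviour: wait until the transient pin is dropped. The caller
 * supplies how many "ticks" pass before the checkpointer unpins; in the
 * real system the wait is bounded because such pins are short-lived. */
static int wait_for_cleanup(ToyBuf *b, int ticks_until_unpin) {
    while (b->refcount != 1) {
        if (ticks_until_unpin-- <= 0)
            b->refcount = 1;    /* checkpointer's SyncOneBuffer pin released */
    }
    return 0;
}
```

With a second pin present, the old path fails outright while the fixed path just outwaits the pin — which is the substance of the first patch.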
{
"msg_contents": "Hi,\n\nThis issue does occasionally happen in CI, as e.g. noted in this thread:\nhttps://www.postgresql.org/message-id/20220930185345.GD6256%40telsasoft.com\n\nOn 2022-08-18 15:17:47 +0530, Amit Kapila wrote:\n> I agree with you that getting rid of the clean-up lock on the new\n> bucket is a more invasive patch and should be done separately if\n> required. Yesterday, I have done a brief analysis and I think that is\n> possible but it doesn't seem to be a good idea to backpatch it.\n\nMy problem with this approach is that the whole cleanup lock is hugely\nmisleading as-is. As I noted in\nhttps://www.postgresql.org/message-id/20220817193032.z35vdjhpzkgldrd3%40awork3.anarazel.de\nwe take the cleanup lock *after* re-initializing the page. Thereby\ncompletely breaking the properties that a cleanup lock normally tries to\nguarantee.\n\nEven if that were to achieve something useful (doubtful in this case),\nit'd need a huge comment explaining what's going on.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 30 Sep 2022 12:05:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Sat, Oct 1, 2022 at 12:35 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> This issue does occasionally happen in CI, as e.g. noted in this thread:\n> https://www.postgresql.org/message-id/20220930185345.GD6256%40telsasoft.com\n>\n> On 2022-08-18 15:17:47 +0530, Amit Kapila wrote:\n> > I agree with you that getting rid of the clean-up lock on the new\n> > bucket is a more invasive patch and should be done separately if\n> > required. Yesterday, I have done a brief analysis and I think that is\n> > possible but it doesn't seem to be a good idea to backpatch it.\n>\n> My problem with this approach is that the whole cleanup lock is hugely\n> misleading as-is. As I noted in\n> https://www.postgresql.org/message-id/20220817193032.z35vdjhpzkgldrd3%40awork3.anarazel.de\n> we take the cleanup lock *after* re-initializing the page. Thereby\n> completely breaking the properties that a cleanup lock normally tries to\n> guarantee.\n>\n> Even if that were to achieve something useful (doubtful in this case),\n> it'd need a huge comment explaining what's going on.\n>\n\nAttached are two patches. The first patch is what Robert has proposed\nwith some changes in comments to emphasize the fact that cleanup lock\non the new bucket is just to be consistent with the old bucket page\nlocking as we are initializing it just before checking for cleanup\nlock. In the second patch, I removed the acquisition of cleanup lock\non the new bucket page and changed the comments/README accordingly.\n\nI think we can backpatch the first patch and the second patch can be\njust a HEAD-only patch. Does that sound reasonable to you?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 6 Oct 2022 12:44:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Thu, 6 Oct 2022 at 12:44, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Oct 1, 2022 at 12:35 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > This issue does occasionally happen in CI, as e.g. noted in this thread:\n> > https://www.postgresql.org/message-id/20220930185345.GD6256%40telsasoft.com\n> >\n> > On 2022-08-18 15:17:47 +0530, Amit Kapila wrote:\n> > > I agree with you that getting rid of the clean-up lock on the new\n> > > bucket is a more invasive patch and should be done separately if\n> > > required. Yesterday, I have done a brief analysis and I think that is\n> > > possible but it doesn't seem to be a good idea to backpatch it.\n> >\n> > My problem with this approach is that the whole cleanup lock is hugely\n> > misleading as-is. As I noted in\n> > https://www.postgresql.org/message-id/20220817193032.z35vdjhpzkgldrd3%40awork3.anarazel.de\n> > we take the cleanup lock *after* re-initializing the page. Thereby\n> > completely breaking the properties that a cleanup lock normally tries to\n> > guarantee.\n> >\n> > Even if that were to achieve something useful (doubtful in this case),\n> > it'd need a huge comment explaining what's going on.\n> >\n>\n> Attached are two patches. The first patch is what Robert has proposed\n> with some changes in comments to emphasize the fact that cleanup lock\n> on the new bucket is just to be consistent with the old bucket page\n> locking as we are initializing it just before checking for cleanup\n> lock. In the second patch, I removed the acquisition of cleanup lock\n> on the new bucket page and changed the comments/README accordingly.\n>\n> I think we can backpatch the first patch and the second patch can be\n> just a HEAD-only patch. Does that sound reasonable to you?\n\nThanks for the patches.\nI have verified that the issue is fixed using a manual test upto\nREL_10_STABLE version and found it to be working fine.\n\nI have added code to print the old buffer and new buffer values when\nboth old buffer and new buffer will get dirtied. Then I had executed\nthe following test and note down the old buffer and new buffer value\nfrom the log file:\nCREATE TABLE pvactst (i INT, a INT[], p POINT) with (autovacuum_enabled = off);\nCREATE INDEX hash_pvactst ON pvactst USING hash (i);\ncreate table t1(c1 int);\nINSERT INTO pvactst SELECT i, array[1,2,3], point(i, i+1) FROM\ngenerate_series(1,1000) i;\nINSERT INTO pvactst SELECT i, array[1,2,3], point(i, i+1) FROM\ngenerate_series(1,1000) i;\nINSERT INTO pvactst SELECT i, array[1,2,3], point(i, i+1) FROM\ngenerate_series(1,1000) i;\n\nIn my environment, the issue will occur when oldbuf is 38 and newbuf is 60.\n\nOnce we know the old buffer and new buffer values, we will have to\ndebug the checkpointer and recovery process to simulate the scenario.\nI used the following steps to simulate the issue in my environment:\n1) Create streaming replication setup with the following configurations:\nwal_consistency_checking = all\nshared_buffers = 128MB # min 128kB\nbgwriter_lru_maxpages = 0 # max buffers written/round, 0 disables\ncheckpoint_timeout = 30s # range 30s-1d\n2) Execute the following in master node:\nCREATE TABLE pvactst (i INT, a INT[], p POINT) with (autovacuum_enabled = off);\nCREATE INDEX hash_pvactst ON pvactst USING hash (i);\n3) Hold checkpointer process of standby instance at BufferSync while debugging.\n4) Execute the following in master node:\ncreate table t1(c1 int); -- This is required so that the old buffer\nvalue is not dirty in checkpoint process. (If old buffer is dirty then\nwe will not be able to sync the new buffer as checkpointer will wait\nwhile trying to acquire the lock on old buffer).\n5) Make checkpoint process to check the buffers up to old buffer + 1.\nIn our case it should cross 38.\n6) Hold recovery process at\nhash_xlog_split_allocate_page->IsBufferCleanupOK (approximately line\nhash_xlog.c:357) while executing the following for the last insert in\nthe master node:\nINSERT INTO pvactst SELECT i, array[1,2,3], point(i, i+1) FROM\ngenerate_series(1,1000) i;\nINSERT INTO pvactst SELECT i, array[1,2,3], point(i, i+1) FROM\ngenerate_series(1,1000) i;\nINSERT INTO pvactst SELECT i, array[1,2,3], point(i, i+1) FROM\ngenerate_series(1,1000) i;\n7) Continue the checkpointer process and make it proceed to\nSyncOneBuffer with buf_id = 60(newbuf value that was noted from the\nearlier execution) and let it proceed up to PinBuffer_Locked(bufHdr);\n8) Continue the recovery process will reproduce the PANIC scenario.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 12 Oct 2022 16:16:19 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Wed, 12 Oct 2022 at 16:16, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, 6 Oct 2022 at 12:44, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Oct 1, 2022 at 12:35 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > This issue does occasionally happen in CI, as e.g. noted in this thread:\n> > > https://www.postgresql.org/message-id/20220930185345.GD6256%40telsasoft.com\n> > >\n> > > On 2022-08-18 15:17:47 +0530, Amit Kapila wrote:\n> > > > I agree with you that getting rid of the clean-up lock on the new\n> > > > bucket is a more invasive patch and should be done separately if\n> > > > required. Yesterday, I have done a brief analysis and I think that is\n> > > > possible but it doesn't seem to be a good idea to backpatch it.\n> > >\n> > > My problem with this approach is that the whole cleanup lock is hugely\n> > > misleading as-is. As I noted in\n> > > https://www.postgresql.org/message-id/20220817193032.z35vdjhpzkgldrd3%40awork3.anarazel.de\n> > > we take the cleanup lock *after* re-initializing the page. Thereby\n> > > completely breaking the properties that a cleanup lock normally tries to\n> > > guarantee.\n> > >\n> > > Even if that were to achieve something useful (doubtful in this case),\n> > > it'd need a huge comment explaining what's going on.\n> > >\n> >\n> > Attached are two patches. The first patch is what Robert has proposed\n> > with some changes in comments to emphasize the fact that cleanup lock\n> > on the new bucket is just to be consistent with the old bucket page\n> > locking as we are initializing it just before checking for cleanup\n> > lock. In the second patch, I removed the acquisition of cleanup lock\n> > on the new bucket page and changed the comments/README accordingly.\n> >\n> > I think we can backpatch the first patch and the second patch can be\n> > just a HEAD-only patch. Does that sound reasonable to you?\n>\n> Thanks for the patches.\n> I have verified that the issue is fixed using a manual test upto\n> REL_10_STABLE version and found it to be working fine.\n\nJust to clarify, I have verified that the first patch with Head,\nREL_15_STABLE, REL_14_STABLE, REL_13_STABLE, REL_12_STABLE,\nREL_11_STABLE and REL_10_STABLE branch fixes the issue. Also verified\nthat the first and second patch with Head branch fixes the issue.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 12 Oct 2022 17:22:54 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Wed, Oct 12, 2022 at 4:16 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, 6 Oct 2022 at 12:44, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Oct 1, 2022 at 12:35 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > This issue does occasionally happen in CI, as e.g. noted in this thread:\n> > > https://www.postgresql.org/message-id/20220930185345.GD6256%40telsasoft.com\n> > >\n> > > On 2022-08-18 15:17:47 +0530, Amit Kapila wrote:\n> > > > I agree with you that getting rid of the clean-up lock on the new\n> > > > bucket is a more invasive patch and should be done separately if\n> > > > required. Yesterday, I have done a brief analysis and I think that is\n> > > > possible but it doesn't seem to be a good idea to backpatch it.\n> > >\n> > > My problem with this approach is that the whole cleanup lock is hugely\n> > > misleading as-is. As I noted in\n> > > https://www.postgresql.org/message-id/20220817193032.z35vdjhpzkgldrd3%40awork3.anarazel.de\n> > > we take the cleanup lock *after* re-initializing the page. Thereby\n> > > completely breaking the properties that a cleanup lock normally tries to\n> > > guarantee.\n> > >\n> > > Even if that were to achieve something useful (doubtful in this case),\n> > > it'd need a huge comment explaining what's going on.\n> > >\n> >\n> > Attached are two patches. The first patch is what Robert has proposed\n> > with some changes in comments to emphasize the fact that cleanup lock\n> > on the new bucket is just to be consistent with the old bucket page\n> > locking as we are initializing it just before checking for cleanup\n> > lock. In the second patch, I removed the acquisition of cleanup lock\n> > on the new bucket page and changed the comments/README accordingly.\n> >\n> > I think we can backpatch the first patch and the second patch can be\n> > just a HEAD-only patch. Does that sound reasonable to you?\n>\n> Thanks for the patches.\n> I have verified that the issue is fixed using a manual test upto\n> REL_10_STABLE version and found it to be working fine.\n>\n\nThanks for the verification. I am planning to push the first patch\n(and backpatch it) next week (by next Tuesday) unless we have more\ncomments or Robert intends to push it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 13 Oct 2022 16:28:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-06 12:44:24 +0530, Amit Kapila wrote:\n> On Sat, Oct 1, 2022 at 12:35 AM Andres Freund <andres@anarazel.de> wrote:\n> > My problem with this approach is that the whole cleanup lock is hugely\n> > misleading as-is. As I noted in\n> > https://www.postgresql.org/message-id/20220817193032.z35vdjhpzkgldrd3%40awork3.anarazel.de\n> > we take the cleanup lock *after* re-initializing the page. Thereby\n> > completely breaking the properties that a cleanup lock normally tries to\n> > guarantee.\n> >\n> > Even if that were to achieve something useful (doubtful in this case),\n> > it'd need a huge comment explaining what's going on.\n> >\n>\n> Attached are two patches. The first patch is what Robert has proposed\n> with some changes in comments to emphasize the fact that cleanup lock\n> on the new bucket is just to be consistent with the old bucket page\n> locking as we are initializing it just before checking for cleanup\n> lock. In the second patch, I removed the acquisition of cleanup lock\n> on the new bucket page and changed the comments/README accordingly.\n>\n> I think we can backpatch the first patch and the second patch can be\n> just a HEAD-only patch. Does that sound reasonable to you?\n\nNot particularly, no. I don't understand how \"overwrite a page and then get a\ncleanup lock\" can sensibly be described by this comment:\n\n> +++ b/src/backend/access/hash/hashpage.c\n> @@ -807,7 +807,8 @@ restart_expand:\n> \t * before changing the metapage's mapping info, in case we can't get the\n> \t * disk space. Ideally, we don't need to check for cleanup lock on new\n> \t * bucket as no other backend could find this bucket unless meta page is\n> -\t * updated. However, it is good to be consistent with old bucket locking.\n> +\t * updated and we initialize the page just before it. However, it is just\n> +\t * to be consistent with old bucket locking.\n> \t */\n> \tbuf_nblkno = _hash_getnewbuf(rel, start_nblkno, MAIN_FORKNUM);\n> \tif (!IsBufferCleanupOK(buf_nblkno))\n\nThis is basically saying \"I am breaking basic rules of locking just to be\nconsistent\", no?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 13 Oct 2022 13:55:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 12:05 PM Andres Freund <andres@anarazel.de> wrote:\n> My problem with this approach is that the whole cleanup lock is hugely\n> misleading as-is.\n\nWhile nbtree VACUUM does use cleanup locks, they don't protect the\nindex structure itself -- it actually functions as an interlock\nagainst concurrent TID recycling, which might otherwise confuse\nin-flight index scans. That's why we need cleanup locks for VACUUM,\nbut not for index deletions, even though the physical modifications\nthat are performed to physical leaf pages are identical (the WAL\nrecords are almost identical). Clearly the use of cleanup locks is not\nreally about protecting the leaf page itself -- it's about using the\nphysical leaf page as a proxy for the heap TIDs contained therein. A\nvery narrow protocol with a very specific purpose.\n\nMore generally, cleanup locks exist to protect transient references\nthat point into a heap page. References held by one backend only. A\nTID, or a HeapTuple C pointer, or something similar. Cleanup locks are\nnot intended to protect a physical data structure in the heap, either\n-- just a reference/pointer that points to the structure. There are\nimplications for the physical page structure itself, of course, but\nthat seems secondary. The guarantees are often limited to \"never allow\nthe backend holding the pin to become utterly confused\".\n\nI am skeptical of the idea of using cleanup locks for anything more\nambitious than this. Especially in index AM code. It seems\nuncomfortably close to \"a buffer lock, but somehow also not a buffer\nlock\".\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 13 Oct 2022 17:46:25 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-13 17:46:25 -0700, Peter Geoghegan wrote:\n> On Fri, Sep 30, 2022 at 12:05 PM Andres Freund <andres@anarazel.de> wrote:\n> > My problem with this approach is that the whole cleanup lock is hugely\n> > misleading as-is.\n>\n> While nbtree VACUUM does use cleanup locks, they don't protect the\n> index structure itself -- it actually functions as an interlock\n> against concurrent TID recycling, which might otherwise confuse\n> in-flight index scans. That's why we need cleanup locks for VACUUM,\n> but not for index deletions, even though the physical modifications\n> that are performed to physical leaf pages are identical (the WAL\n> records are almost identical). Clearly the use of cleanup locks is not\n> really about protecting the leaf page itself -- it's about using the\n> physical leaf page as a proxy for the heap TIDs contained therein. A\n> very narrow protocol with a very specific purpose.\n>\n> More generally, cleanup locks exist to protect transient references\n> that point into a heap page. References held by one backend only. A\n> TID, or a HeapTuple C pointer, or something similar. Cleanup locks are\n> not intended to protect a physical data structure in the heap, either\n> -- just a reference/pointer that points to the structure. There are\n> implications for the physical page structure itself, of course, but\n> that seems secondary. The guarantees are often limited to \"never allow\n> the backend holding the pin to become utterly confused\".\n>\n> I am skeptical of the idea of using cleanup locks for anything more\n> ambitious than this. Especially in index AM code. It seems\n> uncomfortably close to \"a buffer lock, but somehow also not a buffer\n> lock\".\n\nMy point here is a lot more mundane. The code essentially does\n_hash_pageinit(), overwriting the whole page, and *then* conditionally\nacquires a cleanup lock. It simply is bogus code.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 13 Oct 2022 18:10:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Thu, Oct 13, 2022 at 6:10 PM Andres Freund <andres@anarazel.de> wrote:\n> My point here is a lot more mundane. The code essentially does\n> _hash_pageinit(), overwriting the whole page, and *then* conditionally\n> acquires a cleanup lock. It simply is bogus code.\n\nI understood that that was what you meant. It's easy to see why this\ncode is broken, but to me it seems related to having too much\nconfidence in what is possible while relying on cleanup locks. That's\njust my take.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 13 Oct 2022 18:24:27 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Fri, Oct 14, 2022 at 2:25 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> >\n> > Attached are two patches. The first patch is what Robert has proposed\n> > with some changes in comments to emphasize the fact that cleanup lock\n> > on the new bucket is just to be consistent with the old bucket page\n> > locking as we are initializing it just before checking for cleanup\n> > lock. In the second patch, I removed the acquisition of cleanup lock\n> > on the new bucket page and changed the comments/README accordingly.\n> >\n> > I think we can backpatch the first patch and the second patch can be\n> > just a HEAD-only patch. Does that sound reasonable to you?\n>\n> Not particularly, no. I don't understand how \"overwrite a page and then get a\n> cleanup lock\" can sensibly be described by this comment:\n>\n> > +++ b/src/backend/access/hash/hashpage.c\n> > @@ -807,7 +807,8 @@ restart_expand:\n> > * before changing the metapage's mapping info, in case we can't get the\n> > * disk space. Ideally, we don't need to check for cleanup lock on new\n> > * bucket as no other backend could find this bucket unless meta page is\n> > - * updated. However, it is good to be consistent with old bucket locking.\n> > + * updated and we initialize the page just before it. However, it is just\n> > + * to be consistent with old bucket locking.\n> > */\n> > buf_nblkno = _hash_getnewbuf(rel, start_nblkno, MAIN_FORKNUM);\n> > if (!IsBufferCleanupOK(buf_nblkno))\n>\n> This is basically saying \"I am breaking basic rules of locking just to be\n> consistent\", no?\n>\n\nFair point. How about something like: \"XXX Do we really need to check\nfor cleanup lock on the new bucket? Here, we initialize the page, so\nideally we don't need to perform any operation that requires such a\ncheck.\"?\n\nFeel free to suggest something better.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 14 Oct 2022 10:40:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-14 10:40:11 +0530, Amit Kapila wrote:\n> On Fri, Oct 14, 2022 at 2:25 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > >\n> > > Attached are two patches. The first patch is what Robert has proposed\n> > > with some changes in comments to emphasize the fact that cleanup lock\n> > > on the new bucket is just to be consistent with the old bucket page\n> > > locking as we are initializing it just before checking for cleanup\n> > > lock. In the second patch, I removed the acquisition of cleanup lock\n> > > on the new bucket page and changed the comments/README accordingly.\n> > >\n> > > I think we can backpatch the first patch and the second patch can be\n> > > just a HEAD-only patch. Does that sound reasonable to you?\n> >\n> > Not particularly, no. I don't understand how \"overwrite a page and then get a\n> > cleanup lock\" can sensibly be described by this comment:\n> >\n> > > +++ b/src/backend/access/hash/hashpage.c\n> > > @@ -807,7 +807,8 @@ restart_expand:\n> > > * before changing the metapage's mapping info, in case we can't get the\n> > > * disk space. Ideally, we don't need to check for cleanup lock on new\n> > > * bucket as no other backend could find this bucket unless meta page is\n> > > - * updated. However, it is good to be consistent with old bucket locking.\n> > > + * updated and we initialize the page just before it. However, it is just\n> > > + * to be consistent with old bucket locking.\n> > > */\n> > > buf_nblkno = _hash_getnewbuf(rel, start_nblkno, MAIN_FORKNUM);\n> > > if (!IsBufferCleanupOK(buf_nblkno))\n> >\n> > This is basically saying \"I am breaking basic rules of locking just to be\n> > consistent\", no?\n> >\n> \n> Fair point. How about something like: \"XXX Do we really need to check\n> for cleanup lock on the new bucket? Here, we initialize the page, so\n> ideally we don't need to perform any operation that requires such a\n> check.\"?.\n\nThis still seems to omit that the code is quite broken.\n\n> Feel free to suggest something better.\n\nHow about something like:\n\n  XXX: This code is wrong, we're overwriting the buffer before \"acquiring\" the\n  cleanup lock. Currently this is not known to have bad consequences because\n  XYZ and the fix seems a bit too risky for the backbranches.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 14 Oct 2022 11:21:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Fri, Oct 14, 2022 at 11:51 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> How about something like:\n>\n> XXX: This code is wrong, we're overwriting the buffer before \"acquiring\" the\n> cleanup lock. Currently this is not known to have bad consequences because\n> XYZ and the fix seems a bit too risky for the backbranches.\n>\n\nIt looks mostly good to me. I am slightly uncomfortable with the last\npart of the sentence: \"the fix seems a bit too risky for the\nbackbranches.\" because it will stay like that in the back branches\ncode even after we fix it in HEAD. Instead, can we directly use the\nFIXME tag like in the comments: \"FIXME: This code is wrong, we're\noverwriting the buffer before \"acquiring\" the cleanup lock. Currently,\nthis is not known to have bad consequences because no other backend\ncould find this bucket unless the meta page is updated.\"? Then, in the\ncommit message, we can use that sentence, something like: \"... While\nfixing this issue, we have observed that cleanup lock is not required\non the new bucket for the split operation as we're overwriting the\nbuffer before \"acquiring\" the cleanup lock. Currently, this is not\nknown to have bad consequences and the fix seems a bit too risky for\nthe back branches.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 15 Oct 2022 11:57:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Fri, Oct 14, 2022 at 2:21 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-10-14 10:40:11 +0530, Amit Kapila wrote:\n> > On Fri, Oct 14, 2022 at 2:25 AM Andres Freund <andres@anarazel.de> wrote:\n> > Fair point. How about something like: \"XXX Do we really need to check\n> > for cleanup lock on the new bucket? Here, we initialize the page, so\n> > ideally we don't need to perform any operation that requires such a\n> > check.\"?.\n>\n> This still seems to omit that the code is quite broken.\n\nI don't think it's the job of a commit which is trying to fix a\ncertain bug to document all the other bugs it isn't fixing. If that\nwere the standard, we'd never be able to commit any bug fixes, which\nis exactly what's happening on this thread. The first hunk of 0001\ndemonstrably fixes a real bug that has real consequences and, as far\nas I can see, makes nothing worse in any way. We should commit that\nhunk - and only that hunk - back-patch it all the way, and be happy to\nhave gotten something done. We've known what the correct fix is here\nfor 2 months and we're not doing anything about it because there's\nsome other problem that we're trying to worry about at the same time.\nLet's stop doing that.\n\nTo put that another way, we don't need to fix or document anything\nelse to have a conditional cleanup lock acquisition shouldn't be\nconditional. If it shouldn't be a cleanup lock either, well, that's a\nseparate patch.\n\nAlternatively, if we all agree that 0001 and 0002 are both safe and\ncorrect, then let's just merge the two patches together and commit it\nwith an explanatory message like:\n\n===\nDon't require a cleanup lock on the new page when splitting a hash index bucket.\n\nThe previous code took a cleanup lock conditionally and panicked if it\nfailed, which is wrong, because a process such as the background\nwriter or checkpointer can transiently pin pages on a standby. We\ncould make the cleanup lock acquisition unconditional, but it turns\nout that it isn't needed at all, because no scan can examine the new\nbucket before the metapage has been updated, and thus an exclusive\nlock on the new bucket's primary page is sufficient. Note that we\nstill need a cleanup lock on the old bucket's primary page, because\nthat one is visible to scans.\n===\n\n> > Feel free to suggest something better.\n>\n> How about something like:\n>\n> XXX: This code is wrong, we're overwriting the buffer before \"acquiring\" the\n> cleanup lock. Currently this is not known to have bad consequences because\n> XYZ and the fix seems a bit too risky for the backbranches.\n\nI think here you're talking about the code that runs in\nnormal-running, not recovery, in _hash_expandtable, where we call\n_hash_getnewbuf and then IsBufferCleanupOK. That code indeed seems\nstupid, because as you say, there's no point in calling\n_hash_getnewbuf() and thus overwriting the buffer and then only\nafterwards checking IsBufferCleanupOK. By then the die is cast. But at\nthe same time, I think that it's not wrong in any way that matters to\nthe best of our current knowledge. That new buffer that we just went\nand got might be pinned by somebody else, but they can't be doing\nanything interesting with it, because we wouldn't be allocating it as\na new page if it were already in use for anything, and only one\nprocess is allowed to be doing such an allocation at a time. That both\nmeans that we can likely remove the cleanup lock acquisition, but it\nalso means that if we don't, there is no correctness problem here,\nstrictly speaking.\n\nSo I would suggest that if we feel we absolutely must put a comment\nhere, we could make it say something like \"XXX. It doesn't make sense\nto call _hash_getnewbuf() first, zeroing the buffer, and then only\nafterwards check whether we have a cleanup lock. However, since no\nscan can be accessing the new buffer yet, any concurrent accesses will\njust be from processes like the bgwriter or checkpointer which don't\ncare about its contents, so it doesn't really matter.\"\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 Oct 2022 10:43:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
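Robert's argument above for the HEAD-only change — no scan can reach the new bucket before the metapage mapping is updated, so an exclusive lock on the new bucket's primary page is enough — can be modeled in a few lines of standalone C. `ToyMeta` and `scan_can_reach` are invented names; this only mirrors the visibility ordering (bucket published in the metapage last), not the real hash-index code.

```c
/* Toy model: scans can only find a bucket through the metapage mapping,
 * so a bucket that has been allocated and initialized but not yet
 * published in the metapage is unreachable by any scan. */
typedef struct ToyMeta {
    int max_bucket;   /* highest bucket number scans may visit */
} ToyMeta;

/* A scan can reach a bucket only if the metapage already maps it. */
static int scan_can_reach(const ToyMeta *meta, int bucket) {
    return bucket <= meta->max_bucket;
}
```

Because the split publishes the new bucket in the metapage only after initializing it, nothing but background processes (which don't care about the contents) can hold a pin on it in the meantime.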
{
"msg_contents": "Hi,\n\nOn 2022-10-17 10:43:16 -0400, Robert Haas wrote:\n> On Fri, Oct 14, 2022 at 2:21 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-10-14 10:40:11 +0530, Amit Kapila wrote:\n> > > On Fri, Oct 14, 2022 at 2:25 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Fair point. How about something like: \"XXX Do we really need to check\n> > > for cleanup lock on the new bucket? Here, we initialize the page, so\n> > > ideally we don't need to perform any operation that requires such a\n> > > check.\"?.\n> >\n> > This still seems to omit that the code is quite broken.\n>\n> I don't think it's the job of a commit which is trying to fix a\n> certain bug to document all the other bugs it isn't fixing.\n\nThat's true in general, but the case of fixing a bug in one place but not in\nanother nearby is a different story.\n\n\n> If that were the standard, we'd never be able to commit any bug fixes\n\n<eyeroll/>\n\n\n> which is exactly what's happening on this thread.\n\nWhat's been happening from my POV is that Amit and you didn't even acknowledge\nthe broken cleanup lock logic for weeks. I don't mind a reasoned decision to\nnot care about the non-recovery case. But until now I've not seen that, but I\nalso have a hard time keeping up with email, so I might have missed it.\n\n\n> > > Feel free to suggest something better.\n> >\n> > How about something like:\n> >\n> > XXX: This code is wrong, we're overwriting the buffer before \"acquiring\" the\n> > cleanup lock. Currently this is not known to have bad consequences because\n> > XYZ and the fix seems a bit too risky for the backbranches.\n>\n> I think here you're talking about the code that runs in\n> normal-running, not recovery, in _hash_expandtable, where we call\n> _hash_getnewbuf and then IsBufferCleanupOK.\n\nYes.\n\n\n> That code indeed seems stupid, because as you say, there's no point in\n> calling _hash_getnewbuf() and thus overwriting the buffer and then only\n> afterwards checking IsBufferCleanupOK. By then the die is cast. But at the\n> same time, I think that it's not wrong in any way that matters to the best\n> of our current knowledge. That new buffer that we just went and got might be\n> pinned by somebody else, but they can't be doing anything interesting with\n> it, because we wouldn't be allocating it as a new page if it were already in\n> use for anything, and only one process is allowed to be doing such an\n> allocation at a time. That both means that we can likely remove the cleanup\n> lock acquisition, but it also means that if we don't, there is no\n> correctness problem here, strictly speaking.\n\nIf that's the case cool - I just don't know the locking protocol of hash\nindexes well enough to judge this.\n\n\n> So I would suggest that if we feel we absolutely must put a comment\n> here, we could make it say something like \"XXX. It doesn't make sense\n> to call _hash_getnewbuf() first, zeroing the buffer, and then only\n> afterwards check whether we have a cleanup lock. However, since no\n> scan can be accessing the new buffer yet, any concurrent accesses will\n> just be from processes like the bgwriter or checkpointer which don't\n> care about its contents, so it doesn't really matter.\"\n\nWFM. I'd probably lean to just fixing in the backbranches instead, but as long\nas we make a conscious decision...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Oct 2022 10:02:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Mon, Oct 17, 2022 at 1:02 PM Andres Freund <andres@anarazel.de> wrote:\n> That's true in general, but the case of fixing a bug in one place but not in\n> another nearby is a different story.\n\nI agree, but I still think we shouldn't let the perfect be the enemy\nof the good.\n\n> > That code indeed seems stupid, because as you say, there's no point in\n> > calling _hash_getnewbuf() and thus overwriting the buffer and then only\n> > afterwards checking IsBufferCleanupOK. By then the die is cast. But at the\n> > same time, I think that it's not wrong in any way that matters to the best\n> > of our current knowledge. That new buffer that we just went and got might be\n> > pinned by somebody else, but they can't be doing anything interesting with\n> > it, because we wouldn't be allocating it as a new page if it were already in\n> > use for anything, and only one process is allowed to be doing such an\n> > allocation at a time. That both means that we can likely remove the cleanup\n> > lock acquisition, but it also means that if we don't, there is no\n> > correctness problem here, strictly speaking.\n>\n> If that's the case cool - I just don't know the locking protocol of hash\n> indexes well enough to judge this.\n\nDarn, I was hoping you did, because I think this could certainly use\nmore than one pair of educated eyes.\n\n> > So I would suggest that if we feel we absolutely must put a comment\n> > here, we could make it say something like \"XXX. It doesn't make sense\n> > to call _hash_getnewbuf() first, zeroing the buffer, and then only\n> > afterwards check whether we have a cleanup lock. However, since no\n> > scan can be accessing the new buffer yet, any concurrent accesses will\n> > just be from processes like the bgwriter or checkpointer which don't\n> > care about its contents, so it doesn't really matter.\"\n>\n> WFM. 
I'd probably lean to just fixing in the backbranches instead, but as long\n> as we make a conscious decision...\n\nI am reasonably confident that just making the cleanup lock\nacquisition unconditional will not break anything that isn't broken\nalready. Perhaps that confidence will turn out to be misplaced, but at\nthe moment I just don't see what can go wrong. Since it's a standby,\nnobody else should be trying to get a cleanup lock, and even if they\ndid, err, so what? We can't deadlock because we don't hold any other\nlocks.\n\nI don't feel quite as confident that not attempting a cleanup lock on\nthe new bucket's primary page is OK. I think it should be fine. The\nexisting comment even says it should be fine. But, that comment could\nbe wrong, and I'm not sure that I have my head around what all of the\npossible interactions around that cleanup lock are. So changing it\nmakes me a little nervous.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 Oct 2022 13:34:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On 2022-10-17 13:34:02 -0400, Robert Haas wrote:\n> I don't feel quite as confident that not attempting a cleanup lock on\n> the new bucket's primary page is OK. I think it should be fine. The\n> existing comment even says it should be fine. But, that comment could\n> be wrong, and I'm not sure that I have my head around what all of the\n> possible interactions around that cleanup lock are. So changing it\n> makes me a little nervous.\n\nIf it's not OK, then the acquire-cleanuplock-after-reinit would be an\nactive bug though, right?\n\n\n",
"msg_date": "Mon, 17 Oct 2022 13:30:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Mon, Oct 17, 2022 at 4:30 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-10-17 13:34:02 -0400, Robert Haas wrote:\n> > I don't feel quite as confident that not attempting a cleanup lock on\n> > the new bucket's primary page is OK. I think it should be fine. The\n> > existing comment even says it should be fine. But, that comment could\n> > be wrong, and I'm not sure that I have my head around what all of the\n> > possible interactions around that cleanup lock are. So changing it\n> > makes me a little nervous.\n>\n> If it's not OK, then the acquire-cleanuplock-after-reinit would be an\n> active bug though, right?\n\nYes, probably so.\n\nAnother approach here would be to have something like _hash_getnewbuf\nthat does not use RBM_ZERO_AND_LOCK or call _hash_pageinit, and then\ncall _hash_pageinit here, perhaps just before nopaque =\nHashPageGetOpaque(npage), so that it's within the critical section.\nBut that doesn't feel very consistent with the rest of the code.\n\nMaybe just nuking the IsBufferCleanupOK call is best, I don't know. I\nhonestly doubt that it matters very much what we pick here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Oct 2022 10:55:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 8:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Oct 17, 2022 at 4:30 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-10-17 13:34:02 -0400, Robert Haas wrote:\n>\n> Maybe just nuking the IsBufferCleanupOK call is best, I don't know. I\n> honestly doubt that it matters very much what we pick here.\n>\n\nAgreed, I think the important point to decide is what to do for\nback-branches. We have the next minor release in a few days' time and\nthis is the last release for v10. I see the following options based on\nthe discussion here.\n\na. Use the code change in 0001 from email [1] and a comment change\nproposed by Robert in email [2] to fix the bug reported. This should\nbe backpatched till v10. Then separately, we can consider committing\nsomething like 0002 from email [1] as a HEAD-only patch.\nb. Use the code change in 0001 from email [1] to fix the bug reported.\nThis should be backpatched till v10. Then separately, we can consider\ncommitting something like 0002 from email [1] as a HEAD-only patch.\nc. Combine 0001 and 0002 from the email [1] and push them in all\nbranches till v10.\n\nI prefer going with (a).\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1LekwAZU5yf2h%2BW1Ko_c85TZHuNLg6jVPD6KDXrYYFo1g%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CA%2BTgmoYruVb7Nh5TUt47sTyYui2zE8Ke9T3DcHeB1wSkb%3DuSCw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 31 Oct 2022 16:56:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 7:27 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Agreed, I think the important point to decide is what to do for\n> back-branches. We have the next minor release in a few days' time and\n> this is the last release for v10. I see the following options based on\n> the discussion here.\n>\n> a. Use the code change in 0001 from email [1] and a comment change\n> proposed by Robert in email [2] to fix the bug reported. This should\n> be backpatched till v10. Then separately, we can consider committing\n> something like 0002 from email [1] as a HEAD-only patch.\n> b. Use the code change in 0001 from email [1] to fix the bug reported.\n> This should be backpatched till v10. Then separately, we can consider\n> committing something like 0002 from email [1] as a HEAD-only patch.\n> c. Combine 0001 and 0002 from the email [1] and push them in all\n> branches till v10.\n>\n> I prefer going with (a).\n\nI vote for (a) or (b) for now, and we can consider what else to do\nlater. It might even include back-patching. But fixing things that are\ncausing problems we can see seems to me to have higher priority than\nfixing things that are not.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 31 Oct 2022 13:10:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 10:40 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Oct 31, 2022 at 7:27 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Agreed, I think the important point to decide is what to do for\n> > back-branches. We have the next minor release in a few days' time and\n> > this is the last release for v10. I see the following options based on\n> > the discussion here.\n> >\n> > a. Use the code change in 0001 from email [1] and a comment change\n> > proposed by Robert in email [2] to fix the bug reported. This should\n> > be backpatched till v10. Then separately, we can consider committing\n> > something like 0002 from email [1] as a HEAD-only patch.\n> > b. Use the code change in 0001 from email [1] to fix the bug reported.\n> > This should be backpatched till v10. Then separately, we can consider\n> > committing something like 0002 from email [1] as a HEAD-only patch.\n> > c. Combine 0001 and 0002 from the email [1] and push them in all\n> > branches till v10.\n> >\n> > I prefer going with (a).\n>\n> I vote for (a) or (b) for now, and we can consider what else to do\n> later.\n>\n\nI am fine with any of those. Would you like to commit or do you prefer\nme to take care of this?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 1 Nov 2022 09:19:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 11:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I am fine with any of those. Would you like to commit or do you prefer\n> me to take care of this?\n\nSorry for not responding to this sooner. I think it's too late to do\nanything about this for the current round of releases at this point,\nbut I am fine if you want to take care of it after that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 7 Nov 2022 12:42:45 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Mon, Nov 7, 2022 at 11:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Oct 31, 2022 at 11:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I am fine with any of those. Would you like to commit or do you prefer\n> > me to take care of this?\n>\n> Sorry for not responding to this sooner. I think it's too late to do\n> anything about this for the current round of releases at this point,\n> but I am fine if you want to take care of it after that.\n>\n\nOkay, I'll take care of this either later this week after the release\nwork is finished or early next week.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 8 Nov 2022 15:07:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Tue, Nov 8, 2022 at 3:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 7, 2022 at 11:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Mon, Oct 31, 2022 at 11:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > I am fine with any of those. Would you like to commit or do you prefer\n> > > me to take care of this?\n> >\n> > Sorry for not responding to this sooner. I think it's too late to do\n> > anything about this for the current round of releases at this point,\n> > but I am fine if you want to take care of it after that.\n> >\n>\n> Okay, I'll take care of this either later this week after the release\n> work is finished or early next week.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 14 Nov 2022 16:06:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On 2022-11-14 16:06:27 +0530, Amit Kapila wrote:\n> Pushed.\n\nThanks.\n\n\n",
"msg_date": "Mon, 14 Nov 2022 09:48:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
},
{
"msg_contents": "On Mon, Nov 14, 2022 at 11:18 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-11-14 16:06:27 +0530, Amit Kapila wrote:\n> > Pushed.\n>\n> Thanks.\n>\n\nPlease find the attached patch to remove the buffer cleanup check on\nthe new bucket page. I think we should do this only for the HEAD. Do\nyou have any suggestions or objections on this one?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 16 Nov 2022 07:33:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: hash_xlog_split_allocate_page: failed to acquire cleanup lock"
}
] |
[
{
"msg_contents": "The separate TRIGGER privilege is considered obsolescent. It is not\nheavily used and exists mainly to facilitate trigger-based replication\nin a multi-user system.\ni.e.\nGRANT TRIGGER ON foo TO bob;\n\nSince logical replication recommends \"Limit ownership and TRIGGER\nprivilege on such tables to trusted roles.\", then it would be useful\nto have a way to put in a restriction on that for the trigger\nprivilege.\n\nWe might suggest removing it completely, but it does appear to be a\npart of the SQL Standard, T211-07, so that is not an option. In any\ncase, such a move would need us to do a lengthy deprecation dance\nacross multiple releases.\n\nBut we can just have an option to prevent the TRIGGER privilege being granted.\n\nallow_trigger_privilege = off (new default in PG16) | on\nshown in postgresql.conf, only settable at server start so that it\neven blocks superusers and special roles.\n\nExisting usage of the trigger privilege would not be touched, only new usage.\n\n(No, this does not mean I want to ban triggers, only the trigger privilege).\n\nThoughts?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 10 Aug 2022 06:09:38 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Blocking the use of TRIGGER privilege"
}
] |
[
{
"msg_contents": "(I suppose this is a pg15 issue)\n\ncreateuser --help shows the following help text.\n\n> --bypassrls role can bypass row-level security (RLS) policy\n> --no-bypassrls role cannot bypass row-level security (RLS) policy\n> --replication role can initiate replication\n> --no-replication role cannot initiate replication\n\nFor other options the text tells which one is the default, which I\nthink the two options also should have the same.\n\n> -r, --createrole role can create new roles\n> -R, --no-createrole role cannot create roles (default)\n\nIn correspondence, it seems to me that the command should explicitly\nplace the default value (of the command's own) in generated SQL\ncommand even if the corresponding command line options are omitted, as\ncreaterole and so do. (attached first)\n\nThe interacitive mode doesn't cover all options, but I'm not sure what\nwe should do to the mode since I don't have a clear idea of how the\nmode is used. In the attached only --bypassrls is arbirarily added.\nThe remaining options omitted in the interactive mode are: password,\nvalid-until, role, member and replication. (attached second)\n\nThe ternary options are checked against decimal 0, but it should use\nTRI_DEFAULT instead. (attached third)\n\nI tempted to check no ternary options remains set to TRY_DEFAULT\nbefore generating SQL command, but I didn't that in the attached.\n\nWhat do you think about this?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 10 Aug 2022 15:12:43 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "createuser doesn't tell default settings for some options"
},
{
"msg_contents": "> On 10 Aug 2022, at 08:12, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> (I suppose this is a pg15 issue)\n> \n> createuser --help shows the following help text.\n> \n>> --bypassrls role can bypass row-level security (RLS) policy\n>> --no-bypassrls role cannot bypass row-level security (RLS) policy\n>> --replication role can initiate replication\n>> --no-replication role cannot initiate replication\n> \n> For other options the text tells which one is the default, which I\n> think the two options also should have the same.\n\nAgreed. For --no-replication the docs in createuser.sgml should fixed to\ninclude a \"This is the default\" sentence like the others have as well.\n\n> The interacitive mode doesn't cover all options, but I'm not sure what\n> we should do to the mode since I don't have a clear idea of how the\n> mode is used. In the attached only --bypassrls is arbirarily added.\n> The remaining options omitted in the interactive mode are: password,\n> valid-until, role, member and replication. (attached second)\n\nI'm not convinced that we should add more to the interactive mode, it's IMO\nmostly a backwards compat option for ancient (pre-9.2) createuser where this\nwas automatically done. Back then we had this in the documentation which has\nsince been removed:\n\n \"You will be prompted for a name and other missing information if it is not\n specified on the command line.\"\n\n> The ternary options are checked against decimal 0, but it should use\n> TRI_DEFAULT instead. (attached third)\n\nAgreed, nice catch.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 10 Aug 2022 10:28:06 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: createuser doesn't tell default settings for some options"
},
{
"msg_contents": "> On 10 Aug 2022, at 10:28, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 10 Aug 2022, at 08:12, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>> \n>> (I suppose this is a pg15 issue)\n>> \n>> createuser --help shows the following help text.\n>> \n>>> --bypassrls role can bypass row-level security (RLS) policy\n>>> --no-bypassrls role cannot bypass row-level security (RLS) policy\n>>> --replication role can initiate replication\n>>> --no-replication role cannot initiate replication\n>> \n>> For other options the text tells which one is the default, which I\n>> think the two options also should have the same.\n> \n> Agreed. For --no-replication the docs in createuser.sgml should fixed to\n> include a \"This is the default\" sentence like the others have as well.\n\n>> The ternary options are checked against decimal 0, but it should use\n>> TRI_DEFAULT instead. (attached third)\n> \n> Agreed, nice catch.\n\nAttached is my proposal for this, combining your 0001 and 0003 patches with\nsome docs and test fixups to match.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Mon, 21 Nov 2022 15:07:17 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: createuser doesn't tell default settings for some options"
}
] |
[
{
"msg_contents": "new thread [was: WIP Patch: Add a function that returns binary JSONB as a bytea]\n\n> I wrote:\n> > We can also shave a\n> > few percent by having pg_utf8_verifystr use SSE2 for the ascii path. I\n> > can look into this.\n>\n> Here's a patch for that. If the input is mostly ascii, I'd expect that\n> part of the flame graph to shrink by 40-50% and give a small boost\n> overall.\n\nHere is an updated patch using the new USE_SSE2 symbol. The style is\ndifferent from the last one in that each stanza has platform-specific\ncode. I wanted to try it this way because is_valid_ascii() is already\nwritten in SIMD-ish style using general purpose registers and bit\ntwiddling, so it seemed natural to see the two side-by-side. Sometimes\nthey can share the same comment. If we think this is bad for\nreadability, I can go back to one block each, but that way leads to\nduplication of code and it's difficult to see what's different for\neach platform, IMO.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 10 Aug 2022 13:50:14 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "use SSE2 for is_valid_ascii"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 01:50:14PM +0700, John Naylor wrote:\n> Here is an updated patch using the new USE_SSE2 symbol. The style is\n> different from the last one in that each stanza has platform-specific\n> code. I wanted to try it this way because is_valid_ascii() is already\n> written in SIMD-ish style using general purpose registers and bit\n> twiddling, so it seemed natural to see the two side-by-side. Sometimes\n> they can share the same comment. If we think this is bad for\n> readability, I can go back to one block each, but that way leads to\n> duplication of code and it's difficult to see what's different for\n> each platform, IMO.\n\nThis is a neat patch. I don't know that we need an entirely separate code\nblock for the USE_SSE2 path, but I do think that a little bit of extra\ncommentary would improve the readability. IMO the existing comment for the\nzero accumulator has the right amount of detail.\n\n+\t\t/*\n+\t\t * Set all bits in each lane of the error accumulator where input\n+\t\t * bytes are zero.\n+\t\t */\n+\t\terror_cum = _mm_or_si128(error_cum,\n+\t\t\t\t\t\t\t\t _mm_cmpeq_epi8(chunk, _mm_setzero_si128()));\n\nI wonder if reusing a zero vector (instead of creating a new one every\ntime) has any noticeable effect on performance.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 10 Aug 2022 15:31:20 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: use SSE2 for is_valid_ascii"
},
{
"msg_contents": "On Thu, Aug 11, 2022 at 5:31 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> This is a neat patch. I don't know that we need an entirely separate code\n> block for the USE_SSE2 path, but I do think that a little bit of extra\n> commentary would improve the readability. IMO the existing comment for the\n> zero accumulator has the right amount of detail.\n>\n> + /*\n> + * Set all bits in each lane of the error accumulator where input\n> + * bytes are zero.\n> + */\n> + error_cum = _mm_or_si128(error_cum,\n> + _mm_cmpeq_epi8(chunk, _mm_setzero_si128()));\n\nOkay, I will think about the comments, thanks for looking.\n\n> I wonder if reusing a zero vector (instead of creating a new one every\n> time) has any noticeable effect on performance.\n\nCreating a zeroed register is just FOO PXOR FOO, which should get\nhoisted out of the (unrolled in this case) loop, and which a recent\nCPU will just map to a hard-coded zero in the register file, in which\ncase the execution latency is 0 cycles. :-)\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Aug 2022 11:10:34 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: use SSE2 for is_valid_ascii"
},
{
"msg_contents": "On Thu, Aug 11, 2022 at 11:10:34AM +0700, John Naylor wrote:\n>> I wonder if reusing a zero vector (instead of creating a new one every\n>> time) has any noticeable effect on performance.\n> \n> Creating a zeroed register is just FOO PXOR FOO, which should get\n> hoisted out of the (unrolled in this case) loop, and which a recent\n> CPU will just map to a hard-coded zero in the register file, in which\n> case the execution latency is 0 cycles. :-)\n\nAh, indeed. At -O2, my compiler seems to zero out two registers before the\nloop with either approach:\n\n\tpxor %xmm0, %xmm0\t; accumulator\n\tpxor %xmm2, %xmm2\t; always zeros\n\nAnd within the loop, I see the following:\n\n\tmovdqu (%rdi), %xmm1\n\tmovdqu (%rdi), %xmm3\n\taddq $16, %rdi\n\tpcmpeqb %xmm2, %xmm1\t; check for zeros\n\tpor %xmm3, %xmm0\t\t; OR data into accumulator\n\tpor %xmm1, %xmm0\t\t; OR zero check results into accumulator\n\tcmpq %rdi, %rsi\n\nSo the call to _mm_setzero_si128() within the loop is fine. Apologies for\nthe noise.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 10 Aug 2022 22:35:30 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: use SSE2 for is_valid_ascii"
},
{
"msg_contents": "v3 applies on top of the v9 json_lex_string patch in [1] and adds a\nbit more to that, resulting in a simpler patch that is more amenable\nto additional SIMD-capable platforms.\n\n[1] https://www.postgresql.org/message-id/CAFBsxsFV4v802idV0-Bo%3DV7wLMHRbOZ4er0hgposhyGCikmVGA%40mail.gmail.com\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 25 Aug 2022 16:41:53 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: use SSE2 for is_valid_ascii"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 04:41:53PM +0700, John Naylor wrote:\n> v3 applies on top of the v9 json_lex_string patch in [1] and adds a\n> bit more to that, resulting in a simpler patch that is more amenable\n> to additional SIMD-capable platforms.\n\nLGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 25 Aug 2022 20:26:25 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: use SSE2 for is_valid_ascii"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 10:26 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> On Thu, Aug 25, 2022 at 04:41:53PM +0700, John Naylor wrote:\n> > v3 applies on top of the v9 json_lex_string patch in [1] and adds a\n> > bit more to that, resulting in a simpler patch that is more amenable\n> > to additional SIMD-capable platforms.\n>\n> LGTM\n\nThanks for looking, pushed with some rearrangements.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Aug 2022 16:02:32 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: use SSE2 for is_valid_ascii"
}
] |
[
{
"msg_contents": "Hi,\n\nIt seems like there's the following typo in pgstatfuncs.c:\n\n- /* Values only available to role member or \npg_read_all_stats */\n+ /* Values only available to role member of \npg_read_all_stats */\n\nAttaching a tiny patch to fix it.\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 10 Aug 2022 09:52:02 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Fix a typo in pgstatfuncs.c"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 1:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> It seems like there's the following typo in pgstatfuncs.c:\n>\n> - /* Values only available to role member or\n> pg_read_all_stats */\n> + /* Values only available to role member of\n> pg_read_all_stats */\n>\n> Attaching a tiny patch to fix it.\n\nI don't think it's a typo, the comment says that the values are only\navailable to the user who has privileges of backend's role or\npg_read_all_stats, the macro HAS_PGSTAT_PERMISSIONS says it all.\n\nIMO, any of the following works better, if the existing comment is confusing:\n\n /* Values only available to the member{or role or user} with\nprivileges of backend's role or pg_read_all_stats */\n\n /* Values only available to the member{or role or user} that\nhas membership in backend's role or has privileges of\npg_read_all_stats */\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Wed, 10 Aug 2022 14:00:45 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in pgstatfuncs.c"
},
{
"msg_contents": "Hi,\n\nOn 8/10/22 10:30 AM, Bharath Rupireddy wrote:\n> On Wed, Aug 10, 2022 at 1:22 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> Hi,\n>>\n>> It seems like there's the following typo in pgstatfuncs.c:\n>>\n>> - /* Values only available to role member or\n>> pg_read_all_stats */\n>> + /* Values only available to role member of\n>> pg_read_all_stats */\n>>\n>> Attaching a tiny patch to fix it.\n> I don't think it's a typo, the comment says that the values are only\n> available to the user who has privileges of backend's role or\n> pg_read_all_stats, the macro HAS_PGSTAT_PERMISSIONS says it all.\n\nLooking at HAS_PGSTAT_PERMISSIONS, i think you are right: sorry for the \nnoise.\n\nThanks!\n\nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 10 Aug 2022 10:46:03 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix a typo in pgstatfuncs.c"
}
] |
[
{
"msg_contents": "Reading over the new object access hook test I spotted a small typo in the\ndocumentation. Will apply a fix shortly.\n\n-A real-world OAT hook should certainly provide more fine-grained conrol than\n+A real-world OAT hook should certainly provide more fine-grained control than\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Wed, 10 Aug 2022 10:55:57 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Small typo in OAT README"
}
] |
[
{
"msg_contents": "Hi,\n\nAre builds being paused on s390x as it looks like the s390x builds were last run 15 days ago. If so, wondering what is the reason for the pause and what is required to resume the builds?\nThe OS the builds were running on seems to have reached end of life. Please let me know if we can help with getting them updated and resume the builds.\n\nRegards,\n\nVivian Kong\nLinux on IBM Z Open Source Ecosystem\nIBM Canada Toronto Lab\n\n\n\n\n\n\n\n\n\nHi,\n \nAre builds being paused on s390x as it looks like the s390x builds were last run 15 days ago. If so, wondering what is the reason for the pause and what is required to resume the builds?\nThe OS the builds were running on seems to have reached end of life. Please let me know if we can help with getting them updated and resume the builds.\n \nRegards,\n\nVivian Kong\nLinux on IBM Z Open Source Ecosystem\nIBM Canada Toronto Lab",
"msg_date": "Wed, 10 Aug 2022 13:04:40 +0000",
"msg_from": "Vivian Kong <vivkong@ca.ibm.com>",
"msg_from_op": true,
"msg_subject": "s390x builds on buildfarm"
},
{
"msg_contents": "\nOn 2022-08-10 We 09:04, Vivian Kong wrote:\n>\n> Hi,\n>\n> \n>\n> Are builds being paused on s390x as it looks like the s390x builds\n> were last run 15 days ago. If so, wondering what is the reason for\n> the pause and what is required to resume the builds?\n> The OS the builds were running on seems to have reached end of life. \n> Please let me know if we can help with getting them updated and resume\n> the builds.\n>\n> \n>\n>\n\nMark, I think you run most or all of these.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 10 Aug 2022 09:56:21 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: s390x builds on buildfarm"
},
{
"msg_contents": "Thanks Andrew. Mark, please let me know if I can help.\n\nRegards,\n\nVivian Kong\nLinux on IBM Z Open Source Ecosystem\nIBM Canada Toronto Lab\n\nFrom: Andrew Dunstan <andrew@dunslane.net>\nDate: Wednesday, August 10, 2022 at 9:56 AM\nTo: Vivian Kong <vivkong@ca.ibm.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>, mark.wong@enterprisedb.com <mark.wong@enterprisedb.com>\nSubject: [EXTERNAL] Re: s390x builds on buildfarm\n\nOn 2022-08-10 We 09:04, Vivian Kong wrote:\n>\n> Hi,\n>\n>\n>\n> Are builds being paused on s390x as it looks like the s390x builds\n> were last run 15 days ago. If so, wondering what is the reason for\n> the pause and what is required to resume the builds?\n> The OS the builds were running on seems to have reached end of life.\n> Please let me know if we can help with getting them updated and resume\n> the builds.\n>\n>\n>\n>\n\nMark, I think you run most or all of these.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 17 Aug 2022 15:07:16 +0000",
"msg_from": "Vivian Kong <vivkong@ca.ibm.com>",
"msg_from_op": true,
"msg_subject": "RE: s390x builds on buildfarm"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-10 13:04:40 +0000, Vivian Kong wrote:\n> Are builds being paused on s390x as it looks like the s390x builds were last\n> run 15 days ago. If so, wondering what is the reason for the pause and what\n> is required to resume the builds? The OS the builds were running on seems\n> to have reached end of life. Please let me know if we can help with getting\n> them updated and resume the builds.\n\nI realize the question below is likely not your department, but perhaps you\ncould refer us to the right people?\n\nDoes IBM provide any AIX instances to open source projects? We have access to\nsome via the gcc compile farm, but they're a bit outdated, often very\noverloaded, and seem to have some other issues (system perl segfaulting etc).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Aug 2022 16:19:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: s390x builds on buildfarm"
},
{
"msg_contents": "Hi everyone,\n\nOn Wed, Aug 10, 2022 at 6:56 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> On 2022-08-10 We 09:04, Vivian Kong wrote:\n> >\n> > Hi,\n> >\n> > \n> >\n> > Are builds being paused on s390x as it looks like the s390x builds\n> > were last run 15 days ago. If so, wondering what is the reason for\n> > the pause and what is required to resume the builds?\n> > The OS the builds were running on seems to have reached end of life. \n> > Please let me know if we can help with getting them updated and resume\n> > the builds.\n> >\n> > \n> >\n> >\n>\n> Mark, I think you run most or all of these.\n\nYeah, IBM moved me to new hardware and I haven't set them up yet. I\nwill try to do that soon.\n\nRegards,\nMark\n\n\n",
"msg_date": "Thu, 18 Aug 2022 07:38:32 -0700",
"msg_from": "Mark Wong <mark.wong@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: s390x builds on buildfarm"
},
{
"msg_contents": "Hi Andres,\n\nSorry I don’t have any connections in AIX. I couldn’t find info related to this. Sorry I couldn’t help.\n\nRegards,\n\nVivian Kong\nLinux on IBM Z Open Source Ecosystem\nIBM Canada Toronto Lab\n\nFrom: Andres Freund <andres@anarazel.de>\nDate: Wednesday, August 17, 2022 at 7:19 PM\nTo: Vivian Kong <vivkong@ca.ibm.com>\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: [EXTERNAL] Re: s390x builds on buildfarm\nHi,\n\nOn 2022-08-10 13:04:40 +0000, Vivian Kong wrote:\n> Are builds being paused on s390x as it looks like the s390x builds were last\n> run 15 days ago. If so, wondering what is the reason for the pause and what\n> is required to resume the builds? The OS the builds were running on seems\n> to have reached end of life. Please let me know if we can help with getting\n> them updated and resume the builds.\n\nI realize the question below is likely not your department, but perhaps you\ncould refer us to the right people?\n\nDoes IBM provide any AIX instances to open source projects? We have access to\nsome via the gcc compile farm, but they're a bit outdated, often very\noverloaded, and seem to have some other issues (system perl segfaulting etc).\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 18 Aug 2022 20:12:05 +0000",
"msg_from": "Vivian Kong <vivkong@ca.ibm.com>",
"msg_from_op": true,
"msg_subject": "RE: s390x builds on buildfarm"
},
{
"msg_contents": "On Fri, Aug 19, 2022 at 8:12 AM Vivian Kong <vivkong@ca.ibm.com> wrote:\n> From: Andres Freund <andres@anarazel.de>\n>> Does IBM provide any AIX instances to open source projects? We have access to\n>> some via the gcc compile farm, but they're a bit outdated, often very\n>> overloaded, and seem to have some other issues (system perl segfaulting etc).\n\nIt looks like the way IBM supports open source projects doing POWER\ndevelopment and testing is via the Oregon State U Open Source Lab[1].\nIt's pretty Linux-focused and I don't see AIX in the OS drop-down list\nfor OpenStack managed virtual machines, but it has \"other\", and we can\nsee from the GCC build farm machine list[2] that their AIX boxes are\nhosted there, and a quick search tells me that OpenStack understands\nAIX[3], so maybe that works or maybe it's a special order. I wonder\nif we could find an advocate for PostgreSQL on AIX at IBM, for that\nbox on the request form.\n\n(More generally, an advocate anywhere would be a nice thing to have\nfor each port...)\n\n[1] https://osuosl.org/services/powerdev/\n[2] https://cfarm.tetaneutral.net/machines/list/\n[3] https://wiki.openstack.org/wiki/PowerVM\n\n\n",
"msg_date": "Fri, 19 Aug 2022 10:52:54 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: s390x builds on buildfarm"
}
] |
[
{
"msg_contents": "Hi,\n\nToday while hacking I encountered this delight:\n\n2022-08-10 09:30:29.025 EDT [27126] FATAL: something has gone wrong\n\nI actually already knew that something had gone wrong, because the\ncode I was writing was incomplete. And if I hadn't known that, the\nword FATAL would have been a real good clue. What I was hoping was\nthat the error message might tell me WHAT had gone wrong, but it\ndidn't.\n\nThis seems to be the fault of Andres's commit\n5aa4a9d2077fa902b4041245805082fec6be0648. In his defense, the addition\nof any kind of elog() at that point in the code appears to be an\nimprovement over the previous state of affairs. Nonetheless I feel we\ncould do better still, as in the attached.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 10 Aug 2022 09:41:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "something has gone wrong, but what is it?"
},
{
"msg_contents": "> On 10 Aug 2022, at 15:41, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> I feel we could do better still, as in the attached.\n\n+1, LGTM.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 10 Aug 2022 15:52:56 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: something has gone wrong, but what is it?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n\n-\t\t\telog(ERROR, \"something has gone wrong\");\n+\t\t\telog(ERROR, \"unrecognized AuxProcType: %d\", (int) auxtype);\n\n+1 ... the existing message is clearly not up to project standard.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Aug 2022 09:53:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: something has gone wrong, but what is it?"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 9:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>\n> - elog(ERROR, \"something has gone wrong\");\n> + elog(ERROR, \"unrecognized AuxProcType: %d\", (int) auxtype);\n>\n> +1 ... the existing message is clearly not up to project standard.\n\nAfter a bit of further looking around I noticed that there's another\ncheck for an invalid auxtype in this function which uses a slightly\ndifferent message text and also PANIC rather than ERROR.\n\nI think we should adopt that here too, for consistency, as in the attached.\n\nThe distinction between PANIC and ERROR doesn't really seem to matter\nhere. Either way, the server goes into an infinite crash-and-restart\nloop. May as well be consistent.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 10 Aug 2022 10:49:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: something has gone wrong, but what is it?"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-10 10:49:59 -0400, Robert Haas wrote:\n> On Wed, Aug 10, 2022 at 9:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> >\n> > - elog(ERROR, \"something has gone wrong\");\n> > + elog(ERROR, \"unrecognized AuxProcType: %d\", (int) auxtype);\n> >\n> > +1 ... the existing message is clearly not up to project standard.\n> \n> After a bit of further looking around I noticed that there's another\n> check for an invalid auxtype in this function which uses a slightly\n> different message text and also PANIC rather than ERROR.\n> \n> I think we should adopt that here too, for consistency, as in the attached.\n> \n> The distinction between PANIC and ERROR doesn't really seem to matter\n> here. Either way, the server goes into an infinite crash-and-restart\n> loop. May as well be consistent.\n\nMakes sense.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 10 Aug 2022 07:56:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: something has gone wrong, but what is it?"
},
{
"msg_contents": "\n\n> On 10 Aug 2022, at 19:49, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> After a bit of further looking around I noticed that there's another\n> check for an invalid auxtype in this function which uses a slightly\n> different message text and also PANIC rather than ERROR.\n\nIs there a reason to do\nMyBackendType = B_INVALID;\nafter PANIC or ERROR?\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Wed, 10 Aug 2022 23:06:01 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: something has gone wrong, but what is it?"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 2:06 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > On 10 Aug 2022, at 19:49, Robert Haas <robertmhaas@gmail.com> wrote:\n> > After a bit of further looking around I noticed that there's another\n> > check for an invalid auxtype in this function which uses a slightly\n> > different message text and also PANIC rather than ERROR.\n>\n> Is there a reason to do\n> MyBackendType = B_INVALID;\n> after PANIC or ERROR?\n\nThat could probably be taken out, but it doesn't seem important to take it out.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Aug 2022 14:50:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: something has gone wrong, but what is it?"
}
] |
[
{
"msg_contents": "(Coming from https://postgr.es/m/20220809193616.5uucf33piwdxn452@alvherre.pgsql )\n\nOn 2022-Aug-09, Alvaro Herrera wrote:\n\n> On 2022-Aug-09, Andres Freund wrote:\n> \n> > Mildly wondering whether we ought to use designated initializers instead,\n> > given we're whacking it around already. Too easy to get the order wrong when\n> > adding new members, and we might want to have optional callbacks too.\n> \n> Strong +1. It makes code much easier to navigate (see XmlTableRoutine\n> and compare with heapam_methods, for example).\n\nFor example, I propose the attached.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El que vive para el futuro es un iluso, y el que vive para el pasado,\nun imbécil\" (Luis Adler, \"Los tripulantes de la noche\")",
"msg_date": "Wed, 10 Aug 2022 16:03:00 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "designated initializers"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-10 16:03:00 +0200, Alvaro Herrera wrote:\n> (Coming from https://postgr.es/m/20220809193616.5uucf33piwdxn452@alvherre.pgsql )\n> \n> On 2022-Aug-09, Alvaro Herrera wrote:\n> \n> > On 2022-Aug-09, Andres Freund wrote:\n> > \n> > > Mildly wondering whether we ought to use designated initializers instead,\n> > > given we're whacking it around already. Too easy to get the order wrong when\n> > > adding new members, and we might want to have optional callbacks too.\n> > \n> > Strong +1. It makes code much easier to navigate (see XmlTableRoutine\n> > and compare with heapam_methods, for example).\n> \n> For example, I propose the attached.\n\n+1 I've fought with this one when fixing a conflict when rebasing a patch...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 10 Aug 2022 09:56:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: designated initializers"
},
{
"msg_contents": "Hello\n\nOn 2022-Aug-10, Andres Freund wrote:\n\n> +1 I've fought with this one when fixing a conflict when rebasing a patch...\n\nRight -- pushed, thanks.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 11 Aug 2022 12:10:35 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: designated initializers"
}
] |
[
{
"msg_contents": "In the report at [1] we learned that the SQL-language function\nhandler is too cavalier about read/write expanded datums that\nit receives as input. A function that receives such a datum\nis entitled to scribble on its value, or even delete it.\nIf the function turns around and passes the datum on to some\nother function, the same applies there. So in general, it can\nonly be safe to pass such a datum to *one* subsidiary function.\nIf you want to use the value more than once, you'd better convert\nthe pointer to read-only. fmgr_sql wasn't doing that, leading\nto the reported bug.\n\nAfter fixing that, I wondered if we had the same problem anywhere\nelse, and it didn't take long to think of such a place: SPI.\nIf you pass a read/write datum to SPI_execute_plan or one of its\nsiblings, and the executed query references that datum more than\nonce, you're potentially in trouble. Even if it does only\nreference it once, you might be surprised that your copy of the\ndatum got modified.\n\nHowever, we can't install a 100% fix in SPI itself, because\nplpgsql intentionally exploits exactly this behavior to optimize\nthings like \"arr := array_append(arr, val)\". I considered the\nidea of adding a 90% fix by making _SPI_convert_params() convert\nR/W pointers to R/O. That would protect places using the old-style\n\"char *Nulls\" APIs, and then we'd deem it the responsibility\nof callers using ParamListInfo APIs to protect themselves.\nI can't get terribly excited about that though, because it'd\nbe adding complexity and cycles for a problem that seems entirely\ntheoretical at this point. I can't find any SPI callers that\nwould *actually* be passing a R/W datum to a query that'd be\nlikely to modify it. The non-plpgsql PLs are at the most risk\nof calling a hazardous query, but they all pass \"flat\" datums\nthat are the immediate result of a typinput function or the like.\n\nSo my inclination is to do nothing about this now, and maybe\nnothing ever. 
But I thought it'd be a good idea to memorialize\nthis issue for the archives.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/WScDU5qfoZ7PB2gXwNqwGGgDPmWzz08VdydcPFLhOwUKZcdWbblbo-0Lku-qhuEiZoXJ82jpiQU4hOjOcrevYEDeoAvz6nR0IU4IHhXnaCA%3D%40mackler.email\n\n\n",
"msg_date": "Wed, 10 Aug 2022 10:51:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "SPI versus read/write expanded datums"
}
] |
[
{
"msg_contents": "Hey hackers,\n\nI see that logical replication subscriptions have an option to enable\nbinary [1].\nWhen it's enabled, subscription requests publisher to send data in binary\nformat.\nBut this is only the case for apply phase. In tablesync, tables are still\ncopied as text.\n\nTo copy tables, COPY command is used and that command supports copying in\nbinary. So it seemed to me possible to copy in binary for tablesync too.\nI'm not sure if there is a reason to always copy tables in text format. But\nI couldn't see why not to do it in binary if it's enabled.\n\nYou can find the small patch that only enables binary copy attached.\n\nWhat do you think about this change? Does it make sense? Am I missing\nsomething?\n\n[1] https://www.postgresql.org/docs/15/sql-createsubscription.html\n\nBest,\nMelih",
"msg_date": "Wed, 10 Aug 2022 18:03:56 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Wed, Aug 10, 2022, at 12:03 PM, Melih Mutlu wrote:\n> I see that logical replication subscriptions have an option to enable binary [1]. \n> When it's enabled, subscription requests publisher to send data in binary format. \n> But this is only the case for apply phase. In tablesync, tables are still copied as text.\nThis option could have been included in the commit 9de77b54531; it wasn't.\nMaybe it wasn't considered because the initial table synchronization can be a\nseparate step in your logical replication setup idk. I agree that the binary\noption should be available for the initial table synchronization.\n\n> To copy tables, COPY command is used and that command supports copying in binary. So it seemed to me possible to copy in binary for tablesync too.\n> I'm not sure if there is a reason to always copy tables in text format. But I couldn't see why not to do it in binary if it's enabled.\nThe reason to use text format is that it is error prone. There are restrictions\nwhile using the binary format. For example, if your schema has different data\ntypes for a certain column, the copy will fail. Even with such restrictions, I\nthink it is worth adding it.\n\n> You can find the small patch that only enables binary copy attached. \nI have a few points about your implementation.\n\n* Are we considering to support prior Postgres versions too? These releases\n  support binary mode but it could be an unexpected behavior (initial sync in\n  binary mode) for a publisher using 14 or 15 and a subscriber using 16. IMO\n  you should only allow it for publisher on 16 or later.\n* Docs should say that the binary option also applies to initial table\n  synchronization and possibly emphasize some of the restrictions.\n* Tests. Are the current tests enough? 014_binary.pl.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 10 Aug 2022 23:03:04 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Thu, Aug 11, 2022 at 7:34 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Wed, Aug 10, 2022, at 12:03 PM, Melih Mutlu wrote:\n>\n> I see that logical replication subscriptions have an option to enable binary [1].\n> When it's enabled, subscription requests publisher to send data in binary format.\n> But this is only the case for apply phase. In tablesync, tables are still copied as text.\n>\n> This option could have been included in the commit 9de77b54531; it wasn't.\n> Maybe it wasn't considered because the initial table synchronization can be a\n> separate step in your logical replication setup idk. I agree that the binary\n> option should be available for the initial table synchronization.\n>\n> To copy tables, COPY command is used and that command supports copying in binary. So it seemed to me possible to copy in binary for tablesync too.\n> I'm not sure if there is a reason to always copy tables in text format. But I couldn't see why not to do it in binary if it's enabled.\n>\n> The reason to use text format is that it is error prone. There are restrictions\n> while using the binary format. For example, if your schema has different data\n> types for a certain column, the copy will fail.\n>\n\nWon't such restrictions hold true even during replication?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 11 Aug 2022 16:34:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Thu, Aug 11, 2022, at 8:04 AM, Amit Kapila wrote:\n> On Thu, Aug 11, 2022 at 7:34 AM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > The reason to use text format is that it is error prone. There are restrictions\n> > while using the binary format. For example, if your schema has different data\n> > types for a certain column, the copy will fail.\n> >\n> \n> Won't such restrictions hold true even during replication?\nI expect that the COPY code matches the proto.c code. The point is that table\nsync is decoupled from the logical replication. Hence, we should emphasize in\nthe documentation that the restrictions *also* apply to the initial table\nsynchronization.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 11 Aug 2022 10:26:40 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Euler Taveira <euler@eulerto.com>, 11 Ağu 2022 Per, 16:27 tarihinde şunu\nyazdı:\n\n> On Thu, Aug 11, 2022, at 8:04 AM, Amit Kapila wrote:\n>\n> On Thu, Aug 11, 2022 at 7:34 AM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > The reason to use text format is that it is error prone. There are\n> restrictions\n> > while using the binary format. For example, if your schema has different\n> data\n> > types for a certain column, the copy will fail.\n> >\n>\n> Won't such restrictions hold true even during replication?\n>\n> I expect that the COPY code matches the proto.c code. The point is that\n> table\n> sync is decoupled from the logical replication. Hence, we should emphasize\n> in\n> the documentation that the restrictions *also* apply to the initial table\n> synchronization.\n>\n\nIf such restrictions are already the case for replication phase after\ninitial table sync, then it shouldn't prevent us from enabling binary\noption for table sync. Right?\nBut yes, it needs to be stated somewhere.\n\nEuler Taveira <euler@eulerto.com>, 11 Ağu 2022 Per, 05:03 tarihinde şunu\nyazdı\n\n> I have a few points about your implementation.\n>\n> * Are we considering to support prior Postgres versions too? These releases\n> support binary mode but it could be an unexpected behavior (initial sync\n> in\n> binary mode) for a publisher using 14 or 15 and a subscriber using 16.\n> IMO\n> you should only allow it for publisher on 16 or later.\n>\n\nHow is any issue that might occur due to version mismatch being handled\nright now in repliaction after table sync?\nWhat I understand from the documentation is if replication can fail due to\nusing different pg versions, it just fails. So binary option cannot be\nused. [1]\nDo you think that this is more serious for table sync and we need to\nrestrict binary option with different publisher and subscriber versions?\nBut not for replication?\n\n* Docs should say that the binary option also applies to initial table\n> synchronization and possibly emphasize some of the restrictions.\n> * Tests. Are the current tests enough? 014_binary.pl.\n>\n\nYou're right on both points. I just wanted to know your opinions on this\nfirst. Then the patch will need some tests and proper documentation.\n\n[1] https://www.postgresql.org/docs/15/sql-createsubscription.html\n<https://www.postgresql.org/docs/15/sql-createsubscription.html>\n\nBest,\nMelih",
"msg_date": "Thu, 11 Aug 2022 16:46:21 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Thu, Aug 11, 2022, at 10:46 AM, Melih Mutlu wrote:\n> If such restrictions are already the case for replication phase after initial table sync, then it shouldn't prevent us from enabling binary option for table sync. Right?\nI didn't carefully examine the COPY code but I won't expect significant\ndifferences (related to text vs binary mode) from the logical replication\nprotocol. After inspecting the git history, I took my argument back after\nchecking the commit 670c0a1d474. The initial commit 9de77b54531 imposes some\nrestrictions (user-defined arrays and composite types) as mentioned in the\ncommit message but it was removed in 670c0a1d474. My main concern is to break a\nscenario that was previously working (14 -> 15) but after a subscriber upgrade\nit won't (14 -> 16). I would say that you should test some scenarios:\n014_binary.pl and also custom data types, same column with different data\ntypes, etc.\n\n> How is any issue that might occur due to version mismatch being handled right now in repliaction after table sync?\n> What I understand from the documentation is if replication can fail due to using different pg versions, it just fails. So binary option cannot be used. [1]\n> Do you think that this is more serious for table sync and we need to restrict binary option with different publisher and subscriber versions? But not for replication?\nIt is a conservative argument. If we didn't allow a publisher to run COPY in\nbinary mode while using previous Postgres versions, we know that it works. (At\nleast there aren't bug reports for logical replication using binary option.)\nSince one of the main use cases for logical replication is migration, I'm\nconcerned that it may not work (even if the binary option defaults to false,\nsomeone can decide to use it for performance reasons).\n\nI did a quick test and the failure while using binary mode is not clear. 
Since\nyou are modifying this code, you could probably provide additional patch(es) to\nmake it clear that there is an error (due to some documented restriction).\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 11 Aug 2022 14:15:17 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
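For readers following the restriction discussion above: `COPY ... WITH (FORMAT binary)` produces the PGCOPY stream documented in the COPY reference — an 11-byte signature, two 32-bit header words, then per tuple a 16-bit field count followed by length-prefixed raw datums (length -1 marks NULL), closed by a 16-bit -1 trailer. Because the datums are each type's raw send/recv representation rather than text, the subscriber's column types must match on the wire, which is where the binary-mode restrictions come from. A minimal sketch (illustrative only, not the patch's code) that builds such a stream for a single int4 column:

```python
import struct

PGCOPY_SIGNATURE = b"PGCOPY\n\xff\r\n\x00"  # fixed 11-byte magic

def build_binary_copy_stream(rows):
    """Assemble a PGCOPY binary stream for rows of optional int4 values."""
    out = bytearray(PGCOPY_SIGNATURE)
    out += struct.pack("!ii", 0, 0)                 # flags, header-extension length
    for row in rows:
        out += struct.pack("!h", len(row))          # 16-bit field count
        for datum in row:
            if datum is None:
                out += struct.pack("!i", -1)        # NULL: length word -1, no data
            else:
                payload = struct.pack("!i", datum)  # int4 = 4 raw big-endian bytes
                out += struct.pack("!i", len(payload)) + payload
    out += struct.pack("!h", -1)                    # end-of-data trailer
    return bytes(out)

stream = build_binary_copy_stream([(100,), (None,)])
```

An int8 column would emit 8-byte payloads at the same positions, so a subscriber decoding the 4-byte datum as int8 cannot succeed — unlike text format, where the string '100' parses into either type.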
{
"msg_contents": "Euler Taveira <euler@eulerto.com>, 11 Ağu 2022 Per, 20:16 tarihinde şunu\nyazdı:\n\n> My main concern is to break a scenario that was previously working (14 ->\n> 15) but after a subscriber upgrade\n> it won't (14 -> 16).\n>\nFair concern. Some cases that might break the logical replication with\nversion upgrade would be:\n1- Usage of different binary formats between publisher and subscriber. As\nstated here [1], binary format has been changed after v7.4.\nBut I don't think this would be a concern, since we wouldn't even have\nlogical replication with 7.4 and earlier versions.\n2- Lack (or mismatch) of binary send/receive functions for custom data\ntypes would cause failures. This case can already cause failures with\ncurrent logical replication, regardless of binary copy. Stated here [2].\n3- Copying in binary format would work with the same schemas. Currently,\nlogical replication does not require the exact same schemas in publisher\nand subscriber.\nThis is an additional restriction that comes with the COPY command.\n\nIf a logical replication has been set up with different schemas and\nsubscription is created with the binary option, then yes this would break\nthings.\nThis restriction can be clearly stated and wouldn't be unexpected though.\n\nI'm also okay with allowing binary copy only for v16 or later, if you think\nit would be safer and no one disagrees with that.\nWhat are your thoughts?\n\nI would say that you should test some scenarios:\n> 014_binary.pl and also custom data types, same column with different data\n> types, etc.\n>\nI added scenarios in two tests to test binary copy:\n014_binary.pl: This was already testing subscriptions with binary option\nenabled. I added an extra step to insert initial data before creating the\nsubscription.\nSo that we can test initial table sync with binary copy.\n\n002_types.pl: This file was already testing more complex data types. 
I\nadded an extra subscriber node to create a subscription with binary option\nenabled.\nThis way, it now tests binary copy with different custom types.\n\nDo you think these would be enough in terms of testing?\n\nAttached patch also includes some additions to the doc along with the\ntests.\n\nThanks,\nMelih\n\n\n[1] https://www.postgresql.org/docs/devel/sql-copy.html\n[2] https://www.postgresql.org/docs/devel/sql-createsubscription.html",
"msg_date": "Mon, 15 Aug 2022 20:03:36 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-10 18:03:56 +0300, Melih Mutlu wrote:\n> To copy tables, COPY command is used and that command supports copying in\n> binary. So it seemed to me possible to copy in binary for tablesync too.\n> I'm not sure if there is a reason to always copy tables in text format.\n\nIt'd be good to collect some performance numbers justifying this. I'd expect\ndecent gains if there's e.g. a bytea or timestamptz column involved.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Sep 2022 15:25:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hello,\n\nAndres Freund <andres@anarazel.de>, 2 Eyl 2022 Cum, 01:25 tarihinde şunu\nyazdı:\n\n> It'd be good to collect some performance numbers justifying this. I'd\n> expect\n> decent gains if there's e.g. a bytea or timestamptz column involved.\n\n\nExperimented with binary copy using a quick setup.\n\n- Created a \"temp\" table with bytea and timestamptz columns\n\n> postgres=# \\d temp\n> Table \"public.temp\"\n> Column | Type | Collation | Nullable | Default\n> --------+--------------------------+-----------+----------+---------\n> i | integer | | |\n> b | bytea | | |\n> t | timestamp with time zone | | |\n>\n\n- Loaded with ~1GB data\n\n> postgres=# SELECT pg_size_pretty( pg_total_relation_size('temp') );\n> pg_size_pretty\n> ----------------\n> 1137 MB\n> (1 row)\n\n\n- Created a publication with only this \"temp\" table.\n- Created a subscription with binary enabled on instances from master\nbranch and this patch.\n- Timed the tablesync process by calling the following procedure:\n\n> CREATE OR REPLACE PROCEDURE wait_for_rep() LANGUAGE plpgsql AS $$BEGIN\n> WHILE (SELECT count(*) != 0 FROM pg_subscription_rel WHERE srsubstate <>\n> 'r') LOOP COMMIT; END LOOP; END; $$;\n\n\nHere are the averaged results of multiple consecutive runs from both master\nbranch and the patch:\n\nmaster (binary enabled but no binary copy): 20007.7948 ms\nthe patch (allows binary copy): 8874.869 ms\n\nSeems like a good improvement.\nWhat are your thoughts on this patch?\n\nBest,\nMelih",
"msg_date": "Wed, 7 Sep 2022 14:51:25 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
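The `wait_for_rep()` procedure above spins inside the server; the same readiness check — every row of `pg_subscription_rel` reaching state 'r' — can also be polled from a test harness when timing runs like these. A hypothetical driver-side sketch (`run_query` stands in for any callable that executes SQL on the subscriber and returns the count):

```python
import time

PENDING_SQL = "SELECT count(*) FROM pg_subscription_rel WHERE srsubstate <> 'r'"

def wait_for_tablesync(run_query, poll_interval=0.5, timeout=60.0):
    """Poll until every subscribed relation reports state 'r' (ready).

    Returns elapsed seconds on success, or None if the timeout expires.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if run_query(PENDING_SQL) == 0:
            return time.monotonic() - start
        time.sleep(poll_interval)
    return None
```

Measuring the elapsed time outside the server avoids holding a backend busy in a plpgsql loop for the whole sync, while checking exactly the same catalog state.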
{
"msg_contents": "Hi hackers,\n\nI just wanted to gently ping to hear what you all think about this patch.\n\nAppreciate any feedback/thoughts.\n\nThanks,\nMelih",
"msg_date": "Wed, 14 Sep 2022 19:50:33 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Tuesday, August 16, 2022 2:04 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\r\n> Attached patch also includes some additions to the doc along with the tests.\r\n\r\nHi, thank you for updating the patch. Minor review comments for the v2.\r\n\r\n\r\n(1) whitespace issues\r\n\r\nPlease fix below whitespace errors.\r\n\r\n$ git apply v2-0001-Allow-logical-replication-to-copy-table-in-binary.patch\r\nv2-0001-Allow-logical-replication-to-copy-table-in-binary.patch:39: trailing whitespace.\r\n binary format.(See <xref linkend=\"sql-copy\"/>.)\r\nv2-0001-Allow-logical-replication-to-copy-table-in-binary.patch:120: trailing whitespace.\r\n\r\nv2-0001-Allow-logical-replication-to-copy-table-in-binary.patch:460: trailing whitespace.\r\n);\r\nwarning: 3 lines add whitespace errors.\r\n\r\n\r\n(2) Suggestion to update another general description about the subscription\r\n\r\nKindly have a look at doc/src/sgml/logical-replication.sgml.\r\n\r\n\"The data types of the columns do not need to match,\r\nas long as the text representation of the data can be converted to the target type.\r\nFor example, you can replicate from a column of type integer to a column of type bigint.\"\r\n\r\nWith the patch, I think we have an impact about those descriptions\r\nsince enabling the binary option for a subscription and executing the\r\ninitial synchronization requires the same data types for binary format.\r\n\r\nI suggest that we update those descriptions as well.\r\n\r\n\r\n(3) shouldn't test that we fail expectedly with binary copy for different types ?\r\n\r\nHow about having a test that we correctly fail with different data types\r\nbetween the publisher and the subscriber, for instance ?\r\n\r\n\r\n(4) Error message of the binary format copy\r\n\r\nI've gotten below message from data type contradiction (between integer and bigint).\r\nProbably, this is unclear for the users to understand the direct cause \r\nand needs to be adjusted ?\r\nThis might be a similar 
comment Euler mentioned in [1].\r\n\r\n2022-09-16 11:54:54.835 UTC [4570] ERROR: insufficient data left in message\r\n2022-09-16 11:54:54.835 UTC [4570] CONTEXT: COPY tab, line 1, column id\r\n\r\n\r\n(5) Minor adjustment of the test comment in 002_types.pl.\r\n\r\n+is( $result, $sync_result, 'check initial sync on subscriber');\r\n+is( $result_binary, $sync_result, 'check initial sync on subscriber in binary');\r\n\r\n # Insert initial test data\r\n\r\nThere are two same comments which say \"Insert initial test data\" in this file.\r\nWe need to update them, one for the initial table sync and\r\nthe other for the application of changes.\r\n\r\n[1] - https://www.postgresql.org/message-id/f1d58324-8df4-4bb5-a546-8c741c2e6fa8%40www.fastmail.com\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 16 Sep 2022 13:51:07 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
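The "insufficient data left in message" error quoted in point (4) is the generic complaint PostgreSQL raises when a binary receive function asks the row message for more bytes than the sender wrote — e.g. a 4-byte int4 datum arriving for a bigint subscriber column whose reader wants 8 bytes. A toy Python model of that failure mode (illustrative only, not the server code):

```python
import struct

def toy_int8_recv(payload):
    # Stand-in for the subscriber-side int8 reader: it unconditionally
    # consumes 8 bytes, the way pq_getmsgint64() does, and complains when
    # the sender (here an int4 publisher column) supplied fewer.
    if len(payload) < 8:
        raise ValueError("insufficient data left in message")
    return struct.unpack("!q", payload[:8])[0]

int4_datum = struct.pack("!i", 100)  # what an int4 publisher column sends
int8_datum = struct.pack("!q", 100)  # what the int8 reader expects

matching_value = toy_int8_recv(int8_datum)  # same type: decodes fine
try:
    toy_int8_recv(int4_datum)               # mismatched type: too few bytes
    mismatch_failed = False
except ValueError:
    mismatch_failed = True
```

Text-format COPY avoids this class of failure because both sides go through the type's textual input function, which is part of why the thread keeps the binary behaviour opt-in.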
{
"msg_contents": "Hi\r\n\r\n\r\nFew more minor comments.\r\n\r\nOn Tuesday, August 16, 2022 2:04 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\r\n> \r\n> \r\n> \tMy main concern is to break a scenario that was previously working (14\r\n> -> 15) but after a subscriber upgrade\r\n> \tit won't (14 -> 16).\r\n> \r\n> Fair concern. Some cases that might break the logical replication with version\r\n> upgrade would be:\r\n...\r\n> 3- Copying in binary format would work with the same schemas. Currently,\r\n> logical replication does not require the exact same schemas in publisher and\r\n> subscriber.\r\n> This is an additional restriction that comes with the COPY command.\r\n> \r\n> If a logical replication has been set up with different schemas and subscription\r\n> is created with the binary option, then yes this would break things.\r\n> This restriction can be clearly stated and wouldn't be unexpected though.\r\n> \r\n> I'm also okay with allowing binary copy only for v16 or later, if you think it would\r\n> be safer and no one disagrees with that.\r\n> What are your thoughts?\r\nI agree with the direction to support binary copy for v16 and later.\r\n\r\nIIUC, the binary format replication with different data types fails even during apply phase on HEAD.\r\nI thought that means, the upgrade concern only applies to a scenario that the user executes\r\nonly initial table synchronizations between the publisher and subscriber\r\nand doesn't replicate any data at apply phase after that. I would say\r\nthis isn't a valid scenario and your proposal makes sense.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 22 Sep 2022 03:22:04 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi Takamichi,\n\nThanks for your reviews.\n\nI addressed your reviews, please find the attached patch.\n\nosumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com>, 16 Eyl 2022 Cum,\n16:51 tarihinde şunu yazdı:\n\n> (1) whitespace issues\n>\n\nFixed\n\n(2) Suggestion to update another general description about the subscription\n>\n> Kindly have a look at doc/src/sgml/logical-replication.sgml.\n>\n> \"The data types of the columns do not need to match,\n> as long as the text representation of the data can be converted to the\n> target type.\n> For example, you can replicate from a column of type integer to a column\n> of type bigint.\"\n>\n> With the patch, I think we have an impact about those descriptions\n> since enabling the binary option for a subscription and executing the\n> initial synchronization requires the same data types for binary format.\n>\n> I suggest that we update those descriptions as well.\n>\n\nYou're right, this needs to be stated in the docs. Modified descriptions\naccordingly.\n\n\n> (3) shouldn't test that we fail expectedly with binary copy for different\n> types ?\n>\n> How about having a test that we correctly fail with different data types\n> between the publisher and the subscriber, for instance ?\n>\n\nModified 002_types.pl test such that it now tests the replication between\ndifferent data types.\nIt's expected to fail if the binary is enabled, and succeed if not.\n\n\n> (4) Error message of the binary format copy\n>\n> I've gotten below message from data type contradiction (between integer\n> and bigint).\n> Probably, this is unclear for the users to understand the direct cause\n> and needs to be adjusted ?\n> This might be a similar comment Euler mentioned in [1].\n>\n> 2022-09-16 11:54:54.835 UTC [4570] ERROR: insufficient data left in\n> message\n> 2022-09-16 11:54:54.835 UTC [4570] CONTEXT: COPY tab, line 1, column id\n>\n\nIt's already unclear for users to understand what's the issue if they're\ncopying data 
between different column types via the COPY command.\nThis issue comes from COPY, and logical replication just utilizes COPY.\nI don't think it would make sense to adjust an error message from a\nfunctionality which logical replication only uses and has no direct impact\non.\nIt might be better to do this in a separate patch. What do you think?\n\n\n> (5) Minor adjustment of the test comment in 002_types.pl.\n>\n> +is( $result, $sync_result, 'check initial sync on subscriber');\n> +is( $result_binary, $sync_result, 'check initial sync on subscriber in\n> binary');\n>\n> # Insert initial test data\n>\n> There are two same comments which say \"Insert initial test data\" in this\n> file.\n> We need to update them, one for the initial table sync and\n> the other for the application of changes.\n>\n\nFixed.\n\nI agree with the direction to support binary copy for v16 and later.\n>\n> IIUC, the binary format replication with different data types fails even\n> during apply phase on HEAD.\n> I thought that means, the upgrade concern only applies to a scenario that\n> the user executes\n> only initial table synchronizations between the publisher and subscriber\n> and doesn't replicate any data at apply phase after that. I would say\n> this isn't a valid scenario and your proposal makes sense.\n>\n\nNo, logical replication in binary does not fail on apply phase if data\ntypes are different.\nThe concern with upgrade (if data types are not the same) would be not\nbeing able to create a new subscription with binary enabled or replicate\nnew tables added into publication.\nReplication of tables from existing subscriptions would not be affected by\nthis change since they will already be in the apply phase, not tablesync.\nDo you think this would still be an issue?\n\n\nThanks,\nMelih",
"msg_date": "Mon, 3 Oct 2022 14:50:25 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\n\nOn Monday, October 3, 2022 8:50 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> \t(4) Error message of the binary format copy\n> \n> \tI've gotten below message from data type contradiction (between\n> integer and bigint).\n> \tProbably, this is unclear for the users to understand the direct cause\n> \tand needs to be adjusted ?\n> \tThis might be a similar comment Euler mentioned in [1].\n> \n> \t2022-09-16 11:54:54.835 UTC [4570] ERROR: insufficient data left in\n> message\n> \t2022-09-16 11:54:54.835 UTC [4570] CONTEXT: COPY tab, line 1,\n> column id\n>\n> It's already unclear for users to understand what's the issue if they're copying\n> data between different column types via the COPY command.\n> This issue comes from COPY, and logical replication just utilizes COPY.\n> I don't think it would make sense to adjust an error message from a functionality\n> which logical replication only uses and has no direct impact on.\n> It might be better to do this in a separate patch. What do you think?\nYes, makes sense. It should be a separate patch.\n\n\n> \tI agree with the direction to support binary copy for v16 and later.\n> \n> \tIIUC, the binary format replication with different data types fails even\n> during apply phase on HEAD.\n> \tI thought that means, the upgrade concern only applies to a scenario\n> that the user executes\n> \tonly initial table synchronizations between the publisher and subscriber\n> \tand doesn't replicate any data at apply phase after that. I would say\n> \tthis isn't a valid scenario and your proposal makes sense.\n> \n> No, logical replication in binary does not fail on apply phase if data types are\n> different.\nWith HEAD, I observe in some case we fail at apply phase because of different data types like \ninteger vs. bigint as written scenario in [1]. 
In short, I think we can slightly\nadjust your documentation and make it more general so that the description applies to\nboth table sync phase and apply phase.\n\nI'll suggest a below change for your sentence of logical-replication.sgml.\nFROM:\nIn binary case, it is not allowed to replicate data between different types due to restrictions inherited from COPY.\nTO:\nBinary format is type specific and does not allow to replicate data between different types according to its\nrestrictions.\n\n\nIf my idea above is correct, then I feel we can remove all the fixes for create_subscription.sgml.\nI'm not sure if I should pursue this perspective of the document improvement\nany further after this email, since this isn't essentially because of this patch.\n\n\n\n> The concern with upgrade (if data types are not the same) would be not being\n> able to create a new subscription with binary enabled or replicate new tables\n> added into publication.\n> Replication of tables from existing subscriptions would not be affected by this\n> change since they will already be in the apply phase, not tablesync.\n> Do you think this would still be an issue?\nOkay, thanks for explaining this. I understand that\nthe upgrade concern applies to the table sync that is executed\nbetween text format (before the patch) and binary format (after the patch).\n\n\n\n\n[1] - binary format test that we fail for different types on apply phase on HEAD\n\n<publisher>\ncreate table tab (id integer);\ninsert into tab values (100);\ncreate publication mypub for table tab;\n\n<subscriber>\ncreate table tab (id bigint);\ncreate subscription mysub connection '...' 
publication mypub with (copy_data = false, binary = true, disable_on_error = true);\n\n-- wait for several seconds\n\n<subscriber>\nselect srsubid, srrelid, srrelid::regclass, srsubstate, srsublsn from pg_subscription_rel; -- check the status as 'r' for the relation\nselect * from tab; -- confirm we don't copy the initial data on the pub\n\n<publisher>\ninsert into tab values (1), (2);\n\n-- wait for several seconds\n\n<subscriber>\nselect subname, subenabled from pg_subscription; -- shows 'f' for the 2nd column because of an error\nselect * from tab -- no records\n\nThis error doesn't happen when we adopt 'integer' on the subscriber aligned with the publisher\nand we can see the two records on the subscriber.\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Wed, 12 Oct 2022 01:36:04 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hello,\n\nosumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com>, 12 Eki 2022 Çar,\n04:36 tarihinde şunu yazdı:\n\n> > I agree with the direction to support binary copy for v16 and\n> later.\n> >\n> > IIUC, the binary format replication with different data types\n> fails even\n> > during apply phase on HEAD.\n> > I thought that means, the upgrade concern only applies to a\n> scenario\n> > that the user executes\n> > only initial table synchronizations between the publisher and\n> subscriber\n> > and doesn't replicate any data at apply phase after that. I would\n> say\n> > this isn't a valid scenario and your proposal makes sense.\n> >\n> > No, logical replication in binary does not fail on apply phase if data\n> types are\n> > different.\n> With HEAD, I observe in some case we fail at apply phase because of\n> different data types like\n> integer vs. bigint as written scenario in [1]. In short, I think we can\n> slightly\n> adjust your documentation and make it more general so that the description\n> applies to\n> both table sync phase and apply phase.\n>\n\nYes, you're right. I somehow had the impression that HEAD supports\nreplication between different types in binary.\nBut as can be shown in the scenario you mentioned, it does not work.\n\nI'll suggest a below change for your sentence of logical-replication.sgml.\n> FROM:\n> In binary case, it is not allowed to replicate data between different\n> types due to restrictions inherited from COPY.\n> TO:\n> Binary format is type specific and does not allow to replicate data\n> between different types according to its\n> restrictions.\n>\n\nIn this case, this change makes sense since this patch does actually not\nintroduce this issue. 
It already exists in HEAD too.\n\n\n> If my idea above is correct, then I feel we can remove all the fixes for\n> create_subscription.sgml.\n> I'm not sure if I should pursue this perspective of the document\n> improvement\n> any further after this email, since this isn't essentially because of this\n> patch.\n>\n\nI'm only keeping the following change in create_subscription.sgml to\nindicate binary option copies in binary format now.\n\n> - Specifies whether the subscription will request the publisher to\n> - send the data in binary format (as opposed to text).\n> + Specifies whether the subscription will copy the initial data to\n> + synchronize relations in binary format and also request the\n> publisher\n> + to send the data in binary format too (as opposed to text).\n\n\n\n\n> > The concern with upgrade (if data types are not the same) would be not\n> being\n> > able to create a new subscription with binary enabled or replicate new\n> tables\n> > added into publication.\n> > Replication of tables from existing subscriptions would not be affected\n> by this\n> > change since they will already be in the apply phase, not tablesync.\n> > Do you think this would still be an issue?\n> Okay, thanks for explaining this. I understand that\n> the upgrade concern applies to the table sync that is executed\n> between text format (before the patch) and binary format (after the patch).\n>\n\nI was thinking apply would work with different types in binary format.\nSince apply also would not work, then the scenario that I tried to explain\nearlier is not a concern anymore.\n\n\nAttached patch with updated version of this patch.\n\nThanks,\nMelih",
"msg_date": "Mon, 14 Nov 2022 15:07:30 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Mon, Nov 14, 2022 8:08 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\r\n>\r\n> Attached patch with updated version of this patch.\r\n\r\nThanks for your patch.\r\n\r\nI tried to do a performance test for this patch, the result looks good to me.\r\n(The steps are similar to what Melih shared [1].)\r\n\r\nThe time to synchronize about 1GB data in binary (the average of the middle\r\neight was taken):\r\nHEAD: 16.854 s\r\nPatched: 6.805 s\r\n\r\nBesides, here are some comments.\r\n\r\n1.\r\n+# Binary enabled subscription should fail\r\n+$node_subscriber_binary->wait_for_log(\"ERROR: insufficient data left in message\");\r\n\r\nShould it be changed to \"ERROR: ( [A-Z0-9]+:)? \", like other subscription tests.\r\n\r\n2.\r\n+# Binary disabled subscription should succeed\r\n+$node_publisher->wait_for_catchup('tap_sub');\r\n\r\nIf we want to wait for table synchronization to finish, should we call\r\nwait_for_subscription_sync()?\r\n\r\n3.\r\nI also think it might be better to support copy binary only for publishers of\r\nv16 or later. Do you plan to implement it in the patch?\r\n \r\n[1] https://www.postgresql.org/message-id/CAGPVpCQEKDVKQPf6OFQ-9WiRYB1YRejm--YJTuwgzuvj1LEJ2A%40mail.gmail.com\r\n\r\nRegards,\r\nShi yu\r\n\r\n",
"msg_date": "Wed, 11 Jan 2023 08:56:28 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nThanks for your review.\n\nshiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com>, 11 Oca 2023 Çar, 11:56\ntarihinde şunu yazdı:\n\n> On Mon, Nov 14, 2022 8:08 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> 1.\n> +# Binary enabled subscription should fail\n> +$node_subscriber_binary->wait_for_log(\"ERROR: insufficient data left in\n> message\");\n>\n> Should it be changed to \"ERROR: ( [A-Z0-9]+:)? \", like other subscription\n> tests.\n>\n\nDone.\n\n\n> 2.\n> +# Binary disabled subscription should succeed\n> +$node_publisher->wait_for_catchup('tap_sub');\n>\n> If we want to wait for table synchronization to finish, should we call\n> wait_for_subscription_sync()?\n>\n\nDone.\n\n\n> 3.\n> I also think it might be better to support copy binary only for publishers\n> of\n> v16 or later. Do you plan to implement it in the patch?\n>\n\nDone.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Wed, 11 Jan 2023 13:44:30 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Wed, 11 Jan 2023 at 16:14, Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> Thanks for your review.\n>\n> shiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com>, 11 Oca 2023 Çar, 11:56 tarihinde şunu yazdı:\n>>\n>> On Mon, Nov 14, 2022 8:08 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>> 1.\n>> +# Binary enabled subscription should fail\n>> +$node_subscriber_binary->wait_for_log(\"ERROR: insufficient data left in message\");\n>>\n>> Should it be changed to \"ERROR: ( [A-Z0-9]+:)? \", like other subscription tests.\n>\n>\n> Done.\n>\n>>\n>> 2.\n>> +# Binary disabled subscription should succeed\n>> +$node_publisher->wait_for_catchup('tap_sub');\n>>\n>> If we want to wait for table synchronization to finish, should we call\n>> wait_for_subscription_sync()?\n>\n>\n> Done.\n>\n>>\n>> 3.\n>> I also think it might be better to support copy binary only for publishers of\n>> v16 or later. Do you plan to implement it in the patch?\n>\n>\n> Done.\n\nFor some reason CFBot is not able to apply the patch as in [1], Could\nyou have a look and post an updated patch if required:\n=== Applying patches on top of PostgreSQL commit ID\nc96de2ce1782116bd0489b1cd69ba88189a495e8 ===\n=== applying patch\n./v5-0001-Allow-logical-replication-to-copy-table-in-binary.patch\ngpatch: **** Only garbage was found in the patch input.\n\n[1] - http://cfbot.cputube.org/patch_41_3840.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 11 Jan 2023 22:04:34 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Wednesday, January 11, 2023 7:45 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> Thanks for your review.\n> Done.\nHi, minor comments on v5.\n\n\n(1) publisher's version check\n\n+ /* If the publisher is v16 or later, copy in binary format.*/\n+ server_version = walrcv_server_version(LogRepWorkerWalRcvConn);\n+ if (server_version >=160000 && MySubscription->binary)\n+ {\n+ appendStringInfoString(&cmd, \" WITH (FORMAT binary)\");\n+ options = lappend(options, makeDefElem(\"format\", (Node *) makeString(\"binary\"), -1));\n+ }\n+\n+ elog(LOG, \"version: %i, %s\", server_version, cmd.data);\n\n(1-1) There is no need to log the version and the query by elog here.\n(1-2) Also, I suggest we can remove the server_version variable itself,\n because we have only one actual reference for it.\n There is a style that we call walrcv_server_version in the\n 'if condition' directly like existing codes in fetch_remote_table_info().\n(1-3) Suggestion to improve comments.\n FROM:\n /* If the publisher is v16 or later, copy in binary format.*/\n TO:\n /* If the publisher is v16 or later, copy data in the required data format. 
*/\n\n\n(2) Minor suggestion for some test code alignment.\n\n $result =\n $node_subscriber->safe_psql('postgres',\n \"SELECT sum(a) FROM tst_dom_constr\");\n-is($result, '21', 'sql-function constraint on domain');\n+is($result, '33', 'sql-function constraint on domain');\n+\n+$result_binary =\n+ $node_subscriber->safe_psql('postgres',\n+ \"SELECT sum(a) FROM tst_dom_constr\");\n+is($result_binary, '33', 'sql-function constraint on domain');\n\n\nI think if we change the order of this part of check like below, then\nit would look more aligned with other existing test codes introduced by this patch.\n\n---\nmy $domain_check = 'SELECT sum(a) FROM tst_dom_constr';\n$result = $node_subscriber->safe_psql('postgres', $domain_check);\n$result_binary = $node_subscriber->safe_psql('postgres', $domain_check);\nis($result, '33', 'sql-function constraint on domain');\nis($result_binary, '33', 'sql-function constraint on domain in binary');\n---\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Thu, 12 Jan 2023 03:07:24 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nThanks for your reviews.\n\nTakamichi Osumi (Fujitsu) <osumi.takamichi@fujitsu.com>, 12 Oca 2023 Per,\n06:07 tarihinde şunu yazdı:\n\n> On Wednesday, January 11, 2023 7:45 PM Melih Mutlu <m.melihmutlu@gmail.com>\n> wrote:\n> (1-1) There is no need to log the version and the query by elog here.\n> (1-2) Also, I suggest we can remove the server_version variable itself,\n> because we have only one actual reference for it.\n> There is a style that we call walrcv_server_version in the\n> 'if condition' directly like existing codes in\n> fetch_remote_table_info().\n> (1-3) Suggestion to improve comments.\n> FROM:\n> /* If the publisher is v16 or later, copy in binary format.*/\n> TO:\n> /* If the publisher is v16 or later, copy data in the required data\n> format. */\n>\n\nForgot to remove that LOG line. Removed it now and applied other\nsuggestions too.\n\n\n> I think if we change the order of this part of check like below, then\n> it would look more aligned with other existing test codes introduced by\n> this patch.\n>\n\nRight. Changed it to make it more aligned with the rest.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Thu, 12 Jan 2023 11:23:09 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 1:53 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> Thanks for your reviews.\n\nThanks. I have some comments:\n\n1. The performance numbers posted upthread [1] look impressive for the\nuse-case tried, that's a 2.25X improvement or 55.6% reduction in\nexecution times. However, it'll be great to run a few more varied\ntests to confirm the benefit.\n\n2. It'll be great to capture the perf report to see if the time spent\nin copy_table() is reduced with the patch.\n\n3. I think blending initial table sync's binary copy option with\ndata-type level binary send/receive is not good. Moreover, data-type\nlevel binary send/receive has its own restrictions per 9de77b5453.\nIMO, it'll be good to have a new option, say copy_data_format synonyms\nwith COPY command's FORMAT option but with only text (default) and\nbinary values.\n\n4. Why to add tests to existing 002_types.pl? Can we add a new file\nwith all the data types covered?\n\n5. It's not clear to me as to why these rows were removed in the patch?\n- (1, '{1, 2, 3}'),\n- (2, '{2, 3, 1}'),\n (3, '{3, 2, 1}'),\n (4, '{4, 3, 2}'),\n (5, '{5, NULL, 3}');\n\n -- test_tbl_arrays\n INSERT INTO tst_arrays (a, b, c, d) VALUES\n- ('{1, 2, 3}', '{\"a\", \"b\", \"c\"}', '{1.1, 2.2, 3.3}', '{\"1\nday\", \"2 days\", \"3 days\"}'),\n- ('{2, 3, 1}', '{\"b\", \"c\", \"a\"}', '{2.2, 3.3, 1.1}', '{\"2\nminutes\", \"3 minutes\", \"1 minute\"}'),\n\n6. BTW, the subbinary description is missing in pg_subscription docs\nhttps://www.postgresql.org/docs/devel/catalog-pg-subscription.html?\n- Specifies whether the subscription will request the publisher to\n- send the data in binary format (as opposed to text).\n+ Specifies whether the subscription will copy the initial data to\n+ synchronize relations in binary format and also request the publisher\n+ to send the data in binary format too (as opposed to text).\n\n7. 
A nitpick - space is needed after >= before 160000.\n+ if (walrcv_server_version(LogRepWorkerWalRcvConn) >=160000 &&\n\n8. Note that the COPY binary format isn't portable across platforms\n(Windows to Linux for instance) or major versions\nhttps://www.postgresql.org/docs/devel/sql-copy.html, whereas, logical\nreplication is https://www.postgresql.org/docs/devel/logical-replication.html.\nI don't see any handling of such cases in copy_table but only a check\nfor the publisher version. I think we need to account for all the\ncases - allow binary COPY only when publisher and subscriber are of\nsame versions, architectures, platforms. The trick here is how we\nidentify if the subscriber is of the same type and taste\n(architectures and platforms) as the publisher. Looking for\nPG_VERSION_STR of publisher and subscriber might be naive, but I'm not\nsure if there's a better way to do it.\n\nAlso, the COPY binary format is very data type specific - per the docs\n\"for example it will not work to output binary data from a smallint\ncolumn and read it into an integer column, even though that would work\nfine in text format.\". 
I did a small experiment [2], the logical\nreplication works with compatible data types (int -> smallint, even\nint -> text), whereas the COPY binary format doesn't.\n\nI think it'll complicate things a bit to account for the above cases\nand allow COPY with binary format for logical replication.\n\n[1] https://www.postgresql.org/message-id/CAGPVpCQEKDVKQPf6OFQ-9WiRYB1YRejm--YJTuwgzuvj1LEJ2A%40mail.gmail.com\n[2]\nDROP TABLE foo;\nDROP PUBLICATION mypub;\nCREATE TABLE foo(c1 bigint, c2 int, c3 smallint);\nINSERT INTO foo SELECT i , i+1, i+2 FROM generate_series(1, 5) i;\nCREATE PUBLICATION mypub FOR TABLE foo;\nSELECT COUNT(*) FROM foo;\nSELECT * FROM foo;\n\nDROP SUBSCRIPTION mysub;\nDROP TABLE foo;\nCREATE TABLE foo(c1 smallint, c2 smallint, c3 smallint); -- works\nwithout any problem\n-- OR\nCREATE TABLE foo(c1 smallint, c2 text, c3 smallint); -- works without\nany problem\nCREATE SUBSCRIPTION mysub CONNECTION 'port=5432 dbname=postgres\nuser=ubuntu' PUBLICATION mypub;\nSELECT COUNT(*) FROM foo;\nSELECT * FROM foo;\n\ndrop table foo;\ncreate table foo(c1 bigint, c2 int, c3 smallint);\ninsert into foo select i, i+1, i+2 from generate_series(1, 10) i;\ncopy foo(c1, c2, c3) to '/home/ubuntu/postgres/inst/bin/data/foo.text'\nwith (format 'text');\ncopy foo(c1, c2, c3) to\n'/home/ubuntu/postgres/inst/bin/data/foo.binary' with (format\n'binary');\ndrop table bar;\ncreate table bar(c1 smallint, c2 smallint, c3 smallint);\n-- or\ncreate table bar(c1 smallint, c2 text, c3 smallint);\ncopy bar(c1, c2, c3) from\n'/home/ubuntu/postgres/inst/bin/data/foo.text' with (format 'text');\ncopy bar(c1, c2, c3) from\n'/home/ubuntu/postgres/inst/bin/data/foo.binary' with (format\n'binary'); -- produces \"ERROR: incorrect binary data format\"\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 18 Jan 2023 12:46:49 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi Bharath,\n\nThanks for reviewing.\n\nBharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>, 18 Oca 2023\nÇar, 10:17 tarihinde şunu yazdı:\n\n> On Thu, Jan 12, 2023 at 1:53 PM Melih Mutlu <m.melihmutlu@gmail.com>\n> wrote:\n> 1. The performance numbers posted upthread [1] look impressive for the\n> use-case tried, that's a 2.25X improvement or 55.6% reduction in\n> execution times. However, it'll be great to run a few more varied\n> tests to confirm the benefit.\n>\n\nSure, do you have any specific test case or suggestion in mind?\n\n\n> 2. It'll be great to capture the perf report to see if the time spent\n> in copy_table() is reduced with the patch.\n>\n\nWill provide that in another email soon.\n\n\n> 3. I think blending initial table sync's binary copy option with\n> data-type level binary send/receive is not good. Moreover, data-type\n> level binary send/receive has its own restrictions per 9de77b5453.\n> IMO, it'll be good to have a new option, say copy_data_format synonyms\n> with COPY command's FORMAT option but with only text (default) and\n> binary values.\n>\n\nAdded a \"copy_format\" option for subscriptions with text as default value.\nSo it would be possible to create a binary subscription but copy tables in\ntext format to avoid restrictions that you're concerned about.\n\n\n> 4. Why to add tests to existing 002_types.pl? Can we add a new file\n> with all the data types covered?\n>\n\nSince 002_types.pl is where the replication of data types are covered. I\nthought it would be okay to test replication with the binary option in that\nfile.\nSure, we can add a new file with different data types for testing\nsubscriptions with binary option. 
But then I feel like it would have too\nmany duplicated lines with 002_types.pl.\nIf you think that 002_types.pl lacks some data types needs to be tested,\nthen we should add those into 002_types.pl too whether we test subscription\nwith binary option in that file or some other place, right?\n\n\n> 5. It's not clear to me as to why these rows were removed in the patch?\n> - (1, '{1, 2, 3}'),\n> - (2, '{2, 3, 1}'),\n> (3, '{3, 2, 1}'),\n> (4, '{4, 3, 2}'),\n> (5, '{5, NULL, 3}');\n>\n> -- test_tbl_arrays\n> INSERT INTO tst_arrays (a, b, c, d) VALUES\n> - ('{1, 2, 3}', '{\"a\", \"b\", \"c\"}', '{1.1, 2.2, 3.3}', '{\"1\n> day\", \"2 days\", \"3 days\"}'),\n> - ('{2, 3, 1}', '{\"b\", \"c\", \"a\"}', '{2.2, 3.3, 1.1}', '{\"2\n> minutes\", \"3 minutes\", \"1 minute\"}'),\n>\n\nPreviously, it wasn't actually testing the initial table sync since all\ntables were empty when subscription was created.\nI just simply split the data initially inserted to test initial table sync.\n\nWith this patch, it inserts the first two rows for all data types before\nsubscriptions get created.\nYou can see these lines:\n\n> +# Insert initial test data\n> +$node_publisher->safe_psql(\n> + 'postgres', qq(\n> + -- test_tbl_one_array_col\n> + INSERT INTO tst_one_array (a, b) VALUES\n> + (1, '{1, 2, 3}'),\n> + (2, '{2, 3, 1}');\n> +\n> + -- test_tbl_arrays\n> + INSERT INTO tst_arrays (a, b, c, d) VALUES\n\n\n\n\n> 6. BTW, the subbinary description is missing in pg_subscription docs\n> https://www.postgresql.org/docs/devel/catalog-pg-subscription.html?\n> - Specifies whether the subscription will request the publisher to\n> - send the data in binary format (as opposed to text).\n> + Specifies whether the subscription will copy the initial data to\n> + synchronize relations in binary format and also request the\n> publisher\n> + to send the data in binary format too (as opposed to text).\n>\n\nDone.\n\n\n> 7. 
A nitpick - space is needed after >= before 160000.\n> + if (walrcv_server_version(LogRepWorkerWalRcvConn) >=160000 &&\n>\n\nDone.\n\n\n> 8. Note that the COPY binary format isn't portable across platforms\n> (Windows to Linux for instance) or major versions\n> https://www.postgresql.org/docs/devel/sql-copy.html, whereas, logical\n> replication is\n> https://www.postgresql.org/docs/devel/logical-replication.html.\n> I don't see any handling of such cases in copy_table but only a check\n> for the publisher version. I think we need to account for all the\n> cases - allow binary COPY only when publisher and subscriber are of\n> same versions, architectures, platforms. The trick here is how we\n> identify if the subscriber is of the same type and taste\n> (architectures and platforms) as the publisher. Looking for\n> PG_VERSION_STR of publisher and subscriber might be naive, but I'm not\n> sure if there's a better way to do it.\n>\n\nI think having the \"copy_format\" option with text as default, like I\nreplied to your 3rd review above, will keep things as they are now.\nThe patch now will only allow users to choose binary copy as well, if they\nwant it and acknowledge the limitations that come with binary copy.\nCOPY command's portability issues shouldn't be an issue right now, since\nthe patch still supports text format. Right?\n\n\n> Also, the COPY binary format is very data type specific - per the docs\n> \"for example it will not work to output binary data from a smallint\n> column and read it into an integer column, even though that would work\n> fine in text format.\". I did a small experiment [2], the logical\n> replication works with compatible data types (int -> smallint, even\n> int -> text), whereas the COPY binary format doesn't.\n>\n\nLogical replication between different types like int and smallint is\nalready not working properly on HEAD too.\nYes, the scenario you shared looks like working. But you didn't create the\nsubscription with binary=true. 
The patch did not change subscription with\nbinary=false case. I believe what you should experiment is binary=true case\nwhich already fails in the apply phase on HEAD.\n\nWell, with this patch, it will begin to fail in the table copy phase. But I\ndon't think this is a problem because logical replication in binary format\nis already broken for replications between different data types.\n\nPlease see [1] and you'll get the following error in your case:\n\"ERROR: incorrect binary data format in logical replication column 1\"\n\n[1]\nPublisher:\nDROP TABLE foo;\nDROP PUBLICATION mypub;\nCREATE TABLE foo(c1 bigint, c2 int, c3 smallint);\nINSERT INTO foo SELECT i , i+1, i+2 FROM generate_series(1, 5) i;\nCREATE PUBLICATION mypub FOR TABLE foo;\nSELECT COUNT(*) FROM foo;\nSELECT * FROM foo;\n\nSubscriber:\nDROP SUBSCRIPTION mysub;\nDROP TABLE foo;\nCREATE TABLE foo(c1 smallint, c2 smallint, c3 smallint);\nCREATE SUBSCRIPTION mysub CONNECTION 'port=5432 dbname=postgres\nuser=ubuntu' PUBLICATION mypub WITH(binary); -- table sync will be\nsuccessful since they're copied in text even though set binary=true\nSELECT COUNT(*) FROM foo;\nSELECT * FROM foo;\n\nBack to publisher:\nINSERT INTO foo SELECT i , i+1, i+2 FROM generate_series(1, 5) i; -- insert\nmore rows to see whether the apply also works\nSELECT COUNT(*) FROM foo; -- you'll see that new rows does not get\nreplicated\n\nIn subscriber logs:\nLOG: logical replication apply worker for subscription \"mysub\" has started\nERROR: incorrect binary data format in logical replication column 1\nCONTEXT: processing remote data for replication origin \"pg_16395\" during\nmessage type \"INSERT\" for replication target relation \"public.foo\" column\n\"c1\" in transaction 747, finished at 0/157F3E0\nLOG: background worker \"logical replication worker\" (PID 16903) exited\nwith exit code 1\n\nBest,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Mon, 30 Jan 2023 13:49:34 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 4:19 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n\nThanks for providing an updated patch.\n\n>> On Thu, Jan 12, 2023 at 1:53 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>> 1. The performance numbers posted upthread [1] look impressive for the\n>> use-case tried, that's a 2.25X improvement or 55.6% reduction in\n>> execution times. However, it'll be great to run a few more varied\n>> tests to confirm the benefit.\n>\n> Sure, do you have any specific test case or suggestion in mind?\n\nHave a huge amount of publisher's table (with mix of columns like int,\ntext, double, bytea and so on) prior data so that the subscriber's\ntable sync workers have to do a \"good\" amount of work to copy, then\nmeasure the copy_table time with and without patch.\n\n>> 2. It'll be great to capture the perf report to see if the time spent\n>> in copy_table() is reduced with the patch.\n>\n> Will provide that in another email soon.\n\nThanks.\n\n>> 4. Why to add tests to existing 002_types.pl? Can we add a new file\n>> with all the data types covered?\n>\n> Since 002_types.pl is where the replication of data types are covered. I thought it would be okay to test replication with the binary option in that file.\n> Sure, we can add a new file with different data types for testing subscriptions with binary option. 
But then I feel like it would have too many duplicated lines with 002_types.pl.\n> If you think that 002_types.pl lacks some data types needs to be tested, then we should add those into 002_types.pl too whether we test subscription with binary option in that file or some other place, right?\n>\n> Previously, it wasn't actually testing the initial table sync since all tables were empty when subscription was created.\n> I just simply split the data initially inserted to test initial table sync.\n>\n> With this patch, it inserts the first two rows for all data types before subscriptions get created.\n> You can see these lines:\n\nIt'd be better and clearer to have a separate TAP test file IMO since\nthe focus of the feature here isn't to just test for data types. With\nseparate tests, you can verify \"ERROR: incorrect binary data format\nin logical replication column 1\" cases.\n\n>> 8. Note that the COPY binary format isn't portable across platforms\n>> (Windows to Linux for instance) or major versions\n>> https://www.postgresql.org/docs/devel/sql-copy.html, whereas, logical\n>> replication is https://www.postgresql.org/docs/devel/logical-replication.html.\n>> I don't see any handling of such cases in copy_table but only a check\n>> for the publisher version. I think we need to account for all the\n>> cases - allow binary COPY only when publisher and subscriber are of\n>> same versions, architectures, platforms. The trick here is how we\n>> identify if the subscriber is of the same type and taste\n>> (architectures and platforms) as the publisher. 
Looking for\n>> PG_VERSION_STR of publisher and subscriber might be naive, but I'm not\n>> sure if there's a better way to do it.\n>\n> I think having the \"copy_format\" option with text as default, like I replied to your 3rd review above, will keep things as they are now.\n> The patch now will only allow users to choose binary copy as well, if they want it and acknowledge the limitations that come with binary copy.\n> COPY command's portability issues shouldn't be an issue right now, since the patch still supports text format. Right?\n\nWith the above said, do you think checking for publisher versions is\nneeded? The user can decide to enable/disable binary COPY based on the\npublisher's version no?\n+ /* If the publisher is v16 or later, specify the format to copy data. */\n+ if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 160000)\n+ {\n\nFew more comments on v7:\n1.\n+ Specifies the format in which pre-existing data on the publisher will\n+ copied to the subscriber. Supported formats are\n+ <literal>text</literal> and <literal>binary</literal>. The default is\n+ <literal>text</literal>.\nIt'll be good to call out the cases in the documentation as to where\ncopy_format can be enabled and needs to be disabled.\n\n2.\n+ errmsg(\"%s value should be either \\\"text\\\" or \\\"binary\\\"\",\nHow about \"value must be either ....\"?\n\n3.\n+ if (!opts->binary &&\n+ opts->copy_format == LOGICALREP_COPY_AS_BINARY)\n+ {\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"%s and %s are mutually exclusive options\",\n+ \"binary = false\", \"copy_format = binary\")));\n\n+ \"CREATE SUBSCRIPTION tap_sub_binary CONNECTION\n'$publisher_connstr' PUBLICATION tap_pub WITH (slot_name =\ntap_sub_binary_slot, binary = true, copy_format = 'binary')\"\nWhy should the subscription's binary option and copy_format option be\ntied at all? Tying these two options hurts usability. Is there a\nfundamental reason? I think they both are/must be independent. 
One\ndeals with data types and another deals with how initial table data is\ncopied.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 30 Jan 2023 18:04:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Monday, January 30, 2023 7:50 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> Thanks for reviewing. \n...\n> Please see [1] and you'll get the following error in your case:\n> \"ERROR: incorrect binary data format in logical replication column 1\"\nHi, thanks for sharing v7.\n\n(1) general comment\n\nI wondered if the addition of the new option/parameter can introduce some confusion to the users.\n\ncase 1. When binary = true and copy_format = text, the table sync is conducted by text.\ncase 2. When binary = false and copy_format = binary, the table sync is conducted by binary.\n(Note that the case 2 can be accepted after addressing the 3rd comment of Bharath-san in [1].\nI agree with the 3rd comment by itself.)\n\nThe name of the new subscription parameter looks confusing.\nHow about \"init_sync_format\" or something ?\n\n(2) The commit message doesn't get updated.\n\nThe commit message needs to mention the new subscription option.\n\n(3) whitespace errors.\n\n$ git am v7-0001-Allow-logical-replication-to-copy-table-in-binary.patch\nApplying: Allow logical replication to copy table in binary\n.git/rebase-apply/patch:95: trailing whitespace.\n copied to the subscriber. 
Supported formats are\n.git/rebase-apply/patch:101: trailing whitespace.\n that data will not be copied if <literal>copy_data = false</literal>.\nwarning: 2 lines add whitespace errors.\n\n(4) pg_dump.c\n\n if (fout->remoteVersion >= 160000)\n- appendPQExpBufferStr(query, \" s.suborigin\\n\");\n+ appendPQExpBufferStr(query, \" s.suborigin,\\n\");\n else\n- appendPQExpBuffer(query, \" '%s' AS suborigin\\n\", LOGICALREP_ORIGIN_ANY);\n+ appendPQExpBuffer(query, \" '%s' AS suborigin,\\n\", LOGICALREP_ORIGIN_ANY);\n+\n+ if (fout->remoteVersion >= 160000)\n+ appendPQExpBufferStr(query, \" s.subcopyformat\\n\");\n+ else\n+ appendPQExpBuffer(query, \" '%c' AS subcopyformat\\n\", LOGICALREP_COPY_AS_TEXT);\n\nThis new branch for v16 can be made together with the previous same condition.\n\n(5) describe.c\n\n+\n+ /* Copy format is only supported in v16 and higher */\n+ if (pset.sversion >= 160000)\n+ appendPQExpBuffer(&buf,\n+ \", subcopyformat AS \\\"%s\\\"\\n\",\n+ gettext_noop(\"Copy Format\"));\n\n\nSimilarly to (4), this creates a new branch for v16. Please see the above codes of this part.\n\n(6) \n\n+ * Extract the copy format value from a DefElem.\n+ */\n+char\n+defGetCopyFormat(DefElem *def)\n\nShouldn't this function be static and remove the change of subscriptioncmds.h ?\n\n(7) catalogs.sgml\n\nThe subcopyformat should be mentioned here and the current description for subbinary\nshould be removed.\n\n(8) create_subscription.sgml\n\n+ <literal>text</literal>.\n+\n+ <literal>binary</literal> format can be selected only if\n\nUnnecessary blank line.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACW5Oa7_v25iZb326UXvtM_tjQfw0Tc3hPJ8zN4FZqc9cw%40mail.gmail.com\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Wed, 1 Feb 2023 03:05:49 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi, Melih\n\n\nOn Monday, January 30, 2023 7:50 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> Thanks for reviewing. \n...\n> Well, with this patch, it will begin to fail in the table copy phase...\nThe latest patch doesn't get updated for more than two weeks\nafter some review comments. If you don't have time,\nI would like to help updating the patch for you and other reviewers.\n\nKindly let me know your status.\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Thu, 16 Feb 2023 05:16:51 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nPlease see the attached patch for following changes.\n\nBharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>, 30 Oca 2023\nPzt, 15:34 tarihinde şunu yazdı:\n\n> On Mon, Jan 30, 2023 at 4:19 PM Melih Mutlu <m.melihmutlu@gmail.com>\n> wrote:\n\nIt'd be better and clearer to have a separate TAP test file IMO since\n> the focus of the feature here isn't to just test for data types. With\n> separate tests, you can verify \"ERROR: incorrect binary data format\n> in logical replication column 1\" cases.\n>\n\nMoved some tests from 002_types.pl to 014_binary.pl since this is where\nmost binary features are tested. It covers now \"incorrect data format\"\ncases too.\nAlso added some regression tests for copy_format parameter.\n\n\n> With the above said, do you think checking for publisher versions is\n> needed? The user can decide to enable/disable binary COPY based on the\n> publisher's version no?\n> + /* If the publisher is v16 or later, specify the format to copy data.\n> */\n> + if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 160000)\n> + {\n>\n\nIf the user decides to enable it, then it might be nice to not allow it for\nprevious versions.\nBut I'm not sure. I'm okay to remove it if you all agree.\n\n\n> Few more comments on v7:\n> 1.\n> + Specifies the format in which pre-existing data on the\n> publisher will\n> + copied to the subscriber. Supported formats are\n> + <literal>text</literal> and <literal>binary</literal>. The\n> default is\n> + <literal>text</literal>.\n> It'll be good to call out the cases in the documentation as to where\n> copy_format can be enabled and needs to be disabled.\n>\n\nModified that description a bit. Can you check if that's okay now?\n\n\n> 2.\n> + errmsg(\"%s value should be either \\\"text\\\" or \\\"binary\\\"\",\n> How about \"value must be either ....\"?\n>\n\nDone\n\n\n> 3.\n> Why should the subscription's binary option and copy_format option be\n> tied at all? 
Tying these two options hurts usability. Is there a\n> fundamental reason? I think they both are/must be independent. One\n> deals with data types and another deals with how initial table data is\n> copied.\n>\n\nMy initial purpose with this patch was just making subscriptions with\nbinary option enabled fully binary from initial copy to apply. Things have\nchanged a bit when we decided to move binary copy behind a parameter.\nI didn't actually think there would be any use case where a user wants the\ninitial copy to be in binary format for a sub with binary = false. Do you\nthink it would be useful to copy in binary even for a sub with binary\ndisabled?\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Thu, 16 Feb 2023 12:18:53 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nThanks for reviewing. Please see the v8 here [1]\n\nTakamichi Osumi (Fujitsu) <osumi.takamichi@fujitsu.com>, 1 Şub 2023 Çar,\n06:05 tarihinde şunu yazdı:\n\n> (1) general comment\n>\n> I wondered if the addition of the new option/parameter can introduce some\n> confusion to the users.\n>\n> case 1. When binary = true and copy_format = text, the table sync is\n> conducted by text.\n> case 2. When binary = false and copy_format = binary, the table sync is\n> conducted by binary.\n> (Note that the case 2 can be accepted after addressing the 3rd comment of\n> Bharath-san in [1].\n> I agree with the 3rd comment by itself.)\n>\n\nI replied to Bharath's comment [1], can you please check to see if that\nmakes sense?\n\n\n> The name of the new subscription parameter looks confusing.\n> How about \"init_sync_format\" or something ?\n>\n\nThe option to enable initial sync is named \"copy_data\", so I named the new\nparameter as \"copy_format\" to refer to that copy meant by \"copy_data\".\nMaybe \"copy_data_format\" would be better. I can change it if you think it's\nbetter.\n\n\n> (2) The commit message doesn't get updated.\n>\n\nDone\n\n\n> (3) whitespace errors.\n>\n\nDone\n\n\n> (4) pg_dump.c\n>\n\nDone\n\n\n> (5) describe.c\n>\n\nDone\n\n\n> (6)\n>\n> + * Extract the copy format value from a DefElem.\n> + */\n> +char\n> +defGetCopyFormat(DefElem *def)\n>\n> Shouldn't this function be static and remove the change of\n> subscriptioncmds.h ?\n>\n\nI wanted to make \"defGetCopyFormat\" be consistent with\n\"defGetStreamingMode\" since they're basically doing the same work for\ndifferent parameters. And that function isn't static, so I didn't make\n\"defGetCopyFormat\" static too.\n\n\n> (7) catalogs.sgml\n>\n\nDone\n\n(8) create_subscription.sgml\n>\n\nDone\n\nAlso;\n\nThe latest patch doesn't get updated for more than two weeks\n> after some review comments. 
If you don't have time,\n> I would like to help updating the patch for you and other reviewers.\n>\n> Kindly let me know your status.\n>\n\n Sorry for the delay. This patch is currently one of my priorities.\nHopefully I will share quicker updates from now on.\n\n[1]\nhttps://www.postgresql.org/message-id/CAGPVpCQYi9AYQSS%3DRmGgVNjz5ZEnLB8mACwd9aioVhLmbgiAMA%40mail.gmail.com\n\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Thu, 16 Feb 2023 12:33:37 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 4:19 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi Bharath,\n>\n> Thanks for reviewing.\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>, 18 Oca 2023 Çar, 10:17 tarihinde şunu yazdı:\n>>\n>> On Thu, Jan 12, 2023 at 1:53 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>> 1. The performance numbers posted upthread [1] look impressive for the\n>> use-case tried, that's a 2.25X improvement or 55.6% reduction in\n>> execution times. However, it'll be great to run a few more varied\n>> tests to confirm the benefit.\n>\n>\n> Sure, do you have any specific test case or suggestion in mind?\n>\n>>\n>> 2. It'll be great to capture the perf report to see if the time spent\n>> in copy_table() is reduced with the patch.\n>\n>\n> Will provide that in another email soon.\n>\n>>\n>> 3. I think blending initial table sync's binary copy option with\n>> data-type level binary send/receive is not good. Moreover, data-type\n>> level binary send/receive has its own restrictions per 9de77b5453.\n>> IMO, it'll be good to have a new option, say copy_data_format synonyms\n>> with COPY command's FORMAT option but with only text (default) and\n>> binary values.\n>\n>\n> Added a \"copy_format\" option for subscriptions with text as default value. So it would be possible to create a binary subscription but copy tables in text format to avoid restrictions that you're concerned about.\n>\n>>\n>> 4. Why to add tests to existing 002_types.pl? Can we add a new file\n>> with all the data types covered?\n>\n>\n> Since 002_types.pl is where the replication of data types are covered. I thought it would be okay to test replication with the binary option in that file.\n> Sure, we can add a new file with different data types for testing subscriptions with binary option. 
But then I feel like it would have too many duplicated lines with 002_types.pl.\n> If you think that 002_types.pl lacks some data types needs to be tested, then we should add those into 002_types.pl too whether we test subscription with binary option in that file or some other place, right?\n>\n>>\n>> 5. It's not clear to me as to why these rows were removed in the patch?\n>> - (1, '{1, 2, 3}'),\n>> - (2, '{2, 3, 1}'),\n>> (3, '{3, 2, 1}'),\n>> (4, '{4, 3, 2}'),\n>> (5, '{5, NULL, 3}');\n>>\n>> -- test_tbl_arrays\n>> INSERT INTO tst_arrays (a, b, c, d) VALUES\n>> - ('{1, 2, 3}', '{\"a\", \"b\", \"c\"}', '{1.1, 2.2, 3.3}', '{\"1\n>> day\", \"2 days\", \"3 days\"}'),\n>> - ('{2, 3, 1}', '{\"b\", \"c\", \"a\"}', '{2.2, 3.3, 1.1}', '{\"2\n>> minutes\", \"3 minutes\", \"1 minute\"}'),\n>\n>\n> Previously, it wasn't actually testing the initial table sync since all tables were empty when subscription was created.\n> I just simply split the data initially inserted to test initial table sync.\n>\n> With this patch, it inserts the first two rows for all data types before subscriptions get created.\n> You can see these lines:\n>>\n>> +# Insert initial test data\n>> +$node_publisher->safe_psql(\n>> + 'postgres', qq(\n>> + -- test_tbl_one_array_col\n>> + INSERT INTO tst_one_array (a, b) VALUES\n>> + (1, '{1, 2, 3}'),\n>> + (2, '{2, 3, 1}');\n>> +\n>> + -- test_tbl_arrays\n>> + INSERT INTO tst_arrays (a, b, c, d) VALUES\n>\n>\n>\n>>\n>> 6. BTW, the subbinary description is missing in pg_subscription docs\n>> https://www.postgresql.org/docs/devel/catalog-pg-subscription.html?\n>> - Specifies whether the subscription will request the publisher to\n>> - send the data in binary format (as opposed to text).\n>> + Specifies whether the subscription will copy the initial data to\n>> + synchronize relations in binary format and also request the publisher\n>> + to send the data in binary format too (as opposed to text).\n>\n>\n> Done.\n>\n>>\n>> 7. 
A nitpick - space is needed after >= before 160000.\n>> + if (walrcv_server_version(LogRepWorkerWalRcvConn) >=160000 &&\n>\n>\n> Done.\n>\n>>\n>> 8. Note that the COPY binary format isn't portable across platforms\n>> (Windows to Linux for instance) or major versions\n>> https://www.postgresql.org/docs/devel/sql-copy.html, whereas, logical\n>> replication is https://www.postgresql.org/docs/devel/logical-replication.html.\n>> I don't see any handling of such cases in copy_table but only a check\n>> for the publisher version. I think we need to account for all the\n>> cases - allow binary COPY only when publisher and subscriber are of\n>> same versions, architectures, platforms. The trick here is how we\n>> identify if the subscriber is of the same type and taste\n>> (architectures and platforms) as the publisher. Looking for\n>> PG_VERSION_STR of publisher and subscriber might be naive, but I'm not\n>> sure if there's a better way to do it.\n>\n>\n> I think having the \"copy_format\" option with text as default, like I replied to your 3rd review above, will keep things as they are now.\n> The patch now will only allow users to choose binary copy as well, if they want it and acknowledge the limitations that come with binary copy.\n> COPY command's portability issues shouldn't be an issue right now, since the patch still supports text format. Right?\n>\n\nOne thing that is not completely clear from above is whether we will\nhave any problem if the subscription uses binary mode for copying\nacross the server versions. Do we use binary file during the copy used\nin logical replication?\n\n>>\n>> Also, the COPY binary format is very data type specific - per the docs\n>> \"for example it will not work to output binary data from a smallint\n>> column and read it into an integer column, even though that would work\n>> fine in text format.\". 
I did a small experiment [2], the logical\n>> replication works with compatible data types (int -> smallint, even\n>> int -> text), whereas the COPY binary format doesn't.\n>\n>\n> Logical replication between different types like int and smallint is already not working properly on HEAD too.\n> Yes, the scenario you shared looks like working. But you didn't create the subscription with binary=true. The patch did not change subscription with binary=false case. I believe what you should experiment is binary=true case which already fails in the apply phase on HEAD.\n>\n> Well, with this patch, it will begin to fail in the table copy phase. But I don't think this is a problem because logical replication in binary format is already broken for replications between different data types.\n>\n\nSo, doesn't this mean that there is no separate failure mode during\nthe initial copy? I am clarifying this to see if the patch really\nneeds a separate copy_format option for initial sync?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Feb 2023 18:17:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
    "msg_contents": "Dear Melih,\r\n\r\nThank you for updating the patch. Before reviewing, I found that\r\ncfbot has not accepted the v8 patch [1].\r\n\r\nIIUC src/psql/describe.c has been modified in v8, but src/test/regress/expected/subscription.out\r\nhas not been changed accordingly.\r\n\r\n[1]: https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/42/3840\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 20 Feb 2023 07:12:03 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Thu, Feb 16, 2023 8:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Mon, Jan 30, 2023 at 4:19 PM Melih Mutlu <m.melihmutlu@gmail.com>\r\n> wrote:\r\n> >\r\n> > Logical replication between different types like int and smallint is already not\r\n> working properly on HEAD too.\r\n> > Yes, the scenario you shared looks like working. But you didn't create the\r\n> subscription with binary=true. The patch did not change subscription with\r\n> binary=false case. I believe what you should experiment is binary=true case\r\n> which already fails in the apply phase on HEAD.\r\n> >\r\n> > Well, with this patch, it will begin to fail in the table copy phase. But I don't\r\n> think this is a problem because logical replication in binary format is already\r\n> broken for replications between different data types.\r\n> >\r\n> \r\n> So, doesn't this mean that there is no separate failure mode during\r\n> the initial copy? I am clarifying this to see if the patch really\r\n> needs a separate copy_format option for initial sync?\r\n> \r\n\r\nIn the case that the data type doesn't have binary output function, for apply\r\nphase, the column will be sent in text format (see logicalrep_write_tuple()) and\r\nit works fine. 
But with copy_format = binary, the walsender exits with an\r\nerror.\r\n\r\nFor example:\r\n-- create table on publisher and subscriber\r\nCREATE TYPE myvarchar;\r\nCREATE FUNCTION myvarcharin(cstring, oid, integer) RETURNS myvarchar\r\nLANGUAGE internal IMMUTABLE PARALLEL SAFE STRICT AS 'varcharin';\r\nCREATE FUNCTION myvarcharout(myvarchar) RETURNS cstring\r\nLANGUAGE internal IMMUTABLE PARALLEL SAFE STRICT AS 'varcharout';\r\nCREATE TYPE myvarchar (\r\n input = myvarcharin,\r\n output = myvarcharout,\r\n alignment = integer,\r\n storage = main\r\n);\r\nCREATE TABLE tbl1 (a myvarchar);\r\n\r\n-- create publication and insert some data on publisher\r\ncreate publication pub for table tbl1;\r\nINSERT INTO tbl1 values ('a');\r\n\r\n-- create subscription on subscriber\r\ncreate subscription sub connection 'dbname=postgres port=5432' publication pub with(binary, copy_format = binary);\r\n\r\nThen I got the following error in the publisher log.\r\n\r\nwalsender ERROR: no binary output function available for type public.myvarchar\r\nwalsender STATEMENT: COPY public.tbl1 (a) TO STDOUT WITH (FORMAT binary)\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Mon, 20 Feb 2023 10:07:51 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
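Shi Yu's failure above comes from a type whose CREATE TYPE definition has no send (binary output) function. As an illustration only (not part of the patch), the following query can spot in advance which columns of a table would break COPY in binary format; the table name tbl1 is taken from the example above:

```sql
-- List columns whose types lack a binary send or receive function;
-- such columns make COPY ... TO STDOUT WITH (FORMAT binary) fail.
SELECT a.attname, t.typname,
       t.typsend = 0    AS missing_send,
       t.typreceive = 0 AS missing_receive
FROM pg_attribute a
JOIN pg_type t ON t.oid = a.atttypid
WHERE a.attrelid = 'public.tbl1'::regclass
  AND a.attnum > 0
  AND NOT a.attisdropped
  AND (t.typsend = 0 OR t.typreceive = 0);
```

For the myvarchar example, the error would go away only if the type also defined send/receive functions (the send and receive clauses of CREATE TYPE).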
{
"msg_contents": "Hi,\n\nHayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>, 20 Şub 2023 Pzt, 10:12\ntarihinde şunu yazdı:\n\n> Dear Melih,\n>\n> Thank you for updating the patch. Before reviewing, I found that\n> cfbot have not accepted v8 patch [1].\n>\n\nThanks for letting me know.\nAttached the fixed version of the patch.\n\nBest,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Mon, 20 Feb 2023 14:46:57 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com>, 16 Şub 2023 Per, 15:47 tarihinde\nşunu yazdı:\n\n> On Mon, Jan 30, 2023 at 4:19 PM Melih Mutlu <m.melihmutlu@gmail.com>\n> wrote:\n> >> 8. Note that the COPY binary format isn't portable across platforms\n> >> (Windows to Linux for instance) or major versions\n> >> https://www.postgresql.org/docs/devel/sql-copy.html, whereas, logical\n> >> replication is\n> https://www.postgresql.org/docs/devel/logical-replication.html.\n> >> I don't see any handling of such cases in copy_table but only a check\n> >> for the publisher version. I think we need to account for all the\n> >> cases - allow binary COPY only when publisher and subscriber are of\n> >> same versions, architectures, platforms. The trick here is how we\n> >> identify if the subscriber is of the same type and taste\n> >> (architectures and platforms) as the publisher. Looking for\n> >> PG_VERSION_STR of publisher and subscriber might be naive, but I'm not\n> >> sure if there's a better way to do it.\n> >\n> >\n> > I think having the \"copy_format\" option with text as default, like I\n> replied to your 3rd review above, will keep things as they are now.\n> > The patch now will only allow users to choose binary copy as well, if\n> they want it and acknowledge the limitations that come with binary copy.\n> > COPY command's portability issues shouldn't be an issue right now, since\n> the patch still supports text format. Right?\n> >\n>\n> One thing that is not completely clear from above is whether we will\n> have any problem if the subscription uses binary mode for copying\n> across the server versions. Do we use binary file during the copy used\n> in logical replication?\n>\n\nSince binary copy relies on COPY command, we may have problems across\ndifferent server versions in cases where COPY is not portable.\nWhat I understand from this [1], COPY works across server versions later\nthan 7.4. 
This shouldn't be a problem for logical replication.\nCurrently the patch also does not allow binary copy if the publisher\nversion is older than 16.\n\n> > Logical replication between different types like int and smallint is\nalready not\n> working properly on HEAD too.\n> > Yes, the scenario you shared looks like working. But you didn't create\nthe\n> subscription with binary=true. The patch did not change subscription with\n> binary=false case. I believe what you should experiment is binary=true\ncase\n> which already fails in the apply phase on HEAD.\n> >\n> > Well, with this patch, it will begin to fail in the table copy phase.\nBut I don't\n> think this is a problem because logical replication in binary format is\nalready\n> broken for replications between different data types.\n> >\n>\n\n> So, doesn't this mean that there is no separate failure mode during\n> the initial copy? I am clarifying this to see if the patch really\n> needs a separate copy_format option for initial sync?\n>\n\nIt will fail in a case such as [2] while it would work on HEAD.\nWhat I meant by my above comment was that binary enabled subscriptions are\nnot already working properly if they replicate between different types. So,\nthe failure caused by replicating, for example, from smallint to int is not\nreally introduced by this patch. Such subscriptions would fail in\napply phase anyway. With this patch they will fail while binary copy.\n\n[1] https://www.postgresql.org/docs/current/sql-copy.html\n[2]\nhttps://www.postgresql.org/message-id/OSZPR01MB6310B58F069FF8E148B247FDFDA49%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n\nBest,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Mon, 20 Feb 2023 15:06:27 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
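The behavior discussed above can be sketched as follows; note that copy_format is the option proposed by this patch and does not exist in any released PostgreSQL version, and the connection string and object names here are made up:

```sql
-- On the subscriber: request that both the initial table copy and the
-- subsequent change stream use the binary format.
CREATE SUBSCRIPTION sub_bin
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION pub
    WITH (binary = true, copy_format = binary);

-- During the initial sync the table sync worker would then run, on the
-- publisher, something equivalent to:
--   COPY public.tbl1 (a) TO STDOUT WITH (FORMAT binary)
-- instead of the default text-format COPY.
```

If the publisher is older than version 16, or a column type involved lacks binary I/O functions, the failure surfaces early in the table sync phase rather than later in the apply phase.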
{
    "msg_contents": "On Mon, Feb 20, 2023 at 5:17 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Thanks for letting me know.\n> Attached the fixed version of the patch.\n\nThanks. I have a few comments on the v9 patch:\n\n1.\n+ /* Do not allow binary = false with copy_format = binary */\n+ if (!opts.binary &&\n+ sub->copyformat == LOGICALREP_COPY_AS_BINARY &&\n+ !IsSet(opts.specified_opts, SUBOPT_COPY_FORMAT))\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot set %s for a\nsubscription with %s\",\n+ \"binary = false\",\n\"copy_format = binary\")));\n\nI don't understand why we'd need to tie an option (binary) that deals\nwith data types at column-level with another option (copy_format) that\nrequests the entire table data to be in binary. This'd essentially\nmake one set binary = true to use copy_format = binary, no? IMHO,\nthis inter-dependency is not good for usability.\n\n2. Why can't the tests that this patch adds be simple? Why would it\nneed to change the existing tests at all? I'm thinking of creating a new\n00X_binary_copy_format.pl or such, setting up logical replication\nwith copy_format = binary and letting the table sync worker request the\npublisher in binary format - you can verify this via the publisher server\nlogs - look for COPY with the BINARY option. If required, have the table\nwith different data types. This greatly reduces the patch's footprint.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 21 Feb 2023 19:18:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
    "msg_contents": "Dear Melih,\r\n\r\nThank you for updating the patch! The following are my comments.\r\n\r\n01. catalogs.sgml\r\n\r\n```\r\n If true, the subscription will request that the publisher send data\r\n- in binary format\r\n```\r\n\r\nI'm not clear here. subbinary does not directly mean whether the worker\r\nrequests to send data or not. How about:\r\n\r\n\r\nIf true, the subscription will request that the publisher send data in binary\r\nformat, except initial data synchronization\r\n\r\n02. create_subscription.sgml\r\n\r\n```\r\n+ the binary format is very data type specific, it will not allow copying\r\n+ between different column types as opposed to text format. Note that\r\n```\r\n\r\nThe names of the formats are not marked up as <literal>, whereas in the previous sentence\r\nthey are. We should use the same markup for both of them.\r\n\r\n\r\n03. parse_subscription_options()\r\n\r\nI'm not sure whether the combination of \"copy_format = binary\" and \"copy_data = false\"\r\nshould be accepted or not. What do you think?\r\n\r\n04. parse_subscription_options()\r\n\r\n```\r\n+ (errcode(ERRCODE_SYNTAX_ERROR),\r\n+ errmsg(\"%s and %s are mutually exclusive options\",\r\n+ \"binary = false\", \"copy_format = binary\")));\r\n+ }\r\n```\r\n\r\nA comment for translators seems to be missing.\r\n\r\n05. CreateSubscription()\r\n\r\n```\r\n values[Anum_pg_subscription_suborigin - 1] =\r\n CStringGetTextDatum(opts.origin);\r\n+ values[Anum_pg_subscription_subcopyformat - 1] = CharGetDatum(opts.copy_format);\r\n```\r\n\r\nI think this should be done in the same order as FormData_pg_subscription.\r\nMaybe after this line?\r\n\r\n```\r\n\tvalues[Anum_pg_subscription_subdisableonerr - 1] = BoolGetDatum(opts.disableonerr);\r\n```\r\n\r\n06. AlterSubscription()\r\n\r\nIf we decide not to accept \"copy_format = binary\" and \"copy_data = false\", this place\r\nshould also be fixed.\r\n\r\n07. 
AlterSubscription()\r\n\r\n```\r\n+ ereport(ERROR,\r\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n+ errmsg(\"cannot set %s for a subscription with %s\",\r\n+ \"binary = false\", \"copy_format = binary\")));\r\n...\r\n+ ereport(ERROR,\r\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n+ errmsg(\"cannot set %s for a subscription with %s\",\r\n+ \"copy_format = binary\", \"binary = false\")));\r\n```\r\n\r\nComments for translators seem to be missing.\r\n\r\n08. defGetCopyFormat()\r\n\r\n```\r\n+ /*\r\n+ * If no parameter value given, set it to text format.\r\n+ */\r\n+ if (!def->arg)\r\n+ return LOGICALREP_COPY_AS_TEXT;\r\n```\r\n\r\nI think the case where no parameter is given should be rejected. It should be accepted\r\nonly when the parameter has a boolean data type. Note that defGetStreamingMode()\r\naccepts the no-parameter style for compatibility: at first streaming was\r\nboolean, and then \"parallel\" was added.\r\n\r\n09. describeSubscriptions\r\n\r\n```\r\n+ /* Copy format is only supported in v16 and higher */\r\n```\r\n\r\nI think this comment should be atop the if statement, and it can mention Origin too.\r\n\r\n\r\n10. pg_subscription.h\r\n\r\n```\r\n+ char subcopyformat BKI_DEFAULT(LOGICALREP_COPY_AS_TEXT); /* Copy format *\r\n```\r\n\r\nI'm not sure whether BKI_DEFAULT() is needed or not. Other options like twophase\r\ndo not have a default value at the catalog level. The default is set in\r\nparse_subscription_options() and then the value is written to the catalog.\r\n\r\n11. typedef struct Subscription\r\n\r\nIn the catalog entry, subcopyformat is placed just after subdisableonerr, but in\r\nstruct Subscription, copyformat is added at the end. Can we place it just after disableonerr?\r\n\r\n12. 
Reply\r\n\r\n> Since binary copy relies on COPY command, we may have problems across\r\n> different server versions in cases where COPY is not portable.\r\n> What I understand from this [1], COPY works across server versions later\r\n> than 7.4. This shouldn't be a problem for logical replication.\r\n> Currently the patch also does not allow binary copy if the publisher\r\n> version is older than 16.\r\n\r\nIf a new data type is added in a future version, the copy command in binary\r\nformat will not work well, right? This is because the older version does not have\r\nrecv/send functions for the added type. It means that replication between\r\ndifferent versions may fail if the binary format is specified.\r\nTherefore, I think we should accept copy_format = binary only when the major\r\nversions of the servers are the same.\r\n\r\nNote that this comment is not a request for this patch. Maybe the modification\r\nshould be done not only for copy_format but also for binary, and it may be out of scope\r\nfor this patch.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 22 Feb 2023 07:01:28 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "> If in future version the general data type is added, the copy command in binary\n> format will not work well, right? It is because the inferior version does not have\n> recv/send functions for added type. It means that there is a possibility that\n> replication between different versions may be failed if binary format is specified.\n> Therefore, I think we should accept copy_format = binary only when the major\n> version of servers are the same.\n\nI don't think it's necessary to check versions. Yes, there are\nsituations where binary will fail across major versions. But in many\ncases it does not. To me it seems the responsibility of the operator\nto evaluate this risk. And if the operator chooses wrong and uses\nbinary copy across incompatible versions, then it will still fail hard\nin that case during the copy phase (so still a very early error). So I\ndon't see a reason to check pre-emptively, afaict it will only\ndisallow some valid usecases and introduce more code.\n\nFurthermore no major version check is done for \"binary = true\" either\n(afaik). The only additional failure scenario that copy_format=binary\nintroduces is when one of the types does not implement a send function\non the source. With binary=true, this would continue to work, but with\ncopy_format=binary this stops working. All other failure scenarios\nthat binary encoding of types introduces apply to both binary=true and\ncopy_format=binary (the only difference being in which phase of the\nreplication these failures happen, the apply or the copy phase).\n\n> I'm not sure the combination of \"copy_format = binary\" and \"copy_data = false\"\n> should be accepted or not. How do you think?\n\nIt seems quite useless indeed to specify the format of a copy that won't happen.\n\n\n",
"msg_date": "Wed, 22 Feb 2023 11:43:23 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
    "msg_contents": "Dear Jelte,\r\n\r\n> I don't think it's necessary to check versions. Yes, there are\r\n> situations where binary will fail across major versions. But in many\r\n> cases it does not. To me it seems the responsibility of the operator\r\n> to evaluate this risk. And if the operator chooses wrong and uses\r\n> binary copy across incompatible versions, then it will still fail hard\r\n> in that case during the copy phase (so still a very early error). So I\r\n> don't see a reason to check pre-emptively, afaict it will only\r\n> disallow some valid usecases and introduce more code.\r\n> \r\n> Furthermore no major version check is done for \"binary = true\" either\r\n> (afaik). The only additional failure scenario that copy_format=binary\r\n> introduces is when one of the types does not implement a send function\r\n> on the source. With binary=true, this would continue to work, but with\r\n> copy_format=binary this stops working. All other failure scenarios\r\n> that binary encoding of types introduces apply to both binary=true and\r\n> copy_format=binary (the only difference being in which phase of the\r\n> replication these failures happen, the apply or the copy phase).\r\n\r\nI thought that the current specification lacked consideration, but you meant to\r\nsay that it is an intentional choice to keep the availability, right? \r\nIndeed my suggestion seems too pessimistic, but I want to hear more\r\nopinions from others...\r\n\r\n> > I'm not sure the combination of \"copy_format = binary\" and \"copy_data = false\"\r\n> > should be accepted or not. How do you think?\r\n> \r\n> It seems quite useless indeed to specify the format of a copy that won't happen.\r\n\r\nI understood that the combination of \"copy_format = binary\" and \"copy_data = false\"\r\nshould be rejected in parse_subscription_options() and AlterSubscription(). Is that right?\r\nI'm expecting that to be done in the next version.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 23 Feb 2023 04:40:01 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
    "msg_contents": "On Thu, Feb 23, 2023 12:40 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> \r\n> > > I'm not sure the combination of \"copy_format = binary\" and \"copy_data =\r\nfalse\"\r\n> > > should be accepted or not. How do you think?\r\n> >\r\n> > It seems quite useless indeed to specify the format of a copy that won't\r\nhappen.\r\n> \r\n> I understood that the conbination of \"copy_format = binary\" and \"copy_data =\r\nfalse\"\r\n> should be rejected in parse_subscription_options() and AlterSubscription(). Is it\r\n> right?\r\n> I'm expecting that is done in next version.\r\n> \r\n\r\nThe copy_data option only takes effect once in the CREATE SUBSCRIPTION or ALTER\r\nSUBSCRIPTION REFRESH PUBLICATION command, but the copy_format option can take\r\neffect multiple times if the subscription is refreshed multiple times. Even if\r\nthe subscription is created with copy_data=false, copy_format can take effect\r\nwhen executing ALTER SUBSCRIPTION REFRESH PUBLICATION. So, I am not sure we want\r\nto reject this usage.\r\n\r\nBesides, here are my comments on the v9 patch.\r\n1.\r\nsrc/bin/pg_dump/pg_dump.c\r\n\tif (fout->remoteVersion >= 160000)\r\n-\t\tappendPQExpBufferStr(query, \" s.suborigin\\n\");\r\n+\t{\r\n+\t\tappendPQExpBufferStr(query, \" s.suborigin,\\n\");\r\n+\t\tappendPQExpBufferStr(query, \" s.subcopyformat\\n\");\r\n+\t}\r\n \telse\r\n-\t\tappendPQExpBuffer(query, \" '%s' AS suborigin\\n\", LOGICALREP_ORIGIN_ANY);\r\n+\t{\r\n+\t\tappendPQExpBuffer(query, \" '%s' AS suborigin,\\n\", LOGICALREP_ORIGIN_ANY);\r\n+\t\tappendPQExpBuffer(query, \" '%c' AS subcopyformat\\n\", LOGICALREP_COPY_AS_TEXT);\r\n+\t}\r\n\r\nsrc/bin/psql/describe.c\r\n\t\tif (pset.sversion >= 160000)\r\n+\t\t{\r\n \t\t\tappendPQExpBuffer(&buf,\r\n \t\t\t\t\t\t\t \", suborigin AS \\\"%s\\\"\\n\",\r\n \t\t\t\t\t\t\t gettext_noop(\"Origin\"));\r\n+\t\t\t/* Copy format is only supported in v16 and higher */\r\n+\t\t\tappendPQExpBuffer(&buf,\r\n+\t\t\t\t\t\t\t \", 
subcopyformat AS \\\"%s\\\"\\n\",\r\n+\t\t\t\t\t\t\t gettext_noop(\"Copy Format\"));\r\n+\t\t}\r\n\r\nI think we can call only once appendPQExpBuffer() for the two options which are supported in v16.\r\nFor example,\r\n\t\tif (pset.sversion >= 160000)\r\n\t\t{\r\n\t\t\tappendPQExpBuffer(&buf,\r\n\t\t\t\t\t\t\t \", suborigin AS \\\"%s\\\"\\n\"\r\n\t\t\t\t\t\t\t \", subcopyformat AS \\\"%s\\\"\\n\",\r\n\t\t\t\t\t\t\t gettext_noop(\"Origin\"),\r\n\t\t\t\t\t\t\t gettext_noop(\"Copy Format\"));\r\n\t\t}\r\n\r\n2.\r\nsrc/bin/psql/tab-complete.c\r\n@@ -1926,7 +1926,7 @@ psql_completion(const char *text, int start, int end)\r\n \t/* ALTER SUBSCRIPTION <name> SET ( */\r\n \telse if (HeadMatches(\"ALTER\", \"SUBSCRIPTION\", MatchAny) && TailMatches(\"SET\", \"(\"))\r\n \t\tCOMPLETE_WITH(\"binary\", \"disable_on_error\", \"origin\", \"slot_name\",\r\n-\t\t\t\t\t \"streaming\", \"synchronous_commit\");\r\n+\t\t\t\t\t \"streaming\", \"synchronous_commit\", \"copy_format\");\r\n \t/* ALTER SUBSCRIPTION <name> SKIP ( */\r\n \telse if (HeadMatches(\"ALTER\", \"SUBSCRIPTION\", MatchAny) && TailMatches(\"SKIP\", \"(\"))\r\n \t\tCOMPLETE_WITH(\"lsn\");\r\n@@ -3269,7 +3269,8 @@ psql_completion(const char *text, int start, int end)\r\n \telse if (HeadMatches(\"CREATE\", \"SUBSCRIPTION\") && TailMatches(\"WITH\", \"(\"))\r\n \t\tCOMPLETE_WITH(\"binary\", \"connect\", \"copy_data\", \"create_slot\",\r\n \t\t\t\t\t \"disable_on_error\", \"enabled\", \"origin\", \"slot_name\",\r\n-\t\t\t\t\t \"streaming\", \"synchronous_commit\", \"two_phase\");\r\n+\t\t\t\t\t \"streaming\", \"synchronous_commit\", \"two_phase\",\r\n+\t\t\t\t\t \"copy_format\");\r\n\r\n\r\nThe options should be listed in alphabetical order. See commit d547f7cf5ef.\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Thu, 23 Feb 2023 09:29:44 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "shiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com>, 23 Şub 2023 Per, 12:29\ntarihinde şunu yazdı:\n\n> On Thu, Feb 23, 2023 12:40 PM Kuroda, Hayato/黒田 隼人 <\n> kuroda.hayato@fujitsu.com> wrote:\n> >\n> > > > I'm not sure the combination of \"copy_format = binary\" and\n> \"copy_data =\n> > false\"\n> > > > should be accepted or not. How do you think?\n> > >\n> > > It seems quite useless indeed to specify the format of a copy that\n> won't\n> > happen.\n> >\n> > I understood that the conbination of \"copy_format = binary\" and\n> \"copy_data =\n> > false\"\n> > should be rejected in parse_subscription_options() and\n> AlterSubscription(). Is it\n> > right?\n> > I'm expecting that is done in next version.\n> >\n>\n> The copy_data option only takes effect once in CREATE SUBSCIPTION or ALTER\n> SUBSCIPTION REFRESH PUBLICATION command, but the copy_format option can\n> take\n> affect multiple times if the subscription is refreshed multiple times.\n> Even if\n> the subscription is created with copy_date=false, copy_format can take\n> affect\n> when executing ALTER SUBSCIPTION REFRESH PUBLICATION. So, I am not sure we\n> want\n> to reject this usage.\n>\n\nI believe the places copy_data and copy_format are needed are pretty much\nthe same. I couldn't think of a case where copy_format is needed but\ncopy_data isn't. Please let me know if I'm missing something.\nCREATE SUBSCRIPTION, ALTER SUBSCRIPTION SET/ADD/REFRESH PUBLICATION are all\nthe places where initial sync can happen. For all these commands, copy_data\nneeds to be given as a parameter or it will be set to the default value\nwhich is true. Even if copy_data=false when the sub was created, REFRESH\nPUBLICATION (without an explicit copy_data parameter) will copy some tables\nif needed regardless of what copy_data was in CREATE SUBSCRIPTION. This is\nbecause copy_data is not something stored in pg_subscription or another\ncatalog. 
But this is not an issue for copy_format since its value will be\nstored in the catalog. This can allow users to set the format even if\ncopy_data=false and no initial sync is needed at that moment, so that\nfuture initial syncs which can be triggered by ALTER SUBSCRIPTION will be\nperformed in the correct format.\nSo, I also think we should allow setting copy_format even if\ncopy_data=false.\n\nAnother way to deal with this issue could be expecting the user to specify\nformat every time copy_format is needed, similar to the case for copy_data,\nand moving on with text (default) format if it's not specified for the\ncurrent CREATE/ALTER SUBSCRIPTION execution. But I don't think this would\nmake things easier.\n\nBest,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Thu, 23 Feb 2023 14:59:40 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "> This is because copy_data is not something stored in pg_subscription\n> or another catalog. But this is not an issue for copy_fornat since its\n> value will be stored in the catalog. This can allow users to set the\n> format even if copy_data=false and no initial sync is needed at that\n> moment.\n\nOne other approach that might make sense is to expand the values that\ncopy_data accepts to include the value \"binary\" (and probably \"text\"\nfor clarity). That way people could easily choose for each sync if\nthey want to use binary copy, text copy or no copy. Based on your\nmessage, this would mean that copy_format=binary would not be stored\nin catalogs anymore, does that have any bad side-effects for the\nimplementation?\n\n\n",
"msg_date": "Thu, 23 Feb 2023 13:17:33 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Here is my summary of this thread so far, plus some other thoughts.\n\n(I wrote this mostly for my own internal notes/understanding, but\nmaybe it is a helpful summary for others so I am posting it here too).\n\n~~\n\n1. Initial Goal\n------------------\n\nCurrently, the CREATE SUBSCRIPTION ... WITH(binary=true) mode does\ndata replication in binary mode, but tablesync COPY phase is still\nonly in text mode. IIUC Melih just wanted to unify those phases so\nbinary=true would mean also do the COPY phase in binary [1].\n\nActually, this was my very first expectation too.\n\n2. Objections to unification\n-----------------------------------\n\nBharath posted suggesting tying the COPY/replication parts is not a\ngood idea [2]. But if these are not to be unified then it requires a\nnew subscription option to be introduced, and currently, the thread\nrefers to this new option as copy_format=binary.\n\n3. A problem: binary replication is not always binary!\n----------------------------------------------------------------------\n\nShi-san reported [3] that under special circumstances (e.g. if the\ndatatype has no binary output function) the current HEAD binary=true\nmode for replication has the ability to fall back to text replication.\nSince the binary COPY doesn't do this, it means binary COPY could fail\nin some cases where the binary=true replication will be OK.\n\nAFAIK, this means that even if we *wanted* to unify everything with\njust 'binary=true' it can't be done like that.\n\n4. New option is needed\n---------------------------------\n\nOK, so let's assume these options must be separated (because of the\nproblem of #3).\n\n~\n\n4a. New string option called copy_format?\n\nCurrently, the patch/thread describes a new 'copy_format' string\noption with values 'text' (default) and 'binary'.\n\nWhy? If there are only 2 values then IMO it would be better to have a\n*boolean* option called something like binary_copy=true/false.\n\n~\n\n4b. 
Or, we could extend copy_data\n\nJelte suggested [4] we could extend copy_data = 'text' (aka on/true)\nOR 'binary' OR 'none' (aka off/false).\n\nThat was interesting, although\n- my first impression was to worry about backward compatibility issues\nfor existing application code. I don't know if those are easily\nsolved.\n- AFAIK such \"hybrid\" boolean/enum options are kind of frowned upon so\nI am not sure if a committer would be in favour of introducing another\none.\n\n\n5. Combining options\n------------------------------\n\nBecause options are separated, it means combinations become possible...\n\n~~\n\n5a. Combining option: \"copy_format = binary\" and \"copy_data = false\"\n\nKuroda [5] wrote such a combination should not be allowed.\n\nI kind of disagree with that. IMO everything should be flexible as\npossible. The patch might end up accidentally stopping something that\nhas a valid use case. Anyway, such restrictions are easy to add later.\n\n~~\n\n5b. Combining options: binary=true/copy_format=binary, and\nbinary=false/copy_format=binary become possible.\n\nAFAIK currently the patch disallows some combinations:\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"%s and %s are mutually exclusive options\",\n+ \"binary = false\", \"copy_format = binary\")));\n\n\nI kind of disagree with introducing such rules/restrictions. IMO all\nthis patch needs to do is clearly document all necessary precautions\netc. But if the user still wants to do something, we should just let\nthem do it. If they manage to shoot themselves in the foot, well it\nwas their choice after reading the docs, and it's their foot.\n\n\n6. pub/sub version checking\n----------------------------\n\nThere were some discussions about doing some server checking to reject\nsome PG combinations from being allowed to use the copy_format=binary.\n\nJelte suggested [5] that it is the \"responsibility of the operator to\nevaluate the risk\".\n\nI agree. 
Yes, the patch certainly needs to *document* all precautions,\nbut having too many restrictions might end up accidentally stopping\nsomething useful. Anyway, this seems like #5a. I prefer the KISS\nprinciple. More restrictions can be added later if found necessary.\n\n\n7. More doubts & a thought bubble\n---------------------------------\n\n7a. What is the user action for this scenario?\n\nI am unsure about this scenario - imagine that everything is working\nproperly and the copy_format=binary/copy_data=true is all working\nnicely for weeks for a certain pub/sub...\n\nBut if the publication was defined as FOR ALL TABLES, or as ALL TABLES\nIN SCHEMA, then I think the tablesync can still crash if a user\ncreates a new table that suffers the same COPY binary trouble Shi-san\ndescribed earlier [3].\n\nWhat is the user supposed to do when that tablesync fails?\n\nThey had no way to predict it could happen, and it will be too painful\nto have to DROP and re-create the entire SUBSCRIPTION again now that\nit has happened.\n\n~~\n\n7b. A thought bubble\n\nI wondered (perhaps this is naive) would it be possible to enhance\nthe COPY command so that its \"binary\" mode can be made to\nfall back to text mode if it needs to in the same way that binary\nreplication does.\n\nIf such an enhanced COPY format mode worked, then most of the patch is redundant\n- there is no need for any new option\n- tablesync COPY could then *always* use this \"binary-with-fallback\" mode\n\n\n------\n[1] Melih initially wanted a unified binary mode -\nhttps://www.postgresql.org/message-id/CAGPVpCQYi9AYQSS%3DRmGgVNjz5ZEnLB8mACwd9aioVhLmbgiAMA%40mail.gmail.com\n\n[2] Bharath doesn't like the binary/copy_format inter-dependency -\nhttps://www.postgresql.org/message-id/CALj2ACVPt-BaLMm3Ezy1-rfUzH9qStxePcyGrHPamPESEZSBFA%40mail.gmail.com\n\n[3] Shi-san found binary mode has the ability to fall back to text\nsometimes - https://www.postgresql.org/message-id/OSZPR01MB6310B58F069FF8E148B247FDFDA49%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n\n[4] Jelte's idea to enhance the copy_data option -\nhttps://www.postgresql.org/message-id/CAGECzQS393xiME%2BEySLU7ceO4xOB81kPjPqNBjaeW3sLgfLhNw%40mail.gmail.com\n\n[5] Kuroda-san etc. expecting the copy_data=false/copy_format=binary\ncombination is not allowed -\nhttps://www.postgresql.org/message-id/TYAPR01MB5866DDF02B3CEE59DA024CC3F5AB9%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n[6] Jelte says it is the operator's responsibility to use the correct\noptions - https://www.postgresql.org/message-id/CAGECzQSStdb%2Bx1BxzCktZd1uSjvd6eYq1wcHV3vPCykrGqxYKQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 24 Feb 2023 14:02:19 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
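Peter Smith's summary above hinges on an asymmetry: the apply-side `binary = true` mode can fall back to text for an individual column whose type lacks a binary send function, while `COPY ... WITH (FORMAT binary)` commits to one format for the whole relation. A rough, hypothetical Python sketch of that per-column decision (the `choose_column_formats` helper and the `has_typsend` flag are illustrative stand-ins for the server's catalog check, not PostgreSQL source):

```python
# Hypothetical sketch (not PostgreSQL source) of the per-column fallback
# that apply-side binary mode effectively gets: a column is sent in
# binary only when its type has a binary send function; otherwise it
# silently falls back to text instead of raising an error.
def choose_column_formats(columns, binary_enabled):
    """columns: list of (name, has_typsend) pairs -> {name: 'binary'|'text'}."""
    formats = {}
    for name, has_typsend in columns:
        if binary_enabled and has_typsend:
            formats[name] = "binary"
        else:
            formats[name] = "text"  # graceful per-column fallback
    return formats

# "v" stands for a type like the myvarchar example with no send function.
cols = [("a", True), ("v", False)]
print(choose_column_formats(cols, binary_enabled=True))
# -> {'a': 'binary', 'v': 'text'}: no error raised
```

COPY, by contrast, picks one format for the entire result up front, which is why the walsender errors out on a type like `public.myvarchar` rather than degrading that single column — the "binary-with-fallback" COPY mode mused about in #7b would need exactly this kind of per-column choice.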
{
"msg_contents": "On Monday, February 20, 2023 8:47 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> Thanks for letting me know. \n> Attached the fixed version of the patch.\nHi, Melih\n\n\nThanks for updating the patch. Minor comments on v9.\n\n(1) commit message\n\n\"The patch introduces a new parameter, copy_format, to CREATE SUBSCRIPTION to allow to choose\nthe format used in initial table synchronization.\"\n\nThis patch introduces the new parameter not only to CREATE SUBSCRIPTION and ALTER SUBSCRIPTION, then this description should be more general, something like below.\n\n\"The patch introduces a new parameter, copy_format, as subscription option to\nallow user to choose the format of initial table synchronization.\"\n\n(2) copy_table\n\nWe don't need to check the publisher's version below.\n\n+\n+ /* If the publisher is v16 or later, specify the format to copy data. */\n+ if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 160000)\n+ {\n+ char *format = MySubscription->copyformat == LOGICALREP_COPY_AS_BINARY ? \"binary\" : \"text\";\n+ appendStringInfo(&cmd, \" WITH (FORMAT %s)\", format);\n+ options = lappend(options, makeDefElem(\"format\", (Node *) makeString(format), -1));\n+ }\n+\n\nWe don't have this kind of check for \"binary\" option and it seems this is user's responsibility to avoid any errors during replication. If we want to add this kind of check, then we can add checks for both \"binary\" and \"copy_format\" option together as an independent patch.\n\n(3) subscription.sql/out\n\nThe other existing other subscription options check the invalid input for newly created option (e.g. \"foo\" for disable_on_error, streaming mode). So, I think we can add this type of test for this feature.\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Sun, 26 Feb 2023 22:58:26 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 3:37 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Thu, Feb 16, 2023 8:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > So, doesn't this mean that there is no separate failure mode during\n> > the initial copy? I am clarifying this to see if the patch really\n> > needs a separate copy_format option for initial sync?\n> >\n>\n> In the case that the data type doesn't have binary output function, for apply\n> phase, the column will be sent in text format (see logicalrep_write_tuple()) and\n> it works fine. But with copy_format = binary, the walsender exits with an\n> error.\n>\n...\n...\n>\n> Then I got the following error in the publisher log.\n>\n> walsender ERROR: no binary output function available for type public.myvarchar\n> walsender STATEMENT: COPY public.tbl1 (a) TO STDOUT WITH (FORMAT binary)\n>\n\nThanks for sharing the example. I think to address this user can\ncreate a SUBSCRIPTION with 'binary = false' and then after the initial\ncopy enables it with ALTER SUBSCRIPTION. Personally, I feel it is not\nrequired to have a separate option to allow copy in binary mode. Note,\nwhere there is some use for it but having more options for similar\nwork is also confusing as users need to pay attention to different\noptions and their values. It won't be difficult to add such an option\nin the future if we see such cases and or users specifically require\nsomething like this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Feb 2023 14:31:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 8:32 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here is my summary of this thread so far, plus some other thoughts.\n\nThanks.\n\n> 1. Initial Goal\n> ------------------\n>\n> Currently, the CREATE SUBSCRIPTION ... WITH(binary=true) mode does\n> data replication in binary mode, but tablesync COPY phase is still\n> only in text mode. IIUC Melih just wanted to unify those phases so\n> binary=true would mean also do the COPY phase in binary [1].\n>\n> Actually, this was my very first expectation too.\n>\n> 2. Objections to unification\n> -----------------------------------\n>\n> Bharath posted suggesting tying the COPY/replication parts is not a\n> good idea [2]. But if these are not to be unified then it requires a\n> new subscription option to be introduced, and currently, the thread\n> refers to this new option as copy_format=binary.\n\nLooking closely at the existing binary=true option and COPY's binary\nprotocol, essentially they depend on the data type's binary send and\nreceive functions.\n\n> 3. A problem: binary replication is not always binary!\n> ----------------------------------------------------------------------\n>\n> Shi-san reported [3] that under special circumstances (e.g. if the\n> datatype has no binary output function) the current HEAD binary=true\n> mode for replication has the ability to fall back to text replication.\n> Since the binary COPY doesn't do this, it means binary COPY could fail\n> in some cases where the binary=true replication will be OK.\n\nRight. Apply workers can fallback to text mode transparently, whereas\nwith binary COPY it's a bit difficult to do so.\n\n> AFAIK, this means that even if we *wanted* to unify everything with\n> just 'binary=true' it can't be done like that.\n\nHm, it looks like that.\n\n> 4. New option is needed\n> ---------------------------------\n>\n> OK, so let's assume these options must be separated (because of the\n> problem of #3).\n>\n> ~\n>\n> 4a. 
New string option called copy_format?\n>\n> Currently, the patch/thread describes a new 'copy_format' string\n> option with values 'text' (default) and 'binary'.\n>\n> Why? If there are only 2 values then IMO it would be better to have a\n> *boolean* option called something like binary_copy=true/false.\n>\n> ~\n>\n> 4b. Or, we could extend copy_data\n>\n> Jelte suggested [4] we could extend copy_data = 'text' (aka on/true)\n> OR 'binary' OR 'none' (aka off/false).\n\nHow about copy_binary = {true/false}? So, the options for the user are:\n\ncopy_binary - defaults to false and when true, the subscriber requests\npublisher to send pre-existing table's data in binary format (as\nopposed to text) during data synchronization/table copy phase. It is\nrecommended to enable this option only when 1) the column data types\nhave appropriate binary send/receive functions, 2) not replicating\nbetween different major versions or different platforms, 3) both\npublisher and subscriber tables have the exact same column types (not\nwhen replicating from smallint to int or numeric to int8 and so on), 4)\nboth publisher and subscriber supports COPY with binary option,\notherwise the table copy can fail.\n\nbinary - defaults to false and when true, the subscriber requests\npublisher to send table's data in binary format (as opposed to text)\nduring normal replication phase. <<existing description from the docs\ncontinues>>\n\nNote that with this we made users responsible to choose copy_binary\nrather than we being smart.\n\n> AFAIK currently the patch disallows some combinations:\n>\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"%s and %s are mutually exclusive options\",\n> + \"binary = false\", \"copy_format = binary\")));\n>\n> 6. 
pub/sub version checking\n> ----------------------------\n>\n> There were some discussions about doing some server checking to reject\n> some PG combinations from being allowed to use the copy_format=binary.\n\nIMHO, these restrictions make the feature more complicated to use and\nbe removed and the options be made simple to use (usability and\nsimplicity clearly wins the race). If there's any kind of feedback\nfrom the users/developers, we can always come back and improve.\n\n> But if the publication was defined as FOR ALL TABLES, or as ALL TABLES\n> IN SCHEMA, then I think the tablesync can still crash if a user\n> creates a new table that suffers the same COPY binary trouble Shi-san\n> described earlier [3].\n>\n> What is the user supposed to do when that tablesync fails?\n>\n> They had no way to predict it could happen, And it will be too painful\n> to have to DROP and re-create the entire SUBSCRIPTION again now that\n> it has happened.\n\nCan't ALTER SUBSCRIPTION .... SET copy_format = 'text'; and ALTER\nSUBSCRIPTION ... REFRESH PUBLICATION WITH (copy_data = true); work\nhere instead of drop and recreate subscription?\n\n> 7a. A thought bubble\n>\n> I wondered (perhaps this is naive) would it be it possible to enhance\n> the COPY command to enhance the \"binary\" mode so it can be made to\n> fall back to text mode if it needs to in the same way that binary\n> replication does.\n>\n> If such an enhanced COPY format mode worked, then most of the patch is redundant\n> - there is no need for any new option\n> - tablesync COPY could then *always* use this \"binary-with-falback\" mode\n\nI don't think that's a great idea. COPY is a user-facing SQL command\nand any errors (because of data type not having binary send/receive\nfunctions) better be reported to the users. 
Again, such an option\nmight complicate both COPY code and usability.\n\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Feb 2023 15:46:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Tue, Feb 21, 2023 at 7:18 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Feb 20, 2023 at 5:17 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> >\n> > Thanks for letting me know.\n> > Attached the fixed version of the patch.\n>\n> Thanks. I have few comments on v9 patch:\n>\n> 1.\n> + /* Do not allow binary = false with copy_format = binary */\n> + if (!opts.binary &&\n> + sub->copyformat == LOGICALREP_COPY_AS_BINARY &&\n> + !IsSet(opts.specified_opts, SUBOPT_COPY_FORMAT))\n> + ereport(ERROR,\n> +\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"cannot set %s for a\n> subscription with %s\",\n> + \"binary = false\",\n> \"copy_format = binary\")));\n>\n> I don't understand why we'd need to tie an option (binary) that deals\n> with data types at column-level with another option (copy_format) that\n> requests the entire table data to be in binary. This'd essentially\n> make one to set binary = true to use copy_format = binary, no? IMHO,\n> this inter-dependency is not good for better usability.\n>\n> 2. Why can't the tests that this patch adds be simple? Why would it\n> need to change the existing tests at all? I'm thinking to create a new\n> 00X_binary_copy_format.pl or such and setting up logical replication\n> with copy_format = binary and letting table sync worker request\n> publisher in binary format - you can verify this via publisher server\n> logs - look for COPY with BINARY option. If required, have the table\n> with different data types. 
This greatly reduces the patch's footprint.\n\nI've done performance testing with the v9 patch.\n\nI can constantly observe 1.34X improvement or 25% improvement in table\nsync/copy performance with the patch:\nHEAD binary = false\nTime: 214398.637 ms (03:34.399)\n\nPATCHED binary = true, copy_format = binary:\nTime: 160509.262 ms (02:40.509)\n\nThere's a clear reduction (5.68% to 4.49%) in the CPU cycles spent in\ncopy_table with the patch:\nperf report HEAD:\n- 5.68% 0.00% postgres postgres [.] copy_table\n - copy_table\n - 5.67% CopyFrom\n - 4.26% NextCopyFrom\n - 2.16% NextCopyFromRawFields\n - 1.55% CopyReadLine\n - 1.52% CopyReadLineText\n - 0.76% CopyLoadInputBuf\n 0.50% CopyConvertBuf\n 0.60% CopyReadAttributesText\n - 1.93% InputFunctionCall\n 0.69% timestamptz_in\n 0.53% byteain\n - 0.73% CopyMultiInsertInfoFlush\n - 0.73% CopyMultiInsertBufferFlush\n - 0.66% table_multi_insert\n 0.65% heap_multi_insert\n\nperf report PATCHED:\n- 4.49% 0.00% postgres postgres [.] copy_table\n - copy_table\n - 4.48% CopyFrom\n - 2.36% NextCopyFrom\n - 1.81% CopyReadBinaryAttribute\n 1.47% ReceiveFunctionCall\n - 1.21% CopyMultiInsertInfoFlush\n - 1.21% CopyMultiInsertBufferFlush\n - 1.11% table_multi_insert\n - 1.09% heap_multi_insert\n - 0.71% RelationGetBufferForTuple\n - 0.63% ReadBufferBI\n - 0.62% ReadBufferExtended\n 0.62% ReadBuffer_common\n\nI've tried to choose the table columns such that the binary send/recv\nvs non-binary/plain/text copy has some benefit. The table has 100mn\nrows, and is of 11GB size. I've measured the benefit using Melih's\nhelper function wait_for_rep(). 
Note that I had to compile source code\nwith -ggdb3 -O0 for perf report, otherwise with -O3 for performance\nnumbers:\n\nwal_level = 'logical'\nshared_buffers = '8GB'\nwal_buffers = '1GB'\nmax_wal_size = '32GB'\n\ncreate table foo(i int, n numeric, v varchar, b bytea, t timestamp\nwith time zone default current_timestamp);\ninsert into foo select i, i+1, md5(i::text), md5(i::text)::bytea from\ngenerate_series(1, 100000000) i;\n\nCREATE OR REPLACE PROCEDURE wait_for_rep()\nLANGUAGE plpgsql\nAS $$\nBEGIN\n WHILE (SELECT count(*) != 0 FROM pg_subscription_rel WHERE\nsrsubstate <> 'r') LOOP COMMIT;\n END LOOP;\nEND;\n$$;\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Feb 2023 18:54:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
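The claimed 1.34X / 25% figures in the message above are consistent with the raw timings; a quick arithmetic check:

```python
# Sanity-check the reported table-sync timings (values copied from the
# performance report above; HEAD used text copy, the patched run binary copy).
head_ms = 214398.637
patched_ms = 160509.262

speedup = head_ms / patched_ms          # ratio of copy times
time_saved = 1 - patched_ms / head_ms   # fraction of copy time saved
print(f"{speedup:.2f}x faster, {time_saved:.1%} less copy time")
# -> 1.34x faster, 25.1% less copy time
```

This matches the perf profiles as well, where `copy_table` drops from 5.68% to 4.49% of cycles, largely from cheaper input parsing (`ReceiveFunctionCall` replacing text parsing and `InputFunctionCall`).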
{
"msg_contents": "Hi,\n\nThanks for all of your reviews!\n\nSo, I made some changes in the v10 according to your comments.\n\n1- copy_format is changed to a boolean parameter copy_binary.\nIt sounds easier to use a boolean to enable/disable binary copy. Default\nvalue is false, so nothing changes in the current behaviour if copy_binary\nis not specified.\nIt's still persisted into the catalog. This is needed since its value will\nbe needed by tablesync workers later. And tablesync workers fetch\nsubscription configurations from the catalog.\nIn the copy_data case, it is not directly stored anywhere but it affects\nthe state of the table which is stored in the catalog. So, I guess this is\nthe convenient way if we decide to go with a new parameter.\n\n2- copy_binary is not tied to another parameter\nThe patch does not disallow any combination of copy_binary with binary and\ncopy_data options. I tried to explain what binary copy can and cannot do in\nthe docs. Rest is up to the user now.\n\n3- Removed version check for copy_binary\nHEAD already does not have any check for binary option. Making binary copy\nwork only if publisher and subscriber are the same major version can be too\nrestricted.\n\n4- Added separate test file\nAlthough I believe 002_types.pl and 014_binary.pl can be improved more for\nbinary enabled subscription cases, this patch might not be the best place\nto make those changes.\n032_binary_copy.pl now has the tests for binary copy option. There are also\nsome regression tests in subscription.sql.\n\nFinally, some other small fixes are done according to the reviews.\n\n Also, thanks Bharath for performance testing [1]. It can be seen that the\npatch has some benefits.\n\nI appreciate further thoughts/reviews.\n\n\n[1]\nhttps://www.postgresql.org/message-id/CALj2ACUfE08ZNjKK-nK9JiwGhwUMRLM%2BqRhNKTVM9HipFk7Fow%40mail.gmail.com\n\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Mon, 27 Feb 2023 22:52:29 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Tue, 28 Feb 2023 at 01:22, Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> Thanks for all of your reviews!\n>\n> So, I made some changes in the v10 according to your comments.\n>\n> 1- copy_format is changed to a boolean parameter copy_binary.\n> It sounds easier to use a boolean to enable/disable binary copy. Default value is false, so nothing changes in the current behaviour if copy_binary is not specified.\n> It's still persisted into the catalog. This is needed since its value will be needed by tablesync workers later. And tablesync workers fetch subscription configurations from the catalog.\n> In the copy_data case, it is not directly stored anywhere but it affects the state of the table which is stored in the catalog. So, I guess this is the convenient way if we decide to go with a new parameter.\n>\n> 2- copy_binary is not tied to another parameter\n> The patch does not disallow any combination of copy_binary with binary and copy_data options. I tried to explain what binary copy can and cannot do in the docs. Rest is up to the user now.\n>\n> 3- Removed version check for copy_binary\n> HEAD already does not have any check for binary option. Making binary copy work only if publisher and subscriber are the same major version can be too restricted.\n>\n> 4- Added separate test file\n> Although I believe 002_types.pl and 014_binary.pl can be improved more for binary enabled subscription cases, this patch might not be the best place to make those changes.\n> 032_binary_copy.pl now has the tests for binary copy option. There are also some regression tests in subscription.sql.\n>\n> Finally, some other small fixes are done according to the reviews.\n>\n> Also, thanks Bharath for performance testing [1]. 
It can be seen that the patch has some benefits.\n>\n> I appreciate further thoughts/reviews.\n\nThanks for the patch. A few comments:\n1) Are primary keys required for the tables? If not required, we could\nremove the primary keys, which will speed up the test by not creating\nthe indexes and inserting into the indexes. Even if required, just create\nit for one of the tables:\n+# Create tables on both sides of the replication\n+my $ddl = qq(\n+ CREATE TABLE public.test_numerical (\n+ a INTEGER PRIMARY KEY,\n+ b NUMERIC,\n+ c FLOAT,\n+ d BIGINT\n+ );\n+ CREATE TABLE public.test_arrays (\n+ a INTEGER[] PRIMARY KEY,\n+ b NUMERIC[],\n+ c TEXT[]\n+ );\n+ CREATE TABLE public.test_range_array (\n+ a INTEGER PRIMARY KEY,\n+ b TSTZRANGE,\n+ c int8range[]\n+ );\n+ CREATE TYPE public.test_comp_basic_t AS (a FLOAT, b TEXT, c INTEGER);\n+ CREATE TABLE public.test_one_comp (\n+ a INTEGER PRIMARY KEY,\n+ b public.test_comp_basic_t\n+ ););\n+\n\n2) We can have the Insert/Select of tables in the same order so that\nit is easy to verify. 
test_range_array/test_one_comp\ninsertion/selection order was different.\n+# Insert some content and before creating a subscription\n+$node_publisher->safe_psql(\n+ 'postgres', qq(\n+ INSERT INTO public.test_numerical (a, b, c, d) VALUES\n+ (1, 1.2, 1.3, 10),\n+ (2, 2.2, 2.3, 20);\n+ INSERT INTO public.test_arrays (a, b, c) VALUES\n+ ('{1,2,3}', '{1.1, 1.2, 1.3}', '{\"one\", \"two\", \"three\"}'),\n+ ('{3,1,2}', '{1.3, 1.1, 1.2}', '{\"three\", \"one\", \"two\"}');\n+ INSERT INTO test_range_array (a, b, c) VALUES\n+ (1, tstzrange('Mon Aug 04 00:00:00 2014\nCEST'::timestamptz, 'infinity'), '{\"[1,2]\", \"[10,20]\"}'),\n+ (2, tstzrange('Sat Aug 02 00:00:00 2014\nCEST'::timestamptz, 'Mon Aug 04 00:00:00 2014 CEST'::timestamptz),\n'{\"[2,3]\", \"[20,30]\"}');\n+ INSERT INTO test_one_comp (a, b) VALUES\n+ (1, ROW(1.0, 'a', 1)),\n+ (2, ROW(2.0, 'b', 2));\n+ ));\n+\n+# Create the subscription with copy_binary = true\n+my $publisher_connstring = $node_publisher->connstr . ' dbname=postgres';\n+$node_subscriber->safe_psql('postgres',\n+ \"CREATE SUBSCRIPTION tsub CONNECTION '$publisher_connstring' \"\n+ . 
\"PUBLICATION tpub WITH (slot_name = tpub_slot, copy_binary\n= true)\");\n+\n+# Ensure nodes are in sync with each other\n+$node_subscriber->wait_for_subscription_sync($node_publisher, 'tsub');\n+\n+my $sync_check = qq(\n+ SET timezone = '+2';\n+ SELECT a, b, c, d FROM test_numerical ORDER BY a;\n+ SELECT a, b, c FROM test_arrays ORDER BY a;\n+ SELECT a, b FROM test_one_comp ORDER BY a;\n+ SELECT a, b, c FROM test_range_array ORDER BY a;\n+);\n\n3) Should we check the results for test_myvarchar table only, since\nthere is no change in other tables, we need not check other tables\nagain:\n+# Now tablesync should succeed\n+$node_subscriber->wait_for_subscription_sync($node_publisher, 'tsub');\n+\n+$sync_check = qq(\n+ SET timezone = '+2';\n+ SELECT a, b, c, d FROM test_numerical ORDER BY a;\n+ SELECT a, b, c FROM test_arrays ORDER BY a;\n+ SELECT a, b FROM test_one_comp ORDER BY a;\n+ SELECT a, b, c FROM test_range_array ORDER BY a;\n+ SELECT a FROM test_myvarchar;\n+);\n+\n+# Check the synced data on subscriber\n+$result = $node_subscriber->safe_psql('postgres', $sync_check);\n\n4) Similarly check only for test_mismatching_types in this case:\n+# Cannot sync due to type mismatch\n+$node_subscriber->wait_for_log(qr/ERROR: ( [A-Z0-9]+:)? 
incorrect\nbinary data format/);\n+\n+# Setting copy_binary to false should allow syncing\n+$node_subscriber->safe_psql(\n+ 'postgres', qq(\n+ ALTER SUBSCRIPTION tsub SET (copy_binary = false);));\n+\n+$node_subscriber->wait_for_subscription_sync($node_publisher, 'tsub');\n+\n+$sync_check = qq(\n+ SET timezone = '+2';\n+ SELECT a, b, c, d FROM test_numerical ORDER BY a;\n+ SELECT a, b, c FROM test_arrays ORDER BY a;\n+ SELECT a, b FROM test_one_comp ORDER BY a;\n+ SELECT a, b, c FROM test_range_array ORDER BY a;\n+ SELECT a FROM test_myvarchar;\n+ SELECT a FROM test_mismatching_types ORDER BY a;\n+);\n+\n+# Check the synced data on subscribers\n+$result = $node_subscriber->safe_psql('postgres', $sync_check);\n+\n+is( $result, '1|1.2|1.3|10\n+2|2.2|2.3|20\n+{1,2,3}|{1.1,1.2,1.3}|{one,two,three}\n+{3,1,2}|{1.3,1.1,1.2}|{three,one,two}\n+1|(1,a,1)\n+2|(2,b,2)\n+1|[\"2014-08-04 00:00:00+02\",infinity)|{\"[1,3)\",\"[10,21)\"}\n+2|[\"2014-08-02 00:00:00+02\",\"2014-08-04 00:00:00+02\")|{\"[2,4)\",\"[20,31)\"}\n+a\n+1\n+2', 'check synced data on subscriber with copy_binary = false');\n\n5) Should we change \"Basic logical replication test\" to \"Test logical\nreplication of copy table in binary\"\ndiff --git a/src/test/subscription/t/032_binary_copy.pl\nb/src/test/subscription/t/032_binary_copy.pl\nnew file mode 100644\nindex 0000000000..bcad66e5ea\n--- /dev/null\n+++ b/src/test/subscription/t/032_binary_copy.pl\n@@ -0,0 +1,223 @@\n+\n+# Copyright (c) 2023, PostgreSQL Global Development Group\n+\n+# Basic logical replication test\n+use strict;\n+use warnings;\n+use PostgreSQL::Test::Cluster;\n+use PostgreSQL::Test::Utils;\n+use Test::More;\n+\n\n6) We can change \"Initialize publisher node\" to \"Create publisher\nnode\" to maintain consistency.\n+# Initialize publisher node\n+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');\n+$node_publisher->init(allows_streaming => 'logical');\n+$node_publisher->start;\n+\n+# Create subscriber node\n+my 
$node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');\n+$node_subscriber->init(allows_streaming => 'logical');\n+$node_subscriber->start;\n\n7) Should \"Insert some content and before creating a subscription.\" be\nchanged to:\n\"Insert some content before creating a subscription.\"\n\n+# Publish all tables\n+$node_publisher->safe_psql('postgres',\n+ \"CREATE PUBLICATION tpub FOR ALL TABLES\");\n+\n+# Insert some content and before creating a subscription\n+$node_publisher->safe_psql(\n+ 'postgres', qq(\n+ INSERT INTO public.test_numerical (a, b, c, d) VALUES\n+ (1, 1.2, 1.3, 10),\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 28 Feb 2023 16:29:22 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 1:22 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> Thanks for all of your reviews!\n>\n> So, I made some changes in the v10 according to your comments.\n\nThanks. Some quick comments on v10:\n\n1.\n+ <para>\n+ If true, initial data synchronization will be performed in binary format\n+ </para></entry>\nIt's not just the initial table sync right? The table sync can happen\nat any other point of time when ALTER SUBSCRIPTION ... REFRESH\nPUBLICATION WITH (copy = true) is run.\nHow about - \"If true, the subscriber requests publication for\npre-existing data in binary format\"?\n\n2.\n+ Specifies whether pre-existing data on the publisher will be copied\n+ to the subscriber in binary format. The default is\n<literal>false</literal>.\n+ Binary format is very data type specific, it will not allow copying\n+ between different column types as opposed to text format. Note that\n+ if this option is enabled, all data types which will be copied during\n+ the initial synchronization should have binary send and\nreceive functions.\n+ If this option is disabled, data format for the initial\nsynchronization\n+ will be text.\nPerhaps, this should cover the recommended cases for enabling this new\noption - something like below (may not need to have exact wording, but\nthe recommended cases?):\n\n\"It is recommended to enable this option only when 1) the column data\ntypes have appropriate binary send/receive functions, 2) not\nreplicating between different major versions or different platforms,\n3) both publisher and subscriber tables have the exact same column\ntypes (not when replicating from smallint to int or numeric to int8\nand so on), 4) both publisher and subscriber supports COPY with binary\noption, otherwise the table copy can fail.\"\n\n3. I think the newly added tests must verify if the binary COPY is\npicked up when enabled. Perhaps, looking at the publisher's server log\nfor 'COPY ... WITH BINARY format'? 
Maybe it's an overkill, otherwise,\nwe have no way of testing that the option took effect.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 28 Feb 2023 17:57:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "> 3. I think the newly added tests must verify if the binary COPY is\n> picked up when enabled. Perhaps, looking at the publisher's server log\n> for 'COPY ... WITH BINARY format'? Maybe it's an overkill, otherwise,\n> we have no way of testing that the option took effect.\n\nAnother way to test that BINARY is enabled could be to trigger one\nof the failure cases.\n\n\n",
"msg_date": "Tue, 28 Feb 2023 14:25:11 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nAttached v11.\n\nvignesh C <vignesh21@gmail.com>, 28 Şub 2023 Sal, 13:59 tarihinde şunu\nyazdı:\n\n> Thanks for the patch, Few comments:\n> 1) Are primary key required for the tables, if not required we could\n> remove the primary key which will speed up the test by not creating\n> the indexes and inserting in the indexes. Even if required just create\n> for one of the tables:\n>\n\nI think that having a primary key in tables for logical replication tests\nis good for checking if log. rep. duplicates any row. Other tests also have\nprimary keys in almost all tables.\n\nBharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>, 28 Şub 2023\nSal, 15:27 tarihinde şunu yazdı:\n\n> 1.\n> + <para>\n> + If true, initial data synchronization will be performed in binary\n> format\n> + </para></entry>\n> It's not just the initial table sync right? The table sync can happen\n> at any other point of time when ALTER SUBSCRIPTION ... REFRESH\n> PUBLICATION WITH (copy = true) is run.\n> How about - \"If true, the subscriber requests publication for\n> pre-existing data in binary format\"?\n>\n\nI changed it as you suggested.\nI sometimes feel like the phrase \"initial sync\" is used for initial sync of\na table, not a subscription. 
Or table syncs triggered by ALTER SUBSCRIPTION\nare ignored in some places where \"initial sync\" is used.\n\n2.\n> Perhaps, this should cover the recommended cases for enabling this new\n> option - something like below (may not need to have exact wording, but\n> the recommended cases?):\n> \"It is recommended to enable this option only when 1) the column data\n> types have appropriate binary send/receive functions, 2) not\n> replicating between different major versions or different platforms,\n> 3) both publisher and subscriber tables have the exact same column\n> types (not when replicating from smallint to int or numeric to int8\n> and so on), 4) both publisher and subscriber supports COPY with binary\n> option, otherwise the table copy can fail.\"\n>\n\nI added a line stating that binary format is less portable across machine\narchitectures and versions as stated in COPY [1].\nI don't think we should add line saying \"recommended\", but state the\nrestrictions clearly instead. It's also similar in COPY docs as well.\nI think the explanation now covers all your points, right?\n\nJelte Fennema <postgres@jeltef.nl>, 28 Şub 2023 Sal, 16:25 tarihinde şunu\nyazdı:\n\n> > 3. I think the newly added tests must verify if the binary COPY is\n> > picked up when enabled. Perhaps, looking at the publisher's server log\n> > for 'COPY ... WITH BINARY format'? Maybe it's an overkill, otherwise,\n> > we have no way of testing that the option took effect.\n>\n> Another way to test that BINARY is enabled could be to trigger one\n> of the failure cases.\n\n\nYes, there is already a failure case for binary copy which resolves with\nswithcing binary_copy to false.\nBut I also added checks for publisher logs now too.\n\n\n[1] https://www.postgresql.org/docs/devel/sql-copy.html\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Tue, 28 Feb 2023 17:20:47 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 2:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Feb 20, 2023 at 3:37 PM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> >\n> > On Thu, Feb 16, 2023 8:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > So, doesn't this mean that there is no separate failure mode during\n> > > the initial copy? I am clarifying this to see if the patch really\n> > > needs a separate copy_format option for initial sync?\n> > >\n> >\n> > In the case that the data type doesn't have binary output function, for apply\n> > phase, the column will be sent in text format (see logicalrep_write_tuple()) and\n> > it works fine. But with copy_format = binary, the walsender exits with an\n> > error.\n> >\n> ...\n> ...\n> >\n> > Then I got the following error in the publisher log.\n> >\n> > walsender ERROR: no binary output function available for type public.myvarchar\n> > walsender STATEMENT: COPY public.tbl1 (a) TO STDOUT WITH (FORMAT binary)\n> >\n>\n> Thanks for sharing the example. I think to address this user can\n> create a SUBSCRIPTION with 'binary = false' and then after the initial\n> copy enables it with ALTER SUBSCRIPTION. Personally, I feel it is not\n> required to have a separate option to allow copy in binary mode. Note,\n> where there is some use for it but having more options for similar\n> work is also confusing as users need to pay attention to different\n> options and their values. It won't be difficult to add such an option\n> in the future if we see such cases and or users specifically require\n> something like this.\n\nI agree with this thought, basically adding an extra option will\nalways complicate things for the user. And logically it doesn't make\nmuch sense to copy data in text mode and then stream in binary mode\n(except in some exception cases and for that, we can always alter the\nsubscription). 
So IMHO it makes more sense that if the binary option\nis selected then ideally it should choose to do the initial sync also\nin the binary mode.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 1 Mar 2023 16:46:45 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 4:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > > walsender ERROR: no binary output function available for type public.myvarchar\n> > > walsender STATEMENT: COPY public.tbl1 (a) TO STDOUT WITH (FORMAT binary)\n> > >\n> >\n> > Thanks for sharing the example. I think to address this user can\n> > create a SUBSCRIPTION with 'binary = false' and then after the initial\n> > copy enables it with ALTER SUBSCRIPTION. Personally, I feel it is not\n> > required to have a separate option to allow copy in binary mode. Note,\n> > where there is some use for it but having more options for similar\n> > work is also confusing as users need to pay attention to different\n> > options and their values. It won't be difficult to add such an option\n> > in the future if we see such cases and or users specifically require\n> > something like this.\n>\n> I agree with this thought, basically adding an extra option will\n> always complicate things for the user. And logically it doesn't make\n> much sense to copy data in text mode and then stream in binary mode\n> (except in some exception cases and for that, we can always alter the\n> subscription). So IMHO it makes more sense that if the binary option\n> is selected then ideally it should choose to do the initial sync also\n> in the binary mode.\n\nI think I was suggesting earlier to use a separate option for binary\ntable sync copy based on my initial knowledge of binary COPY. Now that\nI have a bit more understanding of binary COPY and subscription's\nexisting binary option, +1 for using the same option for table sync\ntoo.\n\nIf used the existing subscription binary option for the table sync,\nthere can be following possibilities for the users:\n1. users might want to enable the binary option for table sync and\ndisable it for subsequent replication\n2. users might want to enable the binary option for both table sync\nand for subsequent replication\n3. 
users might want to disable the binary option for table sync and\nenable it for subsequent replication\n4. users might want to disable binary option for both table sync and\nfor subsequent replication\n\nBinary copy use-cases are a bit narrower compared to the existing\nsubscription binary option, it works only if:\na) the column data types have appropriate binary send/receive functions\nb) not replicating between different major versions or different platforms\nc) both publisher and subscriber tables have the exact same column\ntypes (not when replicating from smallint to int or numeric to int8\nand so on)\nd) both publisher and subscriber supports COPY with binary option\n\nNow if one enabled the binary option for table sync, that means, they\nmust have ensured all (a), (b), (c), and (d) are met. The point is if\none decides to use binary copy for table sync, it means that the\nsubsequent binary replication works too without any problem. If\nrequired, one can disable it for normal replication i.e. post-table\nsync.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 1 Mar 2023 17:32:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nBharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>, 1 Mar 2023 Çar,\n15:02 tarihinde şunu yazdı:\n\n> On Wed, Mar 1, 2023 at 4:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I agree with this thought, basically adding an extra option will\n> > always complicate things for the user. And logically it doesn't make\n> > much sense to copy data in text mode and then stream in binary mode\n> > (except in some exception cases and for that, we can always alter the\n> > subscription). So IMHO it makes more sense that if the binary option\n> > is selected then ideally it should choose to do the initial sync also\n> > in the binary mode.\n>\n\nI agree that copying in text then streaming in binary does not have a good\nuse-case.\n\nI think I was suggesting earlier to use a separate option for binary\n> table sync copy based on my initial knowledge of binary COPY. Now that\n> I have a bit more understanding of binary COPY and subscription's\n> existing binary option, +1 for using the same option for table sync\n> too.\n>\n> If used the existing subscription binary option for the table sync,\n> there can be following possibilities for the users:\n> 1. users might want to enable the binary option for table sync and\n> disable it for subsequent replication\n> 2. users might want to enable the binary option for both table sync\n> and for subsequent replication\n> 3. users might want to disable the binary option for table sync and\n> enable it for subsequent replication\n> 4. 
users might want to disable binary option for both table sync and\n> for subsequent replication\n>\n> Binary copy use-cases are a bit narrower compared to the existing\n> subscription binary option, it works only if:\n> a) the column data types have appropriate binary send/receive functions\n> b) not replicating between different major versions or different platforms\n> c) both publisher and subscriber tables have the exact same column\n> types (not when replicating from smallint to int or numeric to int8\n> and so on)\n> d) both publisher and subscriber supports COPY with binary option\n>\n> Now if one enabled the binary option for table sync, that means, they\n> must have ensured all (a), (b), (c), and (d) are met. The point is if\n> one decides to use binary copy for table sync, it means that the\n> subsequent binary replication works too without any problem. If\n> required, one can disable it for normal replication i.e. post-table\n> sync.\n>\n\nThat was my intention in the beginning with this patch. Then the new option\nalso made some sense at some point, and I added copy_binary option\naccording to reviews.\nThe earlier versions of the patch didn't have that. Without the new option,\nthis patch would also be smaller.\n\nBut before changing back to the point where these are all tied to binary\noption without a new option, I think we should decide if that's really the\nideal way to do it.\nI believe that the patch is all good now with the binary_copy option which\nis not tied to anything, explanations in the doc and separate tests etc.\nBut I also agree that binary=true should make everything in binary and\nbinary=false should do them in text format. 
It makes more sense.\n\nBest,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Wed, 1 Mar 2023 17:28:23 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Dear Melih,\r\n\r\nIf we do not have to treat the case Shi pointed out[1] at the code level, I agree to\r\nusing the same binary option because it is simpler. I read the use-cases addressed by Bharath[2]\r\nand I cannot find an advantage for cases 1 and 3 except the case where binary functions\r\nare not implemented.\r\nPreviously I said that the combination of \"copy_format = binary\" and \"copy_data = false\"\r\nseemed strange[3], but this approach could solve it and other related ones\r\nautomatically.\r\n\r\nI think we should add a description to the docs saying that the initial copy is more likely to fail,\r\nand that users should enable the binary format only after synchronization if\r\ntables have custom datatypes.\r\n\r\n[1]: https://www.postgresql.org/message-id/OSZPR01MB6310B58F069FF8E148B247FDFDA49%40OSZPR01MB6310.jpnprd01.prod.outlook.com\r\n[2]: https://www.postgresql.org/message-id/CALj2ACXiUsJoXt%3DfMpa4yYseB5h3un_syVh-J3RxL4-6r9Dx2A%40mail.gmail.com\r\n[3]: https://www.postgresql.org/message-id/TYAPR01MB5866968CF42FBAB73E8EEDF8F5AA9%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 1 Mar 2023 15:40:43 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nHayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>, 1 Mar 2023 Çar, 18:40\ntarihinde şunu yazdı:\n\n> Dear Melih,\n>\n> If we do not have to treat the case Shi pointed out[1] as code-level, I\n> agreed to\n> same option binary because it is simpler.\n\n\nHow is this an issue if we let the binary option do binary copy and not an\nissue if we have a separate copy_binary option?\nYou can easily have the similar errors when you set copy_binary=true if a\ntype is missing binary send/receive functions.\nAnd also, as Amit mentioned, the same issue can easily be avoided if\nbinary=false until the initial sync is done. It can be set to true later.\n\n\n> I read the use-cases addressed by Bharath[2]\n> and I cannot find advantage for case 1 and 3 expect the case that binary\n> functions\n> are not implemented.\n>\n\nNote that case 3 is already how it works on HEAD. Its advantages, as you\nalready mentioned, is when some types are missing the binary functions.\nI think that's why case 3 should be allowed even if a new option is added\nor not.\n\nPreviously I said that the combination of \"copy_format = binary\" and\n> \"copy_data = false\"\n> seemed strange[3], but this approach could solve it and other related ones\n> automatically.\n>\n\nI think it is quite similar to the case where binary=true and\nenabled=false. In that case, the format is set but the subscription does\nnot replicate anything. And this is allowed.\ncopy_binary=true and copy_data=false combination also sets the copy format\nbut does not copy anything. Even if any table will not be copied at that\nmoment, tables which might be added later might need to be copied (by ALTER\nSUBSCRIPTION). 
And setting the copy format beforehand can be useful in such\ncases.\n\n\n> I think we should add description to doc that it is more likely happen to\n> fail\n> the initial copy user should enable binary format after synchronization if\n> tables have original datatype.\n>\n\nI tried to explain when binary copy can cause failures in the doc. What\nexactly do you think is missing?\n\nBest,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Wed, 1 Mar 2023 21:09:44 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 5:10 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>, 1 Mar 2023 Çar, 18:40 tarihinde şunu yazdı:\n>>\n>> Dear Melih,\n>>\n>> If we do not have to treat the case Shi pointed out[1] as code-level, I agreed to\n>> same option binary because it is simpler.\n>\n>\n> How is this an issue if we let the binary option do binary copy and not an issue if we have a separate copy_binary option?\n> You can easily have the similar errors when you set copy_binary=true if a type is missing binary send/receive functions.\n> And also, as Amit mentioned, the same issue can easily be avoided if binary=false until the initial sync is done. It can be set to true later.\n>\n>>\n\nIIUC most people seem to be coming down in favour of there being a\nsingle unified option (the existing 'binary==true/false) which would\napply to both the COPY and the data replication parts.\n\nI also agree\n- Yes, it is simpler.\n- Yes, there are various workarounds in case the COPY part failed\n\nBut, AFAICT the main question remains unanswered -- Are we happy to\nbreak existing applications already using binary=true. E.g. I think\nthere might be cases where applications are working *only* because\ntheir binary=true is internally (and probably unbeknownst to the user)\nreverting to text. So if we unified everything under one 'binary'\noption then binary=true will force COPY binary so now some previously\nworking applications will get COPY errors requiring workarounds. Is\nthat acceptable?\n\nTBH I am not sure anymore if the complications justify the patch.\n\nIt seems we have to choose from 2 bad choices:\n- separate options = this works but would be more confusing for the user\n- unified option = this would be simpler and faster, but risks\nbreaking existing applications currently using 'binary=true'\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 2 Mar 2023 12:57:09 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 7:27 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, Mar 2, 2023 at 5:10 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>, 1 Mar 2023 Çar, 18:40 tarihinde şunu yazdı:\n> >>\n> >> Dear Melih,\n> >>\n> >> If we do not have to treat the case Shi pointed out[1] as code-level, I agreed to\n> >> same option binary because it is simpler.\n> >\n> >\n> > How is this an issue if we let the binary option do binary copy and not an issue if we have a separate copy_binary option?\n> > You can easily have the similar errors when you set copy_binary=true if a type is missing binary send/receive functions.\n> > And also, as Amit mentioned, the same issue can easily be avoided if binary=false until the initial sync is done. It can be set to true later.\n> >\n> >>\n>\n> IIUC most people seem to be coming down in favour of there being a\n> single unified option (the existing 'binary==true/false) which would\n> apply to both the COPY and the data replication parts.\n>\n> I also agree\n> - Yes, it is simpler.\n> - Yes, there are various workarounds in case the COPY part failed\n>\n> But, AFAICT the main question remains unanswered -- Are we happy to\n> break existing applications already using binary=true. E.g. I think\n> there might be cases where applications are working *only* because\n> their binary=true is internally (and probably unbeknownst to the user)\n> reverting to text. So if we unified everything under one 'binary'\n> option then binary=true will force COPY binary so now some previously\n> working applications will get COPY errors requiring workarounds. Is\n> that acceptable?\n>\n\nI think one can look at this from another angle also where users would\nbe expecting that when binary = true and copy_data = true, all the\ndata transferred between publisher and subscriber should be in binary\nformat. 
Users have a workaround to set binary=true only after the\ninitial sync. Also, if at all, the behaviour change would be after\nmajor version upgrade which shouldn't be a problem.\n\n> TBH I am not sure anymore if the complications justify the patch.\n>\n> It seems we have to choose from 2 bad choices:\n> - separate options = this works but would be more confusing for the user\n> - unified option = this would be simpler and faster, but risks\n> breaking existing applications currently using 'binary=true'\n>\n\nI would prefer a unified option as apart from other things you and\nothers mentioned that will be less of a maintenance burden in the\nfuture.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 2 Mar 2023 10:30:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 10:30 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > TBH I am not sure anymore if the complications justify the patch.\n> >\n> > It seems we have to choose from 2 bad choices:\n> > - separate options = this works but would be more confusing for the user\n> > - unified option = this would be simpler and faster, but risks\n> > breaking existing applications currently using 'binary=true'\n> >\n>\n> I would prefer a unified option as apart from other things you and\n> others mentioned that will be less of a maintenance burden in the\n> future.\n>\n+1\nWhen someone sets the binary=true while creating a subscription, the\nexpectation would be that the data transfer will happen in binary mode\nif binary in/out functions are available. As per current\nimplementation, that's not happening in the table-sync phase. So, it\nmakes sense to fix that behaviour in a major version release.\nFor the existing applications that are using (or unknowingly misusing)\nthe feature, as Amit mentioned, they have a workaround.\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\n\n\n",
"msg_date": "Thu, 2 Mar 2023 20:57:23 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 4:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 2, 2023 at 7:27 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n...\n> > IIUC most people seem to be coming down in favour of there being a\n> > single unified option (the existing 'binary==true/false) which would\n> > apply to both the COPY and the data replication parts.\n> >\n> > I also agree\n> > - Yes, it is simpler.\n> > - Yes, there are various workarounds in case the COPY part failed\n> >\n> > But, AFAICT the main question remains unanswered -- Are we happy to\n> > break existing applications already using binary=true. E.g. I think\n> > there might be cases where applications are working *only* because\n> > their binary=true is internally (and probably unbeknownst to the user)\n> > reverting to text. So if we unified everything under one 'binary'\n> > option then binary=true will force COPY binary so now some previously\n> > working applications will get COPY errors requiring workarounds. Is\n> > that acceptable?\n> >\n>\n> I think one can look at this from another angle also where users would\n> be expecting that when binary = true and copy_data = true, all the\n> data transferred between publisher and subscriber should be in binary\n> format. Users have a workaround to set binary=true only after the\n> initial sync. 
Also, if at all, the behaviour change would be after\n> major version upgrade which shouldn't be a problem.\n>\n> > TBH I am not sure anymore if the complications justify the patch.\n> >\n> > It seems we have to choose from 2 bad choices:\n> > - separate options = this works but would be more confusing for the user\n> > - unified option = this would be simpler and faster, but risks\n> > breaking existing applications currently using 'binary=true'\n> >\n>\n> I would prefer a unified option as apart from other things you and\n> others mentioned that will be less of a maintenance burden in the\n> future.\n\nMy concern was mostly just about the potential to break the behaviour\nof existing binary=true applications in some edge cases.\n\nIf you are happy that doing so shouldn't be a problem, then I am also\n+1 to use the unified option.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 3 Mar 2023 11:32:33 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 7:58 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>, 1 Mar 2023 Çar, 15:02 tarihinde şunu yazdı:\n>>\n>\n> That was my intention in the beginning with this patch. Then the new option also made some sense at some point, and I added copy_binary option according to reviews.\n> The earlier versions of the patch didn't have that. Without the new option, this patch would also be smaller.\n>\n> But before changing back to the point where these are all tied to binary option without a new option, I think we should decide if that's really the ideal way to do it.\n>\n\nAs per what I could read in this thread, most people prefer to use the\nexisting binary option rather than inventing a new way (option) to\nbinary copy in the initial sync phase. Do you agree? If so, it is\nbetter to update the patch accordingly as this is the last CF for this\nrelease and we have a limited time left.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 7 Mar 2023 08:40:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Dear Melih,\r\n\r\n>> I think we should add description to doc that it is more likely happen to fail\r\n>> the initial copy user should enable binary format after synchronization if\r\n>> tables have original datatype.\r\n>\r\n> I tried to explain when binary copy can cause failures in the doc. What exactly\r\n> do you think is missing?\r\n\r\nI assumed here that \"copy_format\" and \"binary\" were combined into one option.\r\nCurrently the binary option has descriptions :\r\n\r\n```\r\nEven when this option is enabled, only data types having binary send and receive functions will be transferred in binary\r\n```\r\n\r\nBut this is not suitable for initial data sync, as we knew. I meant to say that\r\nit must be updated if options are combined.\r\n\r\nNote that following is not necessary for PG16, just an improvement for newer version.\r\n\r\nIs it possible to automatically switch the binary option from 'true' to 'false'\r\nwhen data transfer fails? As we found that while synchronizing the initial data\r\nwith binary format may lead another error, and it can be solved if the options\r\nis changed. When DBAs check a log after synchronization and find the output like\r\n\"binary option was changed and worker will restart...\" or something, they can turn\r\n\"binary\" on again.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 8 Mar 2023 05:24:55 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nOn 7 Mar 2023 Tue at 04:10 Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> As per what I could read in this thread, most people prefer to use the\n> existing binary option rather than inventing a new way (option) to\n> binary copy in the initial sync phase. Do you agree?\n\n\nI agree.\nWhat do you think about the version checks? I removed any kind of check\nsince it’s currently a different option. Should we check publisher version\nbefore doing binary copy to ensure that the publisher node supports binary\noption of COPY command?\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft\n\nHi,On 7 Mar 2023 Tue at 04:10 Amit Kapila <amit.kapila16@gmail.com> wrote:\nAs per what I could read in this thread, most people prefer to use the\nexisting binary option rather than inventing a new way (option) to\nbinary copy in the initial sync phase. Do you agree?I agree.What do you think about the version checks? I removed any kind of check since it’s currently a different option. Should we check publisher version before doing binary copy to ensure that the publisher node supports binary option of COPY command?Thanks,-- Melih MutluMicrosoft",
"msg_date": "Wed, 8 Mar 2023 13:47:22 +0100",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 6:17 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> On 7 Mar 2023 Tue at 04:10 Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> As per what I could read in this thread, most people prefer to use the\n>> existing binary option rather than inventing a new way (option) to\n>> binary copy in the initial sync phase. Do you agree?\n>\n>\n> I agree.\n> What do you think about the version checks? I removed any kind of check since it’s currently a different option. Should we check publisher version before doing binary copy to ensure that the publisher node supports binary option of COPY command?\n>\n\nIt is not clear to me which version check you wanted to add because we\nseem to have a binary option in COPY from the time prior to logical\nreplication. I feel we need a publisher version 14 check as that is\nwhere we start to support binary mode transfer in logical replication.\nSee the check in function libpqrcv_startstreaming().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Mar 2023 08:20:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nAttached v12 with a unified option.\n\nSetting binary = true now allows the initial sync to happen in binary\nformat.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Mon, 13 Mar 2023 13:29:32 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Here are some review comments for patch v12-0001\n\n======\nGeneral\n\n1.\nThere is no new test code. Are we sure that there are already\nsufficient TAP tests doing binary testing with/without copy_data and\ncovering all the necessary combinations?\n\n======\nCommit Message\n\n2.\nWithout this patch, table are copied in text format even if the\nsubscription is created with binary option enabled. This patch allows\nlogical replication\nto perform in binary format starting from initial sync. When binary\nformat is beneficial\nto use, allowing the subscription to copy tables in binary in table\nsync phase may\nreduce the time spent on copy depending on column types.\n\n~\n\na. \"table\" -> \"tables\"\n\nb. I don't think need to keep referring to the initial table\nsynchronization many times.\n\nSUGGESTION\nWithout this patch, table synchronization COPY uses text format even\nif the subscription is created with the binary option enabled. Copying\ntables in binary format may reduce the time spent depending on column\ntypes.\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n3.\n@@ -241,10 +241,11 @@\n types of the columns do not need to match, as long as the text\n representation of the data can be converted to the target type. For\n example, you can replicate from a column of type <type>integer</type> to a\n- column of type <type>bigint</type>. The target table can also have\n- additional columns not provided by the published table. Any such columns\n- will be filled with the default value as specified in the definition of the\n- target table.\n+ column of type <type>bigint</type>. However, replication in\nbinary format is\n+ type specific and does not allow to replicate data between different types\n+ according to its restrictions. The target table can also have additional\n+ columns not provided by the published table. 
Any such columns\nwill be filled\n+ with the default value as specified in the definition of the target table.\n </para>\n\nI am not sure if there is enough information here about the binary restrictions.\n- e.g. does the column order matter for tablesync COPY binary?\n- e.g. no mention of the send/receive function requirements of tablesync COPY.\n\nBut maybe here is not the place to write all such details anyway;\nInstead of duplicating information IMO here should give a link to the\nCREATE SUBSCRIPTION notes -- something like:\n\nSUGGESTION\nNote that replication in binary format is more restrictive. See CREATE\nSUBSCRIPTION binary subscription parameter for details.\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n4.\n@@ -189,11 +189,17 @@ CREATE SUBSCRIPTION <replaceable\nclass=\"parameter\">subscription_name</replaceabl\n <term><literal>binary</literal> (<type>boolean</type>)</term>\n <listitem>\n <para>\n- Specifies whether the subscription will request the publisher to\n- send the data in binary format (as opposed to text).\n- The default is <literal>false</literal>.\n- Even when this option is enabled, only data types having\n- binary send and receive functions will be transferred in binary.\n+ Specifies whether the subscription will both copy the initial data to\n+ synchronize relations and request the publisher to send the data in\n+ binary format (as opposed to text). The default is\n<literal>false</literal>.\n+ Binary format can be faster than the text format, but it is\nless portable\n+ across machine architectures and PostgreSQL versions.\nBinary format is\n+ also very data type specific, it will not allow copying\nbetween different\n+ column types as opposed to text format. Even when this\noption is enabled,\n+ only data types having binary send and receive functions will be\n+ transferred in binary. 
Note that the initial synchronization requires\n+ all data types to have binary send and receive functions, otherwise\n+ the synchronization will fail.\n </para>\n\nThere seems to be a small ambiguity because this wording comes more\nfrom our code-level understanding, rather than what the user sees.\ne.g. I think \"will be transferred\" could mean also the COPY phase as\nfar as the user is concerned. Maybe some slight rewording can help.\n\nThere is also some use of \"copy\" (e.g. \"will not allow copying\") which\ncan be confused with the initial tablesync phase which is not what was\nintended.\n\nSUGGESTION\nSpecifies whether the subscription will request the publisher to send\nthe data in binary format (as opposed to text). The default is\n<literal>false</literal>. Any initial table synchronization copy [link\nto copy_data] also uses the same format. Using binary format can be\nfaster than the text format, but it is less portable across machine\narchitectures and PostgreSQL versions. Binary format is also data type\nspecific, it will not allow transfer between different column types as\nopposed to text format. Even when the binary option is enabled, only\ndata types having binary send/receive functions can be transferred in\nbinary format. If these functions don't exist then the publisher send\nwill revert to sending text format. 
Note that the binary initial table\nsynchronization copy requires all data types to have binary\nsend/receive functions, otherwise it will fail.\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n5.\n+\n+ /*\n+ * If the publisher is v14 or later, copy data in the required data format.\n+ * If the publisher version is earlier, it doesn't support COPY with binary\n+ * option.\n+ */\n+ if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 140000 &&\n+ MySubscription->binary)\n+ {\n+ appendStringInfo(&cmd, \" WITH (FORMAT binary)\");\n+ options = lappend(options, makeDefElem(\"format\", (Node *)\nmakeString(\"binary\"), -1));\n+ }\n+\n\n5a.\nI didn't think you need to say \"copy data in the required data format\".\n\nCan’t you just say like:\n\nSUGGESTION\nIf the publisher version is earlier than v14, it COPY command doesn't\nsupport the binary option.\n\n~\n\n5b.\nDoes this also need to be mentioned as a note on the CREATE\nSUBSCRIPTION docs page? e.g. COPY binary from server versions < v14\nwill work because it will just be skipping anyway and use text.\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\n6.\nThe v12 patch does not update the ALTER SUBSCRIPTION DOCS, but I\nthought perhaps there should be some reference from the ALTER\ncopy_data back to the main CREATE SUBSCRIPTION page because if the\nuser leaves the default (copy_data=true) then the ALTER might cause\nsome unexpected errors is the subscription was already using binary\nformat.\n\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 14 Mar 2023 11:06:46 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 11:06 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for patch v12-0001\n>\n> ======\n> General\n>\n> 1.\n> There is no new test code. Are we sure that there are already\n> sufficient TAP tests doing binary testing with/without copy_data and\n> covering all the necessary combinations?\n>\n\nOops. Please ignore this comment. Somehow I missed seeing those\n032_binary_copy.pl tests earlier.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 14 Mar 2023 11:48:11 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Here are some review comments for patch v12-0001 (test code only)\n\n======\nsrc/test/subscription/t/014_binary.pl\n\n# Check the synced data on subscribers\n\n~\n\nThere are a couple of comments like the above that say: \"on\nsubscribers\" instead of \"on subscriber\".\n\n~~~\n\nI wondered if it might be useful to also include another test case\nthat demonstrates you can still use COPY with binary format even when\nthe table column orders are different, so long as the same names have\nthe same data types. In other words, it shows apparently, the binary\nrow COPY processes per column; not one single binary data copy\nspanning all the replicated columns.\n\nFor example,\n\n# --------------------------------\n# Test syncing tables with different column order\n$node_publisher->safe_psql(\n 'postgres', qq(\n CREATE TABLE public.test_col_order (\n a bigint, b int\n );\n INSERT INTO public.test_col_order (a,b)\n VALUES (1,2),(3,4);\n ));\n\n$node_subscriber->safe_psql(\n 'postgres', qq(\n CREATE TABLE public.test_col_order (\n b int, a bigint\n );\n ALTER SUBSCRIPTION tsub REFRESH PUBLICATION;\n ));\n\n# Ensure nodes are in sync with each other\n$node_subscriber->wait_for_subscription_sync($node_publisher, 'tsub');\n\n# Check the synced data on subscribers\n$result = $node_subscriber->safe_psql('postgres', 'SELECT a,b FROM\npublic.test_col_order;');\n\nis( $result, '1|2\n3|4', 'check synced data on subscriber for different column order and\nbinary = true');\n# --------------------------------\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 14 Mar 2023 13:13:37 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 6:18 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Mar 14, 2023 at 11:06 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Here are some review comments for patch v12-0001\n> >\n> > ======\n> > General\n> >\n> > 1.\n> > There is no new test code. Are we sure that there are already\n> > sufficient TAP tests doing binary testing with/without copy_data and\n> > covering all the necessary combinations?\n> >\n>\n> Oops. Please ignore this comment. Somehow I missed seeing those\n> 032_binary_copy.pl tests earlier.\n>\n\nI think it would better to write the tests for this feature in the\nexisting test file 014_binary as that would save some time for node\nsetup/shutdown and also that would be a more appropriate place for\nthese tests.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 14 Mar 2023 08:47:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nAttached v13.\n\nPeter Smith <smithpb2250@gmail.com>, 14 Mar 2023 Sal, 03:07 tarihinde şunu\nyazdı:\n\n> Here are some review comments for patch v12-0001\n>\n\nThanks for reviewing. I tried to make explanations in docs better\naccording to your comments.\nWhat do you think?\n\n Amit Kapila <amit.kapila16@gmail.com>, 14 Mar 2023 Sal, 06:17 tarihinde\nşunu yazdı:\n\n> I think it would better to write the tests for this feature in the\n> existing test file 014_binary as that would save some time for node\n> setup/shutdown and also that would be a more appropriate place for\n> these tests.\n\n\nI removed 032_binary_copy.pl and added those tests into 014_binary.pl.\nAlso added the case with different column order as Peter suggested.\n\nBest,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Tue, 14 Mar 2023 14:01:56 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Tuesday, March 14, 2023 8:02 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> Attached v13.\nHi,\n\n\nThanks for sharing v13. Few minor review comments.\n(1) create_subscription.sgml\n\n+ column types as opposed to text format. Even when this option is enabled,\n+ only data types having binary send and receive functions will be\n+ transferred in binary. Note that the initial synchronization requires\n\n(1-1)\n\nI think it's helpful to add a reference for the description about send and receive functions (e.g. to the page of CREATE TYPE).\n\n(1-2)\n\nAlso, it would be better to have a cross reference from there to this doc as one paragraph probably in \"Notes\". I suggested this, because send and receive functions are described as \"optional\" there and missing them leads to error in the context of binary table synchronization.\n\n(3) copy_table()\n\n+ /*\n+ * If the publisher version is earlier than v14, it COPY command doesn't\n+ * support the binary option.\n+ */\n\nThis sentence doesn't look correct grammatically. We can replace \"it COPY command\" with \"subscription\" for example. Kindly please fix it.\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Tue, 14 Mar 2023 15:20:52 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 8:50 PM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, March 14, 2023 8:02 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> (3) copy_table()\n>\n> + /*\n> + * If the publisher version is earlier than v14, it COPY command doesn't\n> + * support the binary option.\n> + */\n>\n> This sentence doesn't look correct grammatically. We can replace \"it COPY command\" with \"subscription\" for example. Kindly please fix it.\n>\n\nHow about something like: \"The binary option for replication is\nsupported since v14.\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 15 Mar 2023 11:03:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Here are some review comments for v13-0001\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n1.\n@@ -241,10 +241,13 @@\n types of the columns do not need to match, as long as the text\n representation of the data can be converted to the target type. For\n example, you can replicate from a column of type <type>integer</type> to a\n- column of type <type>bigint</type>. The target table can also have\n- additional columns not provided by the published table. Any such columns\n- will be filled with the default value as specified in the definition of the\n- target table.\n+ column of type <type>bigint</type>. However, replication in\nbinary format is\n+ type specific and does not allow to replicate data between different types\n+ according to its restrictions (See <literal>binary</literal> option of\n+ <link linkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link>\n+ for details). The target table can also have additional columns\nnot provided\n+ by the published table. Any such columns will be filled with the default\n+ value as specified in the definition of the target table.\n </para>\n\nI don’t really think we should mention details of what the binary\nproblems are here, because then:\ni) it is just duplicating information already on the CREATE SUBSCRIPTION page\nii) you will miss some restrictions. (e.g. here you said something\nabout \"type specific\" but didn't mention send/receive functions would\nbe mandatory for the copy_data option)\n\nThat's why in the previous v12 review [1] (comment #3) I suggested\nthat this page should just say something quite generic like \"However,\nreplication in binary format is more restrictive\", and link back to\nthe other page which has all the gory details.\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\n2.\nMy previous v12 review [1] (comment #6) suggested maybe updating this\npage. But it was not done in v13. 
Did you accidentally miss the review\ncomment, or chose not to do it?\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n3.\n <para>\n- Specifies whether the subscription will request the publisher to\n- send the data in binary format (as opposed to text).\n- The default is <literal>false</literal>.\n- Even when this option is enabled, only data types having\n- binary send and receive functions will be transferred in binary.\n+ Specifies whether the subscription will request the publisher to send\n+ the data in binary format (as opposed to text). The default is\n+ <literal>false</literal>. Any initial table synchronization copy\n+ (see <literal>copy_data</literal>) also uses the same format. Binary\n+ format can be faster than the text format, but it is less portable\n+ across machine architectures and PostgreSQL versions.\nBinary format is\n+ also very data type specific, it will not allow copying\nbetween different\n+ column types as opposed to text format. Even when this\noption is enabled,\n+ only data types having binary send and receive functions will be\n+ transferred in binary. 
Note that the initial synchronization requires\n+ all data types to have binary send and receive functions, otherwise\n+ the synchronization will fail.\n </para>\n\n\nBEFORE\nBinary format is also very data type specific, it will not allow\ncopying between different column types as opposed to text format.\n\nSUGGESTION (worded more similar to what is already on the COPY page [2])\nBinary format is very data type specific; for example, it will not\nallow copying from a smallint column to an integer column, even though\nthat would work fine in text format.\n\n\n~~~\n\n4.\n\n+ <para>\n+ If the publisher is a <productname>PostgreSQL</productname> version\n+ before 14, then any initial table synchronization will use\ntext format\n+ even if this option is enabled.\n+ </para>\n\nIMO it will be clearer to explicitly say the option instead of 'this option'.\n\nSUGGESTION\nIf the publisher is a <productname>PostgreSQL</productname> version\nbefore 14, then any initial table synchronization will use text format\neven if <literal>binary = true</literal>.\n\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n5.\n+\n+ /*\n+ * If the publisher version is earlier than v14, it COPY command doesn't\n+ * support the binary option.\n+ */\n+ if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 140000 &&\n+ MySubscription->binary)\n+ {\n+ appendStringInfo(&cmd, \" WITH (FORMAT binary)\");\n+ options = lappend(options, makeDefElem(\"format\", (Node *)\nmakeString(\"binary\"), -1));\n+ }\n\nSorry, I gave a poor review comment for this previously. Now I have\nrevisited all the thread discussions about version checking. 
I feel\nthat some explanation should be given in the code comment so that\nfuture readers of this code can understand why you decided to use v14\nchecking.\n\nSomething like this:\n\nSUGGESTION\nIf the publisher version is earlier than v14, we use text format COPY.\nNote - In fact COPY syntax \"WITH (FORMAT binary)\" has existed since\nv9, but since the logical replication binary mode transfer was not\nintroduced until v14 it was decided to check using the later version.\n\n------\n[1] PS v12 review -\nhttps://www.postgresql.org/message-id/CAHut%2BPsAS8HpjdbDv%2BRM-YUJaLO0UC3f5be%2BqN296%2BGrewsGXg%40mail.gmail.com\n[2] pg docs COPY - https://www.postgresql.org/docs/current/sql-copy.html\n[3] pg docs COPY v9.0 - https://www.postgresql.org/docs/9.0/sql-copy.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 15 Mar 2023 17:22:14 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 11:52 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> ======\n> src/backend/replication/logical/tablesync.c\n>\n> 5.\n> +\n> + /*\n> + * If the publisher version is earlier than v14, it COPY command doesn't\n> + * support the binary option.\n> + */\n> + if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 140000 &&\n> + MySubscription->binary)\n> + {\n> + appendStringInfo(&cmd, \" WITH (FORMAT binary)\");\n> + options = lappend(options, makeDefElem(\"format\", (Node *)\n> makeString(\"binary\"), -1));\n> + }\n>\n> Sorry, I gave a poor review comment for this previously. Now I have\n> revisited all the thread discussions about version checking. I feel\n> that some explanation should be given in the code comment so that\n> future readers of this code can understand why you decided to use v14\n> checking.\n>\n> Something like this:\n>\n> SUGGESTION\n> If the publisher version is earlier than v14, we use text format COPY.\n>\n\nI think this isn't explicit that we supported the binary format since\nv14. So, I would prefer my version of the comment as suggested in the\nprevious email.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 15 Mar 2023 12:01:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\r\n\r\n\r\nOn Wednesday, March 15, 2023 2:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Mar 14, 2023 at 8:50 PM Takamichi Osumi (Fujitsu)\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Tuesday, March 14, 2023 8:02 PM Melih Mutlu\r\n> <m.melihmutlu@gmail.com> wrote:\r\n> > (3) copy_table()\r\n> >\r\n> > + /*\r\n> > + * If the publisher version is earlier than v14, it COPY command\r\n> doesn't\r\n> > + * support the binary option.\r\n> > + */\r\n> >\r\n> > This sentence doesn't look correct grammatically. We can replace \"it COPY\r\n> command\" with \"subscription\" for example. Kindly please fix it.\r\n> >\r\n> \r\n> How about something like: \"The binary option for replication is supported since\r\n> v14.\"?\r\nYes, this looks best to me. I agree with this suggestion.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 15 Mar 2023 06:49:36 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 4:32 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Attached v13.\n>\n\nI have a question related to the below test in the patch:\n\n+# Setting binary to false should allow syncing\n+$node_subscriber->safe_psql(\n+ 'postgres', qq(\n+ ALTER SUBSCRIPTION tsub SET (binary = false);));\n+\n+# Ensure the COPY command is executed in text format on the publisher\n+$node_publisher->wait_for_log(qr/LOG: ( [a-z0-9]+:)? COPY (.+)? TO STDOUT\\n/);\n+\n+$node_subscriber->wait_for_subscription_sync($node_publisher, 'tsub');\n+\n+# Check the synced data on the subscriber\n+$result = $node_subscriber->safe_psql('postgres', 'SELECT a FROM\ntest_mismatching_types ORDER BY a;');\n+\n+is( $result, '1\n+2', 'check synced data on subscriber with binary = false');\n+\n+# Test syncing tables with different column order\n+$node_publisher->safe_psql(\n+ 'postgres', qq(\n+ CREATE TABLE public.test_col_order (\n+ a bigint, b int\n+ );\n+ INSERT INTO public.test_col_order (a,b)\n+ VALUES (1,2),(3,4);\n+ ));\n\nWhat purpose does this test serve w.r.t this patch? Before checking\nthe sync for different column orders, the patch has already changed\nbinary to false, so it doesn't seem to test the functionality of this\npatch. Am, I missing something?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 15 Mar 2023 15:01:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nPlease see the attached patch.\n\nTakamichi Osumi (Fujitsu) <osumi.takamichi@fujitsu.com>, 14 Mar 2023 Sal,\n18:20 tarihinde şunu yazdı:\n\n> (1) create_subscription.sgml\n>\n> + column types as opposed to text format. Even when this option\n> is enabled,\n> + only data types having binary send and receive functions will be\n> + transferred in binary. Note that the initial synchronization\n> requires\n>\n> (1-1)\n>\n> I think it's helpful to add a reference for the description about send and\n> receive functions (e.g. to the page of CREATE TYPE).\n>\n\nDone.\n\n\n>\n> (1-2)\n>\n> Also, it would be better to have a cross reference from there to this doc\n> as one paragraph probably in \"Notes\". I suggested this, because send and\n> receive functions are described as \"optional\" there and missing them leads\n> to error in the context of binary table synchronization.\n>\n\nI'm not sure whether this is necessary. In case of missing send/receive\nfunctions, error logs are already clear about what's wrong and logical\nreplication docs also explain what could go wrong with binary.\n\n\n> (3) copy_table()\n>\n> + /*\n> + * If the publisher version is earlier than v14, it COPY command\n> doesn't\n> + * support the binary option.\n> + */\n>\n> This sentence doesn't look correct grammatically. We can replace \"it COPY\n> command\" with \"subscription\" for example. Kindly please fix it.\n>\n\nChanged this with Amit's suggestion [1].\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAA4eK1%2BC7ykvdBxh_t1BdbX5Da1bM1BgsE%3D-i2koPkd3pSid0A%40mail.gmail.com\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Wed, 15 Mar 2023 12:59:50 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nPlease see v14 [1].\n\nPeter Smith <smithpb2250@gmail.com>, 15 Mar 2023 Çar, 09:22 tarihinde şunu\nyazdı:\n\n> Here are some review comments for v13-0001\n>\n> ======\n> doc/src/sgml/logical-replication.sgml\n>\n> 1.\n> That's why in the previous v12 review [1] (comment #3) I suggested\n> that this page should just say something quite generic like \"However,\n> replication in binary format is more restrictive\", and link back to\n> the other page which has all the gory details.\n>\n\nYou're right. Changed it with what you suggested.\n\n\n> 2.\n> My previous v12 review [1] (comment #6) suggested maybe updating this\n> page. But it was not done in v13. Did you accidentally miss the review\n> comment, or chose not to do it?\n>\n\nSorry, I missed this. Added a line leading to CREATE SUBSCRIPTION doc.\n\n\n> ======\n> doc/src/sgml/ref/create_subscription.sgml\n>\n> 3.\n> BEFORE\n> Binary format is also very data type specific, it will not allow\n> copying between different column types as opposed to text format.\n>\n> SUGGESTION (worded more similar to what is already on the COPY page [2])\n> Binary format is very data type specific; for example, it will not\n> allow copying from a smallint column to an integer column, even though\n> that would work fine in text format.\n>\n\nDone.\n\n\n> 4.\n> SUGGESTION\n> If the publisher is a <productname>PostgreSQL</productname> version\n> before 14, then any initial table synchronization will use text format\n> even if <literal>binary = true</literal>.\n>\n\nDone.\n\n\n> SUGGESTION\n> If the publisher version is earlier than v14, we use text format COPY.\n> Note - In fact COPY syntax \"WITH (FORMAT binary)\" has existed since\n> v9, but since the logical replication binary mode transfer was not\n> introduced until v14 it was decided to check using the later version.\n>\n\nChanged it as suggested here 
[2].\n\n[1]\nhttps://www.postgresql.org/message-id/CAGPVpCTaXYctCUp3z%3D_BstonHiZcC5Jj7584i7B8jeZQq4RJkw%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/CAA4eK1%2BC7ykvdBxh_t1BdbX5Da1bM1BgsE%3D-i2koPkd3pSid0A%40mail.gmail.com\n\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft\n\nHi,Please see v14 [1].Peter Smith <smithpb2250@gmail.com>, 15 Mar 2023 Çar, 09:22 tarihinde şunu yazdı:Here are some review comments for v13-0001\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n1.\nThat's why in the previous v12 review [1] (comment #3) I suggested\nthat this page should just say something quite generic like \"However,\nreplication in binary format is more restrictive\", and link back to\nthe other page which has all the gory details.You're right. Changed it with what you suggested. \n2.\nMy previous v12 review [1] (comment #6) suggested maybe updating this\npage. But it was not done in v13. Did you accidentally miss the review\ncomment, or chose not to do it?Sorry, I missed this. Added a line leading to CREATE SUBSCRIPTION doc. \n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n3.\nBEFORE\nBinary format is also very data type specific, it will not allow\ncopying between different column types as opposed to text format.\n\nSUGGESTION (worded more similar to what is already on the COPY page [2])\nBinary format is very data type specific; for example, it will not\nallow copying from a smallint column to an integer column, even though\nthat would work fine in text format.Done. \n4.\nSUGGESTION\nIf the publisher is a <productname>PostgreSQL</productname> version\nbefore 14, then any initial table synchronization will use text format\neven if <literal>binary = true</literal>.Done. 
\nSUGGESTION\nIf the publisher version is earlier than v14, we use text format COPY.\nNote - In fact COPY syntax \"WITH (FORMAT binary)\" has existed since\nv9, but since the logical replication binary mode transfer was not\nintroduced until v14 it was decided to check using the later version.Changed it as suggested here [2].[1] https://www.postgresql.org/message-id/CAGPVpCTaXYctCUp3z%3D_BstonHiZcC5Jj7584i7B8jeZQq4RJkw%40mail.gmail.com[2] https://www.postgresql.org/message-id/CAA4eK1%2BC7ykvdBxh_t1BdbX5Da1bM1BgsE%3D-i2koPkd3pSid0A%40mail.gmail.com Thanks,-- Melih MutluMicrosoft",
"msg_date": "Wed, 15 Mar 2023 13:00:54 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com>, 15 Mar 2023 Çar, 12:31 tarihinde\nşunu yazdı:\n\n> On Tue, Mar 14, 2023 at 4:32 PM Melih Mutlu <m.melihmutlu@gmail.com>\n> wrote:\n>\n\n\n> What purpose does this test serve w.r.t this patch? Before checking\n> the sync for different column orders, the patch has already changed\n> binary to false, so it doesn't seem to test the functionality of this\n> patch. Am, I missing something?\n>\n\nI missed that binary has changed to false before testing column orders. I\nmoved that test case up before changing binary to false.\nPlease see v14 [1].\n\n[1]\nhttps://www.postgresql.org/message-id/CAGPVpCTaXYctCUp3z%3D_BstonHiZcC5Jj7584i7B8jeZQq4RJkw%40mail.gmail.com\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft\n\nAmit Kapila <amit.kapila16@gmail.com>, 15 Mar 2023 Çar, 12:31 tarihinde şunu yazdı:On Tue, Mar 14, 2023 at 4:32 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote: \nWhat purpose does this test serve w.r.t this patch? Before checking\nthe sync for different column orders, the patch has already changed\nbinary to false, so it doesn't seem to test the functionality of this\npatch. Am, I missing something?I missed that binary has changed to false before testing column orders. I moved that test case up before changing binary to false.Please see v14 [1].[1] https://www.postgresql.org/message-id/CAGPVpCTaXYctCUp3z%3D_BstonHiZcC5Jj7584i7B8jeZQq4RJkw%40mail.gmail.comThanks,-- Melih MutluMicrosoft",
"msg_date": "Wed, 15 Mar 2023 13:03:23 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Wed, 15 Mar 2023 at 15:30, Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> Please see the attached patch.\n\nOne comment:\n1) There might be a chance the result order of select may vary as\n\"ORDER BY\" is not specified, Should we add \"ORDER BY\" as the table\nhas multiple rows:\n+# Check the synced data on the subscriber\n+$result = $node_subscriber->safe_psql('postgres', 'SELECT a,b FROM\n+public.test_col_order;');\n+\n+is( $result, '1|2\n+3|4', 'check synced data on subscriber for different column order');\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 15 Mar 2023 20:42:18 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nvignesh C <vignesh21@gmail.com>, 15 Mar 2023 Çar, 18:12 tarihinde şunu\nyazdı:\n\n> One comment:\n> 1) There might be a chance the result order of select may vary as\n> \"ORDER BY\" is not specified, Should we add \"ORDER BY\" as the table\n> has multiple rows:\n> +# Check the synced data on the subscriber\n> +$result = $node_subscriber->safe_psql('postgres', 'SELECT a,b FROM\n> +public.test_col_order;');\n> +\n> +is( $result, '1|2\n> +3|4', 'check synced data on subscriber for different column order');\n>\n\nRight, it needs to be ordered. Fixed.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Wed, 15 Mar 2023 21:26:02 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Here are some review comments for v15-0001\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n1.\n+ target table. However, logical replication in binary format is\nmore restrictive,\n+ see <literal>binary</literal> option of\n+ <link linkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link>\n+ for more details.\n\nIMO (and Chat-GPT agrees) the new text should be 2 sentences.\n\nAlso, I changed \"more details\" --> \"details\" because none are provided here,.\n\nSUGGESTION\nHowever, logical replication in <literal>binary</literal> format is\nmore restrictive. See the binary option of <link\nlinkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link> for details.\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\n2.\n+ <para>\n+ See <literal>binary</literal> option of <xref\nlinkend=\"sql-createsubscription\"/>\n+ for details of copying pre-existing data in binary format.\n+ </para>\n\nShould the link should be defined more like you did above using the\n<command> markup to get the better font?\n\nSUGGESTION (also minor rewording)\nSee the <literal>binary</literal> option of <link\nlinkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link> for details about copying pre-existing\ndata in binary format.\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n3.\n <para>\n- Specifies whether the subscription will request the publisher to\n- send the data in binary format (as opposed to text).\n- The default is <literal>false</literal>.\n- Even when this option is enabled, only data types having\n- binary send and receive functions will be transferred in binary.\n+ Specifies whether the subscription will request the publisher to send\n+ the data in binary format (as opposed to text). The default is\n+ <literal>false</literal>. Any initial table synchronization copy\n+ (see <literal>copy_data</literal>) also uses the same format. 
Binary\n+ format can be faster than the text format, but it is less portable\n+ across machine architectures and PostgreSQL versions. Binary format\n+ is very data type specific; for example, it will not allow copying\n+ from a smallint column to an integer column, even though that would\n+ work fine in text format. Even when this option is enabled, only data\n+ types having binary send and receive functions will be transferred in\n+ binary. Note that the initial synchronization requires all data types\n+ to have binary send and receive functions, otherwise the\nsynchronization\n+ will fail (see <xref linkend=\"sql-createtype\"/> for more about\n+ send/receive functions).\n </para>\n\nIMO that part saying \"from a smallint column to an integer column\"\nshould have <type></type> markups for \"smallint\" and \"integer\".\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n4.\n+ /*\n+ * The binary option for replication is supported since v14\n+ */\n+ if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 140000 &&\n+ MySubscription->binary)\n+ {\n+ appendStringInfo(&cmd, \" WITH (FORMAT binary)\");\n+ options = lappend(options, makeDefElem(\"format\", (Node *)\nmakeString(\"binary\"), -1));\n+ }\n\nShould this now be a single-line comment instead of spanning 3 lines?\n\n======\nsrc/test/subscription/t/014_binary.pl\n\n5.\nEverything looked OK to me, but the TAP file has only small comments\nfor each test step, which forces you to read everything from\ntop-to-bottom to understand what is going on. I felt it might be\neasier to understand the tests if you add a few \"bigger\" comments just\nto break the tests into the categories being tested. 
For example,\nsomething like:\n\n\n# ------------------------------------------------------\n# Ensure binary mode also executes COPY in binary format\n# ------------------------------------------------------\n\n~\n\n# --------------------------------------\n# Ensure normal binary replication works\n# --------------------------------------\n\n~\n\n# ------------------------------------------------------------------------------\n# Use ALTER SUBSCRIPTION to change to text format and then back to binary format\n# ------------------------------------------------------------------------------\n\n~\n\n# ---------------------------------------------------------------\n# Test binary replication without and with send/receive functions\n# ---------------------------------------------------------------\n\n~\n\n# ----------------------------------------------\n# Test different column orders on pub/sub tables\n# ----------------------------------------------\n\n~\n\n# -----------------------------------------------------\n# Test mismatched column types with/without binary mode\n# -----------------------------------------------------\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 16 Mar 2023 11:03:14 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 5:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 15, 2023 at 11:52 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > ======\n> > src/backend/replication/logical/tablesync.c\n> >\n> > 5.\n> > +\n> > + /*\n> > + * If the publisher version is earlier than v14, it COPY command doesn't\n> > + * support the binary option.\n> > + */\n> > + if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 140000 &&\n> > + MySubscription->binary)\n> > + {\n> > + appendStringInfo(&cmd, \" WITH (FORMAT binary)\");\n> > + options = lappend(options, makeDefElem(\"format\", (Node *)\n> > makeString(\"binary\"), -1));\n> > + }\n> >\n> > Sorry, I gave a poor review comment for this previously. Now I have\n> > revisited all the thread discussions about version checking. I feel\n> > that some explanation should be given in the code comment so that\n> > future readers of this code can understand why you decided to use v14\n> > checking.\n> >\n> > Something like this:\n> >\n> > SUGGESTION\n> > If the publisher version is earlier than v14, we use text format COPY.\n> >\n>\n> I think this isn't explicit that we supported the binary format since\n> v14. So, I would prefer my version of the comment as suggested in the\n> previous email.\n>\n\nHmm, but my *full* suggestion was bigger than what is misquoted above,\nand it certainly did say \" logical replication binary mode transfer\nwas not introduced until v14\".\n\nSUGGESTION\nIf the publisher version is earlier than v14, we use text format COPY.\nNote - In fact COPY syntax \"WITH (FORMAT binary)\" has existed since\nv9, but since the logical replication binary mode transfer was not\nintroduced until v14 it was decided to check using the later version.\n\n~~\n\nAnyway, the shortened comment as in the latest v15 patch is fine by me too.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 16 Mar 2023 11:29:22 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Thu, Mar 16, 2023 2:26 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\r\n> \r\n> Right, it needs to be ordered. Fixed.\r\n> \r\n\r\nHi,\r\n\r\nThanks for updating the patch. I tested some cases like toast data, combination\r\nof row filter and column lists, and it works well.\r\n\r\nHere is a comment:\r\n\r\n+# Ensure the COPY command is executed in binary format on the publisher\r\n+$node_publisher->wait_for_log(qr/LOG: ( [a-z0-9]+:)? COPY (.+)? TO STDOUT WITH \\(FORMAT binary\\)/);\r\n\r\nThe test failed with `log_error_verbosity = verbose` because it couldn't match\r\nthe following log:\r\n2023-03-16 09:45:50.096 CST [2499415] pg_16398_sync_16391_7210954376230900539 LOG: 00000: statement: COPY public.test_arrays (a, b, c) TO STDOUT WITH (FORMAT binary)\r\n\r\nI think we should make it pass, see commit 19408aae7f.\r\nShould it be changed to:\r\n\r\n$node_publisher->wait_for_log(qr/LOG: ( [A-Z0-9]+:)? statement: COPY (.+)? TO STDOUT WITH \\(FORMAT binary\\)/);\r\n\r\nBesides, for the same reason, this line also needs to be modified.\r\n+$node_publisher->wait_for_log(qr/LOG: ( [a-z0-9]+:)? COPY (.+)? TO STDOUT\\n/);\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Thu, 16 Mar 2023 02:35:08 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 5:59 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Mar 15, 2023 at 5:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Mar 15, 2023 at 11:52 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > ======\n> > > src/backend/replication/logical/tablesync.c\n> > >\n> > > 5.\n> > > +\n> > > + /*\n> > > + * If the publisher version is earlier than v14, it COPY command doesn't\n> > > + * support the binary option.\n> > > + */\n> > > + if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 140000 &&\n> > > + MySubscription->binary)\n> > > + {\n> > > + appendStringInfo(&cmd, \" WITH (FORMAT binary)\");\n> > > + options = lappend(options, makeDefElem(\"format\", (Node *)\n> > > makeString(\"binary\"), -1));\n> > > + }\n> > >\n> > > Sorry, I gave a poor review comment for this previously. Now I have\n> > > revisited all the thread discussions about version checking. I feel\n> > > that some explanation should be given in the code comment so that\n> > > future readers of this code can understand why you decided to use v14\n> > > checking.\n> > >\n> > > Something like this:\n> > >\n> > > SUGGESTION\n> > > If the publisher version is earlier than v14, we use text format COPY.\n> > >\n> >\n> > I think this isn't explicit that we supported the binary format since\n> > v14. 
So, I would prefer my version of the comment as suggested in the\n> > previous email.\n> >\n>\n> Hmm, but my *full* suggestion was bigger than what is misquoted above,\n> and it certainly did say \" logical replication binary mode transfer\n> was not introduced until v14\".\n>\n> SUGGESTION\n> If the publisher version is earlier than v14, we use text format COPY.\n> Note - In fact COPY syntax \"WITH (FORMAT binary)\" has existed since\n> v9, but since the logical replication binary mode transfer was not\n> introduced until v14 it was decided to check using the later version.\n>\n\nI find this needlessly verbose.\n\n> ~~\n>\n> Anyway, the shortened comment as in the latest v15 patch is fine by me too.\n>\n\nOkay, then let's go with that.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Mar 2023 08:19:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\r\n\r\nThanks for updating the patch, I think it is a useful feature.\r\n\r\nI looked at the v15 patch and the patch looks mostly good to me.\r\nHere are few comments:\r\n\r\n1.\r\n+\t{\r\n+\t\tappendStringInfo(&cmd, \" WITH (FORMAT binary)\");\r\n\r\nWe could use appendStringInfoString here.\r\n\r\n\r\n2.\r\n+# It should fail\r\n+$node_subscriber->wait_for_log(qr/ERROR: ( [A-Z0-9]+:)? no binary input function available for type/);\r\n...\r\n+# Cannot sync due to type mismatch\r\n+$node_subscriber->wait_for_log(qr/ERROR: ( [A-Z0-9]+:)? incorrect binary data format/);\r\n...\r\n+# Ensure the COPY command is executed in text format on the publisher\r\n+$node_publisher->wait_for_log(qr/LOG: ( [a-z0-9]+:)? COPY (.+)? TO STDOUT\\n/);\r\n\r\nI think it would be better to pass the log offset when using wait_for_log,\r\nbecause otherwise it will check the whole log file to find the target message,\r\nThis might not be a big problem, but it has a risk of getting unexpected log message\r\nwhich was generated by previous commands.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Thu, 16 Mar 2023 02:54:57 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 3:33 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com>, 15 Mar 2023 Çar, 12:31 tarihinde şunu yazdı:\n>>\n>> On Tue, Mar 14, 2023 at 4:32 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n>\n>>\n>> What purpose does this test serve w.r.t this patch? Before checking\n>> the sync for different column orders, the patch has already changed\n>> binary to false, so it doesn't seem to test the functionality of this\n>> patch. Am, I missing something?\n>\n>\n> I missed that binary has changed to false before testing column orders. I moved that test case up before changing binary to false.\n> Please see v14 [1].\n>\n\nAfter thinking some more about this test, I don't think we need this\ntest as this doesn't add any value to this patch. This tests the\ncolumn orders which is well-established functionality of the apply\nworker.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Mar 2023 08:25:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Wed, Mar 8, 2023, at 11:50 PM, Amit Kapila wrote:\n> On Wed, Mar 8, 2023 at 6:17 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> >\n> > On 7 Mar 2023 Tue at 04:10 Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> As per what I could read in this thread, most people prefer to use the\n> >> existing binary option rather than inventing a new way (option) to\n> >> binary copy in the initial sync phase. Do you agree?\n> >\n> >\n> > I agree.\n> > What do you think about the version checks? I removed any kind of check since it’s currently a different option. Should we check publisher version before doing binary copy to ensure that the publisher node supports binary option of COPY command?\n> >\n> \n> It is not clear to me which version check you wanted to add because we\n> seem to have a binary option in COPY from the time prior to logical\n> replication. I feel we need a publisher version 14 check as that is\n> where we start to support binary mode transfer in logical replication.\n> See the check in function libpqrcv_startstreaming().\n... then you are breaking existent cases. Even if you have a convincing\nargument, you are introducing a behavior change in prior versions (commit\nmessages should always indicate that you are breaking backward compatibility).\n\n+\n+ /*\n+ * The binary option for replication is supported since v14\n+ */\n+ if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 140000 &&\n+ MySubscription->binary)\n+ {\n+ appendStringInfo(&cmd, \" WITH (FORMAT binary)\");\n+ options = lappend(options, makeDefElem(\"format\", (Node *) makeString(\"binary\"), -1));\n+ }\n+\n\nWhat are the arguments to support since v14 instead of the to-be-released\nversion? I read the thread but it is not clear. It was said about the\nrestrictive nature of this feature and it will be frustrating to see that the\nsame setup (with the same commands) works with v14 and v15 but it doesn't with\nv16. 
IMO it should be >= 16 and documentation should explain that v14/v15 uses\ntext format during initial table synchronization even if binary = true.\n\nShould there be a fallback mode (text) if initial table synchronization failed\nbecause of the binary option? Maybe a different setting (auto) to support such\nbehavior.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Mar 8, 2023, at 11:50 PM, Amit Kapila wrote:On Wed, Mar 8, 2023 at 6:17 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:>> On 7 Mar 2023 Tue at 04:10 Amit Kapila <amit.kapila16@gmail.com> wrote:>>>> As per what I could read in this thread, most people prefer to use the>> existing binary option rather than inventing a new way (option) to>> binary copy in the initial sync phase. Do you agree?>>> I agree.> What do you think about the version checks? I removed any kind of check since it’s currently a different option. Should we check publisher version before doing binary copy to ensure that the publisher node supports binary option of COPY command?>It is not clear to me which version check you wanted to add because weseem to have a binary option in COPY from the time prior to logicalreplication. I feel we need a publisher version 14 check as that iswhere we start to support binary mode transfer in logical replication.See the check in function libpqrcv_startstreaming().... then you are breaking existent cases. Even if you have a convincingargument, you are introducing a behavior change in prior versions (commitmessages should always indicate that you are breaking backward compatibility).++ /*+ * The binary option for replication is supported since v14+ */+ if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 140000 &&+ MySubscription->binary)+ {+ appendStringInfo(&cmd, \" WITH (FORMAT binary)\");+ options = lappend(options, makeDefElem(\"format\", (Node *) makeString(\"binary\"), -1));+ }+What are the arguments to support since v14 instead of the to-be-releasedversion? 
I read the thread but it is not clear. It was said about therestrictive nature of this feature and it will be frustrating to see that thesame setup (with the same commands) works with v14 and v15 but it doesn't withv16. IMO it should be >= 16 and documentation should explain that v14/v15 usestext format during initial table synchronization even if binary = true.Should there be a fallback mode (text) if initial table synchronization failedbecause of the binary option? Maybe a different setting (auto) to support suchbehavior.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 15 Mar 2023 23:57:24 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 8:27 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Wed, Mar 8, 2023, at 11:50 PM, Amit Kapila wrote:\n>\n> It is not clear to me which version check you wanted to add because we\n> seem to have a binary option in COPY from the time prior to logical\n> replication. I feel we need a publisher version 14 check as that is\n> where we start to support binary mode transfer in logical replication.\n> See the check in function libpqrcv_startstreaming().\n>\n> ... then you are breaking existent cases. Even if you have a convincing\n> argument, you are introducing a behavior change in prior versions (commit\n> messages should always indicate that you are breaking backward compatibility).\n>\n> +\n> + /*\n> + * The binary option for replication is supported since v14\n> + */\n> + if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 140000 &&\n> + MySubscription->binary)\n> + {\n> + appendStringInfo(&cmd, \" WITH (FORMAT binary)\");\n> + options = lappend(options, makeDefElem(\"format\", (Node *) makeString(\"binary\"), -1));\n> + }\n> +\n>\n> What are the arguments to support since v14 instead of the to-be-released\n> version? I read the thread but it is not clear. It was said about the\n> restrictive nature of this feature and it will be frustrating to see that the\n> same setup (with the same commands) works with v14 and v15 but it doesn't with\n> v16.\n>\n\nIf the failure has to happen it will anyway happen later when the\npublisher will be upgraded to v16. The reason for the version checks\nas v14 was to allow the initial sync from the same version where the\nbinary mode for replication was introduced. 
However, if we expect\nfailures in the existing setup, I am fine with supporting this for >=\nv16.\n\n> IMO it should be >= 16 and documentation should explain that v14/v15 uses\n> text format during initial table synchronization even if binary = true.\n>\n\nYeah, if we change the version then the corresponding text in the\npatch should also be changed.\n\n> Should there be a fallback mode (text) if initial table synchronization failed\n> because of the binary option? Maybe a different setting (auto) to support such\n> behavior.\n>\n\nI think the workaround is that the user disables binary mode for the\ntime of initial sync. I think if we want to extend and add a fallback\n(text) mode then it is better to keep it as default behavior rather\nthan introducing a new setting like 'auto'. Personally, I feel it can\nbe added later after doing some more study.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Mar 2023 08:54:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nPlease see the attached v16.\n\nPeter Smith <smithpb2250@gmail.com>, 16 Mar 2023 Per, 03:03 tarihinde şunu\nyazdı:\n\n> Here are some review comments for v15-0001\n>\n\nI applied your comments in the updated patch.\n\nshiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com>, 16 Mar 2023 Per, 05:35\ntarihinde şunu yazdı:\n\n> On Thu, Mar 16, 2023 2:26 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> >\n> > Right, it needs to be ordered. Fixed.\n> >\n>\n> Hi,\n>\n> Thanks for updating the patch. I tested some cases like toast data,\n> combination\n> of row filter and column lists, and it works well.\n>\n\nThanks for testing. I changed wait_for_log lines as you suggested.\n\nhouzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>, 16 Mar 2023 Per, 05:55\ntarihinde şunu yazdı:\n\n> 1.\n> + {\n> + appendStringInfo(&cmd, \" WITH (FORMAT binary)\");\n>\n> We could use appendStringInfoString here.\n>\n\nDone.\n\n\n> 2.\n> I think it would be better to pass the log offset when using wait_for_log,\n> because otherwise it will check the whole log file to find the target\n> message,\n> This might not be a big problem, but it has a risk of getting unexpected\n> log message\n> which was generated by previous commands.\n>\n\nYou're right. I added offsets for wait_for_log's .\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Thu, 16 Mar 2023 16:20:13 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com>, 16 Mar 2023 Per, 06:25 tarihinde\nşunu yazdı:\n\n> On Thu, Mar 16, 2023 at 8:27 AM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > On Wed, Mar 8, 2023, at 11:50 PM, Amit Kapila wrote:\n> >\n> > It is not clear to me which version check you wanted to add because we\n> > seem to have a binary option in COPY from the time prior to logical\n> > replication. I feel we need a publisher version 14 check as that is\n> > where we start to support binary mode transfer in logical replication.\n> > See the check in function libpqrcv_startstreaming().\n> >\n> > ... then you are breaking existent cases. Even if you have a convincing\n> > argument, you are introducing a behavior change in prior versions (commit\n> > messages should always indicate that you are breaking backward\n> compatibility).\n> >\n> > +\n> > + /*\n> > + * The binary option for replication is supported since v14\n> > + */\n> > + if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 140000 &&\n> > + MySubscription->binary)\n> > + {\n> > + appendStringInfo(&cmd, \" WITH (FORMAT binary)\");\n> > + options = lappend(options, makeDefElem(\"format\", (Node *)\n> makeString(\"binary\"), -1));\n> > + }\n> > +\n> >\n> > What are the arguments to support since v14 instead of the to-be-released\n> > version? I read the thread but it is not clear. It was said about the\n> > restrictive nature of this feature and it will be frustrating to see\n> that the\n> > same setup (with the same commands) works with v14 and v15 but it\n> doesn't with\n> > v16.\n> >\n>\n> If the failure has to happen it will anyway happen later when the\n> publisher will be upgraded to v16. The reason for the version checks\n> as v14 was to allow the initial sync from the same version where the\n> binary mode for replication was introduced. 
However, if we expect\n> failures in the existing setup, I am fine with supporting this for >=\n> v16.\n>\n\nUpgrading the subscriber to v16 and keeping the subscriber in v14 could\nbreak existing subscriptions. I don't know how likely such a case is.\n\nI don't have a strong preference on this. What do you think? Should we\nchange it >=v16 or keep it as it is?\n\nBest,\n-- \nMelih Mutlu\nMicrosoft\n\nAmit Kapila <amit.kapila16@gmail.com>, 16 Mar 2023 Per, 06:25 tarihinde şunu yazdı:On Thu, Mar 16, 2023 at 8:27 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Wed, Mar 8, 2023, at 11:50 PM, Amit Kapila wrote:\n>\n> It is not clear to me which version check you wanted to add because we\n> seem to have a binary option in COPY from the time prior to logical\n> replication. I feel we need a publisher version 14 check as that is\n> where we start to support binary mode transfer in logical replication.\n> See the check in function libpqrcv_startstreaming().\n>\n> ... then you are breaking existent cases. Even if you have a convincing\n> argument, you are introducing a behavior change in prior versions (commit\n> messages should always indicate that you are breaking backward compatibility).\n>\n> +\n> + /*\n> + * The binary option for replication is supported since v14\n> + */\n> + if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 140000 &&\n> + MySubscription->binary)\n> + {\n> + appendStringInfo(&cmd, \" WITH (FORMAT binary)\");\n> + options = lappend(options, makeDefElem(\"format\", (Node *) makeString(\"binary\"), -1));\n> + }\n> +\n>\n> What are the arguments to support since v14 instead of the to-be-released\n> version? I read the thread but it is not clear. It was said about the\n> restrictive nature of this feature and it will be frustrating to see that the\n> same setup (with the same commands) works with v14 and v15 but it doesn't with\n> v16.\n>\n\nIf the failure has to happen it will anyway happen later when the\npublisher will be upgraded to v16. 
The reason for the version checks\nas v14 was to allow the initial sync from the same version where the\nbinary mode for replication was introduced. However, if we expect\nfailures in the existing setup, I am fine with supporting this for >=\nv16.Upgrading the subscriber to v16 and keeping the subscriber in v14 could break existing subscriptions. I don't know how likely such a case is.I don't have a strong preference on this. What do you think? Should we change it >=v16 or keep it as it is?Best,-- Melih MutluMicrosoft",
"msg_date": "Thu, 16 Mar 2023 16:29:12 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 6:59 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com>, 16 Mar 2023 Per, 06:25 tarihinde şunu yazdı:\n>>\n>> On Thu, Mar 16, 2023 at 8:27 AM Euler Taveira <euler@eulerto.com> wrote:\n>> >\n>> > On Wed, Mar 8, 2023, at 11:50 PM, Amit Kapila wrote:\n>> >\n>> > It is not clear to me which version check you wanted to add because we\n>> > seem to have a binary option in COPY from the time prior to logical\n>> > replication. I feel we need a publisher version 14 check as that is\n>> > where we start to support binary mode transfer in logical replication.\n>> > See the check in function libpqrcv_startstreaming().\n>> >\n>> > ... then you are breaking existent cases. Even if you have a convincing\n>> > argument, you are introducing a behavior change in prior versions (commit\n>> > messages should always indicate that you are breaking backward compatibility).\n>> >\n>> > +\n>> > + /*\n>> > + * The binary option for replication is supported since v14\n>> > + */\n>> > + if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 140000 &&\n>> > + MySubscription->binary)\n>> > + {\n>> > + appendStringInfo(&cmd, \" WITH (FORMAT binary)\");\n>> > + options = lappend(options, makeDefElem(\"format\", (Node *) makeString(\"binary\"), -1));\n>> > + }\n>> > +\n>> >\n>> > What are the arguments to support since v14 instead of the to-be-released\n>> > version? I read the thread but it is not clear. It was said about the\n>> > restrictive nature of this feature and it will be frustrating to see that the\n>> > same setup (with the same commands) works with v14 and v15 but it doesn't with\n>> > v16.\n>> >\n>>\n>> If the failure has to happen it will anyway happen later when the\n>> publisher will be upgraded to v16. The reason for the version checks\n>> as v14 was to allow the initial sync from the same version where the\n>> binary mode for replication was introduced. 
However, if we expect\n>> failures in the existing setup, I am fine with supporting this for >=\n>> v16.\n>\n>\n> Upgrading the subscriber to v16 and keeping the subscriber in v14 could break existing subscriptions. I don't know how likely such a case is.\n>\n> I don't have a strong preference on this. What do you think? Should we change it >=v16 or keep it as it is?\n>\n\nI think to reduce the risk of breakage, let's change the check to\n>=v16. Also, accordingly, update the doc and commit message.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 17 Mar 2023 05:32:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 1:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 15, 2023 at 3:33 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com>, 15 Mar 2023 Çar, 12:31 tarihinde şunu yazdı:\n> >>\n> >> On Tue, Mar 14, 2023 at 4:32 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> >\n> >\n> >>\n> >> What purpose does this test serve w.r.t this patch? Before checking\n> >> the sync for different column orders, the patch has already changed\n> >> binary to false, so it doesn't seem to test the functionality of this\n> >> patch. Am, I missing something?\n> >\n> >\n> > I missed that binary has changed to false before testing column orders. I moved that test case up before changing binary to false.\n> > Please see v14 [1].\n> >\n>\n> After thinking some more about this test, I don't think we need this\n> test as this doesn't add any value to this patch. This tests the\n> column orders which is well-established functionality of the apply\n> worker.\n>\n\nI agree that different column order is a \"well-established\nfunctionality of the apply worker\".\n\nBut when I searched the TAP tests I could not find any existing tests\nthat check the combination of\n- different column orders\n- CREATE SUBSCRIPTION with parameters binary=true and copy_data=true\n\nSo there seemed to be a gap in the test coverage, which is why I suggested it.\n\nI guess that test was not strictly tied to this patch. Should I post\nthis new test suggestion as a separate thread or do you think there is\nno point because it will not get any support?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 17 Mar 2023 12:12:15 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 12:20 AM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> Please see the attached v16.\n>\n> Peter Smith <smithpb2250@gmail.com>, 16 Mar 2023 Per, 03:03 tarihinde şunu yazdı:\n>>\n>> Here are some review comments for v15-0001\n>\n>\n> I applied your comments in the updated patch.\n\nThanks.\n\nI checked patchv16-0001 and have only one minor comment.\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n1.\ndiff --git a/doc/src/sgml/logical-replication.sgml\nb/doc/src/sgml/logical-replication.sgml\nindex 6b0e300adc..bad25e54cd 100644\n--- a/doc/src/sgml/logical-replication.sgml\n+++ b/doc/src/sgml/logical-replication.sgml\n@@ -251,7 +251,10 @@\n column of type <type>bigint</type>. The target table can also have\n additional columns not provided by the published table. Any such columns\n will be filled with the default value as specified in the definition of the\n- target table.\n+ target table. However, logical replication in <literal>binary</literal>\n+ format is more restrictive. See the <literal>binary</literal> option of\n+ <link linkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link>\n+ for details.\n </para>\n\nIMO the sentence \"However, logical replication in binary format is\nmore restrictive.\" should just be plain text.\n\nThere should not be the <literal>binary</literal> markup in that 1st sentence.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 17 Mar 2023 12:57:33 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Thu, Mar 16, 2023 9:20 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\r\n> \r\n> Hi,\r\n> \r\n> Please see the attached v16.\r\n> \r\n\r\nThanks for updating the patch.\r\n\r\n+# Cannot sync due to type mismatch\r\n+$node_subscriber->wait_for_log(qr/ERROR: ( [A-Z0-9]+:)? incorrect binary data format/);\r\n\r\n+# Ensure the COPY command is executed in text format on the publisher\r\n+$node_publisher->wait_for_log(qr/LOG: ( [A-Z0-9]+:)? statement: COPY (.+)? TO STDOUT\\n/);\r\n\r\nIt looks that you forgot to pass `offset` into wait_for_log().\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Fri, 17 Mar 2023 02:26:19 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 6:42 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, Mar 16, 2023 at 1:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Mar 15, 2023 at 3:33 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> > >\n> > > Amit Kapila <amit.kapila16@gmail.com>, 15 Mar 2023 Çar, 12:31 tarihinde şunu yazdı:\n> > >>\n> > >> On Tue, Mar 14, 2023 at 4:32 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> > >\n> > >\n> > >>\n> > >> What purpose does this test serve w.r.t this patch? Before checking\n> > >> the sync for different column orders, the patch has already changed\n> > >> binary to false, so it doesn't seem to test the functionality of this\n> > >> patch. Am, I missing something?\n> > >\n> > >\n> > > I missed that binary has changed to false before testing column orders. I moved that test case up before changing binary to false.\n> > > Please see v14 [1].\n> > >\n> >\n> > After thinking some more about this test, I don't think we need this\n> > test as this doesn't add any value to this patch. This tests the\n> > column orders which is well-established functionality of the apply\n> > worker.\n> >\n>\n> I agree that different column order is a \"well-established\n> functionality of the apply worker\".\n>\n> But when I searched the TAP tests I could not find any existing tests\n> that check the combination of\n> - different column orders\n> - CREATE SUBSCRIPTION with parameters binary=true and copy_data=true\n>\n> So there seemed to be a gap in the test coverage, which is why I suggested it.\n>\n> I guess that test was not strictly tied to this patch. Should I post\n> this new test suggestion as a separate thread or do you think there is\n> no point because it will not get any support?\n>\n\nPersonally, I don't think we need to test every possible combination\nunless it is really achieving something meaningful. 
In this particular\ncase, I don't see the need or maybe I am missing something.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 17 Mar 2023 08:35:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nSharing v17.\n\nAmit Kapila <amit.kapila16@gmail.com>, 17 Mar 2023 Cum, 03:02 tarihinde\nşunu yazdı:\n\n> I think to reduce the risk of breakage, let's change the check to\n> >=v16. Also, accordingly, update the doc and commit message.\n>\n\nDone.\n\nPeter Smith <smithpb2250@gmail.com>, 17 Mar 2023 Cum, 04:58 tarihinde şunu\nyazdı:\n\n> IMO the sentence \"However, logical replication in binary format is\n> more restrictive.\" should just be plain text.\n>\n\nDone.\n\n shiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com>, 17 Mar 2023 Cum, 05:26\ntarihinde şunu yazdı:\n\n> It looks that you forgot to pass `offset` into wait_for_log().\n\n\nYes, I somehow didn't include those lines into the patch. Thanks for\nnoticing. Fixed them now.\n\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Fri, 17 Mar 2023 15:24:55 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Fri, 17 Mar 2023 at 17:55, Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> Sharing v17.\n>\n> Amit Kapila <amit.kapila16@gmail.com>, 17 Mar 2023 Cum, 03:02 tarihinde şunu yazdı:\n>>\n>> I think to reduce the risk of breakage, let's change the check to\n>> >=v16. Also, accordingly, update the doc and commit message.\n>\n>\n> Done.\n>\n> Peter Smith <smithpb2250@gmail.com>, 17 Mar 2023 Cum, 04:58 tarihinde şunu yazdı:\n>>\n>> IMO the sentence \"However, logical replication in binary format is\n>> more restrictive.\" should just be plain text.\n>\n>\n> Done.\n>\n> shiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com>, 17 Mar 2023 Cum, 05:26 tarihinde şunu yazdı:\n>>\n>> It looks that you forgot to pass `offset` into wait_for_log().\n>\n>\n> Yes, I somehow didn't include those lines into the patch. Thanks for noticing. Fixed them now.\n\nThanks for the updated patch, few comments:\n1) Currently we refer the link to the beginning of create subscription\npage, this can be changed to refer to binary option contents in create\nsubscription:\n+ <para>\n+ See the <literal>binary</literal> option of\n+ <link linkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link>\n+ for details about copying pre-existing data in binary format.\n+ </para>\n\n2) Running pgperltidy shows the test script 014_binary.pl could be\nslightly improved as in the attachment.\n\nRegards,\nVignesh",
"msg_date": "Sat, 18 Mar 2023 14:33:27 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 11:25 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> Sharing v17.\n>\n> Amit Kapila <amit.kapila16@gmail.com>, 17 Mar 2023 Cum, 03:02 tarihinde şunu yazdı:\n>>\n>> I think to reduce the risk of breakage, let's change the check to\n>> >=v16. Also, accordingly, update the doc and commit message.\n>\n>\n> Done.\n>\n\nHere are my review comments for v17-0001\n\n\n======\nCommit message\n\n1.\nBinary copy is supported for v16 or later.\n\n~\n\nAs written that's very general and not quite correct. E.g. COPY ...\nWITH (FORMAT binary) has been available for a long time. IMO that\ncommit message sentence ought to be more specific.\n\nSUGGESTION\nBinary copy for logical replication table synchronization is supported\nonly when both publisher and subscriber are v16 or later.\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n2.\n@@ -1168,6 +1170,15 @@ copy_table(Relation rel)\n\n appendStringInfoString(&cmd, \") TO STDOUT\");\n }\n+\n+ /* The binary option for replication is supported since v16 */\n+ if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 160000 &&\n+ MySubscription->binary)\n+ {\n+ appendStringInfoString(&cmd, \" WITH (FORMAT binary)\");\n+ options = lappend(options, makeDefElem(\"format\", (Node *)\nmakeString(\"binary\"), -1));\n+ }\n\n\nLogical replication binary mode was introduced in v14, so the old\ncomment (\"The binary option for replication is supported since v14\")\nwas correct. Unfortunately, after changing the code check to 16000, I\nthink the new comment (\"The binary option for replication is supported\nsince v16\") became incorrect, and so it needs some rewording. Maybe it\nshould say something like below:\n\nSUGGESTION\nIf the publisher is v16 or later, then any initial table\nsynchronization will use the same format as specified by the\nsubscription binary mode. 
If the publisher is before v16, then any\ninitial table synchronization will use text format regardless of the\nsubscription binary mode.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Sat, 18 Mar 2023 20:41:23 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Sat, Mar 18, 2023 at 3:11 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Mar 17, 2023 at 11:25 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> ======\n> src/backend/replication/logical/tablesync.c\n>\n> 2.\n> @@ -1168,6 +1170,15 @@ copy_table(Relation rel)\n>\n> appendStringInfoString(&cmd, \") TO STDOUT\");\n> }\n> +\n> + /* The binary option for replication is supported since v16 */\n> + if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 160000 &&\n> + MySubscription->binary)\n> + {\n> + appendStringInfoString(&cmd, \" WITH (FORMAT binary)\");\n> + options = lappend(options, makeDefElem(\"format\", (Node *)\n> makeString(\"binary\"), -1));\n> + }\n>\n>\n> Logical replication binary mode was introduced in v14, so the old\n> comment (\"The binary option for replication is supported since v14\")\n> was correct. Unfortunately, after changing the code check to 16000, I\n> think the new comment (\"The binary option for replication is supported\n> since v16\") became incorrect, and so it needs some rewording. Maybe it\n> should say something like below:\n>\n> SUGGESTION\n> If the publisher is v16 or later, then any initial table\n> synchronization will use the same format as specified by the\n> subscription binary mode. If the publisher is before v16, then any\n> initial table synchronization will use text format regardless of the\n> subscription binary mode.\n>\n\nI agree that the previous comment should be updated but I would prefer\nsomething along the lines: \"Prior to v16, initial table\nsynchronization will use text format even if the binary option is\nenabled for a subscription.\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 18 Mar 2023 15:41:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "There are a couple of TAP tests where the copy binary is expected to\nfail. And when it fails, you do binary=false (changing the format back\nto 'text') so the test is then expected to be able to proceed.\n\nI don't know if this happens in practice, but IIUC in theory, if the\ntiming is extremely bad, the tablesync could relaunch in binary mode\nmultiple times (any fail multiple times?) before your binary=false\nchange takes effect.\n\nSo, I was wondering if it could help to use the subscription\n'disable_on_error=true' parameter for those cases so that the\ntablesync won't needlessly attempt to relaunch until you have set\nbinary=false and then re-enabled the subscription.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 20 Mar 2023 09:07:02 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 3:37 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> There are a couple of TAP tests where the copy binary is expected to\n> fail. And when it fails, you do binary=false (changing the format back\n> to 'text') so the test is then expected to be able to proceed.\n>\n> I don't know if this happens in practice, but IIUC in theory, if the\n> timing is extremely bad, the tablesync could relaunch in binary mode\n> multiple times (any fail multiple times?) before your binary=false\n> change takes effect.\n>\n> So, I was wondering if it could help to use the subscription\n> 'disable_on_error=true' parameter for those cases so that the\n> tablesync won't needlessly attempt to relaunch until you have set\n> binary=false and then re-enabled the subscription.\n>\n\n+1. That would make tests more reliable.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 20 Mar 2023 07:43:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Dear Melih,\r\n\r\nThank you for updating the patch.\r\nI checked your added description about initial data sync and I think it's OK.\r\n\r\nFew minor comments:\r\n\r\n01. copy_table\r\n\r\n```\r\n+\tList \t *options = NIL;\r\n```\r\n\r\nI found a unnecessary blank just after \"List\". You can remove it and align definition.\r\n\r\n02. copy_table\r\n\r\n```\r\n+\t\toptions = lappend(options, makeDefElem(\"format\", (Node *) makeString(\"binary\"), -1));\r\n```\r\n\r\nThe line seems to exceed 80 characters. How do you think to change like following?\r\n\r\n```\r\n\t\toptions = lappend(options,\r\n\t\t\t\t\t\t makeDefElem(\"format\",\r\n\t\t\t\t\t\t\t\t\t (Node *) makeString(\"binary\"), -1));\r\n```\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 20 Mar 2023 04:13:29 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nPlease see the attached patch.\n\nvignesh C <vignesh21@gmail.com>, 18 Mar 2023 Cmt, 12:03 tarihinde şunu\nyazdı:\n\n> On Fri, 17 Mar 2023 at 17:55, Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> 1) Currently we refer the link to the beginning of create subscription\n> page, this can be changed to refer to binary option contents in create\n> subscription:\n>\n\nDone.\n\n\n> 2) Running pgperltidy shows the test script 014_binary.pl could be\n> slightly improved as in the attachment.\n>\n\nCouldn't apply the patch you attached but I ran pgperltidy myself and I\nguess the result should be similar.\n\n\nPeter Smith <smithpb2250@gmail.com>, 18 Mar 2023 Cmt, 12:41 tarihinde şunu\nyazdı:\n\n> Commit message\n>\n> 1.\n> Binary copy is supported for v16 or later.\n>\n\nDone as you suggested.\n\nAmit Kapila <amit.kapila16@gmail.com>, 18 Mar 2023 Cmt, 13:11 tarihinde\nşunu yazdı:\n\n> On Sat, Mar 18, 2023 at 3:11 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > SUGGESTION\n> > If the publisher is v16 or later, then any initial table\n> > synchronization will use the same format as specified by the\n> > subscription binary mode. If the publisher is before v16, then any\n> > initial table synchronization will use text format regardless of the\n> > subscription binary mode.\n> >\n>\n> I agree that the previous comment should be updated but I would prefer\n> something along the lines: \"Prior to v16, initial table\n> synchronization will use text format even if the binary option is\n> enabled for a subscription.\"\n>\n\nDone.\n\nAmit Kapila <amit.kapila16@gmail.com>, 20 Mar 2023 Pzt, 05:13 tarihinde\nşunu yazdı:\n\n> On Mon, Mar 20, 2023 at 3:37 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > There are a couple of TAP tests where the copy binary is expected to\n> > fail. 
And when it fails, you do binary=false (changing the format back\n> > to 'text') so the test is then expected to be able to proceed.\n> >\n> > I don't know if this happens in practice, but IIUC in theory, if the\n> > timing is extremely bad, the tablesync could relaunch in binary mode\n> > multiple times (any fail multiple times?) before your binary=false\n> > change takes effect.\n> >\n> > So, I was wondering if it could help to use the subscription\n> > 'disable_on_error=true' parameter for those cases so that the\n> > tablesync won't needlessly attempt to relaunch until you have set\n> > binary=false and then re-enabled the subscription.\n> >\n>\n> +1. That would make tests more reliable.\n>\n\nDone.\n\nHayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>, 20 Mar 2023 Pzt, 07:13\ntarihinde şunu yazdı:\n\n> Dear Melih,\n>\n> Thank you for updating the patch.\n> I checked your added description about initial data sync and I think it's\n> OK.\n>\n> Few minor comments:\n>\n\nFixed both comments.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Mon, 20 Mar 2023 13:58:59 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Here are my review comments for v18-0001\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n1.\n+ target table. However, logical replication in binary format is more\n+ restrictive. See the <literal>binary</literal> option of\n+ <link linkend=\"sql-createsubscription-binary\"><command>CREATE\nSUBSCRIPTION</command></link>\n+ for details.\n </para>\n\nBecause you've changed the linkend to be the binary option, IMO now\nthe <link> part also needs to be modified. Otherwise, this page has\nmultiple \"CREATE SUBSCRIPTION\" links which jump to different places,\nwhich just seems wrong to me.\n\nSUGGESTION (for the \"See the\" sentence)\n\nSee the <link linkend=\"sql-createsubscription-binary\"><literal>binary</literal>\noption</link> of <command>CREATE SUBSCRIPTION</command> for details.\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\n2.\n+ <para>\n+ See the <literal>binary</literal> option of\n+ <link\nlinkend=\"sql-createsubscription-binary\"><command>CREATE\nSUBSCRIPTION</command></link>\n+ for details about copying pre-existing data in binary format.\n+ </para>\n\n(Same as review comment #1 above)\n\nSUGGESTION\nSee the <link linkend=\"sql-createsubscription-binary\"><literal>binary</literal>\noption</link> of <command>CREATE SUBSCRIPTION</command> for details\nabout copying pre-existing data in binary format.\n\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n3.\n+ /*\n+ * Prior to v16, initial table synchronization will use text format even\n+ * if the binary option is enabled for a subscription.\n+ */\n+ if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 160000 &&\n+ MySubscription->binary)\n+ {\n+ appendStringInfoString(&cmd, \" WITH (FORMAT binary)\");\n+ options = lappend(options,\n+ makeDefElem(\"format\",\n+ (Node *) makeString(\"binary\"), -1));\n+ }\n\nI think there can only be 0 or 1 list element in 'options'.\n\nSo, why does the code here use lappend(options,...) 
instead of just\nusing list_make1(...)?\n\n======\nsrc/test/subscription/t/014_binary.pl\n\n4.\n# -----------------------------------------------------\n# Test mismatched column types with/without binary mode\n# -----------------------------------------------------\n\n# Test syncing tables with mismatching column types\n$node_publisher->safe_psql(\n'postgres', qq(\n CREATE TABLE public.test_mismatching_types (\n a bigint PRIMARY KEY\n );\n INSERT INTO public.test_mismatching_types (a)\n VALUES (1), (2);\n ));\n\n# Check the subscriber log from now on.\n$offset = -s $node_subscriber->logfile;\n\n# Ensure the subscription is enabled. disable_on_error is still true,\n# so the subscription can be disabled due to missing realtion until\n# the test_mismatching_types table is created.\n$node_subscriber->safe_psql(\n'postgres', qq(\n CREATE TABLE public.test_mismatching_types (\n a int PRIMARY KEY\n );\nALTER SUBSCRIPTION tsub ENABLE;\n ALTER SUBSCRIPTION tsub REFRESH PUBLICATION;\n ));\n\n~~\n\nI found the \"Ensure the subscription is enabled...\" comment and the\nnecessity for enabling the subscription to be confusing.\n\nCan't some complications all be eliminated just by creating the table\non the subscribe side first?\n\nFor example, I rearranged that test (above fragment) like below and it\nstill works OK for me:\n\n# -----------------------------------------------------\n# Test mismatched column types with/without binary mode\n# -----------------------------------------------------\n\n# Create the table on the subscriber side\n$node_subscriber->safe_psql(\n 'postgres', qq(\n CREATE TABLE public.test_mismatching_types (\n a int PRIMARY KEY\n )));\n\n# Check the subscriber log from now on.\n$offset = -s $node_subscriber->logfile;\n\n# Test syncing tables with mismatching column types\n$node_publisher->safe_psql(\n 'postgres', qq(\n CREATE TABLE public.test_mismatching_types (\n a bigint PRIMARY KEY\n );\n INSERT INTO public.test_mismatching_types (a)\n VALUES (1), 
(2);\n ));\n\n# Refresh the publication to trigger the tablesync\n$node_subscriber->safe_psql(\n 'postgres', qq(\n ALTER SUBSCRIPTION tsub REFRESH PUBLICATION;\n ));\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 21 Mar 2023 12:32:40 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 7:03 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n>\n> ======\n> src/test/subscription/t/014_binary.pl\n>\n> 4.\n> # -----------------------------------------------------\n> # Test mismatched column types with/without binary mode\n> # -----------------------------------------------------\n>\n> # Test syncing tables with mismatching column types\n> $node_publisher->safe_psql(\n> 'postgres', qq(\n> CREATE TABLE public.test_mismatching_types (\n> a bigint PRIMARY KEY\n> );\n> INSERT INTO public.test_mismatching_types (a)\n> VALUES (1), (2);\n> ));\n>\n> # Check the subscriber log from now on.\n> $offset = -s $node_subscriber->logfile;\n>\n> # Ensure the subscription is enabled. disable_on_error is still true,\n> # so the subscription can be disabled due to missing realtion until\n> # the test_mismatching_types table is created.\n> $node_subscriber->safe_psql(\n> 'postgres', qq(\n> CREATE TABLE public.test_mismatching_types (\n> a int PRIMARY KEY\n> );\n> ALTER SUBSCRIPTION tsub ENABLE;\n> ALTER SUBSCRIPTION tsub REFRESH PUBLICATION;\n> ));\n>\n> ~~\n>\n> I found the \"Ensure the subscription is enabled...\" comment and the\n> necessity for enabling the subscription to be confusing.\n>\n> Can't some complications all be eliminated just by creating the table\n> on the subscribe side first?\n>\n\nHmm, that would make this test inconsistent with other tests and\nprobably difficult to understand and extend. I don't like to say this\nbut I think introducing disable_on_error has introduced more\ncomplexities in the patch due to the requirement of enabling\nsubscription again and again. 
I feel it would be better without using\ndisable_on_error option in these tests.\n\nOne minor point:\n+ format can be faster than the text format, but it is less portable\n+ across machine architectures and PostgreSQL versions.\n\nAs per email [1], it would be better to use <productname> tag here\nwith PostgreSQL.\n\n[1] - https://www.postgresql.org/message-id/932629.1679322674%40sss.pgh.pa.us\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 21 Mar 2023 11:33:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com>, 21 Mar 2023 Sal, 09:03 tarihinde\nşunu yazdı:\n\n> On Tue, Mar 21, 2023 at 7:03 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> >\n> > ======\n> > src/test/subscription/t/014_binary.pl\n> >\n> > 4.\n> > # -----------------------------------------------------\n> > # Test mismatched column types with/without binary mode\n> > # -----------------------------------------------------\n> >\n> > # Test syncing tables with mismatching column types\n> > $node_publisher->safe_psql(\n> > 'postgres', qq(\n> > CREATE TABLE public.test_mismatching_types (\n> > a bigint PRIMARY KEY\n> > );\n> > INSERT INTO public.test_mismatching_types (a)\n> > VALUES (1), (2);\n> > ));\n> >\n> > # Check the subscriber log from now on.\n> > $offset = -s $node_subscriber->logfile;\n> >\n> > # Ensure the subscription is enabled. disable_on_error is still true,\n> > # so the subscription can be disabled due to missing realtion until\n> > # the test_mismatching_types table is created.\n> > $node_subscriber->safe_psql(\n> > 'postgres', qq(\n> > CREATE TABLE public.test_mismatching_types (\n> > a int PRIMARY KEY\n> > );\n> > ALTER SUBSCRIPTION tsub ENABLE;\n> > ALTER SUBSCRIPTION tsub REFRESH PUBLICATION;\n> > ));\n> >\n> > ~~\n> >\n> > I found the \"Ensure the subscription is enabled...\" comment and the\n> > necessity for enabling the subscription to be confusing.\n> >\n> > Can't some complications all be eliminated just by creating the table\n> > on the subscribe side first?\n> >\n>\n> Hmm, that would make this test inconsistent with other tests and\n> probably difficult to understand and extend. I don't like to say this\n> but I think introducing disable_on_error has introduced more\n> complexities in the patch due to the requirement of enabling\n> subscription again and again. 
I feel it would be better without using\n> disable_on_error option in these tests.\n>\n\nWhile this change would make the test inconsistent, I think it also would\nmake more confusing.\nExplaining the issue explicitly with a comment seems better to me than the\ntrick of changing order of table creation just for some test cases.\nBut I'm also ok with removing the use of disable_on_error if that's what\nyou agree on.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Tue, 21 Mar 2023 11:46:17 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 2:16 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com>, 21 Mar 2023 Sal, 09:03 tarihinde şunu yazdı:\n>>\n>> On Tue, Mar 21, 2023 at 7:03 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>> >\n>> >\n>> > ======\n>> > src/test/subscription/t/014_binary.pl\n>> >\n>> > 4.\n>> > # -----------------------------------------------------\n>> > # Test mismatched column types with/without binary mode\n>> > # -----------------------------------------------------\n>> >\n>> > # Test syncing tables with mismatching column types\n>> > $node_publisher->safe_psql(\n>> > 'postgres', qq(\n>> > CREATE TABLE public.test_mismatching_types (\n>> > a bigint PRIMARY KEY\n>> > );\n>> > INSERT INTO public.test_mismatching_types (a)\n>> > VALUES (1), (2);\n>> > ));\n>> >\n>> > # Check the subscriber log from now on.\n>> > $offset = -s $node_subscriber->logfile;\n>> >\n>> > # Ensure the subscription is enabled. disable_on_error is still true,\n>> > # so the subscription can be disabled due to missing realtion until\n>> > # the test_mismatching_types table is created.\n>> > $node_subscriber->safe_psql(\n>> > 'postgres', qq(\n>> > CREATE TABLE public.test_mismatching_types (\n>> > a int PRIMARY KEY\n>> > );\n>> > ALTER SUBSCRIPTION tsub ENABLE;\n>> > ALTER SUBSCRIPTION tsub REFRESH PUBLICATION;\n>> > ));\n>> >\n>> > ~~\n>> >\n>> > I found the \"Ensure the subscription is enabled...\" comment and the\n>> > necessity for enabling the subscription to be confusing.\n>> >\n>> > Can't some complications all be eliminated just by creating the table\n>> > on the subscribe side first?\n>> >\n>>\n>> Hmm, that would make this test inconsistent with other tests and\n>> probably difficult to understand and extend. I don't like to say this\n>> but I think introducing disable_on_error has introduced more\n>> complexities in the patch due to the requirement of enabling\n>> subscription again and again. 
I feel it would be better without using\n>> disable_on_error option in these tests.\n>\n>\n> While this change would make the test inconsistent, I think it also would make more confusing.\n>\n\nI also think so.\n\n> Explaining the issue explicitly with a comment seems better to me than the trick of changing order of table creation just for some test cases.\n> But I'm also ok with removing the use of disable_on_error if that's what you agree on.\n>\n\nLet's do that way for now.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 21 Mar 2023 14:56:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nPeter Smith <smithpb2250@gmail.com>, 21 Mar 2023 Sal, 04:33 tarihinde şunu\nyazdı:\n\n> Here are my review comments for v18-0001\n>\n> ======\n> doc/src/sgml/logical-replication.sgml\n>\n> 1.\n> + target table. However, logical replication in binary format is more\n> + restrictive. See the <literal>binary</literal> option of\n> + <link linkend=\"sql-createsubscription-binary\"><command>CREATE\n> SUBSCRIPTION</command></link>\n> + for details.\n> </para>\n>\n> Because you've changed the linkend to be the binary option, IMO now\n> the <link> part also needs to be modified. Otherwise, this page has\n> multiple \"CREATE SUBSCRIPTION\" links which jump to different places,\n> which just seems wrong to me.\n>\n\nMakes sense. I changed it as you suggested.\n\n\n> 3.\n> I think there can only be 0 or 1 list element in 'options'.\n>\n> So, why does the code here use lappend(options,...) instead of just\n> using list_make1(...)?\n>\n\nChanged it to list_make1.\n\nAmit Kapila <amit.kapila16@gmail.com>, 21 Mar 2023 Sal, 12:27 tarihinde\nşunu yazdı:\n\n> > Explaining the issue explicitly with a comment seems better to me than\n> the trick of changing order of table creation just for some test cases.\n> > But I'm also ok with removing the use of disable_on_error if that's what\n> you agree on.\n> >\n>\n> Let's do that way for now.\n>\n\nDone.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Tue, 21 Mar 2023 12:32:40 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Thanks for all the patch updates. Patch v19 LGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 22 Mar 2023 10:28:53 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Wed Mar 22, 2023 7:29 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> Thanks for all the patch updates. Patch v19 LGTM.\r\n> \r\n\r\n+1\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Wed, 22 Mar 2023 03:30:11 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 9:00 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Wed Mar 22, 2023 7:29 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Thanks for all the patch updates. Patch v19 LGTM.\n> >\n>\n> +1\n>\n\nThe patch looks mostly good to me. However, I have one\nquestion/comment as follows:\n\n- <varlistentry>\n+ <varlistentry id=\"sql-createsubscription-binary\" xreflabel=\"binary\">\n <term><literal>binary</literal> (<type>boolean</type>)</term>\n <listitem>\n\nTo allow references to the binary option, we add the varlistentry id\nhere. It looks slightly odd to me to add id for just one entry, see\ncommit 78ee60ed84bb3a1cf0b6bd9a715dcbcf252a90f5 where we have\npurposefully added ids to allow future references. Shall we add id to\nother options as well on this page?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 22 Mar 2023 15:20:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Dear Amit, hackers,\r\n\r\n> The patch looks mostly good to me. However, I have one\r\n> question/comment as follows:\r\n> \r\n> - <varlistentry>\r\n> + <varlistentry id=\"sql-createsubscription-binary\" xreflabel=\"binary\">\r\n> <term><literal>binary</literal> (<type>boolean</type>)</term>\r\n> <listitem>\r\n> \r\n> To allow references to the binary option, we add the varlistentry id\r\n> here. It looks slightly odd to me to add id for just one entry, see\r\n> commit 78ee60ed84bb3a1cf0b6bd9a715dcbcf252a90f5 where we have\r\n> purposefully added ids to allow future references. Shall we add id to\r\n> other options as well on this page?\r\n\r\nI have analyzed same points and made patch that could be applied atop v19-0001.\r\nPlease check 0002 patch.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 22 Mar 2023 10:36:47 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 4:06 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> > The patch looks mostly good to me. However, I have one\n> > question/comment as follows:\n> >\n> > - <varlistentry>\n> > + <varlistentry id=\"sql-createsubscription-binary\" xreflabel=\"binary\">\n> > <term><literal>binary</literal> (<type>boolean</type>)</term>\n> > <listitem>\n> >\n> > To allow references to the binary option, we add the varlistentry id\n> > here. It looks slightly odd to me to add id for just one entry, see\n> > commit 78ee60ed84bb3a1cf0b6bd9a715dcbcf252a90f5 where we have\n> > purposefully added ids to allow future references. Shall we add id to\n> > other options as well on this page?\n>\n> I have analyzed same points and made patch that could be applied atop v19-0001.\n> Please check 0002 patch.\n>\n\nPushed the 0001. It may be better to start a separate thread for 0002.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 23 Mar 2023 11:18:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> Pushed the 0001. It may be better to start a separate thread for 0002.\r\n\r\nGood job! I have started new thread [1] for 0002.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB58667AE04D291924671E2051F5879%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 23 Mar 2023 06:23:05 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Allow logical replication to copy tables in binary format"
},
{
"msg_contents": "Hi,\n\nAmit Kapila <amit.kapila16@gmail.com>, 23 Mar 2023 Per, 08:48 tarihinde\nşunu yazdı:\n\n> Pushed the 0001. It may be better to start a separate thread for 0002.\n>\n\nGreat! Thanks.\n\nBest,\n-- \nMelih Mutlu\nMicrosoft\n\nHi, Amit Kapila <amit.kapila16@gmail.com>, 23 Mar 2023 Per, 08:48 tarihinde şunu yazdı:\nPushed the 0001. It may be better to start a separate thread for 0002.Great! Thanks.Best, -- Melih MutluMicrosoft",
"msg_date": "Thu, 23 Mar 2023 10:40:29 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow logical replication to copy tables in binary format"
}
] |
[
{
"msg_contents": "when parsing command-line options, the -f option support disabling\n8 scan and join methods, o, b and t disable index-only scans,\nbitmap index scans, and TID scans respectively, add them to the\nhelp message.\n\ndiff --git a/src/backend/main/main.c b/src/backend/main/main.c\nindex 5a964a0db6..f5da4260a1 100644\n--- a/src/backend/main/main.c\n+++ b/src/backend/main/main.c\n@@ -351,7 +351,7 @@ help(const char *progname)\n printf(_(\" -?, --help show this help, then exit\\n\"));\n\n printf(_(\"\\nDeveloper options:\\n\"));\n- printf(_(\" -f s|i|n|m|h forbid use of some plan\ntypes\\n\"));\n+ printf(_(\" -f s|i|o|b|t|n|m|h forbid use of some plan\ntypes\\n\"));\n printf(_(\" -n do not reinitialize shared\nmemory after abnormal exit\\n\"));\n printf(_(\" -O allow system table structure\nchanges\\n\"));\n printf(_(\" -P disable system indexes\\n\"));\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Wed, 10 Aug 2022 23:32:18 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "fix stale help message"
},
{
"msg_contents": "Hi peter,\n\nSorry to bother, but I notice that you are one of the most active\ncommitters, can you pls take a look at this thread.\n\nThanks!\n\nOn Wed, Aug 10, 2022 at 11:32 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> when parsing command-line options, the -f option support disabling\n> 8 scan and join methods, o, b and t disable index-only scans,\n> bitmap index scans, and TID scans respectively, add them to the\n> help message.\n>\n> diff --git a/src/backend/main/main.c b/src/backend/main/main.c\n> index 5a964a0db6..f5da4260a1 100644\n> --- a/src/backend/main/main.c\n> +++ b/src/backend/main/main.c\n> @@ -351,7 +351,7 @@ help(const char *progname)\n> printf(_(\" -?, --help show this help, then exit\\n\"));\n>\n> printf(_(\"\\nDeveloper options:\\n\"));\n> - printf(_(\" -f s|i|n|m|h forbid use of some plan\n> types\\n\"));\n> + printf(_(\" -f s|i|o|b|t|n|m|h forbid use of some plan\n> types\\n\"));\n> printf(_(\" -n do not reinitialize shared\n> memory after abnormal exit\\n\"));\n> printf(_(\" -O allow system table structure\n> changes\\n\"));\n> printf(_(\" -P disable system indexes\\n\"));\n>\n> --\n> Regards\n> Junwang Zhao\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Sat, 13 Aug 2022 09:03:29 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fix stale help message"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 11:32:18PM +0800, Junwang Zhao wrote:\n> when parsing command-line options, the -f option support disabling\n> 8 scan and join methods, o, b and t disable index-only scans,\n> bitmap index scans, and TID scans respectively, add them to the\n> help message.\n>\n> @@ -351,7 +351,7 @@ help(const char *progname)\n> printf(_(\" -?, --help show this help, then exit\\n\"));\n> \n> printf(_(\"\\nDeveloper options:\\n\"));\n> - printf(_(\" -f s|i|n|m|h forbid use of some plan\n> types\\n\"));\n> + printf(_(\" -f s|i|o|b|t|n|m|h forbid use of some plan\n> types\\n\"));\n> printf(_(\" -n do not reinitialize shared\n> memory after abnormal exit\\n\"));\n> printf(_(\" -O allow system table structure\n> changes\\n\"));\n> printf(_(\" -P disable system indexes\\n\"));\n\nset_plan_disabling_options() is telling that you have all of them, as\nmuch as the docs. I don't mind fixing that as you suggest, FWIW.\n--\nMichael",
"msg_date": "Sun, 14 Aug 2022 19:16:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fix stale help message"
},
{
"msg_contents": "Hi Michael,\n\nThanks for your reply :)\n\nI think the goal of `help message` is to tell users(like DBA) how\nto use postgres, so it's better to provide a complete view.\n\nI'm not sure by saying `set_plan_disabling_options` do you mean\nthe postgres source code? If that's the case, I think most of the\ninstallations don't have source code but just binaries, people will\nreference the help message by running `postgres --help`.\n\nBTW, I noticed in [0] the doc gives the full options, so I think we\nshould keep the source code in consistent with the doc.\n\n[0] https://www.postgresql.org/docs/current/app-postgres.html\n\nOn Sun, Aug 14, 2022 at 6:17 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Aug 10, 2022 at 11:32:18PM +0800, Junwang Zhao wrote:\n> > when parsing command-line options, the -f option support disabling\n> > 8 scan and join methods, o, b and t disable index-only scans,\n> > bitmap index scans, and TID scans respectively, add them to the\n> > help message.\n> >\n> > @@ -351,7 +351,7 @@ help(const char *progname)\n> > printf(_(\" -?, --help show this help, then exit\\n\"));\n> >\n> > printf(_(\"\\nDeveloper options:\\n\"));\n> > - printf(_(\" -f s|i|n|m|h forbid use of some plan\n> > types\\n\"));\n> > + printf(_(\" -f s|i|o|b|t|n|m|h forbid use of some plan\n> > types\\n\"));\n> > printf(_(\" -n do not reinitialize shared\n> > memory after abnormal exit\\n\"));\n> > printf(_(\" -O allow system table structure\n> > changes\\n\"));\n> > printf(_(\" -P disable system indexes\\n\"));\n>\n> set_plan_disabling_options() is telling that you have all of them, as\n> much as the docs. I don't mind fixing that as you suggest, FWIW.\n> --\n> Michael\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Sun, 14 Aug 2022 21:03:15 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fix stale help message"
},
{
"msg_contents": "On Sun, Aug 14, 2022 at 09:03:15PM +0800, Junwang Zhao wrote:\n> I'm not sure by saying `set_plan_disabling_options` do you mean\n> the postgres source code? If that's the case, I think most of the\n> installations don't have source code but just binaries, people will\n> reference the help message by running `postgres --help`.\n\nI am just telling that your patch does the right thing, based on the\nstate of the code and the contents of the docs. Applied down to 10.\n--\nMichael",
"msg_date": "Mon, 15 Aug 2022 13:41:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fix stale help message"
}
] |
[
{
"msg_contents": "The caller of `get_stats_option_name` pass optarg as the argument,\nit's saner to use the argument instead of the global variable set\nby getopt, which is more safe since the argument has a *const*\nspecifier.\n\ndiff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c\nindex 11e802eba9..68552b8779 100644\n--- a/src/backend/tcop/postgres.c\n+++ b/src/backend/tcop/postgres.c\n@@ -3598,9 +3598,9 @@ get_stats_option_name(const char *arg)\n switch (arg[0])\n {\n case 'p':\n- if (optarg[1] == 'a') /* \"parser\" */\n+ if (arg[1] == 'a') /* \"parser\" */\n return \"log_parser_stats\";\n- else if (optarg[1] == 'l') /* \"planner\"\n*/\n+ else if (arg[1] == 'l') /* \"planner\" */\n return \"log_planner_stats\";\n break;\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Wed, 10 Aug 2022 23:50:50 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "use argument instead of global variable"
}
] |
[
{
"msg_contents": "Hi,\n\nOne of the things motivating me to work on the meson conversion is the ability\nto run tests in an easily understandable way. Meson has a testrunner that\nboth allows to run all tests at once, and run subsets of tests.\n\n\n= Test and testsuite naming =\n\nEach test has a unique name, and 0-n labels (called 'suites'). One can run\ntests by their name, or select tests by suites.\n\nThe way I've organized it so far is that tests are named like this:\n\nmain/pg_regress\nmain/isolation\nrecovery/t/018_wal_optimize.pl\npg_prewarm/t/001_basic.pl\ntest_shm_mq/pg_regress\npsql/t/001_basic.pl\n\nwe could also name tests by their full path, but that makes the display very\nunwieldy.\n\n\nAt the moment there's three suites differentiating by the type of test:\n'pg_regress', 'isolation' and 'tap'. There's also a separate \"axis\" of suites,\ndescribing what's being tested, e.g. 'main', 'test_decoding', 'recovery' etc.\n\nThat currently works out to each test having two suites, although I've\nwondered about adding a 'contrib' suite as well.\n\n\nPerhaps the pg_regress suite should just be named 'regress'? And perhaps the\nt/ in the tap tests is superfluous - the display is already fairly wide?\n\n\n\n= Log and Data locations =\n\nTo make things like the selection of log files for a specific test easier,\nI've so far set it up so that test data and logs are stored in a separate\ndirectory from the sources.\n\ntestrun/<main|recovery|...>/<testname>/<log|tmp_check|results...>\n\nThe runner now creates a test.start at the start of a test and either\ntest.success or test.failure at the end. That should make it pretty easy for\ne.g. the buildfarm and CI to make the logs for a failed test easily\naccessible. I've spent far too much time going through the ~hundred logs in\nsrc/test/recovery/ that the buildfarm displays as one thing.\n\n\nI really like having all the test data separately from the sources, but I get\nthat that's not what we've done so far. 
It's doable to just mirror the current\nchoice, but I don't think we should. But I won't push too hard against keeping\nthings the same.\n\n\nI do wonder if we should put test data and log files in a separate directory\ntree, but that'd be a bit more work probably.\n\n\nAny comments on the above?\n\n\n= Example outputs =\n\nHere's an example output that you mostly should be able to make sense of now:\n\n$ m test --print-errorlogs\nninja: Entering directory `/tmp/meson'\nninja: no work to do.\n 1/242 postgresql:setup / tmp_install OK 0.33s\n 2/242 postgresql:tap+pg_upgrade / pg_upgrade/t/001_basic.pl OK 0.35s 8 subtests passed\n 3/242 postgresql:tap+recovery / recovery/t/011_crash_recovery.pl OK 2.67s 3 subtests passed\n 4/242 postgresql:tap+recovery / recovery/t/013_crash_restart.pl OK 2.91s 18 subtests passed\n 5/242 postgresql:tap+recovery / recovery/t/014_unlogged_reinit.pl OK 3.01s 23 subtests passed\n 6/242 postgresql:tap+recovery / recovery/t/022_crash_temp_files.pl OK 3.16s 11 subtests passed\n 7/242 postgresql:tap+recovery / recovery/t/016_min_consistency.pl OK 3.43s 1 subtests passed\n 8/242 postgresql:tap+recovery / recovery/t/021_row_visibility.pl OK 3.46s 10 subtests passed\n 9/242 postgresql:isolation+tcn / tcn/isolation OK 3.42s\n 10/242 postgresql:tap+recovery / recovery/t/023_pitr_prepared_xact.pl OK 3.63s 1 subtests passed\n...\n241/242 postgresql:isolation+main / main/isolation OK 46.69s\n242/242 postgresql:tap+pg_upgrade / pg_upgrade/t/002_pg_upgrade.pl OK 57.00s 13 subtests passed\n\nOk: 242\nExpected Fail: 0\nFail: 0\nUnexpected Pass: 0\nSkipped: 0\nTimeout: 0\n\nFull log written to /tmp/meson/meson-logs/testlog.txt\n\n\nThe 'postgresql' is because meson supports subprojects (both to provide\ndependencies if needed, and \"real\" subprojects), and their tests can be run at\nonce.\n\n\nIf a test fails it'll show the error output at the time of test:\n\n39/242 postgresql:pg_regress+cube / cube/pg_regress FAIL 3.74s exit status 1\n>>> 
REGRESS_SHLIB=/tmp/meson/src/test/regress/regress.so MALLOC_PERTURB_=44 PG_TEST_EXTRA='kerberos ldap ssl' PG_REGRESS=/tmp/meson/src/test/regress/pg_regress PATH=/tmp/meson/tmp_install/tmp/meson-install/bin:/tmp/meson/contrib/cube:/home/andres/bin/perl5/bin:/home/andres/bin/pg:/home/andres/bin/bin:/usr/sbin:/sbin:/home/andres/bin/pg:/home/andres/bin/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin:/usr/games /usr/bin/python3 /home/andres/src/postgresql/src/tools/testwrap --srcdir /home/andres/src/postgresql/contrib/cube --basedir /tmp/meson --builddir /tmp/meson/contrib/cube --testgroup cube --testname pg_regress /tmp/meson/src/test/regress/pg_regress --temp-instance /tmp/meson/testrun/cube/pg_regress/tmp_check --inputdir /home/andres/src/postgresql/contrib/cube --expecteddir /home/andres/src/postgresql/contrib/cube --outputdir /tmp/meson/testrun/cube/pg_regress --bindir '' --dlpath /tmp/meson/contrib/cube --max-concurrent-tests=20 --port=40012 cube cube_sci\n――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\n# executing test in /tmp/meson/testrun/cube/pg_regress group cube test pg_regress, builddir /tmp/meson/contrib/cube\n============== creating temporary instance ==============\n============== initializing database system ==============\n============== starting postmaster ==============\nrunning on port 40012 with PID 354981\n============== creating database \"regression\" ==============\nCREATE DATABASE\nALTER DATABASE\nALTER DATABASE\nALTER DATABASE\nALTER DATABASE\nALTER DATABASE\nALTER DATABASE\n============== running regression test queries ==============\ntest cube ... FAILED 418 ms\ntest cube_sci ... 
ok 16 ms\n============== shutting down postmaster ==============\n\n======================\n 1 of 2 tests failed.\n======================\n\nThe differences that caused some tests to fail can be viewed in the\nfile \"/tmp/meson/testrun/cube/pg_regress/regression.diffs\". A copy of the test summary that you see\nabove is saved in the file \"/tmp/meson/testrun/cube/pg_regress/regression.out\".\n\n# test failed\n――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\n\n 40/242 postgresql:pg_regress+citext / citext/pg_regress OK 3.77s\n...\n\nA list of failed tests is listed at the end of the test:\n\n\nSummary of Failures:\n\n 39/242 postgresql:pg_regress+cube / cube/pg_regress FAIL 3.74s exit status 1\n\nOk: 241\nExpected Fail: 0\nFail: 1\nUnexpected Pass: 0\nSkipped: 0\nTimeout: 0\n\nFull log written to /tmp/meson/meson-logs/testlog.txt\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 10 Aug 2022 21:04:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "tests and meson - test names and file locations"
},
{
"msg_contents": "\nOn 2022-08-11 Th 00:04, Andres Freund wrote:\n>\n> The runner now creates a test.start at the start of a test and either\n> test.success or test.failure at the end. That should make it pretty easy for\n> e.g. the buildfarm and CI to make the logs for a failed test easily\n> accessible. I've spent far too much time going through the ~hundred logs in\n> src/test/recovery/ that the buildfarm displays as one thing.\n\n\nI do have work in hand to improve that markedly, just need a little time\nto finish it.\n\n\n>\n>\n> I really like having all the test data separately from the sources, but I get\n> that that's not what we've done so far. It's doable to just mirror the current\n> choice, but I don't think we should. But I won't push too hard against keeping\n> things the same.\n\n\nI also like that. I think we should take this opportunity for some\nserious rationalization of this. Tests and associated data have grown\nrather like Topsy, and we should fix that. So please don't feel too\nconstrained by current practice.\n\n\n>\n>\n> I do wonder if we should put test data and log files in a separate directory\n> tree, but that'd be a bit more work probably.\n>\n>\n> Any comments on the above?\n>\n>\n> = Example outputs =\n\n\n[...]\n\n> Full log written to /tmp/meson/meson-logs/testlog.txt\n\n\n/tmp ?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 11 Aug 2022 10:06:35 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: tests and meson - test names and file locations"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I also like that. I think we should take this opportunity for some\n> serious rationalization of this. Tests and associated data have grown\n> rather like Topsy, and we should fix that. So please don't feel too\n> constrained by current practice.\n\nI'm definitely -1 on that. Major rearrangement of the test scripts\nwould be a huge blocking factor for almost any back-patch. I don't\ncare much if you want to rearrange how the tests are invoked, but\nplease don't touch the individual .sql and .pl scripts.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Aug 2022 10:20:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tests and meson - test names and file locations"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-11 10:06:35 -0400, Andrew Dunstan wrote:\n> > Full log written to /tmp/meson/meson-logs/testlog.txt\n>\n> /tmp ?\n\nI often put throwaway buildtrees in /tmp. So this is just because my buildtree\nis in /tmp/meson, i.e. the log always is in $build_root/meson-logs/testlog.txt\n(there's also a log of the configure run in there etc).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Aug 2022 07:52:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tests and meson - test names and file locations"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-11 10:20:42 -0400, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > I also like that. I think we should take this opportunity for some\n> > serious rationalization of this. Tests and associated data have grown\n> > rather like Topsy, and we should fix that. So please don't feel too\n> > constrained by current practice.\n> \n> I'm definitely -1 on that. Major rearrangement of the test scripts\n> would be a huge blocking factor for almost any back-patch. I don't\n> care much if you want to rearrange how the tests are invoked, but\n> please don't touch the individual .sql and .pl scripts.\n\nI don't precisely know what Andrew was thinking of, but the relocation of log\nfiles for example doesn't require many changes to .pl files - one change to\nUtils.pm. The one exception to that is 010_tab_completion.pl, which encodes\ntmp_check/ in its output.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Aug 2022 08:06:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tests and meson - test names and file locations"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I don't precisely know what Andrew was thinking of, but the relocation of log\n> files for example doesn't require many changes to .pl files - one change to\n> Utils.pm. The one exception to that is 010_tab_completion.pl, which encodes\n> tmp_check/ in its output.\n\nAh. That seems perfectly tolerable. Andrew seemed to be thinking of\nmoving the tests' source files around.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Aug 2022 11:28:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tests and meson - test names and file locations"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> = Log and Data locations =\n\n> To make things like the selection of log files for a specific test easier,\n> I've so far set it up so that test data and logs are stored in a separate\n> directory from the sources.\n\n> testrun/<main|recovery|...>/<testname>/<log|tmp_check|results...>\n\n> I do wonder if we should put test data and log files in a separate directory\n> tree, but that'd be a bit more work probably.\n\nI'm confused, didn't you just say you already did that?\n\n\n\n> Here's an example output that you mostly should be able to make sense of now:\n\nTBH, this seems to be almost all the same sort of useless noise that\nwe have worked to suppress in \"make\" output. The only thing that's\nof any interest at all is the report that the \"cube\" test failed,\nand you have managed to make it so that that report is pretty\nvoluminous and yet contains not one useful detail. I still have\nto go and look at other log files to figure out what happened;\nand if I need to see the postmaster log, it's not even apparent\nwhere that is. I also wonder where, say, a core dump might wind up.\n\nI'm failing to see any advance at all here over what we have now.\nIf anything, the signal-to-noise ratio has gotten worse.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Aug 2022 13:06:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tests and meson - test names and file locations"
},
{
"msg_contents": "\nOn 2022-08-11 Th 11:06, Andres Freund wrote:\n> Hi,\n>\n> On 2022-08-11 10:20:42 -0400, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> I also like that. I think we should take this opportunity for some\n>>> serious rationalization of this. Tests and associated data have grown\n>>> rather like Topsy, and we should fix that. So please don't feel too\n>>> constrained by current practice.\n>> I'm definitely -1 on that. Major rearrangement of the test scripts\n>> would be a huge blocking factor for almost any back-patch. I don't\n>> care much if you want to rearrange how the tests are invoked, but\n>> please don't touch the individual .sql and .pl scripts.\n> I don't precisely know what Andrew was thinking of, but the relocation of log\n> files for example doesn't require many changes to .pl files - one change to\n> Utils.pm. The one exception to that is 010_tab_completion.pl, which encodes\n> tmp_check/ in its output.\n\n\nI meant test results, logs et. I thought that was the topic under\ndiscussion. Changing the location of test sources would be a whole other\ntopic. Sorry if I was not clear enough.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 11 Aug 2022 13:07:44 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: tests and meson - test names and file locations"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-11 13:06:35 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > = Log and Data locations =\n> \n> > To make things like the selection of log files for a specific test easier,\n> > I've so far set it up so that test data and logs are stored in a separate\n> > directory from the sources.\n> \n> > testrun/<main|recovery|...>/<testname>/<log|tmp_check|results...>\n> \n> > I do wonder if we should put test data and log files in a separate directory\n> > tree, but that'd be a bit more work probably.\n> \n> I'm confused, didn't you just say you already did that?\n\nI did separate the source / build tree from the test log files / data files,\nbut not the test log files from the test data files. It'd be easy enough to\nachieve for pg_regress tests, but perhaps a bit harder for things like\n027_stream_regress.pl that run pg_regress themselves.\n\n\n> > Here's an example output that you mostly should be able to make sense of now:\n> \n> TBH, this seems to be almost all the same sort of useless noise that\n> we have worked to suppress in \"make\" output.\n\nWhich part have we suppressed that's shown here? We've been talking about\ncutting down the pointless information that pg_regress produces, but that\nseems like a separate endeavor.\n\n\n> The only thing that's of any interest at all is the report that the \"cube\"\n> test failed, and you have managed to make it so that that report is pretty\n> voluminous and yet contains not one useful detail.\n\nIt'll just print the list of tests and their success / failure, without\nprinting the test's output, if you don't pass --print-errorlogs.\n\nThe reason the commmand is shown is so you can copy-paste it to run the tests\non its own, which can be useful for debugging, it'll include all the\nenvironment variables set when the test was run, so it's actually the same\ncommand (not like right now, were some env variables are set via export in the\nmakefile). 
We probably can make the command shorter - but that's pretty\nsimilar to what's done for the make invocation of the tests.\n\n\n> I still have to go and look at other log files to figure out what happened;\n> and if I need to see the postmaster log, it's not even apparent where that\n> is.\n\nThere's a hint at the start, but it could stand to be reformulated to point\nthat out more clearly:\n\n# executing test in /tmp/meson/testrun/cube/pg_regress group cube test pg_regress, builddir /tmp/meson/contrib/cube\n\n\n> I also wonder where, say, a core dump might wind up.\n\nThat'll depend on system configuration as before. Most commonly in the data\ndir. Are you wondering where the data dir is?\n\n\n> I'm failing to see any advance at all here over what we have now.\n> If anything, the signal-to-noise ratio has gotten worse.\n\nI'm surprised - I find it *vastly* more readable, because it'll show this\nstuff only for the failed tests, and it'll tell you the failed tests at the\nend. I don't know how many hours I've spent going backward through check-world\noutput to find the first failed tests, but it's many.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Aug 2022 12:27:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tests and meson - test names and file locations"
},
{
"msg_contents": "On 11.08.22 06:04, Andres Freund wrote:\n> At the moment there's three suites differentiating by the type of test:\n> 'pg_regress', 'isolation' and 'tap'. There's also a separate \"axis\" of suites,\n> describing what's being tested, e.g. 'main', 'test_decoding', 'recovery' etc.\n> \n> That currently works out to each test having two suites, although I've\n> wondered about adding a 'contrib' suite as well.\n\nI'm not sure what the value of these suite names would be. I don't \nusually find myself wanting to run, say, just all tap tests.\n\nPerhaps suites would be useful to do things like select slow tests \n(numeric_big) or handle the ssl, ldap, kerberos tests.\n\n> = Log and Data locations =\n> \n> To make things like the selection of log files for a specific test easier,\n> I've so far set it up so that test data and logs are stored in a separate\n> directory from the sources.\n> \n> testrun/<main|recovery|...>/<testname>/<log|tmp_check|results...>\n> \n> The runner now creates a test.start at the start of a test and either\n> test.success or test.failure at the end. That should make it pretty easy for\n> e.g. the buildfarm and CI to make the logs for a failed test easily\n> accessible. I've spent far too much time going through the ~hundred logs in\n> src/test/recovery/ that the buildfarm displays as one thing.\n\nI don't really understand which problem this solves and how. Sure, the \ntest output is somewhat complex, but I know where it is and I've never \nfound myself wishing it to be somewhere else.\n\n\n\n",
"msg_date": "Fri, 12 Aug 2022 18:08:00 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: tests and meson - test names and file locations"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-12 18:08:00 +0200, Peter Eisentraut wrote:\n> > At the moment there's three suites differentiating by the type of test:\n> > 'pg_regress', 'isolation' and 'tap'. There's also a separate \"axis\" of suites,\n> > describing what's being tested, e.g. 'main', 'test_decoding', 'recovery' etc.\n> > \n> > That currently works out to each test having two suites, although I've\n> > wondered about adding a 'contrib' suite as well.\n> \n> I'm not sure what the value of these suite names would be. I don't usually\n> find myself wanting to run, say, just all tap tests.\n\nI occasionally want to, but it may not be important enough. I do find it\nuseful for display purposes alone, and for finding kinds of test with meson\ntest --list.\n\n\n> On 11.08.22 06:04, Andres Freund wrote:\n> > = Log and Data locations =\n> > \n> > To make things like the selection of log files for a specific test easier,\n> > I've so far set it up so that test data and logs are stored in a separate\n> > directory from the sources.\n> > \n> > testrun/<main|recovery|...>/<testname>/<log|tmp_check|results...>\n> > \n> > The runner now creates a test.start at the start of a test and either\n> > test.success or test.failure at the end. That should make it pretty easy for\n> > e.g. the buildfarm and CI to make the logs for a failed test easily\n> > accessible. I've spent far too much time going through the ~hundred logs in\n> > src/test/recovery/ that the buildfarm displays as one thing.\n> \n> I don't really understand which problem this solves and how. Sure, the test\n> output is somewhat complex, but I know where it is and I've never found\n> myself wishing it to be somewhere else.\n\nI'd like the buildfarm and CI a) use parallelism to run tests (that's why the\nBF is slow) b) show the logfiles for exactly the failed test ([1]). 
We can of\ncourse iterate through the whole directory tree, somehow identify which log\nfiles are for which test, and then select the log files for the failed\ntests. But that's much easier to do then when you have a uniform directory\nhierarchy, where you can test which tests have failed based on the filesystem\nalone.\n\nGreetings,\n\nAndres Freund\n\n[1] E.g. the log file for this failed run is 13MB, and I've had ones with a\n bit more debugging enabled crash both firefox and chrome before\n https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=skink&dt=2022-08-06%2012%3A17%3A14&stg=recovery-check\n\n\n",
"msg_date": "Fri, 12 Aug 2022 09:29:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tests and meson - test names and file locations"
},
{
"msg_contents": "On 12.08.22 18:29, Andres Freund wrote:\n>> I don't really understand which problem this solves and how. Sure, the test\n>> output is somewhat complex, but I know where it is and I've never found\n>> myself wishing it to be somewhere else.\n> I'd like the buildfarm and CI a) use parallelism to run tests (that's why the\n> BF is slow) b) show the logfiles for exactly the failed test ([1]). We can of\n> course iterate through the whole directory tree, somehow identify which log\n> files are for which test, and then select the log files for the failed\n> tests. But that's much easier to do then when you have a uniform directory\n> hierarchy, where you can test which tests have failed based on the filesystem\n> alone.\n\nMy initial experiences with testing under meson is that it's quite \nfragile and confusing (unlike the building, which is quite robust and \nunderstandable). Some of that is the fault of meson, some of that is \nour implementation. Surely this can be improved over time, but my \nexperience has been that it's not there yet.\n\nThe idea that we are going to move all the test output files somewhere \nelse at the same time is not appealing to me. The combination of \nfragile plus can't find the diagnostics is not a good one.\n\nNow, this is my experience; others might have different ones.\n\nAlso, is there anything in these proposed changes that couldn't also be \napplied to the old build system? We are going to be running them in \nparallel for some time. It would be good if one doesn't have to learn \ntwo entirely different sets of testing interfaces.\n\n\n\n",
"msg_date": "Wed, 17 Aug 2022 16:12:39 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: tests and meson - test names and file locations"
}
] |
[
{
"msg_contents": "Hi,\n\nOne other case suspicious, which I think deserves a conference.\nAt function wait_on_slots (src/fe_utils/parallel_slot.c)\nThe variable \"slots\" are array, but at function call SetCancelConn,\n\"slots\" are used as an object, which at the very least would be suspicious.\n\ncancelconn wouldn't that be the correct argument?\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 11 Aug 2022 09:52:49 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Use array as object (src/fe_utils/parallel_slot.c)"
},
{
"msg_contents": "Em qui., 11 de ago. de 2022 às 09:52, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Hi,\n>\n> One other case suspicious, which I think deserves a conference.\n> At function wait_on_slots (src/fe_utils/parallel_slot.c)\n> The variable \"slots\" are array, but at function call SetCancelConn,\n> \"slots\" are used as an object, which at the very least would be suspicious.\n>\nThe commit\nhttps://github.com/postgres/postgres/commit/f71519e545a34ece0a27c8bb1a2b6e197d323163\nIntroduced the affected function.\nI'm not sure you're having problems, but using arrays as single pointer is\nnot recommended.\n\nregards,\nRanier Vilela\n\nEm qui., 11 de ago. de 2022 às 09:52, Ranier Vilela <ranier.vf@gmail.com> escreveu:Hi,One other case suspicious, which I think deserves a conference.At function wait_on_slots (src/fe_utils/parallel_slot.c)The variable \"slots\" are array, but at function call SetCancelConn,\"slots\" are used as an object, which at the very least would be suspicious.The commit https://github.com/postgres/postgres/commit/f71519e545a34ece0a27c8bb1a2b6e197d323163Introduced the affected function.I'm not sure you're having problems, but using arrays as single pointer is not recommended.regards,Ranier Vilela",
"msg_date": "Fri, 19 Aug 2022 15:52:36 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use array as object (src/fe_utils/parallel_slot.c)"
},
{
"msg_contents": "On Fri, Aug 19, 2022 at 03:52:36PM -0300, Ranier Vilela wrote:\n> Em qui., 11 de ago. de 2022 �s 09:52, Ranier Vilela <ranier.vf@gmail.com> escreveu:\n> \n> > Hi,\n> >\n> > One other case suspicious, which I think deserves a conference.\n> > At function wait_on_slots (src/fe_utils/parallel_slot.c)\n> > The variable \"slots\" are array, but at function call SetCancelConn,\n> > \"slots\" are used as an object, which at the very least would be suspicious.\n>\n> The commit\n> https://github.com/postgres/postgres/commit/f71519e545a34ece0a27c8bb1a2b6e197d323163\n> Introduced the affected function.\n\nIt's true that the function was added there, but SetCancelConn() was called the\nsame way before that: SetCancelConn(slots->connection);\n\nIf you trace the history back to a17923204, you'll see a comment about the\n\"zeroth slot\", which makes it clear that the first slot it what's intended.\n\nI agree that it would be clearer if this were written as slots[0].connection.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 19 Aug 2022 14:22:32 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Use array as object (src/fe_utils/parallel_slot.c)"
},
{
"msg_contents": "Em sex., 19 de ago. de 2022 às 16:22, Justin Pryzby <pryzby@telsasoft.com>\nescreveu:\n\n> On Fri, Aug 19, 2022 at 03:52:36PM -0300, Ranier Vilela wrote:\n> > Em qui., 11 de ago. de 2022 às 09:52, Ranier Vilela <ranier.vf@gmail.com>\n> escreveu:\n> >\n> > > Hi,\n> > >\n> > > One other case suspicious, which I think deserves a conference.\n> > > At function wait_on_slots (src/fe_utils/parallel_slot.c)\n> > > The variable \"slots\" are array, but at function call SetCancelConn,\n> > > \"slots\" are used as an object, which at the very least would be\n> suspicious.\n> >\n> > The commit\n> >\n> https://github.com/postgres/postgres/commit/f71519e545a34ece0a27c8bb1a2b6e197d323163\n> > Introduced the affected function.\n>\n> It's true that the function was added there, but SetCancelConn() was\n> called the\n> same way before that: SetCancelConn(slots->connection);\n>\n> If you trace the history back to a17923204, you'll see a comment about the\n> \"zeroth slot\", which makes it clear that the first slot it what's intended.\n>\nThank you Justin, for the research.\n\n>\n> I agree that it would be clearer if this were written as\n> slots[0].connection.\n>\nBut I still think that the new variable introduced, \"cancelconn\", became\nthe real argument.\n\nregards,\nRanier Vilela\n\nEm sex., 19 de ago. de 2022 às 16:22, Justin Pryzby <pryzby@telsasoft.com> escreveu:On Fri, Aug 19, 2022 at 03:52:36PM -0300, Ranier Vilela wrote:\n> Em qui., 11 de ago. 
de 2022 às 09:52, Ranier Vilela <ranier.vf@gmail.com> escreveu:\n> \n> > Hi,\n> >\n> > One other case suspicious, which I think deserves a conference.\n> > At function wait_on_slots (src/fe_utils/parallel_slot.c)\n> > The variable \"slots\" are array, but at function call SetCancelConn,\n> > \"slots\" are used as an object, which at the very least would be suspicious.\n>\n> The commit\n> https://github.com/postgres/postgres/commit/f71519e545a34ece0a27c8bb1a2b6e197d323163\n> Introduced the affected function.\n\nIt's true that the function was added there, but SetCancelConn() was called the\nsame way before that: SetCancelConn(slots->connection);\n\nIf you trace the history back to a17923204, you'll see a comment about the\n\"zeroth slot\", which makes it clear that the first slot it what's intended.Thank you Justin, for the research.\n\nI agree that it would be clearer if this were written as slots[0].connection.But I still think that the new variable introduced, \"cancelconn\", became the real argument.regards,Ranier Vilela",
"msg_date": "Fri, 19 Aug 2022 16:30:19 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use array as object (src/fe_utils/parallel_slot.c)"
},
{
"msg_contents": "On Fri, Aug 19, 2022 at 02:22:32PM -0500, Justin Pryzby wrote:\n> If you trace the history back to a17923204, you'll see a comment about the\n> \"zeroth slot\", which makes it clear that the first slot it what's intended.\n> \n> I agree that it would be clearer if this were written as slots[0].connection.\n\nBased on the way the code is written on HEAD, this would be the\ncorrect assumption. Now, calling PQgetCancel() would return NULL for\na connection that we actually ignore in the code a couple of lines\nabove when it has PGINVALID_SOCKET. So it seems to me that the\nsuggestion of using \"cancelconn\", which would be the first valid\nconnection, rather than always the first connection, which may be\nusing an invalid socket, is correct, so as we always have our hands\non a way to cancel a command.\n--\nMichael",
"msg_date": "Mon, 22 Aug 2022 09:15:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use array as object (src/fe_utils/parallel_slot.c)"
},
{
"msg_contents": "Em dom., 21 de ago. de 2022 às 21:15, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Fri, Aug 19, 2022 at 02:22:32PM -0500, Justin Pryzby wrote:\n> > If you trace the history back to a17923204, you'll see a comment about\n> the\n> > \"zeroth slot\", which makes it clear that the first slot it what's\n> intended.\n> >\n> > I agree that it would be clearer if this were written as\n> slots[0].connection.\n>\n> Based on the way the code is written on HEAD, this would be the\n> correct assumption. Now, calling PQgetCancel() would return NULL for\n> a connection that we actually ignore in the code a couple of lines\n> above when it has PGINVALID_SOCKET. So it seems to me that the\n> suggestion of using \"cancelconn\", which would be the first valid\n> connection, rather than always the first connection, which may be\n> using an invalid socket, is correct, so as we always have our hands\n> on a way to cancel a command.\n>\nThanks Michael, for looking at this.\nIs it worth creating a commiffest?\n\nregards,\nRanier Vilela\n\nEm dom., 21 de ago. de 2022 às 21:15, Michael Paquier <michael@paquier.xyz> escreveu:On Fri, Aug 19, 2022 at 02:22:32PM -0500, Justin Pryzby wrote:\n> If you trace the history back to a17923204, you'll see a comment about the\n> \"zeroth slot\", which makes it clear that the first slot it what's intended.\n> \n> I agree that it would be clearer if this were written as slots[0].connection.\n\nBased on the way the code is written on HEAD, this would be the\ncorrect assumption. Now, calling PQgetCancel() would return NULL for\na connection that we actually ignore in the code a couple of lines\nabove when it has PGINVALID_SOCKET. 
So it seems to me that the\nsuggestion of using \"cancelconn\", which would be the first valid\nconnection, rather than always the first connection, which may be\nusing an invalid socket, is correct, so as we always have our hands\non a way to cancel a command.Thanks Michael, for looking at this.Is it worth creating a commiffest?regards,Ranier Vilela",
"msg_date": "Fri, 26 Aug 2022 13:54:26 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use array as object (src/fe_utils/parallel_slot.c)"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 01:54:26PM -0300, Ranier Vilela wrote:\n> Is it worth creating a commiffest?\n\nDon't think so, but feel free to create one and mark me as committer\nif you think that's appropriate. I have marked this thread as\nsomething to do soon-ishly, but I am being distracted by life this\nmonth.\n--\nMichael",
"msg_date": "Sat, 27 Aug 2022 12:00:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use array as object (src/fe_utils/parallel_slot.c)"
},
{
"msg_contents": "Em sáb., 27 de ago. de 2022 às 00:00, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Fri, Aug 26, 2022 at 01:54:26PM -0300, Ranier Vilela wrote:\n> > Is it worth creating a commiffest?\n>\n> Don't think so, but feel free to create one and mark me as committer\n> if you think that's appropriate. I have marked this thread as\n> something to do soon-ishly\n\nHi Michael, I see the commit.\nThanks for the hardest part.\nSuspecting something wrong is easy, the difficult thing is to describe why\nit is wrong.\n\n, but I am being distracted by life this\n> month.\n>\nGlad to know, enjoy.\n\nregards,\nRanier Vilela\n\nEm sáb., 27 de ago. de 2022 às 00:00, Michael Paquier <michael@paquier.xyz> escreveu:On Fri, Aug 26, 2022 at 01:54:26PM -0300, Ranier Vilela wrote:\n> Is it worth creating a commiffest?\n\nDon't think so, but feel free to create one and mark me as committer\nif you think that's appropriate. I have marked this thread as\nsomething to do soon-ishlyHi Michael, I see the commit.Thanks for the hardest part.Suspecting something wrong is easy, the difficult thing is to describe why it is wrong., but I am being distracted by life this\nmonth.Glad to know, enjoy.regards,Ranier Vilela",
"msg_date": "Sat, 27 Aug 2022 07:57:34 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use array as object (src/fe_utils/parallel_slot.c)"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Based on the way the code is written on HEAD, this would be the\n> correct assumption. Now, calling PQgetCancel() would return NULL for\n> a connection that we actually ignore in the code a couple of lines\n> above when it has PGINVALID_SOCKET. So it seems to me that the\n> suggestion of using \"cancelconn\", which would be the first valid\n> connection, rather than always the first connection, which may be\n> using an invalid socket, is correct, so as we always have our hands\n> on a way to cancel a command.\n\nI came across this commit (52144b6fc) while writing release notes,\nand I have to seriously question whether it's right yet. The thing\nthat needs to be asked is, if we get a SIGINT in a program using this\nlogic, why would we propagate a cancel to just one of the controlled\nsessions and not all of them?\n\nIt looks to me like the original concept was that slot zero would be\na \"master\" connection, such that canceling just that one would have a\nuseful effect. Maybe the current users of parallel_slot.c still use\nit like that, but I bet it's more likely that the connections are\nall doing independent things and you really gotta cancel them all\nif you want out.\n\nI suppose maybe this commit improved matters: if you are running N jobs\nthen typing control-C N times (not too quickly) might eventually get\nyou out, by successively canceling the lowest-numbered surviving\nconnection. Previously you could have pounded the key all day and\nnot gotten rid of any but the zero'th task. OTOH, if the connections\ndon't exit but just go back to idle, which seems pretty possible,\nthen it's not clear we've moved the needle at all.\n\nAnyway I think this needs rewritten, not just tweaked. The cancel.c\ninfrastructure is really next to useless here since it is only designed\nwith one connection in mind. 
I'm inclined to think we should only\nexpect the signal handler to set CancelRequested, and then manually\nissue cancels to all live connections when we see that become set.\n\nI'm not proposing reverting 52144b6fc, because I doubt it made\nanything worse; but I'm thinking of leaving it out of the release\nnotes, because I'm unsure it had any user-visible effect at all.\nIt doesn't look to me like we'd ever get to wait_on_slots unless\nall the connections are known busy.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Nov 2022 17:49:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use array as object (src/fe_utils/parallel_slot.c)"
}
] |
[
{
"msg_contents": "Hi,\n\nHere's a small patch replacing the explicit setting of\nXLogCtl->InstallXLogFileSegmentActive with the existing function\nSetInstallXLogFileSegmentActive(), removes duplicate code and saves 4\nLOC.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/",
"msg_date": "Thu, 11 Aug 2022 21:42:18 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Use SetInstallXLogFileSegmentActive() for setting\n XLogCtl->InstallXLogFileSegmentActive"
},
{
"msg_contents": "On Thu, Aug 11, 2022 at 09:42:18PM +0530, Bharath Rupireddy wrote:\n> Here's a small patch replacing the explicit setting of\n> XLogCtl->InstallXLogFileSegmentActive with the existing function\n> SetInstallXLogFileSegmentActive(), removes duplicate code and saves 4\n> LOC.\n\nLGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 11 Aug 2022 15:47:00 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use SetInstallXLogFileSegmentActive() for setting\n XLogCtl->InstallXLogFileSegmentActive"
},
{
"msg_contents": "On Fri, Aug 12, 2022 at 4:17 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Thu, Aug 11, 2022 at 09:42:18PM +0530, Bharath Rupireddy wrote:\n> > Here's a small patch replacing the explicit setting of\n> > XLogCtl->InstallXLogFileSegmentActive with the existing function\n> > SetInstallXLogFileSegmentActive(), removes duplicate code and saves 4\n> > LOC.\n>\n> LGTM\n\nThanks for reviewing. I added it to the current commitfest to not lose\ntrack of it - https://commitfest.postgresql.org/39/3815/\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Mon, 15 Aug 2022 11:33:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use SetInstallXLogFileSegmentActive() for setting\n XLogCtl->InstallXLogFileSegmentActive"
},
{
"msg_contents": "On Mon, Aug 15, 2022 at 11:33:00AM +0530, Bharath Rupireddy wrote:\n> Thanks for reviewing. I added it to the current commitfest to not lose\n> track of it - https://commitfest.postgresql.org/39/3815/\n\nThis reduces slightly the footprint of InstallXLogFileSegmentActive,\nwhich is fine by me, so applied.\n--\nMichael",
"msg_date": "Wed, 17 Aug 2022 15:30:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use SetInstallXLogFileSegmentActive() for setting\n XLogCtl->InstallXLogFileSegmentActive"
}
] |
[
{
"msg_contents": "Hi,\nIn cash_out(), we have the following code:\n\n if (value < 0)\n {\n /* make the amount positive for digit-reconstruction loop */\n value = -value;\n\nThe negation cannot be represented in type long when the value is LONG_MIN.\nIt seems we can error out when LONG_MIN is detected instead of continuing\nwith computation.\n\nPlease take a look at the patch and provide your feedback.\n\nThanks",
"msg_date": "Thu, 11 Aug 2022 10:30:07 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "avoid negating LONG_MIN in cash_out()"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> In cash_out(), we have the following code:\n> if (value < 0)\n> {\n> /* make the amount positive for digit-reconstruction loop */\n> value = -value;\n\n> The negation cannot be represented in type long when the value is LONG_MIN.\n\nPossibly not good, but it seems like the subsequent loop works anyway:\n\nregression=# select '-92233720368547758.08'::money;\n money \n-----------------------------\n -$92,233,720,368,547,758.08\n(1 row)\n\nNote that this exact test case appears in money.sql, so we know that\nit works everywhere, not only my machine.\n\n> It seems we can error out when LONG_MIN is detected instead of continuing\n> with computation.\n\nHow could you think that that's an acceptable solution? Once the\nvalue is stored, we'd better be able to print it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Aug 2022 13:40:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: avoid negating LONG_MIN in cash_out()"
},
{
"msg_contents": "On Thu, Aug 11, 2022 at 10:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Zhihong Yu <zyu@yugabyte.com> writes:\n> > In cash_out(), we have the following code:\n> > if (value < 0)\n> > {\n> > /* make the amount positive for digit-reconstruction loop */\n> > value = -value;\n>\n> > The negation cannot be represented in type long when the value is\n> LONG_MIN.\n>\n> Possibly not good, but it seems like the subsequent loop works anyway:\n>\n> regression=# select '-92233720368547758.08'::money;\n> money\n> -----------------------------\n> -$92,233,720,368,547,758.08\n> (1 row)\n>\n> Note that this exact test case appears in money.sql, so we know that\n> it works everywhere, not only my machine.\n>\n> > It seems we can error out when LONG_MIN is detected instead of continuing\n> > with computation.\n>\n> How could you think that that's an acceptable solution? Once the\n> value is stored, we'd better be able to print it.\n>\n> regards, tom lane\n>\n\nThanks for taking a look.\nI raise this thread due to the following assertion :\n\nsrc/backend/utils/adt/cash.c:356:11: runtime error: negation of\n-9223372036854775808 cannot be represented in type 'Cash' (aka 'long');\ncast to an unsigned type to negate this value to itself\n\nSUMMARY: UndefinedBehaviorSanitizer: undefined-behavior\n../../../../../../../src/postgres/src/backend/utils/adt/cash.c:356:11\n\n\nThough '-92233720368547758.085'::money displays correct error message in\nother builds, this statement wouldn't pass the build where\nUndefinedBehaviorSanitizer is active.\nI think we should fix this otherwise when there is new assertion triggered\ndue to future changes around Cash (or other types covered by money.sql), we\nwouldn't see it.\n\nI am open to other ways of bypassing the above assertion.\n\nCheers\n\nOn Thu, Aug 11, 2022 at 10:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Zhihong Yu <zyu@yugabyte.com> writes:\n> In cash_out(), we have the following code:\n> if (value < 0)\n> {\n> /* make the 
amount positive for digit-reconstruction loop */\n> value = -value;\n\n> The negation cannot be represented in type long when the value is LONG_MIN.\n\nPossibly not good, but it seems like the subsequent loop works anyway:\n\nregression=# select '-92233720368547758.08'::money;\n money \n-----------------------------\n -$92,233,720,368,547,758.08\n(1 row)\n\nNote that this exact test case appears in money.sql, so we know that\nit works everywhere, not only my machine.\n\n> It seems we can error out when LONG_MIN is detected instead of continuing\n> with computation.\n\nHow could you think that that's an acceptable solution? Once the\nvalue is stored, we'd better be able to print it.\n\n regards, tom laneThanks for taking a look.I raise this thread due to the following assertion :src/backend/utils/adt/cash.c:356:11: runtime error: negation of -9223372036854775808 cannot be represented in type 'Cash' (aka 'long'); cast to an unsigned type to negate this value to itselfSUMMARY: UndefinedBehaviorSanitizer: undefined-behavior ../../../../../../../src/postgres/src/backend/utils/adt/cash.c:356:11Though '-92233720368547758.085'::money displays correct error message in other builds, this statement wouldn't pass the build where UndefinedBehaviorSanitizer is active.I think we should fix this otherwise when there is new assertion triggered due to future changes around Cash (or other types covered by money.sql), we wouldn't see it.I am open to other ways of bypassing the above assertion.Cheers",
"msg_date": "Thu, 11 Aug 2022 10:55:48 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: avoid negating LONG_MIN in cash_out()"
},
{
"msg_contents": "On Thu, Aug 11, 2022 at 10:55 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Thu, Aug 11, 2022 at 10:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Zhihong Yu <zyu@yugabyte.com> writes:\n>> > In cash_out(), we have the following code:\n>> > if (value < 0)\n>> > {\n>> > /* make the amount positive for digit-reconstruction loop */\n>> > value = -value;\n>>\n>> > The negation cannot be represented in type long when the value is\n>> LONG_MIN.\n>>\n>> Possibly not good, but it seems like the subsequent loop works anyway:\n>>\n>> regression=# select '-92233720368547758.08'::money;\n>> money\n>> -----------------------------\n>> -$92,233,720,368,547,758.08\n>> (1 row)\n>>\n>> Note that this exact test case appears in money.sql, so we know that\n>> it works everywhere, not only my machine.\n>>\n>> > It seems we can error out when LONG_MIN is detected instead of\n>> continuing\n>> > with computation.\n>>\n>> How could you think that that's an acceptable solution? Once the\n>> value is stored, we'd better be able to print it.\n>>\n>> regards, tom lane\n>>\n>\n> Thanks for taking a look.\n> I raise this thread due to the following assertion :\n>\n> src/backend/utils/adt/cash.c:356:11: runtime error: negation of\n> -9223372036854775808 cannot be represented in type 'Cash' (aka 'long');\n> cast to an unsigned type to negate this value to itself\n>\n> SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior\n> ../../../../../../../src/postgres/src/backend/utils/adt/cash.c:356:11\n>\n>\n> Though '-92233720368547758.085'::money displays correct error message in\n> other builds, this statement wouldn't pass the build where\n> UndefinedBehaviorSanitizer is active.\n> I think we should fix this otherwise when there is new assertion triggered\n> due to future changes around Cash (or other types covered by money.sql), we\n> wouldn't see it.\n>\n> I am open to other ways of bypassing the above assertion.\n>\n> Cheers\n>\n\nHere is sample output with patch:\n\n# 
SELECT '-92233720368547758.085'::money;\nERROR: value \"-92233720368547758.085\" is out of range for type money\nLINE 1: SELECT '-92233720368547758.085'::money;\n\nFYI",
"msg_date": "Thu, 11 Aug 2022 11:05:18 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: avoid negating LONG_MIN in cash_out()"
},
{
"msg_contents": "On Fri, 12 Aug 2022 at 05:58, Zhihong Yu <zyu@yugabyte.com> wrote:\n> Here is sample output with patch:\n>\n> # SELECT '-92233720368547758.085'::money;\n> ERROR: value \"-92233720368547758.085\" is out of range for type money\n> LINE 1: SELECT '-92233720368547758.085'::money;\n\nI'm struggling to follow along here. There are already overflow checks\nfor this in cash_in(), which is exactly where they should be.\n\nThe above case already fails on master, there's even a regression test\nto make sure it does for this exact case, just look at money.out:356.\nSo, if we're already stopping this from happening in cash_in(), why do\nyou think it also needs to happen in cash_out()?\n\nI'm also not sure why you opted to use LONG_MIN for your check. The C\ntype \"Cash\" is based on int64, that's not long.\n\nDavid\n\n\n",
"msg_date": "Fri, 12 Aug 2022 07:55:05 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: avoid negating LONG_MIN in cash_out()"
},
{
"msg_contents": "On Thu, Aug 11, 2022 at 12:55 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 12 Aug 2022 at 05:58, Zhihong Yu <zyu@yugabyte.com> wrote:\n> > Here is sample output with patch:\n> >\n> > # SELECT '-92233720368547758.085'::money;\n> > ERROR: value \"-92233720368547758.085\" is out of range for type money\n> > LINE 1: SELECT '-92233720368547758.085'::money;\n>\n> I'm struggling to follow along here. There are already overflow checks\n> for this in cash_in(), which is exactly where they should be.\n>\n> The above case already fails on master, there's even a regression test\n> to make sure it does for this exact case, just look at money.out:356.\n> So, if we're already stopping this from happening in cash_in(), why do\n> you think it also needs to happen in cash_out()?\n>\n> I'm also not sure why you opted to use LONG_MIN for your check. The C\n> type \"Cash\" is based on int64, that's not long.\n>\n> David\n>\n\nHi, David:\nI am very sorry for not having looked closer at the sample SQL statement\nearlier.\nIndeed, the previous statement didn't trigger cash_out().\n\nI think this was due to the fact that sanitizer assertion may be separated\nfrom the statement triggering the assertion.\nI am still going over the test output, trying to pinpoint the statement.\n\nMeanwhile, I want to thank you for pointing out the constant shouldn't be\nused for the boundary check.\n\nHow about patch v2 which uses the same check from cash_in() ?\nI will see which statement triggers the assertion.\n\nCheers",
"msg_date": "Thu, 11 Aug 2022 13:57:54 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: avoid negating LONG_MIN in cash_out()"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> How about patch v2 which uses the same check from cash_in() ?\n\nI'm not sure which part of this statement you're not getting:\nit is completely unacceptable for cash_out to fail on valid\nvalues of the type. And this value is valid. cash_in goes\nout of its way to take it, and you can also produce it via\narithmetic operators.\n\nI understand that you're trying to get rid of an analyzer warning that\nnegating INT64_MIN is (pedantically, not in practice) undefined behavior.\nBut the way to fix that is to make the behavior conform to the C spec.\nPerhaps it would work to do\n\n Cash value = PG_GETARG_CASH(0);\n uint64 uvalue;\n\n if (value < 0)\n uvalue = -(uint64) value;\n else\n uvalue = value;\n\nand then use uvalue instead of \"(uint64) value\" in the loop.\nOf course, this begs the question of what negation means for\nan unsigned value. I believe that this formulation is allowed\nby the C spec and produces the same results as what we do now,\nbut I'm not convinced that it's clearer for the reader.\n\nAnother possibility is\n\n if (value < 0)\n {\n if (value == INT64_MIN)\n uvalue = however you wanna spell -INT64_MIN;\n else\n uvalue = (uint64) -value;\n }\n else\n uvalue = value;\n\nbut this really seems to be letting pedantry get the best of us.\n\nThe short answer here is that the code works fine on every platform\nwe support. We know that because we have a regression test checking\nthis exact case. So it's not broken and I don't think there's a\nvery good argument that it needs to be fixed. Maybe the right thing\nis just to add a comment pointing out what happens for INT64_MIN.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Aug 2022 21:28:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: avoid negating LONG_MIN in cash_out()"
},
{
"msg_contents": "On Thu, Aug 11, 2022 at 6:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Zhihong Yu <zyu@yugabyte.com> writes:\n> > How about patch v2 which uses the same check from cash_in() ?\n>\n> I'm not sure which part of this statement you're not getting:\n> it is completely unacceptable for cash_out to fail on valid\n> values of the type. And this value is valid. cash_in goes\n> out of its way to take it, and you can also produce it via\n> arithmetic operators.\n>\n> I understand that you're trying to get rid of an analyzer warning that\n> negating INT64_MIN is (pedantically, not in practice) undefined behavior.\n> But the way to fix that is to make the behavior conform to the C spec.\n> Perhaps it would work to do\n>\n> Cash value = PG_GETARG_CASH(0);\n> uint64 uvalue;\n>\n> if (value < 0)\n> uvalue = -(uint64) value;\n> else\n> uvalue = value;\n>\n> and then use uvalue instead of \"(uint64) value\" in the loop.\n> Of course, this begs the question of what negation means for\n> an unsigned value. I believe that this formulation is allowed\n> by the C spec and produces the same results as what we do now,\n> but I'm not convinced that it's clearer for the reader.\n>\n> Another possibility is\n>\n> if (value < 0)\n> {\n> if (value == INT64_MIN)\n> uvalue = however you wanna spell -INT64_MIN;\n> else\n> uvalue = (uint64) -value;\n> }\n> else\n> uvalue = value;\n>\n> but this really seems to be letting pedantry get the best of us.\n>\n> The short answer here is that the code works fine on every platform\n> we support. We know that because we have a regression test checking\n> this exact case. So it's not broken and I don't think there's a\n> very good argument that it needs to be fixed. 
Maybe the right thing\n> is just to add a comment pointing out what happens for INT64_MIN.\n>\n> regards, tom lane\n>\nHi,\nThanks for taking the time to contemplate various possibilities.\n\nI thought of using uint64 as well - but as you have shown, the readability\nisn't better.\n\nI will keep this in the back of my mind.\n\nCheers",
"msg_date": "Thu, 11 Aug 2022 20:36:42 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: avoid negating LONG_MIN in cash_out()"
}
] |
[
{
"msg_contents": "Hi,\n\nFor my optimized builds I've long used -O3 -march=native. After one of the\nrecent package updates (I'm not certain when exactly yet), the main regression\ntests started to fail for me with that. Oddly enough in opr_sanity:\n\n -- Ask access methods to validate opclasses\n -- (this replaces a lot of SQL-level checks that used to be done in this file)\n SELECT oid, opcname FROM pg_opclass WHERE NOT amvalidate(oid);\n- oid | opcname\n------+---------\n-(0 rows)\n+INFO: operator family \"array_ops\" of access method hash contains function hash_array_extended(anyarray,bigint) with wrong signature for support number 2\n+INFO: operator family \"bpchar_ops\" of access method hash contains function hashbpcharextended(character,bigint) with wrong signature for support number 2\n...\n+ 16492 | part_test_int4_ops\n+ 16497 | part_test_text_ops\n+(43 rows)\n\n\nGiven that I did not encounter this problem with gcc-12 before, and that\ngcc-12 has been released, it seems less likely to be a bug in our code\nhighlighted by a new optimization and more likely to be a bug in a gcc bugfix,\nbut it's definitely not clear.\n\n\nI only investigated this a tiny bit so far. What fails is the\nprocform->prorettype != restype comparison in check_hash_func_signature().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Aug 2022 13:03:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "test failure with gcc-12 -O3 -march=native"
},
{
"msg_contents": "On Thu, Aug 11, 2022 at 01:03:43PM -0700, Andres Freund wrote:\n> Hi,\n> \n> For my optimized builds I've long used -O3 -march=native. After one of the\n\nOn what kind of arch ?\n\n> Given that I did not encounter this problem with gcc-12 before, and that\n> gcc-12 has been released, it seems less likely to be a bug in our code\n> highlighted by a new optimization and more likely to be a bug in a gcc bugfix,\n> but it's definitely not clear.\n\ndebian testing is now defaulting to gcc-12.\nhttps://tracker.debian.org/news/1348007/accepted-gcc-defaults-1198-source-into-unstable/\n\nAre you sure you were building with gcc-12 and not gcc(default) which, until 3\nweeks ago, was gcc-11 ?\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 11 Aug 2022 20:06:02 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: test failure with gcc-12 -O3 -march=native"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-11 20:06:02 -0500, Justin Pryzby wrote:\n> On Thu, Aug 11, 2022 at 01:03:43PM -0700, Andres Freund wrote:\n> > Hi,\n> > \n> > For my optimized builds I've long used -O3 -march=native. After one of the\n> \n> On what kind of arch ?\n\nx86-64 cascadelake. I've since debugged this further. It's not even -march\nthat's the problem, it's the difference between -mtune=broadwell and\n-mtune=skylake, even with -march=x86-64.\n\n\n> > Given that I did not encounter this problem with gcc-12 before, and that\n> > gcc-12 has been released, it seems less likely to be a bug in our code\n> > highlighted by a new optimization and more likely to be a bug in a gcc bugfix,\n> > but it's definitely not clear.\n> \n> debian testing is now defaulting to gcc-12.\n> https://tracker.debian.org/news/1348007/accepted-gcc-defaults-1198-source-into-unstable/\n> \n> Are you sure you were building with gcc-12 and not gcc(default) which, until 3\n> weeks ago, was gcc-11 ?\n\nYes.\n\nI'm now bisecting...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Aug 2022 18:24:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: test failure with gcc-12 -O3 -march=native"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-11 18:24:16 -0700, Andres Freund wrote:\n> > > Given that I did not encounter this problem with gcc-12 before, and that\n> > > gcc-12 has been released, it seems less likely to be a bug in our code\n> > > highlighted by a new optimization and more likely to be a bug in a gcc bugfix,\n> > > but it's definitely not clear.\n> > \n> > debian testing is now defaulting to gcc-12.\n> > https://tracker.debian.org/news/1348007/accepted-gcc-defaults-1198-source-into-unstable/\n> > \n> > Are you sure you were building with gcc-12 and not gcc(default) which, until 3\n> > weeks ago, was gcc-11 ?\n> \n> Yes.\n> \n> I'm now bisecting...\n\nI found the commit triggering it [1]. Oddly it's a change from a few months\nago, and I can reconstruct from dpkg.log and shell history that I definitely\nran the tests many times since upgrading the compiler. I did however clean my\nccache cache yesterday, I wonder if somehow the 'old' version got stuck in\nit. ccache says it checks the compiler's mtime though.\n\nGreetings,\n\nAndres Freund\n\n[1] https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=1ceddd7497e\n\n\n",
"msg_date": "Thu, 11 Aug 2022 19:08:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: test failure with gcc-12 -O3 -march=native"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-11 19:08:14 -0700, Andres Freund wrote:\n> On 2022-08-11 18:24:16 -0700, Andres Freund wrote:\n> > I'm now bisecting...\n> \n> I found the commit triggering it [1]. Oddly it's a change from a few months\n> ago, and I can reconstruct from dpkg.log and shell history that I definitely\n> ran the tests many times since upgrading the compiler. I did however clean my\n> ccache cache yesterday, I wonder if somehow the 'old' version got stuck in\n> it. ccache says it checks the compiler's mtime though.\n\nSpent a fair bit of time reducing the problem to something triggering the\nproblem in isolation. This is somewhat scary - I'd be quite surprised if this\nwere the only place triggering the bug.\n\nAnd I suspect that it doesn't actually require -mtune=skylake, but I'm not\nsure.\n\nhttps://gcc.gnu.org/bugzilla/show_bug.cgi?id=106590\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Aug 2022 21:28:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: test failure with gcc-12 -O3 -march=native"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nAt ServiceNow, we frequently encounter queries with very large IN lists\nwhere the number of elements in the IN list range from a few hundred to\nseveral thousand. For a significant fraction of the queries, the IN clauses\nare constructed on primary key columns. While planning these queries,\nPostgres query planner loops over every element in the IN clause, computing\nthe selectivity of each element and then uses that as an input to compute\nthe total selectivity of the IN clause. For IN clauses on primary key or\nunique columns, it is easy to see that the selectivity of the IN predicate\nis given by (number of elements in the IN clause / table cardinality) and\nis independent of the selectivity of the individual elements. We use this\nobservation to avoid computing the selectivities of the individual\nelements. This results in an improvement in the planning time especially\nwhen the number of elements in the IN clause is relatively large.\n\n\n\nThe table below demonstrates the improvement in planning time (averaged\nover 3 runs) for IN queries of the form SELECT COUNT(*) FROM table_a WHERE\nsys_id IN ('000356e61b568510eabcca2b234bcb08',\n'00035846db2f24101ad7f256b9961925', ...). Here sys_id is the primary key\ncolumn of type VARCHAR(32) and the table cardinality of table_a is around\n10M.\n\n\n\nNumber of IN list elements\n\nPlanning time w/o optimization (in ms)\n\nPlanning time w/ optimization (in ms)\n\nSpeedup\n\n500\n\n0.371\n\n0.236\n\n1.57\n\n5000\n\n2.019\n\n0.874\n\n2.31\n\n50000\n\n19.886\n\n8.273\n\n2.40\n\n\n\nSimilar to IN clauses, the selectivity of NOT IN clauses on a primary key\nor unique column can be computed by not computing the selectivities of\nindividual elements. 
The query used is of the form SELECT COUNT(*) FROM\ntable_a WHERE sys_id NOT IN ('000356e61b568510eabcca2b234bcb08',\n'00035846db2f24101ad7f256b9961925', ...).\n\n\n\nNumber of NOT IN list elements\n\nPlanning time w/o optimization (in ms)\n\nPlanning time w/ optimization (in ms)\n\nSpeedup\n\n500\n\n0.380\n\n0.248\n\n1.53\n\n5000\n\n2.534\n\n0.854\n\n2.97\n\n50000\n\n21.316\n\n9.009\n\n2.36\n\n\n\nWe also obtain planning time of queries on a primary key column of type\nINTEGER with 10M elements for both IN and NOT in queries.\n\n\nNumber of IN list elements\n\nPlanning time w/o optimization (in ms)\n\nPlanning time w/ optimization (in ms)\n\nSpeedup\n\n500\n\n0.370\n\n0.208\n\n1.78\n\n5000\n\n1.998\n\n0.816\n\n2.45\n\n50000\n\n18.073\n\n6.750\n\n2.67\n\n\n\n\nNumber of NOT IN list elements\n\nPlanning time w/o optimization (in ms)\n\nPlanning time w/ optimization (in ms)\n\nSpeedup\n\n500\n\n0.342\n\n0.203\n\n1.68\n\n5000\n\n2.073\n\n0.822\n\n3.29\n\n50000\n\n19.551\n\n6.738\n\n2.90\n\n\n\nWe see that the planning time of queries on unique columns are identical to\nthat we observed for primary key columns. The resulting patch file for the\nchanges above is small and we are happy to polish it up and share.\n\n\nBest,\n\nSouvik Bhattacherjee\n\n(ServiceNow)",
"msg_date": "Thu, 11 Aug 2022 14:42:15 -0700",
"msg_from": "Souvik Bhattacherjee <pgsdbhacker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Reducing planning time of large IN queries on primary key / unique\n columns"
},
{
"msg_contents": "(Re-posting with better formatting)\n\nHi hackers,\n\nAt ServiceNow, we frequently encounter queries with very large IN lists\nwhere the number of elements in the IN list range from a\n\nfew hundred to several thousand. For a significant fraction of the queries,\nthe IN clauses are constructed on primary key columns.\n\nWhile planning these queries, Postgres query planner loops over every\nelement in the IN clause, computing the selectivity of each\n\nelement and then uses that as an input to compute the total selectivity of\nthe IN clause. For IN clauses on primary key or unique\n\ncolumns, it is easy to see that the selectivity of the IN predicate is\ngiven by (number of elements in the IN clause / table cardinality)\n\nand is independent of the selectivity of the individual elements. We use\nthis observation to avoid computing the selectivities of the\n\nindividual elements. This results in an improvement in the planning time\nespecially when the number of elements in the IN clause\n\nis relatively large.\n\n\n\nThe table below demonstrates the improvement in planning time (averaged\nover 3 runs) for IN queries of the form\n\nSELECT COUNT(*) FROM table_a WHERE sys_id IN\n('000356e61b568510eabcca2b234bcb08', '00035846db2f24101ad7f256b9961925',\n...).\n\nHere sys_id is the primary key column of type VARCHAR(32) and the table\ncardinality of table_a is around 10M.\n\n\nNumber of IN list elements | Planning time w/o optimization (in ms) | Planning\ntime w/ optimization (in ms) | Speedup\n\n------------------------------------|---------------------------------------------------\n | --------------------------------------------------|--------------\n\n500 | 0.371\n | 0.236\n | 1.57\n\n5000 | 2.019\n | 0.874\n | 2.31\n\n50000 | 19.886\n | 8.273\n | 2.40\n\n\n\nSimilar to IN clauses, the selectivity of NOT IN clauses on a primary key\nor unique column can be computed by not computing the\n\nselectivities of individual elements. 
The query used is of the form SELECT\nCOUNT(*) FROM table_a WHERE sys_id NOT IN\n\n('000356e61b568510eabcca2b234bcb08', '00035846db2f24101ad7f256b9961925',\n...).\n\n\nNumber of NOT IN list elements | Planning time w/o optimization (in\nms) | Planning\ntime w/ optimization (in ms) | Speedup\n\n-------------------------------------------|---------------------------------------------------\n | --------------------------------------------------|--------------\n\n500 | 0.380\n | 0.248\n | 1.53\n\n5000 | 2.534\n | 0.854\n | 2.97\n\n50000 | 21.316\n | 9.009\n | 2.36\n\n\n\nWe also obtain planning time of queries on a primary key column of type\nINTEGER with 10M elements for both IN and NOT in queries.\n\n\nNumber of IN list elements | Planning time w/o optimization (in ms) | Planning\ntime w/ optimization (in ms) | Speedup\n\n------------------------------------|---------------------------------------------------\n | --------------------------------------------------|--------------\n\n500 | 0.370\n | 0.208\n | 1.78\n\n5000 | 1.998\n | 0.816\n | 2.45\n\n50000 | 18.073\n | 6.750\n | 2.67\n\n\nNumber of NOT IN list elements | Planning time w/o optimization (in\nms) | Planning\ntime w/ optimization (in ms) | Speedup\n\n-------------------------------------------|---------------------------------------------------\n | --------------------------------------------------|--------------\n\n500 | 0.342\n | 0.203\n | 1.68\n\n5000 | 2.073\n | 0.822\n | 3.29\n\n50000 |19.551\n | 6.738\n | 2.90\n\n\n\nWe see that the planning time of queries on unique columns are identical to\nthat we observed for primary key columns.\n\nThe resulting patch file for the changes above is small and we are happy to\npolish it up and share.\n\n\nBest,\n\nSouvik Bhattacherjee\n\n(ServiceNow)\n\nOn Thu, Aug 11, 2022 at 2:42 PM Souvik Bhattacherjee <pgsdbhacker@gmail.com>\nwrote:\n\n> Hi hackers,\n>\n> At ServiceNow, we frequently encounter queries with very large IN lists\n> where the number of elements in 
the IN list range from a few hundred to\n> several thousand. For a significant fraction of the queries, the IN clauses\n> are constructed on primary key columns. While planning these queries,\n> Postgres query planner loops over every element in the IN clause, computing\n> the selectivity of each element and then uses that as an input to compute\n> the total selectivity of the IN clause. For IN clauses on primary key or\n> unique columns, it is easy to see that the selectivity of the IN predicate\n> is given by (number of elements in the IN clause / table cardinality) and\n> is independent of the selectivity of the individual elements. We use this\n> observation to avoid computing the selectivities of the individual\n> elements. This results in an improvement in the planning time especially\n> when the number of elements in the IN clause is relatively large.\n>\n>\n>\n> The table below demonstrates the improvement in planning time (averaged\n> over 3 runs) for IN queries of the form SELECT COUNT(*) FROM table_a WHERE\n> sys_id IN ('000356e61b568510eabcca2b234bcb08',\n> '00035846db2f24101ad7f256b9961925', ...). Here sys_id is the primary key\n> column of type VARCHAR(32) and the table cardinality of table_a is around\n> 10M.\n>\n>\n>\n> Number of IN list elements\n>\n> Planning time w/o optimization (in ms)\n>\n> Planning time w/ optimization (in ms)\n>\n> Speedup\n>\n> 500\n>\n> 0.371\n>\n> 0.236\n>\n> 1.57\n>\n> 5000\n>\n> 2.019\n>\n> 0.874\n>\n> 2.31\n>\n> 50000\n>\n> 19.886\n>\n> 8.273\n>\n> 2.40\n>\n>\n>\n> Similar to IN clauses, the selectivity of NOT IN clauses on a primary key\n> or unique column can be computed by not computing the selectivities of\n> individual elements. 
The query used is of the form SELECT COUNT(*) FROM\n> table_a WHERE sys_id NOT IN ('000356e61b568510eabcca2b234bcb08',\n> '00035846db2f24101ad7f256b9961925', ...).\n>\n>\n>\n> Number of NOT IN list elements\n>\n> Planning time w/o optimization (in ms)\n>\n> Planning time w/ optimization (in ms)\n>\n> Speedup\n>\n> 500\n>\n> 0.380\n>\n> 0.248\n>\n> 1.53\n>\n> 5000\n>\n> 2.534\n>\n> 0.854\n>\n> 2.97\n>\n> 50000\n>\n> 21.316\n>\n> 9.009\n>\n> 2.36\n>\n>\n>\n> We also obtain planning time of queries on a primary key column of type\n> INTEGER with 10M elements for both IN and NOT in queries.\n>\n>\n> Number of IN list elements\n>\n> Planning time w/o optimization (in ms)\n>\n> Planning time w/ optimization (in ms)\n>\n> Speedup\n>\n> 500\n>\n> 0.370\n>\n> 0.208\n>\n> 1.78\n>\n> 5000\n>\n> 1.998\n>\n> 0.816\n>\n> 2.45\n>\n> 50000\n>\n> 18.073\n>\n> 6.750\n>\n> 2.67\n>\n>\n>\n>\n> Number of NOT IN list elements\n>\n> Planning time w/o optimization (in ms)\n>\n> Planning time w/ optimization (in ms)\n>\n> Speedup\n>\n> 500\n>\n> 0.342\n>\n> 0.203\n>\n> 1.68\n>\n> 5000\n>\n> 2.073\n>\n> 0.822\n>\n> 3.29\n>\n> 50000\n>\n> 19.551\n>\n> 6.738\n>\n> 2.90\n>\n>\n>\n> We see that the planning time of queries on unique columns are identical\n> to that we observed for primary key columns. The resulting patch file for\n> the changes above is small and we are happy to polish it up and share.\n>\n>\n> Best,\n>\n> Souvik Bhattacherjee\n>\n> (ServiceNow)\n>\n\n(Re-posting with better formatting)Hi hackers,At ServiceNow, we frequently encounter queries with very large IN lists where the number of elements in the IN list range from a few hundred to several thousand. For a significant fraction of the queries, the IN clauses are constructed on primary key columns. While planning these queries, Postgres query planner loops over every element in the IN clause, computing the selectivity of each element and then uses that as an input to compute the total selectivity of the IN clause. 
For IN clauses on primary key or unique columns, it is easy to see that the selectivity of the IN predicate is given by (number of elements in the IN clause / table cardinality)and is independent of the selectivity of the individual elements. We use this observation to avoid computing the selectivities of theindividual elements. This results in an improvement in the planning time especially when the number of elements in the IN clause is relatively large. The table below demonstrates the improvement in planning time (averaged over 3 runs) for IN queries of the form SELECT COUNT(*) FROM table_a WHERE sys_id IN ('000356e61b568510eabcca2b234bcb08', '00035846db2f24101ad7f256b9961925', ...). Here sys_id is the primary key column of type VARCHAR(32) and the table cardinality of table_a is around 10M.Number of IN list elements | Planning time w/o optimization (in ms) | Planning time w/ optimization (in ms) | Speedup------------------------------------|--------------------------------------------------- | --------------------------------------------------|--------------500 | 0.371 | 0.236 | 1.575000 | 2.019 | 0.874 | 2.3150000 | 19.886 | 8.273 | 2.40 Similar to IN clauses, the selectivity of NOT IN clauses on a primary key or unique column can be computed by not computing the selectivities of individual elements. 
The query used is of the form SELECT COUNT(*) FROM table_a WHERE sys_id NOT IN ('000356e61b568510eabcca2b234bcb08', '00035846db2f24101ad7f256b9961925', ...).Number of NOT IN list elements | Planning time w/o optimization (in ms) | Planning time w/ optimization (in ms) | Speedup-------------------------------------------|--------------------------------------------------- | --------------------------------------------------|--------------500 | 0.380 | 0.248 | 1.535000 | 2.534 | 0.854 | 2.9750000 | 21.316 | 9.009 | 2.36 We also obtain planning time of queries on a primary key column of type INTEGER with 10M elements for both IN and NOT in queries.Number of IN list elements | Planning time w/o optimization (in ms) | Planning time w/ optimization (in ms) | Speedup------------------------------------|--------------------------------------------------- | --------------------------------------------------|--------------500 | 0.370 | 0.208 | 1.785000 | 1.998 | 0.816 | 2.4550000 | 18.073 | 6.750 | 2.67Number of NOT IN list elements | Planning time w/o optimization (in ms) | Planning time w/ optimization (in ms) | Speedup-------------------------------------------|--------------------------------------------------- | --------------------------------------------------|--------------500 | 0.342 | 0.203 | 1.685000 | 2.073 | 0.822 | 3.2950000 |19.551 | 6.738 | 2.90 We see that the planning time of queries on unique columns are identical to that we observed for primary key columns. The resulting patch file for the changes above is small and we are happy to polish it up and share.Best,Souvik Bhattacherjee(ServiceNow)On Thu, Aug 11, 2022 at 2:42 PM Souvik Bhattacherjee <pgsdbhacker@gmail.com> wrote:Hi hackers,At ServiceNow, we frequently encounter queries with very large IN lists where the number of elements in the IN list range from a few hundred to several thousand. For a significant fraction of the queries, the IN clauses are constructed on primary key columns. 
While planning these queries, Postgres query planner loops over every element in the IN clause, computing the selectivity of each element and then uses that as an input to compute the total selectivity of the IN clause. For IN clauses on primary key or unique columns, it is easy to see that the selectivity of the IN predicate is given by (number of elements in the IN clause / table cardinality) and is independent of the selectivity of the individual elements. We use this observation to avoid computing the selectivities of the individual elements. This results in an improvement in the planning time especially when the number of elements in the IN clause is relatively large. The table below demonstrates the improvement in planning time (averaged over 3 runs) for IN queries of the form SELECT COUNT(*) FROM table_a WHERE sys_id IN ('000356e61b568510eabcca2b234bcb08', '00035846db2f24101ad7f256b9961925', ...). Here sys_id is the primary key column of type VARCHAR(32) and the table cardinality of table_a is around 10M. Number of IN list elementsPlanning time w/o optimization (in ms)Planning time w/ optimization (in ms)Speedup5000.3710.2361.5750002.0190.8742.315000019.8868.2732.40 Similar to IN clauses, the selectivity of NOT IN clauses on a primary key or unique column can be computed by not computing the selectivities of individual elements. The query used is of the form SELECT COUNT(*) FROM table_a WHERE sys_id NOT IN ('000356e61b568510eabcca2b234bcb08', '00035846db2f24101ad7f256b9961925', ...). 
Number of NOT IN list elementsPlanning time w/o optimization (in ms)Planning time w/ optimization (in ms)Speedup5000.3800.2481.5350002.5340.8542.975000021.3169.0092.36 We also obtain planning time of queries on a primary key column of type INTEGER with 10M elements for both IN and NOT in queries.Number of IN list elementsPlanning time w/o optimization (in ms)Planning time w/ optimization (in ms)Speedup5000.3700.2081.7850001.9980.8162.455000018.0736.7502.67 Number of NOT IN list elementsPlanning time w/o optimization (in ms)Planning time w/ optimization (in ms)Speedup5000.3420.2031.6850002.0730.8223.295000019.5516.7382.90 We see that the planning time of queries on unique columns are identical to that we observed for primary key columns. The resulting patch file for the changes above is small and we are happy to polish it up and share.Best,Souvik Bhattacherjee(ServiceNow)",
"msg_date": "Thu, 11 Aug 2022 15:04:15 -0700",
"msg_from": "Souvik Bhattacherjee <pgsdbhacker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing planning time of large IN queries on primary key /\n unique columns"
},
{
"msg_contents": "Hi hackers,\n\n(Sorry about the re-post. Another attempt at fixing the formatting)\n\nAt ServiceNow, we frequently encounter queries with very large IN lists\nwhere the number of elements in the IN list range from a few hundred to\nseveral thousand. For a significant fraction of the queries, the IN clauses\nare constructed on primary key columns. While planning these queries,\nPostgres query planner loops over every element in the IN clause, computing\nthe selectivity of each element and then uses that as an input to compute\nthe total selectivity of the IN clause. For IN clauses on primary key or\nunique columns, it is easy to see that the selectivity of the IN predicate\nis given by (number of elements in the IN clause / table cardinality) and\nis independent of the selectivity of the individual elements. We use this\nobservation to avoid computing the selectivities of the individual\nelements. This results in an improvement in the planning time especially\nwhen the number of elements in the IN clause is relatively large.\n\n\n\nThe table below demonstrates the improvement in planning time in\nmilliseconds (averaged over 3 runs) for IN queries of the form SELECT\nCOUNT(*) FROM table_a WHERE sys_id IN ('000356e61b568510eabcca2b234bcb08', '\n00035846db2f24101ad7f256b9961925', ...). Here sys_id is the primary key\ncolumn of type VARCHAR(32) and the table cardinality of table_a is around\n10M.\n\n\n# IN elements | Plan time w/o opt | Plan time w/ opt | Speedup\n\n-------------------\n|-------------------------|-----------------------|--------------\n\n500 | 0.371 | 0.236 |\n1.57\n\n5000 | 2.019 | 0.874 |\n2.31\n\n50000 | 19.886 | 8.273 | 2.40\n\n\n\nSimilar to IN clauses, the selectivity of NOT IN clauses on a primary key\nor unique column can be computed by not computing the selectivities of\nindividual elements. 
The query used is of the form SELECT COUNT(*) FROM\ntable_a WHERE sys_id NOT IN ('000356e61b568510eabcca2b234bcb08', '\n00035846db2f24101ad7f256b9961925', ...).\n\n\n# NOT IN elements | Plan time w/o opt | Plan time w/ opt | Speedup\n\n---------------------------|-------------------------|----------------------|--------------\n\n500 | 0.380 | 0.248\n | 1.53\n\n5000 | 2.534 | 0.854\n | 2.97\n\n50000 | 21.316 | 9.009\n | 2.36\n\n\n\nWe also obtain planning time of queries on a primary key column of type\nINTEGER with 10M elements for both IN and NOT in queries.\n\n\n# IN elements | Plan time w/o opt | Plan time w/ opt | Speedup\n\n--------------------|-------------------------|----------------------|--------------\n\n500 | 0.370 | 0.208 |\n1.78\n\n5000 | 1.998 | 0.816 |\n2.45\n\n50000 | 18.073 | 6.750 | 2.67\n\n\n# NOT IN elements | Plan time w/o opt | Plan time w/ opt | Speedup\n\n--------------------------\n|------------------------|------------------------|--------------\n\n500 | 0.342 | 0.203\n | 1.68\n\n5000 | 2.073 | 0.822\n | 3.29\n\n50000 |19.551 | 6.738\n | 2.90\n\n\n\nWe see that the planning time of queries on unique columns are identical to\nthat we observed for primary key columns. The resulting patch file for the\nchanges above is small and we are happy to polish it up and share.\n\n\nBest,\n\nSouvik Bhattacherjee\n\n(ServiceNow)\n\nOn Thu, Aug 11, 2022 at 3:04 PM Souvik Bhattacherjee <pgsdbhacker@gmail.com>\nwrote:\n\n> (Re-posting with better formatting)\n>\n> Hi hackers,\n>\n> At ServiceNow, we frequently encounter queries with very large IN lists\n> where the number of elements in the IN list range from a\n>\n> few hundred to several thousand. 
For a significant fraction of the\n> queries, the IN clauses are constructed on primary key columns.\n>\n> While planning these queries, Postgres query planner loops over every\n> element in the IN clause, computing the selectivity of each\n>\n> element and then uses that as an input to compute the total selectivity of\n> the IN clause. For IN clauses on primary key or unique\n>\n> columns, it is easy to see that the selectivity of the IN predicate is\n> given by (number of elements in the IN clause / table cardinality)\n>\n> and is independent of the selectivity of the individual elements. We use\n> this observation to avoid computing the selectivities of the\n>\n> individual elements. This results in an improvement in the planning time\n> especially when the number of elements in the IN clause\n>\n> is relatively large.\n>\n>\n>\n> The table below demonstrates the improvement in planning time (averaged\n> over 3 runs) for IN queries of the form\n>\n> SELECT COUNT(*) FROM table_a WHERE sys_id IN\n> ('000356e61b568510eabcca2b234bcb08', '00035846db2f24101ad7f256b9961925',\n> ...).\n>\n> Here sys_id is the primary key column of type VARCHAR(32) and the table\n> cardinality of table_a is around 10M.\n>\n>\n> Number of IN list elements | Planning time w/o optimization (in ms) | Planning\n> time w/ optimization (in ms) | Speedup\n>\n> ------------------------------------|---------------------------------------------------\n> | --------------------------------------------------|--------------\n>\n> 500 | 0.371\n> | 0.236\n> | 1.57\n>\n> 5000 | 2.019\n> | 0.874\n> | 2.31\n>\n> 50000 | 19.886\n> | 8.273\n> | 2.40\n>\n>\n>\n> Similar to IN clauses, the selectivity of NOT IN clauses on a primary key\n> or unique column can be computed by not computing the\n>\n> selectivities of individual elements. 
The query used is of the form SELECT\n> COUNT(*) FROM table_a WHERE sys_id NOT IN\n>\n> ('000356e61b568510eabcca2b234bcb08', '00035846db2f24101ad7f256b9961925',\n> ...).\n>\n>\n> Number of NOT IN list elements | Planning time w/o optimization (in ms) | Planning\n> time w/ optimization (in ms) | Speedup\n>\n> -------------------------------------------|---------------------------------------------------\n> | --------------------------------------------------|--------------\n>\n> 500 | 0.380\n> | 0.248\n> | 1.53\n>\n> 5000 | 2.534\n> | 0.854\n> | 2.97\n>\n> 50000 | 21.316\n> | 9.009\n> | 2.36\n>\n>\n>\n> We also obtain planning time of queries on a primary key column of type\n> INTEGER with 10M elements for both IN and NOT in queries.\n>\n>\n> Number of IN list elements | Planning time w/o optimization (in ms) | Planning\n> time w/ optimization (in ms) | Speedup\n>\n> ------------------------------------|---------------------------------------------------\n> | --------------------------------------------------|--------------\n>\n> 500 | 0.370\n> | 0.208\n> | 1.78\n>\n> 5000 | 1.998\n> | 0.816\n> | 2.45\n>\n> 50000 | 18.073\n> | 6.750\n> | 2.67\n>\n>\n> Number of NOT IN list elements | Planning time w/o optimization (in ms) | Planning\n> time w/ optimization (in ms) | Speedup\n>\n> -------------------------------------------|---------------------------------------------------\n> | --------------------------------------------------|--------------\n>\n> 500 | 0.342\n> | 0.203\n> | 1.68\n>\n> 5000 | 2.073\n> | 0.822\n> | 3.29\n>\n> 50000 |19.551\n> | 6.738\n> | 2.90\n>\n>\n>\n> We see that the planning time of queries on unique columns are identical\n> to that we observed for primary key columns.\n>\n> The resulting patch file for the changes above is small and we are happy\n> to polish it up and share.\n>\n>\n> Best,\n>\n> Souvik Bhattacherjee\n>\n> (ServiceNow)\n>\n> On Thu, Aug 11, 2022 at 2:42 PM Souvik Bhattacherjee <\n> pgsdbhacker@gmail.com> wrote:\n>\n>> Hi 
hackers,\n>>\n>> At ServiceNow, we frequently encounter queries with very large IN lists\n>> where the number of elements in the IN list range from a few hundred to\n>> several thousand. For a significant fraction of the queries, the IN clauses\n>> are constructed on primary key columns. While planning these queries,\n>> Postgres query planner loops over every element in the IN clause, computing\n>> the selectivity of each element and then uses that as an input to compute\n>> the total selectivity of the IN clause. For IN clauses on primary key or\n>> unique columns, it is easy to see that the selectivity of the IN predicate\n>> is given by (number of elements in the IN clause / table cardinality) and\n>> is independent of the selectivity of the individual elements. We use this\n>> observation to avoid computing the selectivities of the individual\n>> elements. This results in an improvement in the planning time especially\n>> when the number of elements in the IN clause is relatively large.\n>>\n>>\n>>\n>> The table below demonstrates the improvement in planning time (averaged\n>> over 3 runs) for IN queries of the form SELECT COUNT(*) FROM table_a WHERE\n>> sys_id IN ('000356e61b568510eabcca2b234bcb08',\n>> '00035846db2f24101ad7f256b9961925', ...). Here sys_id is the primary key\n>> column of type VARCHAR(32) and the table cardinality of table_a is around\n>> 10M.\n>>\n>>\n>>\n>> Number of IN list elements\n>>\n>> Planning time w/o optimization (in ms)\n>>\n>> Planning time w/ optimization (in ms)\n>>\n>> Speedup\n>>\n>> 500\n>>\n>> 0.371\n>>\n>> 0.236\n>>\n>> 1.57\n>>\n>> 5000\n>>\n>> 2.019\n>>\n>> 0.874\n>>\n>> 2.31\n>>\n>> 50000\n>>\n>> 19.886\n>>\n>> 8.273\n>>\n>> 2.40\n>>\n>>\n>>\n>> Similar to IN clauses, the selectivity of NOT IN clauses on a primary key\n>> or unique column can be computed by not computing the selectivities of\n>> individual elements. 
The query used is of the form SELECT COUNT(*) FROM\n>> table_a WHERE sys_id NOT IN ('000356e61b568510eabcca2b234bcb08',\n>> '00035846db2f24101ad7f256b9961925', ...).\n>>\n>>\n>>\n>> Number of NOT IN list elements\n>>\n>> Planning time w/o optimization (in ms)\n>>\n>> Planning time w/ optimization (in ms)\n>>\n>> Speedup\n>>\n>> 500\n>>\n>> 0.380\n>>\n>> 0.248\n>>\n>> 1.53\n>>\n>> 5000\n>>\n>> 2.534\n>>\n>> 0.854\n>>\n>> 2.97\n>>\n>> 50000\n>>\n>> 21.316\n>>\n>> 9.009\n>>\n>> 2.36\n>>\n>>\n>>\n>> We also obtain planning time of queries on a primary key column of type\n>> INTEGER with 10M elements for both IN and NOT in queries.\n>>\n>>\n>> Number of IN list elements\n>>\n>> Planning time w/o optimization (in ms)\n>>\n>> Planning time w/ optimization (in ms)\n>>\n>> Speedup\n>>\n>> 500\n>>\n>> 0.370\n>>\n>> 0.208\n>>\n>> 1.78\n>>\n>> 5000\n>>\n>> 1.998\n>>\n>> 0.816\n>>\n>> 2.45\n>>\n>> 50000\n>>\n>> 18.073\n>>\n>> 6.750\n>>\n>> 2.67\n>>\n>>\n>>\n>>\n>> Number of NOT IN list elements\n>>\n>> Planning time w/o optimization (in ms)\n>>\n>> Planning time w/ optimization (in ms)\n>>\n>> Speedup\n>>\n>> 500\n>>\n>> 0.342\n>>\n>> 0.203\n>>\n>> 1.68\n>>\n>> 5000\n>>\n>> 2.073\n>>\n>> 0.822\n>>\n>> 3.29\n>>\n>> 50000\n>>\n>> 19.551\n>>\n>> 6.738\n>>\n>> 2.90\n>>\n>>\n>>\n>> We see that the planning time of queries on unique columns are identical\n>> to that we observed for primary key columns. The resulting patch file for\n>> the changes above is small and we are happy to polish it up and share.\n>>\n>>\n>> Best,\n>>\n>> Souvik Bhattacherjee\n>>\n>> (ServiceNow)\n>>\n>\n\nHi hackers,(Sorry about the re-post. Another attempt at fixing the formatting)At ServiceNow, we frequently encounter queries with very large IN lists where the number of elements in the IN list range from a few hundred to several thousand. For a significant fraction of the queries, the IN clauses are constructed on primary key columns. 
While planning these queries, Postgres query planner loops over every element in the IN clause, computing the selectivity of each element and then uses that as an input to compute the total selectivity of the IN clause. For IN clauses on primary key or unique columns, it is easy to see that the selectivity of the IN predicate is given by (number of elements in the IN clause / table cardinality) and is independent of the selectivity of the individual elements. We use this observation to avoid computing the selectivities of the individual elements. This results in an improvement in the planning time especially when the number of elements in the IN clause is relatively large. The table below demonstrates the improvement in planning time in milliseconds (averaged over 3 runs) for IN queries of the form SELECT COUNT(*) FROM table_a WHERE sys_id IN ('000356e61b568510eabcca2b234bcb08', '00035846db2f24101ad7f256b9961925', ...). Here sys_id is the primary key column of type VARCHAR(32) and the table cardinality of table_a is around 10M.# IN elements | Plan time w/o opt | Plan time w/ opt | Speedup------------------- |-------------------------|-----------------------|--------------500 | 0.371 | 0.236 | 1.575000 | 2.019 | 0.874 | 2.3150000 | 19.886 | 8.273 | 2.40 Similar to IN clauses, the selectivity of NOT IN clauses on a primary key or unique column can be computed by not computing the selectivities of individual elements. 
The query used is of the form SELECT COUNT(*) FROM table_a WHERE sys_id NOT IN ('000356e61b568510eabcca2b234bcb08', '00035846db2f24101ad7f256b9961925', ...).# NOT IN elements | Plan time w/o opt | Plan time w/ opt | Speedup---------------------------|-------------------------|----------------------|--------------500 | 0.380 | 0.248 | 1.535000 | 2.534 | 0.854 | 2.9750000 | 21.316 | 9.009 | 2.36 We also obtain planning time of queries on a primary key column of type INTEGER with 10M elements for both IN and NOT in queries.# IN elements | Plan time w/o opt | Plan time w/ opt | Speedup--------------------|-------------------------|----------------------|--------------500 | 0.370 | 0.208 | 1.785000 | 1.998 | 0.816 | 2.4550000 | 18.073 | 6.750 | 2.67# NOT IN elements | Plan time w/o opt | Plan time w/ opt | Speedup-------------------------- |------------------------|------------------------|--------------500 | 0.342 | 0.203 | 1.685000 | 2.073 | 0.822 | 3.2950000 |19.551 | 6.738 | 2.90 We see that the planning time of queries on unique columns are identical to that we observed for primary key columns. The resulting patch file for the changes above is small and we are happy to polish it up and share.Best,Souvik Bhattacherjee(ServiceNow)On Thu, Aug 11, 2022 at 3:04 PM Souvik Bhattacherjee <pgsdbhacker@gmail.com> wrote:(Re-posting with better formatting)Hi hackers,At ServiceNow, we frequently encounter queries with very large IN lists where the number of elements in the IN list range from a few hundred to several thousand. For a significant fraction of the queries, the IN clauses are constructed on primary key columns. While planning these queries, Postgres query planner loops over every element in the IN clause, computing the selectivity of each element and then uses that as an input to compute the total selectivity of the IN clause. 
For IN clauses on primary key or unique columns, it is easy to see that the selectivity of the IN predicate is given by (number of elements in the IN clause / table cardinality)and is independent of the selectivity of the individual elements. We use this observation to avoid computing the selectivities of theindividual elements. This results in an improvement in the planning time especially when the number of elements in the IN clause is relatively large. The table below demonstrates the improvement in planning time (averaged over 3 runs) for IN queries of the form SELECT COUNT(*) FROM table_a WHERE sys_id IN ('000356e61b568510eabcca2b234bcb08', '00035846db2f24101ad7f256b9961925', ...). Here sys_id is the primary key column of type VARCHAR(32) and the table cardinality of table_a is around 10M.Number of IN list elements | Planning time w/o optimization (in ms) | Planning time w/ optimization (in ms) | Speedup------------------------------------|--------------------------------------------------- | --------------------------------------------------|--------------500 | 0.371 | 0.236 | 1.575000 | 2.019 | 0.874 | 2.3150000 | 19.886 | 8.273 | 2.40 Similar to IN clauses, the selectivity of NOT IN clauses on a primary key or unique column can be computed by not computing the selectivities of individual elements. 
The query used is of the form SELECT COUNT(*) FROM table_a WHERE sys_id NOT IN ('000356e61b568510eabcca2b234bcb08', '00035846db2f24101ad7f256b9961925', ...).Number of NOT IN list elements | Planning time w/o optimization (in ms) | Planning time w/ optimization (in ms) | Speedup-------------------------------------------|--------------------------------------------------- | --------------------------------------------------|--------------500 | 0.380 | 0.248 | 1.535000 | 2.534 | 0.854 | 2.9750000 | 21.316 | 9.009 | 2.36 We also obtain planning time of queries on a primary key column of type INTEGER with 10M elements for both IN and NOT in queries.Number of IN list elements | Planning time w/o optimization (in ms) | Planning time w/ optimization (in ms) | Speedup------------------------------------|--------------------------------------------------- | --------------------------------------------------|--------------500 | 0.370 | 0.208 | 1.785000 | 1.998 | 0.816 | 2.4550000 | 18.073 | 6.750 | 2.67Number of NOT IN list elements | Planning time w/o optimization (in ms) | Planning time w/ optimization (in ms) | Speedup-------------------------------------------|--------------------------------------------------- | --------------------------------------------------|--------------500 | 0.342 | 0.203 | 1.685000 | 2.073 | 0.822 | 3.2950000 |19.551 | 6.738 | 2.90 We see that the planning time of queries on unique columns are identical to that we observed for primary key columns. The resulting patch file for the changes above is small and we are happy to polish it up and share.Best,Souvik Bhattacherjee(ServiceNow)On Thu, Aug 11, 2022 at 2:42 PM Souvik Bhattacherjee <pgsdbhacker@gmail.com> wrote:Hi hackers,At ServiceNow, we frequently encounter queries with very large IN lists where the number of elements in the IN list range from a few hundred to several thousand. For a significant fraction of the queries, the IN clauses are constructed on primary key columns. 
While planning these queries, Postgres query planner loops over every element in the IN clause, computing the selectivity of each element and then uses that as an input to compute the total selectivity of the IN clause. For IN clauses on primary key or unique columns, it is easy to see that the selectivity of the IN predicate is given by (number of elements in the IN clause / table cardinality) and is independent of the selectivity of the individual elements. We use this observation to avoid computing the selectivities of the individual elements. This results in an improvement in the planning time especially when the number of elements in the IN clause is relatively large. The table below demonstrates the improvement in planning time (averaged over 3 runs) for IN queries of the form SELECT COUNT(*) FROM table_a WHERE sys_id IN ('000356e61b568510eabcca2b234bcb08', '00035846db2f24101ad7f256b9961925', ...). Here sys_id is the primary key column of type VARCHAR(32) and the table cardinality of table_a is around 10M. Number of IN list elementsPlanning time w/o optimization (in ms)Planning time w/ optimization (in ms)Speedup5000.3710.2361.5750002.0190.8742.315000019.8868.2732.40 Similar to IN clauses, the selectivity of NOT IN clauses on a primary key or unique column can be computed by not computing the selectivities of individual elements. The query used is of the form SELECT COUNT(*) FROM table_a WHERE sys_id NOT IN ('000356e61b568510eabcca2b234bcb08', '00035846db2f24101ad7f256b9961925', ...). 
Number of NOT IN list elementsPlanning time w/o optimization (in ms)Planning time w/ optimization (in ms)Speedup5000.3800.2481.5350002.5340.8542.975000021.3169.0092.36 We also obtain planning time of queries on a primary key column of type INTEGER with 10M elements for both IN and NOT in queries.Number of IN list elementsPlanning time w/o optimization (in ms)Planning time w/ optimization (in ms)Speedup5000.3700.2081.7850001.9980.8162.455000018.0736.7502.67 Number of NOT IN list elementsPlanning time w/o optimization (in ms)Planning time w/ optimization (in ms)Speedup5000.3420.2031.6850002.0730.8223.295000019.5516.7382.90 We see that the planning time of queries on unique columns are identical to that we observed for primary key columns. The resulting patch file for the changes above is small and we are happy to polish it up and share.Best,Souvik Bhattacherjee(ServiceNow)",
"msg_date": "Thu, 11 Aug 2022 15:41:27 -0700",
"msg_from": "Souvik Bhattacherjee <pgsdbhacker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing planning time of large IN queries on primary key /\n unique columns"
}
] |
[
{
"msg_contents": "Hi,\n\nWith commit 64da07c41a8c0a680460cdafc79093736332b6cf making default\nvalue of log_checkpoints to on, do we need to remove explicit settings\nin perl tests to save some (5) LOC?\n\nAlthough, it's harmless, here's a tiny patch to remove them.\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/",
"msg_date": "Fri, 12 Aug 2022 10:00:53 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove log_checkpoints = true from .pl tests"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> With commit 64da07c41a8c0a680460cdafc79093736332b6cf making default\n> value of log_checkpoints to on, do we need to remove explicit settings\n> in perl tests to save some (5) LOC?\n\nI'm not particularly eager to do that, because I think defaulting\nlog_checkpoints to \"on\" was a bad decision that will eventually get\nreverted. Even if that doesn't happen, we have *far* better ways\nto spend our time than removing five lines of code, or even\ndiscussing whether to do so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Aug 2022 01:34:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove log_checkpoints = true from .pl tests"
}
] |