[
{
"msg_contents": "Hi hackers,\n\nInspired by Andres's work, I put some effort into trying to improve\nGetSnapshotData().\nI think that I got something.\n\nAs is well known, GetSnapshotData() is a critical function for Postgres\nperformance, as other benchmarks have already demonstrated.\nSo any gain, no matter how small, makes a difference.\n\nSo far, no regression has been observed.\n\nWindows 10 64 bits (msvc 2019 64 bits)\npgbench -M prepared -c $conns -S -n -U postgres\n\nconns tps head tps patched\n1 2918.004085 3027.550711\n10 12262.415696 12876.641772\n50 13656.724571 14877.410140\n80 14338.202348 15244.192915\n\npgbench can't run with conns > 80.\n\nLinux Ubuntu 64 bits (gcc 9.4)\n./pgbench -M prepared -c $conns -j $conns -S -n -U postgres\n\nconns tps head tps patched\n1 2918.004085 3190.810466\n10 12262.415696 17199.862401\n50 13656.724571 18278.194114\n80 14338.202348 17955.336101\n90 16597.510373 18269.660184\n100 17706.775793 18349.650150\n200 16877.067441 17881.250615\n300 16942.260775 17181.441752\n400 16794.514911 17124.533892\n500 16598.502151 17181.244953\n600 16717.935001 16961.130742\n700 16651.204834 16959.172005\n800 16467.546583 16834.591719\n900 16588.241149 16693.902459\n1000 16564.985265 16936.952195\n\nI was surprised that in my tests, with more than 100 connections,\nthe tps drops, even on head.\nI don't have access to a powerful machine to really test with a high\nworkload.\nBut with fewer than 100 connections, it seems to me that there are obvious gains.\n\nPatch attached.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 24 May 2022 12:28:20 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "On Tue, May 24, 2022 at 11:28 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> I think that I got something.\n\nYou might have something, but it's pretty hard to tell based on\nlooking at this patch. Whatever relevant changes it has are mixed with\na bunch of changes that are probably not relevant. For example, it's\nhard to believe that moving \"uint32 i\" to an inner scope in\nXidInMVCCSnapshot() is causing a performance gain, because an\noptimizing compiler should figure that out anyway.\n\nAn even bigger issue is that it's not sufficient to just demonstrate\nthat the patch improves performance. It's also necessary to make an\nargument as to why it is safe and correct, and \"I tried it out and\nnothing seemed to break\" does not qualify as an argument. I'd guess\nthat most or maybe all of the performance gain that you've observed\nhere is attributable to changing GetSnapshotData() to call\nGetSnapshotDataReuse() without first acquiring ProcArrayLock. That\ndoesn't seem like a completely hopeless idea, because the comments for\nGetSnapshotDataReuse() say this:\n\n * This very likely can be evolved to not need ProcArrayLock held (at very\n * least in the case we already hold a snapshot), but that's for another day.\n\nHowever, those comments seem to imply that it might not be safe in all\ncases, and that changes might be needed someplace in order to make it\nsafe, but you haven't updated these comments, or changed the function\nin any way, so it's not really clear how or whether whatever problems\nAndres was worried about have been handled.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 May 2022 12:06:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Em ter., 24 de mai. de 2022 às 13:06, Robert Haas <robertmhaas@gmail.com>\nescreveu:\n\n> On Tue, May 24, 2022 at 11:28 AM Ranier Vilela <ranier.vf@gmail.com>\n> wrote:\n> > I think that I got something.\n>\n> You might have something, but it's pretty hard to tell based on\n> looking at this patch. Whatever relevant changes it has are mixed with\n> a bunch of changes that are probably not relevant. For example, it's\n> hard to believe that moving \"uint32 i\" to an inner scope in\n> XidInMVCCSnapshot() is causing a performance gain, because an\n> optimizing compiler should figure that out anyway.\n>\nI believe that even these small changes are helpful.\nThey improve code readability and help the compiler generate better code,\nespecially for older compilers.\n\n\n>\n> An even bigger issue is that it's not sufficient to just demonstrate\n> that the patch improves performance. It's also necessary to make an\n> argument as to why it is safe and correct, and \"I tried it out and\n> nothing seemed to break\" does not qualify as an argument.\n\nOk, admittedly I have not yet made a convincing argument.\n\nI'd guess that most or maybe all of the performance gain that you've\n> observed\n> here is attributable to changing GetSnapshotData() to call\n> GetSnapshotDataReuse() without first acquiring ProcArrayLock.\n\nIt certainly helps, but I trust that's not the only reason, in all the\ntests I did, there was an improvement in performance, even before using\nthis feature.\nIf you look closely at GetSnapShotData() you will see that\nGetSnapshotDataReuse is called for all snapshots, even the new ones, which\nis unnecessary.\nAnother example NormalTransactionIdPrecedes is more expensive than testing\nstatusFlags.\n\nThat\n> doesn't seem like a completely hopeless idea, because the comments for\n> GetSnapshotDataReuse() say this:\n>\n> * This very likely can be evolved to not need ProcArrayLock held (at very\n> * least in the case we already hold a snapshot), 
but that's for another\n> day.\n>\n> However, those comment seem to imply that it might not be safe in all\n> cases, and that changes might be needed someplace in order to make it\n> safe, but you haven't updated these comments, or changed the function\n> in any way, so it's not really clear how or whether whatever problems\n> Andres was worried about have been handled.\n>\nI think it's worth trying and testing to see if everything goes well,\nso in the final patch apply whatever comments are needed.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 24 May 2022 13:23:43 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-24 12:28:20 -0300, Ranier Vilela wrote:\n> Linux Ubuntu 64 bits (gcc 9.4)\n> ./pgbench -M prepared -c $conns -j $conns -S -n -U postgres\n> \n> conns tps head tps patched\n> 1 2918.004085 3190.810466\n> 10 12262.415696 17199.862401\n> 50 13656.724571 18278.194114\n> 80 14338.202348 17955.336101\n> 90 16597.510373 18269.660184\n> 100 17706.775793 18349.650150\n> 200 16877.067441 17881.250615\n> 300 16942.260775 17181.441752\n> 400 16794.514911 17124.533892\n> 500 16598.502151 17181.244953\n> 600 16717.935001 16961.130742\n> 700 16651.204834 16959.172005\n> 800 16467.546583 16834.591719\n> 900 16588.241149 16693.902459\n> 1000 16564.985265 16936.952195\n\n17-18k tps is pretty low for pgbench -S. For a shared_buffers resident run, I\ncan get 40k in a single connection in an optimized build. If you're testing a\nworkload >> shared_buffers, GetSnapshotData() isn't the bottleneck. And\ntesting an assert build isn't a meaningful exercise either, unless you have\nway way higher gains (i.e. stuff like turning O(n^2) into O(n)).\n\nWhat pgbench scale is this and are you using an optimized build?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 24 May 2022 20:46:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-24 13:23:43 -0300, Ranier Vilela wrote:\n> It certainly helps, but I trust that's not the only reason, in all the\n> tests I did, there was an improvement in performance, even before using\n> this feature.\n> If you look closely at GetSnapShotData() you will see that\n> GetSnapshotDataReuse is called for all snapshots, even the new ones, which\n> is unnecessary.\n\nThat only happens a handful of times as snapshots are persistently\nallocated. Doing an extra GetSnapshotDataReuse() in those cases doesn't matter\nfor performance. If anything, this increases the number of jumps for the common\ncase.\n\n\nIt'd be a huge win to avoid needing ProcArrayLock when reusing a snapshot, but\nit's not at all easy to guarantee that it's correct / see how to make it\ncorrect. I'm fairly sure it can be made correct, but ...\n\n\n> Another example NormalTransactionIdPrecedes is more expensive than testing\n> statusFlags.\n\nThat may be true when you count instructions, but isn't at all true when you\ntake into account that the cachelines containing status flags are hotly\ncontended.\n\nAlso, the likelihood of filtering out a proc due to\nNormalTransactionIdPrecedes(xid, xmax) is *vastly* higher than due to the\nstatusFlags check. There may be a lot of procs failing that test, but\ntypically there will be far fewer backends in vacuum or logical decoding.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 24 May 2022 20:56:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Em qua., 25 de mai. de 2022 às 00:46, Andres Freund <andres@anarazel.de>\nescreveu:\n\nHi Andres, thank you for taking a look.\n\n>\n> On 2022-05-24 12:28:20 -0300, Ranier Vilela wrote:\n> > Linux Ubuntu 64 bits (gcc 9.4)\n> > ./pgbench -M prepared -c $conns -j $conns -S -n -U postgres\n> >\n> > conns tps head tps patched\n> > 1 2918.004085 3190.810466\n> > 10 12262.415696 17199.862401\n> > 50 13656.724571 18278.194114\n> > 80 14338.202348 17955.336101\n> > 90 16597.510373 18269.660184\n> > 100 17706.775793 18349.650150\n> > 200 16877.067441 17881.250615\n> > 300 16942.260775 17181.441752\n> > 400 16794.514911 17124.533892\n> > 500 16598.502151 17181.244953\n> > 600 16717.935001 16961.130742\n> > 700 16651.204834 16959.172005\n> > 800 16467.546583 16834.591719\n> > 900 16588.241149 16693.902459\n> > 1000 16564.985265 16936.952195\n>\n> 17-18k tps is pretty low for pgbench -S. For a shared_buffers resident\n> run, I\n> can get 40k in a single connection in an optimized build. If you're\n> testing a\n> workload >> shared_buffers, GetSnapshotData() isn't the bottleneck. And\n> testing an assert build isn't a meaningful exercise either, unless you have\n> way way higher gains (i.e. 
stuff like turning O(n^2) into O(n)).\n>\nThanks for sharing these hits.\nYes, their 17-18k tps are disappointing.\n\n\n> What pgbench scale is this and are you using an optimized build?\n>\nYes this optimized build.\nCFLAGS='-Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n-Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n-Wno-format-truncation -Wno-stringop-truncation -O2'\nfrom config.log\n\npgbench was initialized with:\npgbench -i -p 5432 -d postgres\n\npgbench -M prepared -c 100 -j 100 -S -n -U postgres\npgbench (15beta1)\ntransaction type: <builtin: select only>\nscaling factor: 1\nquery mode: prepared\nnumber of clients: 100\nnumber of threads: 100\n\nThe shared_buffers is default:\nshared_buffers = 128MB\n\nIntel® Core™ i5-8250U CPU Quad Core\nRAM 8GB\nSSD 256 GB\n\nCan you share the pgbench configuration and shared_buffers used for\nthis benchmark?\nhttps://www.postgresql.org/message-id/20200301083601.ews6hz5dduc3w2se%40alap3.anarazel.de\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 25 May 2022 06:07:01 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Em qua., 25 de mai. de 2022 às 00:56, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> Hi,\n>\n> On 2022-05-24 13:23:43 -0300, Ranier Vilela wrote:\n> > It certainly helps, but I trust that's not the only reason, in all the\n> > tests I did, there was an improvement in performance, even before using\n> > this feature.\n> > If you look closely at GetSnapShotData() you will see that\n> > GetSnapshotDataReuse is called for all snapshots, even the new ones,\n> which\n> > is unnecessary.\n>\n> That only happens a handful of times as snapshots are persistently\n> allocated.\n\nYes, but now this does not happen with new snapshots.\n\nDoing an extra GetSnapshotDataReuse() in those cases doesn't matter\n> for performance. If anything this increases the number of jumps for the\n> common\n> case.\n>\nIMHO with GetSnapShotData(), any gain makes a difference.\n\n\n>\n> It'd be a huge win to avoid needing ProcArrayLock when reusing a snapshot,\n> but\n> it's not at all easy to guarantee that it's correct / see how to make it\n> correct. I'm fairly sure it can be made correct, but ...\n>\nI believe it's worth the effort to make sure everything goes well and use\nthis feature.\n\n\n> > Another example NormalTransactionIdPrecedes is more expensive than\n> testing\n> > statusFlags.\n>\n> That may be true when you count instructions, but isn't at all true when\n> you\n> take into account that the cachelines containing status flags are hotly\n> contended.\n>\n\n> Also, the likelihood of filtering out a proc due to\n> NormalTransactionIdPrecedes(xid, xmax) is *vastly* higher than the due to\n> the\n> statusFlags check. There may be a lot of procs failing that test, but\n> typically there will be far fewer backends in vacuum or logical decoding.\n>\nI believe that keeping the instructions in the cache together works better\nthan having the status flags test in the middle.\nBut I will test this to be sure.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 25 May 2022 06:22:24 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "\n\nOn 5/25/22 11:07, Ranier Vilela wrote:\n> Em qua., 25 de mai. de 2022 às 00:46, Andres Freund <andres@anarazel.de\n> <mailto:andres@anarazel.de>> escreveu:\n> \n> Hi Andres, thank you for taking a look.\n> \n> \n> On 2022-05-24 12:28:20 -0300, Ranier Vilela wrote:\n> > Linux Ubuntu 64 bits (gcc 9.4)\n> > ./pgbench -M prepared -c $conns -j $conns -S -n -U postgres\n> >\n> > conns tps head tps patched\n> > 1 2918.004085 3190.810466\n> > 10 12262.415696 17199.862401\n> > 50 13656.724571 18278.194114\n> > 80 14338.202348 17955.336101\n> > 90 16597.510373 18269.660184\n> > 100 17706.775793 18349.650150\n> > 200 16877.067441 17881.250615\n> > 300 16942.260775 17181.441752\n> > 400 16794.514911 17124.533892\n> > 500 16598.502151 17181.244953\n> > 600 16717.935001 16961.130742\n> > 700 16651.204834 16959.172005\n> > 800 16467.546583 16834.591719\n> > 900 16588.241149 16693.902459\n> > 1000 16564.985265 16936.952195\n> \n> 17-18k tps is pretty low for pgbench -S. For a shared_buffers\n> resident run, I\n> can get 40k in a single connection in an optimized build. If you're\n> testing a\n> workload >> shared_buffers, GetSnapshotData() isn't the bottleneck. And\n> testing an assert build isn't a meaningful exercise either, unless\n> you have\n> way way higher gains (i.e. stuff like turning O(n^2) into O(n)).\n> \n> Thanks for sharing these hits.\n> Yes, their 17-18k tps are disappointing.\n> \n> \n> What pgbench scale is this and are you using an optimized build?\n> \n> Yes this optimized build.\n> CFLAGS='-Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n> -Wformat-security -fno-strict-aliasing -fwrapv\n> -fexcess-precision=standard -Wno-format-truncation\n> -Wno-stringop-truncation -O2'\n> from config.log\n> \n\nThat can still be assert-enabled build. 
We need to see configure flags.\n\n> pgbench was initialized with:\n> pgbench -i -p 5432 -d postgres\n> \n> pgbench -M prepared -c 100 -j 100 -S -n -U postgres\n\nYou're not specifying duration/number of transactions to execute. So\nit's using just 10 transactions per client, which is bound to give you\nbogus results due to not having anything in relcache etc. Use -T 60 or\nsomething like that.\n\n> pgbench (15beta1)\n> transaction type: <builtin: select only>\n> scaling factor: 1\n> query mode: prepared\n> number of clients: 100\n> number of threads: 100\n> \n> The shared_buffers is default:\n> shared_buffers = 128MB\n> \n> Intel® Core™ i5-8250U CPU Quad Core\n> RAM 8GB\n> SSD 256 GB\n> \n\nWell, quick results on my laptop (i7-9750H, so not that different from\nwhat you have):\n\n1 = 18908.080126\n2 = 32943.953182\n3 = 42316.079028\n4 = 46700.087645\n\nSo something is likely wrong in your setup.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 25 May 2022 12:13:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Em qua., 25 de mai. de 2022 às 07:13, Tomas Vondra <\ntomas.vondra@enterprisedb.com> escreveu:\n\n>\n>\n> On 5/25/22 11:07, Ranier Vilela wrote:\n> > Em qua., 25 de mai. de 2022 às 00:46, Andres Freund <andres@anarazel.de\n> > <mailto:andres@anarazel.de>> escreveu:\n> >\n> > Hi Andres, thank you for taking a look.\n> >\n> >\n> > On 2022-05-24 12:28:20 -0300, Ranier Vilela wrote:\n> > > Linux Ubuntu 64 bits (gcc 9.4)\n> > > ./pgbench -M prepared -c $conns -j $conns -S -n -U postgres\n> > >\n> > > conns tps head tps patched\n> > > 1 2918.004085 3190.810466\n> > > 10 12262.415696 17199.862401\n> > > 50 13656.724571 18278.194114\n> > > 80 14338.202348 17955.336101\n> > > 90 16597.510373 18269.660184\n> > > 100 17706.775793 18349.650150\n> > > 200 16877.067441 17881.250615\n> > > 300 16942.260775 17181.441752\n> > > 400 16794.514911 17124.533892\n> > > 500 16598.502151 17181.244953\n> > > 600 16717.935001 16961.130742\n> > > 700 16651.204834 16959.172005\n> > > 800 16467.546583 16834.591719\n> > > 900 16588.241149 16693.902459\n> > > 1000 16564.985265 16936.952195\n> >\n> > 17-18k tps is pretty low for pgbench -S. For a shared_buffers\n> > resident run, I\n> > can get 40k in a single connection in an optimized build. If you're\n> > testing a\n> > workload >> shared_buffers, GetSnapshotData() isn't the bottleneck.\n> And\n> > testing an assert build isn't a meaningful exercise either, unless\n> > you have\n> > way way higher gains (i.e. 
stuff like turning O(n^2) into O(n)).\n> >\n> > Thanks for sharing these hits.\n> > Yes, their 17-18k tps are disappointing.\n> >\n> >\n> > What pgbench scale is this and are you using an optimized build?\n> >\n> > Yes this optimized build.\n> > CFLAGS='-Wall -Wmissing-prototypes -Wpointer-arith\n> > -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> > -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n> > -Wformat-security -fno-strict-aliasing -fwrapv\n> > -fexcess-precision=standard -Wno-format-truncation\n> > -Wno-stringop-truncation -O2'\n> > from config.log\n> >\n>\n> That can still be assert-enabled build. We need to see configure flags.\n>\n./configure\nAttached the config.log (compressed)\n\n\n>\n> > pgbench was initialized with:\n> > pgbench -i -p 5432 -d postgres\n> >\n> > pgbench -M prepared -c 100 -j 100 -S -n -U postgres\n>\n> You're not specifying duration/number of transactions to execute. So\n> it's using just 10 transactions per client, which is bound to give you\n> bogus results due to not having anything in relcache etc. 
Use -T 60 or\n> something like that.\n>\nOk, I will try with -T 60.\n\n\n>\n> > pgbench (15beta1)\n> > transaction type: <builtin: select only>\n> > scaling factor: 1\n> > query mode: prepared\n> > number of clients: 100\n> > number of threads: 100\n> >\n> > The shared_buffers is default:\n> > shared_buffers = 128MB\n> >\n> > Intel® Core™ i5-8250U CPU Quad Core\n> > RAM 8GB\n> > SSD 256 GB\n> >\n>\n> Well, quick results on my laptop (i7-9750H, so not that different from\n> what you have):\n>\n> 1 = 18908.080126\n> 2 = 32943.953182\n> 3 = 42316.079028\n> 4 = 46700.087645\n>\n> So something is likely wrong in your setup.\n>\nselect version();\n version\n\n----------------------------------------------------------------------------------------------------------\n PostgreSQL 15beta1 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit\n\nTarget: x86_64-linux-gnu\nConfigured with: ../src/configure -v --with-pkgversion='Ubuntu\n9.4.0-1ubuntu1~20.04.1'\n--with-bugurl=file:///usr/share/doc/gcc-9/README.Bugs\n--enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2\n--prefix=/usr --with-gcc-major-version-only --program-suffix=-9\n--program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id\n--libexecdir=/usr/lib --without-included-gettext --enable-threads=posix\n--libdir=/usr/lib --enable-nls --enable-clocale=gnu\n--enable-libstdcxx-debug --enable-libstdcxx-time=yes\n--with-default-libstdcxx-abi=new --enable-gnu-unique-object\n--disable-vtable-verify --enable-plugin --enable-default-pie\n--with-system-zlib --with-target-system-zlib=auto --enable-objc-gc=auto\n--enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64\n--with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic\n--enable-offload-targets=nvptx-none=/build/gcc-9-Av3uEd/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa\n--without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu\n--host=x86_64-linux-gnu 
--target=x86_64-linux-gnu\nThread model: posix\ngcc version 9.4.0 (Ubuntu 9.4.0-1ubuntu1~20.04.1)\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 25 May 2022 08:26:21 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Em qua., 25 de mai. de 2022 às 08:26, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em qua., 25 de mai. de 2022 às 07:13, Tomas Vondra <\n> tomas.vondra@enterprisedb.com> escreveu:\n>\n>>\n>>\n>> On 5/25/22 11:07, Ranier Vilela wrote:\n>> > Em qua., 25 de mai. de 2022 às 00:46, Andres Freund <andres@anarazel.de\n>> > <mailto:andres@anarazel.de>> escreveu:\n>> >\n>> > Hi Andres, thank you for taking a look.\n>> >\n>> >\n>> > On 2022-05-24 12:28:20 -0300, Ranier Vilela wrote:\n>> > > Linux Ubuntu 64 bits (gcc 9.4)\n>> > > ./pgbench -M prepared -c $conns -j $conns -S -n -U postgres\n>> > >\n>> > > conns tps head tps patched\n>> > > 1 2918.004085 3190.810466\n>> > > 10 12262.415696 17199.862401\n>> > > 50 13656.724571 18278.194114\n>> > > 80 14338.202348 17955.336101\n>> > > 90 16597.510373 18269.660184\n>> > > 100 17706.775793 18349.650150\n>> > > 200 16877.067441 17881.250615\n>> > > 300 16942.260775 17181.441752\n>> > > 400 16794.514911 17124.533892\n>> > > 500 16598.502151 17181.244953\n>> > > 600 16717.935001 16961.130742\n>> > > 700 16651.204834 16959.172005\n>> > > 800 16467.546583 16834.591719\n>> > > 900 16588.241149 16693.902459\n>> > > 1000 16564.985265 16936.952195\n>> >\n>> > 17-18k tps is pretty low for pgbench -S. For a shared_buffers\n>> > resident run, I\n>> > can get 40k in a single connection in an optimized build. If you're\n>> > testing a\n>> > workload >> shared_buffers, GetSnapshotData() isn't the bottleneck.\n>> And\n>> > testing an assert build isn't a meaningful exercise either, unless\n>> > you have\n>> > way way higher gains (i.e. 
stuff like turning O(n^2) into O(n)).\n>> >\n>> > Thanks for sharing these hits.\n>> > Yes, their 17-18k tps are disappointing.\n>> >\n>> >\n>> > What pgbench scale is this and are you using an optimized build?\n>> >\n>> > Yes this optimized build.\n>> > CFLAGS='-Wall -Wmissing-prototypes -Wpointer-arith\n>> > -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n>> > -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n>> > -Wformat-security -fno-strict-aliasing -fwrapv\n>> > -fexcess-precision=standard -Wno-format-truncation\n>> > -Wno-stringop-truncation -O2'\n>> > from config.log\n>> >\n>>\n>> That can still be assert-enabled build. We need to see configure flags.\n>>\n> ./configure\n> Attached the config.log (compressed)\n>\n>\n>>\n>> > pgbench was initialized with:\n>> > pgbench -i -p 5432 -d postgres\n>> >\n>> > pgbench -M prepared -c 100 -j 100 -S -n -U postgres\n>>\n>> You're not specifying duration/number of transactions to execute. So\n>> it's using just 10 transactions per client, which is bound to give you\n>> bogus results due to not having anything in relcache etc. 
Use -T 60 or\n>> something like that.\n>>\n> Ok, I will try with -T 60.\n>\n\nHere the results with -T 60:\nLinux Ubuntu 64 bits\nshared_buffers = 128MB\n\n./pgbench -M prepared -c $conns -j $conns -T 60 -S -n -U postgres\n\npgbench (15beta1)\ntransaction type: <builtin: select only>\nscaling factor: 1\nquery mode: prepared\nnumber of clients: 100\nnumber of threads: 100\nmaximum number of tries: 1\nduration: 60 s\n\nconns tps head tps patched\n\n1 17126.326108 17792.414234\n10 82068.123383 82468.334836\n50 73808.731404 74678.839428\n80 73290.191713 73116.553986\n90 67558.483043 68384.906949\n100 65960.982801 66997.793777\n200 62216.011998 62870.243385\n300 62924.225658 62796.157548\n400 62278.099704 63129.555135\n500 63257.930870 62188.825044\n600 61479.890611 61517.913967\n700 61139.354053 61327.898847\n800 60833.663791 61517.913967\n900 61305.129642 61248.336593\n1000 60990.918719 61041.670996\n\n\nLinux Ubuntu 64 bits\nshared_buffers = 2048MB\n\n./pgbench -M prepared -c $conns -j $conns -S -n -U postgres\n\npgbench (15beta1)\ntransaction type: <builtin: select only>\nscaling factor: 1\nquery mode: prepared\nnumber of clients: 100\nnumber of threads: 100\nmaximum number of tries: 1\nnumber of transactions per client: 10\n\nconns tps head tps patched\n\n1 2918.004085 3211.303789\n10 12262.415696 15540.015540\n50 13656.724571 16701.182444\n80 14338.202348 16628.559551\n90 16597.510373 16835.016835\n100 17706.775793 16607.433487\n200 16877.067441 16426.969799\n300 16942.260775 16319.780662\n400 16794.514911 16155.023607\n500 16598.502151 16051.106724\n600 16717.935001 16007.171213\n700 16651.204834 16004.353184\n800 16467.546583 16834.591719\n900 16588.241149 16693.902459\n1000 16564.985265 16936.952195\n\n\nLinux Ubuntu 64 bits\nshared_buffers = 2048MB\n\n./pgbench -M prepared -c $conns -j $conns -T 60 -S -n -U postgres\n\npgbench (15beta1)\ntransaction type: <builtin: select only>\nscaling factor: 1\nquery mode: prepared\nnumber of clients: 100\nnumber of 
threads: 100\nmaximum number of tries: 1\nduration: 60 s\n\nconns tps head tps patched\n\n1 17174.265804 17792.414234\n10 82365.634750 82468.334836\n50 74593.714180 74678.839428\n80 69219.756038 73116.553986\n90 67419.574189 68384.906949\n100 66613.771701 66997.793777\n200 61739.784830 62870.243385\n300 62109.691298 62796.157548\n400 61630.822446 63129.555135\n500 61711.019964 62755.190389\n600 60620.010181 61517.913967\n700 60303.317736 61688.044232\n800 60451.113573 61076.666572\n900 60017.327157 61256.290037\n1000 60088.823434 60986.799312\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 26 May 2022 21:11:46 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "On 5/27/22 02:11, Ranier Vilela wrote:\n>\n> ...\n> \n> Here the results with -T 60:\n\nMight be a good idea to share your analysis / interpretation of the\nresults, not just the raw data. After all, the change is being proposed\nby you, so do you think this shows the change is beneficial?\n\n> Linux Ubuntu 64 bits\n> shared_buffers = 128MB\n> \n> ./pgbench -M prepared -c $conns -j $conns -T 60 -S -n -U postgres\n> \n> pgbench (15beta1)\n> transaction type: <builtin: select only>\n> scaling factor: 1\n> query mode: prepared\n> number of clients: 100\n> number of threads: 100\n> maximum number of tries: 1\n> duration: 60 s\n> \n> conns tps head tps patched\n> \n> 1 17126.326108 17792.414234\n> 10 82068.123383 82468.334836\n> 50 73808.731404 74678.839428\n> 80 73290.191713 73116.553986\n> 90 67558.483043 68384.906949\n> 100 65960.982801 66997.793777\n> 200 62216.011998 62870.243385\n> 300 62924.225658 62796.157548\n> 400 62278.099704 63129.555135\n> 500 63257.930870 62188.825044\n> 600 61479.890611 61517.913967\n> 700 61139.354053 61327.898847\n> 800 60833.663791 61517.913967\n> 900 61305.129642 61248.336593\n> 1000 60990.918719 61041.670996\n> \n\nThese results look much saner, but IMHO it also does not show any clear\nbenefit of the patch. Or are you still claiming there is a benefit?\n\nBTW it's generally a good idea to do multiple runs and then use the\naverage and/or median. 
Results from a single may be quite noisy.\n\n> \n> Linux Ubuntu 64 bits\n> shared_buffers = 2048MB\n> \n> ./pgbench -M prepared -c $conns -j $conns -S -n -U postgres\n> \n> pgbench (15beta1)\n> transaction type: <builtin: select only>\n> scaling factor: 1\n> query mode: prepared\n> number of clients: 100\n> number of threads: 100\n> maximum number of tries: 1\n> number of transactions per client: 10\n> \n> conns tps head tps patched\n> \n> 1 2918.004085 3211.303789\n> 10 12262.415696 15540.015540\n> 50 13656.724571 16701.182444\n> 80 14338.202348 16628.559551\n> 90 16597.510373 16835.016835\n> 100 17706.775793 16607.433487\n> 200 16877.067441 16426.969799\n> 300 16942.260775 16319.780662\n> 400 16794.514911 16155.023607\n> 500 16598.502151 16051.106724\n> 600 16717.935001 16007.171213\n> 700 16651.204834 16004.353184\n> 800 16467.546583 16834.591719\n> 900 16588.241149 16693.902459\n> 1000 16564.985265 16936.952195\n> \n\nI think we've agreed these results are useless.\n\n> \n> Linux Ubuntu 64 bits\n> shared_buffers = 2048MB\n> \n> ./pgbench -M prepared -c $conns -j $conns -T 60 -S -n -U postgres\n> \n> pgbench (15beta1)\n> transaction type: <builtin: select only>\n> scaling factor: 1\n> query mode: prepared\n> number of clients: 100\n> number of threads: 100\n> maximum number of tries: 1\n> duration: 60 s\n> \n> conns tps head tps patched\n> \n> 1 17174.265804 17792.414234\n> 10 82365.634750 82468.334836\n> 50 74593.714180 74678.839428\n> 80 69219.756038 73116.553986\n> 90 67419.574189 68384.906949\n> 100 66613.771701 66997.793777\n> 200 61739.784830 62870.243385\n> 300 62109.691298 62796.157548\n> 400 61630.822446 63129.555135\n> 500 61711.019964 62755.190389\n> 600 60620.010181 61517.913967\n> 700 60303.317736 61688.044232\n> 800 60451.113573 61076.666572\n> 900 60017.327157 61256.290037\n> 1000 60088.823434 60986.799312\n> \n\nI have no idea why shared buffers 2GB would be interesting. The proposed\nchange was related to procarray, not shared buffers. 
And scale 1 is\n~15MB of data, so it fits into 128MB just fine.\n\nAlso, the first ~10 results for \"patched\" case match results for 128MB\nshared buffers. That seems very unlikely to happen by chance, so this\nseems rather suspicious.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 27 May 2022 03:30:46 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Em qui., 26 de mai. de 2022 às 22:30, Tomas Vondra <\ntomas.vondra@enterprisedb.com> escreveu:\n\n> On 5/27/22 02:11, Ranier Vilela wrote:\n> >\n> > ...\n> >\n> > Here the results with -T 60:\n>\n> Might be a good idea to share your analysis / interpretation of the\n> results, not just the raw data. After all, the change is being proposed\n> by you, so do you think this shows the change is beneficial?\n>\nI think so, but the expectation has diminished.\nI expected that the more connections, the better the performance.\nAnd for both patch and head, this doesn't happen in tests.\nPerformance degrades with a greater number of connections.\nGetSnapShowData() isn't a bottleneck?\n\n\n>\n> > Linux Ubuntu 64 bits\n> > shared_buffers = 128MB\n> >\n> > ./pgbench -M prepared -c $conns -j $conns -T 60 -S -n -U postgres\n> >\n> > pgbench (15beta1)\n> > transaction type: <builtin: select only>\n> > scaling factor: 1\n> > query mode: prepared\n> > number of clients: 100\n> > number of threads: 100\n> > maximum number of tries: 1\n> > duration: 60 s\n> >\n> > conns tps head tps patched\n> >\n> > 1 17126.326108 17792.414234\n> > 10 82068.123383 82468.334836\n> > 50 73808.731404 74678.839428\n> > 80 73290.191713 73116.553986\n> > 90 67558.483043 68384.906949\n> > 100 65960.982801 66997.793777\n> > 200 62216.011998 62870.243385\n> > 300 62924.225658 62796.157548\n> > 400 62278.099704 63129.555135\n> > 500 63257.930870 62188.825044\n> > 600 61479.890611 61517.913967\n> > 700 61139.354053 61327.898847\n> > 800 60833.663791 61517.913967\n> > 900 61305.129642 61248.336593\n> > 1000 60990.918719 61041.670996\n> >\n>\n> These results look much saner, but IMHO it also does not show any clear\n> benefit of the patch. 
Or are you still claiming there is a benefit?\n>\nWe agree that they are micro-optimizations.\nHowever, I think they should be considered micro-optimizations in inner\nloops,\nbecause all in procarray.c is a hotpath.\nThe first objective, I believe, was achieved, with no performance\nregression.\nI agree, the gains are small, by the tests done.\nBut, IMHO, this is a good way, small gains turn into big gains in the end,\nwhen applied to all code.\n\nConsider GetSnapShotData()\n1. Most of the time the snapshot is not null, so:\nif (snaphost == NULL), will fail most of the time.\n\nWith the patch:\nif (snapshot->xip != NULL)\n{\n if (GetSnapshotDataReuse(snapshot))\n return snapshot;\n}\n\nMost of the time the test is true and GetSnapshotDataReuse is not called\nfor new\nsnapshots.\n\ncount, subcount and suboverflowed, will not be initialized, for all\nsnapshots.\n\n2. If snapshot is taken during recoverys\nThe pgprocnos and ProcGlobal->subxidStates are not touched unnecessarily.\nOnly if is not suboverflowed.\nSkipping all InvalidTransactionId, mypgxactoff, backends doing logical\ndecoding,\nand XID is >= xmax.\n\n3. Calling GetSnapshotDataReuse() without first acquiring ProcArrayLock.\nThere's an agreement that this would be fine, for now.\n\nConsider ComputeXidHorizons()\n1. ProcGlobal->statusFlags is touched before the lock.\n2. allStatusFlags[index] is not touched for all numProcs.\n\nAll changes were made with the aim of avoiding or postponing unnecessary\nwork.\n\n\n> BTW it's generally a good idea to do multiple runs and then use the\n> average and/or median. 
Results from a single may be quite noisy.\n>\n> >\n> > Linux Ubuntu 64 bits\n> > shared_buffers = 2048MB\n> >\n> > ./pgbench -M prepared -c $conns -j $conns -S -n -U postgres\n> >\n> > pgbench (15beta1)\n> > transaction type: <builtin: select only>\n> > scaling factor: 1\n> > query mode: prepared\n> > number of clients: 100\n> > number of threads: 100\n> > maximum number of tries: 1\n> > number of transactions per client: 10\n> >\n> > conns tps head tps patched\n> >\n> > 1 2918.004085 3211.303789\n> > 10 12262.415696 15540.015540\n> > 50 13656.724571 16701.182444\n> > 80 14338.202348 16628.559551\n> > 90 16597.510373 16835.016835\n> > 100 17706.775793 16607.433487\n> > 200 16877.067441 16426.969799\n> > 300 16942.260775 16319.780662\n> > 400 16794.514911 16155.023607\n> > 500 16598.502151 16051.106724\n> > 600 16717.935001 16007.171213\n> > 700 16651.204834 16004.353184\n> > 800 16467.546583 16834.591719\n> > 900 16588.241149 16693.902459\n> > 1000 16564.985265 16936.952195\n> >\n>\n> I think we've agreed these results are useless.\n>\n> >\n> > Linux Ubuntu 64 bits\n> > shared_buffers = 2048MB\n> >\n> > ./pgbench -M prepared -c $conns -j $conns -T 60 -S -n -U postgres\n> >\n> > pgbench (15beta1)\n> > transaction type: <builtin: select only>\n> > scaling factor: 1\n> > query mode: prepared\n> > number of clients: 100\n> > number of threads: 100\n> > maximum number of tries: 1\n> > duration: 60 s\n> >\n> > conns tps head tps patched\n> >\n> > 1 17174.265804 17792.414234\n> > 10 82365.634750 82468.334836\n> > 50 74593.714180 74678.839428\n> > 80 69219.756038 73116.553986\n> > 90 67419.574189 68384.906949\n> > 100 66613.771701 66997.793777\n> > 200 61739.784830 62870.243385\n> > 300 62109.691298 62796.157548\n> > 400 61630.822446 63129.555135\n> > 500 61711.019964 62755.190389\n> > 600 60620.010181 61517.913967\n> > 700 60303.317736 61688.044232\n> > 800 60451.113573 61076.666572\n> > 900 60017.327157 61256.290037\n> > 1000 60088.823434 60986.799312\n> >\n>\n> I have 
no idea why shared buffers 2GB would be interesting. The proposed\n> change was related to procarray, not shared buffers. And scale 1 is\n> ~15MB of data, so it fits into 128MB just fine.\n>\n I thought about doing this benchmark, in the most common usage situation\n(25% of RAM).\n\n\n> Also, the first ~10 results for \"patched\" case match results for 128MB\n> shared buffers. That seems very unlikely to happen by chance, so this\n> seems rather suspicious.\n>\nProbably, copy and paste mistake.\nI redid this test, for patched:\n\nLinux Ubuntu 64 bits\nshared_buffers = 2048MB\n\n./pgbench -M prepared -c $conns -j $conns -T 60 -S -n -U postgres\n\npgbench (15beta1)\ntransaction type: <builtin: select only>\nscaling factor: 1\nquery mode: prepared\nnumber of clients: 100\nnumber of threads: 100\nmaximum number of tries: 1\nduration: 60 s\n\nconns tps head tps patched\n\n1 17174.265804 17524.482668\n10 82365.634750 81840.537713\n50 74593.714180 74806.729434\n80 69219.756038 73116.553986\n90 67419.574189 69130.749209\n100 66613.771701 67478.234595\n200 61739.784830 63094.202413\n300 62109.691298 62984.501251\n400 61630.822446 63243.232816\n500 61711.019964 62827.977636\n600 60620.010181 62838.051693\n700 60303.317736 61594.629618\n800 60451.113573 61208.629058\n900 60017.327157 61171.001256\n1000 60088.823434 60558.067810\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 27 May 2022 10:35:08 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-27 03:30:46 +0200, Tomas Vondra wrote:\n> On 5/27/22 02:11, Ranier Vilela wrote:\n> > ./pgbench -M prepared -c $conns -j $conns -T 60 -S -n -U postgres\n> > \n> > pgbench (15beta1)\n> > transaction type: <builtin: select only>\n> > scaling factor: 1\n> > query mode: prepared\n> > number of clients: 100\n> > number of threads: 100\n> > maximum number of tries: 1\n> > duration: 60 s\n> > \n> > conns tps head tps patched\n> > \n> > 1 17126.326108 17792.414234\n> > 10 82068.123383 82468.334836\n> > 50 73808.731404 74678.839428\n> > 80 73290.191713 73116.553986\n> > 90 67558.483043 68384.906949\n> > 100 65960.982801 66997.793777\n> > 200 62216.011998 62870.243385\n> > 300 62924.225658 62796.157548\n> > 400 62278.099704 63129.555135\n> > 500 63257.930870 62188.825044\n> > 600 61479.890611 61517.913967\n> > 700 61139.354053 61327.898847\n> > 800 60833.663791 61517.913967\n> > 900 61305.129642 61248.336593\n> > 1000 60990.918719 61041.670996\n> > \n> \n> These results look much saner, but IMHO it also does not show any clear\n> benefit of the patch. Or are you still claiming there is a benefit?\n\nThey don't look all that sane to me - isn't that way lower than one would\nexpect? Restricting both client and server to the same four cores, a\nthermically challenged older laptop I have around I get 150k tps at both 10\nand 100 clients.\n\nEither way, I'd not expect to see any GetSnapshotData() scalability effects to\nshow up on an \"Intel® Core™ i5-8250U CPU Quad Core\" - there's just not enough\nconcurrency.\n\nThe correct pieces of these changes seem very unlikely to affect\nGetSnapshotData() performance meaningfully.\n\nTo improve something like GetSnapshotData() you first have to come up with a\nworkload that shows it being a meaningful part of a profile. Unless it is,\nperformance differences are going to just be due to various forms of noise.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 May 2022 14:08:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-27 10:35:08 -0300, Ranier Vilela wrote:\n> Em qui., 26 de mai. de 2022 às 22:30, Tomas Vondra <\n> tomas.vondra@enterprisedb.com> escreveu:\n>\n> > On 5/27/22 02:11, Ranier Vilela wrote:\n> > >\n> > > ...\n> > >\n> > > Here the results with -T 60:\n> >\n> > Might be a good idea to share your analysis / interpretation of the\n> > results, not just the raw data. After all, the change is being proposed\n> > by you, so do you think this shows the change is beneficial?\n> >\n> I think so, but the expectation has diminished.\n> I expected that the more connections, the better the performance.\n> And for both patch and head, this doesn't happen in tests.\n> Performance degrades with a greater number of connections.\n\nYour system has four CPUs. Once they're all busy, adding more connections\nwon't improve performance. It'll just add more and more context switching,\ncache misses, and make the OS scheduler do more work.\n\n\n\n> GetSnapShowData() isn't a bottleneck?\n\nI'd be surprised if it showed up in a profile on your machine with that\nworkload in any sort of meaningful way. The snapshot reuse logic will always\nwork - because there are no writes - and thus the only work that needs to be\ndone is to acquire the ProcArrayLock briefly. And because there is only a\nsmall number of cores, contention on the cacheline for that isn't a problem.\n\n\n> > These results look much saner, but IMHO it also does not show any clear\n> > benefit of the patch. Or are you still claiming there is a benefit?\n> >\n> We agree that they are micro-optimizations. 
However, I think they should be\n> considered micro-optimizations in inner loops, because all in procarray.c is\n> a hotpath.\n\nAs explained earlier, I don't agree that they optimize anything - you're\nmaking some of the scalability behaviour *worse*, if it's changed at all.\n\n\n> The first objective, I believe, was achieved, with no performance\n> regression.\n> I agree, the gains are small, by the tests done.\n\nThere are no gains.\n\n\n> But, IMHO, this is a good way, small gains turn into big gains in the end,\n> when applied to all code.\n>\n> Consider GetSnapShotData()\n> 1. Most of the time the snapshot is not null, so:\n> if (snaphost == NULL), will fail most of the time.\n>\n> With the patch:\n> if (snapshot->xip != NULL)\n> {\n> if (GetSnapshotDataReuse(snapshot))\n> return snapshot;\n> }\n>\n> Most of the time the test is true and GetSnapshotDataReuse is not called\n> for new\n> snapshots.\n> count, subcount and suboverflowed, will not be initialized, for all\n> snapshots.\n\nBut that's irrelevant. There's only a few \"new\" snapshots in the life of a\nconnection. You're optimizing something irrelevant.\n\n\n> 2. If snapshot is taken during recoverys\n> The pgprocnos and ProcGlobal->subxidStates are not touched unnecessarily.\n\nThat code isn't reached when in recovery?\n\n\n> 3. Calling GetSnapshotDataReuse() without first acquiring ProcArrayLock.\n> There's an agreement that this would be fine, for now.\n\nThere's no such agreement at all. It's not correct.\n\n\n> Consider ComputeXidHorizons()\n> 1. ProcGlobal->statusFlags is touched before the lock.\n\nHard to believe that'd have a measurable effect.\n\n\n> 2. allStatusFlags[index] is not touched for all numProcs.\n\nI'd be surprised if the compiler couldn't defer that load on its own.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 May 2022 14:22:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Em sex., 27 de mai. de 2022 às 18:08, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> Hi,\n>\n> On 2022-05-27 03:30:46 +0200, Tomas Vondra wrote:\n> > On 5/27/22 02:11, Ranier Vilela wrote:\n> > > ./pgbench -M prepared -c $conns -j $conns -T 60 -S -n -U postgres\n> > >\n> > > pgbench (15beta1)\n> > > transaction type: <builtin: select only>\n> > > scaling factor: 1\n> > > query mode: prepared\n> > > number of clients: 100\n> > > number of threads: 100\n> > > maximum number of tries: 1\n> > > duration: 60 s\n> > >\n> > > conns tps head tps patched\n> > >\n> > > 1 17126.326108 17792.414234\n> > > 10 82068.123383 82468.334836\n> > > 50 73808.731404 74678.839428\n> > > 80 73290.191713 73116.553986\n> > > 90 67558.483043 68384.906949\n> > > 100 65960.982801 66997.793777\n> > > 200 62216.011998 62870.243385\n> > > 300 62924.225658 62796.157548\n> > > 400 62278.099704 63129.555135\n> > > 500 63257.930870 62188.825044\n> > > 600 61479.890611 61517.913967\n> > > 700 61139.354053 61327.898847\n> > > 800 60833.663791 61517.913967\n> > > 900 61305.129642 61248.336593\n> > > 1000 60990.918719 61041.670996\n> > >\n> >\n> > These results look much saner, but IMHO it also does not show any clear\n> > benefit of the patch. Or are you still claiming there is a benefit?\n>\n> They don't look all that sane to me - isn't that way lower than one would\n> expect?\n\nYes, quite disappointing.\n\nRestricting both client and server to the same four cores, a\n> thermically challenged older laptop I have around I get 150k tps at both 10\n> and 100 clients.\n>\nAnd you can share the benchmark details? 
Hardware, postgres and pgbench,\nplease?\n\n>\n> Either way, I'd not expect to see any GetSnapshotData() scalability\n> effects to\n> show up on an \"Intel® Core™ i5-8250U CPU Quad Core\" - there's just not\n> enough\n> concurrency.\n>\nThis means that our customers will not see any connections scalability with\nPG15, using the simplest hardware?\n\n\n> The correct pieces of these changes seem very unlikely to affect\n> GetSnapshotData() performance meaningfully.\n>\n> To improve something like GetSnapshotData() you first have to come up with\n> a\n> workload that shows it being a meaningful part of a profile. Unless it is,\n> performance differences are going to just be due to various forms of noise.\n>\nActually in the profiles I got with perf, GetSnapShotData() didn't show up.\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 27 May 2022 21:15:50 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Em sex., 27 de mai. de 2022 às 18:22, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> Hi,\n>\n> On 2022-05-27 10:35:08 -0300, Ranier Vilela wrote:\n> > Em qui., 26 de mai. de 2022 às 22:30, Tomas Vondra <\n> > tomas.vondra@enterprisedb.com> escreveu:\n> >\n> > > On 5/27/22 02:11, Ranier Vilela wrote:\n> > > >\n> > > > ...\n> > > >\n> > > > Here the results with -T 60:\n> > >\n> > > Might be a good idea to share your analysis / interpretation of the\n> > > results, not just the raw data. After all, the change is being proposed\n> > > by you, so do you think this shows the change is beneficial?\n> > >\n> > I think so, but the expectation has diminished.\n> > I expected that the more connections, the better the performance.\n> > And for both patch and head, this doesn't happen in tests.\n> > Performance degrades with a greater number of connections.\n>\n> Your system has four CPUs. Once they're all busy, adding more connections\n> won't improve performance. It'll just add more and more context switching,\n> cache misses, and make the OS scheduler do more work.\n>\nconns tps head\n10 82365.634750\n50 74593.714180\n80 69219.756038\n90 67419.574189\n100 66613.771701\nYes it is quite disappointing that with 100 connections, tps loses to 10\nconnections.\n\n\n>\n>\n>\n> > GetSnapShowData() isn't a bottleneck?\n>\n> I'd be surprised if it showed up in a profile on your machine with that\n> workload in any sort of meaningful way. The snapshot reuse logic will\n> always\n> work - because there are no writes - and thus the only work that needs to\n> be\n> done is to acquire the ProcArrayLock briefly. And because there is only a\n> small number of cores, contention on the cacheline for that isn't a\n> problem.\n>\nThanks for sharing this.\n\n\n>\n>\n> > > These results look much saner, but IMHO it also does not show any clear\n> > > benefit of the patch. Or are you still claiming there is a benefit?\n> > >\n> > We agree that they are micro-optimizations. 
However, I think they\n> should be\n> > considered micro-optimizations in inner loops, because all in\n> procarray.c is\n> > a hotpath.\n>\n> As explained earlier, I don't agree that they optimize anything - you're\n> making some of the scalability behaviour *worse*, if it's changed at all.\n>\n>\n> > The first objective, I believe, was achieved, with no performance\n> > regression.\n> > I agree, the gains are small, by the tests done.\n>\n> There are no gains.\n>\nIMHO, I must disagree.\n\n\n>\n>\n> > But, IMHO, this is a good way, small gains turn into big gains in the\n> end,\n> > when applied to all code.\n> >\n> > Consider GetSnapShotData()\n> > 1. Most of the time the snapshot is not null, so:\n> > if (snaphost == NULL), will fail most of the time.\n> >\n> > With the patch:\n> > if (snapshot->xip != NULL)\n> > {\n> > if (GetSnapshotDataReuse(snapshot))\n> > return snapshot;\n> > }\n> >\n> > Most of the time the test is true and GetSnapshotDataReuse is not called\n> > for new\n> > snapshots.\n> > count, subcount and suboverflowed, will not be initialized, for all\n> > snapshots.\n>\n> But that's irrelevant. There's only a few \"new\" snapshots in the life of a\n> connection. You're optimizing something irrelevant.\n>\nIMHO, when GetSnapShotData() is the bottleneck, all is relevant.\n\n\n>\n>\n> > 2. If snapshot is taken during recoverys\n> > The pgprocnos and ProcGlobal->subxidStates are not touched unnecessarily.\n>\n> That code isn't reached when in recovery?\n>\nCurrently it is reached *even* when not in recovery.\nWith the patch, *only* is reached when in recovery.\n\n\n>\n> > 3. Calling GetSnapshotDataReuse() without first acquiring ProcArrayLock.\n> > There's an agreement that this would be fine, for now.\n>\n> There's no such agreement at all. It's not correct.\n>\nOk, but there is a chance it will work correctly.\n\n\n>\n> > Consider ComputeXidHorizons()\n> > 1. 
ProcGlobal->statusFlags is touched before the lock.\n>\n> Hard to believe that'd have a measurable effect.\n>\nIMHO, anything you take out of the lock is a benefit.\n\n\n>\n>\n> > 2. allStatusFlags[index] is not touched for all numProcs.\n>\n> I'd be surprised if the compiler couldn't defer that load on its own.\n>\nBetter be sure of that, no?\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 27 May 2022 21:36:30 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "On 5/28/22 02:15, Ranier Vilela wrote:\n> \n> \n> Em sex., 27 de mai. de 2022 às 18:08, Andres Freund <andres@anarazel.de\n> <mailto:andres@anarazel.de>> escreveu:\n> \n> Hi,\n> \n> On 2022-05-27 03:30:46 +0200, Tomas Vondra wrote:\n> > On 5/27/22 02:11, Ranier Vilela wrote:\n> > > ./pgbench -M prepared -c $conns -j $conns -T 60 -S -n -U postgres\n> > >\n> > > pgbench (15beta1)\n> > > transaction type: <builtin: select only>\n> > > scaling factor: 1\n> > > query mode: prepared\n> > > number of clients: 100\n> > > number of threads: 100\n> > > maximum number of tries: 1\n> > > duration: 60 s\n> > >\n> > > conns tps head tps patched\n> > >\n> > > 1 17126.326108 17792.414234\n> > > 10 82068.123383 82468.334836\n> > > 50 73808.731404 74678.839428\n> > > 80 73290.191713 73116.553986\n> > > 90 67558.483043 68384.906949\n> > > 100 65960.982801 66997.793777\n> > > 200 62216.011998 62870.243385\n> > > 300 62924.225658 62796.157548\n> > > 400 62278.099704 63129.555135\n> > > 500 63257.930870 62188.825044\n> > > 600 61479.890611 61517.913967\n> > > 700 61139.354053 61327.898847\n> > > 800 60833.663791 61517.913967\n> > > 900 61305.129642 61248.336593\n> > > 1000 60990.918719 61041.670996\n> > >\n> >\n> > These results look much saner, but IMHO it also does not show any\n> clear\n> > benefit of the patch. Or are you still claiming there is a benefit?\n> \n> They don't look all that sane to me - isn't that way lower than one\n> would\n> expect?\n> \n> Yes, quite disappointing.\n> \n> Restricting both client and server to the same four cores, a\n> thermically challenged older laptop I have around I get 150k tps at\n> both 10\n> and 100 clients.\n> \n> And you can share the benchmark details? 
Hardware, postgres and pgbench,\n> please?\n> \n> \n> Either way, I'd not expect to see any GetSnapshotData() scalability\n> effects to\n> show up on an \"Intel® Core™ i5-8250U CPU Quad Core\" - there's just\n> not enough\n> concurrency.\n> \n> This means that our customers will not see any connections scalability\n> with PG15, using the simplest hardware?\n> \n\nNo. It means that on 4-core machine GetSnapshotData() is unlikely to be\na bottleneck, because you'll hit various other bottlenecks way earlier.\n\nI personally doubt it even makes sense to worry about scaling to this\nmany connections on such tiny system too much.\n\n> \n> The correct pieces of these changes seem very unlikely to affect\n> GetSnapshotData() performance meaningfully.\n> \n> To improve something like GetSnapshotData() you first have to come\n> up with a\n> workload that shows it being a meaningful part of a profile. Unless\n> it is,\n> performance differences are going to just be due to various forms of\n> noise.\n> \n> Actually in the profiles I got with perf, GetSnapShotData() didn't show up.\n> \n\nBut that's exactly the point Andres is trying to make - if you don't see\nGetSnapshotData() in the perf profile, why do you think optimizing it\nwill have any meaningful impact on throughput?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 28 May 2022 14:00:16 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "On 5/28/22 02:36, Ranier Vilela wrote:\n> Em sex., 27 de mai. de 2022 às 18:22, Andres Freund <andres@anarazel.de\n> <mailto:andres@anarazel.de>> escreveu:\n> \n> Hi,\n> \n> On 2022-05-27 10:35:08 -0300, Ranier Vilela wrote:\n> > Em qui., 26 de mai. de 2022 às 22:30, Tomas Vondra <\n> > tomas.vondra@enterprisedb.com\n> <mailto:tomas.vondra@enterprisedb.com>> escreveu:\n> >\n> > > On 5/27/22 02:11, Ranier Vilela wrote:\n> > > >\n> > > > ...\n> > > >\n> > > > Here the results with -T 60:\n> > >\n> > > Might be a good idea to share your analysis / interpretation of the\n> > > results, not just the raw data. After all, the change is being\n> proposed\n> > > by you, so do you think this shows the change is beneficial?\n> > >\n> > I think so, but the expectation has diminished.\n> > I expected that the more connections, the better the performance.\n> > And for both patch and head, this doesn't happen in tests.\n> > Performance degrades with a greater number of connections.\n> \n> Your system has four CPUs. Once they're all busy, adding more\n> connections\n> won't improve performance. It'll just add more and more context\n> switching,\n> cache misses, and make the OS scheduler do more work.\n> \n> conns tps head\n> 10 82365.634750\n> 50 74593.714180\n> 80 69219.756038\n> 90 67419.574189\n> 100 66613.771701\n> Yes it is quite disappointing that with 100 connections, tps loses to 10\n> connections.\n> \n\nIMO that's entirely expected on a system with only 4 cores. Increasing\nthe number of connections inevitably means more overhead (you have to\ntrack/manage more stuff). And at some point the backends start competing\nfor L2/L3 caches, context switches are not free either, etc. So once you\ncross ~2-3x the number of cores, you should expect this.\n\nThis behavior is natural/inherent, it's unlikely to go away, and it's\none of the reasons why we recommend not to use too many connections. If\nyou try to maximize throughput, just don't do that. 
Or just use machine\nwith more cores.\n\n> \n> > GetSnapShowData() isn't a bottleneck?\n> \n> I'd be surprised if it showed up in a profile on your machine with that\n> workload in any sort of meaningful way. The snapshot reuse logic\n> will always\n> work - because there are no writes - and thus the only work that\n> needs to be\n> done is to acquire the ProcArrayLock briefly. And because there is\n> only a\n> small number of cores, contention on the cacheline for that isn't a\n> problem.\n> \n> Thanks for sharing this.\n> \n> \n> \n> \n> > > These results look much saner, but IMHO it also does not show\n> any clear\n> > > benefit of the patch. Or are you still claiming there is a benefit?\n> > >\n> > We agree that they are micro-optimizations. However, I think they\n> should be\n> > considered micro-optimizations in inner loops, because all in\n> procarray.c is\n> > a hotpath.\n> \n> As explained earlier, I don't agree that they optimize anything - you're\n> making some of the scalability behaviour *worse*, if it's changed at\n> all.\n> \n> \n> > The first objective, I believe, was achieved, with no performance\n> > regression.\n> > I agree, the gains are small, by the tests done.\n> \n> There are no gains.\n> \n> IMHO, I must disagree.\n> \n\nYou don't have to, really. What you should do is showing results\ndemonstrating the claimed gains, and so far you have not done that.\n\nI don't want to be rude, but so far you've shown results from a\nbenchmark testing fork(), due to only running 10 transactions per\nclient, and then results from a single run for each client count (which\ndoesn't really show any gains either, and is so noisy).\n\nAs mentioned GetSnapshotData() is not even in perf profile, so why would\nthe patch even make a difference?\n\nYou've also claimed it helps generating better code on older compilers,\nbut you've never supported that with any evidence.\n\n\nMaybe there is an improvement - show us. 
Do a benchmark with more runs,\nto average-out the noise. Calculate VAR/STDEV to show how variable the\nresults are. Use that to compare results and decide if there is an\nimprovement. Also, keep in mind binary layout matters [1].\n\n[1] https://www.youtube.com/watch?v=r-TLSBdHe1A\n\n> \n> \n> \n> > But, IMHO, this is a good way, small gains turn into big gains in\n> the end,\n> > when applied to all code.\n> >\n> > Consider GetSnapShotData()\n> > 1. Most of the time the snapshot is not null, so:\n> > if (snaphost == NULL), will fail most of the time.\n> >\n> > With the patch:\n> > if (snapshot->xip != NULL)\n> > {\n> > if (GetSnapshotDataReuse(snapshot))\n> > return snapshot;\n> > }\n> >\n> > Most of the time the test is true and GetSnapshotDataReuse is not\n> called\n> > for new\n> > snapshots.\n> > count, subcount and suboverflowed, will not be initialized, for all\n> > snapshots.\n> \n> But that's irrelevant. There's only a few \"new\" snapshots in the\n> life of a\n> connection. You're optimizing something irrelevant.\n> \n> IMHO, when GetSnapShotData() is the bottleneck, all is relevant.\n> \n\nMaybe. Show us the difference.\n\n> \n> \n> \n> > 2. If snapshot is taken during recoverys\n> > The pgprocnos and ProcGlobal->subxidStates are not touched\n> unnecessarily.\n> \n> That code isn't reached when in recovery?\n> \n> Currently it is reached *even* when not in recovery.\n> With the patch, *only* is reached when in recovery.\n> \n> \n> \n> > 3. Calling GetSnapshotDataReuse() without first acquiring\n> ProcArrayLock.\n> > There's an agreement that this would be fine, for now.\n> \n> There's no such agreement at all. It's not correct.\n> \n> Ok, but there is a chance it will work correctly.\n> \n\nEither it's correct or not. Chance of being correct does not count.\n\n> \n> \n> > Consider ComputeXidHorizons()\n> > 1. 
ProcGlobal->statusFlags is touched before the lock.\n> \n> Hard to believe that'd have a measurable effect.\n> \n> IMHO, anything you take out of the lock is a benefit.\n> \n\nMaybe. Show us the difference.\n\n> \n> \n> \n> > 2. allStatusFlags[index] is not touched for all numProcs.\n> \n> I'd be surprised if the compiler couldn't defer that load on its own.\n> \n> Better be sure of that, no?\n> \n\nWe rely on compilers doing this in about a million other places.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 28 May 2022 14:35:00 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Em sáb., 28 de mai. de 2022 às 09:00, Tomas Vondra <\ntomas.vondra@enterprisedb.com> escreveu:\n\n> On 5/28/22 02:15, Ranier Vilela wrote:\n> >\n> >\n> > Em sex., 27 de mai. de 2022 às 18:08, Andres Freund <andres@anarazel.de\n> > <mailto:andres@anarazel.de>> escreveu:\n> >\n> > Hi,\n> >\n> > On 2022-05-27 03:30:46 +0200, Tomas Vondra wrote:\n> > > On 5/27/22 02:11, Ranier Vilela wrote:\n> > > > ./pgbench -M prepared -c $conns -j $conns -T 60 -S -n -U postgres\n> > > >\n> > > > pgbench (15beta1)\n> > > > transaction type: <builtin: select only>\n> > > > scaling factor: 1\n> > > > query mode: prepared\n> > > > number of clients: 100\n> > > > number of threads: 100\n> > > > maximum number of tries: 1\n> > > > duration: 60 s\n> > > >\n> > > > conns tps head tps patched\n> > > >\n> > > > 1 17126.326108 17792.414234\n> > > > 10 82068.123383 82468.334836\n> > > > 50 73808.731404 74678.839428\n> > > > 80 73290.191713 73116.553986\n> > > > 90 67558.483043 68384.906949\n> > > > 100 65960.982801 66997.793777\n> > > > 200 62216.011998 62870.243385\n> > > > 300 62924.225658 62796.157548\n> > > > 400 62278.099704 63129.555135\n> > > > 500 63257.930870 62188.825044\n> > > > 600 61479.890611 61517.913967\n> > > > 700 61139.354053 61327.898847\n> > > > 800 60833.663791 61517.913967\n> > > > 900 61305.129642 61248.336593\n> > > > 1000 60990.918719 61041.670996\n> > > >\n> > >\n> > > These results look much saner, but IMHO it also does not show any\n> > clear\n> > > benefit of the patch. Or are you still claiming there is a benefit?\n> >\n> > They don't look all that sane to me - isn't that way lower than one\n> > would\n> > expect?\n> >\n> > Yes, quite disappointing.\n> >\n> > Restricting both client and server to the same four cores, a\n> > thermically challenged older laptop I have around I get 150k tps at\n> > both 10\n> > and 100 clients.\n> >\n> > And you can share the benchmark details? 
Hardware, postgres and pgbench,\n> > please?\n> >\n> >\n> > Either way, I'd not expect to see any GetSnapshotData() scalability\n> > effects to\n> > show up on an \"Intel® Core™ i5-8250U CPU Quad Core\" - there's just\n> > not enough\n> > concurrency.\n> >\n> > This means that our customers will not see any connections scalability\n> > with PG15, using the simplest hardware?\n> >\n>\n> No. It means that on 4-core machine GetSnapshotData() is unlikely to be\n> a bottleneck, because you'll hit various other bottlenecks way earlier.\n>\n> I personally doubt it even makes sense to worry about scaling to this\n> many connections on such tiny system too much.\n>\n\n> >\n> > The correct pieces of these changes seem very unlikely to affect\n> > GetSnapshotData() performance meaningfully.\n> >\n> > To improve something like GetSnapshotData() you first have to come\n> > up with a\n> > workload that shows it being a meaningful part of a profile. Unless\n> > it is,\n> > performance differences are going to just be due to various forms of\n> > noise.\n> >\n> > Actually in the profiles I got with perf, GetSnapShotData() didn't show\n> up.\n> >\n>\n> But that's exactly the point Andres is trying to make - if you don't see\n> GetSnapshotData() in the perf profile, why do you think optimizing it\n> will have any meaningful impact on throughput?\n>\nYou see, I've seen in several places that GetSnapShotData() is the\nbottleneck in scaling connections.\nOne of them, if I remember correctly, was at an IBM in Russia.\nAnother statement occurs in [1][2][3]\nJust because I don't have enough hardware to force GetSnapShotData()\ndoesn't mean optimizing it won't make a difference.\nAnd even on my modest hardware, we've seen gains, small but consistent.\nSo IMHO everyone will benefit, including the small servers.\n\nregards,\nRanier Vilela\n\n[1]\nhttps://techcommunity.microsoft.com/t5/azure-database-for-postgresql/improving-postgres-connection-scalability-snapshots/ba-p/1806462\n[2] 
https://www.postgresql.org/message-id/5198715A.6070808%40vmware.com\n[3]\nhttps://it-events.com/system/attachments/files/000/001/098/original/PostgreSQL_%D0%BC%D0%B0%D1%81%D1%88%D1%82%D0%B0%D0%B1%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5.pdf?1448975472\n\n\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n",
"msg_date": "Sat, 28 May 2022 11:12:47 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "> On 28 May 2022, at 16:12, Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Just because I don't have enough hardware to force GetSnapShotData() doesn't mean optimizing it won't make a difference. \n\nQuoting Andres from upthread:\n\n \"To improve something like GetSnapshotData() you first have to come up with\n a workload that shows it being a meaningful part of a profile. Unless it\n is, performance differences are going to just be due to various forms of\n noise.\"\n\nIf you think this is a worthwhile improvement, you need to figure out a way to\nreliably test it in order to prove it.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sat, 28 May 2022 17:17:46 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "On 5/28/22 16:12, Ranier Vilela wrote:\n> Em sáb., 28 de mai. de 2022 às 09:00, Tomas Vondra\n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> escreveu:\n> \n> On 5/28/22 02:15, Ranier Vilela wrote:\n> >\n> >\n> > Em sex., 27 de mai. de 2022 às 18:08, Andres Freund\n> <andres@anarazel.de <mailto:andres@anarazel.de>\n> > <mailto:andres@anarazel.de <mailto:andres@anarazel.de>>> escreveu:\n> >\n> > Hi,\n> >\n> > On 2022-05-27 03:30:46 +0200, Tomas Vondra wrote:\n> > > On 5/27/22 02:11, Ranier Vilela wrote:\n> > > > ./pgbench -M prepared -c $conns -j $conns -T 60 -S -n -U\n> postgres\n> > > >\n> > > > pgbench (15beta1)\n> > > > transaction type: <builtin: select only>\n> > > > scaling factor: 1\n> > > > query mode: prepared\n> > > > number of clients: 100\n> > > > number of threads: 100\n> > > > maximum number of tries: 1\n> > > > duration: 60 s\n> > > >\n> > > > conns tps head tps patched\n> > > >\n> > > > 1 17126.326108 17792.414234\n> > > > 10 82068.123383 82468.334836\n> > > > 50 73808.731404 74678.839428\n> > > > 80 73290.191713 73116.553986\n> > > > 90 67558.483043 68384.906949\n> > > > 100 65960.982801 66997.793777\n> > > > 200 62216.011998 62870.243385\n> > > > 300 62924.225658 62796.157548\n> > > > 400 62278.099704 63129.555135\n> > > > 500 63257.930870 62188.825044\n> > > > 600 61479.890611 61517.913967\n> > > > 700 61139.354053 61327.898847\n> > > > 800 60833.663791 61517.913967\n> > > > 900 61305.129642 61248.336593\n> > > > 1000 60990.918719 61041.670996\n> > > >\n> > >\n> > > These results look much saner, but IMHO it also does not\n> show any\n> > clear\n> > > benefit of the patch. 
Or are you still claiming there is a\n> benefit?\n> >\n> > They don't look all that sane to me - isn't that way lower\n> than one\n> > would\n> > expect?\n> >\n> > Yes, quite disappointing.\n> >\n> > Restricting both client and server to the same four cores, a\n> > thermically challenged older laptop I have around I get 150k\n> tps at\n> > both 10\n> > and 100 clients.\n> >\n> > And you can share the benchmark details? Hardware, postgres and\n> pgbench,\n> > please?\n> >\n> >\n> > Either way, I'd not expect to see any GetSnapshotData()\n> scalability\n> > effects to\n> > show up on an \"Intel® Core™ i5-8250U CPU Quad Core\" - there's just\n> > not enough\n> > concurrency.\n> >\n> > This means that our customers will not see any connections scalability\n> > with PG15, using the simplest hardware?\n> >\n> \n> No. It means that on 4-core machine GetSnapshotData() is unlikely to be\n> a bottleneck, because you'll hit various other bottlenecks way earlier.\n> \n> I personally doubt it even makes sense to worry about scaling to this\n> many connections on such tiny system too much.\n> \n> \n> >\n> > The correct pieces of these changes seem very unlikely to affect\n> > GetSnapshotData() performance meaningfully.\n> >\n> > To improve something like GetSnapshotData() you first have to come\n> > up with a\n> > workload that shows it being a meaningful part of a profile.\n> Unless\n> > it is,\n> > performance differences are going to just be due to various\n> forms of\n> > noise.\n> >\n> > Actually in the profiles I got with perf, GetSnapShotData() didn't\n> show up.\n> >\n> \n> But that's exactly the point Andres is trying to make - if you don't see\n> GetSnapshotData() in the perf profile, why do you think optimizing it\n> will have any meaningful impact on throughput?\n> \n> You see, I've seen in several places that GetSnapShotData() is the\n> bottleneck in scaling connections.\n> One of them, if I remember correctly, was at an IBM in Russia.\n> Another statement occurs in 
[1][2][3]\n\nNo one is claiming GetSnapshotData() can't be a bottleneck on systems\nwith many cores. That's certainly possible, which is why e.g. Andres\nspent a lot of time optimizing for that case.\n\nBut that's what we're arguing about. You're trying to convince us that\nyour patch will improve things, and you're supporting that by numbers\nfrom a machine that is unlikely to be hitting this bottleneck.\n\n> Just because I don't have enough hardware to force GetSnapShotData()\n> doesn't mean optimizing it won't make a difference.\n\nWell, the question is if it actually optimizes things. Maybe it does,\nwhich would be great, but people in this thread (including me) seem to\nbe fairly skeptical about that claim, because the results are frankly\nentirely unconvincing.\n\nI doubt we'll just accept changes in such sensitive places without\nresults from a relevant machine. Maybe if there was a clear agreement\nit's a win, but that's not the case here.\n\n\n> And even on my modest hardware, we've seen gains, small but consistent.\n> So IMHO everyone will benefit, including the small servers.\n> \n\nNo, we haven't seen any convincing gains. I've tried to explain multiple\ntimes that the results you've shared are not showing any clear\nimprovement, due to only having one run for each client count (which\nmeans there's a lot of noise), impact of binary layout in different\nbuilds, etc. You've ignored all of that, so instead of repeating myself,\nI did a simple benchmark on my two machines:\n\n1) i5-2500k / 4 cores and 8GB RAM (so similar to what you have)\n\n2) 2x e5-2620v3 / 16/32 cores, 64GB RAM (so somewhat bigger)\n\nand I tested 1, 2, 5, 10, 50, 100, ...., 1000 clients using the same\nbenchmark as you (pgbench -S -M prepared ... ). I did 10 runs\nfor each client count, to calculate median which evens out the noise.\nAnd for fun I tried this with gcc 9.3, 10.3 and 11.2. 
The script and\nresults from both machines are attached.\n\nThe results from xeon and gcc 11.2 look like this:\n\n clients master patched diff\n ---------------------------------------\n 1 46460 44936 97%\n 2 87486 84746 97%\n 5 199102 192169 97%\n 10 344458 339403 99%\n 20 515257 512513 99%\n 30 528675 525467 99%\n 40 592761 594384 100%\n 50 694635 706193 102%\n 100 643950 655238 102%\n 200 690133 696815 101%\n 300 670403 677818 101%\n 400 678573 681387 100%\n 500 665349 678722 102%\n 600 666028 670915 101%\n 700 662316 662511 100%\n 800 647922 654745 101%\n 900 650274 654698 101%\n 1000 644482 649332 101%\n\nPlease, explain to me how this shows consistent measurable improvement?\n\nThe standard deviation is roughly 1.5% on average, and the difference is\nwell within that range. Even if there was a tiny improvement for the\nhigh client counts, no one sane will run with that many clients, because\nthe throughput peaks at ~50 clients. So even if you gain 1% with 500\nclients, it's still less than with 50 clients. If anything, this shows\nregression for lower client counts.\n\nFWIW this entirely ignores the question is this benchmark even hits the\nbottleneck this patch aims to improve. Also, there's the question of\ncorrectness, and I'd bet Andres is right getting snapshot without\nholding ProcArrayLock is busted.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 29 May 2022 18:00:14 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Em sáb., 28 de mai. de 2022 às 09:35, Tomas Vondra <\ntomas.vondra@enterprisedb.com> escreveu:\n\n> On 5/28/22 02:36, Ranier Vilela wrote:\n> > Em sex., 27 de mai. de 2022 às 18:22, Andres Freund <andres@anarazel.de\n> > <mailto:andres@anarazel.de>> escreveu:\n> >\n> > Hi,\n> >\n> > On 2022-05-27 10:35:08 -0300, Ranier Vilela wrote:\n> > > Em qui., 26 de mai. de 2022 às 22:30, Tomas Vondra <\n> > > tomas.vondra@enterprisedb.com\n> > <mailto:tomas.vondra@enterprisedb.com>> escreveu:\n> > >\n> > > > On 5/27/22 02:11, Ranier Vilela wrote:\n> > > > >\n> > > > > ...\n> > > > >\n> > > > > Here the results with -T 60:\n> > > >\n> > > > Might be a good idea to share your analysis / interpretation of\n> the\n> > > > results, not just the raw data. After all, the change is being\n> > proposed\n> > > > by you, so do you think this shows the change is beneficial?\n> > > >\n> > > I think so, but the expectation has diminished.\n> > > I expected that the more connections, the better the performance.\n> > > And for both patch and head, this doesn't happen in tests.\n> > > Performance degrades with a greater number of connections.\n> >\n> > Your system has four CPUs. Once they're all busy, adding more\n> > connections\n> > won't improve performance. It'll just add more and more context\n> > switching,\n> > cache misses, and make the OS scheduler do more work.\n> >\n> > conns tps head\n> > 10 82365.634750\n> > 50 74593.714180\n> > 80 69219.756038\n> > 90 67419.574189\n> > 100 66613.771701\n> > Yes it is quite disappointing that with 100 connections, tps loses to 10\n> > connections.\n> >\n>\n> IMO that's entirely expected on a system with only 4 cores. Increasing\n> the number of connections inevitably means more overhead (you have to\n> track/manage more stuff). And at some point the backends start competing\n> for L2/L3 caches, context switches are not free either, etc. 
So once you\n> cross ~2-3x the number of cores, you should expect this.\n>\n> This behavior is natural/inherent, it's unlikely to go away, and it's\n> one of the reasons why we recommend not to use too many connections. If\n> you try to maximize throughput, just don't do that. Or just use machine\n> with more cores.\n>\n> >\n> > > GetSnapShowData() isn't a bottleneck?\n> >\n> > I'd be surprised if it showed up in a profile on your machine with\n> that\n> > workload in any sort of meaningful way. The snapshot reuse logic\n> > will always\n> > work - because there are no writes - and thus the only work that\n> > needs to be\n> > done is to acquire the ProcArrayLock briefly. And because there is\n> > only a\n> > small number of cores, contention on the cacheline for that isn't a\n> > problem.\n> >\n> > Thanks for sharing this.\n> >\n> >\n> >\n> >\n> > > > These results look much saner, but IMHO it also does not show\n> > any clear\n> > > > benefit of the patch. Or are you still claiming there is a\n> benefit?\n> > > >\n> > > We agree that they are micro-optimizations. However, I think they\n> > should be\n> > > considered micro-optimizations in inner loops, because all in\n> > procarray.c is\n> > > a hotpath.\n> >\n> > As explained earlier, I don't agree that they optimize anything -\n> you're\n> > making some of the scalability behaviour *worse*, if it's changed at\n> > all.\n> >\n> >\n> > > The first objective, I believe, was achieved, with no performance\n> > > regression.\n> > > I agree, the gains are small, by the tests done.\n> >\n> > There are no gains.\n> >\n> > IMHO, I must disagree.\n> >\n>\n> You don't have to, really. 
What you should do is showing results\n> demonstrating the claimed gains, and so far you have not done that.\n>\n> I don't want to be rude, but so far you've shown results from a\n> benchmark testing fork(), due to only running 10 transactions per\n> client, and then results from a single run for each client count (which\n> doesn't really show any gains either, and is so noisy).\n>\n> As mentioned GetSnapshotData() is not even in perf profile, so why would\n> the patch even make a difference?\n>\n> You've also claimed it helps generating better code on older compilers,\n> but you've never supported that with any evidence.\n>\n>\n> Maybe there is an improvement - show us. Do a benchmark with more runs,\n> to average-out the noise. Calculate VAR/STDEV to show how variable the\n> results are. Use that to compare results and decide if there is an\n> improvement. Also, keep in mind binary layout matters [1].\n>\nI redid the benchmark with a better machine:\nIntel i7-10510U\nRAM 8GB\nSSD 512GB\nLinux Ubuntu 64 bits\n\nAll files are attached, including the raw data of the results.\nI did the calculations as requested.\nBut a quick average of the 10 benchmarks, done resulted in 10,000 tps more.\nNot bad, for a simple patch, made entirely of micro-optimizations.\n\nResults attached.\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 29 May 2022 14:26:44 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "On 2022-05-29 18:00:14 +0200, Tomas Vondra wrote:\n> Also, there's the question of correctness, and I'd bet Andres is right\n> getting snapshot without holding ProcArrayLock is busted.\n\nUnless there's some actual analysis of this by Rainier, I'm just going to\nignore this thread going forward. It's pointless to invest time when\neverything we say is just ignored.\n\n\n",
"msg_date": "Sun, 29 May 2022 11:21:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "On 5/29/22 19:26, Ranier Vilela wrote:\n> ...\n> I redid the benchmark with a better machine:\n> Intel i7-10510U\n> RAM 8GB\n> SSD 512GB\n> Linux Ubuntu 64 bits\n> \n> All files are attached, including the raw data of the results.\n> I did the calculations as requested.\n> But a quick average of the 10 benchmarks, done resulted in 10,000 tps more.\n> Not bad, for a simple patch, made entirely of micro-optimizations.\n> \n\nI am a bit puzzled by the calculations.\n\nIt seems you somehow sum the differences for each run, and then average\nthat over all the runs. So, something like\n\n SELECT avg(delta_tps) FROM (\n SELECT run, SUM(patched_tps - master_tps) AS delta_tps\n FROM results GROUP BY run\n ) foo;\n\nThat's certainly \"unorthodox\" way to evaluate the results, because it\nmixes results for different client counts. That's certainly not what I\nsuggested, and it's a pretty useless view on the data, as it obfuscates\nhow throughput depends on the client count.\n\nAnd no, the resulting 10k does not mean you've \"gained\" 10k tps anywhere\n- none of the \"diff\" values is anywhere close to that value. If you\ntested more client counts, you'd probably get bigger difference.\nCompared to the \"sum(tps)\" for each run, it's like 0.8% difference. But\neven that is entirely useless, due to mixing different client counts.\n\nI'm sorry, but this is so silly it's hard to even explain why ...\n\n\nWhat I meant is calculating median for each client count, so for example\nfor the master branch you get 10 values for 1 client\n\n 38820 39245 39773 39597 39301 39442 39379 39622 38909 38454\n\nand if you calculate median, you'll get 39340 (and stdev 411). And same\nfor the other client counts, etc. 
If you do that, you'll get this:\n\n clients master patched diff\n ------------------------------------\n 1 39340 40173 2.12%\n 10 132462 134274 1.37%\n 50 115669 116575 0.78%\n 100 97931 98816 0.90%\n 200 88912 89660 0.84%\n 300 87879 88636 0.86%\n 400 87721 88219 0.57%\n 500 87267 88078 0.93%\n 600 87317 87781 0.53%\n 700 86907 87603 0.80%\n 800 86852 87364 0.59%\n 900 86578 87173 0.69%\n 1000 86481 86969 0.56%\n\nHow exactly this improves scalability is completely unclear to me.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 29 May 2022 22:02:32 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Em dom., 29 de mai. de 2022 às 15:21, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> On 2022-05-29 18:00:14 +0200, Tomas Vondra wrote:\n> > Also, there's the question of correctness, and I'd bet Andres is right\n> > getting snapshot without holding ProcArrayLock is busted.\n>\n> Unless there's some actual analysis of this by Rainier, I'm just going to\n> ignore this thread going forward. It's pointless to invest time when\n> everything we say is just ignored.\n>\nSorry, just not my intention to ignore this important point.\nOf course, any performance gain is good, but robustness comes first.\n\nAs soon as I have some time.\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 29 May 2022 17:10:05 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Em dom., 29 de mai. de 2022 às 17:10, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em dom., 29 de mai. de 2022 às 15:21, Andres Freund <andres@anarazel.de>\n> escreveu:\n>\n>> On 2022-05-29 18:00:14 +0200, Tomas Vondra wrote:\n>> > Also, there's the question of correctness, and I'd bet Andres is right\n>> > getting snapshot without holding ProcArrayLock is busted.\n>>\n>> Unless there's some actual analysis of this by Rainier, I'm just going to\n>> ignore this thread going forward. It's pointless to invest time when\n>> everything we say is just ignored.\n>>\n> Sorry, just not my intention to ignore this important point.\n> Of course, any performance gain is good, but robustness comes first.\n>\n> As soon as I have some time.\n>\nI redid the benchmarks, with getting a snapshot with holding ProcArrayLock.\n\nAverage Results\n\n\n\n\nConnections:\ntps head tps patch diff\n1 39196,3088985 39858,0207936 661,711895100008 101,69%\n2 65050,8643819 65245,9852367 195,1208548 100,30%\n5 91486,0298359 91862,9026528 376,872816899995 100,41%\n10 131318,0774955 131547,1404573 229,062961799995 100,17%\n50 116531,2334144 116687,0325522 155,799137800001 100,13%\n100 98969,4650449 98808,6778717 -160,787173199991 99,84%\n200 89514,5238649 89463,6196075 -50,904257400005 99,94%\n300 88426,3612183 88457,2695151 30,9082968000002 100,03%\n400 88078,1686912 88338,2859163 260,117225099995 100,30%\n500 87791,1620039 88074,3418504 283,179846500003 100,32%\n600 87552,3343394 87930,8645184 378,530178999994 100,43%\n1000 86538,3772895 86771,1946099 232,817320400005 100,27%\navg 89204,4088731917 89420,444631825 1981,0816042 100,24%\nFor clients with 1 connections, the results are good.\nBut for clients with 100 and 200 connections, the results are not good.\nI can't say why these two tests were so bad.\nBecause, 100 and 200 results, I'm not sure if this should go ahead, if it's\nworth the effort.\n\nAttached the results files and calc plan.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 31 May 2022 11:36:28 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "On 5/31/22 16:36, Ranier Vilela wrote:\n> Em dom., 29 de mai. de 2022 às 17:10, Ranier Vilela <ranier.vf@gmail.com\n> <mailto:ranier.vf@gmail.com>> escreveu:\n> \n> Em dom., 29 de mai. de 2022 às 15:21, Andres Freund\n> <andres@anarazel.de <mailto:andres@anarazel.de>> escreveu:\n> \n> On 2022-05-29 18:00:14 +0200, Tomas Vondra wrote:\n> > Also, there's the question of correctness, and I'd bet Andres\n> is right\n> > getting snapshot without holding ProcArrayLock is busted.\n> \n> Unless there's some actual analysis of this by Rainier, I'm just\n> going to\n> ignore this thread going forward. It's pointless to invest time when\n> everything we say is just ignored.\n> \n> Sorry, just not my intention to ignore this important point.\n> Of course, any performance gain is good, but robustness comes first.\n> \n> As soon as I have some time.\n> \n> I redid the benchmarks, with getting a snapshot with holding ProcArrayLock.\n> \n> Average Results\n> \t\n> \t\n> \t\n> \t\n> Connections: \n> \ttps head \ttps patch \tdiff \t\n> 1 \t39196,3088985 \t39858,0207936 \t661,711895100008 \t101,69%\n> 2 \t65050,8643819 \t65245,9852367 \t195,1208548 \t100,30%\n> 5 \t91486,0298359 \t91862,9026528 \t376,872816899995 \t100,41%\n> 10 \t131318,0774955 \t131547,1404573 \t229,062961799995 \t100,17%\n> 50 \t116531,2334144 \t116687,0325522 \t155,799137800001 \t100,13%\n> 100 \t98969,4650449 \t98808,6778717 \t-160,787173199991 \t99,84%\n> 200 \t89514,5238649 \t89463,6196075 \t-50,904257400005 \t99,94%\n> 300 \t88426,3612183 \t88457,2695151 \t30,9082968000002 \t100,03%\n> 400 \t88078,1686912 \t88338,2859163 \t260,117225099995 \t100,30%\n> 500 \t87791,1620039 \t88074,3418504 \t283,179846500003 \t100,32%\n> 600 \t87552,3343394 \t87930,8645184 \t378,530178999994 \t100,43%\n> 1000 \t86538,3772895 \t86771,1946099 \t232,817320400005 \t100,27%\n> avg \t89204,4088731917 \t89420,444631825 \t1981,0816042 \t100,24%\n> \n> \n> For clients with 1 connections, the results are good.\n\nIsn't 
that a bit strange, considering the aim of this patch was\nscalability? Which should improve higher client counts in the first place.\n\n> But for clients with 100 and 200 connections, the results are not good.\n> I can't say why these two tests were so bad.\n> Because, 100 and 200 results, I'm not sure if this should go ahead, if\n> it's worth the effort.\n> \n\nI'd argue this is either just noise, and there's no actual difference.\nThis could be verified by some sort of statistical testing (e.g. the\nwell known t-test).\n\nAnother option is that this is simply due to differences in binary\nlayout - this can result in small differences (easily 1-2%) that are\ncompletely unrelated to what the patch does. This is exactly what the\n\"stabilizer\" talk I mentioned a couple days ago was about.\n\nFWIW, when a patch improves scalability, the improvement usually\nincreases with the number of clients. So you'd see no/small improvement\nfor 10 clients, 100 clients would be improved more, 200 more, etc. We\nsee nothing like that here. So either the patch does not really improve\nanything, or perhaps the benchmark doesn't even hit the bottleneck the\npatch is meant to improve (which was already suggested in this thread\nrepeatedly).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 31 May 2022 20:44:26 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "On 5/31/22 11:44, Tomas Vondra wrote:\n> I'd argue this is either just noise, and there's no actual difference.\n> This could be verified by some sort of statistical testing (e.g. the\n> well known t-test).\n\nGiven the conversation so far, I'll go ahead and mark this Returned with\nFeedback. Specifically, this patch would need hard statistical proof\nthat it's having a positive effect.\n\n--Jacob\n\n\n",
"msg_date": "Mon, 1 Aug 2022 14:51:48 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
},
{
"msg_contents": "Em seg., 1 de ago. de 2022 às 18:51, Jacob Champion <jchampion@timescale.com>\nescreveu:\n\n> On 5/31/22 11:44, Tomas Vondra wrote:\n> > I'd argue this is either just noise, and there's no actual difference.\n> > This could be verified by some sort of statistical testing (e.g. the\n> > well known t-test).\n>\n> Given the conversation so far, I'll go ahead and mark this Returned with\n> Feedback. Specifically, this patch would need hard statistical proof\n> that it's having a positive effect.\n>\nI think that the contrary opinions, the little proven benefit and the lack\nof enthusiasm in changing procarray.c,\nI believe it is best to reject it.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 1 Aug 2022 20:27:03 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improving connection scalability\n (src/backend/storage/ipc/procarray.c)"
}
]
[
{
"msg_contents": "Hi,\nPlease see attached for enhancement to COPY command progress.\n\nThe added status column would allow users to get the status of the most\nrecent COPY command.\n\nBelow is sample output.\n\nThanks\n\nyugabyte=# SELECT relid::regclass, command, status,\nyugabyte-# type, bytes_processed, bytes_total,\nyugabyte-# tuples_processed, tuples_excluded FROM\npg_stat_progress_copy;\n relid | command | status | type | bytes_processed | bytes_total\n| tuples_processed | tuples_excluded\n----------+-----------+--------+------+-----------------+-------------+------------------+-----------------\n copy_tab | COPY FROM | PASS | FILE | 152 | 152\n| 12 | 0\n(1 row)",
"msg_date": "Tue, 24 May 2022 10:18:49 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "adding status for COPY progress report"
},
{
"msg_contents": "Hi,\nHere is the updated patch.\n\nCheers\n\nOn Tue, May 24, 2022 at 10:18 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Hi,\n> Please see attached for enhancement to COPY command progress.\n>\n> The added status column would allow users to get the status of the most\n> recent COPY command.\n>\n> Below is sample output.\n>\n> Thanks\n>\n> yugabyte=# SELECT relid::regclass, command, status,\n> yugabyte-# type, bytes_processed, bytes_total,\n> yugabyte-# tuples_processed, tuples_excluded FROM pg_stat_progress_copy;\n> relid | command | status | type | bytes_processed | bytes_total | tuples_processed | tuples_excluded\n> ----------+-----------+--------+------+-----------------+-------------+------------------+-----------------\n> copy_tab | COPY FROM | PASS | FILE | 152 | 152 | 12 | 0\n> (1 row)\n>\n>",
"msg_date": "Tue, 24 May 2022 11:22:21 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: adding status for COPY progress report"
},
{
"msg_contents": "On Tue, 24 May 2022 at 19:13, Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n> Please see attached for enhancement to COPY command progress.\n>\n> The added status column would allow users to get the status of the most recent COPY command.\n\nI fail to see the merit of retaining completed progress reporting\ncommands in their views after completion, other than making the\nbehaviour of the pg_stat_progress-views much more complicated and\nadding overhead in places where we want the system to have as little\noverhead as possible.\n\nTrying to get the status of a COPY command after it finished on a\ndifferent connection seems weird, as that other connection is likely\nto have already disconnected / started another task. To be certain\nthat a backend can see the return status of the COPY command, you'd\nhave to be certain that the connection doesn't run any other\n_progress-able commands in the following seconds / minutes, which\nimplies control over the connection, which means you already have\naccess to the resulting status of your COPY command.\n\nRegarding the patch: I really do not like that this leaks entries into\nall _progress views: I get garbage data from e.g. the _create_index\nand _copy views when VACUUM is running, etc, because you removed the\nfilter on cmdtype.\nAlso, the added fields in CopyToStateData / CopyFromStateData seem\nuseless when a pgstat_progress_update_param in the right place should\nsuffice.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Tue, 24 May 2022 21:37:46 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: adding status for COPY progress report"
},
{
"msg_contents": "On Tue, May 24, 2022 at 12:37 PM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n\n> On Tue, 24 May 2022 at 19:13, Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > Hi,\n> > Please see attached for enhancement to COPY command progress.\n> >\n> > The added status column would allow users to get the status of the most\n> recent COPY command.\n>\n> I fail to see the merit of retaining completed progress reporting\n> commands in their views after completion, other than making the\n> behaviour of the pg_stat_progress-views much more complicated and\n> adding overhead in places where we want the system to have as little\n> overhead as possible.\n>\n> Trying to get the status of a COPY command after it finished on a\n> different connection seems weird, as that other connection is likely\n> to have already disconnected / started another task. To be certain\n> that a backend can see the return status of the COPY command, you'd\n> have to be certain that the connection doesn't run any other\n> _progress-able commands in the following seconds / minutes, which\n> implies control over the connection, which means you already have\n> access to the resulting status of your COPY command.\n>\n> Regarding the patch: I really do not like that this leaks entries into\n> all _progress views: I get garbage data from e.g. 
the _create_index\n> and _copy views when VACUUM is running, etc, because you removed the\n> filter on cmdtype.\n> Also, the added fields in CopyToStateData / CopyFromStateData seem\n> useless when a pgstat_progress_update_param in the right place should\n> suffice.\n>\n> Kind regards,\n>\n> Matthias van de Meent\n>\nHi,\nFor #2 above, can you let me know where the pgstat_progress_update_param()\ncall(s) should be added ?\n\nIn my patch, pgstat_progress_update_param() is called from error callback\nand EndCopy().\n\nFor #1, if I use param18 (which is not used by other views), would that be\nbetter ?\n\nThanks",
"msg_date": "Tue, 24 May 2022 13:17:58 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: adding status for COPY progress report"
},
{
"msg_contents": "On Tue, 24 May 2022 at 22:12, Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> On Tue, May 24, 2022 at 12:37 PM Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n>>\n>> On Tue, 24 May 2022 at 19:13, Zhihong Yu <zyu@yugabyte.com> wrote:\n>> >\n>> > Hi,\n>> > Please see attached for enhancement to COPY command progress.\n>> >\n>> > The added status column would allow users to get the status of the most recent COPY command.\n>>\n>> I fail to see the merit of retaining completed progress reporting\n>> commands in their views after completion, other than making the\n>> behaviour of the pg_stat_progress-views much more complicated and\n>> adding overhead in places where we want the system to have as little\n>> overhead as possible.\n>>\n>> Trying to get the status of a COPY command after it finished on a\n>> different connection seems weird, as that other connection is likely\n>> to have already disconnected / started another task. To be certain\n>> that a backend can see the return status of the COPY command, you'd\n>> have to be certain that the connection doesn't run any other\n>> _progress-able commands in the following seconds / minutes, which\n>> implies control over the connection, which means you already have\n>> access to the resulting status of your COPY command.\n>>\n>> Regarding the patch: I really do not like that this leaks entries into\n>> all _progress views: I get garbage data from e.g. 
the _create_index\n>> and _copy views when VACUUM is running, etc, because you removed the\n>> filter on cmdtype.\n>> Also, the added fields in CopyToStateData / CopyFromStateData seem\n>> useless when a pgstat_progress_update_param in the right place should\n>> suffice.\n>>\n>> Kind regards,\n>>\n>> Matthias van de Meent\n>\n> Hi,\n> For #2 above, can you let me know where the pgstat_progress_update_param() call(s) should be added ?\n> In my patch, pgstat_progress_update_param() is called from error callback and EndCopy().\n\nIn the places that the patch currently sets cstate->status it could\ninstead directly call pgstat_progress_update_param(..., STATUS_VALUE).\nI'm fairly certain that EndCopy is not called when the error callback\nis called, so status reporting should not be overwritten when\nunconditionally setting the status to OK in EndCopy.\n\n> For #1, if I use param18 (which is not used by other views), would that be better ?\n\nNo:\n\n /*\n- * Report values for only those backends which are running the given\n- * command.\n+ * Report values for only those backends which are running or have run.\n */\n- if (!beentry || beentry->st_progress_command != cmdtype)\n+ if (!beentry || beentry->st_progress_command_target == InvalidOid)\n continue;\n\nThis change removes the filter that ensures that we only return the\nbackends which have a st_progress_command of the correct cmdtype (i.e.\nfor _progress_copy only those that have st_progress_command ==\nPROGRESS_COMMAND_COPY. Without that filter, you'll return all backends\nthat have (or have had) their progress fields set at any point. 
Which\nmeans that the expectation of \"the backends returned by\npg_stat_get_progress_info are those running the requested command\"\nwill be incorrect - you'll violate the contract / documented behaviour\nof the function: \"Returns command progress information for the named\ncommand.\".\n\nThe numerical index of the column thus doesn't matter, what matters is\nthat you want special behaviour for only the COPY progress reporting\nthat doesn't fit with the rest of the progress-reporting\ninfrastructure, and that the patch as-is breaks all progress reporting\nviews.\n\nSidenote: The change is also invalid because the rows that we expect\nto return for pg_stat_progress_basebackup always have\nst_progress_command_target == InvalidOid, so the backends running\nBASE_BACKUP would never be visible with the change as-is. COPY (SELECT\nstuff) also would not show up, because that too reports a\ncommand_target of InvalidOid.\n\nEither way, I don't think that this idea is worth pursuing: the\nprogress views are explicitly there to show the progress of currently\nactive backends, and not to show the last progress state of backends\nthat at some point ran a progress-reporting-enabled command.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Tue, 24 May 2022 23:12:17 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: adding status for COPY progress report"
},
{
"msg_contents": "On Tue, May 24, 2022 at 2:12 PM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n\n> On Tue, 24 May 2022 at 22:12, Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > On Tue, May 24, 2022 at 12:37 PM Matthias van de Meent <\n> boekewurm+postgres@gmail.com> wrote:\n> >>\n> >> On Tue, 24 May 2022 at 19:13, Zhihong Yu <zyu@yugabyte.com> wrote:\n> >> >\n> >> > Hi,\n> >> > Please see attached for enhancement to COPY command progress.\n> >> >\n> >> > The added status column would allow users to get the status of the\n> most recent COPY command.\n> >>\n> >> I fail to see the merit of retaining completed progress reporting\n> >> commands in their views after completion, other than making the\n> >> behaviour of the pg_stat_progress-views much more complicated and\n> >> adding overhead in places where we want the system to have as little\n> >> overhead as possible.\n> >>\n> >> Trying to get the status of a COPY command after it finished on a\n> >> different connection seems weird, as that other connection is likely\n> >> to have already disconnected / started another task. To be certain\n> >> that a backend can see the return status of the COPY command, you'd\n> >> have to be certain that the connection doesn't run any other\n> >> _progress-able commands in the following seconds / minutes, which\n> >> implies control over the connection, which means you already have\n> >> access to the resulting status of your COPY command.\n> >>\n> >> Regarding the patch: I really do not like that this leaks entries into\n> >> all _progress views: I get garbage data from e.g. 
the _create_index\n> >> and _copy views when VACUUM is running, etc, because you removed the\n> >> filter on cmdtype.\n> >> Also, the added fields in CopyToStateData / CopyFromStateData seem\n> >> useless when a pgstat_progress_update_param in the right place should\n> >> suffice.\n> >>\n> >> Kind regards,\n> >>\n> >> Matthias van de Meent\n> >\n> > Hi,\n> > For #2 above, can you let me know where the\n> pgstat_progress_update_param() call(s) should be added ?\n> > In my patch, pgstat_progress_update_param() is called from error\n> callback and EndCopy().\n>\n> In the places that the patch currently sets cstate->status it could\n> instead directly call pgstat_progress_update_param(..., STATUS_VALUE).\n> I'm fairly certain that EndCopy is not called when the error callback\n> is called, so status reporting should not be overwritten when\n> unconditionally setting the status to OK in EndCopy.\n>\n> > For #1, if I use param18 (which is not used by other views), would that\n> be better ?\n>\n> No:\n>\n> /*\n> - * Report values for only those backends which are running the\n> given\n> - * command.\n> + * Report values for only those backends which are running or\n> have run.\n> */\n> - if (!beentry || beentry->st_progress_command != cmdtype)\n> + if (!beentry || beentry->st_progress_command_target == InvalidOid)\n> continue;\n>\n> This change removes the filter that ensures that we only return the\n> backends which have a st_progress_command of the correct cmdtype (i.e.\n> for _progress_copy only those that have st_progress_command ==\n> PROGRESS_COMMAND_COPY. Without that filter, you'll return all backends\n> that have (or have had) their progress fields set at any point. 
Which\n> means that the expectation of \"the backends returned by\n> pg_stat_get_progress_info are those running the requested command\"\n> will be incorrect - you'll violate the contract / documented behaviour\n> of the function: \"Returns command progress information for the named\n> command.\".\n>\n> The numerical index of the column thus doesn't matter, what matters is\n> that you want special behaviour for only the COPY progress reporting\n> that doesn't fit with the rest of the progress-reporting\n> infrastructure, and that the patch as-is breaks all progress reporting\n> views.\n>\n> Sidenote: The change is also invalid because the rows that we expect\n> to return for pg_stat_progress_basebackup always have\n> st_progress_command_target == InvalidOid, so the backends running\n> BASE_BACKUP would never be visible with the change as-is. COPY (SELECT\n> stuff) also would not show up, because that too reports a\n> command_target of InvalidOid.\n>\n> Either way, I don't think that this idea is worth pursuing: the\n> progress views are explicitly there to show the progress of currently\n> active backends, and not to show the last progress state of backends\n> that at some point ran a progress-reporting-enabled command.\n>\n> Kind regards,\n>\n> Matthias van de Meent\n>\nHi,\nThanks for the comment.\nw.r.t. 
`Returns command progress information for the named command`,\nhow about introducing PROGRESS_COMMAND_COPY_DONE which signifies that\nPROGRESS_COMMAND_COPY was the previous command ?\n\nIn pgstat_progress_end_command():\n\n    if (beentry->st_progress_command == PROGRESS_COMMAND_COPY)\n        beentry->st_progress_command = PROGRESS_COMMAND_COPY_DONE;\n    else\n        beentry->st_progress_command = PROGRESS_COMMAND_INVALID;\n    beentry->st_progress_command_target = InvalidOid;\n\nIn pg_stat_get_progress_info():\n\n        if (!beentry || (beentry->st_progress_command != cmdtype &&\n            (cmdtype == PROGRESS_COMMAND_COPY\n             && beentry->st_progress_command !=\nPROGRESS_COMMAND_COPY_DONE)))\n            continue;\n\nCheers",
"msg_date": "Tue, 24 May 2022 15:02:13 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: adding status for COPY progress report"
},
{
"msg_contents": "On Tue, May 24, 2022 at 3:02 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Tue, May 24, 2022 at 2:12 PM Matthias van de Meent <\n> boekewurm+postgres@gmail.com> wrote:\n>\n>> On Tue, 24 May 2022 at 22:12, Zhihong Yu <zyu@yugabyte.com> wrote:\n>> >\n>> > On Tue, May 24, 2022 at 12:37 PM Matthias van de Meent <\n>> boekewurm+postgres@gmail.com> wrote:\n>> >>\n>> >> On Tue, 24 May 2022 at 19:13, Zhihong Yu <zyu@yugabyte.com> wrote:\n>> >> >\n>> >> > Hi,\n>> >> > Please see attached for enhancement to COPY command progress.\n>> >> >\n>> >> > The added status column would allow users to get the status of the\n>> most recent COPY command.\n>> >>\n>> >> I fail to see the merit of retaining completed progress reporting\n>> >> commands in their views after completion, other than making the\n>> >> behaviour of the pg_stat_progress-views much more complicated and\n>> >> adding overhead in places where we want the system to have as little\n>> >> overhead as possible.\n>> >>\n>> >> Trying to get the status of a COPY command after it finished on a\n>> >> different connection seems weird, as that other connection is likely\n>> >> to have already disconnected / started another task. To be certain\n>> >> that a backend can see the return status of the COPY command, you'd\n>> >> have to be certain that the connection doesn't run any other\n>> >> _progress-able commands in the following seconds / minutes, which\n>> >> implies control over the connection, which means you already have\n>> >> access to the resulting status of your COPY command.\n>> >>\n>> >> Regarding the patch: I really do not like that this leaks entries into\n>> >> all _progress views: I get garbage data from e.g. 
the _create_index\n>> >> and _copy views when VACUUM is running, etc, because you removed the\n>> >> filter on cmdtype.\n>> >> Also, the added fields in CopyToStateData / CopyFromStateData seem\n>> >> useless when a pgstat_progress_update_param in the right place should\n>> >> suffice.\n>> >>\n>> >> Kind regards,\n>> >>\n>> >> Matthias van de Meent\n>> >\n>> > Hi,\n>> > For #2 above, can you let me know where the\n>> pgstat_progress_update_param() call(s) should be added ?\n>> > In my patch, pgstat_progress_update_param() is called from error\n>> callback and EndCopy().\n>>\n>> In the places that the patch currently sets cstate->status it could\n>> instead directly call pgstat_progress_update_param(..., STATUS_VALUE).\n>> I'm fairly certain that EndCopy is not called when the error callback\n>> is called, so status reporting should not be overwritten when\n>> unconditionally setting the status to OK in EndCopy.\n>>\n>> > For #1, if I use param18 (which is not used by other views), would that\n>> be better ?\n>>\n>> No:\n>>\n>> /*\n>> - * Report values for only those backends which are running the\n>> given\n>> - * command.\n>> + * Report values for only those backends which are running or\n>> have run.\n>> */\n>> - if (!beentry || beentry->st_progress_command != cmdtype)\n>> + if (!beentry || beentry->st_progress_command_target ==\n>> InvalidOid)\n>> continue;\n>>\n>> This change removes the filter that ensures that we only return the\n>> backends which have a st_progress_command of the correct cmdtype (i.e.\n>> for _progress_copy only those that have st_progress_command ==\n>> PROGRESS_COMMAND_COPY. Without that filter, you'll return all backends\n>> that have (or have had) their progress fields set at any point. 
Which\n>> means that the expectation of \"the backends returned by\n>> pg_stat_get_progress_info are those running the requested command\"\n>> will be incorrect - you'll violate the contract / documented behaviour\n>> of the function: \"Returns command progress information for the named\n>> command.\".\n>>\n>> The numerical index of the column thus doesn't matter, what matters is\n>> that you want special behaviour for only the COPY progress reporting\n>> that doesn't fit with the rest of the progress-reporting\n>> infrastructure, and that the patch as-is breaks all progress reporting\n>> views.\n>>\n>> Sidenote: The change is also invalid because the rows that we expect\n>> to return for pg_stat_progress_basebackup always have\n>> st_progress_command_target == InvalidOid, so the backends running\n>> BASE_BACKUP would never be visible with the change as-is. COPY (SELECT\n>> stuff) also would not show up, because that too reports a\n>> command_target of InvalidOid.\n>>\n>> Either way, I don't think that this idea is worth pursuing: the\n>> progress views are explicitly there to show the progress of currently\n>> active backends, and not to show the last progress state of backends\n>> that at some point ran a progress-reporting-enabled command.\n>>\n>> Kind regards,\n>>\n>> Matthias van de Meent\n>>\n> Hi,\n> Thanks for the comment.\n> w.r.t. 
`Returns command progress information for the named command`,\n> how about introducing PROGRESS_COMMAND_COPY_DONE which signifies that\n> PROGRESS_COMMAND_COPY was the previous command ?\n>\n> In pgstat_progress_end_command():\n>\n> if (beentry->st_progress_command == PROGRESS_COMMAND_COPY)\n> beentry->st_progress_command = PROGRESS_COMMAND_COPY_DONE;\n> else\n> beentry->st_progress_command = PROGRESS_COMMAND_INVALID;\n> beentry->st_progress_command_target = InvalidOid;\n>\n> In pg_stat_get_progress_info():\n>\n> if (!beentry || (beentry->st_progress_command != cmdtype &&\n> (cmdtype == PROGRESS_COMMAND_COPY\n> && beentry->st_progress_command !=\n> PROGRESS_COMMAND_COPY_DONE)))\n> continue;\n>\n> Cheers\n>\nHi,\nPatch v3 follows advice from Matthias (status field has been dropped).\n\nThanks",
"msg_date": "Wed, 25 May 2022 01:20:36 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: adding status for COPY progress report"
},
{
"msg_contents": "On Wed, 25 May 2022 at 10:15, Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n> Patch v3 follows advice from Matthias (status field has been dropped).\n\nCould you argue why you think that this should be added to the\npg_stat_progress_copy view? Again, the progress reporting subsystem is\nbuilt to \"report the progress of certain commands during command\nexecution\". Why do you think we need to go further than that and allow\nsome commands to retain their report even after they've finished\nexecuting?\n\nOf note: The contents of >st_progress_param are only defined and\nguaranteed to be consistent when the reporting command is running.\nEven if no other progress-reporting command is running other commands\nor processes in that backend may call functions that update the fields\nwith somewhat arbitrary values when no progress-reporting command is\nactively running, thus corrupting the information for the progress\nreporting view.\n\nCould you please provide some insights on why you think that we should\nchange the progress reporting guts to accomodate something that it was\nnot built for?\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 25 May 2022 12:54:46 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: adding status for COPY progress report"
},
{
"msg_contents": "On Wed, May 25, 2022 at 3:55 AM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n\n> On Wed, 25 May 2022 at 10:15, Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > Hi,\n> > Patch v3 follows advice from Matthias (status field has been dropped).\n>\n> Could you argue why you think that this should be added to the\n> pg_stat_progress_copy view? Again, the progress reporting subsystem is\n> built to \"report the progress of certain commands during command\n> execution\". Why do you think we need to go further than that and allow\n> some commands to retain their report even after they've finished\n> executing?\n>\n> Of note: The contents of >st_progress_param are only defined and\n> guaranteed to be consistent when the reporting command is running.\n> Even if no other progress-reporting command is running other commands\n> or processes in that backend may call functions that update the fields\n> with somewhat arbitrary values when no progress-reporting command is\n> actively running, thus corrupting the information for the progress\n> reporting view.\n>\n> Could you please provide some insights on why you think that we should\n> change the progress reporting guts to accomodate something that it was\n> not built for?\n>\n>\n> Kind regards,\n>\n> Matthias van de Meent\n>\nHi, Matthias:\nWhen I first followed the procedure in\nhttps://paquier.xyz/postgresql-2/postgres-14-monitoring-copy/ , I didn't\nsee the output from the view.\nThis was because the example used 10 rows where the COPY command finishes\nquickly.\nI had to increase the row count in order to see output from the system view.\n\nWith my patch, the user would be able to see the result of COPY command\neven if the duration for command execution is very short.\n\nI made a slight change in patch v4. 
With patch v3, we would see the\nfollowing:\n\n relid | command | status_yb | type | bytes_processed | bytes_total |\ntuples_processed | tuples_excluded\n\n-------+-----------+-----------+------+-----------------+-------------+------------------+-----------------\n - | COPY FROM | PASS | PIPE | 6 | 0 |\n 1 | 0\n\nIt would be desirable to see the relation for the COPY command.\n\nWith the updated patch, I think the interference from other commands in\nprogress reporting has been prevented (see logic inside\npg_stat_get_progress_info).\n\nCheers",
"msg_date": "Wed, 25 May 2022 07:38:17 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: adding status for COPY progress report"
},
{
"msg_contents": "On Wed, 25 May 2022 at 16:32, Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> On Wed, May 25, 2022 at 3:55 AM Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n>>\n>> On Wed, 25 May 2022 at 10:15, Zhihong Yu <zyu@yugabyte.com> wrote:\n>> >\n>> > Hi,\n>> > Patch v3 follows advice from Matthias (status field has been dropped).\n>>\n>> Could you argue why you think that this should be added to the\n>> pg_stat_progress_copy view? Again, the progress reporting subsystem is\n>> built to \"report the progress of certain commands during command\n>> execution\". Why do you think we need to go further than that and allow\n>> some commands to retain their report even after they've finished\n>> executing?\n>>\n>> Of note: The contents of >st_progress_param are only defined and\n>> guaranteed to be consistent when the reporting command is running.\n>> Even if no other progress-reporting command is running other commands\n>> or processes in that backend may call functions that update the fields\n>> with somewhat arbitrary values when no progress-reporting command is\n>> actively running, thus corrupting the information for the progress\n>> reporting view.\n>>\n>> Could you please provide some insights on why you think that we should\n>> change the progress reporting guts to accomodate something that it was\n>> not built for?\n>>\n>>\n>> Kind regards,\n>>\n>> Matthias van de Meent\n>\n> Hi, Matthias:\n> When I first followed the procedure in https://paquier.xyz/postgresql-2/postgres-14-monitoring-copy/ , I didn't see the output from the view.\n> This was because the example used 10 rows where the COPY command finishes quickly.\n> I had to increase the row count in order to see output from the system view.\n>\n> With my patch, the user would be able to see the result of COPY command even if the duration for command execution is very short.\n\nI see that that indeed now happens, but the point of the _progress\n-views is that they show progress on tasks that 
are expected to take a\nvery long time while the connection that initiated the task does not\nreceive any updates. Good examples being REINDEX and CLUSTER, that\nneed to process tables of data (potentially terabytes in size) without\ncompleting or sending meaningful data to the client. To show that\nthere is progress for such long-running tasks the pgstat_progress\nsubsystem was developed so that some long-running tasks now would show\ntheir (lack of) progress.\n\nThe patch you sent, however, is not expected to be updated with\nprogress of the command: it is the final state of the command that\nwon't change. In my view, a backend that finished it's command\nshouldn't be shown in pg_stat_progress -views.\n\nKind regards,\n\nMatthias van de Meent.\n\n\n",
"msg_date": "Wed, 25 May 2022 17:20:30 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: adding status for COPY progress report"
},
{
"msg_contents": "On Wed, May 25, 2022 at 8:20 AM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n\n> On Wed, 25 May 2022 at 16:32, Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > On Wed, May 25, 2022 at 3:55 AM Matthias van de Meent <\n> boekewurm+postgres@gmail.com> wrote:\n> >>\n> >> On Wed, 25 May 2022 at 10:15, Zhihong Yu <zyu@yugabyte.com> wrote:\n> >> >\n> >> > Hi,\n> >> > Patch v3 follows advice from Matthias (status field has been dropped).\n> >>\n> >> Could you argue why you think that this should be added to the\n> >> pg_stat_progress_copy view? Again, the progress reporting subsystem is\n> >> built to \"report the progress of certain commands during command\n> >> execution\". Why do you think we need to go further than that and allow\n> >> some commands to retain their report even after they've finished\n> >> executing?\n> >>\n> >> Of note: The contents of >st_progress_param are only defined and\n> >> guaranteed to be consistent when the reporting command is running.\n> >> Even if no other progress-reporting command is running other commands\n> >> or processes in that backend may call functions that update the fields\n> >> with somewhat arbitrary values when no progress-reporting command is\n> >> actively running, thus corrupting the information for the progress\n> >> reporting view.\n> >>\n> >> Could you please provide some insights on why you think that we should\n> >> change the progress reporting guts to accomodate something that it was\n> >> not built for?\n> >>\n> >>\n> >> Kind regards,\n> >>\n> >> Matthias van de Meent\n> >\n> > Hi, Matthias:\n> > When I first followed the procedure in\n> https://paquier.xyz/postgresql-2/postgres-14-monitoring-copy/ , I didn't\n> see the output from the view.\n> > This was because the example used 10 rows where the COPY command\n> finishes quickly.\n> > I had to increase the row count in order to see output from the system\n> view.\n> >\n> > With my patch, the user would be able to see the result of 
COPY command\n> even if the duration for command execution is very short.\n>\n> I see that that indeed now happens, but the point of the _progress\n> -views is that they show progress on tasks that are expected to take a\n> very long time while the connection that initiated the task does not\n> receive any updates. Good examples being REINDEX and CLUSTER, that\n> need to process tables of data (potentially terabytes in size) without\n> completing or sending meaningful data to the client. To show that\n> there is progress for such long-running tasks the pgstat_progress\n> subsystem was developed so that some long-running tasks now would show\n> their (lack of) progress.\n>\n> The patch you sent, however, is not expected to be updated with\n> progress of the command: it is the final state of the command that\n> won't change. In my view, a backend that finished it's command\n> shouldn't be shown in pg_stat_progress -views.\n>\n> Kind regards,\n>\n> Matthias van de Meent.\n>\n Hi, Matthias:\nThanks for taking time to evaluate my patch.\n\nI understand that pg_stat_progress views should show progress for on-going\noperation.\n\nLet's look at the sequences of user activity for long running COPY command.\nThe user would likely issue queries to pg_stat_progress_copy over time.\nLet's say on Nth invocation, the user sees X tuples copied.\nOn (N+1)st invocation, the view returns nothing.\nThe user knows that the COPY may have completed - but did the operation\nsucceed or end up with some error ?\n\nI would think that the user should be allowed to know the answer to the\nabove question using the same query to pg_stat_progress_copy view.\n\nWhat do you think ?\n\nCheers",
"msg_date": "Wed, 25 May 2022 09:34:51 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: adding status for COPY progress report"
},
{
"msg_contents": "On Wed, May 25, 2022 at 09:34:51AM -0700, Zhihong Yu wrote:\n> Let's look at the sequences of user activity for long running COPY command.\n> The user would likely issue queries to pg_stat_progress_copy over time.\n> Let's say on Nth invocation, the user sees X tuples copied.\n> On (N+1)st invocation, the view returns nothing.\n> The user knows that the COPY may have completed - but did the operation\n> succeed or end up with some error ?\n\nIf I am following this thread correctly and after reading the patch,\nthat's what the status code of the connection issuing the command is\nhere for. You have no guarantee either that the status you are trying\nto store in the progress view is not going to be quickly overwritten\nby a follow-up command, making the window where this information is\navailable very small in most cases, limiting its value. The window\ngets even smaller if the connection that failed the COPY is used in a\nconnection pooler by a different command.\n\nThe changes in pgstat_progress_end_command() and\npg_stat_get_progress_info() update st_progress_command_target\ndepending on the command type involved, breaking the existing contract\nof those routines, particularly the fact that the progress fields\n*should* be reset in an error stack.\n--\nMichael",
"msg_date": "Thu, 26 May 2022 09:51:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: adding status for COPY progress report"
},
{
"msg_contents": "On Wed, May 25, 2022 at 5:51 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, May 25, 2022 at 09:34:51AM -0700, Zhihong Yu wrote:\n> > Let's look at the sequences of user activity for long running COPY\n> command.\n> > The user would likely issue queries to pg_stat_progress_copy over time.\n> > Let's say on Nth invocation, the user sees X tuples copied.\n> > On (N+1)st invocation, the view returns nothing.\n> > The user knows that the COPY may have completed - but did the operation\n> > succeed or end up with some error ?\n>\n> If I am following this thread correctly and after reading the patch,\n> that's what the status code of the connection issuing the command is\n> here for. You have no guarantee either that the status you are trying\n> to store in the progress view is not going to be quickly overwritten\n> by a follow-up command, making the window where this information is\n> available very small in most cases, limiting its value. The window\n> gets even smaller if the connection that failed the COPY is used in a\n> connection pooler by a different command.\n>\n> Hi,\nI tend to think that in case of failed COPY command, the user would spend\nsome time trying to find out\nwhy the COPY command failed. 
The investigation would make the window longer.\n\n\n> The changes in pgstat_progress_end_command() and\n> pg_stat_get_progress_info() update st_progress_command_target\n> depending on the command type involved, breaking the existing contract\n> of those routines, particularly the fact that the progress fields\n> *should* be reset in an error stack.\n>\n\nI searched the code base for how st_progress_command_target is used.\nIn case there is subsequent command following the\nCOPY, st_progress_command_target would be set to the Oid\nof the subsequent command.\nIn case there is no subsequent command following the COPY command, it seems\nleaving st_progress_command_target as\nthe Oid of the COPY command wouldn't hurt.\n\nMaybe you can let me know what side effect not resetting\nst_progress_command_target would have.\n\nAs an alternative, upon seeing PROGRESS_COMMAND_COPY_DONE, we can transfer\nthe value of\nst_progress_command_target to a new field called, say,\nst_progress_command_previous_target (\nand resetting st_progress_command_target as usual).\n\nPlease let me know what you think.\n\nThanks",
"msg_date": "Wed, 25 May 2022 19:40:58 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: adding status for COPY progress report"
},
{
"msg_contents": "Hi,\n\nOn Thu, May 26, 2022 at 11:35 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>> The changes in pgstat_progress_end_command() and\n>> pg_stat_get_progress_info() update st_progress_command_target\n>> depending on the command type involved, breaking the existing contract\n>> of those routines, particularly the fact that the progress fields\n>> *should* be reset in an error stack.\n\n+1 to what Michael said here. I don't think the following changes are\nacceptable:\n\n@@ -106,7 +106,13 @@ pgstat_progress_end_command(void)\n return;\n\n PGSTAT_BEGIN_WRITE_ACTIVITY(beentry);\n- beentry->st_progress_command = PROGRESS_COMMAND_INVALID;\n- beentry->st_progress_command_target = InvalidOid;\n+ if (beentry->st_progress_command == PROGRESS_COMMAND_COPY)\n+ // We want to show the relation for the most recent COPY command\n+ beentry->st_progress_command = PROGRESS_COMMAND_COPY_DONE;\n+ else\n+ {\n+ beentry->st_progress_command = PROGRESS_COMMAND_INVALID;\n+ beentry->st_progress_command_target = InvalidOid;\n+ }\n PGSTAT_END_WRITE_ACTIVITY(beentry);\n }\n\npgstat_progress_end_command() is generic infrastructure and there\nshouldn't be anything COPY-specific there.\n\n> I searched the code base for how st_progress_command_target is used.\n> In case there is subsequent command following the COPY, st_progress_command_target would be set to the Oid\n> of the subsequent command.\n> In case there is no subsequent command following the COPY command, it seems leaving st_progress_command_target as\n> the Oid of the COPY command wouldn't hurt.\n>\n> Maybe you can let me know what side effect not resetting st_progress_command_target would have.\n>\n> As an alternative, upon seeing PROGRESS_COMMAND_COPY_DONE, we can transfer the value of\n> st_progress_command_target to a new field called, say, st_progress_command_previous_target (\n> and resetting st_progress_command_target as usual).\n\nThat doesn't sound like a good idea.\n\nAs others have said, there's no point in 
adding a status field to\npg_stat_progress_copy that only tells whether a COPY is running or\nnot. You can already do that by looking at the output of `select *\nfrom pg_stat_progress_copy`. If the COPY you're interested in is\nrunning, you'll find the corresponding row in the view. The row is\nmade to disappear from the view the instant the COPY finishes, either\nsuccessfully or due to an error. Whichever is the case will be known\nin the connection that initiated the COPY and you may find it in the\nserver log. I don't think we should make Postgres remember anything\nabout that in the shared memory, or at least not with one-off\nadjustments of the shared progress reporting state like in the\nproposed patch.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 May 2022 17:19:47 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: adding status for COPY progress report"
},
{
"msg_contents": "On Tue, May 31, 2022 at 1:20 AM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> Hi,\n>\n> On Thu, May 26, 2022 at 11:35 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >> The changes in pgstat_progress_end_command() and\n> >> pg_stat_get_progress_info() update st_progress_command_target\n> >> depending on the command type involved, breaking the existing contract\n> >> of those routines, particularly the fact that the progress fields\n> >> *should* be reset in an error stack.\n>\n> +1 to what Michael said here. I don't think the following changes are\n> acceptable:\n>\n> @@ -106,7 +106,13 @@ pgstat_progress_end_command(void)\n> return;\n>\n> PGSTAT_BEGIN_WRITE_ACTIVITY(beentry);\n> - beentry->st_progress_command = PROGRESS_COMMAND_INVALID;\n> - beentry->st_progress_command_target = InvalidOid;\n> + if (beentry->st_progress_command == PROGRESS_COMMAND_COPY)\n> + // We want to show the relation for the most recent COPY command\n> + beentry->st_progress_command = PROGRESS_COMMAND_COPY_DONE;\n> + else\n> + {\n> + beentry->st_progress_command = PROGRESS_COMMAND_INVALID;\n> + beentry->st_progress_command_target = InvalidOid;\n> + }\n> PGSTAT_END_WRITE_ACTIVITY(beentry);\n> }\n>\n> pgstat_progress_end_command() is generic infrastructure and there\n> shouldn't be anything COPY-specific there.\n>\n> > I searched the code base for how st_progress_command_target is used.\n> > In case there is subsequent command following the COPY,\n> st_progress_command_target would be set to the Oid\n> > of the subsequent command.\n> > In case there is no subsequent command following the COPY command, it\n> seems leaving st_progress_command_target as\n> > the Oid of the COPY command wouldn't hurt.\n> >\n> > Maybe you can let me know what side effect not resetting\n> st_progress_command_target would have.\n> >\n> > As an alternative, upon seeing PROGRESS_COMMAND_COPY_DONE, we can\n> transfer the value of\n> > st_progress_command_target to a new field called, say,\n> 
st_progress_command_previous_target (\n> > and resetting st_progress_command_target as usual).\n>\n> That doesn't sound like a good idea.\n>\n> As others have said, there's no point in adding a status field to\n> pg_stat_progress_copy that only tells whether a COPY is running or\n> not. You can already do that by looking at the output of `select *\n> from pg_stat_progress_copy`. If the COPY you're interested in is\n> running, you'll find the corresponding row in the view. The row is\n> made to disappear from the view the instance the COPY finishes, either\n> successfully or due to an error. Whichever is the case will be known\n> in the connection that initiated the COPY and you may find it in the\n> server log. I don't think we should make Postgres remember anything\n> about that in the shared memory, or at least not with one-off\n> adjustments of the shared progress reporting state like in the\n> proposed patch.\n>\n> --\n> Thanks, Amit Langote\n> EDB: http://www.enterprisedb.com\n\n\nHi, Matthias, Michael and Amit:\nThanks for your time reviewing my patch.\n\nI took note of what you said.\n\nIf I can make the changes more general, I would circle back.\n\nCheers",
"msg_date": "Tue, 31 May 2022 07:31:35 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: adding status for COPY progress report"
}
] |
[
{
"msg_contents": "Hi,\n\n AT_EnableTrig, /* ENABLE TRIGGER name */\n+ AT_EnableTrigRecurse, /* internal to commands/tablecmds.c */\n AT_EnableAlwaysTrig, /* ENABLE ALWAYS TRIGGER name */\n+ AT_EnableAlwaysTrigRecurse, /* internal to commands/tablecmds.c */\n\nIs it better to put the new enum's at the end of the AlterTableType?\n\nThis way the numeric values for existing ones don't change.\n\nCheers\n\nHi, AT_EnableTrig, /* ENABLE TRIGGER name */+ AT_EnableTrigRecurse, /* internal to commands/tablecmds.c */ AT_EnableAlwaysTrig, /* ENABLE ALWAYS TRIGGER name */+ AT_EnableAlwaysTrigRecurse, /* internal to commands/tablecmds.c */Is it better to put the new enum's at the end of the AlterTableType?This way the numeric values for existing ones don't change.Cheers",
"msg_date": "Tue, 24 May 2022 14:23:12 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "\nOn 24.05.22 23:23, Zhihong Yu wrote:\n> Hi,\n> \n> AT_EnableTrig, /* ENABLE TRIGGER name */\n> + AT_EnableTrigRecurse, /* internal to commands/tablecmds.c */\n> AT_EnableAlwaysTrig, /* ENABLE ALWAYS TRIGGER name */\n> + AT_EnableAlwaysTrigRecurse, /* internal to commands/tablecmds.c */\n> \n> Is it better to put the new enum's at the end of the AlterTableType?\n> \n> This way the numeric values for existing ones don't change.\n\nThat's a concern if backpatching. Otherwise, it's better to put them \nlike shown in the patch.\n\n\n",
"msg_date": "Fri, 27 May 2022 10:11:15 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "On Fri, May 27, 2022 at 5:11 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> On 24.05.22 23:23, Zhihong Yu wrote:\n> > Hi,\n> >\n> > AT_EnableTrig, /* ENABLE TRIGGER name */\n> > + AT_EnableTrigRecurse, /* internal to commands/tablecmds.c */\n> > AT_EnableAlwaysTrig, /* ENABLE ALWAYS TRIGGER name */\n> > + AT_EnableAlwaysTrigRecurse, /* internal to commands/tablecmds.c */\n> >\n> > Is it better to put the new enum's at the end of the AlterTableType?\n> >\n> > This way the numeric values for existing ones don't change.\n>\n> That's a concern if backpatching. Otherwise, it's better to put them\n> like shown in the patch.\n\nAgreed.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 May 2022 21:23:08 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
}
] |
[
{
"msg_contents": "In postgresql.conf.sample, stats_fetch_consistency is set to \"none,\" but\nthe default appears to be \"cache.\" Should these be consistent? I've\nattached a patch to change the entry in the sample.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 24 May 2022 15:01:47 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Tue, May 24, 2022 at 03:01:47PM -0700, Nathan Bossart wrote:\n> In postgresql.conf.sample, stats_fetch_consistency is set to \"none,\" but\n> the default appears to be \"cache.\" Should these be consistent? I've\n> attached a patch to change the entry in the sample.\n\nYes, postgresql.conf.sample should reflect the default, and that's\nPGSTAT_FETCH_CONSISTENCY_CACHE in guc.c. Andres, shouldn't\npgstat_fetch_consistency be initialized to the same in pgstat.c?\n--\nMichael",
"msg_date": "Wed, 25 May 2022 13:08:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "At Tue, 24 May 2022 15:01:47 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> In postgresql.conf.sample, stats_fetch_consistency is set to \"none,\" but\n> the default appears to be \"cache.\" Should these be consistent? I've\n> attached a patch to change the entry in the sample.\n\nGood catch:)\n\nThe base C variable is inirtialized with none.\nThe same GUC is intialized with \"cache\".\nThe default valur for the GUC is \"none\" in the sample file.\n\nI think we set the same value to C variable. However, I wonder if it\nwould be possible to reduce the burden of unifying the three inital\nvalues.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 25 May 2022 13:11:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-25 13:11:40 +0900, Kyotaro Horiguchi wrote:\n> At Tue, 24 May 2022 15:01:47 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> > In postgresql.conf.sample, stats_fetch_consistency is set to \"none,\" but\n> > the default appears to be \"cache.\" Should these be consistent? I've\n> > attached a patch to change the entry in the sample.\n> \n> Good catch:)\n> \n> The base C variable is inirtialized with none.\n> The same GUC is intialized with \"cache\".\n> The default valur for the GUC is \"none\" in the sample file.\n> \n> I think we set the same value to C variable. However, I wonder if it\n> would be possible to reduce the burden of unifying the three inital\n> values.\n\nYes, they should be the same. I think we ended up switching the default at\nsome point, and evidently I missed a step when doing so.\n\nWill apply.\n\nI wonder if we should make src/test/modules/test_misc/t/003_check_guc.pl\ndetect this kind of thing?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 24 May 2022 21:23:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On 2022-05-24 21:23:32 -0700, Andres Freund wrote:\n> Will apply.\n\nAnd done. Thanks Nathan!\n\n\n",
"msg_date": "Tue, 24 May 2022 21:28:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Tue, May 24, 2022 at 09:28:49PM -0700, Andres Freund wrote:\n> And done. Thanks Nathan!\n\nShouldn't you also refresh pgstat_fetch_consistency in pgstat.c for\nconsistency?\n\n> I wonder if we should make src/test/modules/test_misc/t/003_check_guc.pl\n> detect this kind of thing?\n\nThat sounds like a good idea to me.\n--\nMichael",
"msg_date": "Wed, 25 May 2022 14:00:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "At Wed, 25 May 2022 14:00:23 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, May 24, 2022 at 09:28:49PM -0700, Andres Freund wrote:\n> > And done. Thanks Nathan!\n> \n> Shouldn't you also refresh pgstat_fetch_consistency in pgstat.c for\n> consistency?\n> \n> > I wonder if we should make src/test/modules/test_misc/t/003_check_guc.pl\n> > detect this kind of thing?\n> \n> That sounds like a good idea to me.\n\nI think the work as assumed not to use detailed knowledge of each\nvariable. Due to lack of knowlege about the detail of each variable,\nfor example, type, unit, internal exansion, we cannot check for the\nfollowing values.\n\n- Numbers with unit (MB, kB, s, min ...)\n- internally expanded string (FILE, ConfigDir)\n\nSo it's hard to automate to check consistency of all variables, but I\nfound the following inconsistencies between the sample config file and\nGUC default value. C default value cannot be revealed so it is\nignored.\n\nThe following results are not deeply confirmed yet.\n\n13 apparent inconsistencies are found. These should be fixed.\n\narchive_command = \"(disabled)\" != \"\"\nbgwriter_flush_after = \"64\" != \"0\"\ncheckpoint_flush_after = \"32\" != \"0\"\ncluster_name = \"main\" != \"\"\ndefault_text_search_config = \"pg_catalog.english\" != \"pg_catalog.simple\"\nfsync = \"off\" != \"on\"\nlog_replication_commands = \"on\" != \"off\"\nlog_statement = \"all\" != \"none\"\nmax_wal_senders = \"0\" != \"10\"\nrestart_after_crash = \"off\" != \"on\"\nstats_fetch_consistency = \"cache\" != \"none\"\nwal_sync_method = \"fdatasync\" != \"fsync\"\n\n\n11 has letter-case inconsistencies. 
Do these need to be fixed?\n\nevent_source = \"postgresql\" != \"PostgreSQL\"\nlc_messages = \"c\" != \"C\"\nlc_monetary = \"en_us.utf-8\" != \"C\"\nlc_numeric = \"en_us.utf-8\" != \"C\"\nlc_time = \"en_us.utf-8\" != \"C\"\nlog_filename = \"postgresql-%y-%m-%d_%h%m%s.log\" != \"postgresql-%Y-%m-%d_%H%M%S.log\"\nlog_line_prefix = \"%m [%p] %q%a \" != \"%m [%p] \"\nssl_ciphers = \"high:medium:+3des:!anull\" != \"HIGH:MEDIUM:+3DES:!aNULL\"\nssl_min_protocol_version = \"tlsv1.2\" != \"TLSv1.2\"\nsyslog_facility = \"local0\" != \"LOCAL0\"\ntimezone_abbreviations = \"default\" != \"Default\"\n\n\nThe following are the result of automatic configuration?\n\nclient_encoding = \"utf8\" != \"sql_ascii\"\ndata_directory = \"/home/horiguti/work/postgresql/src/test/modules/test_misc/tmp_check/t_003_check_guc_main_data/pgdata\" != \"ConfigDir\"\nhba_file = \"/home/horiguti/work/postgresql/src/test/modules/test_misc/tmp_check/t_003_check_guc_main_data/pgdata/pg_hba.conf\" != \"ConfigDir/pg_hba.conf\"\nident_file = \"/home/horiguti/work/postgresql/src/test/modules/test_misc/tmp_check/t_003_check_guc_main_data/pgdata/pg_ident.conf\" != \"ConfigDir/pg_ident.conf\"\nkrb_server_keyfile = \"file:/home/horiguti/bin/pgsql_work/etc/krb5.keytab\" != \"FILE:${sysconfdir}/krb5.keytab\"\nlog_timezone = \"asia/tokyo\" != \"GMT\"\ntimezone = \"asia/tokyo\" != \"GMT\"\nunix_socket_directories = \"/tmp/g3fpspvjuy\" != \"/tmp\"\nwal_buffers = \"512\" != \"-1\"\n\n\nThe following are the result of TAP harness?\n\nlisten_addresses = \"\" != \"localhost\"\nport = \"60866\" != \"5432\"\nwal_level = \"minimal\" != \"replica\"\n\n\nThe following is inconsistent, but I'm not sure where the \"500\" came\nfrom. 
In guc.c it is defined as 5000 and normal (out of TAP test)\nserver returns 5000.\n\nwal_retrieve_retry_interval = \"500\" , \"5s\"\n\n\nThe following cannot be automatically compared due to hidden unit\nconversion, but look consistent.\n\nauthentication_timeout = \"60\" , \"1min\"\nautovacuum_naptime = \"60\" , \"1min\"\nautovacuum_vacuum_cost_delay = \"2\" , \"2ms\"\nbgwriter_delay = \"200\" , \"200ms\"\ncheckpoint_timeout = \"300\" , \"5min\"\ncheckpoint_warning = \"30\" , \"30s\"\ndeadlock_timeout = \"1000\" , \"1s\"\neffective_cache_size = \"524288\" , \"4GB\"\ngin_pending_list_limit = \"4096\" , \"4MB\"\nlog_autovacuum_min_duration = \"600000\" , \"10min\"\nlog_rotation_age = \"1440\" , \"1d\"\nlog_rotation_size = \"10240\" , \"10MB\"\nlog_startup_progress_interval = \"10000\" , \"10s\"\nlogical_decoding_work_mem = \"65536\" , \"64MB\"\nmaintenance_work_mem = \"65536\" , \"64MB\"\nmax_stack_depth = \"2048\" , \"2MB\"\nmax_standby_archive_delay = \"30000\" , \"30s\"\nmax_standby_streaming_delay = \"30000\" , \"30s\"\nmax_wal_size = \"1024\" , \"1GB\"\nmin_dynamic_shared_memory = \"0\" , \"0MB\"\nmin_parallel_index_scan_size = \"64\" , \"512kB\"\nmin_parallel_table_scan_size = \"1024\" , \"8MB\"\nmin_wal_size = \"80\" , \"80MB\"\nshared_buffers = \"16384\" , \"128MB\"\ntemp_buffers = \"1024\" , \"8MB\"\nwal_decode_buffer_size = \"524288\" , \"512kB\"\nwal_receiver_status_interval = \"10\" , \"10s\"\nwal_receiver_timeout = \"60000\" , \"60s\"\nwal_sender_timeout = \"60000\" , \"60s\"\nwal_skip_threshold = \"2048\" , \"2MB\"\nwal_writer_delay = \"200\" , \"200ms\"\nwal_writer_flush_after = \"128\" , \"1MB\"\nwork_mem = \"4096\" , \"4MB\"\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 25 May 2022 15:56:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "At Wed, 25 May 2022 15:56:23 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> The following results are not deeply confirmed yet.\n> \n> 13 apparent inconsistencies are found. These should be fixed.\n> \n> archive_command = \"(disabled)\" != \"\"\n\nThe \"(disabled)\" is the representation of \"\".\n\n> bgwriter_flush_after = \"64\" != \"0\"\n> checkpoint_flush_after = \"32\" != \"0\"\n\nThey vary according to the existence of sync_file_range().\n\n> cluster_name = \"main\" != \"\"\n\nThis is named by 003_check_guc.pl.\n\n> default_text_search_config = \"pg_catalog.english\" != \"pg_catalog.simple\"\n\ninitdb decided this.\n\n> fsync = \"off\" != \"on\"\n> log_line_prefix = \"%m [%p] %q%a \" != \"%m [%p] \"\n> log_replication_commands = \"on\" != \"off\"\n> log_statement = \"all\" != \"none\"\n> max_wal_senders = \"0\" != \"10\"\n> restart_after_crash = \"off\" != \"on\"\n\nThese are set by Cluster.pm.\n\n> wal_sync_method = \"fdatasync\" != \"fsync\"\n\nThis is platform dependent.\n\n> stats_fetch_consistency = \"cache\" != \"none\"\n\nThis has been fixed recently.\n\n> 11 has letter-case inconsistencies. Are these need to be fixed?\n> \n> event_source = \"postgresql\" != \"PostgreSQL\"\n> lc_messages = \"c\" != \"C\"\n> lc_monetary = \"en_us.utf-8\" != \"C\"\n> lc_numeric = \"en_us.utf-8\" != \"C\"\n> lc_time = \"en_us.utf-8\" != \"C\"\n> log_filename = \"postgresql-%y-%m-%d_%h%m%s.log\" != \"postgresql-%Y-%m-%d_%H%M%S.log\"\n> ssl_ciphers = \"high:medium:+3des:!anull\" != \"HIGH:MEDIUM:+3DES:!aNULL\"\n> ssl_min_protocol_version = \"tlsv1.2\" != \"TLSv1.2\"\n> syslog_facility = \"local0\" != \"LOCAL0\"\n> timezone_abbreviations = \"default\" != \"Default\"\n\nThese are harmless. Since no significant inconsistency is found,\nthere's no need to fix these either.\n\n(sigh..) 
As a result, no need to fix in this area for now, and I\ndon't think there's any generic and reliable way to detect\ninconsistencies of guc variable definitions.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 25 May 2022 16:12:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Wed, May 25, 2022 at 04:12:07PM +0900, Kyotaro Horiguchi wrote:\n> (sigh..) As the result, no need to fix in this area for now, and I\n> don't think there's any generic and reliable way to detect\n> inconsistencies of guc variable definitions.\n\nHmm. Making the automation test painless in terms of maintenance\nconsists in making it require zero manual filtering in the list of\nGUCs involved, while still being useful in what it can detect. The\nunits involved in a GUC make the checks between postgresql.conf.sample \nand pg_settings.boot_value annoying because they would require extra\ncalculations depending on the unit with a logic maintained in the\ntest.\n\nI may be missing something obvious, of course, but it seems to me that\nas long as you fetch the values from postgresql.conf.sample and\ncross-check them with pg_settings.boot_value for GUCs that do not have\nunits, the maintenance would be painless, while still being useful (it\nwould cover the case of enums, for one). The values need to be\nlower-cased for consistency, similarly to the GUC names.\n--\nMichael",
"msg_date": "Thu, 26 May 2022 08:53:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "At Thu, 26 May 2022 08:53:55 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, May 25, 2022 at 04:12:07PM +0900, Kyotaro Horiguchi wrote:\n> > (sigh..) As the result, no need to fix in this area for now, and I\n> > don't think there's any generic and reliable way to detect\n> > inconsistencies of guc variable definitions.\n> \n> Hmm. Making the automation test painless in terms of maintenance\n> consists in making it require zero manual filtering in the list of\n> GUCs involved, while still being useful in what it can detect. The\n> units involved in a GUC make the checks between postgresql.conf.sample \n> and pg_settings.boot_value annoying because they would require extra\n> calculations depending on the unit with a logic maintained in the\n> test.\n> \n> I may be missing something obvious, of course, but it seems to me that\n> as long as you fetch the values from postgresql.conf.sample and\n> cross-check them with pg_settings.boot_value for GUCs that do not have\n> units, the maintenance would be painless, while still being useful (it\n> would cover the case of enums, for one). The values need to be\n> lower-cased for consistency, similarly to the GUC names.\n\nYeah, \"boot_val\" is appropreate here. And I noticed that pg_settings\nhas the \"unit\" field. I'll try using them.\n\nThanks for the suggestion!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 26 May 2022 11:10:18 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Thu, May 26, 2022 at 11:10:18AM +0900, Kyotaro Horiguchi wrote:\n> Yeah, \"boot_val\" is appropreate here. And I noticed that pg_settings\n> has the \"unit\" field. I'll try using them.\n\nI wrote this in guc.sql, which seems promising, but it needs to be rewritten in\ncheck_guc.pl to access postgresql.conf from the source tree. Do you want to\nhandle that ?\n\n+\\getenv abs_srcdir PG_ABS_SRCDIR\n+\\set filename :abs_srcdir '../../../../src/backend/utils/misc/postgresql.conf.sample'\n+\n+begin;\n+CREATE TEMP TABLE sample_conf AS\n+-- SELECT m[1] AS name, trim(BOTH '''' FROM m[3]) AS sample_value\n+SELECT m[1] AS name, COALESCE(m[3], m[5]) AS sample_value\n+FROM (SELECT regexp_split_to_table(pg_read_file(:'filename'), '\\n') AS ln) conf,\n+-- regexp_match(ln, '^#?([_[:alpha:]]+) (= ([^[:space:]]*)|[^ ]*$).*') AS m\n+regexp_match(ln, '^#?([_[:alpha:]]+) (= ''([^'']*)''|(= ([^[:space:]]*))|[^ ]*$).*') AS m\n+WHERE ln ~ '^#?[[:alpha:]]';\n+\n+-- test that GUCs in postgresql.conf have correct default values\n+SELECT name, tsf.cooked_value, sc.sample_value\n+FROM tab_settings_flags tsf JOIN sample_conf sc USING(name)\n+WHERE NOT not_in_sample AND tsf.cooked_value != sc.sample_value AND tsf.cooked_value||'.0' != sc.sample_value\n+ORDER BY 1;\n+rollback;\n\nIt detects the original problem:\n\n stats_fetch_consistency | cache | none\n\nAnd I think these should be updated it postgresql.conf to use the same unit as\nin current_setting().\n\n track_activity_query_size | 1kB | 1024\n wal_buffers | 4MB | -1\n wal_receiver_timeout | 1min | 60s\n wal_sender_timeout | 1min | 60s\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 25 May 2022 21:25:53 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
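For illustration, the POSIX regexp in the query above can be exercised with Python's re module. The rendering below is mine (POSIX classes replaced by ASCII equivalents, and the group numbering differs from the SQL version): group 2 carries a single-quoted value and group 3 an unquoted one, playing the role of the COALESCE over match groups.

```python
import re

# Rough Python rendering of the POSIX regexp used in the SQL above:
# group 2 holds a quoted value, group 3 an unquoted one.
pat = re.compile(r"^#?([_A-Za-z]+) (?:= '([^']*)'|= (\S*)|[^ ]*$)")

lines = [
    "#stats_fetch_consistency = cache\t\t# cache, none, snapshot",
    "#log_line_prefix = '%m [%p] '\t\t# special values:",
    "#wal_buffers = -1\t\t\t# min 32kB",
]
pairs = []
for ln in lines:
    m = pat.match(ln)
    if m:
        value = m.group(2) if m.group(2) is not None else m.group(3)
        pairs.append((m.group(1), value))
print(pairs)
```

Note how the quoted branch preserves embedded spaces ('%m [%p] ') while the unquoted branch stops at the first whitespace, which is why trailing comments fall away.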
{
"msg_contents": "At Wed, 25 May 2022 21:25:53 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Thu, May 26, 2022 at 11:10:18AM +0900, Kyotaro Horiguchi wrote:\n> > Yeah, \"boot_val\" is appropreate here. And I noticed that pg_settings\n> > has the \"unit\" field. I'll try using them.\n> \n> I wrote this in guc.sql, which seems promising, but it needs to be rewritten in\n> check_guc.pl to access postgresql.conf from the source tree. Do you want to\n> handle that ?\n\nYes.\n\n> +\\getenv abs_srcdir PG_ABS_SRCDIR\n> +\\set filename :abs_srcdir '../../../../src/backend/utils/misc/postgresql.conf.sample'\n> +\n> +begin;\n> +CREATE TEMP TABLE sample_conf AS\n> +-- SELECT m[1] AS name, trim(BOTH '''' FROM m[3]) AS sample_value\n> +SELECT m[1] AS name, COALESCE(m[3], m[5]) AS sample_value\n> +FROM (SELECT regexp_split_to_table(pg_read_file(:'filename'), '\\n') AS ln) conf,\n> +-- regexp_match(ln, '^#?([_[:alpha:]]+) (= ([^[:space:]]*)|[^ ]*$).*') AS m\n> +regexp_match(ln, '^#?([_[:alpha:]]+) (= ''([^'']*)''|(= ([^[:space:]]*))|[^ ]*$).*') AS m\n> +WHERE ln ~ '^#?[[:alpha:]]';\n> +\n> +-- test that GUCs in postgresql.conf have correct default values\n> +SELECT name, tsf.cooked_value, sc.sample_value\n> +FROM tab_settings_flags tsf JOIN sample_conf sc USING(name)\n> +WHERE NOT not_in_sample AND tsf.cooked_value != sc.sample_value AND tsf.cooked_value||'.0' != sc.sample_value\n> +ORDER BY 1;\n> +rollback;\n>\n> It detects the original problem:\n> \n> stats_fetch_consistency | cache | none\n\nYeah, it is a straight forward outcome.\n\n> And I think these should be updated it postgresql.conf to use the same unit as\n> in current_setting().\n> \n> track_activity_query_size | 1kB | 1024\n> wal_buffers | 4MB | -1\n> wal_receiver_timeout | 1min | 60s\n> wal_sender_timeout | 1min | 60s\n\nI'm not sure we should do so. Rather I'd prefer 60s than 1min here.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 26 May 2022 13:00:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "At Thu, 26 May 2022 13:00:45 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 25 May 2022 21:25:53 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> > And I think these should be updated it postgresql.conf to use the same unit as\n> > in current_setting().\n> > \n> > track_activity_query_size | 1kB | 1024\n> > wal_buffers | 4MB | -1\n> > wal_receiver_timeout | 1min | 60s\n> > wal_sender_timeout | 1min | 60s\n> \n> I'm not sure we should do so. Rather I'd prefer 60s than 1min here.\n\nIt could be in SQL, but *I* prefer to use perl for this, since it\nallows me to write a bit complex things (than simple string\ncomparison) simpler.\n\nSo the attached is a wip version of that\n\nNumeric values are compared considering units. But does not require\nthe units of the both values to match. Some variables are ignored by\nan explicit instruction (ignored_parameters). Some variables are\ncompared case-insensitively by an explicit instruction\n(case_insensitive_params). bool and enum are compared\ncase-insensitively automatically.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 26 May 2022 16:27:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
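The unit-aware comparison described above — two values are equal even when spelled with different units — boils down to converting both sides to a common base unit first. A minimal sketch follows; the unit tables are deliberately simplified (the real conversion tables in guc.c are larger), and which unit an unadorned number is taken in depends on the individual GUC, modeled here as `default_unit`.

```python
import re

# Simplified unit tables; guc.c's real tables cover more units and the
# base unit of a bare number is defined per GUC (default_unit below).
MEMORY = {"b": 1, "kb": 1024, "mb": 1024 ** 2, "gb": 1024 ** 3}
TIME = {"us": 1, "ms": 1000, "s": 10 ** 6, "min": 60 * 10 ** 6, "h": 3600 * 10 ** 6}

def to_base(value, table, default_unit):
    """Convert a GUC-style value ('1kB', '60s', '1024') to a base unit."""
    m = re.fullmatch(r"(-?\d+(?:\.\d+)?)\s*([A-Za-z]*)", value.strip())
    if not m:
        return None
    num, unit = float(m.group(1)), (m.group(2) or default_unit).lower()
    return num * table[unit] if unit in table else None

# Different units, same value -- the pairs from the thread compare equal:
print(to_base("1kB", MEMORY, "b") == to_base("1024", MEMORY, "b"))    # True
print(to_base("2MB", MEMORY, "kb") == to_base("2048", MEMORY, "kb"))  # True
print(to_base("1min", TIME, "ms") == to_base("60s", TIME, "ms"))      # True
```

This is also where the maintenance cost Michael warned about comes from: the tables and per-GUC base units have to be kept in sync with guc.c by hand.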
{
"msg_contents": "Hi,\n\nOn 2022-05-25 14:00:23 +0900, Michael Paquier wrote:\n> On Tue, May 24, 2022 at 09:28:49PM -0700, Andres Freund wrote:\n> > And done. Thanks Nathan!\n>\n> Shouldn't you also refresh pgstat_fetch_consistency in pgstat.c for\n> consistency?\n\nYes. Now that the visible sheen of embarrassment on my face has subsided a bit\n(and pgcon has ended), I pushed this bit too.\n\n- Andres\n\n\n",
"msg_date": "Sat, 28 May 2022 13:14:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-26 16:27:53 +0900, Kyotaro Horiguchi wrote:\n> It could be in SQL, but *I* prefer to use perl for this, since it\n> allows me to write a bit complex things (than simple string\n> comparison) simpler.\n\nI wonder if we shouldn't just expose a C function to do this, rather than\nhaving a separate implementation in a tap test.\n\n\n> +# parameter names that cannot get consistency check performed\n> +my @ignored_parameters =\n\nI think most of these we could ignore by relying on source <> 'override'\ninstead of listing them?\n\n\n> +# parameter names that requires case-insensitive check\n> +my @case_insensitive_params =\n> + ('ssl_ciphers',\n> + 'log_filename',\n> + 'event_source',\n> + 'log_timezone',\n> + 'timezone',\n> + 'lc_monetary',\n> + 'lc_numeric',\n> + 'lc_time');\n\nWhy do these differ by case?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 28 May 2022 13:22:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Sat, May 28, 2022 at 01:14:09PM -0700, Andres Freund wrote:\n> I pushed this bit too.\n\nThanks for taking care of that!\n--\nMichael",
"msg_date": "Mon, 30 May 2022 11:49:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "At Sat, 28 May 2022 13:22:45 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2022-05-26 16:27:53 +0900, Kyotaro Horiguchi wrote:\n> > It could be in SQL, but *I* prefer to use perl for this, since it\n> > allows me to write a bit complex things (than simple string\n> > comparison) simpler.\n> \n> I wonder if we shouldn't just expose a C function to do this, rather than\n> having a separate implementation in a tap test.\n\nIt was annoying that I needed to copy the unit-conversion stuff. I\ndid that in the attached. parse_val() and check_val() and the duped\ndata are removed.\n\n> > +# parameter names that cannot get consistency check performed\n> > +my @ignored_parameters =\n> \n> I think most of these we could ignore by relying on source <> 'override'\n> instead of listing them?\n> \n> \n> > +# parameter names that requires case-insensitive check\n> > +my @case_insensitive_params =\n> > + ('ssl_ciphers',\n> > + 'log_filename',\n> > + 'event_source',\n> > + 'log_timezone',\n> > + 'timezone',\n> > + 'lc_monetary',\n> > + 'lc_numeric',\n> > + 'lc_time');\n> \n> Why do these differ by case?\n\nMmm. It just came out of a thinko. I somehow believed that the script\ndown-cases only the parameter names among the values from\npg_settings. I felt that something was strange while on it,\nthough.. After fixing it, there are only the following values that\ndiffer only in letter cases. In passing I changed it so that \"bool\"\nand \"enum\" are case-sensitive, too.\n\nname conf bootval\nclient_encoding: \"sql_ascii\" \"SQL_ASCII\"\ndatestyle : \"iso, mdy\" \"ISO, MDY\"\nsyslog_facility: \"LOCAL0\" \"local0\"\n\nIt seems to me that the bootval is right for all variables.\n\n\nI added a testing-aid function pg_normalize_config_option(name,value)\nso the consistency check can be performed like this.\n\nSELECT f.n, f.v, s.boot_val\n FROM (VALUES ('work_mem','4MB'),...) f(n,v)\n JOIN pg_settings s ON s.name = f.n '.\n WHERE pg_normalize_config_value(f.n, f.v) <> '.\n pg_normalize_config_value(f.n, s.boot_val)';\n\nThere are some concerns about the function.\n\n- _ShowConfig() returns archive_command as \"(disabled)\" regardless of\n its value. The test passes accidentally for the variable...\n\n- _ShowConfig() errors out for \"timezone_abbreviations\" and \"\" since\n the check function tries to open the timezone file. (It is excluded\n from the test.)\n\nI don't want to create a copy of the function only for this purpose.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 30 May 2022 17:27:19 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Mon, May 30, 2022 at 05:27:19PM +0900, Kyotaro Horiguchi wrote:\n> At Sat, 28 May 2022 13:22:45 -0700, Andres Freund <andres@anarazel.de> wrote in \n> > Hi,\n> > \n> > On 2022-05-26 16:27:53 +0900, Kyotaro Horiguchi wrote:\n> > > It could be in SQL, but *I* prefer to use perl for this, since it\n> > > allows me to write a bit complex things (than simple string\n> > > comparison) simpler.\n> > \n> > I wonder if we shouldn't just expose a C function to do this, rather than\n> > having a separate implementation in a tap test.\n> \n> It was annoying that I needed to copy the unit-conversion stuff. I\n> did that in the attached. parse_val() and check_val() and the duped\n> data is removed.\n\nNote that this gives:\n\nguc.c:7573:9: warning: ‘dst’ may be used uninitialized in this function [-Wmaybe-uninitialized]\n\nwith gcc version 9.2.1 20191008 (Ubuntu 9.2.1-9ubuntu2)\n\nI wonder whether you'd consider renaming pg_normalize_config_value() to\npg_pretty_config_value() or similar.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 11 Jun 2022 09:41:37 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Sat, Jun 11, 2022 at 09:41:37AM -0500, Justin Pryzby wrote:\n> Note that this gives:\n> \n> guc.c:7573:9: warning: ‘dst’ may be used uninitialized in this function [-Wmaybe-uninitialized]\n> \n> with gcc version 9.2.1 20191008 (Ubuntu 9.2.1-9ubuntu2)\n> \n> I wonder whether you'd consider renaming pg_normalize_config_value() to\n> pg_pretty_config_value() or similar.\n\nI have looked at the patch, and I am not convinced that we need a\nfunction that does a integer -> integer-with-unit conversion for the\npurpose of this test. One thing is that it can be unstable with the\nunit in the conversion where values are close to a given threshold\n(aka for cases like 2048kB/2MB). On top of that, this overlaps with\nthe existing system function in charge of converting values with bytes\nas size unit, while this stuff handles more unit types and all GUC\ntypes. I think that there could be some room in doing the opposite\nconversion, feeding the value from postgresql.conf.sample to something\nand compare it directly with boot_val. That's solvable at SQL level,\nstill a system function may be more user-friendly.\n\nExtending the tests to check after the values is something worth\ndoing, but I think that I would limit the checks to the parameters \nthat do not have units for now, until we figure out which interface\nwould be more adapted for doing the normalization of the parameter\nvalues.\n\n-#syslog_facility = 'LOCAL0'\n+#syslog_facility = 'local0'\nThose changes should not be necessary in postgresql.conf.sample. The\ntest should be in charge of applying the lower() conversion, in the\nsame way as guc.c does internally, and that's a mode supported by the\nparameter parsing. Using an upper-case value in the sample file is\nactually meaningful sometimes (for example, syslog can use upper-case\nstrings to refer to LOCAL0~7).\n--\nMichael",
"msg_date": "Thu, 16 Jun 2022 12:07:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "At Thu, 16 Jun 2022 12:07:03 +0900, Michael Paquier <michael@paquier.xyz> wrote in \r\n> On Sat, Jun 11, 2022 at 09:41:37AM -0500, Justin Pryzby wrote:\r\n> > Note that this gives:\r\n> > \r\n> > guc.c:7573:9: warning: ‘dst’ may be used uninitialized in this function [-Wmaybe-uninitialized]\r\n> > \r\n> > with gcc version 9.2.1 20191008 (Ubuntu 9.2.1-9ubuntu2)\r\n> > \r\n> > I wonder whether you'd consider renaming pg_normalize_config_value() to\r\n> > pg_pretty_config_value() or similar.\r\n> \r\n> I have looked at the patch, and I am not convinced that we need a\r\n> function that does a integer -> integer-with-unit conversion for the\r\n> purpose of this test. One thing is that it can be unstable with the\r\n> unit in the conversion where values are close to a given threshold\r\n> (aka for cases like 2048kB/2MB). On top of that, this overlaps with\r\n\r\nI agree that needing the to-with-unit conversion is a bit clumsy. One\r\nof the reasons is that I didn't want to add a function that has no use\r\nother than testing.\r\n\r\n> the existing system function in charge of converting values with bytes\r\n> as size unit, while this stuff handles more unit types and all GUC\r\n> types. I think that there could be some room in doing the opposite\r\n> conversion, feeding the value from postgresql.conf.sample to something\r\n> and compare it directly with boot_val. That's solvable at SQL level,\r\n> still a system function may be more user-friendly.\r\n\r\nThe output value must be the same as what pg_settings shows, so it\r\nneeds to take in some code from GetConfigOptionByNum() (and needs to\r\nkeep in-sync with it), which is what I didn't want to do. Anyway, done\r\nin the attached.\r\n\r\nThis method has a problem for wal_buffers. parse_and_validate_value()\r\nreturns 512 for -1 input since check_wal_buffers() converts it to 512.\r\nIt is added to the exclusion list. (Conversely, the previous method\r\nregarded \"-1\" and \"512\" as identical.)\r\n\r\n> Extending the tests to check after the values is something worth\r\n> doing, but I think that I would limit the checks to the parameters \r\n> that do not have units for now, until we figure out which interface\r\n> would be more adapted for doing the normalization of the parameter\r\n> values.\r\n\r\nThe attached second is that. FWIW, I'd like to support integer/real\r\nvalues since I think they need more support of this kind of check.\r\n\r\n> -#syslog_facility = 'LOCAL0'\r\n> +#syslog_facility = 'local0'\r\n> Those changes should not be necessary in postgresql.conf.sample. The\r\n> test should be in charge of applying the lower() conversion, in the\r\n> same way as guc.c does internally, and that's a mode supported by the\r\n> parameter parsing. Using an upper-case value in the sample file is\r\n> actually meaningful sometimes (for example, syslog can use upper-case\r\n> strings to refer to LOCAL0~7).\r\n\r\nI didn't notice, but now know parse_and_validate_value() converts\r\nvalues the same way as bootval, so finally case-unification is not\r\nneeded.\r\n\r\n=# select pg_config_unitless_value('datestyle', 'iso, mdy');\r\n pg_config_unitless_value \r\n--------------------------\r\n ISO, MDY\r\n\r\nHowever, the \"datestyle\" variable is shown as \"DateStyle\" in the\r\npg_settings view. So the name in the view needs to be lower-cased\r\ninstead. The same can be said of \"TimeZone\" and \"IntervalStyle\". The\r\nold query missed the case where there's no variable with the names\r\nappearing in the config file. Fixed it.\r\n\r\nAt Sat, 11 Jun 2022 09:41:37 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \r\n> Note that this gives:\r\n> \r\n> guc.c:7573:9: warning: ‘dst’ may be used uninitialized in this function [-Wmaybe-uninitialized]\r\n\r\nMmm. I don't have an idea where the 'dst' came from...\r\n\r\n\r\n> I wonder whether you'd consider renaming pg_normalize_config_value() to\r\n> pg_pretty_config_value() or similar.\r\n\r\nYeah, that's sensible; the function is now changed (not renamed) to\r\npg_config_unitless_value(). This name also doesn't satisfy me at\r\nall..:(\r\n\r\n\r\nSo, the attached are:\r\n\r\nv2-0001-Add-fileval-bootval-consistency-check-of-GUC-para.patch:\r\n\r\n New version of the previous patch. It is changed after Michael's\r\n suggestions.\r\n \r\n\r\n0001-Add-fileval-bootval-consistency-check-of-GUC-paramet-simple.patch\r\n\r\n Another version that doesn't need a new C function. It ignores\r\n variables that have units, but I didn't count how many variables are\r\n ignored by this change.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center",
"msg_date": "Thu, 16 Jun 2022 17:19:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Thu, Jun 16, 2022 at 05:19:46PM +0900, Kyotaro Horiguchi wrote:\n> At Sat, 11 Jun 2022 09:41:37 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> > Note that this gives:\n> > \n> > guc.c:7573:9: warning: ‘dst’ may be used uninitialized in this function [-Wmaybe-uninitialized]\n> \n> Mmm. I don't have an idea where the 'dst' came from...\n\nWell, in your latest patch, you've renamed it.\n\nguc.c:7586:19: warning: ‘result’ may be used uninitialized in this function [-Wmaybe-uninitialized]\n 7586 | PG_RETURN_TEXT_P(cstring_to_text(result));\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 16 Jun 2022 08:23:07 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "At Thu, 16 Jun 2022 08:23:07 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \r\n> On Thu, Jun 16, 2022 at 05:19:46PM +0900, Kyotaro Horiguchi wrote:\r\n> > At Sat, 11 Jun 2022 09:41:37 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \r\n> > > Note that this gives:\r\n> > > \r\n> > > guc.c:7573:9: warning: ‘dst’ may be used uninitialized in this function [-Wmaybe-uninitialized]\r\n> > \r\n> > Mmm. I don't have an idea where the 'dst' came from...\r\n> \r\n> Well, in your latest patch, you've renamed it.\r\n> \r\n> guc.c:7586:19: warning: ‘result’ may be used uninitialized in this function [-Wmaybe-uninitialized]\r\n> 7586 | PG_RETURN_TEXT_P(cstring_to_text(result));\r\n\r\nOoo. I found that the patch on my hand was different from the one on\r\nthis list, for some reason unclear to me. I now understand what's\r\nhappening.\r\n\r\nAt Sat, 11 Jun 2022 09:41:37 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \r\n> with gcc version 9.2.1 20191008 (Ubuntu 9.2.1-9ubuntu2)\r\n\r\nMy compiler (gcc 8.5.0) (with -Wswitch) is satisfied by finding that\r\nthe switch() covers all enum values. I don't know why the newer\r\ncompiler complains about this, but compilers in such environments\r\nshould be silenced by the following change.\r\n\r\n\r\n-\tchar *result;\r\n+\tchar *result = \"\";\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center",
"msg_date": "Fri, 17 Jun 2022 09:43:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "Hi,\n\nOn 2022-06-17 09:43:58 +0900, Kyotaro Horiguchi wrote:\n> +/*\n> + * Convert value to unitless value according to the specified GUC variable\n> + */\n> +Datum\n> +pg_config_unitless_value(PG_FUNCTION_ARGS)\n> +{\n> +\tchar *name = \"\";\n> +\tchar *value = \"\";\n> +\tstruct config_generic *record;\n> +\tchar *result = \"\";\n> +\tvoid *extra;\n> +\tunion config_var_val val;\n> +\tconst char *p;\n> +\tchar buffer[256];\n> +\n> +\tif (!PG_ARGISNULL(0))\n> +\t\tname = text_to_cstring(PG_GETARG_TEXT_PP(0));\n> +\tif (!PG_ARGISNULL(1))\n> +\t\tvalue = text_to_cstring(PG_GETARG_TEXT_PP(1));\n> +\n> +\trecord = find_option(name, true, false, ERROR);\n> +\n> +\tparse_and_validate_value(record, name, value, PGC_S_TEST, WARNING,\n> +\t\t\t\t\t\t\t &val, &extra);\n> +\n\nHm. I think this should error out for options that the user doesn't have the\npermissions for - good. I suggest adding a test for that.\n\n\n> +\tswitch (record->vartype)\n> +\t{\n> +\t\tcase PGC_BOOL:\n> +\t\t\tresult = (val.boolval ? \"on\" : \"off\");\n> +\t\t\tbreak;\n> +\t\tcase PGC_INT:\n> +\t\t\tsnprintf(buffer, sizeof(buffer), \"%d\", val.intval);\n> +\t\t\tresult = pstrdup(buffer);\n> +\t\t\tbreak;\n> +\t\tcase PGC_REAL:\n> +\t\t\tsnprintf(buffer, sizeof(buffer), \"%g\", val.realval);\n> +\t\t\tresult = pstrdup(buffer);\n> +\t\t\tbreak;\n> +\t\tcase PGC_STRING:\n> +\t\t\tp = val.stringval;\n> +\t\t\tif (p == NULL)\n> +\t\t\t\tp = \"\";\n> +\t\t\tresult = pstrdup(p);\n> +\t\t\tbreak;\n\nIs this a good idea? I wonder if we shouldn't instead return NULL, rather than\nmaking NULL and \"\" undistinguishable.\n\nNot that it matters for efficiency here, but why are you pstrdup'ing the\nbuffers? cstring_to_text() will already copy the string, no?\n\n\n> +# parameter names that cannot get consistency check performed\n> +my @ignored_parameters = (\n\nPerhaps worth adding comments explaining why these can't get checked?\n\n\n> +foreach my $line (split(\"\\n\", $all_params))\n> +{\n> +\tmy @f = split('\\|', $line);\n> +\tfail(\"query returned wrong number of columns: $#f : $line\") if ($#f != 4);\n> +\t$all_params_hash{$f[0]}->{type} = $f[1];\n> +\t$all_params_hash{$f[0]}->{unit} = $f[2];\n> +\t$all_params_hash{$f[0]}->{bootval} = $f[3];\n> +}\n>\n\nMight look a bit nicer to generate the hash in a local variable and then\nassign to $all_params_hash{$f[0]} once, rather than repeating that part\nmultiple times.\n\n\n> -\tif ($line =~ m/^#?([_[:alpha:]]+) = .*/)\n> +\tif ($line =~ m/^#?([_[:alpha:]]+) = (.*)$/)\n> \t{\n> \t\t# Lower-case conversion matters for some of the GUCs.\n> \t\tmy $param_name = lc($1);\n> \n> +\t\t# extract value\n> +\t\tmy $file_value = $2;\n> +\t\t$file_value =~ s/\\s*#.*$//;\t\t# strip trailing comment\n> +\t\t$file_value =~ s/^'(.*)'$/$1/;\t# strip quotes\n> +\n> \t\t# Ignore some exceptions.\n> \t\tnext if $param_name eq \"include\";\n> \t\tnext if $param_name eq \"include_dir\";\n\nSo there's now two ignore mechanisms? Why not just handle include[_dir] via\n@ignored_parameters?\n\n\n> @@ -66,19 +94,39 @@ while (my $line = <$contents>)\n> \t\t# Update the list of GUCs found in the sample file, for the\n> \t\t# follow-up tests.\n> \t\tpush @gucs_in_file, $param_name;\n> +\n> +\t\t# Check for consistency between bootval and file value.\n\nYou're not checking the consistency here though?\n\n\n> +\t\tif (!grep { $_ eq $param_name } @ignored_parameters)\n> +\t\t{\n> +\t\t\tpush (@check_elems, \"('$param_name','$file_value')\");\n> +\t\t}\n> \t}\n> }\n\n> \n> close $contents;\n> \n> +# Run consistency check between config-file's default value and boot\n> +# values. To show sample setting that is not found in the view, use\n> +# LEFT JOIN and make sure pg_settings.name is not NULL.\n> +my $check_query =\n> + 'SELECT f.n, f.v, s.boot_val FROM (VALUES '.\n> + join(',', @check_elems).\n> + ') f(n,v) LEFT JOIN pg_settings s ON lower(s.name) = f.n '.\n> + \"WHERE pg_config_unitless_value(f.n, f.v) <> COALESCE(s.boot_val, '') \".\n> + 'OR s.name IS NULL';\n> +\n> +print $check_query;\n> +\n> +is ($node->safe_psql('postgres', $check_query), '',\n> +\t'check if fileval-bootval consistency is fine');\n\n\"fileval-bootval\" isn't that easy to understand, \"is fine\" doesn't quite sound\nright. Maybe something like \"GUC values in .sample and boot value match\"?\n\n\nI wonder if it'd not result in easier to understand output if the query just\ncalled pg_config_unitless_value() for all the .sample values, but then did the\ncomparison of the results in perl.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Jun 2022 16:07:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "Thanks!\n\nAt Wed, 22 Jun 2022 16:07:10 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2022-06-17 09:43:58 +0900, Kyotaro Horiguchi wrote:\n> > +/*\n> > + * Convert value to unitless value according to the specified GUC variable\n> > + */\n> > +Datum\n> > +pg_config_unitless_value(PG_FUNCTION_ARGS)\n> > +{\n...\n> > +\trecord = find_option(name, true, false, ERROR);\n> > +\n> > +\tparse_and_validate_value(record, name, value, PGC_S_TEST, WARNING,\n> > +\t\t\t\t\t\t\t &val, &extra);\n> > +\n> \n> Hm. I think this should error out for options that the user doesn't have the\n> permissions for - good. I suggest adding a test for that.\n\nGenerally sounds reasonable, but it doesn't reveal its setting. It\njust translates (or decodes) a given string to an internal value. And\ncurrently almost all values are strings and only two are enums (TLS\nversion), which are returned almost as-is. That being said, the\nsuggested behavior seems better. So I did that in the attached.\nAnd I added the test for this to rolenames in modules/unsafe_tests.\n\n> > +\tswitch (record->vartype)\n> > +\t{\n> > +\t\tcase PGC_BOOL:\n> > +\t\t\tresult = (val.boolval ? \"on\" : \"off\");\n> > +\t\t\tbreak;\n> > +\t\tcase PGC_INT:\n> > +\t\t\tsnprintf(buffer, sizeof(buffer), \"%d\", val.intval);\n> > +\t\t\tresult = pstrdup(buffer);\n> > +\t\t\tbreak;\n> > +\t\tcase PGC_REAL:\n> > +\t\t\tsnprintf(buffer, sizeof(buffer), \"%g\", val.realval);\n> > +\t\t\tresult = pstrdup(buffer);\n> > +\t\t\tbreak;\n> > +\t\tcase PGC_STRING:\n> > +\t\t\tp = val.stringval;\n> > +\t\t\tif (p == NULL)\n> > +\t\t\t\tp = \"\";\n> > +\t\t\tresult = pstrdup(p);\n> > +\t\t\tbreak;\n> \n> Is this a good idea? I wonder if we shouldn't instead return NULL, rather than\n> making NULL and \"\" undistinguishable.\n\nAnyway NULL cannot be seen there and I don't recall the reason I made\nthe function non-strict. I changed the SQL function back to 'strict',\nwhich makes things cleaner and simpler.\n\n> Not that it matters for efficiency here, but why are you pstrdup'ing the\n> buffers? cstring_to_text() will already copy the string, no?\n\nRight. That's a silly thinko, omitting that behavior..\n\n> \n> > +# parameter names that cannot get consistency check performed\n> > +my @ignored_parameters = (\n> \n> Perhaps worth adding comments explaining why these can't get checked?\n\nMmm. I agree. I rewrote it as follows.\n\n> # The following parameters are defaultly set with\n> # environment-dependent values which may not match the default values\n> # written in the sample config file.\n\n\n> > +foreach my $line (split(\"\\n\", $all_params))\n> > +{\n> > +\tmy @f = split('\\|', $line);\n> > +\tfail(\"query returned wrong number of columns: $#f : $line\") if ($#f != 4);\n> > +\t$all_params_hash{$f[0]}->{type} = $f[1];\n> > +\t$all_params_hash{$f[0]}->{unit} = $f[2];\n> > +\t$all_params_hash{$f[0]}->{bootval} = $f[3];\n> > +}\n> >\n> \n> Might look a bit nicer to generate the hash in a local variable and then\n> assign to $all_params_hash{$f[0]} once, rather than repeating that part\n> multiple times.\n\nYeah, but I noticed that that hash is no longer needed..\n\n> > -\tif ($line =~ m/^#?([_[:alpha:]]+) = .*/)\n> > +\tif ($line =~ m/^#?([_[:alpha:]]+) = (.*)$/)\n> > \t{\n> > \t\t# Lower-case conversion matters for some of the GUCs.\n> > \t\tmy $param_name = lc($1);\n> > \n> > +\t\t# extract value\n> > +\t\tmy $file_value = $2;\n> > +\t\t$file_value =~ s/\\s*#.*$//;\t\t# strip trailing comment\n> > +\t\t$file_value =~ s/^'(.*)'$/$1/;\t# strip quotes\n> > +\n> > \t\t# Ignore some exceptions.\n> > \t\tnext if $param_name eq \"include\";\n> > \t\tnext if $param_name eq \"include_dir\";\n> \n> So there's now two ignore mechanisms? Why not just handle include[_dir] via\n> @ignored_parameters?\n\nThe two ignore mechanisms work for different arrays. So we need to\ndistinguish between the two uses. I tried that but it looks like\nreseparating particles that were uselessly mixed. Finally I changed\nthe variable to a hash and applied the same mechanism to \"include\" and\nfriends, but using a different hash.\n\n\n> > @@ -66,19 +94,39 @@ while (my $line = <$contents>)\n> > \t\t# Update the list of GUCs found in the sample file, for the\n> > \t\t# follow-up tests.\n> > \t\tpush @gucs_in_file, $param_name;\n> > +\n> > +\t\t# Check for consistency between bootval and file value.\n> \n> You're not checking the consistency here though?\n\nMmm. Right. I reworded it following the comment just above.\n\n> > +\t\tif (!grep { $_ eq $param_name } @ignored_parameters)\n> > +\t\t{\n> > +\t\t\tpush (@check_elems, \"('$param_name','$file_value')\");\n> > +\t\t}\n> > \t}\n> > }\n> \n> > \n> > close $contents;\n> > \n> > +# Run consistency check between config-file's default value and boot\n> > +# values. To show sample setting that is not found in the view, use\n> > +# LEFT JOIN and make sure pg_settings.name is not NULL.\n> > +my $check_query =\n> > + 'SELECT f.n, f.v, s.boot_val FROM (VALUES '.\n> > + join(',', @check_elems).\n> > + ') f(n,v) LEFT JOIN pg_settings s ON lower(s.name) = f.n '.\n> > + \"WHERE pg_config_unitless_value(f.n, f.v) <> COALESCE(s.boot_val, '') \".\n> > + 'OR s.name IS NULL';\n> > +\n> > +print $check_query;\n> > +\n> > +is ($node->safe_psql('postgres', $check_query), '',\n> > +\t'check if fileval-bootval consistency is fine');\n> \n> \"fileval-bootval\" isn't that easy to understand, \"is fine\" doesn't quite sound\n> right. Maybe something like \"GUC values in .sample and boot value match\"?\n\nNo objection. Changed.\n\n> I wonder if it'd not result in easier to understand output if the query just\n> called pg_config_unitless_value() for all the .sample values, but then did the\n> comparison of the results in perl.\n\nIt is a fair alternative. I said exactly the same thing (perl is\neasier to understand than the same (procedural) logic in SQL)\nupthread:p So I did that in the attached.\n\nI was tempted to find extra filevals by the code added here, but it is\ncleaner to leave it to the existing checking code.\n\n- Changed the behavior of pg_config_unitless_value according to the comment.\n- Added the test for the function's behavior about privileges.\n- Skip \"include\" and friends by using a hash similar to ignore_parameters.\n- Removed %all_params_hash. (Currently it is @file_vals)\n- A comment reworded (but it doesn't look fine..).\n- Moved value-check logic from SQL to perl.\n\nAnd I'll add this to the coming CF.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 30 Jun 2022 17:38:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "> +# The following parameters are defaultly set with\n> +# environment-dependent values at run-time which may not match the\n> +# default values written in the sample config file.\n> +my %ignore_parameters = \n> + map { $_ => 1 } (\n> +\t 'data_directory',\n> +\t 'hba_file',\n> +\t 'ident_file',\n> +\t 'krb_server_keyfile',\n> +\t 'max_stack_depth',\n> +\t 'bgwriter_flush_after',\n> +\t 'wal_sync_method',\n> +\t 'checkpoint_flush_after',\n> +\t 'timezone_abbreviations',\n> +\t 'lc_messages',\n> +\t 'wal_buffers');\n\nHow did you make this list ? Was it by excluding things that failed for you ?\n\ncfbot is currently failing due to io_concurrency on windows.\nI think there are more GUC which should be included here.\n\nhttp://cfbot.cputube.org/kyotaro-horiguchi.html\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 13 Jul 2022 12:30:00 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Wed, Jul 13, 2022 at 12:30:00PM -0500, Justin Pryzby wrote:\n> How did you make this list ? Was it by excluding things that failed for you ?\n> \n> cfbot is currently failing due to io_concurrency on windows.\n> I think there are more GUC which should be included here.\n> \n> http://cfbot.cputube.org/kyotaro-horiguchi.html\n\nFWIW, I am not really a fan of making this test depend on a hardcoded\nlist of GUCs. The design strength of the existing test is that we\ndon't have such a dependency now, making less to think about in terms\nof maintenance in the long-term, even if this is now run\nautomatically.\n--\nMichael",
"msg_date": "Thu, 14 Jul 2022 08:46:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-14 08:46:02 +0900, Michael Paquier wrote:\n> On Wed, Jul 13, 2022 at 12:30:00PM -0500, Justin Pryzby wrote:\n> > How did you make this list ? Was it by excluding things that failed for you ?\n> > \n> > cfbot is currently failing due to io_concurrency on windows.\n> > I think there are more GUC which should be included here.\n> > \n> > http://cfbot.cputube.org/kyotaro-horiguchi.html\n> \n> FWIW, I am not really a fan of making this test depend on a hardcoded\n> list of GUCs.\n\nI wonder if we should add flags indicating platform dependency etc to guc.c?\nThat should allow to remove most of them?\n\n\n> The design strength of the existing test is that we\n> don't have such a dependency now, making less to think about in terms\n> of maintenance in the long-term, even if this is now run\n> automatically.\n\nThere's no existing test for things covered by these exceptions, unless I am\nmissing something?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 13 Jul 2022 16:49:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 08:46:02AM +0900, Michael Paquier wrote:\n> On Wed, Jul 13, 2022 at 12:30:00PM -0500, Justin Pryzby wrote:\n> > How did you make this list ? Was it by excluding things that failed for you ?\n> > \n> > cfbot is currently failing due to io_concurrency on windows.\n> > I think there are more GUC which should be included here.\n> > \n> > http://cfbot.cputube.org/kyotaro-horiguchi.html\n> \n> FWIW, I am not really a fan of making this test depend on a hardcoded\n> list of GUCs. The design strength of the existing test is that we\n> don't have such a dependency now, making less to think about in terms\n> of maintenance in the long-term, even if this is now run\n> automatically.\n\nIt doesn't really need to be stated that an inclusive list wouldn't be useful.\n\nThat's a list of GUCs to be excluded.\nWhich is hardly different from the pre-existing list of exceptions.\n\n # Ignore some exceptions.\n next if $param_name eq \"include\";\n next if $param_name eq \"include_dir\";\n next if $param_name eq \"include_if_exists\";\n\n-- Exceptions are transaction_*.\nSELECT name FROM tab_settings_flags\n WHERE NOT no_show_all AND no_reset_all\n ORDER BY 1;\n name \n------------------------\n transaction_deferrable\n transaction_isolation\n transaction_read_only\n(3 rows)\n\nHow else do you propose to make this work for guc whose defaults vary by\nplatform in guc.c or in initdb ?\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 13 Jul 2022 18:54:45 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "At Wed, 13 Jul 2022 18:54:45 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Thu, Jul 14, 2022 at 08:46:02AM +0900, Michael Paquier wrote:\n> > On Wed, Jul 13, 2022 at 12:30:00PM -0500, Justin Pryzby wrote:\n> > > How did you make this list ? Was it by excluding things that failed for you ?\n\nYes. I didn't confirm each variable. They are the variables differ on\nRHEL-family OSes. io_concurrency differs according to\nUSE_PREFETCH. Regarding to effects of macro definitions, I searched\nguc.c for non-GUC_NOT_IN_SAMPLE variables with macro-affected defaults.\n\nWIN32 affects update_process_title\nUSE_PREFETCH affects effective_io_concurrency and maintenance_io_concurrency\nHAVE_UNIX_SOCKETS affects unix_socket_directories and unix_socket_directories\nUSE_SSL affects ssl_ecdh_curve\nUSE_OPENSSL affects ssl_ciphers\nHAVE_SYSLOG affects syslog_facility\n\nDifferent from most of the variables already in the exclusion list,\nthese could be changed at build time, but I haven't found a sensible\nway to do that. Otherwise we need to add them to the exclusion list...\n\n> > > cfbot is currently failing due to io_concurrency on windows.\n> > > I think there are more GUC which should be included here.\n> > > \n> > > http://cfbot.cputube.org/kyotaro-horiguchi.html\n> > \n> > FWIW, I am not really a fan of making this test depend on a hardcoded\n> > list of GUCs. 
The design strength of the existing test is that we\n> > don't have such a dependency now, making less to think about in terms\n> > of maintenance in the long-term, even if this is now run\n> > automatically.\n> \n> It doesn't really need to be stated that an inclusive list wouldn't be useful.\n\n+1\n\n> That's a list of GUCs to be excluded.\n> Which is hardly different from the pre-existing list of exceptions.\n> \n> # Ignore some exceptions.\n> next if $param_name eq \"include\";\n> next if $param_name eq \"include_dir\";\n> next if $param_name eq \"include_if_exists\";\n> \n> -- Exceptions are transaction_*.\n> SELECT name FROM tab_settings_flags\n> WHERE NOT no_show_all AND no_reset_all\n> ORDER BY 1;\n> name \n> ------------------------\n> transaction_deferrable\n> transaction_isolation\n> transaction_read_only\n> (3 rows)\n> \n> How else do you propose to make this work for guc whose defaults vary by\n> platform in guc.c or in initdb ?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 19 Jul 2022 15:04:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 03:04:27PM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 13 Jul 2022 18:54:45 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> > On Thu, Jul 14, 2022 at 08:46:02AM +0900, Michael Paquier wrote:\n> > > On Wed, Jul 13, 2022 at 12:30:00PM -0500, Justin Pryzby wrote:\n> > > > How did you make this list ? Was it by excluding things that failed for you ?\n> \n> Yes. I didn't confirm each variable. They are the variables differ on\n> RHEL-family OSes. io_concurrency differs according to\n> USE_PREFETCH. Regarding to effects of macro definitions, I searched\n> guc.c for non-GUC_NOT_IN_SAMPLE variables with macro-affected defaults.\n\nI think you'd also need to handle the ones which are changed by initdb.c.\n\nThis patch takes Andres' suggestion.\n\nThe list of GUCs I flagged is probably incomplete, maybe inaccurate, and at\nleast up for discussion.\n\nBTW I still think it might have been better to leave pg_settings_get_flags()\ndeliberately undocumented.\n\n-- \nJustin",
"msg_date": "Wed, 20 Jul 2022 00:12:26 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "Note that this can currently exposes internal elog() errors to users:\n\npostgres=# select pg_normalize_config_value('log_min_messages','abc');\nWARNING: invalid value for parameter \"log_min_messages\": \"abc\"\nHINT: Available values: debug5, debug4, debug3, debug2, debug1, info, notice, warning, error, log, fatal, panic.\nERROR: could not find enum option 0 for log_min_messages\n\npostgres=# \\errverbose\nERROR: XX000: could not find enum option 0 for log_min_messages\nLOCATION: config_enum_lookup_by_value, guc.c:7284\n\n\n",
"msg_date": "Thu, 28 Jul 2022 17:27:34 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "Hi,\n\nChecking if you're planning to work on this patch still ?\n\nOn Thu, Jul 28, 2022 at 05:27:34PM -0500, Justin Pryzby wrote:\n> Note that this can currently exposes internal elog() errors to users:\n> \n> postgres=# select pg_normalize_config_value('log_min_messages','abc');\n> WARNING: invalid value for parameter \"log_min_messages\": \"abc\"\n> HINT: Available values: debug5, debug4, debug3, debug2, debug1, info, notice, warning, error, log, fatal, panic.\n> ERROR: could not find enum option 0 for log_min_messages\n> \n> postgres=# \\errverbose\n> ERROR: XX000: could not find enum option 0 for log_min_messages\n> LOCATION: config_enum_lookup_by_value, guc.c:7284\n\n\n",
"msg_date": "Mon, 12 Sep 2022 19:43:03 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "This is an alternative implementation, which still relies on adding the\nGUC_DYNAMIC, flag but doesn't require adding a new, sql-accessible\nfunction to convert the GUC to a pretty/human display value.",
"msg_date": "Sat, 17 Sep 2022 23:53:07 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "At Sat, 17 Sep 2022 23:53:07 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> This is an alternative implementation, which still relies on adding the\n> GUC_DYNAMIC, flag but doesn't require adding a new, sql-accessible\n> function to convert the GUC to a pretty/human display value.\n\nThanks!\n\nI'm not sure shared_buffer is GUC_DYNAMIC_DEFAULT, and we need to read\npostgresql.conf.sample using SQL, but +1 for the direction.\n\n+\tAND NOT (sc.sample_value ~ '^0' AND current_setting(name) ~ '^0') -- zeros may be written differently\n+\tAND NOT (sc.sample_value='60s' AND current_setting(name) = '1min') -- two ways to write 1min\n\nMmm. Couldn't we get away from that explicit exceptions?\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 26 Sep 2022 17:29:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "Hi, I was hoping to use this patch in my other thread [1], but your\nlatest attachment is reported broken in cfbot [2]. Please rebase it.\n\n------\n[1] GUC C var sanity check -\nhttps://www.postgresql.org/message-id/CAHut%2BPs91wgaE9P7JORnK_dGq7zPB56WLDJwLNCLgGXxqrh9%3DQ%40mail.gmail.com\n[2] cfbot fail - http://cfbot.cputube.org/patch_40_3736.log\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 21 Oct 2022 11:58:15 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "At Fri, 21 Oct 2022 11:58:15 +1100, Peter Smith <smithpb2250@gmail.com> wrote in \n> Hi, I was hoping to use this patch in my other thread [1], but your\n> latest attachment is reported broken in cfbot [2]. Please rebase it.\n\nOuch. I haven't reach here. I'll do that next Monday.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 21 Oct 2022 17:50:34 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Mon, Sep 26, 2022 at 05:29:58PM +0900, Kyotaro Horiguchi wrote:\n> At Sat, 17 Sep 2022 23:53:07 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> > This is an alternative implementation, which still relies on adding the\n> > GUC_DYNAMIC, flag but doesn't require adding a new, sql-accessible\n> > function to convert the GUC to a pretty/human display value.\n> \n> Thanks!\n> \n> I'm not sure shared_buffer is GUC_DYNAMIC_DEFAULT, and we need to read\n\nIt's set during initdb.\n\n> postgresql.conf.sample using SQL, but +1 for the direction.\n> \n> +\tAND NOT (sc.sample_value ~ '^0' AND current_setting(name) ~ '^0') -- zeros may be written differently\n> +\tAND NOT (sc.sample_value='60s' AND current_setting(name) = '1min') -- two ways to write 1min\n> \n> Mmm. Couldn't we get away from that explicit exceptions?\n\nSuggestions are welcomed.\n\nRebased the patch.\n\nI also split the flag into DEFAULTS_COMPILE and DEFAULTS_INITDB, since\nthat makes it easier to understand what the flags mean and the intent of\nthe patch. And maybe allows fewer exclusions in patches like Peter's,\nwhich I think would only want to exclude compile-time defaults.\n\n-- \nJustin",
"msg_date": "Mon, 24 Oct 2022 17:05:44 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Tue, Oct 25, 2022 at 9:05 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Sep 26, 2022 at 05:29:58PM +0900, Kyotaro Horiguchi wrote:\n> > At Sat, 17 Sep 2022 23:53:07 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in\n...\n\n> Rebased the patch.\n>\n> I also split the flag into DEFAULTS_COMPILE and DEFAULTS_INITDB, since\n> that makes it easier to understand what the flags mean and the intent of\n> the patch. And maybe allows fewer exclusions in patches like Peter's,\n> which I think would only want to exclude compile-time defaults.\n>\n\nThanks!\n\nFYI, I'm making use of this patch now as a prerequisite for my GUC C\nvar sanity-checker [1].\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPss16YBiYYKyrZBvSp_4uSQfCy7aYfDXU0N8w5VZ5dd_g%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 25 Oct 2022 14:57:36 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "@cfbot: re-rebased again",
"msg_date": "Fri, 18 Nov 2022 15:37:31 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Wed, Jul 13, 2022 at 04:49:00PM -0700, Andres Freund wrote:\n> On 2022-07-14 08:46:02 +0900, Michael Paquier wrote:\n> > On Wed, Jul 13, 2022 at 12:30:00PM -0500, Justin Pryzby wrote:\n> > > How did you make this list ? Was it by excluding things that failed for you ?\n> > > \n> > > cfbot is currently failing due to io_concurrency on windows.\n> > > I think there are more GUC which should be included here.\n> > > \n> > > http://cfbot.cputube.org/kyotaro-horiguchi.html\n> > \n> > FWIW, I am not really a fan of making this test depend on a hardcoded\n> > list of GUCs.\n> \n> I wonder if we should add flags indicating platform dependency etc to guc.c?\n> That should allow to remove most of them?\n\nMichael commented on this, but on another thread, so I'm copying and\npasting it here.\n\nOn Thu, Mar 23, 2023 at 08:59:57PM -0500, Justin Pryzby wrote:\n> On Fri, Mar 24, 2023 at 10:24:43AM +0900, Michael Paquier wrote:\n> > >> * Check consistency of GUC defaults between .sample.conf and pg_settings.boot_val\n> > > - It looks like this was pretty active until last October and might\n> > > have been ready to apply at least partially? But no further work or\n> > > review has happened since.\n> > \n> > FWIW, I don't find much appealing the addition of two GUC flags for\n> > only the sole purpose of that,\n> \n> The flags seem independently interesting - adding them here follows\n> a suggestion Andres made in response to your complaint.\n> 20220713234900.z4rniuaerkq34s4v@awork3.anarazel.de\n> \n> > particularly as we get a stronger\n> > dependency between GUCs that can be switched dynamically at\n> > initialization and at compile-time.\n> \n> What do you mean by \"stronger dependency between GUCs\" ?\n\nI'm still not clear what that means ?\n\nI updated the patch to handle the GUC added at 1671f990d.\n\n-- \nJustin",
"msg_date": "Wed, 29 Mar 2023 23:03:59 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 11:03:59PM -0500, Justin Pryzby wrote:\n> On Wed, Jul 13, 2022 at 04:49:00PM -0700, Andres Freund wrote:\n> > On 2022-07-14 08:46:02 +0900, Michael Paquier wrote:\n> > > On Wed, Jul 13, 2022 at 12:30:00PM -0500, Justin Pryzby wrote:\n> > > > How did you make this list ? Was it by excluding things that failed for you ?\n> > > > \n> > > > cfbot is currently failing due to io_concurrency on windows.\n> > > > I think there are more GUC which should be included here.\n> > > > \n> > > > http://cfbot.cputube.org/kyotaro-horiguchi.html\n> > > \n> > > FWIW, I am not really a fan of making this test depend on a hardcoded\n> > > list of GUCs.\n> > \n> > I wonder if we should add flags indicating platform dependency etc to guc.c?\n> > That should allow to remove most of them?\n> \n> Michael commented on this, but on another thread, so I'm copying and\n> pasting it here.\n> \n> On Thu, Mar 23, 2023 at 08:59:57PM -0500, Justin Pryzby wrote:\n> > On Fri, Mar 24, 2023 at 10:24:43AM +0900, Michael Paquier wrote:\n> > > >> * Check consistency of GUC defaults between .sample.conf and pg_settings.boot_val\n> > > > - It looks like this was pretty active until last October and might\n> > > > have been ready to apply at least partially? 
But no further work or\n> > > > review has happened since.\n> > > \n> > > FWIW, I don't find much appealing the addition of two GUC flags for\n> > > only the sole purpose of that,\n> > \n> > The flags seem independently interesting - adding them here follows\n> > a suggestion Andres made in response to your complaint.\n> > 20220713234900.z4rniuaerkq34s4v@awork3.anarazel.de\n> > \n> > > particularly as we get a stronger\n> > > dependency between GUCs that can be switched dynamically at\n> > > initialization and at compile-time.\n> > \n> > What do you mean by \"stronger dependency between GUCs\" ?\n> \n> I'm still not clear what that means ?\n\nMichael ?\n\nThis fixes an issue with the last version that failed with\nlog_autovacuum_min_duration in cirrusci's pg_ci_base.conf.\n\nAnd now includes both a perl and a sql-based versions of the test - both\nof which rely on the flags.",
"msg_date": "Tue, 9 May 2023 19:37:27 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Wed, 10 May 2023 at 06:07, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Mar 29, 2023 at 11:03:59PM -0500, Justin Pryzby wrote:\n> > On Wed, Jul 13, 2022 at 04:49:00PM -0700, Andres Freund wrote:\n> > > On 2022-07-14 08:46:02 +0900, Michael Paquier wrote:\n> > > > On Wed, Jul 13, 2022 at 12:30:00PM -0500, Justin Pryzby wrote:\n> > > > > How did you make this list ? Was it by excluding things that failed for you ?\n> > > > >\n> > > > > cfbot is currently failing due to io_concurrency on windows.\n> > > > > I think there are more GUC which should be included here.\n> > > > >\n> > > > > http://cfbot.cputube.org/kyotaro-horiguchi.html\n> > > >\n> > > > FWIW, I am not really a fan of making this test depend on a hardcoded\n> > > > list of GUCs.\n> > >\n> > > I wonder if we should add flags indicating platform dependency etc to guc.c?\n> > > That should allow to remove most of them?\n> >\n> > Michael commented on this, but on another thread, so I'm copying and\n> > pasting it here.\n> >\n> > On Thu, Mar 23, 2023 at 08:59:57PM -0500, Justin Pryzby wrote:\n> > > On Fri, Mar 24, 2023 at 10:24:43AM +0900, Michael Paquier wrote:\n> > > > >> * Check consistency of GUC defaults between .sample.conf and pg_settings.boot_val\n> > > > > - It looks like this was pretty active until last October and might\n> > > > > have been ready to apply at least partially? 
But no further work or\n> > > > > review has happened since.\n> > > >\n> > > > FWIW, I don't find much appealing the addition of two GUC flags for\n> > > > only the sole purpose of that,\n> > >\n> > > The flags seem independently interesting - adding them here follows\n> > > a suggestion Andres made in response to your complaint.\n> > > 20220713234900.z4rniuaerkq34s4v@awork3.anarazel.de\n> > >\n> > > > particularly as we get a stronger\n> > > > dependency between GUCs that can be switched dynamically at\n> > > > initialization and at compile-time.\n> > >\n> > > What do you mean by \"stronger dependency between GUCs\" ?\n> >\n> > I'm still not clear what that means ?\n>\n> Michael ?\n>\n> This fixes an issue with the last version that failed with\n> log_autovacuum_min_duration in cirrusci's pg_ci_base.conf.\n>\n> And now includes both a perl and a sql-based versions of the test - both\n> of which rely on the flags.\n\nI'm seeing that there has been no activity in this thread for more\nthan 8 months, I'm planning to close this in the current commitfest\nunless someone is planning to take it forward.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 20 Jan 2024 07:59:22 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Sat, Jan 20, 2024 at 07:59:22AM +0530, vignesh C wrote:\n> I'm seeing that there has been no activity in this thread for more\n> than 8 months, I'm planning to close this in the current commitfest\n> unless someone is planning to take it forward.\n\nThanks, that seems right to me.\n\nI have been looking again at the patch after seeing your reply (spent\nsome time looking at it but I could not decide what to do), and I am\nnot really excited with the amount of new facilities this requires in\nthe TAP test (especially the list of hardcoded parameters that may\nchange) and the backend-side changes for the GUC flags as well as the\nrequirements to make the checks flexible enough to work across initdb\nand platform-dependent default values. In short, I'm happy to let\n003_check_guc.pl be what check_guc was able to do (script gone in\ncf29a11ef646) for the parameter names.\n--\nMichael",
"msg_date": "Sat, 20 Jan 2024 12:09:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
},
{
"msg_contents": "On Sat, 20 Jan 2024 at 08:39, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Jan 20, 2024 at 07:59:22AM +0530, vignesh C wrote:\n> > I'm seeing that there has been no activity in this thread for more\n> > than 8 months, I'm planning to close this in the current commitfest\n> > unless someone is planning to take it forward.\n>\n> Thanks, that seems right to me.\n\nThanks, I have updated the commitfest entry to \"returned with\nfeedback\". Feel free to start a new entry when someone wants to pursue\nit further more actively.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 26 Jan 2024 18:48:30 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix stats_fetch_consistency value in postgresql.conf.sample"
}
]
[
{
"msg_contents": "Hi,\n\nA couple of recent isolation test failures reported $SUBJECT.\n\nIt could be a bug in recent-ish latch refactoring work, though I don't\nknow why it would show up twice just recently.\n\nJust BTW, that animal has shown signs of a flaky toolchain before[1].\nI know we have quite a lot of museum exhibits in the 'farm, in terms\nof hardare, OS, and tool chain. In some cases, they're probably just\nforgotten/not on anyone's upgrade radar. If they've shown signs of\nmisbehaving, maybe it's time to figure out if they can be upgraded?\nFor example, it'd be nice to be able to rule out problems in GCC 4.6.0\n(that's like running PostgreSQL 9.1.0, in terms of vintage,\nunsupported status, and long list of missing bugfixes from the time\nwhen it was supported).\n\n[1] https://www.postgresql.org/message-id/CA+hUKGJK5R0S1LL_W4vEzKxNQGY_xGAQ1XknR-WN9jqQeQtB_w@mail.gmail.com\n\n\n",
"msg_date": "Wed, 25 May 2022 12:45:21 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "\"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-25 12:45:21 +1200, Thomas Munro wrote:\n> A couple of recent isolation test failures reported $SUBJECT.\n\nWas that just on gharial?\n\n\n> It could be a bug in recent-ish latch refactoring work, though I don't\n> know why it would show up twice just recently.\n\nYea, that's weird.\n\n\n> Just BTW, that animal has shown signs of a flaky toolchain before[1].\n> I know we have quite a lot of museum exhibits in the 'farm, in terms\n> of hardare, OS, and tool chain. In some cases, they're probably just\n> forgotten/not on anyone's upgrade radar. If they've shown signs of\n> misbehaving, maybe it's time to figure out if they can be upgraded?\n> For example, it'd be nice to be able to rule out problems in GCC 4.6.0\n> (that's like running PostgreSQL 9.1.0, in terms of vintage,\n> unsupported status, and long list of missing bugfixes from the time\n> when it was supported).\n\nYea. gcc 4.6.0 is pretty ridiculous - the only thing we gain by testing with a\n.0 compiler of that vintage is pain. Could it be upgraded?\n\n\nTBH, I think we should just desupport HPUX. It's makework to support it at\nthis point. 11.31 v3 is about to be old enough to drink in quite a few\ncountries...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 24 May 2022 18:24:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-05-25 12:45:21 +1200, Thomas Munro wrote:\n>> I know we have quite a lot of museum exhibits in the 'farm, in terms\n>> of hardare, OS, and tool chain. In some cases, they're probably just\n>> forgotten/not on anyone's upgrade radar. If they've shown signs of\n>> misbehaving, maybe it's time to figure out if they can be upgraded?\n\n> TBH, I think we should just desupport HPUX.\n\nI think there's going to be a significant die-off of old BF animals\nwhen (if?) we convert over to the meson build system; it's just not\ngoing to be worth the trouble to upgrade those platforms to be able\nto run meson and ninja. I'm inclined to wait until that's over and\nsee what's still standing before we make decisions about officially\ndesupporting things.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 May 2022 21:44:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "On Tue, May 24, 2022 at 06:24:39PM -0700, Andres Freund wrote:\n> On 2022-05-25 12:45:21 +1200, Thomas Munro wrote:\n> > Just BTW, that animal has shown signs of a flaky toolchain before[1].\n> > I know we have quite a lot of museum exhibits in the 'farm, in terms\n> > of hardare, OS, and tool chain. In some cases, they're probably just\n> > forgotten/not on anyone's upgrade radar. If they've shown signs of\n> > misbehaving, maybe it's time to figure out if they can be upgraded?\n> > For example, it'd be nice to be able to rule out problems in GCC 4.6.0\n> > (that's like running PostgreSQL 9.1.0, in terms of vintage,\n> > unsupported status, and long list of missing bugfixes from the time\n> > when it was supported).\n> \n> Yea. gcc 4.6.0 is pretty ridiculous - the only thing we gain by testing with a\n> .0 compiler of that vintage is pain. Could it be upgraded?\n\n+1, this is at least the third non-obvious miscompilation from gharial.\nInstalling the latest GCC that builds easily (perhaps GCC 10.3) would make\nthis a good buildfarm member again. If that won't happen, at least add a note\nto the animal like described in\nhttps://postgr.es/m/20211109144021.GD940092@rfd.leadboat.com\n\n\n",
"msg_date": "Tue, 24 May 2022 23:46:58 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> +1, this is at least the third non-obvious miscompilation from gharial.\n\nIs there any evidence that this is a compiler-sourced problem?\nMaybe it is, but it's sure not obvious to me (he says, eyeing his\nbuildfarm animals with even older gcc versions).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 May 2022 10:25:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "On Thu, May 26, 2022 at 2:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > +1, this is at least the third non-obvious miscompilation from gharial.\n>\n> Is there any evidence that this is a compiler-sourced problem?\n> Maybe it is, but it's sure not obvious to me (he says, eyeing his\n> buildfarm animals with even older gcc versions).\n\nSorry for the ambiguity -- I have no evidence of miscompilation. My\n\"just BTW\" paragraph was a reaction to the memory of the last couple\nof times Noah and I wasted hours chasing red herrings on this system,\nwhich is pretty demotivating when looking into an unexplained failure.\n\nOn a more practical note, I don't have access to the BF database right\nnow. Would you mind checking if \"latch already owned\" has occurred on\nany other animals?\n\n\n",
"msg_date": "Thu, 26 May 2022 13:50:00 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Sorry for the ambiguity -- I have no evidence of miscompilation. My\n> \"just BTW\" paragraph was a reaction to the memory of the last couple\n> of times Noah and I wasted hours chasing red herrings on this system,\n> which is pretty demotivating when looking into an unexplained failure.\n\nI can't deny that those HPUX animals have produced more than their\nfair share of problems.\n\n> On a more practical note, I don't have access to the BF database right\n> now. Would you mind checking if \"latch already owned\" has occurred on\n> any other animals?\n\nLooking back 6 months, these are the only occurrences of that string\nin failed tests:\n\n sysname | branch | snapshot | stage | l \n---------+--------+---------------------+----------------+-------------------------------------------------------------------\n gharial | HEAD | 2022-04-28 23:37:51 | Check | 2022-04-28 18:36:26.981 MDT [22642:1] ERROR: latch already owned\n gharial | HEAD | 2022-05-06 11:33:11 | IsolationCheck | 2022-05-06 10:10:52.727 MDT [7366:1] ERROR: latch already owned\n gharial | HEAD | 2022-05-24 06:31:31 | IsolationCheck | 2022-05-24 02:44:51.850 MDT [13089:1] ERROR: latch already owned\n(3 rows)\n\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 May 2022 22:35:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "On Thu, May 26, 2022 at 2:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On a more practical note, I don't have access to the BF database right\n> > now. Would you mind checking if \"latch already owned\" has occurred on\n> > any other animals?\n>\n> Looking back 6 months, these are the only occurrences of that string\n> in failed tests:\n>\n> sysname | branch | snapshot | stage | l\n> ---------+--------+---------------------+----------------+-------------------------------------------------------------------\n> gharial | HEAD | 2022-04-28 23:37:51 | Check | 2022-04-28 18:36:26.981 MDT [22642:1] ERROR: latch already owned\n> gharial | HEAD | 2022-05-06 11:33:11 | IsolationCheck | 2022-05-06 10:10:52.727 MDT [7366:1] ERROR: latch already owned\n> gharial | HEAD | 2022-05-24 06:31:31 | IsolationCheck | 2022-05-24 02:44:51.850 MDT [13089:1] ERROR: latch already owned\n> (3 rows)\n\nThanks. Hmm. So far it's always a parallel worker. The best idea I\nhave is to include the ID of the mystery PID in the error message and\nsee if that provides a clue next time.",
"msg_date": "Fri, 27 May 2022 23:54:24 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "On Fri, May 27, 2022 at 7:55 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Thanks. Hmm. So far it's always a parallel worker. The best idea I\n> have is to include the ID of the mystery PID in the error message and\n> see if that provides a clue next time.\n\nWhat I'm inclined to do is get gharial and anole removed from the\nbuildfarm. anole was set up by Heikki in 2011. I don't know when\ngharial was set up, or by whom. I don't think anyone at EDB cares\nabout these machines any more, or has any interest in maintaining\nthem. I think the only reason they're still running is that, just by\ngood fortune, they haven't fallen over and died yet. The hardest part\nof getting them taken out of the buildfarm is likely to be finding\nsomeone who has a working username and password to log into them and\ntake the jobs out of the crontab.\n\nIf someone really cares about figuring out what's going on here, it's\nprobably possible to get someone who is an EDB employee access to the\nbox to chase it down. But I'm having a hard time understanding what\nvalue we get out of that given that the machines are running an\n11-year-old compiler version on discontinued hardware on a\ndiscontinued operating system. Even if we find a bug in PostgreSQL,\nit's likely to be a bug that only matters on systems nobody cares\nabout.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 May 2022 09:56:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, May 27, 2022 at 7:55 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Thanks. Hmm. So far it's always a parallel worker. The best idea I\n>> have is to include the ID of the mystery PID in the error message and\n>> see if that provides a clue next time.\n\n> ... Even if we find a bug in PostgreSQL,\n> it's likely to be a bug that only matters on systems nobody cares\n> about.\n\nThat's possible, certainly. It's also possible that it's a real bug\nthat so far has only manifested there for (say) timing reasons.\nThe buildfarm is not so large that we can write off single-machine\nfailures as being unlikely to hit in the real world.\n\nWhat I'd suggest is to promote that failure to elog(PANIC), which\nwould at least give us the PID and if we're lucky a stack trace.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 May 2022 10:21:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "On Fri, May 27, 2022 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> That's possible, certainly. It's also possible that it's a real bug\n> that so far has only manifested there for (say) timing reasons.\n> The buildfarm is not so large that we can write off single-machine\n> failures as being unlikely to hit in the real world.\n>\n> What I'd suggest is to promote that failure to elog(PANIC), which\n> would at least give us the PID and if we're lucky a stack trace.\n\nThat proposed change is fine with me.\n\nAs to the question of whether it's a real bug, nobody can prove\nanything unless we actually run it down. It's just a question of what\nyou think the odds are. Noah's PGCon talk a few years back on the long\ntail of buildfarm failures convinced me (perhaps unintentionally) that\nlow-probability failures that occur only on obscure systems or\nconfigurations are likely not worth running down, because while they\nCOULD be real bugs, a lot of them aren't, and the time it would take\nto figure it out could be spent on other things - for instance, fixing\nthings that we know for certain are bugs. Spending 40 hours of\nperson-time on something with a 10% chance of being a bug in the\nPostgreSQL code doesn't necessarily make sense to me, because while\nyou are correct that the buildfarm isn't that large, neither is the\ndeveloper community.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 May 2022 15:44:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, May 27, 2022 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What I'd suggest is to promote that failure to elog(PANIC), which\n>> would at least give us the PID and if we're lucky a stack trace.\n\n> That proposed change is fine with me.\n\n> As to the question of whether it's a real bug, nobody can prove\n> anything unless we actually run it down.\n\nAgreed, and I'll even grant your point that if it is an HPUX-specific\nor IA64-specific bug, it is not worth spending huge amounts of time\nto isolate. The problem is that we don't know that. What we do know\nso far is that if it can occur elsewhere, it's rare --- so we'd better\nbe prepared to glean as much info as possible if we do get such a\nfailure. Hence my thought of s/ERROR/PANIC/. And I'd be in favor of\nany other low-effort change we can make to instrument the case better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 May 2022 16:11:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "On Sat, May 28, 2022 at 8:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Fri, May 27, 2022 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> What I'd suggest is to promote that failure to elog(PANIC), which\n> >> would at least give us the PID and if we're lucky a stack trace.\n>\n> > That proposed change is fine with me.\n>\n> > As to the question of whether it's a real bug, nobody can prove\n> > anything unless we actually run it down.\n>\n> Agreed, and I'll even grant your point that if it is an HPUX-specific\n> or IA64-specific bug, it is not worth spending huge amounts of time\n> to isolate. The problem is that we don't know that. What we do know\n> so far is that if it can occur elsewhere, it's rare --- so we'd better\n> be prepared to glean as much info as possible if we do get such a\n> failure. Hence my thought of s/ERROR/PANIC/. And I'd be in favor of\n> any other low-effort change we can make to instrument the case better.\n\nOK, pushed (except I realised that all the PIDs involved were int, not\npid_t). Let's see...\n\n\n",
"msg_date": "Tue, 31 May 2022 12:08:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "On Sat, May 28, 2022 at 1:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> What I'm inclined to do is get gharial and anole removed from the\n> buildfarm. anole was set up by Heikki in 2011. I don't know when\n> gharial was set up, or by whom. I don't think anyone at EDB cares\n> about these machines any more, or has any interest in maintaining\n> them. I think the only reason they're still running is that, just by\n> good fortune, they haven't fallen over and died yet. The hardest part\n> of getting them taken out of the buildfarm is likely to be finding\n> someone who has a working username and password to log into them and\n> take the jobs out of the crontab.\n\nFWIW, in a previous investigation, Semab and Sandeep had access:\n\nhttps://www.postgresql.org/message-id/CABimMB4mRs9N3eivR-%3DqF9M8oWc5E6OX7GywsWF0DXN4P5gNEA%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 31 May 2022 12:31:12 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "On Mon, May 30, 2022 at 8:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, May 28, 2022 at 1:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > What I'm inclined to do is get gharial and anole removed from the\n> > buildfarm. anole was set up by Heikki in 2011. I don't know when\n> > gharial was set up, or by whom. I don't think anyone at EDB cares\n> > about these machines any more, or has any interest in maintaining\n> > them. I think the only reason they're still running is that, just by\n> > good fortune, they haven't fallen over and died yet. The hardest part\n> > of getting them taken out of the buildfarm is likely to be finding\n> > someone who has a working username and password to log into them and\n> > take the jobs out of the crontab.\n>\n> FWIW, in a previous investigation, Semab and Sandeep had access:\n>\n> https://www.postgresql.org/message-id/CABimMB4mRs9N3eivR-%3DqF9M8oWc5E6OX7GywsWF0DXN4P5gNEA%40mail.gmail.com\n\nYeah, I'm in touch with Sandeep but not able to get in yet for some\nreason. Will try to sort it out.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 May 2022 08:20:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "On Tue, May 31, 2022 at 8:20 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, May 30, 2022 at 8:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Sat, May 28, 2022 at 1:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > What I'm inclined to do is get gharial and anole removed from the\n> > > buildfarm. anole was set up by Heikki in 2011. I don't know when\n> > > gharial was set up, or by whom. I don't think anyone at EDB cares\n> > > about these machines any more, or has any interest in maintaining\n> > > them. I think the only reason they're still running is that, just by\n> > > good fortune, they haven't fallen over and died yet. The hardest part\n> > > of getting them taken out of the buildfarm is likely to be finding\n> > > someone who has a working username and password to log into them and\n> > > take the jobs out of the crontab.\n> >\n> > FWIW, in a previous investigation, Semab and Sandeep had access:\n> >\n> > https://www.postgresql.org/message-id/CABimMB4mRs9N3eivR-%3DqF9M8oWc5E6OX7GywsWF0DXN4P5gNEA%40mail.gmail.com\n>\n> Yeah, I'm in touch with Sandeep but not able to get in yet for some\n> reason. Will try to sort it out.\n\nOK, I have access to the box now. I guess I might as well leave the\ncrontab jobs enabled until the next time this happens, since Thomas\njust took steps to improve the logging, but I do think these BF\nmembers are overdue to be killed off, and would like to do that as\nsoon as it seems like a reasonable step to take.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 May 2022 08:55:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 12:55 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> OK, I have access to the box now. I guess I might as well leave the\n> crontab jobs enabled until the next time this happens, since Thomas\n> just took steps to improve the logging, but I do think these BF\n> members are overdue to be killed off, and would like to do that as\n> soon as it seems like a reasonable step to take.\n\nA couple of months later, there has been no repeat of that error. I'd\nhappily forget about that and move on, if you want to decommission\nthese.\n\n\n",
"msg_date": "Mon, 4 Jul 2022 15:50:37 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "On Sun, Jul 3, 2022 at 11:51 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Jun 1, 2022 at 12:55 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > OK, I have access to the box now. I guess I might as well leave the\n> > crontab jobs enabled until the next time this happens, since Thomas\n> > just took steps to improve the logging, but I do think these BF\n> > members are overdue to be killed off, and would like to do that as\n> > soon as it seems like a reasonable step to take.\n>\n> A couple of months later, there has been no repeat of that error. I'd\n> happily forget about that and move on, if you want to decommission\n> these.\n\nI have commented out the BF stuff in crontab on that machine.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Jul 2022 15:56:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "Thanks Robert.\n\nWe are receiving the alerts from buildfarm-admins for anole and gharial not\nreporting. Who can help to stop these? Thanks\n\nOn Wed, Jul 6, 2022 at 1:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sun, Jul 3, 2022 at 11:51 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > On Wed, Jun 1, 2022 at 12:55 AM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> > > OK, I have access to the box now. I guess I might as well leave the\n> > > crontab jobs enabled until the next time this happens, since Thomas\n> > > just took steps to improve the logging, but I do think these BF\n> > > members are overdue to be killed off, and would like to do that as\n> > > soon as it seems like a reasonable step to take.\n> >\n> > A couple of months later, there has been no repeat of that error. I'd\n> > happily forget about that and move on, if you want to decommission\n> > these.\n>\n> I have commented out the BF stuff in crontab on that machine.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n>\n>\n\n-- \nSandeep Thakkar\n\nThanks Robert. We are receiving the alerts from buildfarm-admins for anole and gharial not reporting. Who can help to stop these? ThanksOn Wed, Jul 6, 2022 at 1:27 AM Robert Haas <robertmhaas@gmail.com> wrote:On Sun, Jul 3, 2022 at 11:51 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Jun 1, 2022 at 12:55 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > OK, I have access to the box now. I guess I might as well leave the\n> > crontab jobs enabled until the next time this happens, since Thomas\n> > just took steps to improve the logging, but I do think these BF\n> > members are overdue to be killed off, and would like to do that as\n> > soon as it seems like a reasonable step to take.\n>\n> A couple of months later, there has been no repeat of that error. 
I'd\n> happily forget about that and move on, if you want to decommission\n> these.\n\nI have commented out the BF stuff in crontab on that machine.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n-- Sandeep Thakkar",
"msg_date": "Wed, 13 Jul 2022 09:00:10 +0530",
"msg_from": "Sandeep Thakkar <sandeep.thakkar@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "On 2022-Jul-13, Sandeep Thakkar wrote:\n\n> Thanks Robert.\n> \n> We are receiving the alerts from buildfarm-admins for anole and gharial not\n> reporting. Who can help to stop these? Thanks\n\nProbably Andrew knows how to set buildsystems.no_alerts for these\nanimals.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El hombre nunca sabe de lo que es capaz hasta que lo intenta\" (C. Dickens)\n\n\n",
"msg_date": "Wed, 13 Jul 2022 10:28:52 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "Hey hackers,\n\nI wanted to report that we have seen this issue (with the procLatch) a few\ntimes very sporadically on Greenplum 6X (based on 9.4), with relatively newer\nversions of GCC.\n\nI realize that 9.4 is out of support, so this email is purely to add on to the\nexisting thread, in case the info can help fix/reveal something in supported\nversions.\n\nUnfortunately, we don't have a core to share as we don't have the benefit of\ncommit [1] in Greenplum 6X, but we do possess commit [2] which gives us an elog\nERROR as opposed to PANIC.\n\nInstance 1:\n\nEvent 1: 2023-11-13 10:01:31.927168 CET..., pY,\n...\"LOG\",\"00000\",\"disconnection: session time: ...\"\nEvent 2: 2023-11-13 10:01:32.049135\nCET...,pX,,,,,\"FATAL\",\"XX000\",\"latch already owned by pid Y (is_set:\n0) (pg_latch.c:159)\",,,,,,,0,,\n\"pg_latch.c\",159,\"Stack trace:\n1 0xbde8b8 postgres errstart (elog.c:567)\n2 0xbe0768 postgres elog_finish (discriminator 7)\n3 0xa08924 postgres <symbol not found> (pg_latch.c:158) <---------- OwnLatch\n4 0xa7f179 postgres InitProcess (proc.c:523)\n5 0xa94ac3 postgres PostgresMain (postgres.c:4874)\n6 0xa1e2ed postgres <symbol not found> (postmaster.c:2860)\n7 0xa1f295 postgres PostmasterMain (discriminator 5)\n...\n\"LOG\",\"00000\",\"server process (PID Y) exited with exit code\n1\",,,,,,,0,,\"postmaster.c\",3987,\n\nInstance 2 (was reported with (GCC) 8.5.0 20210514 (Red Hat 8.5.0-20)):\n\nExactly the same as Instance 1 with identical log, ordering of events and stack\ntrace, except this time (is_set: 1) when the ERROR is logged.\n\nA possible ordering of events:\n\n(1) DisownLatch() is called by pid Y during ProcKill() and the write for\nlatch->owner_pid = 0 is NOT yet flushed to shmem.\n\n(2) The PGPROC object for pid Y is returned to the free list.\n\n(3) Pid X sees the same PGPROC object on the free list and grabs it.\n\n(4) Pid X does sanity check inside OwnLatch during InitProcess and\nstill sees the\nold value of latch->owner_pid = 
Y (and not = 0), and trips the ERROR.\n\nThe above sequence of operations should apply to PG HEAD as well.\n\nSuggestion:\n\nShould we do a pg_memory_barrier() at the end of DisownLatch(), like in\nResetLatch(), like the one introduced in [3]? This would ensure that the write\nlatch->owner_pid = 0; is flushed to shmem. The attached patch does this.\n\nI'm not sure why we didn't introduce a memory barrier in DisownLatch() in [3].\nI didn't find anything in the associated hackers thread [4] either. Was it the\nperformance impact, or was it just because SetLatch and ResetLatch\nwere more racy\nand this is way less likely to happen?\n\nThis is out of my wheelhouse, but would one additional barrier in a process'\nlifecycle be that bad for performance?\n\nAppendix:\n\nBuild details: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-20)\n\nCFLAGS=-Wall -Wmissing-prototypes -Wpointer-arith -Wendif-labels\n-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv\n-fexcess-precision=standard -fno-aggressive-loop-optimizations\n-Wno-unused-but-set-variable -Wno-address -Werror=implicit-fallthrough=3\n-Wno-format-truncation -Wno-stringop-truncation -m64 -O3\n-fargument-noalias-global -fno-omit-frame-pointer -g -std=gnu99\n-Werror=uninitialized -Werror=implicit-function-declaration\n\nRegards,\nSoumyadeep (VMware)\n\n[1] https://github.com/postgres/postgres/commit/12e28aac8e8eb76cab13a4e9b696e3dab17f1c99\n[2] https://github.com/greenplum-db/gpdb/commit/81fdd6c5219af865e9dc41f4087e0405d6616050\n[3] https://github.com/postgres/postgres/commit/14e8803f101a54d99600683543b0f893a2e3f529\n[4] https://www.postgresql.org/message-id/flat/20150112154026.GB2092%40awork2.anarazel.de",
"msg_date": "Wed, 7 Feb 2024 18:08:50 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "On 08/02/2024 04:08, Soumyadeep Chakraborty wrote:\n> A possible ordering of events:\n> \n> (1) DisownLatch() is called by pid Y during ProcKill() and the write for\n> latch->owner_pid = 0 is NOT yet flushed to shmem.\n> \n> (2) The PGPROC object for pid Y is returned to the free list.\n> \n> (3) Pid X sees the same PGPROC object on the free list and grabs it.\n> \n> (4) Pid X does sanity check inside OwnLatch during InitProcess and\n> still sees the\n> old value of latch->owner_pid = Y (and not = 0), and trips the ERROR.\n> \n> The above sequence of operations should apply to PG HEAD as well.\n> \n> Suggestion:\n> \n> Should we do a pg_memory_barrier() at the end of DisownLatch(), like in\n> ResetLatch(), like the one introduced in [3]? This would ensure that the write\n> latch->owner_pid = 0; is flushed to shmem. The attached patch does this.\n\nHmm, there is a pair of SpinLockAcquire() and SpinLockRelease() in \nProcKill(), before step 3 can happen. Comment in spin.h about \nSpinLockAcquire/Release:\n\n> *\tLoad and store operations in calling code are guaranteed not to be\n> *\treordered with respect to these operations, because they include a\n> *\tcompiler barrier. (Before PostgreSQL 9.5, callers needed to use a\n> *\tvolatile qualifier to access data protected by spinlocks.)\n\nThat talks about a compiler barrier, though, not a memory barrier. But \nlooking at the implementations in s_lock.h, I believe they do act as \nmemory barrier, too.\n\nSo you might indeed have that problem on 9.4, but AFAICS not on later \nversions.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 8 Feb 2024 14:57:47 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-08 14:57:47 +0200, Heikki Linnakangas wrote:\n> On 08/02/2024 04:08, Soumyadeep Chakraborty wrote:\n> > A possible ordering of events:\n> > \n> > (1) DisownLatch() is called by pid Y during ProcKill() and the write for\n> > latch->owner_pid = 0 is NOT yet flushed to shmem.\n> > \n> > (2) The PGPROC object for pid Y is returned to the free list.\n> > \n> > (3) Pid X sees the same PGPROC object on the free list and grabs it.\n> > \n> > (4) Pid X does sanity check inside OwnLatch during InitProcess and\n> > still sees the\n> > old value of latch->owner_pid = Y (and not = 0), and trips the ERROR.\n> > \n> > The above sequence of operations should apply to PG HEAD as well.\n> > \n> > Suggestion:\n> > \n> > Should we do a pg_memory_barrier() at the end of DisownLatch(), like in\n> > ResetLatch(), like the one introduced in [3]? This would ensure that the write\n> > latch->owner_pid = 0; is flushed to shmem. The attached patch does this.\n> \n> Hmm, there is a pair of SpinLockAcquire() and SpinLockRelease() in\n> ProcKill(), before step 3 can happen.\n\nRight. I wonder if the issue istead could be something similar to what was\nfixed in 8fb13dd6ab5b and more generally in 97550c0711972a. If two procs go\nthrough proc_exit() for the same process, you can get all kinds of weird\nmixed up resource ownership. The bug fixed in 8fb13dd6ab5b wouldn't apply,\nbut it's pretty easy to introduce similar bugs in other places, so it seems\nquite plausible that greenplum might have done so. We also did have more\nproc_exit()s in signal handlers in older branches, so it might just be an\nissue that also was present before.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Feb 2024 13:41:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
},
{
"msg_contents": "Hey,\n\nDeeply appreciate both your input!\n\nOn Thu, Feb 8, 2024 at 4:57 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Hmm, there is a pair of SpinLockAcquire() and SpinLockRelease() in\n> ProcKill(), before step 3 can happen. Comment in spin.h about\n> SpinLockAcquire/Release:\n>\n> > * Load and store operations in calling code are guaranteed not to be\n> > * reordered with respect to these operations, because they include a\n> > * compiler barrier. (Before PostgreSQL 9.5, callers needed to use a\n> > * volatile qualifier to access data protected by spinlocks.)\n>\n> That talks about a compiler barrier, though, not a memory barrier. But\n> looking at the implementations in s_lock.h, I believe they do act as\n> memory barrier, too.\n>\n> So you might indeed have that problem on 9.4, but AFAICS not on later\n> versions.\n\nYes 9.4 does not have 0709b7ee72e, which I'm assuming you are referring to?\n\nReading src/backend/storage/lmgr/README.barrier: For x86, to avoid reordering\nbetween a load and a store, we need something that prevents both CPU and\ncompiler reordering. pg_memory_barrier() fits the bill.\n\nHere we can have pid X's read of latch->owner_pid=Y reordered to precede\npid Y's store of latch->owner_pid = 0. 
The compiler barrier in S_UNLOCK() will\nprevent compiler reordering but not CPU reordering of the above.\n\n#define S_UNLOCK(lock) \\\ndo { __asm__ __volatile__(\"\" : : : \"memory\"); *(lock) = 0; } while (0)\nwhich is equivalent to a:\n#define pg_compiler_barrier_impl() __asm__ __volatile__(\"\" ::: \"memory\")\n\nBut maybe both CPU and memory reordering will be prevented by the tas() in\nS_LOCK() which does a lock and xchgb?\n\nIs the above acting as BOTH a compiler and CPU barrier, like the lock; addl\nstuff in pg_memory_barrier_impl()?\n\nIf yes, then the picture would look like this:\n\nPid Y in DisownLatch(), Pid X in OwnLatch()\n\nY: LOAD latch->ownerPid\n...\nY: STORE latch->ownerPid = 0\n...\n// returning PGPROC to freeList\nY:S_LOCK(ProcStructLock) <--- tas() prevents X: LOAD latch->ownerPid\nfrom preceding this\n...\n... <-------- X: LOAD latch->ownerPid can't get here anyway as spinlock is held\n...\nY:S_UNLOCK(ProcStructLock)\n...\nX: S_LOCK(ProcStructLock) // to retrieve PGPROC from freeList\n...\nX: S_UNLOCK(ProcStructLock)\n...\nX: LOAD latch->ownerPid\n\nAnd this issue is not caused due to 9.4 missing 0709b7ee72e, which\nchanged S_UNLOCK\nexclusively.\n\nIf no, then we would need the patch that does an explicit pg_memory_barrier()\nat the end of DisownLatch() for PG HEAD.\n\nOn Thu, Feb 8, 2024 at 1:41 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Right. I wonder if the issue istead could be something similar to what was\n> fixed in 8fb13dd6ab5b and more generally in 97550c0711972a. If two procs go\n> through proc_exit() for the same process, you can get all kinds of weird\n> mixed up resource ownership. The bug fixed in 8fb13dd6ab5b wouldn't apply,\n> but it's pretty easy to introduce similar bugs in other places, so it seems\n> quite plausible that greenplum might have done so. 
We also did have more\n> proc_exit()s in signal handlers in older branches, so it might just be an\n> issue that also was present before.\n\nHmm, the pids X and Y in the example provided upthread don't spawn off any\nchildren (like by calling system()) - they are just regular backends. So its\nnot possible for them to receive TERM and try to proc_exit() w/ the same\nPGPROC. So that is not the issue, I guess?\n\nRegards,\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Fri, 9 Feb 2024 17:56:19 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: latch already owned\" on gharial"
}
] |
[
{
"msg_contents": "Hi hackers,\nThanks to all the developers. The attached patch updates the manual for the pg_stats_ext and pg_stats_ext_exprs view.\nThe current pg_stats_ext/pg_stats_ext_exprs view manual are missing the inherited column. This column was added at the same time as the stxdinherit column in the pg_statistic_ext_data view. The attached patch adds the missing description. If there is a better description, please correct it.\n\nCommit: Add stxdinherit flag to pg_statistic_ext_data\n\thttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=269b532aef55a579ae02a3e8e8df14101570dfd9\nCurrent Manual: \n\thttps://www.postgresql.org/docs/15/view-pg-stats-ext.html\n\thttps://www.postgresql.org/docs/15/view-pg-stats-ext-exprs.html\n\nRegards,\nNoriyoshi Shinoda",
"msg_date": "Wed, 25 May 2022 01:08:12 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": true,
"msg_subject": "PG15 beta1 fix pg_stats_ext/pg_stats_ext_exprs view manual"
},
{
"msg_contents": "On Wed, May 25, 2022 at 01:08:12AM +0000, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> Hi hackers,\n> Thanks to all the developers. The attached patch updates the manual for the pg_stats_ext and pg_stats_ext_exprs view.\n> The current pg_stats_ext/pg_stats_ext_exprs view manual are missing the inherited column. This column was added at the same time as the stxdinherit column in the pg_statistic_ext_data view. The attached patch adds the missing description. If there is a better description, please correct it.\n> \n> Commit: Add stxdinherit flag to pg_statistic_ext_data\n> \thttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=269b532aef55a579ae02a3e8e8df14101570dfd9\n> Current Manual: \n> \thttps://www.postgresql.org/docs/15/view-pg-stats-ext.html\n> \thttps://www.postgresql.org/docs/15/view-pg-stats-ext-exprs.html\n\nThanks for copying me.\n\nThis looks right, and uses the same language as pg_stats and pg_statistic.\n\nBut, I'd prefer if it didn't say \"inheritance child\", since that now sounds\nlike it means \"a child which is using inheritance\" and not just \"any child\".\n\nI'd made a patch for that, for which I'll create a separate thread shortly.\n\n\n",
"msg_date": "Tue, 24 May 2022 20:19:27 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 fix pg_stats_ext/pg_stats_ext_exprs view manual"
},
{
"msg_contents": "On Tue, May 24, 2022 at 08:19:27PM -0500, Justin Pryzby wrote:\n> On Wed, May 25, 2022 at 01:08:12AM +0000, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> > Hi hackers,\n> > Thanks to all the developers. The attached patch updates the manual for the pg_stats_ext and pg_stats_ext_exprs view.\n> > The current pg_stats_ext/pg_stats_ext_exprs view manual are missing the inherited column. This column was added at the same time as the stxdinherit column in the pg_statistic_ext_data view. The attached patch adds the missing description. If there is a better description, please correct it.\n> > \n> > Commit: Add stxdinherit flag to pg_statistic_ext_data\n> > \thttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=269b532aef55a579ae02a3e8e8df14101570dfd9\n> > Current Manual: \n> > \thttps://www.postgresql.org/docs/15/view-pg-stats-ext.html\n> > \thttps://www.postgresql.org/docs/15/view-pg-stats-ext-exprs.html\n> \n> Thanks for copying me.\n> \n> This looks right, and uses the same language as pg_stats and pg_statistic.\n> \n> But, I'd prefer if it didn't say \"inheritance child\", since that now sounds\n> like it means \"a child which is using inheritance\" and not just \"any child\".\n> \n> I'd made a patch for that, for which I'll create a separate thread shortly.\n\nThe thread I started [0] has stalled out, so your patch seems seems fine, since\nit's consistent with pre-existing docs.\n\n[0] https://www.postgresql.org/message-id/20220525013248.GO19626@telsasoft.com\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 14 Jun 2022 09:30:06 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 fix pg_stats_ext/pg_stats_ext_exprs view manual"
},
{
"msg_contents": "Thanks for your comment. sorry for the late reply.\nI hope it will be fixed during the period of PostgreSQL 15 Beta.\n\nRegards,\n\nNoriyoshi Shinoda\n-----Original Message-----\nFrom: Justin Pryzby <pryzby@telsasoft.com> \nSent: Tuesday, June 14, 2022 11:30 PM\nTo: Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com>\nCc: pgsql-hackers@postgresql.org; Tomas Vondra <tomas.vondra@enterprisedb.com>\nSubject: Re: PG15 beta1 fix pg_stats_ext/pg_stats_ext_exprs view manual\n\nOn Tue, May 24, 2022 at 08:19:27PM -0500, Justin Pryzby wrote:\n> On Wed, May 25, 2022 at 01:08:12AM +0000, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> > Hi hackers,\n> > Thanks to all the developers. The attached patch updates the manual for the pg_stats_ext and pg_stats_ext_exprs view.\n> > The current pg_stats_ext/pg_stats_ext_exprs view manual are missing the inherited column. This column was added at the same time as the stxdinherit column in the pg_statistic_ext_data view. The attached patch adds the missing description. 
If there is a better description, please correct it.\n> > \n> > Commit: Add stxdinherit flag to pg_statistic_ext_data\n> > \t\n> > INVALID URI REMOVED\n> > tgresql.git;a=commit;h=269b532aef55a579ae02a3e8e8df14101570dfd9__;!!\n> > NpxR!kBff64MGwFvJU4EPtHmXM1YogdVCJKoc9-TAYGJxy_9p_MMVUGE0GJaL4KGVqY5\n> > dTBlzhU6k0odtBi1Wv_fZ$\n> > Current Manual: \n> > \thttps://www.postgresql.org/docs/15/view-pg-stats-ext.html \n> > \t\n> > INVALID URI REMOVED\n> > pg-stats-ext-exprs.html__;!!NpxR!kBff64MGwFvJU4EPtHmXM1YogdVCJKoc9-T\n> > AYGJxy_9p_MMVUGE0GJaL4KGVqY5dTBlzhU6k0odtBvG3tq9F$\n> \n> Thanks for copying me.\n> \n> This looks right, and uses the same language as pg_stats and pg_statistic.\n> \n> But, I'd prefer if it didn't say \"inheritance child\", since that now \n> sounds like it means \"a child which is using inheritance\" and not just \"any child\".\n> \n> I'd made a patch for that, for which I'll create a separate thread shortly.\n\nThe thread I started [0] has stalled out, so your patch seems seems fine, since it's consistent with pre-existing docs.\n\n[0] https://www.postgresql.org/message-id/20220525013248.GO19626@telsasoft.com \n\n--\nJustin\n\n\n",
"msg_date": "Mon, 27 Jun 2022 03:49:20 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": true,
"msg_subject": "RE: PG15 beta1 fix pg_stats_ext/pg_stats_ext_exprs view manual"
},
{
"msg_contents": "On Mon, Jun 27, 2022 at 03:49:20AM +0000, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> Thanks for your comment. sorry for the late reply.\n> I hope it will be fixed during the period of PostgreSQL 15 Beta.\n\nApologies for the delay, fixed in time for beta2.\n--\nMichael",
"msg_date": "Mon, 27 Jun 2022 15:37:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 fix pg_stats_ext/pg_stats_ext_exprs view manual"
}
] |
[
{
"msg_contents": "In this old sub-thread, we removed the use of word \"partition\" when it didn't\nmean \"declarative partitioning\".\n\nhttps://www.postgresql.org/message-id/flat/20180601213300.GT5164%40telsasoft.com#32efea8c1aa0e875d201873dac56e09c\n\nNow, I'm proposing to get rid of the phrase \"inheritance child\" when it also\n*does* apply to declarative partitioning.",
"msg_date": "Tue, 24 May 2022 20:32:48 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "doc phrase: \"inheritance child\""
},
{
"msg_contents": "Hi Justin,\n\n@@ -7306,7 +7306,7 @@ SCRAM-SHA-256$<replaceable><iteration\ncount></replaceable>:<replaceable>&l\n <para>\n Normally there is one entry, with <structfield>stainherit</structfield>\n=\n <literal>false</literal>, for each table column that has been analyzed.\n- If the table has inheritance children, a second entry with\n+ If the table has inheritance children or partitions, a second entry with\n <structfield>stainherit</structfield> = <literal>true</literal> is also\ncreated. This row\n represents the column's statistics over the inheritance tree, i.e.,\n statistics for the data you'd see with\n\nFor partitioned tables only the second entry makes sense. IIRC, we had done\nsome work to remove the first entry. Can you please check whether a\npartitioned table also has two entries?\n\n <para>\n- If true, the stats include inheritance child columns, not just the\n+ If true, the stats include child tables, not just the\n\nWe are replacing columns with tables; is that intentional?\n\nPartitioned tables do not have their own stats, it's just aggregated\npartition stats.\n\n- If the table has inheritance children, a second entry with\n+ If the table has inheritance children or partitions, a second entry with\n <structfield>stxdinherit</structfield> = <literal>true</literal> is\nalso created.\n This row represents the statistics object over the inheritance tree,\ni.e.,\n\nSimilar to the first comment. 
s/inheritance tree/inheritance or partition\ntree/ ?\n\n\n- If true, the stats include inheritance child columns, not just the\n+ If true, the stats include child childs, not just the\n values in the specified relation\n </para></entry>\n </row>\n@@ -13152,7 +13152,7 @@ SELECT * FROM pg_locks pl LEFT JOIN\npg_prepared_xacts ppx\n <structfield>inherited</structfield> <type>bool</type>\n </para>\n <para>\n- If true, this row includes inheritance child columns, not just the\n+ If true, this row includes child tables, not just the\n values in the specified table\n </para></entry>\n </row>\n\nReplacing inheritance child \"column\" with \"tables\", is that intentional?\n\nAre these all the places where child/children need to be replaced by\npartitions?\n\nNow that the feature is old and also being used widely, it probably makes\nsense to mention partition where inheritance children is mentioned, if this\ndouble mention makes sense. But I think it's more than just the\nreplacement. We need to rewrite or make modified copies of some of the\nsentences or paragraphs entirely. Esp. the things that apply to inheritance\nmay not be applicable as is to partitioning and vice versa. We may be\nrequired to replace inheritance tree with partition tree in the nearby\nsentences.\n\n\n--\nBest Wishes,\nAshutosh\n\nHi Justin, @@ -7306,7 +7306,7 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l <para> Normally there is one entry, with <structfield>stainherit</structfield> = <literal>false</literal>, for each table column that has been analyzed.- If the table has inheritance children, a second entry with+ If the table has inheritance children or partitions, a second entry with <structfield>stainherit</structfield> = <literal>true</literal> is also created. This row represents the column's statistics over the inheritance tree, i.e., statistics for the data you'd see withFor partitioned tables only the second entry makes sense. 
IIRC, we had done some work to remove the first entry. Can you please check whether a partitioned table also has two entries? <para>- If true, the stats include inheritance child columns, not just the+ If true, the stats include child tables, not just theWe are replacing columns with tables; is that intentional?Partitioned tables do not have their own stats, it's just aggregated partition stats.- If the table has inheritance children, a second entry with+ If the table has inheritance children or partitions, a second entry with <structfield>stxdinherit</structfield> = <literal>true</literal> is also created. This row represents the statistics object over the inheritance tree, i.e.,Similar to the first comment. s/inheritance tree/inheritance or partition tree/ ? - If true, the stats include inheritance child columns, not just the+ If true, the stats include child childs, not just the values in the specified relation </para></entry> </row>@@ -13152,7 +13152,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx <structfield>inherited</structfield> <type>bool</type> </para> <para>- If true, this row includes inheritance child columns, not just the+ If true, this row includes child tables, not just the values in the specified table </para></entry> </row>Replacing inheritance child \"column\" with \"tables\", is that intentional?Are these all the places where child/children need to be replaced by partitions?Now that the feature is old and also being used widely, it probably makes sense to mention partition where inheritance children is mentioned, if this double mention makes sense. But I think it's more than just the replacement. We need to rewrite or make modified copies of some of the sentences or paragraphs entirely. Esp. the things that apply to inheritance may not be applicable as is to partitioning and vice versa. We may be required to replace inheritance tree with partition tree in the nearby sentences.--Best Wishes,Ashutosh",
"msg_date": "Wed, 25 May 2022 09:59:57 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: doc phrase: \"inheritance child\""
},
{
"msg_contents": "Hi,\n\nOn Wed, May 25, 2022 at 1:30 PM Ashutosh Bapat\n<ashutosh.bapat@enterprisedb.com> wrote:\n> @@ -7306,7 +7306,7 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l\n> <para>\n> Normally there is one entry, with <structfield>stainherit</structfield> =\n> <literal>false</literal>, for each table column that has been analyzed.\n> - If the table has inheritance children, a second entry with\n> + If the table has inheritance children or partitions, a second entry with\n> <structfield>stainherit</structfield> = <literal>true</literal> is also created. This row\n> represents the column's statistics over the inheritance tree, i.e.,\n> statistics for the data you'd see with\n>\n> For partitioned tables only the second entry makes sense. IIRC, we had done some work to remove the first entry. Can you please check whether a partitioned table also has two entries?\n\nDon't think we've made any changes yet that get rid of the parent\npartitioned table's entry in pg_statistic:\n\ncreate table foo (a int) partition by list (a);\ncreate table foo1 partition of foo for values in (1);\nanalyze foo;\nselect starelid::regclass, stainherit from pg_statistic where\nstarelid::regclass in (select relid from pg_partition_tree('foo'));\n starelid | stainherit\n----------+------------\n foo | t\n foo1 | f\n(2 rows)\n\nMaybe you're thinking of RangeTblEntry that the planner makes 2 copies\nfor inheritance parents, but only 1 for partition parents as of\ne8d5dd6be79.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 May 2022 12:47:18 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc phrase: \"inheritance child\""
},
{
"msg_contents": "On Wed, May 25, 2022 at 1:30 PM Ashutosh Bapat\n<ashutosh.bapat@enterprisedb.com> wrote:\n> <para>\n> - If true, the stats include inheritance child columns, not just the\n> + If true, the stats include child tables, not just the\n>\n> We are replacing columns with tables; is that intentional?\n>\n> Partitioned tables do not have their own stats, it's just aggregated partition stats.\n> ...\n> - If true, the stats include inheritance child columns, not just the\n> + If true, the stats include child childs, not just the\n> values in the specified relation\n> </para></entry>\n> </row>\n> @@ -13152,7 +13152,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx\n> <structfield>inherited</structfield> <type>bool</type>\n> </para>\n> <para>\n> - If true, this row includes inheritance child columns, not just the\n> + If true, this row includes child tables, not just the\n> values in the specified table\n> </para></entry>\n> </row>\n>\n> Replacing inheritance child \"column\" with \"tables\", is that intentional?\n\nI was a bit confused by these too, though perhaps the original text is\nnot as clear as it could be? Would the following be a good rewrite:\n\nIf true, the stats cover the contents not only of the specified table,\nbut also of its child tables or partitions. (If the table is\npartitioned, which contains no data by itself, the stats only cover\nthe contents of partitions).\n\nAlthough, maybe the parenthetical is unnecessary.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 May 2022 15:22:38 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc phrase: \"inheritance child\""
},
{
"msg_contents": "On Fri, May 27, 2022 at 03:22:38PM +0900, Amit Langote wrote:\n> On Wed, May 25, 2022 at 1:30 PM Ashutosh Bapat <ashutosh.bapat@enterprisedb.com> wrote:\n> > <para>\n> > - If true, the stats include inheritance child columns, not just the\n> > + If true, the stats include child tables, not just the\n> >\n> > We are replacing columns with tables; is that intentional?\n> >\n> > Partitioned tables do not have their own stats, it's just aggregated partition stats.\n> > ...\n> > - If true, the stats include inheritance child columns, not just the\n> > + If true, the stats include child childs, not just the\n> > values in the specified relation\n> > </para></entry>\n> > </row>\n> > @@ -13152,7 +13152,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx\n> > <structfield>inherited</structfield> <type>bool</type>\n> > </para>\n> > <para>\n> > - If true, this row includes inheritance child columns, not just the\n> > + If true, this row includes child tables, not just the\n> > values in the specified table\n> > </para></entry>\n> > </row>\n> >\n> > Replacing inheritance child \"column\" with \"tables\", is that intentional?\n> \n> I was a bit confused by these too, though perhaps the original text is\n> not as clear as it could be? Would the following be a good rewrite:\n\nI updated the language to say \"values from\". Is this better ?\n\nAnd rebased to include changes to 401f623c7.\n\nBTW nobody complained about my \"child child\" typo.\n\n-- \nJustin",
"msg_date": "Thu, 30 Jun 2022 04:55:36 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc phrase: \"inheritance child\""
},
{
"msg_contents": "On Thu, Jun 30, 2022 at 6:55 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Fri, May 27, 2022 at 03:22:38PM +0900, Amit Langote wrote:\n> > On Wed, May 25, 2022 at 1:30 PM Ashutosh Bapat <ashutosh.bapat@enterprisedb.com> wrote:\n> > > <para>\n> > > - If true, the stats include inheritance child columns, not just the\n> > > + If true, the stats include child tables, not just the\n> > >\n> > > We are replacing columns with tables; is that intentional?\n> > >\n> > > Partitioned tables do not have their own stats, it's just aggregated partition stats.\n> > > ...\n> > > - If true, the stats include inheritance child columns, not just the\n> > > + If true, the stats include child childs, not just the\n> > > values in the specified relation\n> > > </para></entry>\n> > > </row>\n> > > @@ -13152,7 +13152,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx\n> > > <structfield>inherited</structfield> <type>bool</type>\n> > > </para>\n> > > <para>\n> > > - If true, this row includes inheritance child columns, not just the\n> > > + If true, this row includes child tables, not just the\n> > > values in the specified table\n> > > </para></entry>\n> > > </row>\n> > >\n> > > Replacing inheritance child \"column\" with \"tables\", is that intentional?\n> >\n> > I was a bit confused by these too, though perhaps the original text is\n> > not as clear as it could be? Would the following be a good rewrite:\n>\n> I updated the language to say \"values from\". Is this better ?\n\nYes.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Jul 2022 16:36:08 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc phrase: \"inheritance child\""
},
{
"msg_contents": "On 2022-Jun-30, Justin Pryzby wrote:\n\n> I updated the language to say \"values from\". Is this better ?\n> \n> And rebased to include changes to 401f623c7.\n\nApplied to 15 and master, thanks.\n\n> BTW nobody complained about my \"child child\" typo.\n\n:-(\n\nBTW I didn't notice your annotation in the CF app until I had already\npushed it and went there to update the status.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nMaybe there's lots of data loss but the records of data loss are also lost.\n(Lincoln Yeoh)\n\n\n",
"msg_date": "Thu, 28 Jul 2022 18:30:26 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: doc phrase: \"inheritance child\""
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 06:30:26PM +0200, Alvaro Herrera wrote:\n> BTW I didn't notice your annotation in the CF app until I had already\n> pushed it and went there to update the status.\n\nHmmm and I didn't see that you'd updated the status ... so done.\nThanks for rebasifying it.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 29 Jul 2022 08:29:15 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc phrase: \"inheritance child\""
}
] |
[
{
"msg_contents": "Hi,\n\nWhenever you call CHECK_FOR_INTERRUPTS(), there are three flow-control\npossibilities:\n\n1. It doesn't return, because we ereport(FATAL) AKA \"die\".\n2. It doesn't return, because we ereport(ERROR) AKA \"cancel\".\n3. It returns, possibly having serviced various kinds of requests.\n\nIf we're in a critical section, it always returns.\n\nSince commit 6ce0ed2813d, if we're in a HOLD_INTERRUPTS() section, it\nalways returns.\n\nSince commit 2b3a8b20c2d, it we're in a HOLD_CANCEL_INTERRUPTS()\nsection, it either returns or does ereport(FATAL), but paths that\nwould ereport(ERROR) are deferred.\n\n(That's not the whole story, as HandleParallelMessage() can\nereport(ERROR) without respecting QueryCancelHoldoffCount, but that's\nprobably OK WRT the protocol sync concerns of 2b3a8b20c2d because we\nshouldn't be reading from the client during a parallel query.)\n\nIn recent years we've invented a new class of non-throwing interrupts\nthat process requests to perform work of some kind, but don't\nnecessarily die or cancel. So, while an \"interrupt\" used to mean\napproximately a \"queued error\" of some kind, now it means\napproximately a \"queued task\".\n\nMy question is: do we really need to suppress these non-ereporting\ninterrupts in all the places we currently do HOLD_INTERRUPTS()? The\nreason I'm wondering about this is because the new ProcSignalBarrier\nmechanism has to wait for any HOLD_INTERRUPTS() sections across all\nbackends to complete, and that possibly includes long cleanup loops\nthat perform disk I/O. While some future ProcSignalBarrier handler\nmight indeed not be safe during eg cleanup (perhaps because it can\nereport(ERROR)), it is free to return false to defer itself until the\nnext CFI.\n\nConcretely, for example, where xact.c holds interrupts:\n\n /* Prevent cancel/die interrupt while cleaning up */\n HOLD_INTERRUPTS();\n\n... 
or where dsm_detach does something similar, there is probably no\nreason we should have to delay a ProcSignalBarrier just to accomplish\nwhat the comment says. Presumably it really just wants to make sure\nit doesn't lose control of the program counter via non-local return.\nI get, though, that the current coding avoids a class of bug: we'd\nhave to make absolutely certain that so-called non-ereporting\ninterrupts really can't ereport, or chaos would ensue.\n\nNo patch yet, this is more of a problem statement, and a request for a\nsanity check on my understanding of how we got here, namely that it's\nreally just a path dependency due to the way that interrupts have\nevolved.\n\n\n",
"msg_date": "Wed, 25 May 2022 14:47:41 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "HOLD_INTERRUPTS() vs ProcSignalBarrier"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-25 14:47:41 +1200, Thomas Munro wrote:\n> My question is: do we really need to suppress these non-ereporting\n> interrupts in all the places we currently do HOLD_INTERRUPTS()?\n\nMost of those should be fairly short / only block on lwlocks, small amounts of\nIO. I'm not sure how much of an issue this is. Are there actually CFIs inside\nthose HOLD_INTERRUPT sections?\n\n\n> The reason I'm wondering about this is because the new ProcSignalBarrier\n> mechanism has to wait for any HOLD_INTERRUPTS() sections across all backends\n> to complete, and that possibly includes long cleanup loops that perform disk\n> I/O. While some future ProcSignalBarrier handler might indeed not be safe\n> during eg cleanup (perhaps because it can ereport(ERROR)), it is free to\n> return false to defer itself until the next CFI.\n> \n> Concretely, for example, where xact.c holds interrupts:\n> \n> /* Prevent cancel/die interrupt while cleaning up */\n> HOLD_INTERRUPTS();\n> \n> ... or where dsm_detach does something similar, there is probably no\n> reason we should have to delay a ProcSignalBarrier just to accomplish\n> what the comment says. Presumably it really just wants to make sure\n> it doesn't lose control of the program counter via non-local return.\n\nI don't think that's quite it. There are elog(ERROR) reachable from within\nHOLD_INTERRUPTS() sections (it's not a critical section after all). I think\nit's more that there's no point in reacting to interrupts in those spots,\nbecause e.g. processing ProcDiePending requires aborting the currently active\ntransaction.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 24 May 2022 20:08:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOLD_INTERRUPTS() vs ProcSignalBarrier"
},
{
"msg_contents": "On Wed, May 25, 2022 at 3:08 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-05-25 14:47:41 +1200, Thomas Munro wrote:\n> > My question is: do we really need to suppress these non-ereporting\n> > interrupts in all the places we currently do HOLD_INTERRUPTS()?\n>\n> Most of those should be fairly short / only block on lwlocks, small amounts of\n> IO. I'm not sure how much of an issue this is. Are there actually CFIs inside\n> those HOLD_INTERRUPT sections?\n\nThe concrete example I have in mind is the one created by me in\n637668fb. That can reach a walkdir() that unlinks a ton of temporary\nfiles, and has a CFI() in it.\n\nMaybe that particular case should just be using\nHOLD_CANCEL_INTERRUPTS() instead, but that's not quite bulletproof\nenough (see note about parallel interrupts not respecting it), which\nmade me start wondering about some other way to say \"hold everything\nexcept non-ereturning interrupts\".\n\n\n",
"msg_date": "Wed, 25 May 2022 15:47:11 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOLD_INTERRUPTS() vs ProcSignalBarrier"
},
{
"msg_contents": "On Tue, May 24, 2022 at 11:47 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> The concrete example I have in mind is the one created by me in\n> 637668fb. That can reach a walkdir() that unlinks a ton of temporary\n> files, and has a CFI() in it.\n\nHmm, I missed that commit, and I have to say I'm a bit doubtful about\nit. I don't know what would be better, but using HOLD_INTERRUPTS()\nacross that much code seems pretty sketch.\n\n> Maybe that particular case should just be using\n> HOLD_CANCEL_INTERRUPTS() instead, but that's not quite bulletproof\n> enough (see note about parallel interrupts not respecting it), which\n> made me start wondering about some other way to say \"hold everything\n> except non-ereturning interrupts\".\n\nConsidering the current uses of HOLD_CANCEL_INTERRUPTS(), maybe we\nought to make parallel interrupts respect that flag. It's not obvious\nthat it's impossible to reach the parallel message handling stuff\nwhile we're in the middle of some wire protocol communication, but\nthere's no loss either way. If the code is reachable, then it's\nincorrect not to hold off parallel message processing at that point,\nand if it's unreachable, then it doesn't matter whether we hold off\nparallel message processing at that point.\n\nIn the case of the DSM cleanup code, we probably shouldn't be\nreceiving parallel messages while we're trying to tear down a DSM\nsegment, unless we've got multiple DSM segments around and the one\nwe're tearing down is not the one being used by parallel query.\nHowever, that's an unlikely scenario, and probably won't cost much if\nit happens, and we can't really afford to lose control of the program\ncounter anyway.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 May 2022 08:25:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOLD_INTERRUPTS() vs ProcSignalBarrier"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWhen working on some table_rewrite related projects, I noticed that we don’t\nhave tab-complete for \"ON table_rewrite\" when creating the event trigger. I think it\nmight be better to add this. Here is the patch. Thoughts ?\n\nBest regards,\nHou zhijie",
"msg_date": "Wed, 25 May 2022 03:40:51 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Tab-complete for CREATE EVENT TRIGGER ON TABLE_REWRITE"
},
{
"msg_contents": "On Wed, May 25, 2022 at 03:40:51AM +0000, houzj.fnst@fujitsu.com wrote:\n> When working on some table_rewrite related projects, I noticed that we don’t\n> have tab-complete for \"ON table_rewrite\" when creating the event trigger. I think it\n> might be better to add this. Here is the patch. Thoughts ?\n\nIndeed. Will fix, thanks.\n--\nMichael",
"msg_date": "Wed, 25 May 2022 12:56:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tab-complete for CREATE EVENT TRIGGER ON TABLE_REWRITE"
}
] |
[
{
"msg_contents": "The 002_pg_upgrade.pl test leaves a file delete_old_cluster.sh in the \nsource directory. In vpath builds, there shouldn't be any files written \nto the source directory.\n\nNote that the TAP tests run with the source directory as the current \ndirectory, so this is the result of pg_upgrade leaving its output files \nin the current directory.\n\nIt looks like an addition of\n\n chdir $ENV{TESTOUTDIR};\n\ncould fix it. Please check the patch.",
"msg_date": "Wed, 25 May 2022 08:21:26 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "pg_upgrade test writes to source directory"
},
{
"msg_contents": "On Wed, May 25, 2022 at 08:21:26AM +0200, Peter Eisentraut wrote:\n> The 002_pg_upgrade.pl test leaves a file delete_old_cluster.sh in the source\n> directory. In vpath builds, there shouldn't be any files written to the\n> source directory.\n>\n> Note that the TAP tests run with the source directory as the current\n> directory, so this is the result of pg_upgrade leaving its output files in\n> the current directory.\n\nGood catch, thanks.\n\n> It looks like an addition of\n> \n> chdir $ENV{TESTOUTDIR};\n> \n> could fix it. Please check the patch.\n\nI think that you mean TESTDIR, and not TESTOUTDIR? Doing a chdir at\nthe beginning of the tests would cause pg_regress to fail as we would\nnot find anymore the regression schedule in a VPATH build, but it is\npossible to chdir before the execution of pg_upgrade, like the\nattached.\n--\nMichael",
"msg_date": "Wed, 25 May 2022 16:25:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "On 25.05.22 09:25, Michael Paquier wrote:\n>> It looks like an addition of\n>>\n>> chdir $ENV{TESTOUTDIR};\n>>\n>> could fix it. Please check the patch.\n> I think that you mean TESTDIR, and not TESTOUTDIR?\n\nI chose TESTOUTDIR because it corresponds to the tmp_check directory, so \nthat the output files of the pg_upgrade run are removed when the test \nartifacts are cleaned up. When using TESTDIR, the pg_upgrade output \nfiles end up in the build directory, which is less bad than the source \ndirectory, but still not ideal.\n\n\n",
"msg_date": "Thu, 26 May 2022 16:36:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "On Thu, May 26, 2022 at 04:36:47PM +0200, Peter Eisentraut wrote:\n> I chose TESTOUTDIR because it corresponds to the tmp_check directory, so\n> that the output files of the pg_upgrade run are removed when the test\n> artifacts are cleaned up. When using TESTDIR, the pg_upgrade output files\n> end up in the build directory, which is less bad than the source directory,\n> but still not ideal.\n\nWhere does the choice of TESTOUTDIR come from? I am a bit surprised\nby this choice, to be honest, because there is no trace of it in the\nbuildfarm client or the core code. TESTDIR, on the other hand, points\nto tmp_check/ if not set. It gets set it in vcregress.pl and\nMakefile.global.in.\n--\nMichael",
"msg_date": "Fri, 27 May 2022 05:43:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "On Fri, May 27, 2022 at 05:43:04AM +0900, Michael Paquier wrote:\n> On Thu, May 26, 2022 at 04:36:47PM +0200, Peter Eisentraut wrote:\n> > I chose TESTOUTDIR because it corresponds to the tmp_check directory, so\n> > that the output files of the pg_upgrade run are removed when the test\n> > artifacts are cleaned up. When using TESTDIR, the pg_upgrade output files\n> > end up in the build directory, which is less bad than the source directory,\n> > but still not ideal.\n> \n> Where does the choice of TESTOUTDIR come from? I am a bit surprised\n> by this choice, to be honest, because there is no trace of it in the\n> buildfarm client or the core code. TESTDIR, on the other hand, points\n> to tmp_check/ if not set. It gets set it in vcregress.pl and\n> Makefile.global.in.\n\nIt looks like Peter working on top of the meson branch.\nTESTOUTDIR is not yet in master. \n\nhttps://commitfest.postgresql.org/38/3395/\nhttps://github.com/anarazel/postgres/tree/meson\nhttps://github.com/anarazel/postgres/commit/e754bde6d0d3cb6329a5bf568e19eb271c3bdc7c\n\ncommit e754bde6d0d3cb6329a5bf568e19eb271c3bdc7c\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Mon Feb 14 21:47:07 2022 -0800\n\n wip: split TESTDIR into two.\n\n\n",
"msg_date": "Thu, 26 May 2022 15:52:18 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "On Thu, May 26, 2022 at 03:52:18PM -0500, Justin Pryzby wrote:\n> It looks like Peter working on top of the meson branch.\n> TESTOUTDIR is not yet in master. \n\nThanks for the reference. I didn't know this part of the puzzle.\n\n> https://commitfest.postgresql.org/38/3395/\n> https://github.com/anarazel/postgres/tree/meson\n> https://github.com/anarazel/postgres/commit/e754bde6d0d3cb6329a5bf568e19eb271c3bdc7c\n> \n> commit e754bde6d0d3cb6329a5bf568e19eb271c3bdc7c\n> Author: Andres Freund <andres@anarazel.de>\n> Date: Mon Feb 14 21:47:07 2022 -0800\n> \n> wip: split TESTDIR into two.\n\nWell, we need to do something about that on HEAD, and it also means\nthat TESTDIR is the best fit for the job now, except if the variable\nsplit happens before REL_15_STABLE is forked.\n--\nMichael",
"msg_date": "Fri, 27 May 2022 07:03:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, May 26, 2022 at 03:52:18PM -0500, Justin Pryzby wrote:\n>> It looks like Peter working on top of the meson branch.\n>> TESTOUTDIR is not yet in master. \n\n> Well, we need to do something about that on HEAD, and it also means\n> that TESTDIR is the best fit for the job now, except if the variable\n> split happens before REL_15_STABLE is forked.\n\nIt looks like that patch is meant to resolve misbehaviors equivalent to\nthis one that already exist in several other places. So fixing this\none along with the other ones seems like an appropriate thing to do\nwhen that lands.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 May 2022 18:19:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "On Thu, May 26, 2022 at 06:19:56PM -0400, Tom Lane wrote:\n> It looks like that patch is meant to resolve misbehaviors equivalent to\n> this one that already exist in several other places. So fixing this\n> one along with the other ones seems like an appropriate thing to do\n> when that lands.\n\nWell, would this specific change land in REL_15_STABLE? From what I\ncan see, generating delete_old_cluster.sh in the source rather than\nthe build directory is a defect from 322becb, as test.sh issues\npg_upgrade from the build path in ~14, but we do it from the source\npath to get an access to parallel_schedule.\n--\nMichael",
"msg_date": "Fri, 27 May 2022 08:23:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, May 26, 2022 at 06:19:56PM -0400, Tom Lane wrote:\n>> It looks like that patch is meant to resolve misbehaviors equivalent to\n>> this one that already exist in several other places. So fixing this\n>> one along with the other ones seems like an appropriate thing to do\n>> when that lands.\n\n> Well, would this specific change land in REL_15_STABLE?\n\nI wouldn't object to doing that, and even back-patching. It looked\nlike a pretty sane change, and we've learned before that skimping on\nback-branch test infrastructure is a poor tradeoff.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 May 2022 19:51:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "On Thu, May 26, 2022 at 07:51:08PM -0400, Tom Lane wrote:\n> I wouldn't object to doing that, and even back-patching. It looked\n> like a pretty sane change, and we've learned before that skimping on\n> back-branch test infrastructure is a poor tradeoff.\n\nOkay, fine by me. Andres, what do you think about backpatching [1]?\n\n[1]: https://github.com/anarazel/postgres/commit/e754bde6d0d3cb6329a5bf568e19eb271c3bdc7c\n--\nMichael",
"msg_date": "Fri, 27 May 2022 09:05:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "On 26.05.22 22:52, Justin Pryzby wrote:\n> On Fri, May 27, 2022 at 05:43:04AM +0900, Michael Paquier wrote:\n>> On Thu, May 26, 2022 at 04:36:47PM +0200, Peter Eisentraut wrote:\n>>> I chose TESTOUTDIR because it corresponds to the tmp_check directory, so\n>>> that the output files of the pg_upgrade run are removed when the test\n>>> artifacts are cleaned up. When using TESTDIR, the pg_upgrade output files\n>>> end up in the build directory, which is less bad than the source directory,\n>>> but still not ideal.\n>>\n>> Where does the choice of TESTOUTDIR come from? I am a bit surprised\n>> by this choice, to be honest, because there is no trace of it in the\n>> buildfarm client or the core code. TESTDIR, on the other hand, points\n>> to tmp_check/ if not set. It gets set it in vcregress.pl and\n>> Makefile.global.in.\n> \n> It looks like Peter working on top of the meson branch.\n> TESTOUTDIR is not yet in master.\n\nOoops, yeah. :)\n\nI think you can just chdir to ${PostgreSQL::Test::Utils::tmp_check}.\n\n\n",
"msg_date": "Fri, 27 May 2022 14:45:57 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "On Fri, May 27, 2022 at 02:45:57PM +0200, Peter Eisentraut wrote:\n> I think you can just chdir to ${PostgreSQL::Test::Utils::tmp_check}.\n\nHmm. I think that I prefer your initial suggestion with TESTOUTDIR.\nThis sticks better in the long term, while making things consistent\nwith 010_tab_completion.pl, the only test that moves to TESTDIR while\nrunning. So my vote would be to backpatch first the addition of\nTESTOUTDIR, then fix the TAP test of pg_upgrade on HEAD to do the\nsame.\n\nAnd I have just noticed that I completely forgot to add Andres about\nthis specific point, as meson is his work. So done now.\n--\nMichael",
"msg_date": "Sat, 28 May 2022 17:56:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "On 28.05.22 10:56, Michael Paquier wrote:\n> On Fri, May 27, 2022 at 02:45:57PM +0200, Peter Eisentraut wrote:\n>> I think you can just chdir to ${PostgreSQL::Test::Utils::tmp_check}.\n> \n> Hmm. I think that I prefer your initial suggestion with TESTOUTDIR.\n> This sticks better in the long term, while making things consistent\n> with 010_tab_completion.pl, the only test that moves to TESTDIR while\n> running. So my vote would be to backpatch first the addition of\n> TESTOUTDIR, then fix the TAP test of pg_upgrade on HEAD to do the\n> same.\n\nI think it's a bit premature to talk about backpatching, since the patch \nin question hasn't been committed anywhere yet, and AFAICT hasn't even \nreally been reviewed yet.\n\nIf you want to go this direction, I suggest you extract the patch and \npresent it here on its own merit. -- But then I might ask why such a \nbroad change post beta when apparently a one-line change would also work.\n\n\n",
"msg_date": "Sat, 28 May 2022 20:29:27 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-27 09:05:43 +0900, Michael Paquier wrote:\n> On Thu, May 26, 2022 at 07:51:08PM -0400, Tom Lane wrote:\n> > I wouldn't object to doing that, and even back-patching. It looked\n> > like a pretty sane change, and we've learned before that skimping on\n> > back-branch test infrastructure is a poor tradeoff.\n> \n> Okay, fine by me. Andres, what do you think about backpatching [1]?\n> \n> [1]: https://github.com/anarazel/postgres/commit/e754bde6d0d3cb6329a5bf568e19eb271c3bdc7c\n\nWell, committing and backpatching ;)\n\nI suspect there might be a bit more polish might be needed - that's why I\nhadn't proposed the commit on its own yet. I was also wondering about\nproposing a different split (test data, test logs).\n\nI don't even know if we still need TESTDIR - since f4ce6c4d3a3 we add the\nbuild dir to PATH, which IIUC was the reason for TESTDIR previously. Afaics\nafter f4ce6c4d3a3 and the TESTOUTDIR split the only TESTDIR use is in\nsrc/tools/msvc/ecpg_regression.proj - so we could at least restrict it to\nthat.\n\n\nStuff I noticed on a quick skim:\n\n> # In a VPATH build, we'll be started in the source directory, but we want\n> # to run in the build directory so that we can use relative paths to\n> # access the tmp_check subdirectory; otherwise the output from filename\n> # completion tests is too variable.\n\nJust needs a bit of rephrasing.\n\n\n>\t# Determine output directories, and create them. The base path is the\n>\t# TESTDIR environment variable, which is normally set by the invoking\n>\t# Makefile.\n>\t$tmp_check = $ENV{TESTOUTDIR} ? \"$ENV{TESTOUTDIR}\" : \"tmp_check\";\n>\t$log_path = \"$tmp_check/log\";\n\nProbably just needs a s/TESTDIR/TESTOUTDIR/\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 28 May 2022 12:59:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I suspect there might be a bit more polish might be needed - that's why I\n> hadn't proposed the commit on its own yet.\n\nYeah, I'd noticed the obsoleted comments too, but not bothered to complain\nsince that was just WIP and not an officially proposed patch. I'll be\nhappy to review if you want to put up a full patch.\n\n> I was also wondering about\n> proposing a different split (test data, test logs).\n\nMight be too invasive for back-patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 May 2022 16:14:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "On Sat, May 28, 2022 at 04:14:01PM -0400, Tom Lane wrote:\n> Yeah, I'd noticed the obsoleted comments too, but not bothered to complain\n> since that was just WIP and not an officially proposed patch. I'll be\n> happy to review if you want to put up a full patch.\n\nWell, here is a formal patch set, then. Please feel free to comment.\n\nFWIW, I am on the fence with dropping TESTDIR, as it could be used by\nout-of-core test code as well. If there are doubts about\nback-patching the first part, doing that only on HEAD would be fine to\nfix the problem of this thread.\n--\nMichael",
"msg_date": "Tue, 31 May 2022 16:17:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "On 31.05.22 09:17, Michael Paquier wrote:\n> On Sat, May 28, 2022 at 04:14:01PM -0400, Tom Lane wrote:\n>> Yeah, I'd noticed the obsoleted comments too, but not bothered to complain\n>> since that was just WIP and not an officially proposed patch. I'll be\n>> happy to review if you want to put up a full patch.\n> Well, here is a formal patch set, then. Please feel free to comment.\n> \n> FWIW, I am on the fence with dropping TESTDIR, as it could be used by\n> out-of-core test code as well. If there are doubts about\n> back-patching the first part, doing that only on HEAD would be fine to\n> fix the problem of this thread.\n\nI don't understand the point of this first patch at all. Why define \nTESTOUTDIR as a separate variable if it's always TESTDIR + tmp_check? \nWhy define TESTOUTDIR in pg_regress invocations, if nothing uses it? If \nyou want it as a separate variable, it could be defined in some Per \nutility module, but I don't see why it needs to be in Makefile.global. \nWhat is the problem that this is trying to solve?\n\n\n\n\n",
"msg_date": "Wed, 1 Jun 2022 16:11:16 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 31.05.22 09:17, Michael Paquier wrote:\n>> Well, here is a formal patch set, then. Please feel free to comment.\n>> \n>> FWIW, I am on the fence with dropping TESTDIR, as it could be used by\n>> out-of-core test code as well. If there are doubts about\n>> back-patching the first part, doing that only on HEAD would be fine to\n>> fix the problem of this thread.\n\n> I don't understand the point of this first patch at all. Why define \n> TESTOUTDIR as a separate variable if it's always TESTDIR + tmp_check? \n> Why define TESTOUTDIR in pg_regress invocations, if nothing uses it? If \n> you want it as a separate variable, it could be defined in some Per \n> utility module, but I don't see why it needs to be in Makefile.global. \n> What is the problem that this is trying to solve?\n\nYeah, after looking this over it seems like we could drop 0001 and\njust change 0002 to chdir into TESTDIR then into tmp_check. I'm not\nsure I see the point of inventing a new global variable either,\nand I'm definitely not happy with the proposed changes to \n010_tab_completion.pl. My recollection is that those tests\nwere intentionally written to test tab completion involving a\ndirectory name, but this change just loses that aspect entirely.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Jun 2022 10:55:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "Hi,\n\nOn 2022-06-01 16:11:16 +0200, Peter Eisentraut wrote:\n> On 31.05.22 09:17, Michael Paquier wrote:\n> > On Sat, May 28, 2022 at 04:14:01PM -0400, Tom Lane wrote:\n> > > Yeah, I'd noticed the obsoleted comments too, but not bothered to complain\n> > > since that was just WIP and not an officially proposed patch. I'll be\n> > > happy to review if you want to put up a full patch.\n> > Well, here is a formal patch set, then. Please feel free to comment.\n> > \n> > FWIW, I am on the fence with dropping TESTDIR, as it could be used by\n> > out-of-core test code as well. If there are doubts about\n> > back-patching the first part, doing that only on HEAD would be fine to\n> > fix the problem of this thread.\n> \n> I don't understand the point of this first patch at all. Why define\n> TESTOUTDIR as a separate variable if it's always TESTDIR + tmp_check? Why\n> define TESTOUTDIR in pg_regress invocations, if nothing uses it? If you\n> want it as a separate variable, it could be defined in some Per utility\n> module, but I don't see why it needs to be in Makefile.global. What is the\n> problem that this is trying to solve?\n\nUntil recently TESTDIR needed to point to the build directory containing the\nbinaries. But I'd like to be able to separate test log output from the build\ntree, so that it's easier to capture files generated by tests for CI /\nbuildfarm. The goal is to have a separate directory for each test, so we can\npresent logs for failed tests separately. That was impossible with TESTDIR,\nbecause it needed to point to the build directory.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 Jun 2022 14:11:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "On Wed, Jun 01, 2022 at 02:11:12PM -0700, Andres Freund wrote:\n> Until recently TESTDIR needed to point to the build directory containing the\n> binaries. But I'd like to be able to separate test log output from the build\n> tree, so that it's easier to capture files generated by tests for CI /\n> buildfarm. The goal is to have a separate directory for each test, so we can\n> present logs for failed tests separately. That was impossible with TESTDIR,\n> because it needed to point to the build directory.\n\nFWIW, this argument sounds sensible to me since I looked at 0001, not\nonly for the log files, but also to help in the capture of files\ngenerated by the tests like 010_tab_completion.pl.\n\nI don't know yet what to do about this part, so for now I have fixed\nthe other issue reported by Peter where the test names were missing.\n--\nMichael",
"msg_date": "Thu, 2 Jun 2022 09:37:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "\nOn 2022-06-01 We 20:37, Michael Paquier wrote:\n> On Wed, Jun 01, 2022 at 02:11:12PM -0700, Andres Freund wrote:\n>> Until recently TESTDIR needed to point to the build directory containing the\n>> binaries. But I'd like to be able to separate test log output from the build\n>> tree, so that it's easier to capture files generated by tests for CI /\n>> buildfarm. The goal is to have a separate directory for each test, so we can\n>> present logs for failed tests separately. That was impossible with TESTDIR,\n>> because it needed to point to the build directory.\n> FWIW, this argument sounds sensible to me since I looked at 0001, not\n> only for the log files, but also to help in the capture of files\n> generated by the tests like 010_tab_completion.pl.\n>\n> I don't know yet what to do about this part, so for now I have fixed\n> the other issue reported by Peter where the test names were missing.\n\n\nI hope we fix the original issue soon - it's apparently been the cause\nof numerous buildfarm failures that it was on my list to investigate\ne.g.\n<https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2022-05-15%2019%3A24%3A27>\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 2 Jun 2022 17:17:34 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "On Thu, Jun 02, 2022 at 05:17:34PM -0400, Andrew Dunstan wrote:\n> I hope we fix the original issue soon - it's apparently been the cause\n> of numerous buildfarm failures that it was on my list to investigate\n> e.g.\n> <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2022-05-15%2019%3A24%3A27>\n\nOops. Thanks, Andrew, I was not aware of that. I don't really want\nto wait more if this impacts some of the buildfarm animals. Even if\nwe don't conclude with the use of TESTOUTDIR for the time being, I see\nno strong objections in using TESTDIR/tmp_check, aka\n${PostgreSQL::Test::Utils::tmp_check}. So I propose to apply a fix\ndoing that in the next 24 hours or so. We can always switch to a\ndifferent path once we decide something else, if necessary.\n--\nMichael",
"msg_date": "Fri, 3 Jun 2022 12:29:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "On 2022-06-03 12:29:04 +0900, Michael Paquier wrote:\n> On Thu, Jun 02, 2022 at 05:17:34PM -0400, Andrew Dunstan wrote:\n> > I hope we fix the original issue soon - it's apparently been the cause\n> > of numerous buildfarm failures that it was on my list to investigate\n> > e.g.\n> > <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2022-05-15%2019%3A24%3A27>\n> \n> Oops. Thanks, Andrew, I was not aware of that. I don't really want\n> to wait more if this impacts some of the buildfarm animals. Even if\n> we don't conclude with the use of TESTOUTDIR for the time being, I see\n> no strong objections in using TESTDIR/tmp_check, aka\n> ${PostgreSQL::Test::Utils::tmp_check}. So I propose to apply a fix\n> doing that in the next 24 hours or so. We can always switch to a\n> different path once we decide something else, if necessary.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 Jun 2022 20:50:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "On Thu, Jun 02, 2022 at 08:50:12PM -0700, Andres Freund wrote:\n> +1\n\nOK, applied the extra chdir to PostgreSQL::Test::Utils::tmp_check, as\ninitially suggested by Peter. The CI looked happy on that.\n--\nMichael",
"msg_date": "Sat, 4 Jun 2022 12:21:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "Hi,\n\nOn 2022-06-01 10:55:28 -0400, Tom Lane wrote:\n> [...] I'm definitely not happy with the proposed changes to\n> 010_tab_completion.pl. My recollection is that those tests\n> were intentionally written to test tab completion involving a\n> directory name, but this change just loses that aspect entirely.\n\nHow about creating a dedicated directory for the created files, to maintain\nthat? My goal of being able to redirect the test output elsewhere can be\nachieved with just a hunk like this:\n\n@@ -70,11 +70,13 @@ delete $ENV{LS_COLORS};\n # to run in the build directory so that we can use relative paths to\n # access the tmp_check subdirectory; otherwise the output from filename\n # completion tests is too variable.\n-if ($ENV{TESTDIR})\n+if ($ENV{TESTOUTDIR})\n {\n- chdir $ENV{TESTDIR} or die \"could not chdir to \\\"$ENV{TESTDIR}\\\": $!\";\n+ chdir \"$ENV{TESTOUTDIR}\" or die \"could not chdir to \\\"$ENV{TESTOUTDIR}\\\": $!\";\n }\n \n+mkdir \"tmp_check\" unless -d \"tmp_check\";\n+\n # Create some junk files for filename completion testing.\n my $FH;\n open $FH, \">\", \"tmp_check/somefile\"\n\n\nOf course it'd need a comment adjustment etc. It's a bit ugly to use a\notherwise empty tmp_check/ directory just to reduce the diff size, but it's\nalso not too bad.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Aug 2022 08:20:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-06-01 10:55:28 -0400, Tom Lane wrote:\n>> [...] I'm definitely not happy with the proposed changes to\n>> 010_tab_completion.pl. My recollection is that those tests\n>> were intentionally written to test tab completion involving a\n>> directory name, but this change just loses that aspect entirely.\n\n> How about creating a dedicated directory for the created files, to maintain\n> that? My goal of being able to redirect the test output elsewhere can be\n> achieved with just a hunk like this:\n\nSure, there's no need for these files to be in the exact same place that\nthe output is collected. I just want to keep their same relationship\nto the test's CWD.\n\n> Of course it'd need a comment adjustment etc. It's a bit ugly to use a\n> otherwise empty tmp_check/ directory just to reduce the diff size, but it's\n> also not too bad.\n\nGiven that it's no longer going to be the same tmp_check dir used\nelsewhere, maybe we could s/tmp_check/tab_comp_dir/g or something\nlike that? That'd add some clarity I think.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Aug 2022 11:26:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-11 11:26:39 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-06-01 10:55:28 -0400, Tom Lane wrote:\n> >> [...] I'm definitely not happy with the proposed changes to\n> >> 010_tab_completion.pl. My recollection is that those tests\n> >> were intentionally written to test tab completion involving a\n> >> directory name, but this change just loses that aspect entirely.\n>\n> > How about creating a dedicated directory for the created files, to maintain\n> > that? My goal of being able to redirect the test output elsewhere can be\n> > achieved with just a hunk like this:\n>\n> Sure, there's no need for these files to be in the exact same place that\n> the output is collected. I just want to keep their same relationship\n> to the test's CWD.\n>\n> > Of course it'd need a comment adjustment etc. It's a bit ugly to use a\n> > otherwise empty tmp_check/ directory just to reduce the diff size, but it's\n> > also not too bad.\n>\n> Given that it's no longer going to be the same tmp_check dir used\n> elsewhere, maybe we could s/tmp_check/tab_comp_dir/g or something\n> like that? That'd add some clarity I think.\n\nDone in the attached patch (0001).\n\nA bunch of changes (e.g. f4ce6c4d3a3) made since I'd first written that\nTESTOUTDIR patch means that we don't need two different variables anymore. So\npatch 0002 just moves the addition of /tmp_check from Utils.pm to the places\nin which TESTDIR is defined.\n\nThat still \"forces\" tmp_check/ to exist when going through pg_regress, but\nthat's less annoying because pg_regress at least keeps\nregression.{diffs,out}/log files/directory outside of tmp_check/.\n\nI've also attached a 0003 that splits the log location from the data\nlocation. That could be used to make the log file location symmetrical between\npg_regress (log/) and tap tests (tmp_check/log). 
But it'd break the\nbuildfarm's tap test log file collection, so I don't think that's something we\nreally can do soon-ish?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 15 Aug 2022 20:20:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-15 20:20:51 -0700, Andres Freund wrote:\n> On 2022-08-11 11:26:39 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2022-06-01 10:55:28 -0400, Tom Lane wrote:\n> > >> [...] I'm definitely not happy with the proposed changes to\n> > >> 010_tab_completion.pl. My recollection is that those tests\n> > >> were intentionally written to test tab completion involving a\n> > >> directory name, but this change just loses that aspect entirely.\n> >\n> > > How about creating a dedicated directory for the created files, to maintain\n> > > that? My goal of being able to redirect the test output elsewhere can be\n> > > achieved with just a hunk like this:\n> >\n> > Sure, there's no need for these files to be in the exact same place that\n> > the output is collected. I just want to keep their same relationship\n> > to the test's CWD.\n> >\n> > > Of course it'd need a comment adjustment etc. It's a bit ugly to use a\n> > > otherwise empty tmp_check/ directory just to reduce the diff size, but it's\n> > > also not too bad.\n> >\n> > Given that it's no longer going to be the same tmp_check dir used\n> > elsewhere, maybe we could s/tmp_check/tab_comp_dir/g or something\n> > like that? That'd add some clarity I think.\n> \n> Done in the attached patch (0001).\n> \n> A bunch of changes (e.g. f4ce6c4d3a3) made since I'd first written that\n> TESTOUTDIR patch means that we don't need two different variables anymore. So\n> patch 0002 just moves the addition of /tmp_check from Utils.pm to the places\n> in which TESTDIR is defined.\n> \n> That still \"forces\" tmp_check/ to exist when going through pg_regress, but\n> that's less annoying because pg_regress at least keeps\n> regression.{diffs,out}/log files/directory outside of tmp_check/.\n> \n> I've also attached a 0003 that splits the log location from the data\n> location. 
That could be used to make the log file location symmetrical between\n> pg_regress (log/) and tap tests (tmp_check/log). But it'd break the\n> buildfarm's tap test log file collection, so I don't think that's something we\n> really can do soon-ish?\n\nOops, 0003 had some typos in it that I added last minute... Corrected patches\nattached.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 15 Aug 2022 22:14:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-11 11:26:39 -0400, Tom Lane wrote:\n>> Given that it's no longer going to be the same tmp_check dir used\n>> elsewhere, maybe we could s/tmp_check/tab_comp_dir/g or something\n>> like that? That'd add some clarity I think.\n\n> Done in the attached patch (0001).\n\nI was confused by 0001, because with the present test setup that will\nresult in creating an extra tab_comp_dir that isn't inside tmp_check,\nleading to needing cleanup infrastructure that isn't there. However,\n0002 clarifies that: you're redefining TESTDIR. I think 0001 is OK\nas long as you apply it after, or integrate it into, 0002.\n\n> patch 0002 just moves the addition of /tmp_check from Utils.pm to the places\n> in which TESTDIR is defined.\n\nI see some references to TESTDIR in src/tools/msvc/ecpg_regression.proj.\nIt looks like those are not references to this variable but uses of the\n\n <PropertyGroup>\n <TESTDIR>..\\..\\interfaces\\ecpg\\test</TESTDIR>\n\nthingy at the top of the file. Still, it's a bit confusing --- should\nwe rename that? Maybe not worth the trouble given the short expected\nlifespan of the MSVC test scripts. 0002 seems fine otherwise.\n\n> I've also attached a 0003 that splits the log location from the data\n> location. That could be used to make the log file location symmetrical between\n> pg_regress (log/) and tap tests (tmp_check/log). But it'd break the\n> buildfarm's tap test log file collection, so I don't think that's something we\n> really can do soon-ish?\n\nNo particular opinion about 0003 -- as you say, that's going to be\ngated by the buildfarm.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Aug 2022 10:58:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "\nOn 2022-08-15 Mo 23:20, Andres Freund wrote:\n>\n> I've also attached a 0003 that splits the log location from the data\n> location. That could be used to make the log file location symmetrical between\n> pg_regress (log/) and tap tests (tmp_check/log). But it'd break the\n> buildfarm's tap test log file collection, so I don't think that's something we\n> really can do soon-ish?\n\n\nWhere would you like to have the buildfarm client search? Currently it\ndoes this:\n\n\n my @logs = glob(\"$dir/tmp_check/log/*\");\n\n $log->add_log($_) foreach (@logs);\n\nI can add another pattern in that glob expression. I'm intending to put\nout a new release pretty soon (before US Labor Day).\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 16 Aug 2022 11:33:12 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-16 11:33:12 -0400, Andrew Dunstan wrote:\n> On 2022-08-15 Mo 23:20, Andres Freund wrote:\n> >\n> > I've also attached a 0003 that splits the log location from the data\n> > location. That could be used to make the log file location symmetrical between\n> > pg_regress (log/) and tap tests (tmp_check/log). But it'd break the\n> > buildfarm's tap test log file collection, so I don't think that's something we\n> > really can do soon-ish?\n> \n> \n> Where would you like to have the buildfarm client search? Currently it\n> does this:\n> \n> \n> ��� my @logs = glob(\"$dir/tmp_check/log/*\");\n> \n> ��� $log->add_log($_) foreach (@logs);\n> \n> I can add another pattern in that glob expression. I'm intending to put\n> out a new release pretty soon (before US Labor Day).\n\n$dir/log, so it's symmetric to the location of log files of regress/isolation\ntests.\n\nThanks!\n\nAndres\n\n\n",
"msg_date": "Tue, 16 Aug 2022 08:42:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test writes to source directory"
}
]
[
{
"msg_contents": "Hello,\n\nVik Fearing pointed out the inconsistency in the SQL Standard that imposes\nusing count(*) (with a star) but row)number() without it.\n\nVik's point of view is that we should be able to use row_number with a\nstar, which is already implemented in Postgres.\n\nMy point of view is we could add support for count(). It does not remove\nthe compliance with the SQL Standard, it just adds an extra feature.\n\nYou will find enclosed a patch proposal to allow count to be used without a\nstar. I, on purpose, decided not to document this behavior, maybe that's\nwrong.\n\nHave a great day,\n\nLætitia",
"msg_date": "Wed, 25 May 2022 12:26:47 +0200",
"msg_from": "Laetitia Avrot <laetitia.avrot@gmail.com>",
"msg_from_op": true,
"msg_subject": "Authorizing select count()"
},
{
"msg_contents": "On Wed, May 25, 2022 at 12:26:47PM +0200, Laetitia Avrot wrote:\n> You will find enclosed a patch proposal to allow count to be used without a\n> star. I, on purpose, decided not to document this behavior, maybe that's\n> wrong.\n\nThis originates from 108fe47, most likely as part of this thread. The\npatch proposed by Sergey did not include this restriction, though:\nhttps://www.postgresql.org/message-id/Pine.LNX.4.64.0607241340090.19158%40lnfm1.sai.msu.ru\n\nTom?\n--\nMichael",
"msg_date": "Thu, 26 May 2022 09:33:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Authorizing select count()"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, May 25, 2022 at 12:26:47PM +0200, Laetitia Avrot wrote:\n>> You will find enclosed a patch proposal to allow count to be used without a\n>> star. I, on purpose, decided not to document this behavior, maybe that's\n>> wrong.\n\n> This originates from 108fe47, most likely as part of this thread.\n\nI'm fairly sure that in the past we've considered this idea and rejected\nit, mainly on the grounds that it's a completely gratuitous departure\nfrom SQL standard. I quite agree that the syntax without star would be\nsaner, but once we get into inventing \"saner\" variants of SQL syntax,\nwhere do we stop? And how much are we buying really?\n\nI definitely don't agree with doing it but not documenting it; that\nwill just result in endless confusion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 May 2022 21:27:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Authorizing select count()"
},
{
"msg_contents": "I wrote:\n> I'm fairly sure that in the past we've considered this idea and rejected\n> it, mainly on the grounds that it's a completely gratuitous departure\n> from SQL standard.\n\nAfter some more digging I found the thread that (I think) the \"mere\npedantry\" comment was referring to:\n\nhttps://www.postgresql.org/message-id/flat/Pine.LNX.4.44.0604131644260.20730-100000%40lnfm1.sai.msu.ru\n\nThere's other nearby discussion at\n\nhttps://www.postgresql.org/message-id/flat/4476BABD.4080100%40zigo.dhs.org\n\n(note that that's referring to the klugy state of affairs before 108fe4730)\n\nOf course, that's just a couple of offhand email threads, which should\nnot be mistaken for graven stone tablets. But I still don't see much\nadvantage in deviating from the SQL-standard syntax for COUNT(*).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 May 2022 01:27:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Authorizing select count()"
}
]
[
{
"msg_contents": "\nHi,\n\nToday, I try to use repeat() to generate 1GB text, and it occurs invalid memory\nalloc request size [1]. It is a limit from palloc(), then I try to reduce it,\nit still complains out of memory which comes from enlargeStringInfo() [2]. The\ndocumentation about repect() [3] doesn't mentaion the limitation.\n\nI want to known the max memory size that the repect() can use? Should we\nmentaion it in documentation? Or should we report an error in repeat() if the\nsize exceeds the limitation?\n\n[1]\npostgres=# select repeat('x', 1024 * 1024 * 1024);\nERROR: invalid memory alloc request size 1073741828\n\n[2]\npostgres=# select repeat('x', 1024 * 1024 * 1024 - 5);\nERROR: out of memory\nDETAIL: Cannot enlarge string buffer containing 6 bytes by 1073741819 more bytes.\n\n[3] https://www.postgresql.org/docs/14/functions-string.html\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Wed, 25 May 2022 22:34:42 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Invalid memory alloc request size for repeat()"
},
{
"msg_contents": "On Wednesday, May 25, 2022, Japin Li <japinli@hotmail.com> wrote:\n\n>\n> Hi,\n>\n> Today, I try to use repeat() to generate 1GB text, and it occurs invalid\n> memory\n> alloc request size [1]. It is a limit from palloc(), then I try to reduce\n> it,\n> it still complains out of memory which comes from enlargeStringInfo()\n> [2]. The\n> documentation about repect() [3] doesn't mentaion the limitation.\n>\n\nThat is still a “field” even if it is not stored.\n\nhttps://www.postgresql.org/docs/current/limits.html\n\nDavid J.\n\nOn Wednesday, May 25, 2022, Japin Li <japinli@hotmail.com> wrote:\nHi,\n\nToday, I try to use repeat() to generate 1GB text, and it occurs invalid memory\nalloc request size [1]. It is a limit from palloc(), then I try to reduce it,\nit still complains out of memory which comes from enlargeStringInfo() [2]. The\ndocumentation about repect() [3] doesn't mentaion the limitation.\nThat is still a “field” even if it is not stored.https://www.postgresql.org/docs/current/limits.htmlDavid J.",
"msg_date": "Wed, 25 May 2022 07:41:11 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Invalid memory alloc request size for repeat()"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> Today, I try to use repeat() to generate 1GB text, and it occurs invalid memory\n> alloc request size [1]. It is a limit from palloc(), then I try to reduce it,\n> it still complains out of memory which comes from enlargeStringInfo() [2]. The\n> documentation about repect() [3] doesn't mentaion the limitation.\n\nIt would probably make sense for repeat() to check this explicitly:\n\n if (unlikely(pg_mul_s32_overflow(count, slen, &tlen)) ||\n- unlikely(pg_add_s32_overflow(tlen, VARHDRSZ, &tlen)))\n+ unlikely(pg_add_s32_overflow(tlen, VARHDRSZ, &tlen)) ||\n+ unlikely(!AllocSizeIsValid(tlen)))\n ereport(ERROR,\n (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n errmsg(\"requested length too large\")));\n\nThe failure in enlargeStringInfo is probably coming from trying to\nconstruct an output message to send back to the client. That's\ngoing to be a lot harder to do anything nice about (and even if\nthe backend didn't fail, the client might).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 May 2022 10:50:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Invalid memory alloc request size for repeat()"
},
{
"msg_contents": "\nOn Wed, 25 May 2022 at 22:41, David G. Johnston <david.g.johnston@gmail.com> wrote:\n> On Wednesday, May 25, 2022, Japin Li <japinli@hotmail.com> wrote:\n>\n>>\n>> Hi,\n>>\n>> Today, I try to use repeat() to generate 1GB text, and it occurs invalid\n>> memory\n>> alloc request size [1]. It is a limit from palloc(), then I try to reduce\n>> it,\n>> it still complains out of memory which comes from enlargeStringInfo()\n>> [2]. The\n>> documentation about repect() [3] doesn't mentaion the limitation.\n>>\n>\n> That is still a “field” even if it is not stored.\n>\n> https://www.postgresql.org/docs/current/limits.html\n>\n\nI mean this is a limitation about repect() function, it isn't really about 1GB,\nwe can only use 1GB - 4 for it.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 26 May 2022 09:02:08 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Invalid memory alloc request size for repeat()"
},
{
"msg_contents": "\nOn Wed, 25 May 2022 at 22:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> Today, I try to use repeat() to generate 1GB text, and it occurs invalid memory\n>> alloc request size [1]. It is a limit from palloc(), then I try to reduce it,\n>> it still complains out of memory which comes from enlargeStringInfo() [2]. The\n>> documentation about repect() [3] doesn't mentaion the limitation.\n>\n> It would probably make sense for repeat() to check this explicitly:\n>\n> if (unlikely(pg_mul_s32_overflow(count, slen, &tlen)) ||\n> - unlikely(pg_add_s32_overflow(tlen, VARHDRSZ, &tlen)))\n> + unlikely(pg_add_s32_overflow(tlen, VARHDRSZ, &tlen)) ||\n> + unlikely(!AllocSizeIsValid(tlen)))\n> ereport(ERROR,\n> (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n> errmsg(\"requested length too large\")));\n>\n\nLGTM. Thanks for your patch!\n\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 26 May 2022 09:03:54 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Invalid memory alloc request size for repeat()"
},
{
"msg_contents": "\nOn Thu, 26 May 2022 at 09:03, Japin Li <japinli@hotmail.com> wrote:\n> On Wed, 25 May 2022 at 22:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Japin Li <japinli@hotmail.com> writes:\n>>> Today, I try to use repeat() to generate 1GB text, and it occurs invalid memory\n>>> alloc request size [1]. It is a limit from palloc(), then I try to reduce it,\n>>> it still complains out of memory which comes from enlargeStringInfo() [2]. The\n>>> documentation about repect() [3] doesn't mentaion the limitation.\n>>\n>> It would probably make sense for repeat() to check this explicitly:\n>>\n>> if (unlikely(pg_mul_s32_overflow(count, slen, &tlen)) ||\n>> - unlikely(pg_add_s32_overflow(tlen, VARHDRSZ, &tlen)))\n>> + unlikely(pg_add_s32_overflow(tlen, VARHDRSZ, &tlen)) ||\n>> + unlikely(!AllocSizeIsValid(tlen)))\n>> ereport(ERROR,\n>> (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n>> errmsg(\"requested length too large\")));\n>>\n>\n> LGTM. Thanks for your patch!\n\nAfter some analysis, I found it might not easy to solve this.\n\nFor example,\n\n```\npostgres=# CREATE TABLE myrepeat AS SELECT repeat('a', 1024 * 1024 * 1024 - 5);\nERROR: invalid memory alloc request size 1073741871\n```\n\nHere is the backtrace:\n\n#0 palloc0 (size=1073741871) at /mnt/workspace/postgresql/build/../src/backend/utils/mmgr/mcxt.c:1103\n#1 0x0000561925199faf in heap_form_tuple (tupleDescriptor=0x561927cb4310, values=0x561927cb4470, isnull=0x561927cb4478)\n at /mnt/workspace/postgresql/build/../src/backend/access/common/heaptuple.c:1069\n#2 0x00005619254879aa in tts_virtual_copy_heap_tuple (slot=0x561927cb4428) at /mnt/workspace/postgresql/build/../src/backend/executor/execTuples.c:280\n#3 0x000056192548a40c in ExecFetchSlotHeapTuple (slot=0x561927cb4428, materialize=true, shouldFree=0x7ffc9cc5197f)\n at /mnt/workspace/postgresql/build/../src/backend/executor/execTuples.c:1660\n#4 0x000056192520e0c9 in heapam_tuple_insert (relation=0x7fdacaa9d3e0, slot=0x561927cb4428, cid=5, options=2, 
bistate=0x561927cb83f0)\n at /mnt/workspace/postgresql/build/../src/backend/access/heap/heapam_handler.c:245\n#5 0x00005619253b2edf in table_tuple_insert (rel=0x7fdacaa9d3e0, slot=0x561927cb4428, cid=5, options=2, bistate=0x561927cb83f0)\n at /mnt/workspace/postgresql/build/../src/include/access/tableam.h:1376\n#6 0x00005619253b3d73 in intorel_receive (slot=0x561927cb4428, self=0x561927c82430) at /mnt/workspace/postgresql/build/../src/backend/commands/createas.c:586\n#7 0x0000561925478d86 in ExecutePlan (estate=0x561927cb3ed0, planstate=0x561927cb4108, use_parallel_mode=false, operation=CMD_SELECT, sendTuples=true, numberTuples=0,\n direction=ForwardScanDirection, dest=0x561927c82430, execute_once=true) at /mnt/workspace/postgresql/build/../src/backend/executor/execMain.c:1667\n#8 0x0000561925476735 in standard_ExecutorRun (queryDesc=0x561927cab990, direction=ForwardScanDirection, count=0, execute_once=true)\n at /mnt/workspace/postgresql/build/../src/backend/executor/execMain.c:363\n#9 0x000056192547654b in ExecutorRun (queryDesc=0x561927cab990, direction=ForwardScanDirection, count=0, execute_once=true)\n at /mnt/workspace/postgresql/build/../src/backend/executor/execMain.c:307\n#10 0x00005619253b3711 in ExecCreateTableAs (pstate=0x561927c35b00, stmt=0x561927b5aa70, params=0x0, queryEnv=0x0, qc=0x7ffc9cc52370)\n\nThe heap_form_tupe() function need extra 48 bytes for HeapTupleHeaderData and HEAPTUPLESIZE.\n\nIf we use the following, everything is ok.\n\npostgres=# CREATE TABLE myrepeat AS SELECT repeat('a', 1024 * 1024 * 1024 - 5 - 48);\nSELECT 1\n\n\nIf we want to send the result to the client. 
Here is an error.\n\npostgres=# SELECT repeat('a', 1024 * 1024 * 1024 - 5);\nERROR: out of memory\nDETAIL: Cannot enlarge string buffer containing 6 bytes by 1073741819 more bytes.\n\nThis is because the printtup() needs send to the number of attributes (2-byte)\nand the length of data (4-byte) and then the data (1073741819-byte).\n\nDo those mean we cannot store 1GB to a field [1] and send 1GB of data to the client?\n\n[1] https://www.postgresql.org/docs/current/limits.html\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 27 May 2022 12:51:04 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Invalid memory alloc request size for repeat()"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> Do those mean we cannot store 1GB to a field [1] and send 1GB of data to the client?\n\nThat's what I said upthread. I'm not terribly excited about that.\nShoving gigabyte-sized field values around as atomic strings is not\ngoing to lead to anything but pain: even if the server can manage\nit, clients will likely fall over. (Try a string a little smaller\nthan that, and notice how much psql sucks at handling it.)\n\nThere's been speculation from time to time about creating some\nsort of streaming interface that would allow processing enormous\nfield values more reasonably. You can kinda-sorta do that now\nwith large objects, but those have enough other limitations and\nissues that they're not very recommendable as a general solution.\nSomeone should try to develop a better version of that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 May 2022 01:03:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Invalid memory alloc request size for repeat()"
}
]
[
{
"msg_contents": "Hi all,\n\nOn the thread about the removal of VS 2013, Jose (in CC) has mentioned\nthat bumping MIN_WINNT independently would make sense, as the\nsimplication of locales would expose under MinGW some code for\nGetLocaleInfoEx():\nhttps://www.postgresql.org/message-id/CAC+AXB3himFH+-pGRO1cYju6zF2hLH6VmwPbf5RAytF1UBm_nw@mail.gmail.com\n\nAttached is a patch to set MIN_WINNT, the minimal version of Windows\nallowed at run-time to 0x0600 for all environments, aka Vista. This\nresults in removing the support for XP at run-time when compiling with\nanything else than VS >= 2015 (VS 2013, MinGW, etc.). We could cut\nthings more, I hope, but this bump makes sense in itself with the\nbusiness related to locales.\n\nWhat I would like to do is to apply that at the beginning of the dev\ncycle for v16, in parallel of the removal of VS 2013. This move is\nrather independent of the other thread, which is why I am spawning a\nnew one here. And it is better than having to dig into the other\nthread for a change like that.\n\nThoughts or opinions?\n--\nMichael",
"msg_date": "Thu, 26 May 2022 11:59:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Thu, May 26, 2022 at 2:59 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On the thread about the removal of VS 2013, Jose (in CC) has mentioned\n> that bumping MIN_WINNT independently would make sense, as the\n> simplication of locales would expose under MinGW some code for\n> GetLocaleInfoEx():\n> https://www.postgresql.org/message-id/CAC+AXB3himFH+-pGRO1cYju6zF2hLH6VmwPbf5RAytF1UBm_nw@mail.gmail.com\n>\n> Attached is a patch to set MIN_WINNT, the minimal version of Windows\n> allowed at run-time to 0x0600 for all environments, aka Vista. This\n> results in removing the support for XP at run-time when compiling with\n> anything else than VS >= 2015 (VS 2013, MinGW, etc.). We could cut\n> things more, I hope, but this bump makes sense in itself with the\n> business related to locales.\n>\n> What I would like to do is to apply that at the beginning of the dev\n> cycle for v16, in parallel of the removal of VS 2013. This move is\n> rather independent of the other thread, which is why I am spawning a\n> new one here. And it is better than having to dig into the other\n> thread for a change like that.\n>\n> Thoughts or opinions?\n\nI think we should drop everything older than Win 10 for PG16, as\nargued in various threads where various pain points came up. For one\nthing, that would make a lot of future work simpler (ie not needing to\ntest alternative code paths on dead computers without CI or BF, AKA\ndead code), and also I don't think we really help anyone by allowing\nnew database deployments on operating systems that aren't receiving\nvendor patches on the world's most attacked operating system. Doing\nit incrementally is fine by me, too, though, if it makes the patches\nand discussions easier...\n\n\n",
"msg_date": "Thu, 26 May 2022 16:16:40 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Thu, May 26, 2022 at 04:16:40PM +1200, Thomas Munro wrote:\n> I think we should drop everything older than Win 10 for PG16, as\n> argued in various threads where various pain points came up. For one\n> thing, that would make a lot of future work simpler (ie not needing to\n> test alternative code paths on dead computers without CI or BF, AKA\n> dead code), and also I don't think we really help anyone by allowing\n> new database deployments on operating systems that aren't receiving\n> vendor patches on the world's most attacked operating system. Doing\n> it incrementally is fine by me, too, though, if it makes the patches\n> and discussions easier...\n\nIs there anything posted recently that would require that? Perhaps\nthe async work? FWIW, I agree to be much more aggressive, but there\nis nothing in the tree now that depends on _WIN32_WINNT, except one\nchange for the locales.\n--\nMichael",
"msg_date": "Thu, 26 May 2022 13:27:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Thu, May 26, 2022 at 6:27 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n> Is there anything posted recently that would require that? Perhaps\n> the async work? FWIW, I agree to be much more aggressive, but there\n> is nothing in the tree now that depends on _WIN32_WINNT, except one\n> change for the locales.\n\n\nThere have been a couple of discussions involving not only Windows\nversion10, but also the Release id:\n\nhttps://commitfest.postgresql.org/38/3347/\nhttps://commitfest.postgresql.org/38/3530/\nhttps://www.postgresql.org/message-id/6389b5a88e114bee85593af2853c08cd%40dental-vision.de\n\nMaybe this thread can push those others forward.\n\nRegards,\n\nJuan José Santamaría Flecha\n\n\n>\n\nOn Thu, May 26, 2022 at 6:27 AM Michael Paquier <michael@paquier.xyz> wrote:\nIs there anything posted recently that would require that? Perhaps\nthe async work? FWIW, I agree to be much more aggressive, but there\nis nothing in the tree now that depends on _WIN32_WINNT, except one\nchange for the locales.There have been a couple of discussions involving not only Windows version10, but also the Release id:https://commitfest.postgresql.org/38/3347/https://commitfest.postgresql.org/38/3530/https://www.postgresql.org/message-id/6389b5a88e114bee85593af2853c08cd%40dental-vision.deMaybe this thread can push those others forward.Regards,Juan José Santamaría Flecha",
"msg_date": "Thu, 26 May 2022 10:17:08 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Thu, May 26, 2022 at 4:27 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Perhaps the async work?\n\n(Checks code...) Looks like the experimental Windows native AIO code\nwe have today, namely io_method=windows_iocp, only needs Vista.\nThat's for GetQueueCompletionStatusEx() (before that you had to call\nGetQueuedCompletionStatus() in a loop to read multiple IO completion\nevents from an IOCP), and otherwise it's all just ancient Windows\n\"overlapped\" stuff. Not sure if we'll propose that io_method, or skip\ndirectly to the new io_uring-style API that appeared in Win 11 (not\nyet tried), or propose both as long as Win 10 is around, or ... I\ndunno.\n\n\n",
"msg_date": "Fri, 27 May 2022 08:59:01 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Thu, May 26, 2022 at 10:17:08AM +0200, Juan José Santamaría Flecha wrote:\n> There have been a couple of discussions involving not only Windows\n> version10, but also the Release id:\n> \n> https://commitfest.postgresql.org/38/3347/\n\nThis mentions 0x0A00, aka Windows 10, for atomic rename support.\n\n> https://commitfest.postgresql.org/38/3530/\n\nSimilarly 0x0A00, aka Windows 10, for fdatasync().\n\n> https://www.postgresql.org/message-id/6389b5a88e114bee85593af2853c08cd%40dental-vision.de\n\nAnd Windows 10 1703, for large pages.\n\n> Maybe this thread can push those others forward.\n\nPost Windows 8, the most annoying part is that manifests are required\nto be able to check at run-time on which version you are running with\nroutines like IsWindowsXXOrGreater() if you compiled with a threshold\nof MIN_WINNT lower than the version you expect compatibility for.\nAnd each one of those things would mean cutting a lot of past support\nif we want to eliminate the manifest part. Windows 8 ends its support\nin 2023, it seems, so that sounds short even for PG16.\n--\nMichael",
"msg_date": "Fri, 27 May 2022 12:53:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Fri, May 27, 2022 at 3:53 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Windows 8 ends its support\n> in 2023, it seems, so that sounds short even for PG16.\n\nI guess you meant 8.1 here, and corresponding server release 2012 R2.\nThese will come to the end of their \"extended\" support phase in 2023,\nbefore PG16 comes out. If I understand correctly (and I'm not a\nWindows user, I just googled this), they will start showing blue\nfull-screen danger-Will-Robinson alerts about viruses and malware.\nWhy would we have explicit support for that in a new release? Do we\nwant people putting their users' data in such a system? Can you go to\nGDPR jail for that in Europe? (Joking, I think).\n\nWe should go full Marie Kondo on EOL'd OSes that are not in our CI or\nbuild farm, IMHO.\n\n\n",
"msg_date": "Sat, 28 May 2022 09:07:31 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "> On 27 May 2022, at 23:07, Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> We should go full Marie Kondo on EOL'd OSes that are not in our CI or\n> build farm, IMHO.\n\nFWIW, +1\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sat, 28 May 2022 17:30:51 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Sat, May 28, 2022 at 05:30:51PM +0200, Daniel Gustafsson wrote:\n> On 27 May 2022, at 23:07, Thomas Munro <thomas.munro@gmail.com> wrote:\n>> We should go full Marie Kondo on EOL'd OSes that are not in our CI or\n>> build farm, IMHO.\n> \n> FWIW, +1\n\nOkay, I am outnumbered, and that would mean bumping MIN_WINNT to\n0x0A00. So, ready to move to this version at full speed for 16? We\nstill have a couple of weeks ahead before the next dev cycle begins,\nso feel free to comment, of course.\n--\nMichael",
"msg_date": "Mon, 30 May 2022 15:59:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Mon, May 30, 2022 at 03:59:52PM +0900, Michael Paquier wrote:\n> Okay, I am outnumbered, and that would mean bumping MIN_WINNT to\n> 0x0A00. So, ready to move to this version at full speed for 16? We\n> still have a couple of weeks ahead before the next dev cycle begins,\n> so feel free to comment, of course.\n\nAnd attached is an updated patch to do exactly that.\n--\nMichael",
"msg_date": "Thu, 9 Jun 2022 12:55:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 3:55 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, May 30, 2022 at 03:59:52PM +0900, Michael Paquier wrote:\n> > Okay, I am outnumbered, and that would mean bumping MIN_WINNT to\n> > 0x0A00. So, ready to move to this version at full speed for 16? We\n> > still have a couple of weeks ahead before the next dev cycle begins,\n> > so feel free to comment, of course.\n>\n> And attached is an updated patch to do exactly that.\n\n <productname>PostgreSQL</productname> can be expected to work on\nthese operating\n- systems: Linux (all recent distributions), Windows (XP and later),\n+ systems: Linux (all recent distributions), Windows (10 and later),\n FreeBSD, OpenBSD, NetBSD, macOS, AIX, HP/UX, and Solaris.\n\nCool. (I'm not sure what \"all recent distributions\" contributes but\nthat's not from your patch...).\n\nThe Cygwin stuff in installation.sgml also mentions NT, 2000, XP, but\nit's not clear from the phrasing if it meant \"and later\" or \"and\nearlier\", so I'm not sure if it needs adjusting or removing...\n\nWhile looking for more stuff to vacuum, I found this:\n\n <title>Special Considerations for 64-Bit Windows</title>\n\n <para>\n PostgreSQL will only build for the x64 architecture on 64-bit Windows, there\n is no support for Itanium processors.\n </para>\n\nI think we can drop mention of Itanium (RIP): the ancient versions of\nWindows that could run on that arch are desupported with your patch.\nIt might be more relevant to say that we can't yet run on ARM, and\nWindows 11 is untested by us, but let's fix those problems instead of\ndocumenting them :-)\n\nIs the mention of \"64-Bit\" in that title sounds a little anachronistic\nto me (isn't that the norm now?) but I'm not sure what to suggest.\n\nThere are more mentions of older Windows releases near the compiler\nstuff but perhaps your MSVC version vacuuming work will take care of\nthose.\n\n\n",
"msg_date": "Thu, 9 Jun 2022 16:55:45 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Thu, Jun 09, 2022 at 04:55:45PM +1200, Thomas Munro wrote:\n> The Cygwin stuff in installation.sgml also mentions NT, 2000, XP, but\n> it's not clear from the phrasing if it meant \"and later\" or \"and\n> earlier\", so I'm not sure if it needs adjusting or removing...\n\nRight. We could just remove the entire mention to \"NT, 2000 or XP\"\ninstead? There would be no loss in clarity IMO.\n\n> I think we can drop mention of Itanium (RIP): the ancient versions of\n> Windows that could run on that arch are desupported with your patch.\n> It might be more relevant to say that we can't yet run on ARM, and\n> Windows 11 is untested by us, but let's fix those problems instead of\n> documenting them :-)\n\nOkay to remove the Itanium part for me.\n\n> Is the mention of \"64-Bit\" in that title sounds a little anachronistic\n> to me (isn't that the norm now?) but I'm not sure what to suggest.\n\nNot sure. I think that I would leave this part alone for now.\n\n> There are more mentions of older Windows releases near the compiler\n> stuff but perhaps your MSVC version vacuuming work will take care of\n> those.\n\nYes, I have a few changes like the one in main.c for _M_AMD64. Are\nyou referring to something else?\n--\nMichael",
"msg_date": "Thu, 9 Jun 2022 14:47:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Thu, Jun 09, 2022 at 02:47:36PM +0900, Michael Paquier wrote:\n> On Thu, Jun 09, 2022 at 04:55:45PM +1200, Thomas Munro wrote:\n>> I think we can drop mention of Itanium (RIP): the ancient versions of\n>> Windows that could run on that arch are desupported with your patch.\n>> It might be more relevant to say that we can't yet run on ARM, and\n>> Windows 11 is untested by us, but let's fix those problems instead of\n>> documenting them :-)\n> \n> Okay to remove the Itanium part for me.\n\ninstall-windows.sgml has one extra spot mentioning Windows 7 and\nServer 2008 that can be simplified on top of that.\n\n>> There are more mentions of older Windows releases near the compiler\n>> stuff but perhaps your MSVC version vacuuming work will take care of\n>> those.\n> \n> Yes, I have a few changes like the one in main.c for _M_AMD64. Are\n> you referring to something else?\n\nActually, this can go with the bump of MIN_WINNT as it uses one of the\nIsWindows*OrGreater() macros as a runtime check. And there are two\nmore places in pg_ctl.c that can be similarly cleaned up.\n\nIt is possible that I have missed some spots, of course.\n--\nMichael",
"msg_date": "Thu, 16 Jun 2022 15:14:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Thu, Jun 16, 2022 at 03:14:16PM +0900, Michael Paquier wrote:\n> Actually, this can go with the bump of MIN_WINNT as it uses one of the\n> IsWindows*OrGreater() macros as a runtime check. And there are two\n> more places in pg_ctl.c that can be similarly cleaned up.\n> \n> It is possible that I have missed some spots, of course.\n\nIt does not seem to be the case on a second look. The buildfarm\nanimals running Windows are made of:\n- hamerkop, Windows server 2016 (based on Win10 AFAIK)\n- drongo, Windows server 2019\n- bowerbird, Windows 10 pro\n- jacana, Windows 10\n- fairywren, Msys server 2019\n- bichir, Ubuntu/Windows 10 mix\n\nNow that v16 is open for business, any objections to move on with this\npatch and bump MIN_WINNT to 0x0A00 on HEAD? There are follow-up items\nfor large pages and more.\n--\nMichael",
"msg_date": "Wed, 6 Jul 2022 16:27:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Wed, Jul 6, 2022 at 7:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Jun 16, 2022 at 03:14:16PM +0900, Michael Paquier wrote:\n> > Actually, this can go with the bump of MIN_WINNT as it uses one of the\n> > IsWindows*OrGreater() macros as a runtime check. And there are two\n> > more places in pg_ctl.c that can be similarly cleaned up.\n> >\n> > It is possible that I have missed some spots, of course.\n>\n> It does not seem to be the case on a second look. The buildfarm\n> animals running Windows are made of:\n> - hamerkop, Windows server 2016 (based on Win10 AFAIK)\n> - drongo, Windows server 2019\n> - bowerbird, Windows 10 pro\n> - jacana, Windows 10\n> - fairywren, Msys server 2019\n> - bichir, Ubuntu/Windows 10 mix\n>\n> Now that v16 is open for business, any objections to move on with this\n> patch and bump MIN_WINNT to 0x0A00 on HEAD? There are follow-up items\n> for large pages and more.\n\n+1 for proceeding. This will hopefully unblock a few things, and it's\ngood to update our claims to match the reality of what we are actually\ntesting and able to debug.\n\nThe build farm also has frogmouth and currawong, 32 bit systems\nrunning Windows XP, but they are only testing REL_10_STABLE so I\nassume Andrew will decommission them in November.\n\n\n",
"msg_date": "Thu, 7 Jul 2022 08:46:26 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "\nOn 2022-07-06 We 16:46, Thomas Munro wrote:\n> On Wed, Jul 6, 2022 at 7:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> On Thu, Jun 16, 2022 at 03:14:16PM +0900, Michael Paquier wrote:\n>>> Actually, this can go with the bump of MIN_WINNT as it uses one of the\n>>> IsWindows*OrGreater() macros as a runtime check. And there are two\n>>> more places in pg_ctl.c that can be similarly cleaned up.\n>>>\n>>> It is possible that I have missed some spots, of course.\n>> It does not seem to be the case on a second look. The buildfarm\n>> animals running Windows are made of:\n>> - hamerkop, Windows server 2016 (based on Win10 AFAIK)\n>> - drongo, Windows server 2019\n>> - bowerbird, Windows 10 pro\n>> - jacana, Windows 10\n>> - fairywren, Msys server 2019\n>> - bichir, Ubuntu/Windows 10 mix\n>>\n>> Now that v16 is open for business, any objections to move on with this\n>> patch and bump MIN_WINNT to 0x0A00 on HEAD? There are follow-up items\n>> for large pages and more.\n> +1 for proceeding. This will hopefully unblock a few things, and it's\n> good to update our claims to match the reality of what we are actually\n> testing and able to debug.\n>\n> The build farm also has frogmouth and currawong, 32 bit systems\n> running Windows XP, but they are only testing REL_10_STABLE so I\n> assume Andrew will decommission them in November.\n\n\nYeah, it's not capable of supporting anything newer, so it will finally\ngo to sleep this year.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 6 Jul 2022 17:13:27 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Wed, Jul 06, 2022 at 05:13:27PM -0400, Andrew Dunstan wrote:\n> On 2022-07-06 We 16:46, Thomas Munro wrote:\n>> The build farm also has frogmouth and currawong, 32 bit systems\n>> running Windows XP, but they are only testing REL_10_STABLE so I\n>> assume Andrew will decommission them in November.\n> \n> Yeah, it's not capable of supporting anything newer, so it will finally\n> go to sleep this year.\n\nOkay, thanks for confirming. I think that I'll give it a try today\nthen, my schedule would fit nicely with the buildfarm monitoring.\n--\nMichael",
"msg_date": "Thu, 7 Jul 2022 09:11:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Thu, Jul 07, 2022 at 09:11:57AM +0900, Michael Paquier wrote:\n> Okay, thanks for confirming. I think that I'll give it a try today\n> then, my schedule would fit nicely with the buildfarm monitoring.\n\nAnd I have applied that, after noticing that the MinGW was complaining\nbecause _WIN32_WINNT was not getting set like previously and removing\n_WIN32_WINNT as there is no need for it anymore. The CI has reported\ngreen for all my tests, so I am rather confident to not have messed up\nsomething. Now, let's see what the buildfarm members tell. This\nshould take a couple of hours..\n--\nMichael",
"msg_date": "Thu, 7 Jul 2022 13:56:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Thu, Jul 07, 2022 at 01:56:39PM +0900, Michael Paquier wrote:\n> And I have applied that, after noticing that the MinGW was complaining\n> because _WIN32_WINNT was not getting set like previously and removing\n> _WIN32_WINNT as there is no need for it anymore. The CI has reported\n> green for all my tests, so I am rather confident to not have messed up\n> something. Now, let's see what the buildfarm members tell. This\n> should take a couple of hours..\n\nSince this has been applied, all the Windows members have reported a\ngreen state except for jacana and bowerbird. Based on their\nenvironment, I would not expect any issues though I may be wrong.\n\nAndrew, is something happening on those environments? Is 495ed0e\ncausing problems?\n--\nMichael",
"msg_date": "Mon, 11 Jul 2022 10:22:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Thu, Jul 07, 2022 at 01:56:39PM +0900, Michael Paquier wrote:\n> On Thu, Jul 07, 2022 at 09:11:57AM +0900, Michael Paquier wrote:\n> > Okay, thanks for confirming. I think that I'll give it a try today\n> > then, my schedule would fit nicely with the buildfarm monitoring.\n> \n> And I have applied that, after noticing that the MinGW was complaining\n> because _WIN32_WINNT was not getting set like previously and removing\n> _WIN32_WINNT as there is no need for it anymore. The CI has reported\n> green for all my tests, so I am rather confident to not have messed up\n> something. Now, let's see what the buildfarm members tell. This\n> should take a couple of hours..\n\nIf I'm not wrong, there's some lingering comments which could be removed since\n495ed0ef2.\n\nsrc/bin/pg_ctl/pg_ctl.c: * on NT4. That way, we don't break on NT4.\nsrc/bin/pg_ctl/pg_ctl.c: * On NT4, or any other system not containing the required functions, will\nsrc/bin/pg_ctl/pg_ctl.c: * NT4 doesn't have CreateRestrictedToken, so just call ordinary\nsrc/port/dirmod.c: * Win32 (NT4 and newer).\nsrc/backend/port/win32/socket.c: /* No error, zero bytes (win2000+) or error+WSAEWOULDBLOCK (<=nt4) */\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 26 Aug 2022 06:26:37 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 06:26:37AM -0500, Justin Pryzby wrote:\n> If I'm not wrong, there's some lingering comments which could be removed since\n> 495ed0ef2.\n\nIt seems to me that you are right. I have not thought about looking\nat references to NT. Good catches!\n\n> src/bin/pg_ctl/pg_ctl.c: * on NT4. That way, we don't break on NT4.\n> src/bin/pg_ctl/pg_ctl.c: * On NT4, or any other system not containing the required functions, will\n> src/bin/pg_ctl/pg_ctl.c: * NT4 doesn't have CreateRestrictedToken, so just call ordinary\n> src/port/dirmod.c: * Win32 (NT4 and newer).\n> src/backend/port/win32/socket.c: /* No error, zero bytes (win2000+) or error+WSAEWOULDBLOCK (<=nt4) */\n\nThere is also a reference to Nt4 in win32.c, for a comment that is\nirrelevant now, so it can be IMO removed.\n\nThere may be a point in enforcing CreateProcess() if\nCreateRestrictedToken() cannot be loaded, but that would be a security\nissue if Windows goes crazy as we should always expect the function,\nso this had better return an error.\n\nSo, what do you think about the attached?\n--\nMichael",
"msg_date": "Sat, 27 Aug 2022 14:35:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Sat, Aug 27, 2022 at 02:35:25PM +0900, Michael Paquier wrote:\n> There may be a point in enforcing CreateProcess() if\n> CreateRestrictedToken() cannot be loaded, but that would be a security\n> issue if Windows goes crazy as we should always expect the function,\n> so this had better return an error.\n\nAnd applied as of b1ec7f4.\n--\nMichael",
"msg_date": "Tue, 30 Aug 2022 09:57:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 12:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Sat, Aug 27, 2022 at 02:35:25PM +0900, Michael Paquier wrote:\n> > There may be a point in enforcing CreateProcess() if\n> > CreateRestrictedToken() cannot be loaded, but that would be a security\n> > issue if Windows goes crazy as we should always expect the function,\n> > so this had better return an error.\n>\n> And applied as of b1ec7f4.\n\nThis reminds me of 24c3ce8f1, which replaced a dlopen()-ish thing with\na direct function call. Can you just call all these functions\ndirectly these days?\n\n\n",
"msg_date": "Tue, 30 Aug 2022 13:29:24 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 01:29:24PM +1200, Thomas Munro wrote:\n> This reminds me of 24c3ce8f1, which replaced a dlopen()-ish thing with\n> a direct function call. Can you just call all these functions\n> directly these days?\n\nHmm. Some tests in the CI show that attempting to call directly\nMiniDumpWriteDump() causes a linking failure at compilation. Anyway,\nin the same fashion, I can get some simplifications done right for\npg_ctl.c, auth.c and restricted_token.c. And I am seeing all these\nfunctions listed in the headers of MinGW, meaning that all these\nshould work out of the box in this case, no?\n\nThe others are library-dependent, and I not really confident about\nldap_start_tls_sA(). So, at the end, I am finishing with the\nattached, what do you think? This cuts some code, which is nice:\n 3 files changed, 48 insertions(+), 159 deletions(-)\n--\nMichael",
"msg_date": "Thu, 8 Sep 2022 17:55:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Thu, Sep 08, 2022 at 05:55:40PM +0900, Michael Paquier wrote:\n> On Tue, Aug 30, 2022 at 01:29:24PM +1200, Thomas Munro wrote:\n> > This reminds me of 24c3ce8f1, which replaced a dlopen()-ish thing with\n> > a direct function call. Can you just call all these functions\n> > directly these days?\n> \n> Hmm. Some tests in the CI show that attempting to call directly\n> MiniDumpWriteDump() causes a linking failure at compilation. Anyway,\n> in the same fashion, I can get some simplifications done right for\n> pg_ctl.c, auth.c and restricted_token.c. And I am seeing all these\n> functions listed in the headers of MinGW, meaning that all these\n> should work out of the box in this case, no?\n> \n> The others are library-dependent, and I not really confident about\n> ldap_start_tls_sA(). So, at the end, I am finishing with the\n> attached, what do you think? This cuts some code, which is nice:\n> 3 files changed, 48 insertions(+), 159 deletions(-)\n\n+1\n\nIt seems silly to do it at runtime if it's possible to do it at link time.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 8 Sep 2022 08:29:20 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Thu, Sep 08, 2022 at 08:29:20AM -0500, Justin Pryzby wrote:\n> It seems silly to do it at runtime if it's possible to do it at link time.\n\nThanks. This set of simplifications is too good to let go, and I have\na window to look after the buildfarm today and tomorrow, which should\nbe enough to take action if need be. Hence, I have applied the\npatch. Now, let's see what the buildfarm tells us ;)\n--\nMichael",
"msg_date": "Fri, 9 Sep 2022 10:55:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Fri, Sep 09, 2022 at 10:55:55AM +0900, Michael Paquier wrote:\n> Thanks. This set of simplifications is too good to let go, and I have\n> a window to look after the buildfarm today and tomorrow, which should\n> be enough to take action if need be. Hence, I have applied the\n> patch. Now, let's see what the buildfarm tells us ;)\n\nBased on what I can see, the Windows animals seem to have digested\n47bd0b3 (cygwin, MinGW and MSVC), so I think that we are good.\n--\nMichael",
"msg_date": "Fri, 9 Sep 2022 20:11:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Fri, Sep 09, 2022 at 08:11:09PM +0900, Michael Paquier wrote:\n> Based on what I can see, the Windows animals seem to have digested\n> 47bd0b3 (cygwin, MinGW and MSVC), so I think that we are good.\n\nThe last part that's worth adjusting is ldap_start_tls_sA(), which\nwould lead to the attached simplification. The MinGW headers list\nthis routine, so like the previous change I think that it should be\nsafe for such builds.\n\nLooking at the buildfarm animals, bowerbird, jacana, fairywren,\nlorikeet and drongo disable ldap. hamerkop is the only member that\nprovides coverage for it, still that's a MSVC build.\n\nThe CI provides coverage for ldap as it is enabled by default and\nwindows_build_config.pl does not tell otherwise, but with the existing\nanimals we don't have ldap coverage under MinGW.\n\nAnyway, I'd like to apply the attached, and I don't quite see why it\nwould not work after 47bd0b3 under MinGW. Any thoughts?\n--\nMichael",
"msg_date": "Sun, 11 Sep 2022 09:28:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Sun, Sep 11, 2022 at 09:28:54AM +0900, Michael Paquier wrote:\n> On Fri, Sep 09, 2022 at 08:11:09PM +0900, Michael Paquier wrote:\n> > Based on what I can see, the Windows animals seem to have digested\n> > 47bd0b3 (cygwin, MinGW and MSVC), so I think that we are good.\n> \n> The last part that's worth adjusting is ldap_start_tls_sA(), which\n> would lead to the attached simplification. The MinGW headers list\n> this routine, so like the previous change I think that it should be\n> safe for such builds.\n> \n> Looking at the buildfarm animals, bowerbird, jacana, fairywren,\n> lorikeet and drongo disable ldap. hamerkop is the only member that\n> provides coverage for it, still that's a MSVC build.\n> \n> The CI provides coverage for ldap as it is enabled by default and\n> windows_build_config.pl does not tell otherwise, but with the existing\n> animals we don't have ldap coverage under MinGW.\n> \n> Anyway, I'd like to apply the attached, and I don't quite see why it\n> would not work after 47bd0b3 under MinGW. Any thoughts?\n\nThere's a CF entry to add it, and I launched it with your patch.\n(This is in a branch which already has that, and also does a few other\nthings differently).\n\nhttps://cirrus-ci.com/task/6302833585684480\n\n[02:07:57.497] checking whether to build with LDAP support... yes\n\nIt compiles, which is probably all that matters, and eventually skips\nthe test anyway.\n\n[02:23:18.209] [02:23:18] c:/cirrus/src/test/ldap/t/001_auth.pl .. skipped: ldap tests not supported on MSWin32 or dependencies not installed\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 10 Sep 2022 22:39:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Sat, Sep 10, 2022 at 10:39:19PM -0500, Justin Pryzby wrote:\n> There's a CF entry to add it, and I launched it with your patch.\n> (This is in a branch which already has that, and also does a few other\n> things differently).\n\nNo need for a CF entry if you want to play with the tree. I have\nCirrus enabled on my own fork of Postgres on github, and I saw the\nsame result as you:\nhttps://github.com/michaelpq/postgres/\n--\nMichael",
"msg_date": "Sun, 11 Sep 2022 13:09:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Sun, Sep 11, 2022 at 12:29 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Sep 09, 2022 at 08:11:09PM +0900, Michael Paquier wrote:\n> > Based on what I can see, the Windows animals seem to have digested\n> > 47bd0b3 (cygwin, MinGW and MSVC), so I think that we are good.\n\nGreat, that's a lot of nice cleanup.\n\n> The last part that's worth adjusting is ldap_start_tls_sA(), which\n> would lead to the attached simplification.\n\n- if ((r = _ldap_start_tls_sA(*ldap, NULL, NULL, NULL, NULL))\n!= LDAP_SUCCESS)\n+ if ((r = ldap_start_tls_sA(*ldap, NULL, NULL, NULL, NULL)) !=\nLDAP_SUCCESS)\n\nWhen looking the function up it made sense to use the name ending in\n'...A', but when calling directly I think we shouldn't use the A\nsuffix, we should let the <winldap.h> macros do that for us[1]. (I\nwondered for a moment if that would even make Windows and Unix code\nthe same, but sadly not due to the extra NULL arguments.)\n\n[1] https://docs.microsoft.com/en-us/windows/win32/api/winldap/nf-winldap-ldap_start_tls_sa\n\n\n",
"msg_date": "Mon, 12 Sep 2022 09:42:25 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 09:42:25AM +1200, Thomas Munro wrote:\n> When looking the function up it made sense to use the name ending in\n> '...A', but when calling directly I think we shouldn't use the A\n> suffix, we should let the <winldap.h> macros do that for us[1]. (I\n> wondered for a moment if that would even make Windows and Unix code\n> the same, but sadly not due to the extra NULL arguments.)\n\nGood idea, I did not noticed this part. This should work equally, so\ndone this way and applied. I am keeping an eye on the buildfarm.\n--\nMichael",
"msg_date": "Mon, 12 Sep 2022 09:08:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "After 495ed0e, do these references to Windows SDK 8.1 still make sense?\n\nsrc/sgml/install-windows.sgml: as well as standalone Windows SDK\nreleases 8.1a to 10.\nsrc/sgml/install-windows.sgml: <productname>Microsoft Windows\nSDK</productname> version 8.1a to 10 or\n\n\n",
"msg_date": "Mon, 15 May 2023 14:08:32 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Mon, May 15, 2023 at 02:08:32PM +1200, Thomas Munro wrote:\n> After 495ed0e, do these references to Windows SDK 8.1 still make sense?\n> \n> src/sgml/install-windows.sgml: as well as standalone Windows SDK\n> releases 8.1a to 10.\n> src/sgml/install-windows.sgml: <productname>Microsoft Windows\n> SDK</productname> version 8.1a to 10 or\n\nAs listed on https://en.wikipedia.org/wiki/Microsoft_Windows_SDK,\nlikely not. What do you think about the attached?\n--\nMichael",
"msg_date": "Mon, 15 May 2023 11:57:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Mon, May 15, 2023 at 2:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, May 15, 2023 at 02:08:32PM +1200, Thomas Munro wrote:\n> > After 495ed0e, do these references to Windows SDK 8.1 still make sense?\n\n> As listed on https://en.wikipedia.org/wiki/Microsoft_Windows_SDK,\n> likely not. What do you think about the attached?\n\nLGTM.\n\n\n",
"msg_date": "Mon, 15 May 2023 15:22:29 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
},
{
"msg_contents": "On Mon, May 15, 2023 at 03:22:29PM +1200, Thomas Munro wrote:\n> LGTM.\n\nThanks, fixed!\n--\nMichael",
"msg_date": "Mon, 15 May 2023 16:04:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Bump MIN_WINNT to 0x0600 (Vista) as minimal runtime in 16~"
}
] |
[
{
"msg_contents": "postgres and initdb not working inside docker.\n\nchmod 755 always for a mounted volume inside docker.\n\n=============\n\nFrom: Roffild <roffild@hotmail.com>\nSubject: fix chmod inside docker\n\n\ndiff --git a/src/backend/utils/init/miscinit.c \nb/src/backend/utils/init/miscinit.c\nindex 30f0f19dd5..adf3218cf9 100644\n--- a/src/backend/utils/init/miscinit.c\n+++ b/src/backend/utils/init/miscinit.c\n@@ -373,7 +373,7 @@ checkDataDir(void)\n */\n #if !defined(WIN32) && !defined(__CYGWIN__)\n if (stat_buf.st_mode & PG_MODE_MASK_GROUP)\n- ereport(FATAL,\n+ ereport(WARNING,\n(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n errmsg(\"data directory \\\"%s\\\" has invalid permissions\",\n DataDir),\n\n\n\n",
"msg_date": "Thu, 26 May 2022 12:22:35 +0300",
"msg_from": "Roffild <roffild@hotmail.com>",
"msg_from_op": true,
"msg_subject": "postgres and initdb not working inside docker"
},
{
"msg_contents": "Roffild <roffild@hotmail.com> writes:\n> postgres and initdb not working inside docker.\n> chmod 755 always for a mounted volume inside docker.\n\nThis patch will never be accepted. You don't need it if you take the\nstandard advice[1] that the Postgres data directory should not itself\nbe a mount point. Instead, make a subdirectory in the mounted volume,\nand that can have the ownership and permissions that the server expects.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/12168.1312921709%40sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 26 May 2022 13:29:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres and initdb not working inside docker"
},
{
"msg_contents": "Only in an ideal world are all standards observed...\n\nDocker has different standards inside.\n\n$ ls -l /home/neo/\ndrwxr-xr-x 2 pgsql pgsql 8192 May 27 10:37 data\ndrwxr-sr-x 2 pgsql pgsql 4096 May 24 09:35 data2\n\n/home/pgsql/data - mounted volume.\n\nTherefore, the permissions have changed to drwxr-xr-x\n\n$ mkdir /home/pgsql/data/pgtest\n$ ls -l /home/pgsql/data\ndrwxr-xr-x 2 pgsql pgsql 0 May 27 11:08 pgtest\n\n$ chmod 700 /home/pgsql/data/pgtest\n$ ls -l /home/pgsql/data\ndrwxr-xr-x 2 pgsql pgsql 0 May 27 11:08 pgtest\n\nOops...\n\nIf it's a regular \"data2\" folder and there is no \"read_only: true\" flag \nfor the container:\n$ mkdir /home/pgsql/data2/pgtest\n$ chmod 700 /home/pgsql/data2/pgtest\n$ ls -l /home/pgsql/data2\ndrwx------ 2 pgsql pgsql 4096 May 27 11:19 pgtest\n\n> Roffild writes:\n>> postgres and initdb not working inside docker.\n>> chmod 755 always for a mounted volume inside docker.\n> \n> This patch will never be accepted. You don't need it if you take the\n> standard advice[1] that the Postgres data directory should not itself\n> be a mount point. Instead, make a subdirectory in the mounted volume,\n> and that can have the ownership and permissions that the server expects.\n> \n> \t\t\tregards, tom lane\n> \n> [1] https://www.postgresql.org/message-id/12168.1312921709%40sss.pgh.pa.us\n\n\n",
"msg_date": "Fri, 27 May 2022 11:50:04 +0300",
"msg_from": "Roffild <roffild@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres and initdb not working inside docker"
},
{
"msg_contents": "Add --disable-check-permissions to ./configure\n\nAfter applying the patch, run \"autoheader -f ; autoconf\"\n\nThis patch fixes an issue inside Docker and will not affect other builds.",
"msg_date": "Sat, 28 May 2022 15:59:40 +0300",
"msg_from": "Roffild <roffild@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres and initdb not working inside docker"
},
{
"msg_contents": "> On 28 May 2022, at 14:59, Roffild <roffild@hotmail.com> wrote:\n\n> This patch fixes an issue inside Docker and will not affect other builds.\n\nLooks like you generated the patch backwards, it's removing the lines you\npropose to add.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sat, 28 May 2022 17:11:59 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: postgres and initdb not working inside docker"
},
{
"msg_contents": "Fix\n\n> Looks like you generated the patch backwards, it's removing the lines you\n> propose to add.",
"msg_date": "Sat, 28 May 2022 18:49:43 +0300",
"msg_from": "Roffild <roffild@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres and initdb not working inside docker"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 28 May 2022, at 14:59, Roffild <roffild@hotmail.com> wrote:\n>> This patch fixes an issue inside Docker and will not affect other builds.\n\n> Looks like you generated the patch backwards, it's removing the lines you\n> propose to add.\n\nLacks documentation, too. But it doesn't matter, because we are not\ngoing to accept such a \"feature\". The OP has offered no justification\nwhy this is necessary (and no, he's not the first who's ever used\nPostgres inside Docker). Introducing a security hole that goes\nagainst twenty-five years of deliberate project policy is going to\nrequire a heck of a lot better-reasoned argument than \"there's an\nissue inside Docker\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 May 2022 11:49:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres and initdb not working inside docker"
},
{
"msg_contents": "> On 28 May 2022, at 17:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On 28 May 2022, at 14:59, Roffild <roffild@hotmail.com> wrote:\n>>> This patch fixes an issue inside Docker and will not affect other builds.\n> \n>> Looks like you generated the patch backwards, it's removing the lines you\n>> propose to add.\n> \n> Lacks documentation, too. But it doesn't matter, because we are not\n> going to accept such a \"feature\". The OP has offered no justification\n> why this is necessary (and no, he's not the first who's ever used\n> Postgres inside Docker). Introducing a security hole that goes\n> against twenty-five years of deliberate project policy is going to\n> require a heck of a lot better-reasoned argument than \"there's an\n> issue inside Docker\".\n\nFWIW, I 100% agree.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sat, 28 May 2022 17:51:32 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: postgres and initdb not working inside docker"
},
{
"msg_contents": "Docker is now the DevOps standard. It's easier to build an image for \nDocker and run the site with one command.\n\nBut the volume mount has a limitation with chmod 755. I don't want to \nwrite the database directly to the container.\n\nThe container is isolated from everything. Therefore, checking the file \npermissions inside the container is meaningless. And writing to the \ncontainer is also a \"security hole\".\n\nThe world has changed! And the old standards don't work...\n\n28.05.2022 18:49, Tom Lane:\n> Lacks documentation, too. But it doesn't matter, because we are not\n> going to accept such a \"feature\". The OP has offered no justification\n> why this is necessary (and no, he's not the first who's ever used\n> Postgres inside Docker). Introducing a security hole that goes\n> against twenty-five years of deliberate project policy is going to\n> require a heck of a lot better-reasoned argument than \"there's an\n> issue inside Docker\".\n\n\n",
"msg_date": "Sat, 28 May 2022 19:34:58 +0300",
"msg_from": "Roffild <roffild@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres and initdb not working inside docker"
},
{
"msg_contents": "On Sat, May 28, 2022 at 9:35 AM Roffild <roffild@hotmail.com> wrote:\n\n> Docker is now the DevOps standard. It's easier to build an image for\n> Docker and run the site with one command.\n>\n> But the volume mount has a limitation with chmod 755. I don't want to\n> write the database directly to the container.\n>\n> The container is isolated from everything. Therefore, checking the file\n> permissions inside the container is meaningless. And writing to the\n> container is also a \"security hole\".\n>\n> The world has changed! And the old standards don't work...\n>\n>\nGiven the general lack of clamoring for this kind of change I'd be more\ninclined to believe that your specific attempt at doing this is problematic\nrather than there being a pervasive incompatibility between Docker and\nPostgreSQL. There is a host environment, a container environment, multiple\nways to expose host resources to the container, and the command line and/or\ndocker file configuration itself. None of which you've shared. So I think\nthat skepticism about your claims is quite understandable.\n\nMy suspicion is you aren't leveraging named volumes to separate the\ncontainer and storage and that doing so will give you the desired\nseparation and control of the directory permissions.\n\nBased upon my reading of:\n\nhttps://github.com/docker-library/docs/blob/master/postgres/README.md\n\nand limited personal experience using Docker, I'm inclined to believe it\ncan be made to work even if you cannot do it exactly the way you are trying\nright now. Absent a use case for why one way is preferable to another\nhaving the bar set at \"it works if you do it like this\" seems reasonable.\n\nDavid J.\n\nOn Sat, May 28, 2022 at 9:35 AM Roffild <roffild@hotmail.com> wrote:Docker is now the DevOps standard. It's easier to build an image for \nDocker and run the site with one command.\n\nBut the volume mount has a limitation with chmod 755. 
I don't want to \nwrite the database directly to the container.\n\nThe container is isolated from everything. Therefore, checking the file \npermissions inside the container is meaningless. And writing to the \ncontainer is also a \"security hole\".\n\nThe world has changed! And the old standards don't work...Given the general lack of clamoring for this kind of change I'd be more inclined to believe that your specific attempt at doing this is problematic rather than there being a pervasive incompatibility between Docker and PostgreSQL. There is a host environment, a container environment, multiple ways to expose host resources to the container, and the command line and/or docker file configuration itself. None of which you've shared. So I think that skepticism about your claims is quite understandable.My suspicion is you aren't leveraging named volumes to separate the container and storage and that doing so will give you the desired separation and control of the directory permissions.Based upon my reading of:https://github.com/docker-library/docs/blob/master/postgres/README.mdand limited personal experience using Docker, I'm inclined to believe it can be made to work even if you cannot do it exactly the way you are trying right now. Absent a use case for why one way is preferable to another having the bar set at \"it works if you do it like this\" seems reasonable.David J.",
"msg_date": "Sat, 28 May 2022 10:11:27 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres and initdb not working inside docker"
},
{
"msg_contents": "On Sat, May 28, 2022, 18:35 Roffild <roffild@hotmail.com> wrote:\n\n> But the volume mount has a limitation with chmod 755. I don't want to\n> write the database directly to the container.\n\nUsing a $PGDATA subdirectory in a mounted Volume allows you to run with 0700\nand also retain this limitation you mention. I don't believe this\nlimitation is a limitation\nof Docker - AFAIK Docker uses the permissions from the Host Directory for\nthe Mount.\n\nIn my experience we have been using (since 2014?) a subdirectory of the\nmounted Volume\nand run a statement similar to this on startup of your container, before\nstarting postgres/initdb or the like\n\ninstall -o postgres -g postgres -d -m 0700 \"${PGDATA}\"\n\n> The world has changed! And the old standards don't work...\n\nThere's enough people running Postgres in Docker containers in production\nfor almost a decade.\nIt does work!\n\nKind regards,\n\nFeike Steenbergen\n\nOn Sat, May 28, 2022, 18:35 Roffild <roffild@hotmail.com> wrote:> But the volume mount has a limitation with chmod 755. I don't want to> write the database directly to the container.Using a $PGDATA subdirectory in a mounted Volume allows you to run with 0700and also retain this limitation you mention. I don't believe this limitation is a limitationof Docker - AFAIK Docker uses the permissions from the Host Directory for the Mount.In my experience we have been using (since 2014?) a subdirectory of the mounted Volumeand run a statement similar to this on startup of your container, before starting postgres/initdb or the likeinstall -o postgres -g postgres -d -m 0700 \"${PGDATA}\"> The world has changed! And the old standards don't work...There's enough people running Postgres in Docker containers in production for almost a decade.It does work!Kind regards,Feike Steenbergen",
"msg_date": "Sat, 28 May 2022 19:12:38 +0200",
"msg_from": "Feike Steenbergen <feikesteenbergen@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres and initdb not working inside docker"
}
] |
[
{
"msg_contents": "Hi,\nWhile researching join selectivity improvements I ran into this code in \nrowtypes.c:\n\n/*\n * Have two matching columns, they must be same type\n */\nif (att1->atttypid != att2->atttypid)\n ereport(ERROR, ...\n\nWhy, for example, isn't the following trivial query allowed:\n\nSELECT *\nFROM\n (SELECT ROW(1::integer, 'robert'::text)) AS s1,\n (SELECT ROW(1::bigint, 'robert'::name)) AS s2\nWHERE s1 = s2;\n\nI guess the compatible_oper() routine could be used here to find an \nappropriate operator, or something like that could be invented.\nI looked into 2cd7084 and a4424c5, but didn't find any rationale.\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Thu, 26 May 2022 16:25:52 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Compare variables of composite type with slightly different column\n types"
},
{
"msg_contents": "On 26/5/2022 14:25, Andrey Lepikhov wrote:\n> I guess, here the compatible_oper() routine can be used to find a \n> appropriate operator, or something like that can be invented.\n> I looked into the 2cd7084 and a4424c5, but don't found any rationale.\n> \nIn accordance with this idea I prepared some code, for demo purposes only.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Sat, 28 May 2022 22:53:20 +0300",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Compare variables of composite type with slightly different\n column types"
}
] |
[
{
"msg_contents": "Based on this thread:\nhttps://www.postgresql.org/message-id/20220305083830.lpz3k3yku5lmm5xs%40jrouhaud\nordering reference:\nhttps://unicode-org.github.io/cldr-staging/charts/latest/collation/en_US_POSIX.html\n\nCREATE\nDATABASE dbicu1 LOCALE_PROVIDER icu LOCALE 'en_US.UTF-8' ICU_LOCALE\n'en-u-kf-upper' TEMPLATE 'template0';\nCREATE DATABASE dbicu2 LOCALE_PROVIDER icu LOCALE 'en_US.UTF-8' ICU_LOCALE\n'en-u-kr-latn-digit' TEMPLATE 'template0';\n--same script applies to dbicu1 and dbicu2\nBEGIN;\nCREATE COLLATION upperfirst (\n provider = icu,\n locale = 'en-u-kf-upper'\n);\nCREATE TABLE icu (\n def text,\n en text COLLATE \"en_US\",\n upfirst text COLLATE upperfirst,\n test_kr text\n);\nINSERT INTO icu\n VALUES ('a', 'a', 'a', '1 a'), ('b', 'b', 'b', 'A 11'), ('A', 'A', 'A',\n'A 19'), ('B', 'B', 'B', '8 p');\nINSERT INTO icu\n VALUES ('a', 'a', 'a', 'a 7');\nINSERT INTO icu\n VALUES ('a', 'a', 'a', 'Œ 1');\nCOMMIT;\n-----------------------\n--dbicu1\nSELECT def AS def FROM icu ORDER BY def; --since only single characters, all works\nfine.\nSELECT test_kr FROM icu ORDER BY def;\n/*\n test_kr\n---------\n A 19\n 1 a\n a 7\n Œ 1\n 8 p\n A 11\n */\n\n\n--dbicu2\nSELECT def AS def FROM icu ORDER BY def; --since only single characters, all works\nfine.\nSELECT test_kr FROM icu ORDER BY def;\n/*\n test_kr\n---------\n 1 a\n a 7\n Œ 1\n A 19\n A 11\n 8 p\n(6 rows)\n*/\n\nSince dbicu1 and dbicu2 set the default collation,\nin dbicu1 I should expect the ordering:\n number >> upper-case letters >> lower-case letters >>\ncharacter Œ (U+0153)\n\nIn dbicu2, I should expect the ordering:\n lower-case letters >> upper-case letters >> number >>\ncharacter Œ (U+0153)\n\nAs you can see, one letter works fine for both dbicu1 and dbicu2. 
However, it does\nnot work on more characters.\nOr is the result correct, and I have misunderstood something?\n\nI am not sure whether this is my personal misunderstanding.\nIn the above examples, the first characters of column *test_kr*\nare so different that the comparison is based on the first letter.\nIf the first letter is the same, the second letter is compared, and so on.\nSo for whatever collation, I should expect 'A 19' to be adjacent to 'A\n11'?\n\n\n\n-- \n I recommend David Deutsch's <<The Beginning of Infinity>>\n\n Jian",
"msg_date": "Thu, 26 May 2022 20:54:05 +0530",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": true,
"msg_subject": "ICU_LOCALE set database default icu collation but not working as\n intended."
},
{
"msg_contents": "\tjian he wrote:\n\n> CREATE\n> DATABASE dbicu1 LOCALE_PROVIDER icu LOCALE 'en_US.UTF-8' ICU_LOCALE\n> 'en-u-kf-upper' TEMPLATE 'template0';\n> CREATE DATABASE dbicu2 LOCALE_PROVIDER icu LOCALE 'en_US.UTF-8' ICU_LOCALE\n> 'en-u-kr-latn-digit' TEMPLATE 'template0';\n> [...]\n> I am not sure this is my personal misunderstanding.\n> In the above examples, the first character of column *test_kr*\n> is so different that the comparison is based on the first letter.\n> If the first letter is the same then compute the second letter..\n> So for whatever collation, I should expect 'A 19' to be adjacent with 'A\n> 11'?\n\nThe query \"SELECT test_kr FROM icu ORDER BY def;\"\ndoes not order by test_kr, so the contents of test_kr have no bearing\non the order of the results.\n\nIf you order by test_kr, the results look like what you're expecting:\n\ndbicu1=# SELECT test_kr,def FROM icu ORDER BY test_kr;\n test_kr | def \n---------+-----\n 1 a\t | a\n 8 p\t | B\n A 11\t | b\n A 19\t | A\n a 7\t | a\n Œ 1\t | a\n\ndbicu2=# SELECT test_kr,def FROM icu ORDER BY test_kr ;\n test_kr | def \n---------+-----\n A 11\t | b\n A 19\t | A\n a 7\t | a\n Œ 1\t | a\n 1 a\t | a\n 8 p\t | B\n\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Thu, 26 May 2022 22:14:46 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: ICU_LOCALE set database default icu collation but not working as\n intended."
},
{
"msg_contents": "Hi, here are some other trigger cases.\n\nCREATE DATABASE dbicu3 LOCALE_PROVIDER icu LOCALE 'en_US.UTF-8'\n ICU_LOCALE 'en-u-kr-latn-digit-kf-upper-kn-true' TEMPLATE 'template0';\nCREATE DATABASE dbicu4 LOCALE_PROVIDER icu LOCALE 'en_US.UTF-8'\n ICU_LOCALE 'en-u-kr-latn-digit-kn-true' TEMPLATE 'template0';\n--mistake\nCREATE DATABASE dbicu5 LOCALE_PROVIDER icu LOCALE 'en_US.UTF-8'\n ICU_LOCALE 'en-u-kr-latn-digit-kr-upper' TEMPLATE 'template0';\nCREATE DATABASE dbicu6 LOCALE_PROVIDER icu LOCALE 'en_US.UTF-8'\n ICU_LOCALE 'en-u-kr-latn-digit-kf-upper' TEMPLATE 'template0';\n\n--same script applies to dbicu3, dbicu4, dbicu5, dbicu6.\nbegin;\nCREATE COLLATION upperfirst (provider = icu, locale = 'en-u-kf-upper');\nCREATE TABLE icu(def text, en text COLLATE \"en_US\", upfirst text COLLATE\nupperfirst, test_kr text);\nINSERT INTO icu VALUES ('a', 'a', 'a', '1 a'), ('b','b','b', 'A 11'),\n('A','A','A','A 19'), ('B','B','B', '8 p');\nINSERT INTO icu VALUES ('a', 'a', 'a', 'a 7'),('a', 'a', 'a', 'a 117');\nINSERT INTO icu VALUES ('a', 'a', 'a', 'a 70'), ('a', 'a', 'a', 'A 70');\nINSERT INTO icu VALUES ('a', 'a', 'a', 'Œ 1');\ncommit ;\n-----------------------\nlocalhost:5433 admin@dbicu3=# SELECT test_kr FROM icu ORDER BY test_kr ;\n\n test_kr\n---------\n a 7\n A 11\n A 19\n A 70\n a 70\n a 117\n Œ 1\n 1 a\n 8 p\n(9 rows)\n--------------------------------------\nlocalhost:5433 admin@dbicu4=# SELECT test_kr FROM icu ORDER BY test_kr ;\n test_kr\n---------\n a 7\n A 11\n A 19\n a 70\n A 70\n a 117\n Œ 1\n 1 a\n 8 p\n(9 rows)\n------------------------------------------------------------------------\nlocalhost:5433 admin@dbicu6=# SELECT test_kr FROM icu ORDER BY test_kr ;\n test_kr\n---------\n A 11\n a 117\n A 19\n a 7\n A 70\n a 70\n Œ 1\n 1 a\n 8 p\n(9 rows)\n-----------------------------------------------------------------------------\n\n - dbicu3, ICU_LOCALE 'en-u-kr-latn-digit-kf-upper-kn-true' seems\n 'kf-upper' not grouped strings beginning with 
character 'A' together?\n\n\n - dbicu4, ICU_LOCALE 'en-u-kr-latn-digit-kn-true' since upper/lower not\n explicitly mentioned, and since the collation is deterministic, so\n character 'A' should be grouped together first then do the numeric value\n comparison.\n\n\n - dbicu6, ICU_LOCALE 'en-u-kr-latn-digit-kf-upper' , from the\nresult, *kr-latn-digit\n *is working as intended. But *kf-upper *seems not working.\n\nmaybe this link(\nhttps://www.unicode.org/reports/tr35/tr35-collation.html#314-case-parameters\n) can help.\n\nCan I specify as many key-value settings options (\nhttps://www.unicode.org/reports/tr35/tr35-collation.html#table-collation-settings)\nas I want in ICU_LOCALE while I create a new database?\n\n\nOn Fri, May 27, 2022 at 1:44 AM Daniel Verite <daniel@manitou-mail.org>\nwrote:\n\n> jian he wrote:\n>\n> > CREATE\n> > DATABASE dbicu1 LOCALE_PROVIDER icu LOCALE 'en_US.UTF-8' ICU_LOCALE\n> > 'en-u-kf-upper' TEMPLATE 'template0';\n> > CREATE DATABASE dbicu2 LOCALE_PROVIDER icu LOCALE 'en_US.UTF-8'\n> ICU_LOCALE\n> > 'en-u-kr-latn-digit' TEMPLATE 'template0';\n> > [...]\n> > I am not sure this is my personal misunderstanding.\n> > In the above examples, the first character of column *test_kr*\n> > is so different that the comparison is based on the first letter.\n> > If the first letter is the same then compute the second letter..\n> > So for whatever collation, I should expect 'A 19' to be adjacent with 'A\n> > 11'?\n>\n> The query \"SELECT test_kr FROM icu ORDER BY def;\"\n> does not order by test_kr, so the contents of test_kr have no bearing\n> on the order of the results.\n>\n> If you order by test_kr, the results look like what you're expecting:\n>\n> dbicu1=# SELECT test_kr,def FROM icu ORDER BY test_kr;\n> test_kr | def\n> ---------+-----\n> 1 a | a\n> 8 p | B\n> A 11 | b\n> A 19 | A\n> a 7 | a\n> Œ 1 | a\n>\n> dbicu2=# SELECT test_kr,def FROM icu ORDER BY test_kr ;\n> test_kr | def\n> ---------+-----\n> A 11 | b\n> A 19 | A\n> a 7 | a\n> Œ 1 | 
a\n> 1 a | a\n> 8 p | B\n>\n>\n>\n> Best regards,\n> --\n> Daniel Vérité\n> https://postgresql.verite.pro/\n> Twitter: @DanielVerite\n>\n\n\n-- \n I recommend David Deutsch's <<The Beginning of Infinity>>\n\n Jian",
"msg_date": "Fri, 27 May 2022 11:04:46 +0530",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU_LOCALE set database default icu collation but not working as\n intended."
},
{
"msg_contents": "\tjian he wrote:\n\n> - dbicu3, ICU_LOCALE 'en-u-kr-latn-digit-kf-upper-kn-true' seems\n> 'kf-upper' not grouped strings beginning with character 'A' together?\n\nYou seem to expect that the sort algorithm takes characters\nfrom left to right, and when it compares 'A' and 'a', it will\nsort the string with the 'A' before, no matter what other\ncharacters are in the rest of the string.\n\nI don't think that's what kf-upper does. I think kf-upper kicks in\nonly for strings that are identical at the secondary level.\nIn your example, its effect is to make 'A 70' sort before\n'a 70' . The other strings are unaffected.\n\n> - dbicu4, ICU_LOCALE 'en-u-kr-latn-digit-kn-true' since upper/lower not\n> explicitly mentioned, and since the collation is deterministic, so\n> character 'A' should be grouped together first then do the numeric value\n\nThe deterministic property is only relevant when strings are compared equal\nby ICU. Since your collations use the default strength setting (tertiary)\nand the strings in your example are all different at this level,\nthe fact that the collation is deterministic does not play a role in\nthe results.\n\nBesides, the TR35 doc says for \"kn\" (numeric ordering)\n\n \"If set to on, any sequence of Decimal Digits (General_Category = Nd in\n the [UAX44]) is sorted at a primary level with its numeric value\"\n\nwhich means that the order of numbers (7, 11, 19, 70, 117) is \"stronger\"\n(primary level) than the relative order of the 'a' and 'A' \n(case difference=secondary level) that precede them.\nThat's why these numbers drive the sort for these strings that are\notherwise identical at the primary level.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Sat, 28 May 2022 19:18:51 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: ICU_LOCALE set database default icu collation but not working as\n intended."
}
] |
[
{
"msg_contents": "I'm trying to include a sensitivity operator in a function. My issue is\nthat when I have my function, I get a call to SupportRequestSimplify, but\nnot SupportRequestSensitivity. It is not obvious what I am doing that is\nincorrect.\n\nMy c (stub) function is:\n\nPG_FUNCTION_INFO_V1(pgq3c_join_selectivity);\nDatum pgq3c_join_selectivity(PG_FUNCTION_ARGS)\n{\n Node *rawreq = (Node *) PG_GETARG_POINTER(0);\n Node *ret = NULL;\n\n elog(WARNING,\"in pgq3c_join_selectivity %d %d %d\",\n rawreq->type,T_SupportRequestSelectivity,T_SupportRequestSimplify);\n\n\n if (IsA(rawreq, SupportRequestSelectivity))\n {\n elog(WARNING,\"found SupportRequestSelectivity\");\n }\n if (IsA(rawreq, SupportRequestSimplify))\n {\n elog(WARNING,\"found SupportRequestSimplify\");\n }\n\n PG_RETURN_POINTER(ret);\n}\n\nmy sql function code is:\n\n-- a selectivity function for the q3c join functionCREATE OR REPLACE\nFUNCTION q3c_join_selectivity(internal)\n RETURNS internal\n AS 'MODULE_PATHNAME', 'pgq3c_join_selectivity'\n LANGUAGE C IMMUTABLE STRICT ;\n\nand my function definition is:\n\n CREATE OR REPLACE FUNCTION q3c_join(leftra double precision,\nleftdec double precision,\n rightra double precision, rightdec double precision,\n radius double precision)\n RETURNS boolean AS'\nSELECT (((q3c_ang2ipix($3,$4)>=(q3c_nearby_it($1,$2,$5,0))) AND\n(q3c_ang2ipix($3,$4)<=(q3c_nearby_it($1,$2,$5,1))))\n OR ((q3c_ang2ipix($3,$4)>=(q3c_nearby_it($1,$2,$5,2))) AND\n(q3c_ang2ipix($3,$4)<=(q3c_nearby_it($1,$2,$5,3))))\n OR ((q3c_ang2ipix($3,$4)>=(q3c_nearby_it($1,$2,$5,4))) AND\n(q3c_ang2ipix($3,$4)<=(q3c_nearby_it($1,$2,$5,5))))\n OR ((q3c_ang2ipix($3,$4)>=(q3c_nearby_it($1,$2,$5,6))) AND\n(q3c_ang2ipix($3,$4)<=(q3c_nearby_it($1,$2,$5,7)))))\n AND q3c_sindist($1,$2,$3,$4)<POW(SIN(RADIANS($5)/2),2)\n AND ($5::double precision ==<<>>== ($1,$2,$3,$4)::q3c_type)\n' LANGUAGE SQL IMMUTABLE COST 10 SUPPORT q3c_join_selectivity;\n\nWhen I run my function, I get:\n\n(base) [greg.hennessy@localhost 
~]$ psql q3c_test\nTiming is on.\nOutput format is unaligned.\npsql (13.4)\nType \"help\" for help.\n\nq3c_test=# select count(*) from test as a, test1 as b where\nq3c_join(a.ra,a.dec,b.ra,b.dec,.01);\nWARNING: in pgq3c_join_selectivity 417 418 417\nWARNING: found SupportRequestSimplify\ncount\n153\n(1 row)\nTime: 9701.717 ms (00:09.702)\nq3c_test=#\n\nSo, I see a call where I am asked for a SupportRequestSimplify, but not a\nSupportRequestSelectivity.\n\nI admit to not being an expert in postgres internals hacking. Is there\nsomething obvious I am doing that is incorrect? How do I ensure my support\nfunction is asked for a SupportRequestSelectivity?",
"msg_date": "Thu, 26 May 2022 14:58:54 -0400",
"msg_from": "Greg Hennessy <greg.hennessy@gmail.com>",
"msg_from_op": true,
"msg_subject": "selectivity function"
},
{
"msg_contents": "Greg Hennessy <greg.hennessy@gmail.com> writes:\n> I'm trying to include a sensitivity operator in a function. My issue is\n> that when I have my function, I get a call to SupportRequestSimplify, but\n> not SupportRequestSensitivity. It is not obvious what I am doing that is\n> incorrect.\n\nAttaching a support function to a SQL-language function seems pretty\nweird to me. I think probably what is happening is that the SQL\nfunction is getting inlined and thus there is nothing left to apply\nthe selectivity hook to. simplify_function() will try the\nSupportRequestSimplify hook before it tries inlining, so the fact\nthat that one registers isn't at odds with this theory.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 May 2022 15:10:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: selectivity function"
},
{
"msg_contents": "On Thu, May 26, 2022 at 3:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Greg Hennessy <greg.hennessy@gmail.com> writes:\n> > I'm trying to include a sensitivity operator in a function. My issue is\n> > that when I have my function, I get a call to SupportRequestSimplify, but\n> > not SupportRequestSensitivity. It is not obvious what I am doing that is\n> > incorrect.\n>\n\nOn Thu, May 26, 2022 at 3:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Attaching a support function to a SQL-language function seems pretty\n> weird to me. I think probably what is happening is that the SQL\n> function is getting inlined and thus there is nothing left to apply\n> the selectivity hook to. simplify_function() will try the\n> SupportRequestSimplify hook before it tries inlining, so the fact\n> that that one registers isn't at odds with this theory.\n>\n\nIs there a way to set the selectivity of a SQL-language function? My use\ncase is I'm an astronomer, matching large star catalogs, and if I have\na 1e6 star catalog joined with a 1e6 star catalog, the planner estimates\nabout 1e12 rows, even though the selectivity is about 1e-9 or so.\nLooking at https://www.postgresql.org/docs/current/sql-createfunction.html\nI don't see a way to define a selectivity function. One of the indexed\nfunctions\ndoes have a RESTRICT line with some about of selectivity in the function,\nbut\nit isn't apparent it is being referenced.\n\nMy issue is that when I have small and medium sized star catalogs, the join\nI'm using uses the index, but at a certain large size it stops using the\nindex\nand starts using sequential scans, due to the cost of the sequential scan\nbeing smaller than the cost of using the index. I surmise that the cost of\nreading in the index, and the use of random_page_cost = 1.2 makes the\nsequential scan seem cheaper/faster, even though as a human I know\nthat using the index scan would be faster. 
I'm just not sure how to convince\npostgresql to calculate the costs properly.",
"msg_date": "Thu, 26 May 2022 17:39:19 -0400",
"msg_from": "Greg Hennessy <greg.hennessy@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: selectivity function"
},
{
"msg_contents": "Greg Hennessy <greg.hennessy@gmail.com> writes:\n> On Thu, May 26, 2022 at 3:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Attaching a support function to a SQL-language function seems pretty\n>> weird to me.\n\n> Is there a way to set the selectivity of a SQL-language function?\n\nI think it'd work if you prevented inlining, but doing so would defeat\nmost of the value of writing it as a SQL function as opposed to (say)\nplpgsql. Can you do anything useful with attaching selectivity estimates\nto the functions it references, instead?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 May 2022 17:54:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: selectivity function"
},
{
"msg_contents": "On Thu, May 26, 2022 at 3:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Can you do anything useful with attaching selectivity estimates\n> to the functions it references, instead?\nI may have been going down a bad path before. The function I'm\nworking to improve has five arguments, the last being \"degrees\", which\nis the match radius. Obviously a larger match radius should cause more\nmatches.\n\nFor a small value of a match radius (0.005 degrees):\n\nq3c_test=# explain (analyze, buffers) select * from test as a, test1 as \nb where q3c_join(a.ra,a.dec,b.ra,b.dec,.005);\nQUERY PLAN\nNested Loop (cost=92.28..22787968818.00 rows=5 width=32) (actual \ntime=7.799..10758.566 rows=31 loops=1)\n Buffers: shared hit=8005684\n -> Seq Scan on test a (cost=0.00..15406.00 rows=1000000 width=16) \n(actual time=0.008..215.570 rows=1000000 loops=1)\n Buffers: shared hit=5406\n -> Bitmap Heap Scan on test1 b (cost=92.28..22785.45 rows=250 \nwidth=16) (actual time=0.009..0.009 rows=0 loops=1000000)\n\n(note: I deleted some of the output, since I think I'm keeping the \nimportant bits)\n\nSo, the cost of the query is calculated as 2e10, where it expects five rows,\nfound 31, and a hot cache of reading 8 million units of disk space; I'd have\nto check the fine manual to remind myself of the units of that.\n\nWhen I do the same sort of query with a much larger match radius (5 deg) I \nget:\nq3c_test=# explain (analyze, buffers) select * from test as a, test1 as \nb where q3c_join(a.ra,a.dec,b.ra,b.dec,5);\nQUERY PLAN\nNested Loop (cost=92.28..22787968818.00 rows=4766288 width=32) (actual \ntime=0.086..254995.691 rows=38051626 loops=1)\n Buffers: shared hit=104977026\n -> Seq Scan on test a (cost=0.00..15406.00 rows=1000000 width=16) \n(actual time=0.008..261.425 rows=1000000 loops=1)\n Buffers: shared hit=5406\n -> Bitmap Heap Scan on test1 b (cost=92.28..22785.45 rows=250 \nwidth=16) (actual time=0.053..0.247 rows=38 loops=1000000)\n\nThe \"total cost\" is the identical 2e10; this time the number of \nrows expected\nis 4.7 million, the number of rows delivered is 38 million (so the \ncalculation is off\nby a factor of 8, I'm not sure that is important), but the IO is now 104 \nmillion units.\nSo while we are doing a lot more IO, and dealing with a lot more rows, the\ncalculated cost is identical. That seems strange to me. Is that a normal \nthing?\nIs it possible that the cost calculation isn't including the selectivity \ncalculation?\n\nGreg\n\n\n\n\n",
"msg_date": "Fri, 27 May 2022 12:04:14 -0400",
"msg_from": "Greg Hennessy <greg.hennessy@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: selectivity function"
}
] |
[
{
"msg_contents": "Here is a status report of where I think we are with cluster file\nencryption.\n\nThe last patch for temporary file I/O centralization is from April 20:\n\n\thttps://www.postgresql.org/message-id/24759.1650466826@antos\n\nOnce that is done I can modify my patch set to switch from CTR to XTS\nmode and hook into the temporary file I/O centralization code. After\nthat, we need to work on the WAL encryption code and tool support. \nReplication must also be handled.\n\nI think once the temporary file I/O centralization is done we can\nconsider putting some of my patch set into the tree once PG 16 opens for\ndevelopment --- the first step might be the key management feature.\n\nI have updated my cluster file encryption presentation to show diagrams\nof the architecture:\n\n\thttps://momjian.us/main/writings/pgsql/cfe.pdf\n\nHopefully that helps.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 26 May 2022 17:08:07 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Status of cluster file encryption"
}
] |
[
{
"msg_contents": "Hello,\n\nCurrently, lo_creat(e), lo_import, lo_unlink, lowrite, lo_put,\nand lo_from_bytea are allowed even in read-only transactions.\nBy using them, pg_largeobject and pg_largeobject_metatable can\nbe modified in read-only transactions and the effect remains\nafter the transaction finished. Is it unacceptable behaviours, \nisn't it?\n\nAlso, when such transactions are used in recovery mode, it fails\nbut the messages output are not user friendly, like:\n\n postgres=# select lo_creat(42);\n ERROR: cannot assign OIDs during recovery\n\n postgres=# select lo_create(42);\n ERROR: cannot assign TransactionIds during recovery\n\n postgres=# select lo_unlink(16389);\n ERROR: cannot acquire lock mode AccessExclusiveLock on database objects while recovery is in progress\n HINT: Only RowExclusiveLock or less can be acquired on database objects during recovery.\n \n\nSo, I would like propose to explicitly prevent such writes operations\non large object in read-only transactions, like:\n\n postgres=# SELECT lo_create(42);\n ERROR: cannot execute lo_create in a read-only transaction\n\nThe patch is attached.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Fri, 27 May 2022 15:30:28 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "On Fri, 2022-05-27 at 15:30 +0900, Yugo NAGATA wrote:\n> Currently, lo_creat(e), lo_import, lo_unlink, lowrite, lo_put,\n> and lo_from_bytea are allowed even in read-only transactions.\n> By using them, pg_largeobject and pg_largeobject_metatable can\n> be modified in read-only transactions and the effect remains\n> after the transaction finished. Is it unacceptable behaviours, \n> isn't it?\n\n+1\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 27 May 2022 14:02:24 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "On Fri, May 27, 2022 at 03:30:28PM +0900, Yugo NAGATA wrote:\n> Currently, lo_creat(e), lo_import, lo_unlink, lowrite, lo_put,\n> and lo_from_bytea are allowed even in read-only transactions.\n> By using them, pg_largeobject and pg_largeobject_metatable can\n> be modified in read-only transactions and the effect remains\n> after the transaction finished. Is it unacceptable behaviours, \n> isn't it?\n\nWell, there is an actual risk to break applications that have worked\nuntil now for a behavior that has existed for years with zero\ncomplaints on the matter, so I would leave that alone. Saying that, I\ndon't really disagree with improving the error messages a bit if we\nare in recovery.\n--\nMichael",
"msg_date": "Sat, 28 May 2022 18:00:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "On Sat, 28 May 2022 18:00:54 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, May 27, 2022 at 03:30:28PM +0900, Yugo NAGATA wrote:\n> > Currently, lo_creat(e), lo_import, lo_unlink, lowrite, lo_put,\n> > and lo_from_bytea are allowed even in read-only transactions.\n> > By using them, pg_largeobject and pg_largeobject_metatable can\n> > be modified in read-only transactions and the effect remains\n> > after the transaction finished. Is it unacceptable behaviours, \n> > isn't it?\n> \n> Well, there is an actual risk to break applications that have worked\n> until now for a behavior that has existed for years with zero\n> complaints on the matter, so I would leave that alone. Saying that, I\n> don't really disagree with improving the error messages a bit if we\n> are in recovery.\n\nThank you for your comment. I am fine with leaving the behaviour in\nread-only transactions as is if anyone don't complain and there are no\nrisks. \n\nAs to the error messages during recovery, I think it is better to improve\nit, because the current messages are emitted by elog() and it seems to imply\nthey are internal errors that we don't expected. I attached a updated patch\nfor it.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Mon, 30 May 2022 17:44:18 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "On Mon, May 30, 2022 at 05:44:18PM +0900, Yugo NAGATA wrote:\n> As to the error messages during recovery, I think it is better to improve\n> it, because the current messages are emitted by elog() and it seems to imply\n> they are internal errors that we don't expected. I attached a updated patch\n> for it.\n\nYeah, elog() messages should never be user-facing as they refer to\ninternal errors, and any of those errors are rather deep in the tree\nwhile being unexpected.\n\nlo_write() is published in be-fsstubs.h, though we have no callers of\nit in the backend for the core code. Couldn't there be a point in\nhaving the recovery protection there rather than in the upper SQL\nroutine be_lowrite()? At the end, we would likely generate a failure\nwhen attempting to insert the LO data in the catalogs through\ninv_api.c, but I was wondering if we should make an extra effort in\nimproving the report also in this case if there is a direct caller of\nthis LO write routine. The final picture may be better if we make\nlo_write() a routine static to be-fsstubs.c but it is available for\nages, so I'd rather leave it declared as-is.\n\nlibpq fetches the OIDs of the large object functions and caches it for\nPQfn() as far as I can see, so it is fine by me to have the\nprotections in be-fsstubs.c, letting inv_api.c deal with the internals\nwith the catalogs, ACL checks, etc. Should we complain on lo_open()\nwith the write mode though?\n\nThe change for lo_truncate_internal() is a bit confusing for the 64b\nversion, as we would complain about lo_truncate() and not\nlo_truncate64().\n\nWhile on it, could we remove -DFSDB?\n--\nMichael",
"msg_date": "Tue, 31 May 2022 10:34:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "On Fri, May 27, 2022 at 02:02:24PM +0200, Laurenz Albe wrote:\n> On Fri, 2022-05-27 at 15:30 +0900, Yugo NAGATA wrote:\n>> Currently, lo_creat(e), lo_import, lo_unlink, lowrite, lo_put,\n>> and lo_from_bytea are allowed even in read-only transactions.\n>> By using them, pg_largeobject and pg_largeobject_metatable can\n>> be modified in read-only transactions and the effect remains\n>> after the transaction finished. Is it unacceptable behaviours, \n>> isn't it?\n> \n> +1\n\nAnd I have forgotten to add your name as a reviewer. Sorry about\nthat!\n--\nMichael",
"msg_date": "Tue, 31 May 2022 10:46:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "On Sat, May 28, 2022 at 5:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Well, there is an actual risk to break applications that have worked\n> until now for a behavior that has existed for years with zero\n> complaints on the matter, so I would leave that alone. Saying that, I\n> don't really disagree with improving the error messages a bit if we\n> are in recovery.\n\nOn the other hand, there's a good argument that the existing behavior\nis simply incorrect.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 May 2022 14:40:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sat, May 28, 2022 at 5:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Well, there is an actual risk to break applications that have worked\n>> until now for a behavior that has existed for years with zero\n>> complaints on the matter, so I would leave that alone. Saying that, I\n>> don't really disagree with improving the error messages a bit if we\n>> are in recovery.\n\n> On the other hand, there's a good argument that the existing behavior\n> is simply incorrect.\n\nYeah. Certainly we'd not want to back-patch this change, in case\nanyone is relying on the current behavior ... but it's hard to argue\nthat it's not wrong.\n\nWhat I'm wondering about is how far the principle of read-only-ness\nought to be expected to go. Should a read-only transaction fail\nto execute adminpack's pg_file_write(), for example? Should it\nrefuse to execute random() on the grounds that that changes the\nsession's PRNG state? The latter seems obviously silly, but\nI'm not very sure about pg_file_write(). Maybe the restriction\nshould be \"no changes to database state that's visible to other\nsessions\", which would leave outside-the-DB changes out of the\ndiscussion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 May 2022 15:49:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "On Tue, May 31, 2022 at 3:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah. Certainly we'd not want to back-patch this change, in case\n> anyone is relying on the current behavior ... but it's hard to argue\n> that it's not wrong.\n\nAgreed.\n\n> What I'm wondering about is how far the principle of read-only-ness\n> ought to be expected to go. Should a read-only transaction fail\n> to execute adminpack's pg_file_write(), for example? Should it\n> refuse to execute random() on the grounds that that changes the\n> session's PRNG state? The latter seems obviously silly, but\n> I'm not very sure about pg_file_write(). Maybe the restriction\n> should be \"no changes to database state that's visible to other\n> sessions\", which would leave outside-the-DB changes out of the\n> discussion.\n\nYeah, I think that's a pretty good idea. It's really pretty hard to\nimagine preventing outside-the-database writes in any kind of\nprincipled way. Somebody can install a C function that does anything,\nand we can do a pretty fair job preventing it from e.g. acquiring a\ntransaction ID or writing WAL by making changes in PostgreSQL itself,\nbut we can't prevent it from doing whatever it wants outside the\ndatabase. Nor is it even a very clear concept definitionally. I\nwouldn't consider a function read-write solely on the basis that it\ncan cause data to be written to the PostgreSQL log file, for instance,\nso it doesn't seem correct to suppose that a C function provided by an\nextension is read-write just because it calls write(2) -- not that we\ncan detect that anyway, but even if we could.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 May 2022 17:17:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "On Tue, May 31, 2022 at 05:17:42PM -0400, Robert Haas wrote:\n> On Tue, May 31, 2022 at 3:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What I'm wondering about is how far the principle of read-only-ness\n>> ought to be expected to go. Should a read-only transaction fail\n>> to execute adminpack's pg_file_write(), for example? Should it\n>> refuse to execute random() on the grounds that that changes the\n>> session's PRNG state? The latter seems obviously silly, but\n>> I'm not very sure about pg_file_write(). Maybe the restriction\n>> should be \"no changes to database state that's visible to other\n>> sessions\", which would leave outside-the-DB changes out of the\n>> discussion.\n> \n> Yeah, I think that's a pretty good idea. It's really pretty hard to\n> imagine preventing outside-the-database writes in any kind of\n> principled way. Somebody can install a C function that does anything,\n> and we can do a pretty fair job preventing it from e.g. acquiring a\n> transaction ID or writing WAL by making changes in PostgreSQL itself,\n> but we can't prevent it from doing whatever it wants outside the\n> database. Nor is it even a very clear concept definitionally. I\n> wouldn't consider a function read-write solely on the basis that it\n> can cause data to be written to the PostgreSQL log file, for instance,\n> so it doesn't seem correct to suppose that a C function provided by an\n> extension is read-write just because it calls write(2) -- not that we\n> can detect that anyway, but even if we could.\n\nAgreed. There are a couple of arguments in authorizing\npg_file_write() in a read-only state or writes as long as it does not\naffect WAL or the data. For example, a change of configuration file\ncan be very useful at recovery if one wants to switch the\nconfiguration (ALTER TABLE SET, etc.), so restricting functions that\nperform writes outside the scope of WAL or the data does not make\nsense to restrict. 
Not to mention updates in the control file, but\nthat's different.\n\nNow the LO handling is quite old, and I am not sure if this is worth\nchanging as we have seen no actual complaints about that with read-only\ntransactions, even if I agree that it is inconsistent. That could\ncause more harm than the consistency benefit is worth :/\n--\nMichael",
"msg_date": "Wed, 1 Jun 2022 14:29:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 1:29 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Now the LO handling is quite old, and I am not sure if this is worth\n> changing as we have seen no actual complains about that with read-only\n> transactions, even if I agree on that it is inconsistent. That could\n> cause more harm than the consistency benefit is worth :/\n\nThe message that started this thread is literally a complaint about\nthat exact thing.\n\nWe seem to do this fairly often on this list, honestly. Someone posts\na message saying \"X is broken\" and someone agrees and says it's a good\nidea to fix it and then a third person responds and says \"let's not\nchange it, no one has ever {noticed that,cared before,complained about\nit}\". I wonder whether the people who start such threads ever come to\nthe conclusion that the PostgreSQL community thinks that they are a\nnobody and don't count.\n\nAs for the rest, I understand that changing the behavior creates an\nincompatibility with previous releases, but I don't think we should be\nworried about it. We create far larger incompatibilities in nearly\nevery release. There's probably very few people using large object\nfunctions in read-only transactions compared to the number of people\nusing exclusive backup mode, or recovery.conf, or some\npg_stat_activity column that we decided to rename, or accessing\npg_xlog by name in some tool/script. I haven't really heard you\narguing vigorously against those changes, and it doesn't make sense to\nme to hold this one, which to me seems to be vastly less likely to\nbreak anything, to a higher standard.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 1 Jun 2022 10:01:34 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jun 1, 2022 at 1:29 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Now the LO handling is quite old, and I am not sure if this is worth\n>> changing as we have seen no actual complains about that with read-only\n>> transactions, even if I agree on that it is inconsistent. That could\n>> cause more harm than the consistency benefit is worth :/\n\n> The message that started this thread is literally a complaint about\n> that exact thing.\n\nYeah. I think this is more nearly \"nobody had noticed\" than \"everybody\nthinks this is okay\".\n\n> We seem to do this fairly often on this list, honestly. Someone posts\n> a message saying \"X is broken\" and someone agrees and says it's a good\n> idea to fix it and then a third person responds and says \"let's not\n> change it, no one has ever {noticed that,cared before,complained about\n> it}\".\n\nIt's always appropriate to consider backwards compatibility, and we\nfrequently don't back-patch a change because of worries about that.\nHowever, if someone complains because we start rejecting this as of\nv15 or v16, I don't think they have good grounds for that. It's just\nobviously wrong ... unless someone can come up with a plausible\ndefinition of read-only-ness that excludes large objects. I don't\nsay that that's impossible, but it sure seems like it'd be contorted\nreasoning. They're definitely inside-the-database entities.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Jun 2022 10:15:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "On Wed, Jun 01, 2022 at 10:15:17AM -0400, Tom Lane wrote:\n> It's always appropriate to consider backwards compatibility, and we\n> frequently don't back-patch a change because of worries about that.\n> However, if someone complains because we start rejecting this as of\n> v15 or v16, I don't think they have good grounds for that. It's just\n> obviously wrong ... unless someone can come up with a plausible\n> definition of read-only-ness that excludes large objects. I don't\n> say that that's impossible, but it sure seems like it'd be contorted\n> reasoning. They're definitely inside-the-database entities.\n\nFWIW, I find the removal of error paths to authorize new behaviors\neasy to think about in terms of compatibility, because nobody is going\nto complain about that as long as it works as intended. The opposite,\naka enforcing an error in a code path is much harder to reason about.\nAnyway, if I am outnumbered on this one that's fine by me :)\n\nThere are a couple of things in the original patch that may require to\nbe adjusted though:\n1) Should we complain in lo_open() when using the write mode for a\nread-only transaction? My answer to that would be yes.\n2) We still publish two non-fmgr-callable routines, lo_read() and\nlo_write(). Perhaps we'd better make them static to be-fsstubs.c or\nput the same restriction to the write routine as its SQL flavor?\n--\nMichael",
"msg_date": "Thu, 2 Jun 2022 07:43:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "On Thu, 2 Jun 2022 07:43:06 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Jun 01, 2022 at 10:15:17AM -0400, Tom Lane wrote:\n> > It's always appropriate to consider backwards compatibility, and we\n> > frequently don't back-patch a change because of worries about that.\n> > However, if someone complains because we start rejecting this as of\n> > v15 or v16, I don't think they have good grounds for that. It's just\n> > obviously wrong ... unless someone can come up with a plausible\n> > definition of read-only-ness that excludes large objects. I don't\n> > say that that's impossible, but it sure seems like it'd be contorted\n> > reasoning. They're definitely inside-the-database entities.\n> \n> FWIW, I find the removal of error paths to authorize new behaviors\n> easy to think about in terms of compatibility, because nobody is going\n> to complain about that as long as it works as intended. The opposite,\n> aka enforcing an error in a code path is much harder to reason about.\n> Anyway, if I am outnumbered on this one that's fine by me :)\n\nI attached the updated patch.\n\nPer discussions above, I undo the change so that it prevents large\nobject writes in read-only transactions again.\n \n> There are a couple of things in the original patch that may require to\n> be adjusted though:\n> 1) Should we complain in lo_open() when using the write mode for a\n> read-only transaction? My answer to that would be yes.\n\nI fixed to raise the error in lo_open() when using the write mode.\n\n> 2) We still publish two non-fmgr-callable routines, lo_read() and\n> lo_write(). Pe4rhaps we'd better make them static to be-fsstubs.c or\n> put the same restriction to the write routine as its SQL flavor?\n\nI am not sure if we should use PreventCommandIfReadOnly in lo_write()\nbecause there are other public functions that write to catalogs but there\nare not the similar restrictions in such functions. 
I think it is the caller's\nresponsibility to avoid using such public functions in a read-only context.\n\nI also fixed each of lo_truncate and lo_truncate64 to raise the error,\nper Michael's comment in the other post.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Thu, 16 Jun 2022 15:42:06 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "On Thu, Jun 16, 2022 at 03:42:06PM +0900, Yugo NAGATA wrote:\n> I am not sure if we should use PreventCommandIfReadOnly in lo_write()\n> because there are other public functions that write to catalogs but there\n> are not the similar restrictions in such functions. I think it is caller's\n> responsibility to prevent to use such public functions in read-only context.\n\nI'd be really tempted to remove the plug on this one, actually.\nHowever, that would also mean to break something just for the sake of\nbreaking it. So perhaps you are right at the end in that it is better\nto let this code be, without the new check.\n\n> I also fixed to raise the error in each of lo_truncate and lo_truncate64\n> per Michael's comment in the other post.\n\nThanks! That counts for 10 SQL functions blocked with 10 tests. So\nyou have all of them covered.\n\nLooking at the docs of large objects, as of \"Client Interfaces\", we\nmention that any action must take place in a transaction block.\nShouldn't we add a note that no write operations are allowed in a\nread-only transaction?\n\n+ if (mode & INV_WRITE)\n+ PreventCommandIfReadOnly(\"lo_open() in write mode\");\nNit. This breaks translation. I think that it could be switched to\n\"lo_open(INV_WRITE)\" instead as the flag name is documented.\n\nThe patch is forgetting a refresh for largeobject_1.out.\n\n--- INV_READ = 0x20000\n--- INV_WRITE = 0x40000\n+-- INV_READ = 0x40000\n+-- INV_WRITE = 0x20000\nGood catch! This one is kind of independent, so I have fixed it\nseparately, in all the expected output files.\n--\nMichael",
"msg_date": "Thu, 16 Jun 2022 17:31:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "Hello Michael-san,\n\nThank you for reviewing the patch. I attached the updated patch.\n\nOn Thu, 16 Jun 2022 17:31:22 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> Looking at the docs of large objects, as of \"Client Interfaces\", we\n> mention that any action must take place in a transaction block.\n> Shouldn't we add a note that no write operations are allowed in a\n> read-only transaction?\n\nI added a description about read-only transaction to the doc.\n\n> + if (mode & INV_WRITE)\n> + PreventCommandIfReadOnly(\"lo_open() in write mode\");\n> Nit. This breaks translation. I think that it could be switched to\n> \"lo_open(INV_WRITE)\" instead as the flag name is documented.\n\nChanged it as you suggested.\n \n> The patch is forgetting a refresh for largeobject_1.out.\n\nI added changes for largeobject_1.out.\n\n> --- INV_READ = 0x20000\n> --- INV_WRITE = 0x40000\n> +-- INV_READ = 0x40000\n> +-- INV_WRITE = 0x20000\n> Good catch! This one is kind of independent, so I have fixed it\n> separately, in all the expected output files.\n\nThanks!\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Wed, 29 Jun 2022 17:29:50 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "On Wed, Jun 29, 2022 at 05:29:50PM +0900, Yugo NAGATA wrote:\n> Thank you for reviewing the patch. I attached the updated patch.\n\nThanks for the new version. I have looked at that again, and the set\nof changes seem fine (including the change for the alternate output).\nSo, applied.\n--\nMichael",
"msg_date": "Mon, 4 Jul 2022 15:51:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
},
{
"msg_contents": "On Mon, 4 Jul 2022 15:51:32 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Jun 29, 2022 at 05:29:50PM +0900, Yugo NAGATA wrote:\n> > Thank you for reviewing the patch. I attached the updated patch.\n> \n> Thanks for the new version. I have looked at that again, and the set\n> of changes seem fine (including the change for the alternate output).\n> So, applied.\n\n\nThanks!\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Mon, 4 Jul 2022 16:34:07 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Prevent writes on large objects in read-only transactions"
}
] |
[
{
"msg_contents": "Hi,\n\nPresently, makeaclitem() allows only a single privilege in a single call.\nThis\npatch allows it to additionally accept multiple comma-separated privileges.\n\nThe attached patch reuses the has_foo_privileges() infrastructure and\nbesides\na minor change to the function documentation, it also adds 3 regression\ntests\nthat increase the function code-coverage to 100%.\n\nSample usage:\n\npostgres=# SELECT makeaclitem('postgres'::regrole, 'usr1'::regrole,\n 'SELECT, INSERT, UPDATE, ALTER SYSTEM', FALSE);\n makeaclitem\n--------------------\n postgres=arwA/usr1\n(1 row)\n\nThe need for this patch came up during a recent customer experience, where\n'pg_init_privs.initprivs' had grantees pointing to non-existent roles. This\nis\neasy to reproduce [5] and given that this issue was blocking the\ncustomer's planned upgrade, the temporary solution was to UPDATE the\ninitprivs column. From what I could see, there was a fix for similar\nissues [1], although that didn't fix this specific issue [2] and thus\nmanually\nmodifying initprivs was required. For this manual update though, if the\nproposed feature was available, it would have helped with the UPDATE SQLs.\n\nTo elaborate the customer issue, in most rows aclitems[]::TEXT was 2000+\ncharacters long and spanned 30+ missing roles and multiple databases. In\ntrying\nto automate the generation of UPDATE SQLs, I tried to use aclexplode() to\nfilter-out the missing grantee roles, however re-stitching the remaining\nitems\nback into an aclitems[] array was non-trivial, since makeaclitem() doesn't\nyet\naccept multiple privileges in a single call. 
In particular, the\nunnest() + string-search approach mentioned in this thread [4] didn't scale\nwith many missing roles where rolenames were alphanumeric.\n\nSee [6] for a contrived example where the updated makeaclitem() can be used\nto regenerate the initprivs column, weeding out privileges related to\nmissing grantees.\n\nLastly, while researching, I saw a thread [3] questioning whether\nmakeaclitem() is useful, and think that if it were to allow multiple\nprivileges, it could have helped in the above situation, and thus I'd -1 on\ndropping the function.\n\nReference:\n1)\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=47088c599cc6d6473c7b89ac029060525cf086d8\n\n2)\nhttps://www.postgresql.org/message-id/flat/29913.1573835973%40sss.pgh.pa.us#90b0192c126ea61266e31dbb864c9b08\n\n3)\nhttps://www.postgresql.org/message-id/flat/48f9156d-3937-cf47-13ee-ac4e90c83c43%40postgresfriends.org#7f5c830819bc104c222b440689d2028f\n\n4)\nhttps://www.postgresql.org/message-id/flat/1573808483712.96817%40Optiver.com\n\n5) Reproduction to get pg_init_privs.initprivs to point to missing roles.\n\npsql -c \"CREATE DATABASE w\" postgres\nexport PGDATABASE=w;\npsql -c \"CREATE USER usr1 PASSWORD 'usr1' SUPERUSER;\"\npsql -U usr1 -c \"CREATE EXTENSION pg_stat_statements\" -c \"SELECT * FROM\npg_init_privs WHERE privtype = 'e'\" -c \"REASSIGN OWNED BY usr1 TO postgres;\"\npsql -c \"DROP USER usr1\" -c \"SELECT * FROM pg_init_privs WHERE privtype =\n'e';\"\n.\n.\n.\nThis would end up with something like this:\n\nr=# select * from pg_init_privs where privtype = 'e';\n objoid | classoid | objsubid | privtype |\ninitprivs\n--------+----------+----------+----------+----------------------------------\n-------------------------------------\n 16433 | 1255 | 0 | e | {16425=X/16425}\n 16441 | 1259 | 0 | e |\n{16425=arwdDxt/16425,=r/16425,t=r/postgres}\n 16452 | 1259 | 0 | e |\n{16425=arwdDxt/16425,=r/16425,uw22341=arwdDxt/postgres,t=rw/postgres}\n(3 rows)\n\n6) This 
feature can then be used to generate the UPDATE SQLs to fix the\nissue:\n\nr=# BEGIN;\nBEGIN\nr=*# WITH a AS (SELECT pg_init_privs.* ,(aclexplode(initprivs)).* FROM\npg_init_privs WHERE privtype = 'e')\nr-*# SELECT DISTINCT 'UPDATE pg_init_privs SET initprivs = ''{' || (\nr(*# WITH x AS (\nr(*# SELECT (aclexplode(initprivs)).*\nr(*# FROM pg_init_privs\nr(*# WHERE objoid = a.objoid AND classoid = a.classoid AND objsubid\n= a.objsubid\nr(*# )\nr(*# ,y AS (\nr(*# SELECT makeaclitem(grantee, grantor, string_agg(privilege_type,\n','), is_grantable) p\nr(*# FROM x\nr(*# WHERE grantee IN (SELECT oid FROM pg_roles)\nr(*# GROUP BY grantee,grantor,is_grantable\nr(*# )\nr(*# SELECT string_agg(p::TEXT, ',') FROM y\nr(*# ) || '}'' WHERE objoid=' || a.objoid || ' AND classoid=' ||\na.classoid || ' AND objsubid=' || a.objsubid || ';'\nr-*# FROM a;\n ?column?\n----------------------------------------------------------------------------\n----------------------------------------------------------\n UPDATE pg_init_privs SET initprivs = '{t=r/postgres}' WHERE objoid=16441\nAND classoid=1259 AND objsubid=0;\n UPDATE pg_init_privs SET initprivs =\n'{uw22341=arwdDxt/postgres,t=rw/postgres}' WHERE objoid=16452 AND\nclassoid=1259 AND objsubid=0;\n\n(3 rows)\n\nr=*# \\gexec\nUPDATE 1\nUPDATE 1\nr=*# select * from pg_init_privs where privtype = 'e';\n objoid | classoid | objsubid | privtype | initprivs\n--------+----------+----------+----------+----------------------------------\n--------\n 16433 | 1255 | 0 | e | {16425=X/16425}\n 16441 | 1259 | 0 | e | {t=r/postgres}\n 16452 | 1259 | 0 | e |\n{uw22341=arwdDxt/postgres,t=rw/postgres}\n(3 rows)\n\n(Similarly, it should be possible to generate DELETE SQLs for rows with\n*all*\nACL objects pointing to missing roles - for e.g. row 1 above).\n\n-\nRobins Tharakan\nAmazon Web Services",
"msg_date": "Fri, 27 May 2022 07:03:52 +0000",
"msg_from": "\"Tharakan, Robins\" <tharar@amazon.com>",
"msg_from_op": true,
"msg_subject": "Allow makeaclitem() to accept multiple privileges"
},
{
"msg_contents": "\"Tharakan, Robins\" <tharar@amazon.com> writes:\n> Presently, makeaclitem() allows only a single privilege in a single call.\n> This\n> patch allows it to additionally accept multiple comma-separated privileges.\n\nSeems reasonable. Pushed with minor cosmetic adjustments (mostly\ndocs changes).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 03 Jul 2022 16:50:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow makeaclitem() to accept multiple privileges"
}
] |
[
{
"msg_contents": "Hello,\n\nI found that tests for TRUNCATE on foreign tables are left\nin the foreign_data regression test. Now TRUNCATE on foreign\ntables are allowed, so I think the tests should be removed.\n\nCurrently, the results of the test is \n \"ERROR: foreign-data wrapper \"dummy\" has no handler\",\nbut it is just because the foreign table has no handler,\nnot due to TRUNCATE.\n\nThe patch is attached.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Fri, 27 May 2022 17:25:43 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Remove useless tests about TRUNCATE on foreign table"
},
{
"msg_contents": "On Fri, May 27, 2022 at 05:25:43PM +0900, Yugo NAGATA wrote:\n> --- TRUNCATE doesn't work on foreign tables, either directly or recursively\n> -TRUNCATE ft2; -- ERROR\n> -ERROR: foreign-data wrapper \"dummy\" has no handler\n> -TRUNCATE fd_pt1; -- ERROR\n> -ERROR: foreign-data wrapper \"dummy\" has no handler\n> DROP TABLE fd_pt1 CASCADE;\n\nIn the case of this test, fd_pt1 is a normal table that ft2 inherits,\nso this TRUNCATE command somewhat checks that the TRUNCATE falls back\nto the foreign table in this case. However, this happens to be tested\nin postgres_fdw (see around tru_ftable_parent),\n\n> --- TRUNCATE doesn't work on foreign tables, either directly or recursively\n> -TRUNCATE fd_pt2_1; -- ERROR\n> -ERROR: foreign-data wrapper \"dummy\" has no handler\n> -TRUNCATE fd_pt2; -- ERROR\n> -ERROR: foreign-data wrapper \"dummy\" has no handler\n\nPartitions have also some coverage as far as I can see, so I agree\nthat it makes little sense to keep the tests you are removing here.\n--\nMichael",
"msg_date": "Mon, 30 May 2022 17:08:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless tests about TRUNCATE on foreign table"
},
{
"msg_contents": "On Mon, May 30, 2022 at 05:08:10PM +0900, Michael Paquier wrote:\n> Partitions have also some coverage as far as I can see, so I agree\n> that it makes little sense to keep the tests you are removing here.\n\nAnd done as of 0efa513.\n--\nMichael",
"msg_date": "Tue, 31 May 2022 09:49:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless tests about TRUNCATE on foreign table"
},
{
"msg_contents": "On Tue, 31 May 2022 09:49:40 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, May 30, 2022 at 05:08:10PM +0900, Michael Paquier wrote:\n> > Partitions have also some coverage as far as I can see, so I agree\n> > that it makes little sense to keep the tests you are removing here.\n> \n> And done as of 0efa513.\n\nThank you!\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Tue, 31 May 2022 18:11:20 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Remove useless tests about TRUNCATE on foreign table"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile investigating an internal report, I concluded that it is a bug. The\nreproducible test case is simple (check 0002) and it consists of a FOR ALL\nTABLES publication and a non-empty materialized view on publisher. After the\nsetup, if you refresh the MV, you got the following message on the subscriber:\n\nERROR: logical replication target relation \"public.pg_temp_NNNNN\" does not exist\n\nThat's because the commit 1a499c2520 (that fixes the heap rewrite for tables)\nforgot to consider that materialized views can also create transient heaps and\nthey should also be skipped. The affected version is only 10 because 11\ncontains a different solution (commit 325f2ec555) that provides a proper fix\nfor the heap rewrite handling in logical decoding.\n\n0001 is a patch to skip MV too. I attached 0002 to demonstrate the issue but it\ndoesn't seem appropriate to be included. The test was written to detect the\nerror and bail out. After this fix, it takes a considerable amount of time to\nfinish the test because it waits for a message that never arrives. Since nobody\nreports this bug in 5 years and considering that version 10 will be EOL in 6\nmonths, I don't think an additional test is crucial here.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Fri, 27 May 2022 18:12:54 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": true,
"msg_subject": "Ignore heap rewrites for materialized views in logical replication"
},
{
"msg_contents": "On Sat, May 28, 2022 at 2:44 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> While investigating an internal report, I concluded that it is a bug. The\n> reproducible test case is simple (check 0002) and it consists of a FOR ALL\n> TABLES publication and a non-empty materialized view on publisher. After the\n> setup, if you refresh the MV, you got the following message on the subscriber:\n>\n> ERROR: logical replication target relation \"public.pg_temp_NNNNN\" does not exist\n>\n> That's because the commit 1a499c2520 (that fixes the heap rewrite for tables)\n> forgot to consider that materialized views can also create transient heaps and\n> they should also be skipped. The affected version is only 10 because 11\n> contains a different solution (commit 325f2ec555) that provides a proper fix\n> for the heap rewrite handling in logical decoding.\n>\n> 0001 is a patch to skip MV too.\n>\n\nI agree with your analysis and the fix looks correct to me.\n\n> I attached 0002 to demonstrate the issue but it\n> doesn't seem appropriate to be included. The test was written to detect the\n> error and bail out. After this fix, it takes a considerable amount of time to\n> finish the test because it waits for a message that never arrives.\n>\n\nInstead of waiting for an error, we can try to insert into a new table\ncreated by the test case after the 'Refresh ..' command and wait for\nthe change to be replicated by using wait_for_caught_up.\n\n> Since nobody\n> reports this bug in 5 years and considering that version 10 will be EOL in 6\n> months, I don't think an additional test is crucial here.\n>\n\nLet's try to see if we can simplify the test so that it can be\ncommitted along with a fix. If we are not able to find any reasonable\nway then we can think of skipping it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 28 May 2022 15:37:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ignore heap rewrites for materialized views in logical\n replication"
},
{
"msg_contents": "On Sat, May 28, 2022, at 7:07 AM, Amit Kapila wrote:\n> I agree with your analysis and the fix looks correct to me.\nThanks for checking.\n\n> Instead of waiting for an error, we can try to insert into a new table\n> created by the test case after the 'Refresh ..' command and wait for\n> the change to be replicated by using wait_for_caught_up.\nThat's a good idea. [modifying the test...] I used the same table. Whenever the\nnew row arrives on the subscriber or it reads that error message, it bails out.\n\n> Let's try to see if we can simplify the test so that it can be\n> committed along with a fix. If we are not able to find any reasonable\n> way then we can think of skipping it.\nThe new test is attached.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 30 May 2022 21:56:26 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": true,
"msg_subject": "Re: Ignore heap rewrites for materialized views in logical\n replication"
},
{
"msg_contents": "On Tue, May 31, 2022 at 6:27 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Sat, May 28, 2022, at 7:07 AM, Amit Kapila wrote:\n>\n> I agree with your analysis and the fix looks correct to me.\n>\n> Thanks for checking.\n>\n> Instead of waiting for an error, we can try to insert into a new table\n> created by the test case after the 'Refresh ..' command and wait for\n> the change to be replicated by using wait_for_caught_up.\n>\n> That's a good idea. [modifying the test...] I used the same table. Whenever the\n> new row arrives on the subscriber or it reads that error message, it bails out.\n>\n\nI think we don't need the retry logical to check error, a simple\nwait_for_caught_up should be sufficient as we are doing in other\ntests. See attached. I have slightly modified the commit message as\nwell. Kindly let me know what you think?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Tue, 31 May 2022 19:43:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ignore heap rewrites for materialized views in logical\n replication"
},
{
"msg_contents": "On Tue, May 31, 2022, at 11:13 AM, Amit Kapila wrote:\n> I think we don't need the retry logical to check error, a simple\n> wait_for_caught_up should be sufficient as we are doing in other\n> tests. See attached. I have slightly modified the commit message as\n> well. Kindly let me know what you think?\nYour modification will hang until the test timeout without the patch. That's\nwhy I avoided to use wait_for_caught_up and used a loop for fast exit on success\nor failure. I'm fine with a simple test case like you proposed.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Tue, May 31, 2022, at 11:13 AM, Amit Kapila wrote:I think we don't need the retry logical to check error, a simplewait_for_caught_up should be sufficient as we are doing in othertests. See attached. I have slightly modified the commit message aswell. Kindly let me know what you think?Your modification will hang until the test timeout without the patch. That'swhy I avoided to use wait_for_caught_up and used a loop for fast exit on successor failure. I'm fine with a simple test case like you proposed.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Tue, 31 May 2022 11:57:43 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": true,
"msg_subject": "Re: Ignore heap rewrites for materialized views in logical\n replication"
},
{
"msg_contents": "On Tue, May 31, 2022 at 8:28 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Tue, May 31, 2022, at 11:13 AM, Amit Kapila wrote:\n>\n> I think we don't need the retry logical to check error, a simple\n> wait_for_caught_up should be sufficient as we are doing in other\n> tests. See attached. I have slightly modified the commit message as\n> well. Kindly let me know what you think?\n>\n> Your modification will hang until the test timeout without the patch. That's\n> why I avoided to use wait_for_caught_up and used a loop for fast exit on success\n> or failure.\n>\n\nRight, but that is true for other tests as well and we are not\nexpecting to face this/other errors. I think keeping it simple and\nsimilar to other tests seems enough for this case.\n\n> I'm fine with a simple test case like you proposed.\n>\n\nThanks, I'll push this in a day or two unless I see any other\nsuggestions/comments. Note to others: this is v10 fix only. As\nmentioned by Euler in his initial email, this is not required from v11\nonwards due to a different solution for this problem via commit\n325f2ec555.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 1 Jun 2022 10:39:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ignore heap rewrites for materialized views in logical\n replication"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 10:39 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 31, 2022 at 8:28 PM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > On Tue, May 31, 2022, at 11:13 AM, Amit Kapila wrote:\n> >\n> > I think we don't need the retry logical to check error, a simple\n> > wait_for_caught_up should be sufficient as we are doing in other\n> > tests. See attached. I have slightly modified the commit message as\n> > well. Kindly let me know what you think?\n> >\n> > Your modification will hang until the test timeout without the patch. That's\n> > why I avoided to use wait_for_caught_up and used a loop for fast exit on success\n> > or failure.\n> >\n>\n> Right, but that is true for other tests as well and we are not\n> expecting to face this/other errors. I think keeping it simple and\n> similar to other tests seems enough for this case.\n>\n> > I'm fine with a simple test case like you proposed.\n> >\n>\n> Thanks, I'll push this in a day or two unless I see any other\n> suggestions/comments.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 3 Jun 2022 15:50:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ignore heap rewrites for materialized views in logical\n replication"
}
] |
[
{
"msg_contents": "Hi,\n\nwhile working on some BRIN stuff, I realized (my) commit 5753d4ee320b\nignoring BRIN indexes for HOT is likely broken. Consider this example:\n\n----------------------------------------------------------------------\nCREATE TABLE t (a INT) WITH (fillfactor = 10);\n\nINSERT INTO t SELECT i\n FROM generate_series(0,100000) s(i);\n\nCREATE INDEX ON t USING BRIN (a);\n\nUPDATE t SET a = 0 WHERE random() < 0.01;\n\nSET enable_seqscan = off;\nEXPLAIN (ANALYZE, COSTS OFF, TIMING OFF) SELECT * FROM t WHERE a = 0;\n\nSET enable_seqscan = on;\nEXPLAIN (ANALYZE, COSTS OFF, TIMING OFF) SELECT * FROM t WHERE a = 0;\n----------------------------------------------------------------------\n\nwhich unfortunately produces this:\n\n QUERY PLAN\n ---------------------------------------------------------------\n Bitmap Heap Scan on t (actual rows=23 loops=1)\n Recheck Cond: (a = 0)\n Rows Removed by Index Recheck: 2793\n Heap Blocks: lossy=128\n -> Bitmap Index Scan on t_a_idx (actual rows=1280 loops=1)\n Index Cond: (a = 0)\n Planning Time: 0.049 ms\n Execution Time: 0.424 ms\n (8 rows)\n\n SET\n QUERY PLAN\n -----------------------------------------\n Seq Scan on t (actual rows=995 loops=1)\n Filter: (a = 0)\n Rows Removed by Filter: 99006\n Planning Time: 0.027 ms\n Execution Time: 7.670 ms\n (5 rows)\n\nThat is, the index fails to return some of the rows :-(\n\nI don't remember the exact reasoning behind the commit, but the commit\nmessage justifies the change like this:\n\n There are no index pointers to individual tuples in BRIN, and the\n page range summary will be updated anyway as it relies on visibility\n info.\n\nAFAICS that's a misunderstanding of how BRIN uses visibility map, or\nrather does not use. In particular, bringetbitmap() does not look at the\nvm at all, so it'll produce incomplete bitmap.\n\nSo it seems I made a boo boo here. Luckily, this is a PG15 commit, not a\nlive issue. 
I don't quite see if this can be salvaged - I'll think about\nthis a bit more, but it'll probably end with a revert.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 28 May 2022 16:50:54 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Ignoring BRIN for HOT udpates seems broken"
},
{
"msg_contents": "On Sat, 28 May 2022 at 16:51, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> while working on some BRIN stuff, I realized (my) commit 5753d4ee320b\n> ignoring BRIN indexes for HOT is likely broken. Consider this example:\n>\n> ----------------------------------------------------------------------\n> CREATE TABLE t (a INT) WITH (fillfactor = 10);\n>\n> INSERT INTO t SELECT i\n> FROM generate_series(0,100000) s(i);\n>\n> CREATE INDEX ON t USING BRIN (a);\n>\n> UPDATE t SET a = 0 WHERE random() < 0.01;\n>\n> SET enable_seqscan = off;\n> EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF) SELECT * FROM t WHERE a = 0;\n>\n> SET enable_seqscan = on;\n> EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF) SELECT * FROM t WHERE a = 0;\n> ----------------------------------------------------------------------\n>\n> which unfortunately produces this:\n>\n> QUERY PLAN\n> ---------------------------------------------------------------\n> Bitmap Heap Scan on t (actual rows=23 loops=1)\n> Recheck Cond: (a = 0)\n> Rows Removed by Index Recheck: 2793\n> Heap Blocks: lossy=128\n> -> Bitmap Index Scan on t_a_idx (actual rows=1280 loops=1)\n> Index Cond: (a = 0)\n> Planning Time: 0.049 ms\n> Execution Time: 0.424 ms\n> (8 rows)\n>\n> SET\n> QUERY PLAN\n> -----------------------------------------\n> Seq Scan on t (actual rows=995 loops=1)\n> Filter: (a = 0)\n> Rows Removed by Filter: 99006\n> Planning Time: 0.027 ms\n> Execution Time: 7.670 ms\n> (5 rows)\n>\n> That is, the index fails to return some of the rows :-(\n>\n> I don't remember the exact reasoning behind the commit, but the commit\n> message justifies the change like this:\n>\n> There are no index pointers to individual tuples in BRIN, and the\n> page range summary will be updated anyway as it relies on visibility\n> info.\n>\n> AFAICS that's a misunderstanding of how BRIN uses visibility map, or\n> rather does not use. 
In particular, bringetbitmap() does not look at the\n> vm at all, so it'll produce incomplete bitmap.\n>\n> So it seems I made a boo boo here. Luckily, this is a PG15 commit, not a\n> live issue. I don't quite see if this can be salvaged - I'll think about\n> this a bit more, but it'll probably end with a revert.\n\nThe principle of 'amhotblocking' for only blocking HOT updates seems\ncorrect, except for the fact that the HOT flag bit is also used as a\nway to block the propagation of new values to existing indexes.\n\nA better abstraction would be \"amSummarizes[Block]', in which updates\nthat only modify columns that are only included in summarizing indexes\nstill allow HOT, but still will see an update call to all (relevant?)\nsummarizing indexes. That should still improve performance\nsignificantly for the relevant cases.\n\n-Matthias\n\n\n",
"msg_date": "Sat, 28 May 2022 21:24:59 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ignoring BRIN for HOT udpates seems broken"
},
{
"msg_contents": "\n\nOn 5/28/22 21:24, Matthias van de Meent wrote:\n> On Sat, 28 May 2022 at 16:51, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Hi,\n>>\n>> while working on some BRIN stuff, I realized (my) commit 5753d4ee320b\n>> ignoring BRIN indexes for HOT is likely broken. Consider this example:\n>>\n>> ----------------------------------------------------------------------\n>> CREATE TABLE t (a INT) WITH (fillfactor = 10);\n>>\n>> INSERT INTO t SELECT i\n>> FROM generate_series(0,100000) s(i);\n>>\n>> CREATE INDEX ON t USING BRIN (a);\n>>\n>> UPDATE t SET a = 0 WHERE random() < 0.01;\n>>\n>> SET enable_seqscan = off;\n>> EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF) SELECT * FROM t WHERE a = 0;\n>>\n>> SET enable_seqscan = on;\n>> EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF) SELECT * FROM t WHERE a = 0;\n>> ----------------------------------------------------------------------\n>>\n>> which unfortunately produces this:\n>>\n>> QUERY PLAN\n>> ---------------------------------------------------------------\n>> Bitmap Heap Scan on t (actual rows=23 loops=1)\n>> Recheck Cond: (a = 0)\n>> Rows Removed by Index Recheck: 2793\n>> Heap Blocks: lossy=128\n>> -> Bitmap Index Scan on t_a_idx (actual rows=1280 loops=1)\n>> Index Cond: (a = 0)\n>> Planning Time: 0.049 ms\n>> Execution Time: 0.424 ms\n>> (8 rows)\n>>\n>> SET\n>> QUERY PLAN\n>> -----------------------------------------\n>> Seq Scan on t (actual rows=995 loops=1)\n>> Filter: (a = 0)\n>> Rows Removed by Filter: 99006\n>> Planning Time: 0.027 ms\n>> Execution Time: 7.670 ms\n>> (5 rows)\n>>\n>> That is, the index fails to return some of the rows :-(\n>>\n>> I don't remember the exact reasoning behind the commit, but the commit\n>> message justifies the change like this:\n>>\n>> There are no index pointers to individual tuples in BRIN, and the\n>> page range summary will be updated anyway as it relies on visibility\n>> info.\n>>\n>> AFAICS that's a misunderstanding of how BRIN uses visibility map, 
or\n>> rather does not use. In particular, bringetbitmap() does not look at the\n>> vm at all, so it'll produce incomplete bitmap.\n>>\n>> So it seems I made a boo boo here. Luckily, this is a PG15 commit, not a\n>> live issue. I don't quite see if this can be salvaged - I'll think about\n>> this a bit more, but it'll probably end with a revert.\n> \n> The principle of 'amhotblocking' for only blocking HOT updates seems\n> correct, except for the fact that the HOT flag bit is also used as a\n> way to block the propagation of new values to existing indexes.\n> \n> A better abstraction would be \"amSummarizes[Block]', in which updates\n> that only modify columns that are only included in summarizing indexes\n> still allow HOT, but still will see an update call to all (relevant?)\n> summarizing indexes. That should still improve performance\n> significantly for the relevant cases.\n> \n\nYeah, I think that might/should work. We could still create the HOT\nchain, but we'd have to update the BRIN indexes. But that seems like a\nfairly complicated change to be done this late for PG15.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 28 May 2022 22:50:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Ignoring BRIN for HOT udpates seems broken"
},
{
"msg_contents": "On Sat, 28 May 2022 at 22:51, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 5/28/22 21:24, Matthias van de Meent wrote:\n> > On Sat, 28 May 2022 at 16:51, Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> while working on some BRIN stuff, I realized (my) commit 5753d4ee320b\n> >> ignoring BRIN indexes for HOT is likely broken. Consider this example:\n> >>\n> >> ----------------------------------------------------------------------\n> >> CREATE TABLE t (a INT) WITH (fillfactor = 10);\n> >>\n> >> INSERT INTO t SELECT i\n> >> FROM generate_series(0,100000) s(i);\n> >>\n> >> CREATE INDEX ON t USING BRIN (a);\n> >>\n> >> UPDATE t SET a = 0 WHERE random() < 0.01;\n> >>\n> >> SET enable_seqscan = off;\n> >> EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF) SELECT * FROM t WHERE a = 0;\n> >>\n> >> SET enable_seqscan = on;\n> >> EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF) SELECT * FROM t WHERE a = 0;\n> >> ----------------------------------------------------------------------\n> >>\n> >> which unfortunately produces this:\n> >>\n> >> QUERY PLAN\n> >> ---------------------------------------------------------------\n> >> Bitmap Heap Scan on t (actual rows=23 loops=1)\n> >> Recheck Cond: (a = 0)\n> >> Rows Removed by Index Recheck: 2793\n> >> Heap Blocks: lossy=128\n> >> -> Bitmap Index Scan on t_a_idx (actual rows=1280 loops=1)\n> >> Index Cond: (a = 0)\n> >> Planning Time: 0.049 ms\n> >> Execution Time: 0.424 ms\n> >> (8 rows)\n> >>\n> >> SET\n> >> QUERY PLAN\n> >> -----------------------------------------\n> >> Seq Scan on t (actual rows=995 loops=1)\n> >> Filter: (a = 0)\n> >> Rows Removed by Filter: 99006\n> >> Planning Time: 0.027 ms\n> >> Execution Time: 7.670 ms\n> >> (5 rows)\n> >>\n> >> That is, the index fails to return some of the rows :-(\n> >>\n> >> I don't remember the exact reasoning behind the commit, but the commit\n> >> message justifies the change like this:\n> >>\n> >> There are no index 
pointers to individual tuples in BRIN, and the\n> >> page range summary will be updated anyway as it relies on visibility\n> >> info.\n> >>\n> >> AFAICS that's a misunderstanding of how BRIN uses visibility map, or\n> >> rather does not use. In particular, bringetbitmap() does not look at the\n> >> vm at all, so it'll produce incomplete bitmap.\n> >>\n> >> So it seems I made a boo boo here. Luckily, this is a PG15 commit, not a\n> >> live issue. I don't quite see if this can be salvaged - I'll think about\n> >> this a bit more, but it'll probably end with a revert.\n> >\n> > The principle of 'amhotblocking' for only blocking HOT updates seems\n> > correct, except for the fact that the HOT flag bit is also used as a\n> > way to block the propagation of new values to existing indexes.\n> >\n> > A better abstraction would be \"amSummarizes[Block]', in which updates\n> > that only modify columns that are only included in summarizing indexes\n> > still allow HOT, but still will see an update call to all (relevant?)\n> > summarizing indexes. That should still improve performance\n> > significantly for the relevant cases.\n> >\n>\n> Yeah, I think that might/should work. We could still create the HOT\n> chain, but we'd have to update the BRIN indexes. But that seems like a\n> fairly complicated change to be done this late for PG15.\n\nHere's an example patch for that (based on a branch derived from\nmaster @ 5bb2b6ab). A nod to the authors of the pHOT patch, as that is\na related patch and was informative in how this could/should impact AM\nAPIs -- this is doing things similar (but not exactly the same) to\nthat by only updating select indexes.\n\nNote that this is an ABI change in some critical places -- I'm not\nsure it's OK to commit a fix like this into PG15 unless we really\ndon't want to revert 5753d4ee320b.\n\nAlso of note is that this still updates _all_ summarizing indexes, not\nonly those involved in the tuple update. 
Better performance is up to a\ndifferent implementation.\n\nThe patch includes a new regression test based on your example, which\nfails on master but succeeds after applying the patch.\n\n-Matthias",
"msg_date": "Mon, 30 May 2022 17:22:35 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ignoring BRIN for HOT udpates seems broken"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-30 17:22:35 +0200, Matthias van de Meent wrote:\n> > Yeah, I think that might/should work. We could still create the HOT\n> > chain, but we'd have to update the BRIN indexes. But that seems like a\n> > fairly complicated change to be done this late for PG15.\n> \n> Here's an example patch for that (based on a branch derived from\n> master @ 5bb2b6ab). A nod to the authors of the pHOT patch, as that is\n> a related patch and was informative in how this could/should impact AM\n> APIs -- this is doing things similar (but not exactly the same) to\n> that by only updating select indexes.\n> \n> Note that this is an ABI change in some critical places -- I'm not\n> sure it's OK to commit a fix like this into PG15 unless we really\n> don't want to revert 5753d4ee320b.\n> \n> Also of note is that this still updates _all_ summarizing indexes, not\n> only those involved in the tuple update. Better performance is up to a\n> different implementation.\n> \n> The patch includes a new regression test based on your example, which\n> fails on master but succeeds after applying the patch.\n\nThis seems like a pretty clear cut case for reverting and retrying in\n16. There's plenty subtlety in this area (as evidenced by this thread and the\nindex/reindex concurrently breakage), and building infrastructure post beta1\nisn't exactly conducive to careful analysis and testing.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 30 May 2022 12:57:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Ignoring BRIN for HOT udpates seems broken"
},
{
"msg_contents": "On Sat, May 28, 2022 at 4:51 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Yeah, I think that might/should work. We could still create the HOT\n> chain, but we'd have to update the BRIN indexes. But that seems like a\n> fairly complicated change to be done this late for PG15.\n\nYeah, I think a revert is better for now. But I agree that the basic\nidea seems salvageable. I think that the commit message is correct\nwhen it states that \"When determining whether an index update may be\nskipped by using HOT, we can ignore attributes indexed only by BRIN\nindexes.\" However, that doesn't mean that we can ignore the need to\nupdate those indexes. In that regard, the commit message makes it\nsound like all is well, because it states that \"the page range summary\nwill be updated anyway\" which reads to me like the indexes are in fact\ngetting updated. Your example, however, seems to show that the indexes\nare not getting updated.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 1 Jun 2022 16:38:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ignoring BRIN for HOT udpates seems broken"
},
{
"msg_contents": "On 6/1/22 22:38, Robert Haas wrote:\n> On Sat, May 28, 2022 at 4:51 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> Yeah, I think that might/should work. We could still create the HOT\n>> chain, but we'd have to update the BRIN indexes. But that seems like a\n>> fairly complicated change to be done this late for PG15.\n> \n> Yeah, I think a revert is better for now. But I agree that the basic\n> idea seems salvageable. I think that the commit message is correct\n> when it states that \"When determining whether an index update may be\n> skipped by using HOT, we can ignore attributes indexed only by BRIN\n> indexes.\" However, that doesn't mean that we can ignore the need to\n> update those indexes. In that regard, the commit message makes it\n> sound like all is well, because it states that \"the page range summary\n> will be updated anyway\" which reads to me like the indexes are in fact\n> getting updated. Your example, however, seems to show that the indexes\n> are not getting updated.\n> \n\nYeah, agreed :-( I agree we can probably salvage some of the idea, but\nit's far too late for major reworks in PG15.\n\nAttached is a patch reverting both commits (5753d4ee32 and fe60b67250).\nThis changes the IndexAmRoutine struct, so it's an ABI break. That's not\ngreat post-beta :-( In principle we might also leave amhotblocking in\nthe struct but ignore it in the code (and treat it as false), but that\nseems weird and it's going to be a pain when backpatching. Opinions?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 6 Jun 2022 09:08:08 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Ignoring BRIN for HOT udpates seems broken"
},
{
"msg_contents": "On Mon, Jun 06, 2022 at 09:08:08AM +0200, Tomas Vondra wrote:\n> Attached is a patch reverting both commits (5753d4ee32 and fe60b67250).\n> This changes the IndexAmRoutine struct, so it's an ABI break. That's not\n> great post-beta :-( In principle we might also leave amhotblocking in\n> the struct but ignore it in the code (and treat it as false), but that\n> seems weird and it's going to be a pain when backpatching. Opinions?\n\nI don't think that you need to worry about ABI breakages now in beta,\nbecause that's the period of time where we can still change things and\nshape the code in its best way for prime time. It depends on the\nchange, of course, but what you are doing, by removing the field,\nlooks right to me here.\n--\nMichael",
"msg_date": "Mon, 6 Jun 2022 16:28:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ignoring BRIN for HOT udpates seems broken"
},
{
"msg_contents": "On 6/6/22 09:28, Michael Paquier wrote:\n> On Mon, Jun 06, 2022 at 09:08:08AM +0200, Tomas Vondra wrote:\n>> Attached is a patch reverting both commits (5753d4ee32 and fe60b67250).\n>> This changes the IndexAmRoutine struct, so it's an ABI break. That's not\n>> great post-beta :-( In principle we might also leave amhotblocking in\n>> the struct but ignore it in the code (and treat it as false), but that\n>> seems weird and it's going to be a pain when backpatching. Opinions?\n> \n> I don't think that you need to worry about ABI breakages now in beta,\n> because that's the period of time where we can still change things and\n> shape the code in its best way for prime time. It depends on the\n> change, of course, but what you are doing, by removing the field,\n> looks right to me here.\n\nI've pushed the revert. Let's try again for PG16.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 16 Jun 2022 15:05:06 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Ignoring BRIN for HOT udpates seems broken"
},
{
"msg_contents": "On Thu, 16 Jun 2022 at 15:05, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> I've pushed the revert. Let's try again for PG16.\n\nAs we discussed in person at the developer meeting, here's a patch to\ntry again for PG16.\n\nIt combines the committed patches with my fix, and adds some\nadditional comments and polish. I am confident the code is correct,\nbut not that it is clean (see the commit message of the patch for\ndetails).\n\nKind regards,\n\nMatthias van de Meent\n\nPS. I'm adding this to the commitfest\n\nOriginal patch thread:\nhttps://www.postgresql.org/message-id/flat/CAFp7QwpMRGcDAQumN7onN9HjrJ3u4X3ZRXdGFT0K5G2JWvnbWg%40mail.gmail.com\n\nOther relevant:\nhttps://www.postgresql.org/message-id/flat/CA%2BTgmoZOgdoAFH9HatRwuydOZkMdyPi%3D97rNhsu%3DhQBBYs%2BgXQ%40mail.gmail.com",
"msg_date": "Sun, 19 Feb 2023 02:03:21 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ignoring BRIN for HOT updates (was: -udpates seems broken)"
},
{
"msg_contents": "Hi,\n\nOn 2/19/23 02:03, Matthias van de Meent wrote:\n> On Thu, 16 Jun 2022 at 15:05, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> I've pushed the revert. Let's try again for PG16.\n> \n> As we discussed in person at the developer meeting, here's a patch to\n> try again for PG16.\n> \n> It combines the committed patches with my fix, and adds some\n> additional comments and polish. I am confident the code is correct,\n> but not that it is clean (see the commit message of the patch for\n> details).\n> \n\nThanks for the patch. I took a quick look, and I agree it seems correct,\nand fairly clean too. Which places do you think need cleanup/improvement?\n\nAFAICS some of the code comes from the original (reverted) patch, so\nthat should be fairly non-controversial. The two new bits seem to be\nTU_UpdateIndexes and HEAP_TUPLE_SUMMARIZING_UPDATED.\n\nI have some minor review comments regarding TU_UpdateIndexes, but in\nprinciple it's fine - we need to track/pass the flag somehow, and this\nis reasonable IMHO.\n\nI'm not entirely sure about HEAP_TUPLE_SUMMARIZING_UPDATED yet. It's\npretty much a counter-part to TU_UpdateIndexes - until now we've only\nhad HOT vs. non-HOT, and one bit in header (HEAP_HOT_UPDATED) was\nsufficient for that. But now we need 3 states, so an extra bit is\nneeded. That's fine, and using another bit in the header makes sense.\n\nThe commit message says the bit is \"otherwise unused\" but after a while\nI realized it's just an \"alias\" for HEAP_HOT_UPDATED - I guess it means\nit's unused in the places that need to track/set it, right? I wonder if\nsomething can be confused by this - thinking it's a regular HOT update,\nand doing something wrong.\n\nDo we have some precedent for using a header bit like this? Something\nthat'd set a bit on an in-memory tuple only to reset it shortly after?\n\nDoes it make sense to add asserts that'd ensure we can't set the bit\ntwice? Like a code setting both HEAP_HOT_UPDATED and the new flag?\n\n\nA couple more minor comments after eye-balling the patch:\n\n* I think heap_update would benefit from a couple more comments, e.g.\nthe comment before calculating sum_attrs should probably mention the\nsummarization optimization.\n\n\n* heapam_tuple_update is probably the one place that I find not quite\nreadable.\n\n\n* I don't understand why the TU_UpdateIndexes fields are prefixed TUUI_?\nWhy not just use TU_?\n\n\n* indexam.sgml says:\n\n Access methods that do not point to individual tuples, but to (like\n\nI guess \"page range\" (or something like that) is missing.\n\nNote: I wonder how difficult it would be to also deal with attributes in\npredicates. IIRC if the predicate is false, we can ignore the index, but\nthe consensus back then was it's too expensive as it can't be done using\nthe bitmaps and requires evaluating the expression, etc. But maybe there\nare ways to work around that by first checking everything except for the\nindex predicates, and only when we still think HOT is possible we would\ncheck the predicates. Tables usually have only a couple partial indexes,\nso this might be a win. Not something this patch should/needs to do, of\ncourse.\n\n\n* bikeshedding: rel.h says\n\nBitmapset *rd_summarizedattr;\t/* cols indexed by block-or-larger\nsummarizing indexes */\n\nI think the \"block-or-larger\" bit is unnecessary. I think the crucial\nbit is the index does not contain pointers to individual tuples.\nSimilarly for indexam.sgml, which talks about \"at least all tuples in\none block\".\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 19 Feb 2023 16:04:16 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Ignoring BRIN for HOT updates (was: -udpates seems broken)"
},
{
"msg_contents": "Hi,\n\nOn Sun, 19 Feb 2023 at 16:04, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> On 2/19/23 02:03, Matthias van de Meent wrote:\n> > On Thu, 16 Jun 2022 at 15:05, Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> I've pushed the revert. Let's try again for PG16.\n> >\n> > As we discussed in person at the developer meeting, here's a patch to\n> > try again for PG16.\n> >\n> > It combines the committed patches with my fix, and adds some\n> > additional comments and polish. I am confident the code is correct,\n> > but not that it is clean (see the commit message of the patch for\n> > details).\n> >\n>\n> Thanks for the patch. I took a quick look, and I agree it seems correct,\n> and fairly clean too.\n\nThanks. Based on feedback, attached is v2 of the patch, with as\nsignificant changes:\n\n- We don't store the columns we mention in predicates of summarized\nindexes in the hotblocking column anymore, they are stored in the\nsummarized columns bitmap instead. This further reduces the chance of\nfailing to apply HOT with summarizing indexes.\n- The heaptuple header bit for summarized update in inserted tuples is\nreplaced with passing an out parameter. This simplifies the logic and\ndecreases chances of accidentally storing incorrect data.\n\nResponses to feedback below.\n\n> Which places you think need cleanup/improvement?\n\nI wasn't confident about the use of HEAP_TUPLE_SUMMARIZING_UPDATED -\nit's not a nice way to signal what indexes to update. This has been\nupdated in the attached patch.\n\n> AFAICS some of the code comes from the original (reverted) patch, so\n> that should be fairly non-controversial. The two new bits seem to be\n> TU_UpdateIndexes and HEAP_TUPLE_SUMMARIZING_UPDATED.\n\nCorrect.\n\n> I have some minor review comments regarding TU_UpdateIndexes, but in\n> principle it's fine - we need to track/pass the flag somehow, and this\n> is reasonable IMHO.\n>\n> I'm not entirely sure about HEAP_TUPLE_SUMMARIZING_UPDATED yet.\n\nThis is the part that I wasn't sure about either. I don't really like\nthe way it was implemented (temporary in-memory only bits in the tuple\nheader), but I also couldn't find an amazing alternative back in the\nv15 beta window when I wrote the original fix for the now-reverted\ncommit. I've updated this to utilize 'out parameters' instead.\nAlthough this change requires some more function signature changes, I\nthink it's better overall.\n\n> It's\n> pretty much a counter-part to TU_UpdateIndexes - until now we've only\n> had HOT vs. non-HOT, and one bit in header (HEAP_HOT_UPDATED) was\n> sufficient for that. But now we need 3 states, so an extra bit is\n> needed. That's fine, and using another bit in the header makes sense.\n>\n> The commit message says the bit is \"otherwise unused\" but after a while\n> I realized it's just an \"alias\" for HEAP_HOT_UPDATED - I guess it means\n> it's unused in the places that need to track set it, right? I wonder if\n> something can be confused by this - thinking it's a regular HOT update,\n> and doing something wrong.\n\nYes. A newly inserted tuple, whether created from an update or a fresh\ninsert, can't already have been HOT-updated, so the bit is only\navailable (not in use for meaningful operations) in the in-memory\ntuple processing path of new tuple insertion (be it update or actual\ninsert).\n\n> Do we have some precedent for using a header bit like this? Something\n> that'd set a bit on in-memory tuple only to reset it shortly after?\n\nI can't find any, but I also haven't looked very far.\n\n> Does it make sense to add asserts that'd ensure we can't set the bit\n> twice? Like a code setting both HEAP_HOT_UPDATED and the new flag?\n\nI'm doubtful of that, as this is basically a HOT chain intermediate\ntuple being returned (but only in memory), instead of the normal\nfreshly inserted HOT tuple that's the end of a HOT chain. Anyway, that\ncode has been removed in the attached patch.\n\n> A couple more minor comments after eye-balling the patch:\n>\n> * I think heap_update would benefit from a couple more comments, e.g.\n> the comment before calculating sum_attrs should probably mention the\n> summarization optimization.\n\nDone.\n\n> * heapam_tuple_update is probably the one place that I find hard to read\n> not quite readable.\n\nUpdated.\n\n> * I don't understand why the TU_UpdateIndexes fields are prefixed TUUI_?\n> Why not to just use TU_?\n\nI was under the (after checking, mistaken) impression that we already\nhad an enum that used the TU_* prefix. This has been updated.\n\n> * indexam.sgml says:\n>\n> Access methods that do not point to individual tuples, but to (like\n>\n> I guess \"page range\" (or something like that) is missing.\n\nFixed\n\n> Note: I wonder how difficult would it be to also deal with attributes in\n> predicates. IIRC if the predicate is false, we can ignore the index, but\n> the consensus back then was it's too expensive as it can't be done using\n> the bitmaps and requires evaluating the expression, etc. But maybe there\n> are ways to work around that by first checking everything except for the\n> index predicates, and only when we still think HOT is possible we would\n> check the predicates. Tables usually have only a couple partial indexes,\n> so this might be a win. Not something this patch should/needs to do, of\n> course.\n\nYes, I think that could be considered separately.\n\n> * bikeshedding: rel.h says\n>\n> Bitmapset *rd_summarizedattr; /* cols indexed by block-or-larger\n> summarizing indexes */\n>\n> I think the \"block-or-larger\" bit is unnecessary. I think the crucial\n> bit is the index does not contain pointers to individual tuples.\n> Similarly for indexam.sgml, which talks about \"at least all tuples in\n> one block\".\n\nThat makes sense, fixed.\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Mon, 20 Feb 2023 19:15:56 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ignoring BRIN for HOT updates (was: -udpates seems broken)"
},
{
"msg_contents": "On 2/20/23 19:15, Matthias van de Meent wrote:\n> Hi,\n> \n> On Sun, 19 Feb 2023 at 16:04, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Hi,\n>>\n>> On 2/19/23 02:03, Matthias van de Meent wrote:\n>>> On Thu, 16 Jun 2022 at 15:05, Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>> I've pushed the revert. Let's try again for PG16.\n>>>\n>>> As we discussed in person at the developer meeting, here's a patch to\n>>> try again for PG16.\n>>>\n>>> It combines the committed patches with my fix, and adds some\n>>> additional comments and polish. I am confident the code is correct,\n>>> but not that it is clean (see the commit message of the patch for\n>>> details).\n>>>\n>>\n>> Thanks for the patch. I took a quick look, and I agree it seems correct,\n>> and fairly clean too.\n> \n> Thanks. Based on feedback, attached is v2 of the patch, with as\n> significant changes:\n> \n> - We don't store the columns we mention in predicates of summarized\n> indexes in the hotblocking column anymore, they are stored in the\n> summarized columns bitmap instead. This further reduces the chance of\n> failiing to apply HOT with summarizing indexes.\n\nInteresting idea. I need to think about the correctness, but AFAICS it\nshould work. Do we have any tests covering such cases?\n\nI see both v1 and v2 had exactly this\n\n src/test/regress/expected/stats.out | 110 ++++++++++++++++++\n src/test/regress/sql/stats.sql | 82 ++++++++++++-\n\nso I guess there are no new tests testing this for BRIN with predicates.\nWe should probably add some ...\n\n> - The heaptuple header bit for summarized update in inserted tuples is\n> replaced with passing an out parameter. This simplifies the logic and\n> decreases chances of accidentally storing incorrect data.\n> \n\nOK.\n\n0002 proposes a minor RelationGetIndexPredicate() tweak, getting rid of\nthe repeated if/else branches. Feel free to discard, if you think the v2\napproach is better.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 22 Feb 2023 13:15:07 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Ignoring BRIN for HOT updates (was: -udpates seems broken)"
},
{
"msg_contents": "On Wed, 22 Feb 2023 at 13:15, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 2/20/23 19:15, Matthias van de Meent wrote:\n> > Thanks. Based on feedback, attached is v2 of the patch, with as\n> > significant changes:\n> >\n> > - We don't store the columns we mention in predicates of summarized\n> > indexes in the hotblocking column anymore, they are stored in the\n> > summarized columns bitmap instead. This further reduces the chance of\n> > failiing to apply HOT with summarizing indexes.\n>\n> Interesting idea. I need to think about the correctness, but AFAICS it\n> should work. Do we have any tests covering such cases?\n\nThere is a test that checks that an update to the predicated column\ndoes update the index (on table brin_hot_2). However, the description\nwas out of date, so I've updated that in v4.\n\n> > - The heaptuple header bit for summarized update in inserted tuples is\n> > replaced with passing an out parameter. This simplifies the logic and\n> > decreases chances of accidentally storing incorrect data.\n> >\n>\n> OK.\n>\n> 0002 proposes a minor RelationGetIndexPredicate() tweak, getting rid of\n> the repeated if/else branches. Feel free to discard, if you think the v2\n> approach is better.\n\nI agree that this is better, it's included in v4 of the patch, as attached.\n\nKind regards,\n\nMatthias van de Meent.",
"msg_date": "Wed, 22 Feb 2023 14:14:02 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ignoring BRIN for HOT updates (was: -udpates seems broken)"
},
{
"msg_contents": "On Wed, 22 Feb 2023 at 14:14, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Wed, 22 Feb 2023 at 13:15, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > On 2/20/23 19:15, Matthias van de Meent wrote:\n> > > Thanks. Based on feedback, attached is v2 of the patch, with as\n> > > significant changes:\n> > >\n> > > - We don't store the columns we mention in predicates of summarized\n> > > indexes in the hotblocking column anymore, they are stored in the\n> > > summarized columns bitmap instead. This further reduces the chance of\n> > > failiing to apply HOT with summarizing indexes.\n> >\n> > Interesting idea. I need to think about the correctness, but AFAICS it\n> > should work. Do we have any tests covering such cases?\n>\n> There is a test that checks that an update to the predicated column\n> does update the index (on table brin_hot_2). However, the description\n> was out of date, so I've updated that in v4.\n>\n> > > - The heaptuple header bit for summarized update in inserted tuples is\n> > > replaced with passing an out parameter. This simplifies the logic and\n> > > decreases chances of accidentally storing incorrect data.\n> > >\n> >\n> > OK.\n> >\n> > 0002 proposes a minor RelationGetIndexPredicate() tweak, getting rid of\n> > the repeated if/else branches. Feel free to discard, if you think the v2\n> > approach is better.\n>\n> I agree that this is better, it's included in v4 of the patch, as attached.\n\nI think that the v4 patch solves all comments up to now; and\nconsidering that most of this patch was committed but then reverted\ndue to an issue in v15, and that said issue is fixed in this patch,\nI'm marking this as ready for committer.\n\nTomas, would you be up for that?\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 8 Mar 2023 23:31:58 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ignoring BRIN for HOT updates (was: -udpates seems broken)"
},
{
"msg_contents": "On 3/8/23 23:31, Matthias van de Meent wrote:\n> On Wed, 22 Feb 2023 at 14:14, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n>>\n>> On Wed, 22 Feb 2023 at 13:15, Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> On 2/20/23 19:15, Matthias van de Meent wrote:\n>>>> Thanks. Based on feedback, attached is v2 of the patch, with as\n>>>> significant changes:\n>>>>\n>>>> - We don't store the columns we mention in predicates of summarized\n>>>> indexes in the hotblocking column anymore, they are stored in the\n>>>> summarized columns bitmap instead. This further reduces the chance of\n>>>> failiing to apply HOT with summarizing indexes.\n>>>\n>>> Interesting idea. I need to think about the correctness, but AFAICS it\n>>> should work. Do we have any tests covering such cases?\n>>\n>> There is a test that checks that an update to the predicated column\n>> does update the index (on table brin_hot_2). However, the description\n>> was out of date, so I've updated that in v4.\n>>\n>>>> - The heaptuple header bit for summarized update in inserted tuples is\n>>>> replaced with passing an out parameter. This simplifies the logic and\n>>>> decreases chances of accidentally storing incorrect data.\n>>>>\n>>>\n>>> OK.\n>>>\n>>> 0002 proposes a minor RelationGetIndexPredicate() tweak, getting rid of\n>>> the repeated if/else branches. Feel free to discard, if you think the v2\n>>> approach is better.\n>>\n>> I agree that this is better, it's included in v4 of the patch, as attached.\n> \n> I think that the v4 patch solves all comments up to now; and\n> considering that most of this patch was committed but then reverted\n> due to an issue in v15, and that said issue is fixed in this patch,\n> I'm marking this as ready for committer.\n> \n> Tomas, would you be up for that?\n> \n\nThanks for the patch. I started looking at it yesterday, and I think\nit's 99% RFC. I think it's correct and I only have some minor comments,\n(see the 0002 patch):\n\n\n1) There were still a couple minor wording issues in the sgml docs.\n\n2) bikeshedding: I added a bunch of \"()\" to various conditions, I think\nit makes it clearer.\n\n3) This seems a bit weird way to write a conditional Assert:\n\n if (onlySummarized)\n Assert(HeapTupleIsHeapOnly(heapTuple));\n\nbetter to do a composed Assert(!(onlySummarized && !...)) or something?\n\n4) A couple comments and minor tweaks.\n\n5) Undoing a couple unnecessary changes (whitespace, ...).\n\n6) Proper formatting of TU_UpdateIndexes enum.\n\n7) Comment in RelationGetIndexAttrBitmap() is misleading, as it still\nreferences hotblockingattrs, even though it may update summarizedattrs\nin some cases.\n\n\nIf you agree with these changes, I'll get it committed.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 14 Mar 2023 14:49:08 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Ignoring BRIN for HOT updates (was: -udpates seems broken)"
},
{
"msg_contents": "On Tue, 14 Mar 2023 at 14:49, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 3/8/23 23:31, Matthias van de Meent wrote:\n> > On Wed, 22 Feb 2023 at 14:14, Matthias van de Meent\n> >\n> > I think that the v4 patch solves all comments up to now; and\n> > considering that most of this patch was committed but then reverted\n> > due to an issue in v15, and that said issue is fixed in this patch,\n> > I'm marking this as ready for committer.\n> >\n> > Tomas, would you be up for that?\n> >\n>\n> Thanks for the patch. I started looking at it yesterday, and I think\n> it's 99% RFC. I think it's correct and I only have some minor comments,\n> (see the 0002 patch):\n>\n>\n> 1) There were still a couple minor wording issues in the sgml docs.\n>\n> 2) bikeshedding: I added a bunch of \"()\" to various conditions, I think\n> it makes it clearer.\n\nSure\n\n> 3) This seems a bit weird way to write a conditional Assert:\n>\n> if (onlySummarized)\n> Assert(HeapTupleIsHeapOnly(heapTuple));\n>\n> better to do a composed Assert(!(onlySummarized && !...)) or something?\n\nI don't like this double negation, as it adds significant parsing\ncomplexity to the statement. If I'd have gone with a single Assert()\nstatement, I'd have used the following:\n\nAssert((!onlySummarized) || HeapTupleIsHeapOnly(heapTuple));\n\nbecause in the code section above that the HOT + !onlySummarized case\nis an early exit.\n\n> 4) A couple comments and minor tweaks.\n> 5) Undoing a couple unnecessary changes (whitespace, ...).\n> 6) Proper formatting of TU_UpdateIndexes enum.\n\nAlright\n\n> + *\n> + * XXX Why do we assign explicit values to the members, instead of just letting\n> + * it up to the enum (just like for TM_Result)?\n\nThis was from the v15 beta window, to reduce the difference between\nbool and TU_UpdateIndexes. With pg16, that can be dropped.\n\n>\n> 7) Comment in RelationGetIndexAttrBitmap() is misleading, as it still\n> references hotblockingattrs, even though it may update summarizedattrs\n> in some cases.\n\nHow about\n\n Since we have covering indexes with non-key columns, we must\n handle them accurately here. Non-key columns must be added into\n the hotblocking or summarizing attrs bitmap, since they are in\n the index, and update shouldn't miss them.\n\ninstead for that section?\n\n> If you agree with these changes, I'll get it committed.\n\nYes, thanks!\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Tue, 14 Mar 2023 15:41:58 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ignoring BRIN for HOT updates (was: -udpates seems broken)"
},
{
"msg_contents": "On 3/14/23 15:41, Matthias van de Meent wrote:\n> On Tue, 14 Mar 2023 at 14:49, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>>> ...\n> \n>> If you agree with these changes, I'll get it committed.\n> \n> Yes, thanks!\n> \n\nI've tweaked the patch per the last round of comments, cleaned up the\ncommit message a bit (it still talked about unused bit in tuple header\nand so on), and pushed it.\n\nThanks for fixing the issues that got the patch reverted last year!\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 20 Mar 2023 11:11:27 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Ignoring BRIN for HOT updates (was: -udpates seems broken)"
},
{
"msg_contents": "On Mon, 20 Mar 2023 at 11:11, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 3/14/23 15:41, Matthias van de Meent wrote:\n> > On Tue, 14 Mar 2023 at 14:49, Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >>> ...\n> >\n> >> If you agree with these changes, I'll get it committed.\n> >\n> > Yes, thanks!\n> >\n>\n> I've tweaked the patch per the last round of comments, cleaned up the\n> commit message a bit (it still talked about unused bit in tuple header\n> and so on), and pushed it.\n>\n> Thanks for fixing the issues that got the patch reverted last year!\n\nThanks for helping getting this in!\n\n\nKind regards,\n\nMatthias van de Meent.\n\n\n",
"msg_date": "Mon, 20 Mar 2023 11:24:20 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ignoring BRIN for HOT updates (was: -udpates seems broken)"
},
{
"msg_contents": "On Mon, 20 Mar 2023 at 11:24, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Mon, 20 Mar 2023 at 11:11, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > On 3/14/23 15:41, Matthias van de Meent wrote:\n> > > On Tue, 14 Mar 2023 at 14:49, Tomas Vondra\n> > > <tomas.vondra@enterprisedb.com> wrote:\n> > >>\n> > >>> ...\n> > >\n> > >> If you agree with these changes, I'll get it committed.\n> > >\n> > > Yes, thanks!\n> > >\n> >\n> > I've tweaked the patch per the last round of comments, cleaned up the\n> > commit message a bit (it still talked about unused bit in tuple header\n> > and so on), and pushed it.\n> >\n> > Thanks for fixing the issues that got the patch reverted last year!\n>\n> Thanks for helping getting this in!\n\nThanks for fixing the problems!\n\n>\n> Kind regards,\n>\n> Matthias van de Meent.\n>\n>\n\n\n",
"msg_date": "Mon, 20 Mar 2023 11:44:15 +0100",
"msg_from": "Josef Šimánek <josef.simanek@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ignoring BRIN for HOT updates (was: -udpates seems broken)"
}
] |
[
{
"msg_contents": "I'm \"joining\" a bunch of unresolved threads hoping to present them better since\nthey're related and I'm maintaining them together anyway.\n\nhttps://www.postgresql.org/message-id/flat/20220219234148.GC9008%40telsasoft.com\n - set TESTDIR from perl rather than Makefile\nhttps://www.postgresql.org/message-id/flat/20220522232606.GZ19626%40telsasoft.com\n - ccache, MSVC, and meson\nhttps://www.postgresql.org/message-id/20220416144454.GX26620%40telsasoft.com\n - Re: convert libpq uri-regress tests to tap test\nhttps://www.postgresql.org/message-id/CA%2BhUKGLneD%2Bq%2BE7upHGwn41KGvbxhsKbJ%2BM-y9nvv7_Xjv8Qog%40mail.gmail.com\n - Re: A test for replay of regression tests\nhttps://www.postgresql.org/message-id/flat/20220409021853.GP24419@telsasoft.com\n - cfbot requests\n\nSee the commit messages for more thread references.\n\nI'm anticipating the need to further re-arrange the patch set - it's not clear\nwhich patches should go first. Maybe some patches should be dropped in favour\nof the meson project. I guess some patches will have to be re-implemented with\nmeson (msvc warnings).\n\nI think there was some confusion about the vcregress \"alltaptests\" target.\nI said that it's okay to add it and make cirrus use it (and that the buildfarm\ncould use it too). Andrew responded that the buildfarm wants to run different\ntests separately. But Andres seems to have interpreted that as an objection\nto the addition of an \"alltaptests\" target, which I think isn't what's\nintended - it's fine if the buildfarm prefers not to use it.\n\nMaybe the improvements to vcregress should go into v15 ? CI should run all the\ntests (which also serves to document *how* to run all the tests, even if there\nisn't a literal check-world target).",
"msg_date": "Sat, 28 May 2022 10:37:41 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "CI and test improvements"
},
{
"msg_contents": "On Sat, May 28, 2022 at 10:37:41AM -0500, Justin Pryzby wrote:\n> I'm \"joining\" a bunch of unresolved threads hoping to present them better since\n> they're related and I'm maintaining them together anyway.\n\nThis resolves an error with libpq tests in an intermediate commit, probably\ncaused by rebasing (and maybe hidden by the fact that the tests weren't being\nrun).\n\nAnd updates ccache to avoid CCACHE_COMPILER.\n\nShould any of the test changes go into v15 ?\n\n> Subject: [PATCH 02/19] cirrus/vcregress: test modules/contrib with NO_INSTALLCHECK=1\n> Subject: [PATCH 08/19] vcregress: add alltaptests\n> Subject: [PATCH 14/19] Move libpq_pipeline test into src/interfaces/libpq.\n> Subject: [PATCH 15/19] msvc: do not install libpq test tools by default\n\nI also added: cirrus/ccache: disable compression and show stats\n\nSince v4.0, ccache enables zstd compression by default, saving roughly 2x-3x.\nBut, cirrus caches are compressed as tar.gz, so we could disable ccache\ncompression, allowing cirrus to gzip the uncompressed data (better than\nccache's default of zstd-1).\n\nhttps://cirrus-ci.com/build/5194596123672576\ndebian/bullseye has ccache 4.2; cirrus says 92MB cache after a single compilation; cache_size_kibibyte 102000\nmacos: has 4.5.1: 46MB cache; cache_size_kibibyte 51252\nfreebsd: has 3.7.12: 41MB cache; cache_size_kibibyte 130112\n\nFor some reason, mac's ccache uses 2x less space than either freebsd or linux.\nLinux is ~30% larger.\nFreebsd uploads an artifact 3x smaller than the size ccache reports, because\nits ccache is old so doesn't enable compression by default.\n\nI've also sent some patches to Thomas for cfbot to help progress some of these\npatches (code coverage and documentation upload as artifacts).\nhttps://github.com/justinpryzby/cfbot/commits/master\n\n-- \nJustin",
"msg_date": "Thu, 23 Jun 2022 14:31:26 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "[Resending -- sorry if you receive this twice. Jacob's mail server\nhas apparently offended the list management software so emails bounce\nif he's in CC.]\n\nOn Fri, Jun 24, 2022 at 7:23 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Freebsd uploads an artifact 3x smaller than the size ccache reports, because\n> its ccache is old so doesn't enable compression by default.\n\nThat port/package got stuck on 3.x because of some dependency problems\nwhen used in bootstrapping FreeBSD itself (or other ports?),\napparently. I didn't follow the details but the recent messages here\nsound hopeful and I'm keeping my eye on it to see if 4.x lands as a\nseparate package we'd need to install, or something. Fingers crossed!\n\nhttps://bugs.freebsd.org/bugzilla/show_bug.cgi?id=234971\n\n> I've also sent some patches to Thomas for cfbot to help progress some of these\n> patches (code coverage and documentation upload as artifacts).\n> https://github.com/justinpryzby/cfbot/commits/master\n\nThanks, sorry for lack of action, will get to these soon.\n\n\n",
"msg_date": "Fri, 24 Jun 2022 08:38:50 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "rebased on b6a5158f9 and 054325c5e\n\nAlso, cirrus/freebsd task can run 3x faster with more CPUs.\nSubject: [PATCH 21/21] cirrus: run freebsd with more CPUs+RAM \nhttps://cirrus-ci.com/task/4664440120410112\nhttps://cirrus-ci.com/task/5100411110555648\n\nIn the past, I gather there was some undiagnosed issue when using more CPUs\n(cirrus now enforces >5GB RAM when using 6 CPUs - maybe you tried to use too\nlittle RAM, or maybe hit bad performance involving NUMA?)\nhttps://www.postgresql.org/message-id/flat/20220310033347.hgxk4pyarzq4hxwp@alap3.anarazel.de#f36c0b17e33e31e7925e7e5812998686",
"msg_date": "Thu, 7 Jul 2022 19:22:32 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Resending with a problematic address removed...\n\nOn Sat, May 28, 2022 at 10:37:41AM -0500, Justin Pryzby wrote:\n> I'm anticipating the need to further re-arrange the patch set - it's not clear\n> which patches should go first. Maybe some patches should be dropped in favour\n> of the meson project. I guess some patches will have to be re-implemented with\n> meson (msvc warnings).\n\n> Maybe the improvements to vcregress should go into v15 ? CI should run all the\n> tests (which also serves to document *how* to run all the tests, even if there\n> isn't a literal check-world target).\n\nOn Thu, Jun 23, 2022 at 02:31:25PM -0500, Justin Pryzby wrote:\n> Should any of the test changes go into v15 ?\n\nOn Thu, Jul 07, 2022 at 07:22:32PM -0500, Justin Pryzby wrote:\n> Also, cirrus/freebsd task can run 3x faster with more CPUs.\n\nChecking if there's interest in any/none of these patches ?\nI have added several more.\n\nDo you have an idea when the meson branch might be merged ?\n\nWill vcregress remain for a while, or will it go away for v16 ?\n\n-- \nJustin",
"msg_date": "Sun, 28 Aug 2022 09:44:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-28 09:44:47 -0500, Justin Pryzby wrote:\n> On Sat, May 28, 2022 at 10:37:41AM -0500, Justin Pryzby wrote:\n> > I'm anticipating the need to further re-arrange the patch set - it's not clear\n> > which patches should go first. Maybe some patches should be dropped in favour\n> > of the meson project. I guess some patches will have to be re-implemented with\n> > meson (msvc warnings).\n>\n> > Maybe the improvements to vcregress should go into v15 ? CI should run all the\n> > tests (which also serves to document *how* to run all the tests, even if there\n> > isn't a literal check-world target).\n>\n> On Thu, Jun 23, 2022 at 02:31:25PM -0500, Justin Pryzby wrote:\n> > Should any of the test changes go into v15 ?\n>\n> On Thu, Jul 07, 2022 at 07:22:32PM -0500, Justin Pryzby wrote:\n> > Also, cirrus/freebsd task can run 3x faster with more CPUs.\n>\n> Checking if there's interest in any/none of these patches ?\n> I have added several more.\n>\n> Do you have an idea when the meson branch might be merged ?\n\nI hope to do that fairly soon, but it's of course dependent on review etc. The\nplan was to merge it early and mature it in tree to some degree. 
There's only\nso much we can do \"from the outside\"...\n\n\n> Will vcregress remain for a while, or will it go away for v16 ?\n\nThe plan was for the windows stuff to go away fairly quickly.\n\n\n> From 99ee0bef5054539aad0e23a24dd9c9cc9ccee788 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Wed, 25 May 2022 21:53:22 -0500\n> Subject: [PATCH 01/25] cirrus/windows: add compiler_warnings_script\n\nLooks good.\n\n> - MSBFLAGS: -m -verbosity:minimal \"-consoleLoggerParameters:Summary;ForceNoAlign\" /p:TrackFileAccess=false -nologo\n> + # -fileLoggerParameters1: write warnings to msbuild.warn.log.\n> + MSBFLAGS: -m -verbosity:minimal \"-consoleLoggerParameters:Summary;ForceNoAlign\" /p:TrackFileAccess=false -nologo -fileLoggerParameters1:warningsonly;logfile=msbuild.warn.log\n\nExcept, I think it'd be good to split this line. What do you think about using\nsomething like\nMSBFLAGS: >-\n -nologo\n -m -verbosity:minimal\n /p:TrackFileAccess=false\n \"-consoleLoggerParameters:Summary;ForceNoAlign\"\n -fileLoggerParameters1:warningsonly;logfile=msbuild.warn.log\n\nI think that should work?\n\n\n> # If tests hang forever, cirrus eventually times out. In that case log\n> # output etc is not uploaded, making the problem hard to debug. Of course\n> @@ -450,6 +451,11 @@ task:\n> cd src/tools/msvc\n> %T_C% perl vcregress.pl ecpgcheck\n>\n> + # These should be last, so all the important checks are always run\n> + always:\n> + compiler_warnings_script:\n> + - sh src\\tools\\ci\\windows-compiler-warnings msbuild.warn.log\n> +\n> on_failure:\n> <<: *on_failure\n> crashlog_artifacts:\n> diff --git a/src/tools/ci/windows-compiler-warnings b/src/tools/ci/windows-compiler-warnings\n> new file mode 100755\n> index 00000000000..d6f9a1fc569\n> --- /dev/null\n> +++ b/src/tools/ci/windows-compiler-warnings\n> @@ -0,0 +1,16 @@\n> +#! 
/bin/sh\n> +# Success if the given file doesn't exist or is empty, else fail\n> +# This is a separate file only to avoid dealing with windows shell quoting and escaping.\n> +set -e\n> +\n> +fn=$1\n> +\n> +if [ -s \"$fn\" ]\n> +then\n> +\t# Display the file's content, then exit indicating failure\n> +\tcat \"$fn\"\n> +\texit 1\n> +else\n> +\t# Success\n> +\texit 0\n> +fi\n> --\n> 2.17.1\n\nWouldn't that be doable as something like\nsh -c 'if test -s file; then cat file; exit 1; fi'\ninside .cirrus.yml?\n\n\n\n> From 1064a0794e85e06b3a0eca4ed3765f078795cb36 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sun, 3 Apr 2022 00:10:20 -0500\n> Subject: [PATCH 03/25] cirrus/ccache: disable compression and show stats\n>\n> ccache since 4.0 enables zstd compression by default.\n>\n> With default compression enabled (https://cirrus-ci.com/task/6692342840164352):\n> linux has 4.2; 99MB cirrus cache; cache_size_kibibyte\t109616\n> macos has 4.5.1: 47MB cirrus cache; cache_size_kibibyte\t52500\n> freebsd has 3.7.12: 42MB cirrus cache; cache_size_kibibyte\t134064\n> windows has 4.6.1; 180MB cirrus cache; cache_size_kibibyte\t51179\n> todo: compiler warnings\n>\n> With compression disabled (https://cirrus-ci.com/task/4614182514458624):\n> linux: 91MB cirrus cache; cache_size_kibibyte\t316136\n> macos: 41MB cirrus cache; cache_size_kibibyte\t118068\n> windows: 42MB cirrus cache; cache_size_kibibyte\t134064\n> freebsd is the same\n>\n> The stats should either be shown or logged (or maybe run with CCACHE_NOSTATS,\n> to avoid re-uploading cache tarball in a 100% cached build, due only to\n> updating ./**/stats).\n>\n> Note that ccache 4.4 supports CCACHE_STATSLOG, which seems ideal.\n\nI stared at this commit message for a while, trying to make sense of it, and\ncouldn't really. I assume you're saying that the cirrus compression is better\nwith ccache compression disabled, but it's extremely hard to parse out of it.\n\nThis does too much at once. 
Show stats, change cache sizes, disable\ncompression.\n\n\n\n> From 01e9abd386a4e6cc0125b97617fb42e695898cbf Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Tue, 26 Jul 2022 20:30:02 -0500\n> Subject: [PATCH 04/25] cirrus/ccache: add explicit cache keys..\n>\n> Since otherwise, building with ci-os-only will probably fail to use the normal\n> cache, since the cache key is computed using both the task name and its *index*\n> in the list of caches (internal/executor/cache.go:184).\n\nHm, perhaps worth confirming and/or reporting to cirrus rather?\n\n\n\n> From 8de5c977686270b0a4e666a924ebe820a245913a Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sun, 24 Jul 2022 23:09:12 -0500\n> Subject: [PATCH 05/25] silence make distprep and generated-headers\n>\n> this saves vertical screen space.\n>\n> https://www.postgresql.org/message-id/20220221164736.rq3ornzjdkmwk2wo@alap3.anarazel.de\n\nI don't feel this should go in as a part of CI changes. Or rather, I feel\nuncomfortable committing it when just discussed under this subject.\n\n\n\n> From eaf263ccaa8310c5d9834b97e93ad8434d63296e Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sun, 24 Jul 2022 22:44:53 -0500\n> Subject: [PATCH 06/25] pg_regress: run more quietly\n>\n> The number of lines of output should be closer to 1 per test, rather than 25 +\n> 1 per test.\n>\n> https://www.postgresql.org/message-id/flat/20220221173109.yl6dqqu3ud52ripd%40alap3.anarazel.de\n\nSee above. 
There's also a dedicated thread about revising the output.\n\n\n> From 6a6a97fc869fd1fd8b7ab5da5147f145581634f9 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Fri, 24 Jun 2022 00:09:12 -0500\n> Subject: [PATCH 08/25] cirrus/freebsd: run with more CPUs+RAM and do not\n> repartitiion\n>\n> There was some historic problem where tests under freebsd took 8+ minutes (and\n> before 4a288a37f took 15 minutes).\n>\n> This reduces test time from 10min to 3min.\n> 4 CPUs 4 tests https://cirrus-ci.com/task/4880240739614720\n> 4 CPUs 6 tests https://cirrus-ci.com/task/4664440120410112 https://cirrus-ci.com/task/4586784884523008\n> 4 CPUs 8 tests https://cirrus-ci.com/task/5001995491737600\n>\n> 6 CPUs https://cirrus-ci.com/task/6678321684545536\n> 8 CPUs https://cirrus-ci.com/task/6264854121021440\n>\n> See also:\n> https://www.postgresql.org/message-id/flat/20220310033347.hgxk4pyarzq4hxwp@alap3.anarazel.de#f36c0b17e33e31e7925e7e5812998686\n> 8 jobs 7min https://cirrus-ci.com/task/6186376667332608\n>\n> xi-os-only: freebsd\n\nTypo.\n\n\n> @@ -71,8 +69,6 @@ task:\n> fingerprint_key: ccache/freebsd\n> reupload_on_changes: true\n>\n> - # Workaround around performance issues due to 32KB block size\n> - repartition_script: src/tools/ci/gcp_freebsd_repartition.sh\n> create_user_script: |\n> pw useradd postgres\n> chown -R postgres:postgres .\n> --\n\nWhat's the story there - at some point that was important for performance\nbecause of the native block size triggering significant read-modify-write\ncycles with postres' writes. 
You didn't comment on it in the commit message.\n\n\n> From fd1c36a0bd8fa608ccdff5be3735dac5e3e48bf3 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Wed, 27 Jul 2022 16:54:47 -0500\n> Subject: [PATCH 09/25] cirrus/freebsd: run build+check in a make vpath\n\n> From 7052a32a21752b59632225684fc9426bb94e46e0 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sun, 13 Feb 2022 17:56:40 -0600\n> Subject: [PATCH 10/25] cirrus/windows: increase timeout to 25min\n\nNo explanation?\n\n\n> From 602983b2cf37fc43465c62330b2e15e9d6d2035d Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Fri, 26 Aug 2022 12:00:10 -0500\n> Subject: [PATCH 15/25] f!and chdir\n\nI don't see the point of pointing fixup commits to the list.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 28 Aug 2022 09:07:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Sun, Aug 28, 2022 at 09:07:52AM -0700, Andres Freund wrote:\n> > --- /dev/null\n> > +++ b/src/tools/ci/windows-compiler-warnings\n> \n> Wouldn't that be doable as something like\n> sh -c 'if test -s file; then cat file;exit 1; fi\"\n> inside .cirrus.yml?\n\nI had written it inline in a couple ways, like\n- sh -exc 'f=msbuild.warn.log; if [ -s \"$f\" ]; then cat \"$f\"; exit 1; else exit 0; fi'\n\nbut then separated it out as you suggested in\n20220227010908.vz2a7dmfzgwg742w@alap3.anarazel.de\n\nafter I complained about cmd.exe requiring escaping for && and ||\nThat makes writing any shell script a bit perilous and a separate script\nseems better.\n\n> > Subject: [PATCH 03/25] cirrus/ccache: disable compression and show stats\n> >\n> > ccache since 4.0 enables zstd compression by default.\n> >\n> > With default compression enabled (https://cirrus-ci.com/task/6692342840164352):\n> > linux has 4.2; 99MB cirrus cache; cache_size_kibibyte\t109616\n> > macos has 4.5.1: 47MB cirrus cache; cache_size_kibibyte\t52500\n> > freebsd has 3.7.12: 42MB cirrus cache; cache_size_kibibyte\t134064\n> > windows has 4.6.1; 180MB cirrus cache; cache_size_kibibyte\t51179\n> > todo: compiler warnings\n> >\n> > With compression disabled (https://cirrus-ci.com/task/4614182514458624):\n> > linux: 91MB cirrus cache; cache_size_kibibyte\t316136\n> > macos: 41MB cirrus cache; cache_size_kibibyte\t118068\n> > windows: 42MB cirrus cache; cache_size_kibibyte\t134064\n> > freebsd is the same\n> >\n> > The stats should either be shown or logged (or maybe run with CCACHE_NOSTATS,\n> > to avoid re-uploading cache tarball in a 100% cached build, due only to\n> > updating ./**/stats).\n> >\n> > Note that ccache 4.4 supports CCACHE_STATSLOG, which seems ideal.\n> \n> I stared at this commit message for a while, trying to make sense of it, and\n> couldn't really. 
I assume you're saying that the cirrus compression is better\n> with ccache compression disabled, but it's extremely hard to parse out of it.\n\nYes, because ccache uses zstd-1, and cirrus uses gzip, which it's going\nto use no matter what ccache does, and gzip's default -6 is better than\nccache's zstd-1.\n\n> This does too much at once. Show stats, change cache sizes, disable\n> compression.\n\nThe cache size change is related to the compression level change; ccache\nprunes based on the local size, which was compressed with zstd-1 and,\nwith this patch, not compressed (so ~2x larger). Also, it's more\ninteresting to control the size uploaded to cirrus (after compression\nith gzip-6).\n\n> > From 01e9abd386a4e6cc0125b97617fb42e695898cbf Mon Sep 17 00:00:00 2001\n> > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > Date: Tue, 26 Jul 2022 20:30:02 -0500\n> > Subject: [PATCH 04/25] cirrus/ccache: add explicit cache keys..\n> >\n> > Since otherwise, building with ci-os-only will probably fail to use the normal\n> > cache, since the cache key is computed using both the task name and its *index*\n> > in the list of caches (internal/executor/cache.go:184).\n> \n> Hm, perhaps worth confirming and/or reporting to cirrus rather?\n\nI know because of reading their source. 
Unfortunately, there's no\ncommit history indicating the intent or rationale.\nhttps://github.com/cirruslabs/cirrus-ci-agent/blob/master/internal/executor/cache.go#L183\n\n> > From 6a6a97fc869fd1fd8b7ab5da5147f145581634f9 Mon Sep 17 00:00:00 2001\n> > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > Date: Fri, 24 Jun 2022 00:09:12 -0500\n> > Subject: [PATCH 08/25] cirrus/freebsd: run with more CPUs+RAM and do not\n> > repartitiion\n> >\n> > There was some historic problem where tests under freebsd took 8+ minutes (and\n> > before 4a288a37f took 15 minutes).\n> >\n> > This reduces test time from 10min to 3min.\n> > 4 CPUs 4 tests https://cirrus-ci.com/task/4880240739614720\n> > 4 CPUs 6 tests https://cirrus-ci.com/task/4664440120410112 https://cirrus-ci.com/task/4586784884523008\n> > 4 CPUs 8 tests https://cirrus-ci.com/task/5001995491737600\n> >\n> > 6 CPUs https://cirrus-ci.com/task/6678321684545536\n> > 8 CPUs https://cirrus-ci.com/task/6264854121021440\n> >\n> > See also:\n> > https://www.postgresql.org/message-id/flat/20220310033347.hgxk4pyarzq4hxwp@alap3.anarazel.de#f36c0b17e33e31e7925e7e5812998686\n> > 8 jobs 7min https://cirrus-ci.com/task/6186376667332608\n> >\n> > xi-os-only: freebsd\n> \n> Typo.\n\nNo - it's deliberate so I can switch to and from \"everything\" to \"this\nonly\".\n\n> > @@ -71,8 +69,6 @@ task:\n> > fingerprint_key: ccache/freebsd\n> > reupload_on_changes: true\n> >\n> > - # Workaround around performance issues due to 32KB block size\n> > - repartition_script: src/tools/ci/gcp_freebsd_repartition.sh\n> > create_user_script: |\n> > pw useradd postgres\n> > chown -R postgres:postgres .\n> > --\n> \n> What's the story there - at some point that was important for performance\n> because of the native block size triggering significant read-modify-write\n> cycles with postres' writes. 
You didn't comment on it in the commit message.\n\nWell, I don't know the history, but it seems to be unneeded now.\n\nIs there a good description of the original problem ? Originally,\nfreebsd check-world took ~15min to run tests, and when we changed to use\n-Og it took 10min. Since then, seems to have improved on its own, and\ncurrently takes ~6min. This patch adds CPUs to make it run in ~4min,\nand takes the opportunity to drop the historic repartition stuff.\n\n> > From fd1c36a0bd8fa608ccdff5be3735dac5e3e48bf3 Mon Sep 17 00:00:00 2001\n> > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > Date: Wed, 27 Jul 2022 16:54:47 -0500\n> > Subject: [PATCH 09/25] cirrus/freebsd: run build+check in a make vpath\n> \n> > From 7052a32a21752b59632225684fc9426bb94e46e0 Mon Sep 17 00:00:00 2001\n> > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > Date: Sun, 13 Feb 2022 17:56:40 -0600\n> > Subject: [PATCH 10/25] cirrus/windows: increase timeout to 25min\n> \n> No explanation?\n\nBecause of the immediately following commit which makes it run all the\ntests.\n\n> > From 602983b2cf37fc43465c62330b2e15e9d6d2035d Mon Sep 17 00:00:00 2001\n> > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > Date: Fri, 26 Aug 2022 12:00:10 -0500\n> > Subject: [PATCH 15/25] f!and chdir\n> \n> I don't see the point of pointing fixup commits to the list.\n\nIt's a separate commit to make it easy to see the changes, separately,\nsince I imagine maybe the \"chdir\" part won't be desirable, or maybe the\nPATH part won't. But I'm not sure, so I'm here soliciting feedback.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 28 Aug 2022 12:10:29 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-28 12:10:29 -0500, Justin Pryzby wrote:\n> On Sun, Aug 28, 2022 at 09:07:52AM -0700, Andres Freund wrote:\n> > > --- /dev/null\n> > > +++ b/src/tools/ci/windows-compiler-warnings\n> >\n> > Wouldn't that be doable as something like\n> > sh -c 'if test -s file; then cat file;exit 1; fi\"\n> > inside .cirrus.yml?\n>\n> I had written it inline in a couple ways, like\n> - sh -exc 'f=msbuild.warn.log; if [ -s \"$f\" ]; then cat \"$f\"; exit 1; else exit 0; fi'\n>\n> but then separated it out as you suggested in\n> 20220227010908.vz2a7dmfzgwg742w@alap3.anarazel.de\n>\n> after I complained about cmd.exe requiring escaping for && and ||\n> That makes writing any shell script a bit perilous and a separate script\n> seems better.\n\nI remember that I suggested it - but note that the way I wrote above doesn't\nhave anything needing escaping. Anyway, what do you think of the multiline\nsplit I suggested?\n\n\n> > > Subject: [PATCH 03/25] cirrus/ccache: disable compression and show stats\n> > >\n> > > ccache since 4.0 enables zstd compression by default.\n> > >\n> > > With default compression enabled (https://cirrus-ci.com/task/6692342840164352):\n> > > linux has 4.2; 99MB cirrus cache; cache_size_kibibyte\t109616\n> > > macos has 4.5.1: 47MB cirrus cache; cache_size_kibibyte\t52500\n> > > freebsd has 3.7.12: 42MB cirrus cache; cache_size_kibibyte\t134064\n> > > windows has 4.6.1; 180MB cirrus cache; cache_size_kibibyte\t51179\n> > > todo: compiler warnings\n> > >\n> > > With compression disabled (https://cirrus-ci.com/task/4614182514458624):\n> > > linux: 91MB cirrus cache; cache_size_kibibyte\t316136\n> > > macos: 41MB cirrus cache; cache_size_kibibyte\t118068\n> > > windows: 42MB cirrus cache; cache_size_kibibyte\t134064\n> > > freebsd is the same\n> > >\n> > > The stats should either be shown or logged (or maybe run with CCACHE_NOSTATS,\n> > > to avoid re-uploading cache tarball in a 100% cached build, due only to\n> > > updating 
./**/stats).\n> > >\n> > > Note that ccache 4.4 supports CCACHE_STATSLOG, which seems ideal.\n> >\n> > I stared at this commit message for a while, trying to make sense of it, and\n> > couldn't really. I assume you're saying that the cirrus compression is better\n> > with ccache compression disabled, but it's extremely hard to parse out of it.\n>\n> Yes, because ccache uses zstd-1, and cirrus uses gzip, which it's going\n> to use no matter what ccache does, and gzip's default -6 is better than\n> ccache's zstd-1.\n>\n> > This does too much at once. Show stats, change cache sizes, disable\n> > compression.\n>\n> The cache size change is related to the compression level change; ccache\n> prunes based on the local size, which was compressed with zstd-1 and,\n> with this patch, not compressed (so ~2x larger). Also, it's more\n> interesting to control the size uploaded to cirrus (after compression\n> ith gzip-6).\n\nThat's what should have been in the commit message.\n\n\n> > > From 6a6a97fc869fd1fd8b7ab5da5147f145581634f9 Mon Sep 17 00:00:00 2001\n> > > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > > Date: Fri, 24 Jun 2022 00:09:12 -0500\n> > > Subject: [PATCH 08/25] cirrus/freebsd: run with more CPUs+RAM and do not\n> > > repartitiion\n> > >\n> > > There was some historic problem where tests under freebsd took 8+ minutes (and\n> > > before 4a288a37f took 15 minutes).\n> > >\n> > > This reduces test time from 10min to 3min.\n> > > 4 CPUs 4 tests https://cirrus-ci.com/task/4880240739614720\n> > > 4 CPUs 6 tests https://cirrus-ci.com/task/4664440120410112 https://cirrus-ci.com/task/4586784884523008\n> > > 4 CPUs 8 tests https://cirrus-ci.com/task/5001995491737600\n> > >\n> > > 6 CPUs https://cirrus-ci.com/task/6678321684545536\n> > > 8 CPUs https://cirrus-ci.com/task/6264854121021440\n> > >\n> > > See also:\n> > > https://www.postgresql.org/message-id/flat/20220310033347.hgxk4pyarzq4hxwp@alap3.anarazel.de#f36c0b17e33e31e7925e7e5812998686\n> > > 8 jobs 7min 
https://cirrus-ci.com/task/6186376667332608\n> > >\n> > > xi-os-only: freebsd\n> >\n> > Typo.\n>\n> No - it's deliberate so I can switch to and from \"everything\" to \"this\n> only\".\n\nI don't see the point in posting patches to be applied if they contain lots of\nsuch things that a potential committer would need to catch and include a lot\nof fixup patches.\n\n\n> > > @@ -71,8 +69,6 @@ task:\n> > > fingerprint_key: ccache/freebsd\n> > > reupload_on_changes: true\n> > >\n> > > - # Workaround around performance issues due to 32KB block size\n> > > - repartition_script: src/tools/ci/gcp_freebsd_repartition.sh\n> > > create_user_script: |\n> > > pw useradd postgres\n> > > chown -R postgres:postgres .\n> > > --\n> >\n> > What's the story there - at some point that was important for performance\n> > because of the native block size triggering significant read-modify-write\n> > cycles with postres' writes. You didn't comment on it in the commit message.\n>\n> Well, I don't know the history, but it seems to be unneeded now.\n\nIt's possible it was mainly needed for testing with aio + dio. But also\npossible that an upgrade improved the situation since.\n\n\n> > > From fd1c36a0bd8fa608ccdff5be3735dac5e3e48bf3 Mon Sep 17 00:00:00 2001\n> > > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > > Date: Wed, 27 Jul 2022 16:54:47 -0500\n> > > Subject: [PATCH 09/25] cirrus/freebsd: run build+check in a make vpath\n> >\n> > > From 7052a32a21752b59632225684fc9426bb94e46e0 Mon Sep 17 00:00:00 2001\n> > > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > > Date: Sun, 13 Feb 2022 17:56:40 -0600\n> > > Subject: [PATCH 10/25] cirrus/windows: increase timeout to 25min\n> >\n> > No explanation?\n>\n> Because of the immediately following commit which makes it run all the\n> tests.\n\nMention that in the commit message then. 
Especially when dealing with 25\ncommits I don't think you can expect others to infer such things.\n\n\n> > > From 602983b2cf37fc43465c62330b2e15e9d6d2035d Mon Sep 17 00:00:00 2001\n> > > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > > Date: Fri, 26 Aug 2022 12:00:10 -0500\n> > > Subject: [PATCH 15/25] f!and chdir\n> >\n> > I don't see the point of pointing fixup commits to the list.\n>\n> It's a separate commit to make it easy to see the changes, separately,\n> since I imagine maybe the \"chdir\" part won't be desirable, or maybe the\n> PATH part won't. But I'm not sure, so I'm here soliciting feedback.\n\nShrug, I doubt you'll get much if asked that way.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 28 Aug 2022 14:28:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Sun, Aug 28, 2022 at 02:28:02PM -0700, Andres Freund wrote:\n> On 2022-08-28 12:10:29 -0500, Justin Pryzby wrote:\n> > On Sun, Aug 28, 2022 at 09:07:52AM -0700, Andres Freund wrote:\n> > > > --- /dev/null\n> > > > +++ b/src/tools/ci/windows-compiler-warnings\n> > >\n> > > Wouldn't that be doable as something like\n> > > sh -c 'if test -s file; then cat file;exit 1; fi\"\n> > > inside .cirrus.yml?\n> >\n> > I had written it inline in a couple ways, like\n> > - sh -exc 'f=msbuild.warn.log; if [ -s \"$f\" ]; then cat \"$f\"; exit 1; else exit 0; fi'\n> >\n> > but then separated it out as you suggested in\n> > 20220227010908.vz2a7dmfzgwg742w@alap3.anarazel.de\n> >\n> > after I complained about cmd.exe requiring escaping for && and ||\n> > That makes writing any shell script a bit perilous and a separate script\n> > seems better.\n> \n> I remember that I suggested it - but note that the way I wrote above doesn't\n> have anything needing escaping. \n\nIt doesn't require it, but that still gives the impression that it's\nnormally possible to write one-liner shell scripts there, which is\nmisleading/wrong, and the reason why I took your suggestion to use a\nseparate script file.\n\n> Anyway, what do you think of the multiline split I suggested?\n\nDone, and sorted.\n\n> That's what should have been in the commit message.\n\nSure. I copied into the commit message the explanation that I had\nwritten in June's email.\n\n> > > > xi-os-only: freebsd\n> > >\n> > > Typo.\n> >\n> > No - it's deliberate so I can switch to and from \"everything\" to \"this\n> > only\".\n> \n> I don't see the point in posting patches to be applied if they contain lots of\n> such things that a potential committer would need to catch and include a lot\n> of of fixup patches.\n\nI get that you disliked that I disabled the effect of a CI tag by\nmunging \"c\" to \"x\". I've amended the message to avoid confusion. But,\nlots of what such things ? 
\"ci-os-only\" would be removed before being\npushed anyway.\n\n\"catching things\" is the first part of the review process, which (as I\nunderstand) is intended to help patch authors to improve their patches.\nIf you found lots of problems in my patches, I'd need to know about\nthem; but most of what I heard seem like quibbles about the presentation\nof the patches. It's true that some parts are dirty/unclear, and that\nseems reasonable for patches most of which haven't yet received review,\nfor which I asked whether to pursue the patch at all, and how best to\npresent them. This is (or could be) an opportunity to make\nimprovements.\n\nI renamed the two, related patches to Cluser.pm which said \"f!\", which\nare deliberately separate but looked like \"fixup\" patches. Are you\ninterested in any combination of those three, related changes to move\nlogic from Makefile to perl ? If not, we don't need to debate the\nmerits of spliting the patch.\n\nWhat about the three, related changes for ccache compression ?\n\nShould these be dropped in favour of meson ?\n - cirrus/vcregress: test modules/contrib with NO_INSTALLCHECK=1\n - vcregress: add alltaptests\n\nI added: \"WIP: skip building if only docs have changed\"\n\nchangesInclude() didn't seem to work right when I first tried to use it.\nEventually, I realized that it seems to use something like \"git log\",\nand not \"git diff\" (as I'd thought). It seems to work fine now that I\nknow what to expect.\n\ngit commit --amend --no-edit\ngit diff --stat @{1}..@{0} # this outputs nothing\ngit log --stat @{1}..@{0} # this lists the files changed by the tip commit\n\nIt'd be nice to be have cfbot inject this patch into each commitfest\npatch for awhile, to make sure everything works as expected. Same for\nthe code coverage patch and the doc artifacts patch. 
(These patches\ncurrently assume that the base commit is HEAD~1, which is correct for\ncfbot, and that would also provide code coverage and docs until such\ntime as cfbot is updated to apply and preserve the original series of\npatches).\n\n-- \nJustin",
"msg_date": "Sat, 10 Sep 2022 15:05:42 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Sun, Aug 28, 2022 at 02:28:02PM -0700, Andres Freund wrote:\n> > > > @@ -71,8 +69,6 @@ task:\n> > > > fingerprint_key: ccache/freebsd\n> > > > reupload_on_changes: true\n> > > >\n> > > > - # Workaround around performance issues due to 32KB block size\n> > > > - repartition_script: src/tools/ci/gcp_freebsd_repartition.sh\n> > > > create_user_script: |\n> > > > pw useradd postgres\n> > > > chown -R postgres:postgres .\n> > > > --\n> > >\n> > > What's the story there - at some point that was important for performance\n> > > because of the native block size triggering significant read-modify-write\n> > > cycles with postres' writes. You didn't comment on it in the commit message.\n> >\n> > Well, I don't know the history, but it seems to be unneeded now.\n> \n> It's possible it was mainly needed for testing with aio + dio. But also\n> possible that an upgrade improved the situation since.\n\nMaybe freebsd got faster as a result of the TAU CPUs?\nhttps://mobile.twitter.com/cirrus_labs/status/1534982111568052240\n\nI noticed because it's been *slower* the last ~24h since cirrusci\ndisabled TAU, as Thomas commit mentioned.\nhttps://twitter.com/cirrus_labs/status/1572657320093712384\n\nFor example this CF entry:\n\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/39/3736\nhttps://cirrus-ci.com/task/4670794365140992 5m36s - 4days ago\nhttps://cirrus-ci.com/task/4974926233862144 5m25s - 3days ago\nhttps://cirrus-ci.com/task/5561409034518528 5m29s - 2days ago\nhttps://cirrus-ci.com/task/6432442008469504 9m19s - yesterday\n\nCF_BOT's latest tasks seem to be fast again, since 1-2h ago.\nhttps://cirrus-ci.com/build/5178906041909248 9m1s\nhttps://cirrus-ci.com/build/4593160281128960 5m8s\nhttps://cirrus-ci.com/build/4539845644124160 5m22s\n\nThe logs for July show when freebsd started \"being fast\":\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/38/3708\nhttps://cirrus-ci.com/task/6316073015312384 10m25s Jul 
13\nhttps://cirrus-ci.com/task/5662878987452416 5m48s Jul 15\n\nMaybe that changed in July rather than June because the TAU CPUs were\nstill not available in every region/zone (?)\nhttps://cloud.google.com/compute/docs/regions-zones/\n\nI have no idea if the TAU CPUs eliminate/mitigate the original\nperformance issue you had with AIO. But they have such a large effect\non freebsd that it could now be the fastest task, if given more than 2\nCPUs.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 22 Sep 2022 16:07:02 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-22 16:07:02 -0500, Justin Pryzby wrote:\n> On Sun, Aug 28, 2022 at 02:28:02PM -0700, Andres Freund wrote:\n> > > > > @@ -71,8 +69,6 @@ task:\n> > > > > fingerprint_key: ccache/freebsd\n> > > > > reupload_on_changes: true\n> > > > >\n> > > > > - # Workaround around performance issues due to 32KB block size\n> > > > > - repartition_script: src/tools/ci/gcp_freebsd_repartition.sh\n> > > > > create_user_script: |\n> > > > > pw useradd postgres\n> > > > > chown -R postgres:postgres .\n> > > > > --\n> > > >\n> > > > What's the story there - at some point that was important for performance\n> > > > because of the native block size triggering significant read-modify-write\n> > > > cycles with postres' writes. You didn't comment on it in the commit message.\n> > >\n> > > Well, I don't know the history, but it seems to be unneeded now.\n> > \n> > It's possible it was mainly needed for testing with aio + dio. But also\n> > possible that an upgrade improved the situation since.\n> \n> Maybe freebsd got faster as a result of the TAU CPUs?\n> https://mobile.twitter.com/cirrus_labs/status/1534982111568052240\n> \n> I noticed because it's been *slower* the last ~24h since cirrusci\n> disabled TAU, as Thomas commit mentioned.\n> https://twitter.com/cirrus_labs/status/1572657320093712384\n\nYea, I noticed that as well. It's entirely possible that something in the\n\"hardware\" stack improved sufficiently to avoid problems.\n\n\n> I have no idea if the TAU CPUs eliminate/mitigate the original\n> performance issue you had with AIO. But they have such a large effect\n> on freebsd that it could now be the fastest task, if given more than 2\n> CPUs.\n\nI'm planning to rebase early next week and try that out.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Sep 2022 14:53:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Fri, Sep 23, 2022 at 9:07 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Sun, Aug 28, 2022 at 02:28:02PM -0700, Andres Freund wrote:\n> > > > > - # Workaround around performance issues due to 32KB block size\n> > > > > - repartition_script: src/tools/ci/gcp_freebsd_repartition.sh\n> > > > > create_user_script: |\n> > > > > pw useradd postgres\n> > > > > chown -R postgres:postgres .\n> > > >\n> > > > What's the story there - at some point that was important for performance\n> > > > because of the native block size triggering significant read-modify-write\n> > > > cycles with postres' writes. You didn't comment on it in the commit message.\n> > >\n> > > Well, I don't know the history, but it seems to be unneeded now.\n> >\n> > It's possible it was mainly needed for testing with aio + dio. But also\n> > possible that an upgrade improved the situation since.\n\nYeah, it is very important for direct I/O (patches soon...), because\nevery 8KB random write becomes a read-32KB/wait/write-32KB without it.\nIt's slightly less important for buffered I/O, because the kernel\ncaches hide that, but it still triggers I/O bandwidth amplification,\nand we definitely saw positive effects earlier on the CI system back\non the previous generation. FWIW I am planning to see about getting\nthe FreeBSD installer to create the root file system the way we want,\ninstead of this ugliness.\n\n> Maybe freebsd got faster as a result of the TAU CPUs?\n> [data]\n\nVery interesting. Would be good to find the exact instance types +\nstorage types to see if there has been a massive IOPS boost, perhaps\nvia local SSD. The newer times are getting closer to a local\ndeveloper machine.\n\n\n",
"msg_date": "Fri, 23 Sep 2022 15:32:48 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-10 15:05:42 -0500, Justin Pryzby wrote:\n> From 4ed5eb427de4508a4c3422e60891b45c8512814a Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sun, 3 Apr 2022 00:10:20 -0500\n> Subject: [PATCH 03/23] cirrus/ccache: disable compression and show stats\n> \n> Since v4.0, ccache enables zstd compression by default, saving roughly\n> 2x-3x. But, cirrus caches are compressed as tar.gz, so we could disable\n> ccache compression, allowing cirrus to gzip the uncompressed data\n> (better than ccache's default of zstd-1).\n\nI wonder whether we could instead change CCACHE_COMPRESSLEVEL (maybe 3, zstd's\ndefault IIRC). It'd be good if we could increase cache utilization.\n\n\n> From 0bd5f51b8c143ed87a867987309d66b8554b1fd6 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Thu, 14 Apr 2022 06:27:07 -0500\n> Subject: [PATCH 05/23] cirrus: enable various runtime checks on macos and\n> freebsd\n> \n> windows is slower than freebsd and mac, so it's okay to enable options which\n> will slow them down some. 
Also, the cirrusci mac instances always have lot of\n> cores available.\n\n> See:\n> https://www.postgresql.org/message-id/20211217193159.pwrelhiyx7kevgsn@alap3.anarazel.de\n> https://www.postgresql.org/message-id/20211213211223.vkgg3wwiss2tragj%40alap3.anarazel.de\n> https://www.postgresql.org/message-id/CAH2-WzmevBhKNEtqX3N-Tkb0gVBHH62C0KfeTxXzqYES_PiFiA%40mail.gmail.com\n> https://www.postgresql.org/message-id/20220325000933.vgazz7pjk2ytj65d@alap3.anarazel.de\n> \n> ci-os-only: freebsd, macos\n> ---\n> .cirrus.yml | 8 +++++---\n> 1 file changed, 5 insertions(+), 3 deletions(-)\n> \n> diff --git a/.cirrus.yml b/.cirrus.yml\n> index 183e8746ce6..4ad20892eeb 100644\n> --- a/.cirrus.yml\n> +++ b/.cirrus.yml\n> @@ -113,7 +113,9 @@ task:\n> \\\n> CC=\"ccache cc\" \\\n> CXX=\"ccache c++\" \\\n> - CFLAGS=\"-Og -ggdb\"\n> + CPPFLAGS=\"-DRELCACHE_FORCE_RELEASE -DCOPY_PARSE_PLAN_TREES -DWRITE_READ_PARSE_PLAN_TREES -DRAW_EXPRESSION_COVERAGE_TEST\" \\\n> + CXXFLAGS=\"-Og -ggdb -march=native -mtune=native\" \\\n> + CFLAGS=\"-Og -ggdb -march=native -mtune=native\"\n\nWhat's reason for -march=native -mtune=native here?\n\n\n> EOF\n> build_script: |\n> su postgres -c \"ccache --zero-stats\"\n> @@ -336,8 +338,8 @@ task:\n> CC=\"ccache cc\" \\\n> CXX=\"ccache c++\" \\\n> CLANG=\"ccache ${brewpath}/llvm/bin/ccache\" \\\n> - CFLAGS=\"-Og -ggdb\" \\\n> - CXXFLAGS=\"-Og -ggdb\" \\\n> + CFLAGS=\"-Og -ggdb -DRANDOMIZE_ALLOCATED_MEMORY\" \\\n> + CXXFLAGS=\"-Og -ggdb -DRANDOMIZE_ALLOCATED_MEMORY\" \\\n> \\\n> LLVM_CONFIG=${brewpath}/llvm/bin/llvm-config \\\n> PYTHON=python3\n\nI'd also use CPPFLAGS here, given you'd used it above...\n\nI'm planning to commit an updated version of this change soon, without the\n-march=native -mtune=native bit, unless somebody protests...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 1 Oct 2022 17:45:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Sat, Oct 01, 2022 at 05:45:01PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-09-10 15:05:42 -0500, Justin Pryzby wrote:\n> > From 4ed5eb427de4508a4c3422e60891b45c8512814a Mon Sep 17 00:00:00 2001\n> > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > Date: Sun, 3 Apr 2022 00:10:20 -0500\n> > Subject: [PATCH 03/23] cirrus/ccache: disable compression and show stats\n> > \n> > Since v4.0, ccache enables zstd compression by default, saving roughly\n> > 2x-3x. But, cirrus caches are compressed as tar.gz, so we could disable\n> > ccache compression, allowing cirrus to gzip the uncompressed data\n> > (better than ccache's default of zstd-1).\n> \n> I wonder whether we could instead change CCACHE_COMPRESSLEVEL (maybe 3, zstd's\n> default IIRC). It'd be good if we could increase cache utilization.\n\nI considered that (and I think that's what I wrote initially).\n\nI figured that if cirrus is going to use gzip-6 (tar.gz) in any case, we\nmight as well disable compression. Then, all the tasks are also doing\nthe same thing (half the tasks have ccache before 4.0).\n\n> > From 0bd5f51b8c143ed87a867987309d66b8554b1fd6 Mon Sep 17 00:00:00 2001\n> > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > Date: Thu, 14 Apr 2022 06:27:07 -0500\n> > Subject: [PATCH 05/23] cirrus: enable various runtime checks on macos and\n> > freebsd\n> > \n> > windows is slower than freebsd and mac, so it's okay to enable options which\n> > will slow them down some. 
Also, the cirrusci mac instances always have lot of\n> > cores available.\n> \n> > See:\n> > https://www.postgresql.org/message-id/20211217193159.pwrelhiyx7kevgsn@alap3.anarazel.de\n> > https://www.postgresql.org/message-id/20211213211223.vkgg3wwiss2tragj%40alap3.anarazel.de\n> > https://www.postgresql.org/message-id/CAH2-WzmevBhKNEtqX3N-Tkb0gVBHH62C0KfeTxXzqYES_PiFiA%40mail.gmail.com\n> > https://www.postgresql.org/message-id/20220325000933.vgazz7pjk2ytj65d@alap3.anarazel.de\n> > \n> > ci-os-only: freebsd, macos\n> > ---\n> > .cirrus.yml | 8 +++++---\n> > 1 file changed, 5 insertions(+), 3 deletions(-)\n> > \n> > diff --git a/.cirrus.yml b/.cirrus.yml\n> > index 183e8746ce6..4ad20892eeb 100644\n> > --- a/.cirrus.yml\n> > +++ b/.cirrus.yml\n> > @@ -113,7 +113,9 @@ task:\n> > \\\n> > CC=\"ccache cc\" \\\n> > CXX=\"ccache c++\" \\\n> > - CFLAGS=\"-Og -ggdb\"\n> > + CPPFLAGS=\"-DRELCACHE_FORCE_RELEASE -DCOPY_PARSE_PLAN_TREES -DWRITE_READ_PARSE_PLAN_TREES -DRAW_EXPRESSION_COVERAGE_TEST\" \\\n> > + CXXFLAGS=\"-Og -ggdb -march=native -mtune=native\" \\\n> > + CFLAGS=\"-Og -ggdb -march=native -mtune=native\"\n> \n> What's reason for -march=native -mtune=native here?\n\nNo particular reason, and my initial patch didn't have it.\nI suppose I added it to test its effect and never got rid of it.\n\n> > EOF\n> > build_script: |\n> > su postgres -c \"ccache --zero-stats\"\n> > @@ -336,8 +338,8 @@ task:\n> > CC=\"ccache cc\" \\\n> > CXX=\"ccache c++\" \\\n> > CLANG=\"ccache ${brewpath}/llvm/bin/ccache\" \\\n> > - CFLAGS=\"-Og -ggdb\" \\\n> > - CXXFLAGS=\"-Og -ggdb\" \\\n> > + CFLAGS=\"-Og -ggdb -DRANDOMIZE_ALLOCATED_MEMORY\" \\\n> > + CXXFLAGS=\"-Og -ggdb -DRANDOMIZE_ALLOCATED_MEMORY\" \\\n> > \\\n> > LLVM_CONFIG=${brewpath}/llvm/bin/llvm-config \\\n> > PYTHON=python3\n> \n> I'd also use CPPFLAGS here, given you'd used it above...\n> \n> I'm planning to commit an updated version of this change soon, without the\n> -march=native -mtune=native bit, unless somebody 
protests...\n\nOne other thing is that your -m32 changes caused the linux/meson task to\ntake an additional 3+ minutes (total ~8). That's no issue, except that\nthe Warnings task depends on the linux/mason task, and itself can take\nup to 15 minutes.\n\nSo those two potentially take as long as the windows task.\nI suggested that CompileWarnings could instead \"Depend on: Freebsd\",\nwhich currently takes 6-7min (and could take 4-5min if given more CPUs).\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 1 Oct 2022 19:58:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-01 19:58:01 -0500, Justin Pryzby wrote:\n> One other thing is that your -m32 changes caused the linux/meson task to\n> take an additional 3+ minutes (total ~8). That's no issue, except that\n> the Warnings task depends on the linux/mason task, and itself can take\n> up to 15 minutes.\n\n> So those two potentially take as long as the windows task.\n> I suggested that CompileWarnings could instead \"Depend on: Freebsd\",\n> which currently takes 6-7min (and could take 4-5min if given more CPUs).\n\nI am wondering if we should instead introduce a new \"quickcheck\" task that\njust compiles and runs maybe one test and have *all* other tests depend on\nthat. Wasting a precious available windows instance to just fail to build or\nimmediately fail during tests doesn't really make sense.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 1 Oct 2022 18:36:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-01 18:36:41 -0700, Andres Freund wrote:\n> I am wondering if we should instead introduce a new \"quickcheck\" task that\n> just compiles and runs maybe one test and have *all* other tests depend on\n> that. Wasting a precious available windows instance to just fail to build or\n> immediately fail during tests doesn't really make sense.\n\nAttached is an implementation of that idea.\n\nI fairly randomly chose two quick tests to execute as part of the sanity\ncheck, cube/regress pg_ctl/001_start_stop. I wanted to have coverage for\ninitdb, a pg_regress style test, a tap test, some other client binary.\n\nWith a primed cache this takes ~32s, not too bad imo. 12s of that is cloning\nthe repo.\n\n\nWhat do you think?\n\n\nWe could bake a bare repo into the images to make the clone step in faster,\nbut that'd be for later anyway.\n\nset -e\nrm -rf /tmp/pg-clone-better\nmkdir /tmp/pg-clone-better\ncd /tmp/pg-clone-better\ngit init --bare\ngit remote add origin https://github.com/postgres/postgres.git --no-tags -t 'REL_*' -t master\ngit fetch -v\ngit repack -ad -f\ndu -sh\n\nresults in a 227MB repo.\n\ngit clone https://github.com/anarazel/postgres.git -v --depth 1000 -b ci-sanitycheck --reference /tmp/pg-clone-better /tmp/pg-clone-better-clone\n\nclones an example branch in ~1.35s.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 2 Oct 2022 13:52:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Sun, Oct 02, 2022 at 01:52:01PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-10-01 18:36:41 -0700, Andres Freund wrote:\n> > I am wondering if we should instead introduce a new \"quickcheck\" task that\n> > just compiles and runs maybe one test and have *all* other tests depend on\n> > that. Wasting a precious available windows instance to just fail to build or\n> > immediately fail during tests doesn't really make sense.\n\n> With a primed cache this takes ~32s, not too bad imo. 12s of that is\n> cloning the repo.\n\nMaybe - that would avoid waiting 4 minutes for a windows instance to\nstart in the (hopefully atypical) case of a patch that fails in 1-2\nminutes under linux/freebsd.\n\nIf the patch were completely broken, the windows task would take ~4min\nto start, plus up to ~4min before failing to compile or failing an early\ntest. 6-8 minutes isn't nothing, but doesn't seem worth the added\ncomplexity.\n\nAlso, this would mean that in the common case, the slowest task would be\ndelayed until after the SanityCheck task instance starts, compiles, and\nruns some test :( Your best case is 32sec, but I doubt that's going to\nbe typical.\n\nI was thinking about the idea of cfbot handling \"tasks\" separately,\nsimilar to what it used to do with travis/appveyor. The logic for\n\"windows tasks are only run if linux passes tests\" could live there.\nThat could also be useful if there's ever the possibility of running an\nadditional OS on another CI provider, or if another provider can run\nwindows tasks faster, or if we need to reduce our load/dependency on\ncirrus. 
I realized that goes backwards in some ways to the direction\nwe've gone with cirrus, and I'm not sure how exactly it would do that (I\nsuppose it might add ci-os-only tags to its commit message).\n\n> + # no options enabled, should be small\n> + CCACHE_MAXSIZE: \"150M\"\n\nActually, tasks can share caches if the \"cache key\" is set.\n\nIf there was a separate \"Sanity\" task, I think it should use whatever\nflags linux (or freebsd) use to avoid doing two compilations (with lots\nof cache misses for patches which modify *.h files, which would then\nhappen twice, in serial).\n\n> + # use always: to continue after failures. Task that did not run count as a\n> + # success, so we need to recheck SanityChecks's condition here ...\n\n> - # task that did not run, count as a success, so we need to recheck Linux'\n> - # condition here ...\n\nAnother/better justification/description is that \"cirrus warns if the\ndepending task has different only_if conditions than the dependant task\".\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 2 Oct 2022 16:35:06 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-02 16:35:06 -0500, Justin Pryzby wrote:\n> Maybe - that would avoid waiting 4 minutes for a windows instance to\n> start in the (hopefully atypical) case of a patch that fails in 1-2\n> minutes under linux/freebsd.\n> \n> If the patch were completely broken, the windows task would take ~4min\n> to start, plus up to ~4min before failing to compile or failing an early\n> test. 6-8 minutes isn't nothing, but doesn't seem worth the added\n> complexity.\n\nAvoiding 6-8mins of wasted windows time would, I think, allow us to crank\ncfbot's concurrency up a notch or two.\n\n\n> Also, this would mean that in the common case, the slowest task would be\n> delayed until after the SanityCheck task instance starts, compiles, and\n> runs some test :( Your best case is 32sec, but I doubt that's going to\n> be typical.\n\nEven the worst case isn't that bad, the uncached minimal build is 67s.\n\n\n> I was thinking about the idea of cfbot handling \"tasks\" separately,\n> similar to what it used to do with travis/appveyor. The logic for\n> \"windows tasks are only run if linux passes tests\" could live there.\n\nI don't really see the advantage of doing that over just increasing\nconcurrency by a bit.\n\n\n> > + # no options enabled, should be small\n> > + CCACHE_MAXSIZE: \"150M\"\n> \n> Actually, tasks can share caches if the \"cache key\" is set.\n> If there was a separate \"Sanity\" task, I think it should use whatever\n> flags linux (or freebsd) use to avoid doing two compilations (with lots\n> of cache misses for patches which modify *.h files, which would then\n> happen twice, in serial).\n\nI think the price of using exactly the same flags is higher than the gain. And\nit'll rarely work if we use the container task for the sanity check, as the\ntimestamps of the compiler, system headers etc will be different.\n\n\n> > + # use always: to continue after failures. 
Task that did not run count as a\n> > + # success, so we need to recheck SanityChecks's condition here ...\n> \n> > - # task that did not run, count as a success, so we need to recheck Linux'\n> > - # condition here ...\n> \n> Another/better justification/description is that \"cirrus warns if the\n> depending task has different only_if conditions than the dependant task\".\n\nThat doesn't really seem easier to understand to me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Oct 2022 14:51:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-02 16:35:06 -0500, Justin Pryzby wrote:\n> On Sun, Oct 02, 2022 at 01:52:01PM -0700, Andres Freund wrote:\n> > On 2022-10-01 18:36:41 -0700, Andres Freund wrote:\n> > > I am wondering if we should instead introduce a new \"quickcheck\" task that\n> > > just compiles and runs maybe one test and have *all* other tests depend on\n> > > that. Wasting a precious available windows instance to just fail to build or\n> > > immediately fail during tests doesn't really make sense.\n> \n> > With a primed cache this takes ~32s, not too bad imo. 12s of that is\n> > cloning the repo.\n> \n> Maybe - that would avoid waiting 4 minutes for a windows instance to\n> start in the (hopefully atypical) case of a patch that fails in 1-2\n> minutes under linux/freebsd.\n> \n> If the patch were completely broken, the windows task would take ~4min\n> to start, plus up to ~4min before failing to compile or failing an early\n> test. 6-8 minutes isn't nothing, but doesn't seem worth the added\n> complexity.\n\nBtw, the motivation to work on this just now was that I'd like to enable more\nsanitizers (undefined,alignment for linux-meson, address for\nlinux-autoconf). Yes, we could make the dependency on freebsd instead, but I'd\nlike to try to enable the clang-only memory sanitizer there (if it works on\nfreebsd)...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Oct 2022 14:54:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Sun, Oct 02, 2022 at 02:54:21PM -0700, Andres Freund wrote:\n> the clang-only memory sanitizer there (if it works on freebsd)...\n\nHave you looked at this much ? I think it'll require a bunch of\nexclusions, right ?\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 2 Oct 2022 21:15:41 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Sat, Sep 10, 2022 at 03:05:42PM -0500, Justin Pryzby wrote:\n> On Sun, Aug 28, 2022 at 02:28:02PM -0700, Andres Freund wrote:\n> > On 2022-08-28 12:10:29 -0500, Justin Pryzby wrote:\n> > > On Sun, Aug 28, 2022 at 09:07:52AM -0700, Andres Freund wrote:\n> > > > > --- /dev/null\n> > > > > +++ b/src/tools/ci/windows-compiler-warnings\n> > > >\n> > > > Wouldn't that be doable as something like\n> > > > sh -c 'if test -s file; then cat file;exit 1; fi\"\n> > > > inside .cirrus.yml?\n> > >\n> > > I had written it inline in a couple ways, like\n> > > - sh -exc 'f=msbuild.warn.log; if [ -s \"$f\" ]; then cat \"$f\"; exit 1; else exit 0; fi'\n> > >\n> > > but then separated it out as you suggested in\n> > > 20220227010908.vz2a7dmfzgwg742w@alap3.anarazel.de\n> > >\n> > > after I complained about cmd.exe requiring escaping for && and ||\n> > > That makes writing any shell script a bit perilous and a separate script\n> > > seems better.\n> > \n> > I remember that I suggested it - but note that the way I wrote above doesn't\n> > have anything needing escaping. \n> \n> It doesn't require it, but that still gives the impression that it's\n> normally possible to write one-liner shell scripts there, which is\n> misleading/wrong, and the reason why I took your suggestion to use a\n> separate script file.\n> \n> > Anyway, what do you think of the multiline split I suggested?\n> \n> Done, and sorted.\n\nRewrote this and rebased some of the other stuff on top of the meson\ncommit, for which I also include some new patches.",
"msg_date": "Fri, 4 Nov 2022 18:54:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-04 18:54:12 -0500, Justin Pryzby wrote:\n> Subject: [PATCH 1/8] meson: PROVE is not required\n> Subject: [PATCH 3/8] meson: rename 'main' tasks to 'regress' and 'isolation'\n\nPushed, thanks for the patches.\n\n\n> Subject: [PATCH 2/8] meson: other fixes for cygwin\n>\n> XXX: what about HAVE_BUGGY_STRTOF ?\n\nWhat are you wondering about here? Shouldn't that continue to be set via\nsrc/include/port/cygwin.h?\n\n\n> diff --git a/src/test/regress/meson.build b/src/test/regress/meson.build\n> index 3dcfc11278f..6ec3c77af53 100644\n> --- a/src/test/regress/meson.build\n> +++ b/src/test/regress/meson.build\n> @@ -10,7 +10,7 @@ regress_sources = pg_regress_c + files(\n> # patterns like \".*-.*-mingw.*\". We probably can do better, but for now just\n> # replace 'gcc' with 'mingw' on windows.\n> host_tuple_cc = cc.get_id()\n> -if host_system == 'windows' and host_tuple_cc == 'gcc'\n> +if host_system in ['windows', 'cygwin'] and host_tuple_cc == 'gcc'\n> host_tuple_cc = 'mingw'\n> endif\n\nThis doesn't quite seem right - shouldn't it say cywin? 
Not that it makes a\ndifference right now, given the contents of resultmap:\nfloat4:out:.*-.*-cygwin.*=float4-misrounded-input.out\nfloat4:out:.*-.*-mingw.*=float4-misrounded-input.out\n\n\n> From 0acbbd2fdd97bbafc5c4552e26f92d52c597eea9 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Wed, 25 May 2022 21:53:22 -0500\n> Subject: [PATCH 4/8] cirrus/windows: add compiler_warnings_script\n>\n> I'm not sure how to write this test in windows shell; it's also not easy to\n> write it in posix sh, since windows shell is somehow interpretting && and ||...\n>\n> https://www.postgresql.org/message-id/20220212212310.f645c6vw3njkgxka%40alap3.anarazel.de\n>\n> See also:\n> 8a1ce5e54f6d144e4f8e19af7c767b026ee0c956\n> https://cirrus-ci.com/task/6241060062494720\n> https://cirrus-ci.com/task/6496366607204352\n>\n> ci-os-only: windows\n> ---\n> .cirrus.yml | 10 +++++++++-\n> src/tools/ci/windows-compiler-warnings | 24 ++++++++++++++++++++++++\n> 2 files changed, 33 insertions(+), 1 deletion(-)\n> create mode 100755 src/tools/ci/windows-compiler-warnings\n>\n> diff --git a/.cirrus.yml b/.cirrus.yml\n> index 9f2282471a9..99ac09dc679 100644\n> --- a/.cirrus.yml\n> +++ b/.cirrus.yml\n> @@ -451,12 +451,20 @@ task:\n>\n> build_script: |\n> vcvarsall x64\n> - ninja -C build\n> + ninja -C build |tee build/meson-logs/build.txt\n> + REM Since pipes lose exit status of the preceding command, rerun compilation,\n> + REM without the pipe exiting now if it fails, rather than trying to run checks\n> + ninja -C build > nul\n\nThis seems mighty grotty :(. but I guess it's quick enough not worry about,\nand I can't come up with a better plan.\n\nIt doesn't seem quite right to redirect into meson-logs/ to me, my\ninterpretation is that that's \"meson's namespace\". 
Why not just store it in\nbuild/?\n\n\n\n> From e85fe83fd8a4b4c79a96d2bf66cd6a5e1bdfcd1e Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sat, 26 Feb 2022 19:34:35 -0600\n> Subject: [PATCH 5/8] cirrus: build docs as a separate task..\n>\n> This will run the doc build if any docs have changed, even if Linux\n> fails, to allow catch doc build failures.\n>\n> This'll automatically show up as a separate \"column\" on cfbot.\n>\n> Also, in the future, this will hopefully upload each patch's changed HTML docs\n> as an artifact, for easy review.\n>\n> Note that this is currently building docs with both autoconf and meson.\n>\n> ci-os-only: html\n> ---\n> .cirrus.yml | 62 +++++++++++++++++++++++++++++++++++++----------------\n> 1 file changed, 44 insertions(+), 18 deletions(-)\n>\n> diff --git a/.cirrus.yml b/.cirrus.yml\n> index 99ac09dc679..37fd79e5b77 100644\n> --- a/.cirrus.yml\n> +++ b/.cirrus.yml\n> @@ -472,6 +472,9 @@ task:\n> type: text/plain\n>\n>\n> +###\n> +# Test that code can be built with gcc/clang without warnings\n> +###\n> task:\n> name: CompilerWarnings\n>\n> @@ -515,10 +518,6 @@ task:\n> #apt-get update\n> #DEBIAN_FRONTEND=noninteractive apt-get -y install ...\n>\n> - ###\n> - # Test that code can be built with gcc/clang without warnings\n> - ###\n> -\n\nWhy remove this?\n\n\n> setup_script: echo \"COPT=-Werror\" > src/Makefile.custom\n>\n> # Trace probes have a history of getting accidentally broken. 
Use the\n> @@ -580,20 +579,6 @@ task:\n> make -s -j${BUILD_JOBS} clean\n> time make -s -j${BUILD_JOBS} world-bin\n>\n> - ###\n> - # Verify docs can be built\n> - ###\n> - # XXX: Only do this if there have been changes in doc/ since last build\n> - always:\n> - docs_build_script: |\n> - time ./configure \\\n> - --cache gcc.cache \\\n> - CC=\"ccache gcc\" \\\n> - CXX=\"ccache g++\" \\\n> - CLANG=\"ccache clang\"\n> - make -s -j${BUILD_JOBS} clean\n> - time make -s -j${BUILD_JOBS} -C doc\n> -\n> ###\n> # Verify headerscheck / cpluspluscheck succeed\n> #\n> @@ -617,3 +602,44 @@ task:\n>\n> always:\n> upload_caches: ccache\n> +\n> +\n> +###\n> +# Verify docs can be built\n> +# changesInclude() will skip this task if none of the commits since\n> +# CIRRUS_LAST_GREEN_CHANGE touched any relevant files. The comparison appears\n> +# to be like \"git log a..b -- ./file\", not \"git diff a..b -- ./file\"\n> +###\n> +\n> +task:\n> + name: Documentation\n> +\n> + env:\n> + CPUS: 1\n> + BUILD_JOBS: 1\n> +\n> + only_if: $CIRRUS_CHANGE_MESSAGE !=~ '.*\\nci-os-only:.*' || $CIRRUS_CHANGE_MESSAGE =~ '.*\\nci-os-only:[^\\n]*(docs|html).*'\n> + skip: \"!changesInclude('.cirrus.yml', 'doc/**')\"\n\nPerhaps we should introduce something other than ci-os-only if we want that to\ninclude things like \"docs and html\". At least this should update\nsrc/tools/ci/README.\n\n\n> + sysinfo_script: |\n> + id\n> + uname -a\n> + cat /proc/cmdline\n> + ulimit -a -H && ulimit -a -S\n> + export\n\nI think we can skip this here.\n\n\n> + # Exercise HTML and other docs:\n> + ninja_docs_build_script: |\n> + mkdir build.ninja\n> + cd build.ninja\n\nPerhaps build-ninja instead? 
build.ninja is the filename for ninja's build\ninstructions, so it seems a bit confusing.\n\n\n> From adebe93a4409990e929f2775d45c6613134a4243 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Tue, 26 Jul 2022 20:30:02 -0500\n> Subject: [PATCH 6/8] cirrus/ccache: add explicit cache keys..\n>\n> Since otherwise, building with ci-os-only will probably fail to use the\n> normal cache, since the cache key is computed using both the task name\n> and its *index* in the list of caches (internal/executor/cache.go:184).\n\nSeems like this would potentially better addressed by reporting a bug to the\ncirrus folks?\n\n\n> ccache_cache:\n> folder: ${CCACHE_DIR}\n> + fingerprint_key: ccache/linux\n> + reupload_on_changes: true\n\nThere's enough copies of this to make it worth deduplicating. If we use\nsomething like\n fingerprint_script: echo ccache/$CIRRUS_BRANCH/$CIRRUS_OS\nwe can use a yaml ref?\n\n\nI think you experimented with creating a 'base' ccache dir (e.g. on the master\nbranch) and then using branch specific secondar caches? How did that turn out?\nI think cfbot's caches constantly get removed due to overrunning the global\nspace.\n\n\n> From f16739bc5d2087847129baf663aa25fa9edb8449 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sun, 3 Apr 2022 00:10:20 -0500\n> Subject: [PATCH 7/8] cirrus/ccache: disable compression and show stats\n\n> Since v4.0, ccache enables zstd compression by default, saving roughly\n> 2x-3x. 
But, cirrus caches are compressed as tar.gz, so we could disable\n> ccache compression, allowing cirrus to gzip the uncompressed data\n> (better than ccache's default of zstd-1).\n>\n> With default compression enabled (https://cirrus-ci.com/task/6692342840164352):\n> linux/debian/bullseye has 4.2; 99MB cirrus cache; cache_size_kibibyte\t109616\n> macos has 4.5.1: 47MB cirrus cache; cache_size_kibibyte\t52500\n> freebsd has 3.7.12: 42MB cirrus cache; cache_size_kibibyte\t134064\n> XXX windows has 4.7.2; 180MB cirrus cache; cache_size_kibibyte\t51179\n> todo: compiler warnings\n>\n> With compression disabled (https://cirrus-ci.com/task/4614182514458624):\n> linux: 91MB cirrus cache; cache_size_kibibyte\t316136\n> macos: 41MB cirrus cache; cache_size_kibibyte\t118068\n> windows: 42MB cirrus cache; cache_size_kibibyte\t134064\n> freebsd is the same\n\nI'm still somewhat doubtful this is a good idea. The mingw cache is huge, for\nexample, and all that additional IO and memory usage is bound to show up.\n\n\n> The stats should be shown and/or logged.\n> ccache --show-stats shows the *cumulative* stats (including prior\n> compilations)\n> ccache --zero-stats clears out not only the global stats, but the\n> per-file cache stats (from which the global stats are derived) - which\n> obviously makes the cache work poorly.\n>\n> Note that ccache 4.4 supports CCACHE_STATSLOG, which seems ideal.\n> The log should be written *outside* the ccache folder - it shouldn't be\n> preserved across cirrusci task invocations.\n\nI assume we don't have a new enough ccache everywhere yet?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Nov 2022 18:59:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Fri, Nov 04, 2022 at 06:59:46PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-11-04 18:54:12 -0500, Justin Pryzby wrote:\n> > Subject: [PATCH 1/8] meson: PROVE is not required\n> > Subject: [PATCH 3/8] meson: rename 'main' tasks to 'regress' and 'isolation'\n> \n> Pushed, thanks for the patches.\n\nThanks.\n\n> > diff --git a/.cirrus.yml b/.cirrus.yml\n> > index 9f2282471a9..99ac09dc679 100644\n> > --- a/.cirrus.yml\n> > +++ b/.cirrus.yml\n> > @@ -451,12 +451,20 @@ task:\n> >\n> > build_script: |\n> > vcvarsall x64\n> > - ninja -C build\n> > + ninja -C build |tee build/meson-logs/build.txt\n> > + REM Since pipes lose exit status of the preceding command, rerun compilation,\n> > + REM without the pipe exiting now if it fails, rather than trying to run checks\n> > + ninja -C build > nul\n> \n> This seems mighty grotty :(. but I guess it's quick enough not worry about,\n> and I can't come up with a better plan.\n> \n> It doesn't seem quite right to redirect into meson-logs/ to me, my\n> interpretation is that that's \"meson's namespace\". Why not just store it in\n> build/?\n\nI put it there so it'd be included with the build artifacts.\nMaybe it's worth adding a separate line to artifacts for stuff like\nthis, and ccache log ?\n\n> > From e85fe83fd8a4b4c79a96d2bf66cd6a5e1bdfcd1e Mon Sep 17 00:00:00 2001\n> > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > Date: Sat, 26 Feb 2022 19:34:35 -0600\n> > Subject: [PATCH 5/8] cirrus: build docs as a separate task..\n\n> > + # Exercise HTML and other docs:\n> > + ninja_docs_build_script: |\n> > + mkdir build.ninja\n> > + cd build.ninja\n> \n> Perhaps build-ninja instead? build.ninja is the filename for ninja's build\n> instructions, so it seems a bit confusing.\n\nSure.\n\nDo you think building docs with both autoconf and meson is what's\ndesirable here ?\n\nI'm not sure if this ought to be combined with/before/after your \"move\ncompilerwarnings task to meson\" patch? 
(Regarding that patch: I\nmentioned that it shouldn't use ccache -C, and it should use\nmeson_log_artifacts.)\n\n> > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > Date: Tue, 26 Jul 2022 20:30:02 -0500\n> > Subject: [PATCH 6/8] cirrus/ccache: add explicit cache keys..\n> >\n> > Since otherwise, building with ci-os-only will probably fail to use the\n> > normal cache, since the cache key is computed using both the task name\n> > and its *index* in the list of caches (internal/executor/cache.go:184).\n> \n> Seems like this would potentially be better addressed by reporting a bug to the\n> cirrus folks?\n\nYou said that before, but I don't think so - since they wrote code to do\nthat, it's odd to file a bug that says that the behavior is wrong. I am\ncurious why, but it seems deliberate.\n\nhttps://www.postgresql.org/message-id/20220828171029.GO2342%40telsasoft.com\n\n> There's enough copies of this to make it worth deduplicating. If we\n> use something like fingerprint_script: echo\n> ccache/$CIRRUS_BRANCH/$CIRRUS_OS we can use a yaml ref? \nI'll look into it...\n\n> I think you experimented with creating a 'base' ccache dir (e.g. on the master\n> branch) and then using branch specific secondary caches?\n\nI have to revisit that sometime.\n\nThat's a new feature in ccache 4.4, which is currently only in macos.\nThis is another thing that it'd be easier to test by having cfbot\nclobber the cirrus.yml rather than committing to postgres repo.\n(Technically, it should probably only use the in-testing cirrus.yml\nif the patch it's testing doesn't itself modify .cirrus.yml)\n\n> How did that turn out? I think cfbot's caches constantly get removed\n> due to overrunning the global space.\n\nFor cfbot, I don't know if there's much hope that any patch-specific\nbuild artifacts will be cached from the previous run, typically ~24h\nprior.\n\nOne idea I have, for the \"Warnings\" task (and maybe linux too), is to\ndefer pruning until after all the compilations. 
To avoid LRU pruning\nduring early tasks causing bad hit ratios of later tasks.\n\nAnother thing that probably happens is that task1 starts compiling\npatch1, and then another instance of task1 starts compiling patch2. A\nbit later, the first instance will upload its ccache result for patch1,\nwhich will be summarily overwritten by the second instance's compilation\nresult, which doesn't include anything from the first instance.\n\nAlso, whenever ccache hits its MAXSIZE threshold, it prunes the cache\ndown to 80% of the configured size, which probably wipes away everything\nfrom all but the most recent ~20 builds.\n\nI also thought about having separate caches for each compilation in the\nwarnings task - but that requires too much repeated yaml just for that..\n\n> > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > Date: Sun, 3 Apr 2022 00:10:20 -0500\n> > Subject: [PATCH 7/8] cirrus/ccache: disable compression and show stats\n> >\n> > linux/debian/bullseye has 4.2; 99MB cirrus cache; cache_size_kibibyte\t109616\n> > macos has 4.5.1: 47MB cirrus cache; cache_size_kibibyte\t52500\n> > freebsd has 3.7.12: 42MB cirrus cache; cache_size_kibibyte\t134064\n> > XXX windows has 4.7.2; 180MB cirrus cache; cache_size_kibibyte\t51179\n> \n> I'm still somewhat doubtful this is a good idea. The mingw cache is huge, for\n> example, and all that additional IO and memory usage is bound to show up.\n\nI think you're referring to the proposed mingw task which runs under\nwindows, and not the existing cross-compilation ?\n\nAnd you're right - I remember this now (I think it's due to PCH?)\n\nIn my local copy I'd \"unset CCACHE_NOCOMPRESS\". But I view that as an\noddity of windows headers, rather than an argument against disabling\ncompression elsewhere. 
BTW, freebsd ccache is too old to use\ncompression.\n\nWhat about using CCACHE_HARDLINK (which implies no compression) ?\n\n> > Note that ccache 4.4 supports CCACHE_STATSLOG, which seems ideal.\n> > The log should be written *outside* the ccache folder - it shouldn't be\n> > preserved across cirrusci task invocations.\n> \n> I assume we don't have a new enough ccache everywhere yet?\n\nNo - see above.\n\nI've added patches to update macos.\n\n-- \nJustin",
"msg_date": "Sun, 13 Nov 2022 17:53:04 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-02 14:54:21 -0700, Andres Freund wrote:\n> On 2022-10-02 16:35:06 -0500, Justin Pryzby wrote:\n> > On Sun, Oct 02, 2022 at 01:52:01PM -0700, Andres Freund wrote:\n> > > On 2022-10-01 18:36:41 -0700, Andres Freund wrote:\n> > > > I am wondering if we should instead introduce a new \"quickcheck\" task that\n> > > > just compiles and runs maybe one test and have *all* other tests depend on\n> > > > that. Wasting a precious available windows instance to just fail to build or\n> > > > immediately fail during tests doesn't really make sense.\n> > \n> > > With a primed cache this takes ~32s, not too bad imo. 12s of that is\n> > > cloning the repo.\n> > \n> > Maybe - that would avoid waiting 4 minutes for a windows instance to\n> > start in the (hopefully atypical) case of a patch that fails in 1-2\n> > minutes under linux/freebsd.\n> > \n> > If the patch were completely broken, the windows task would take ~4min\n> > to start, plus up to ~4min before failing to compile or failing an early\n> > test. 6-8 minutes isn't nothing, but doesn't seem worth the added\n> > complexity.\n> \n> Btw, the motivation to work on this just now was that I'd like to enable more\n> sanitizers (undefined,alignment for linux-meson, address for\n> linux-autoconf). Yes, we could make the dependency on freebsd instead, but I'd\n> like to try to enable the clang-only memory sanitizer there (if it works on\n> freebsd)...\n\nI've used this a bunch on personal branches, and I think it's the way to\ngo. It doesn't take long, saves a lot of cycles when one pushes something\nbroken. Starts to runs the CompilerWarnings task after a minimal amount of\nsanity checking, instead of having to wait for a task running all tests,\nwithout the waste of running it immediately and failing all the different\nconfigurations, which takes forever.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 16 Nov 2022 19:48:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 07:48:14PM -0800, Andres Freund wrote:\n> I've used this a bunch on personal branches, and I think it's the way to\n> go. It doesn't take long, saves a lot of cycles when one pushes something\n> broken. Starts to runs the CompilerWarnings task after a minimal amount of\n> sanity checking, instead of having to wait for a task running all tests,\n> without the waste of running it immediately and failing all the different\n> configurations, which takes forever.\n\nWell, I don't hate it.\n\nBut I don't think you should call \"ccache -z\":\n\nOn Tue, Oct 18, 2022 at 12:09:30PM -0500, Justin Pryzby wrote:\n> I realized that ccache -z clears out not only the global stats, but the\n> per-file cache stats (from which the global stats are derived) - which\n> obviously makes the cache work poorly.\n\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 16 Nov 2022 21:58:39 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-16 21:58:39 -0600, Justin Pryzby wrote:\n> On Wed, Nov 16, 2022 at 07:48:14PM -0800, Andres Freund wrote:\n> > I've used this a bunch on personal branches, and I think it's the way to\n> > go. It doesn't take long, saves a lot of cycles when one pushes something\n> > broken. Starts to runs the CompilerWarnings task after a minimal amount of\n> > sanity checking, instead of having to wait for a task running all tests,\n> > without the waste of running it immediately and failing all the different\n> > configurations, which takes forever.\n> \n> Well, I don't hate it.\n> \n> But I don't think you should call \"ccache -z\":\n\nAgreed - that was really just for \"development\" of the task.\n\nI also don't like my \"cores_script\". Not quite sure yet how to do that\nmore cleanly.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 16 Nov 2022 20:08:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 08:08:32PM -0800, Andres Freund wrote:\n> I also don't like my \"cores_script\". Not quite sure yet how to do that\n> more cleanly.\n\nI don't know which is cleaner:\n\nls /core* && mv /tmp/core* /tmp/cores\n\nfind / -maxdepth 1 -type f -name 'core*' -print0 |\n\txargs -r0 mv -vt /tmp/cores\n\nfor a in /core*; do [ ! -e \"$a\" ] || mv \"$a\" /tmp/cores; done\n\n\n",
"msg_date": "Wed, 16 Nov 2022 22:16:31 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Sun, Oct 02, 2022 at 01:52:01PM -0700, Andres Freund wrote:\n> \n> +# To avoid unnecessarily spinning up a lot of VMs / containers for entirely\n> +# broken commits, have a very minimal test that all others depend on.\n> +task:\n> + name: SanityCheck\n\nMaybe this should be named 00-SanityCheck, so it sorts first in cfbot ?\n\nAlso, if CompilerWarnings doesn't depend on Linux, that means those two\ntasks will normally start and run simultaneously, which means a single\nbranch will use all 8 of the linux CPUs available from cirrus. Is that\nintentional?\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 19 Nov 2022 14:22:20 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-19 14:22:20 -0600, Justin Pryzby wrote:\n> On Sun, Oct 02, 2022 at 01:52:01PM -0700, Andres Freund wrote:\n> > \n> > +# To avoid unnecessarily spinning up a lot of VMs / containers for entirely\n> > +# broken commits, have a very minimal test that all others depend on.\n> > +task:\n> > + name: SanityCheck\n> \n> Maybe this should be named 00-SanityCheck, so it sorts first in cfbot ?\n\nHm. Perhaps cfbot could just use the sorting from cirrus? I don't really like\nthe idea of making the names more confusing with numbered prefixes,\nparticularly when only used for some but not other tasks.\n\n\n> Also, if CompilerWarnings doesn't depend on Linux, that means those two\n> tasks will normally start and run simultaneously, which means a single\n> branch will use all 8 of the linux CPUs available from cirrus. Is that\n> intentional?\n\nI don't think that'd really make anything worse. But perhaps we could just\nreduce the CPU count for linux autoconf by 1? I suspect that even with asan\nenabled it'd still be roughly even with the rest.\n\nI'll try to repost a version of the ubsan/asan patch together with the\nsanitycheck patch and see how that looks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 19 Nov 2022 13:18:54 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-19 13:18:54 -0800, Andres Freund wrote:\n> > Also, if CompilerWarnings doesn't depend on Linux, that means those two\n> > tasks will normally start and run simultaneously, which means a single\n> > branch will use all 8 of the linux CPUs available from cirrus. Is that\n> > intentional?\n> \n> I don't think that'd really make anything worse. But perhaps we could just\n> reduce the CPU count for linux autoconf by 1? I suspect that even with asan\n> enabled it'd still be roughly even with the rest.\n\nHm, that doesn't suffice, because we allow 4 cores for the warnings task. The\nlimit for cirrus is 16 linux CPUs though, not 8. We'll temporarily go up to 12\ndue to CompilerWarnings after the change. But I think that's fine, because\nwe'd previously use the same amount of CPUs, just some of it\nsequentially.\n\n From the POV of linux CPUs we'd still be able to start a second task\nconcurrently without delaying the sanitycheck task, and then at max delaying\none of the other linux tasks (meson, autoconf, compiler warnings).\n\nThe limit is, and continues to be, the number of concurrent macos\nVMs. Might be better after moving to m1 macs.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 19 Nov 2022 13:35:17 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Sat, Nov 19, 2022 at 01:18:54PM -0800, Andres Freund wrote:\n> > Also, if CompilerWarnings doesn't depend on Linux, that means those two\n> > tasks will normally start and run simultaneously, which means a single\n> > branch will use all 8 of the linux CPUs available from cirrus. Is that\n> > intentional?\n> \n> I don't think that'd really make anything worse. But perhaps we could just\n> reduce the CPU count for linux autoconf by 1?\n\nI didn't understand the goal of \"reducing by one\" ?\n\nUp to now, most tasks are using half of the available CPUs, which seemed\ndeliberate. Like maybe to allow running two branches simultaneously\n(that doesn't necessarily work well with ccache, though).\n\nOn Sat, Nov 19, 2022 at 01:35:17PM -0800, Andres Freund wrote:\n> The limit for cirrus is 16 linux CPUs though, not 8.\n\nOh. Then I don't see any issue.\n\n> We'll temporarily go up to 12 due to CompilerWarnings after the change.\n\nWhat do you mean \"temporarily\" ? I think you're implying that the\nWarnings task is fast but (at least right now) it is not.\n\nNote that the most recent \"code coverage\" task is built into the\nlinux-autoconf task, and slows it down some more. That's because it's\nthe only remaining in-tree build, and I aimed to only show coverage for\nchanged files (I know you questioned whether that was okay, but to me it\nstill seems to be valuable, even though it obviously doesn't show\nchanges outside of those files). And I couldn't see how to map from\n\"object filename to source file\" with meson, although I guess it's\npossible with introspection. I haven't re-sent that patch because it's\nwaiting on cfbot changes.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 19 Nov 2022 15:45:06 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-19 15:45:06 -0600, Justin Pryzby wrote:\n> What do you mean \"temporarily\" ? I think you're implying that the\n> Warnings task is fast but (at least right now) it is not.\n\nIn the sense that we don't need all CPUs until the whole commit has finished\ntesting (none of the tasks are the slowest task, even after ubsan/asan). As\nsoon as one of the linux tests has finished for one commit, another task in a\nconcurrently tested commit can start. Whereas that's not the case for macos,\ndue to the VM limit.\n\n(cfbot has double the limits, because it has a 10$/mo account)\n\n\n> Note that the most recent \"code coverage\" task is built into the\n> linux-autoconf task, and slows it down some more. That's because it's\n> the only remaining in-tree build, and I aimed to only show coverage for\n> changed files (I know you questioned whether that was okay, but to me it\n> still seems to be valuable, even though it obviously doesn't show\n> changes outside of those files).\n\nI think we shouldn't add further tests using autoconf, that'll just mean we'll\nhave to do the work changing that test at some later point.\n\n\n> And I couldn't see how to map from\n> \"object filename to source file\" with meson, although I guess it's\n> possible with instrospection. I haven't re-sent that patch because it's\n> waiting on cfbot changes.\n\nThe object files should have that in their metadata, fwiw.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 19 Nov 2022 14:14:17 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-19 13:18:54 -0800, Andres Freund wrote:\n> I'll try to repost a version of the ubsan/asan patch together with the\n> sanitycheck patch and see how that looks.\n\nI just pushed the prerequisite patch making UBSAN_OPTIONS work. Attached\nis 1) addition of SanityCheck 2) use of asan and ubsan+alignment san to\nthe linux tasks.\n\nI went with a variation of the find command for SanityCheck's\ncores_script, but used -exec to invoke mv, as that results in a nicer\nlooking commandline imo.\n\nPreviously the SanityCheck patch did trigger warnings about only_if not\nmatching, despite SanityCheck not having an only_if, but I reported that\nas a bug to cirrus-ci, and they fixed that.\n\nPretty happy with this.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 21 Nov 2022 14:09:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-13 17:53:04 -0600, Justin Pryzby wrote:\n> On Fri, Nov 04, 2022 at 06:59:46PM -0700, Andres Freund wrote:\n> > > diff --git a/.cirrus.yml b/.cirrus.yml\n> > > index 9f2282471a9..99ac09dc679 100644\n> > > --- a/.cirrus.yml\n> > > +++ b/.cirrus.yml\n> > > @@ -451,12 +451,20 @@ task:\n> > >\n> > > build_script: |\n> > > vcvarsall x64\n> > > - ninja -C build\n> > > + ninja -C build |tee build/meson-logs/build.txt\n> > > + REM Since pipes lose exit status of the preceding command, rerun compilation,\n> > > + REM without the pipe exiting now if it fails, rather than trying to run checks\n> > > + ninja -C build > nul\n> > \n> > This seems mighty grotty :(. but I guess it's quick enough not worry about,\n> > and I can't come up with a better plan.\n> > \n> > It doesn't seem quite right to redirect into meson-logs/ to me, my\n> > interpretation is that that's \"meson's namespace\". Why not just store it in\n> > build/?\n> \n> I put it there so it'd be included with the build artifacts.\n\nWouldn't just naming it build-warnings.log suffice? I don't think we\nwant to actually upload build.txt - it already is captured.\n\n\n> > > From e85fe83fd8a4b4c79a96d2bf66cd6a5e1bdfcd1e Mon Sep 17 00:00:00 2001\n> > > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > > Date: Sat, 26 Feb 2022 19:34:35 -0600\n> > > Subject: [PATCH 5/8] cirrus: build docs as a separate task..\n> \n> > > + # Exercise HTML and other docs:\n> > > + ninja_docs_build_script: |\n> > > + mkdir build.ninja\n> > > + cd build.ninja\n> > \n> > Perhaps build-ninja instead? build.ninja is the filename for ninja's build\n> > instructions, so it seems a bit confusing.\n> \n> Sure.\n> \n> Do you think building docs with both autoconf and meson is what's\n> desirable here ?\n\nNot sure.\n\n\n> I'm not sure if this ought to be combined with/before/after your \"move\n> compilerwarnings task to meson\" patch? 
(Regarding that patch: I\n> mentioned that it shouldn't use ccache -C, and it should use\n> meson_log_artifacts.)\n\nTBH, I'm not quite sure a separate docs task does really still make\nsense after the SanityCheck task. It's worth building the docs even if\nsome flappy test fails, but I don't think we should try to build the\ndocs if the code doesn't even compile, in all likelihood a lot more is\nwrong in that case.\n\n\n> > > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > > Date: Tue, 26 Jul 2022 20:30:02 -0500\n> > > Subject: [PATCH 6/8] cirrus/ccache: add explicit cache keys..\n> > >\n> > > Since otherwise, building with ci-os-only will probably fail to use the\n> > > normal cache, since the cache key is computed using both the task name\n> > > and its *index* in the list of caches (internal/executor/cache.go:184).\n> > \n> > Seems like this would potentially be better addressed by reporting a bug to the\n> > cirrus folks?\n> \n> You said that before, but I don't think so - since they wrote code to do\n> that, it's odd to file a bug that says that the behavior is wrong. I am\n> curious why, but it seems deliberate.\n> \n> https://www.postgresql.org/message-id/20220828171029.GO2342%40telsasoft.com\n\nI suspect this is just about dealing with unnamed tasks and could be\nhandled by just mixing in CI_NODE_INDEX if the task name isn't set.\n\n\nI pushed a version of 0007-cirrus-clean-up-windows-task.patch. I didn't\nrename the task as I would like to add a msbuild version of the task at\nsome point (it's pretty easy to break msbuild but not ninja\nunfortunately). In addition I also removed NO_TEMP_INSTALL.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 21 Nov 2022 14:45:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Mon, Nov 21, 2022 at 02:45:42PM -0800, Andres Freund wrote:\n> > > > + ninja -C build |tee build/meson-logs/build.txt\n> > > > + REM Since pipes lose exit status of the preceding command, rerun compilation,\n> > > > + REM without the pipe exiting now if it fails, rather than trying to run checks\n> > > > + ninja -C build > nul\n> > > \n> > > This seems mighty grotty :(. but I guess it's quick enough not worry about,\n> > > and I can't come up with a better plan.\n> > > \n> > > It doesn't seem quite right to redirect into meson-logs/ to me, my\n> > > interpretation is that that's \"meson's namespace\". Why not just store it in\n> > > build/?\n> > \n> > I put it there so it'd be included with the build artifacts.\n> \n> Wouldn't just naming it build-warnings.log suffice? I don't think we\n> want to actually upload build.txt - it already is captured.\n\nOriginally, I wanted the input and the output to be available as files\nand not just in cirrus' web GUI, but maybe that's not important anymore.\nI rewrote it again.\n\n> > I'm not sure if this ought to be combined with/before/after your \"move\n> > compilerwarnings task to meson\" patch? (Regarding that patch: I\n> > mentioned that it shouldn't use ccache -C, and it should use\n> > meson_log_artifacts.)\n> \n> TBH, I'm not quite sure a separate docs task does really still make\n> sense after the SanityCheck task. It's worth building the docs even if\n> some flappy test fails, but I don't think we should try to build the\n> docs if the code doesn't even compile, in all likelihood a lot more is\n> wrong in that case.\n\nIt'd be okay either way. 
I had split it out to 1) isolate the changes\nin the \"upload changed docs as artifacts\" patch; and, 2) so the docs\nartifacts are visible in a cfbot link called \"Documentation\"; and, 3) so\nthe docs task runs without a dependency on \"Linux\", since (as you said)\ndocs/errors are worth showing/reviewing/reporting/addressing separately\nfrom test errors (perhaps similar to compiler warnings...).\n\nI shuffled my branch around and sending now the current \"docs\" patches,\nbut I suppose this is waiting on the \"convert CompilerWarnings task to\nmeson\" patch.\n\n-- \nJustin",
"msg_date": "Tue, 22 Nov 2022 16:57:44 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Wed, Nov 23, 2022 at 11:57 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> [PATCH 02/10] cirrus/macos: switch to \"macos_instance\" / M1..\n\nDuelling patches.\n\nBilal's patch[1] uses the matrix feature to run the tests on both\nIntel and ARM, which made sense when he proposed it, but according to\nCirrus CI warnings, the Intel instances are about to go away. So I\nthink we just need your smaller change to switch the instance type.\n\nAs for the pathname change, there is another place that knows where\nHomebrew lives, in ldap/001_auth. Fixed in the attached. That test\njust SKIPs if it can't find the binary, making it harder to notice.\nStandardising our approach here might make sense for a later patch.\nAs for the kerberos test, Bilal's patch may well be a better idea (it\nadds MacPorts for one thing), and so I'll suggest rebasing that, but\nhere I just wanted the minimum mechanical fix to avoid breaking on the\n1st of Jan.\n\nI plan to push this soon if there are no objections. Then discussion\nof Bilal's patch can continue.\n\n> [PATCH 03/10] cirrus/macos: update to macos ventura\n\nI don't know any reason not to push this one too, but it's not time critical.\n\n[1] https://www.postgresql.org/message-id/flat/CAN55FZ2R%2BXufuVgJ8ew_yDBk48PgXEBvyKNvnNdTTVyczbQj0g%40mail.gmail.com",
"msg_date": "Fri, 30 Dec 2022 16:59:03 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Fri, 30 Dec 2022 at 09:29, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Nov 23, 2022 at 11:57 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > [PATCH 02/10] cirrus/macos: switch to \"macos_instance\" / M1..\n>\n> Duelling patches.\n>\n> Bilal's patch[1] uses the matrix feature to run the tests on both\n> Intel and ARM, which made sense when he proposed it, but according to\n> Cirrus CI warnings, the Intel instances are about to go away. So I\n> think we just need your smaller change to switch the instance type.\n>\n> As for the pathname change, there is another place that knows where\n> Homebrew lives, in ldap/001_auth. Fixed in the attached. That test\n> just SKIPs if it can't find the binary, making it harder to notice.\n> Standardising our approach here might make sense for a later patch.\n> As for the kerberos test, Bilal's patch may well be a better idea (it\n> adds MacPorts for one thing), and so I'll suggest rebasing that, but\n> here I just wanted the minimum mechanical fix to avoid breaking on the\n> 1st of Jan.\n>\n> I plan to push this soon if there are no objections. 
Then discussion\n> of Bilal's patch can continue.\n>\n> > [PATCH 03/10] cirrus/macos: update to macos ventura\n>\n> I don't know any reason not to push this one too, but it's not time critical.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\ne351f85418313e97c203c73181757a007dfda6d0 ===\n=== applying patch ./0001-ci-Change-macOS-builds-from-Intel-to-ARM.patch\npatching file .cirrus.yml\nHunk #1 FAILED at 407.\nHunk #2 FAILED at 428.\nHunk #3 FAILED at 475.\n3 out of 3 hunks FAILED -- saving rejects to file .cirrus.yml.rej\npatching file src/test/kerberos/t/001_auth.pl\nHunk #1 FAILED at 32.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/test/kerberos/t/001_auth.pl.rej\npatching file src/test/ldap/t/001_auth.pl\nHunk #1 FAILED at 21.\n1 out of 1 hunk FAILED -- saving rejects to file src/test/ldap/t/001_auth.pl.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3709.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 3 Jan 2023 17:46:59 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Mon, Nov 21, 2022 at 02:45:42PM -0800, Andres Freund wrote:\n> On 2022-11-13 17:53:04 -0600, Justin Pryzby wrote:\n> > > > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > > > Date: Tue, 26 Jul 2022 20:30:02 -0500\n> > > > Subject: [PATCH 6/8] cirrus/ccache: add explicit cache keys..\n> > > >\n> > > > Since otherwise, building with ci-os-only will probably fail to use the\n> > > > normal cache, since the cache key is computed using both the task name\n> > > > and its *index* in the list of caches (internal/executor/cache.go:184).\n> > > \n> > > Seems like this would potentially be better addressed by reporting a bug to the\n> > > cirrus folks?\n> > \n> > You said that before, but I don't think so - since they wrote code to do\n> > that, it's odd to file a bug that says that the behavior is wrong. I am\n> > curious why, but it seems deliberate.\n> > \n> > https://www.postgresql.org/message-id/20220828171029.GO2342%40telsasoft.com\n> \n> I suspect this is just about dealing with unnamed tasks and could be\n> handled by just mixing in CI_NODE_INDEX if the task name isn't set.\n\nI suppose it was their way of dealing with this:\n\n|Cache artifacts are shared between tasks, so two caches with the same\n|name on e.g. Linux containers and macOS VMs will share the same set of\n|files. This may introduce binary incompatibility between caches. To\n|avoid that, add echo $CIRRUS_OS into fingerprint_script or use\n|$CIRRUS_OS in fingerprint_key, which will distinguish caches based on\n|OS.\n\nTo make caches work automatically, without having to know to name them\ndifferently.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 4 Jan 2023 17:19:24 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Tue, Nov 22, 2022 at 04:57:44PM -0600, Justin Pryzby wrote:\n> I shuffled my branch around and sending now the current \"docs\" patches,\n> but I suppose this is waiting on the \"convert CompilerWarnings task to\n> meson\" patch.\n\nIn case it's not, here's a version to do that now.",
"msg_date": "Wed, 4 Jan 2023 17:44:24 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "The autoconf system runs all tap tests in t/*.pl, but meson requires\nenumerating them in ./meson.build.\n\nThis checks for and finds no missing tests in the current tree:\n\n$ for pl in `find src contrib -path '*/t/*.pl'`; do base=${pl##*/}; dir=${pl%/*}; meson=${dir%/*}/meson.build; grep \"$base\" \"$meson\" >/dev/null || echo \"$base is missing from $meson\"; done\n\nHowever, this finds two real problems and one false-positive with\nmissing regress/isolation tests:\n\n$ for makefile in `find src contrib -name Makefile`; do for testname in `sed -r '/^(REGRESS|ISOLATION) =/!d; s///; :l; /\\\\\\\\$/{s///; N; b l}; s/\\n//g' \"$makefile\"`; do meson=${makefile%/Makefile}/meson.build; grep -Fw \"$testname\" \"$meson\" >/dev/null || echo \"$testname is missing from $meson\"; done; done\nguc_privs is missing from src/test/modules/unsafe_tests/meson.build\noldextversions is missing from contrib/pg_stat_statements/meson.build\n$(CF_PGP_TESTS) is missing from contrib/pgcrypto/meson.build\n\nI also tried but failed to write something to warn if \"meson test\" was\nrun with a list of tests but without tmp_install. Help wanted.\n\nI propose to put something like this into \"SanityCheck\".\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 17 Jan 2023 11:35:09 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-17 11:35:09 -0600, Justin Pryzby wrote:\n> The autoconf system runs all tap tests in t/*.pl, but meson requires\n> enumerating them in ./meson.build.\n\nYes. It was a mistake that we ever used t/*.pl for make. For one, it means\nthat make can't control concurrency meaningfully, due to the varying number of\ntests run with one prove instance. It's also the only thing that tied us to\nprove, which is one hell of a buggy mess.\n\n\n> This checks for and finds no missing tests in the current tree:\n>\n> $ for pl in `find src contrib -path '*/t/*.pl'`; do base=${pl##*/}; dir=${pl%/*}; meson=${dir%/*}/meson.build; grep \"$base\" \"$meson\" >/dev/null || echo \"$base is missing from $meson\"; done\n\nLikely because I do something similar locally.\n\n# prep\nm test --list > /tmp/tests.txt\n\n# Check if all tap tests are known to meson\nfor f in $(git ls-files|grep -E '(t|test)/.*.pl$'|sort);do t=$(echo $f|sed -E -e 's/^.*\\/([^/]*)\\/(t|test)\\/(.*)\\.pl$/\\1\\/\\3/');grep -q -L $t /tmp/tests.txt |\\\n| echo $f;done\n\n\n# Check if all regression / isolation tests are known to meson\n#\n# Expected to find plpgsql due to extra 'src' directory level, src/test/mb\n# because it's not run anywhere and sepgsql, because that's not tested yet\nfor d in $(find ~/src/postgresql -type d \\( -name sql -or -name specs \\) );do t=$(basename $(dirname $d)); grep -q -L $t /tmp/tests.txt || echo $d; done\n\n\n\n> However, this finds two real problems and one false-positive with\n> missing regress/isolation tests:\n\nWhich the above does *not* test for. 
Good catch.\n\nI'll push the fix for those as soon as tests have passed on my personal repo.\n\n\n> $ for makefile in `find src contrib -name Makefile`; do for testname in `sed -r '/^(REGRESS|ISOLATION) =/!d; s///; :l; /\\\\\\\\$/{s///; N; b l}; s/\\n//g' \"$makefile\"`; do meson=${makefile%/Makefile}/meson.build; grep -Fw \"$testname\" \"$meson\" >/dev/null || echo \"$testname is missing from $meson\"; done; done\n> guc_privs is missing from src/test/modules/unsafe_tests/meson.build\n\nYep. That got added during the development of the meson port, so it's not too surprising.\n\n\n> oldextversions is missing from contrib/pg_stat_statements/meson.build\n\nThis one, however, is odd. Not sure how that happened.\n\n\n> $(CF_PGP_TESTS) is missing from contrib/pgcrypto/meson.build\n\nAssume that's the false positive?\n\n\n> I also tried but failed to write something to warn if \"meson test\" was\n> run with a list of tests but without tmp_install. Help wanted.\n\nThat doesn't even catch the worst case - when there's tmp_install, but it's\ntoo old.\n\nThe proper solution would be to make the creation of tmp_install a dependency\nof the relevant tests. Unfortunately meson still auto-propagates those to\ndependencies of the 'all' target (for historical reasons), and creating the\ntemp install is too slow on some machines to make that tolerable. I think\nthere's an open PR to change that. Once that's in a released meson version\nthat's in somewhat widespread use, we should change that.\n\nThe other path forward is to allow running the tests without\ntmp_install. There's not that much we'd need to allow running directly from\nthe source tree - the biggest thing is a way to load extensions from a list of\npaths. This option is especially attractive because it'd allow running\nindividual tests without a fully built source tree. 
No need to build other\nbinaries when you just want to test psql, or more extremely, pg_test_timing.\n\n\n> I propose to put something like this into \"SanityCheck\".\n\nPerhaps we instead could add it as a separate \"meson-only\" test? Then it'd\nfail on developer's machines, instead of later in CI. We could pass the test\ninformation from the 'tests' array, or it could look at the metadata in\nmeson-info/intro-tests.json\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 17 Jan 2023 11:56:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 11:56:42AM -0800, Andres Freund wrote:\n> > $(CF_PGP_TESTS) is missing from contrib/pgcrypto/meson.build\n> \n> Assume that's the false positive?\n\nYes\n\n> > I also tried but failed to write something to warn if \"meson test\" was\n> > run with a list of tests but without tmp_install. Help wanted.\n> \n> That doesn't even catch the worst case - when there's tmp_install, but it's\n> too old.\n\nI don't understand what you mean by \"too old\" ?\n\n> > I propose to put something like this into \"SanityCheck\".\n> \n> Perhaps we instead could add it as a separate \"meson-only\" test? Then it'd\n> fail on developer's machines, instead of later in CI. We could pass the test\n> information from the 'tests' array, or it could look at the metadata in\n> meson-info/intro-tests.json\n\nI guess you mean that it should be *able* to fail on developer machines\n*in addition* to cirrusci.\n\nBut, a meson-only test might not be so helpful, as it assumes that the\ndeveloper is using meson, in which case the problem would tend not to\nhave occurred in the first place.\n\nBTW I also noticed that:\n\nmeson.build:meson_binpath_r = run_command(python, 'src/tools/find_meson', check: true)\nmeson.build-\nmeson.build-if meson_binpath_r.returncode() != 0 or meson_binpath_r.stdout() == ''\nmeson.build- error('huh, could not run find_meson.\\nerrcode: @0@\\nstdout: @1@\\nstderr: @2@'.format(\n\nThe return code will never be nonzero since check==true, right ?\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 17 Jan 2023 19:10:18 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Fri, Dec 30, 2022 at 4:59 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Nov 23, 2022 at 11:57 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > [PATCH 03/10] cirrus/macos: update to macos ventura\n>\n> I don't know any reason not to push this one too, but it's not time critical.\n\nSome observations:\n\n* macOS has a new release every year in June[1]\n* updates cease after three years[1]\n* thus three releases are in support (by that definition) at a time\n* we need an image on Cirrus; 13 appeared ~1 month later[2]\n* we need Homebrew support; 13 appeared ~3 months later[3]\n* we have 13 and 12 in the buildfarm, but no 11\n* it's common for developers but uncommon for servers/deployment\n\nSo what should our policy be on when to roll the CI image forward? I\nguess around New Year/now (~6 months after release) is a good time and\nwe should just do it. Anyone got a reason why we should wait? Our\nother CI OSes have slower major version release cycles and longer\nlives, so it's not quite the same hamster wheel of upgrades.\n\n[1] https://en.wikipedia.org/wiki/MacOS_version_history#Releases\n[2] https://github.com/orgs/cirruslabs/packages?tab=packages&q=macos\n[3] https://brew.sh/2022/09/07/homebrew-3.6.0/\n\n\n",
"msg_date": "Thu, 2 Feb 2023 14:02:25 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Some observations:\n\n> * macOS has a new release every year in June[1]\n> * updates cease after three years[1]\n> * thus three releases are in support (by that definition) at a time\n> * we need an image on Cirrus; 13 appeared ~1 month later[2]\n> * we need Homebrew support; 13 appeared ~3 months later[3]\n> * we have 13 and 12 in the buildfarm, but no 11\n> * it's common for developers but uncommon for servers/deployment\n\n> So what should our policy be on when to roll the CI image forward? I\n> guess around New Year/now (~6 months after release) is a good time and\n> we should just do it. Anyone got a reason why we should wait? Our\n> other CI OSes have slower major version release cycles and longer\n> lives, so it's not quite the same hamster wheel of upgrades.\n\nI'd argue that developers are probably the kind of people who update\ntheir OS sooner rather than later --- I've usually updated my laptop\nand at least one BF animal to $latest macOS within a month or so of\nthe dot-zero release. So waiting 6 months seems to me like CI will be\nbehind the users, which will be unhelpful. I'd rather drop the oldest\nrelease sooner, if we need to hold down the number of macOS revisions\nunder test.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Feb 2023 20:12:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Thu, Feb 2, 2023 at 2:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Some observations:\n> > So what should our policy be on when to roll the CI image forward? I\n> > guess around New Year/now (~6 months after release) is a good time and\n> > we should just do it. Anyone got a reason why we should wait? Our\n> > other CI OSes have slower major version release cycles and longer\n> > lives, so it's not quite the same hamster wheel of upgrades.\n>\n> I'd argue that developers are probably the kind of people who update\n> their OS sooner rather than later --- I've usually updated my laptop\n> and at least one BF animal to $latest macOS within a month or so of\n> the dot-zero release. So waiting 6 months seems to me like CI will be\n> behind the users, which will be unhelpful. I'd rather drop the oldest\n> release sooner, if we need to hold down the number of macOS revisions\n> under test.\n\nCool. Done.\n\nOut of curiosity, I wondered how the \"graphical installer\" packagers\nlike EDB and Postgres.app choose a target, when Apple is moving so\nfast. I see that the current EDB installers target 10.14 for PG15,\nwhich was 5 years old at initial release, and thus already EOL'd for 2\nyears. Postgres.app goes back one more year. In other words, even\nthough that preadv/pwritev \"decl\" stuff is unnecessary for PG16 if you\nthink we should only target OSes that the vendor still supports (which\nwill be 12, 13, 14), someone would still shout at me if I removed it.\n\n\n",
"msg_date": "Fri, 3 Feb 2023 14:58:04 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "rebased, and re-including a patch to show code coverage of changed\nfiles.\n\na5b3e50d922 cirrus/windows: add compiler_warnings_script\n4c98dcb0e03 cirrus/freebsd: run with more CPUs+RAM and do not repartition\naaeef938ed4 cirrus/freebsd: define ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n9baf41674ad pg_upgrade: tap test: exercise --link and --clone\n7e09035f588 WIP: ci/meson: allow showing only failed tests ..\ne4534821ef5 cirrus/ccache: use G rather than GB suffix..\n185d1c3ed13 cirrus: code coverage\n5dace84a038 cirrus: upload changed html docs as artifacts\n852360330ef +html index file",
"msg_date": "Fri, 3 Feb 2023 08:26:57 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On 03.02.23 15:26, Justin Pryzby wrote:\n> rebased, and re-including a patch to show code coverage of changed\n> files.\n\nThis constant flow of patches under one subject doesn't lend itself well \nto the commit fest model of trying to finish things up. I can't quite \ntell which of these patches are ready and agreed upon, and which ones \nare work in progress or experimental.\n\n > e4534821ef5 cirrus/ccache: use G rather than GB suffix..\n\nThis one seems obvious. I have committed it.\n\n> 9baf41674ad pg_upgrade: tap test: exercise --link and --clone\n\nThis seems like a good idea.\n\n> 7e09035f588 WIP: ci/meson: allow showing only failed tests ..\n\nI'm not sure I like this one. I sometimes look up the logs of \nnon-failed tests to compare them with failed tests, to get context about what \ncould lead to failures. Maybe we can make this behavior adjustable. \nBut I've not been bothered by the current behavior.\n\n\n\n",
"msg_date": "Mon, 13 Mar 2023 07:39:52 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 07:39:52AM +0100, Peter Eisentraut wrote:\n> On 03.02.23 15:26, Justin Pryzby wrote:\n> > rebased, and re-including a patch to show code coverage of changed\n> > files.\n> \n> This constant flow of patches under one subject doesn't lend itself well to\n> the commit fest model of trying to finish things up.\n> I can't quite tell which of these patches are ready and agreed upon,\n> and which ones are work in progress or experimental.\n\nI'm soliciting feedback on those patches that I've sent recently - I've\nelided patches if they have some unresolved issue.\n\nI'm not aware of any loose ends other than what's updated here:\n\n- cirrus: code coverage\n\nI changed this to also run an \"initial\" coverage report before running\ntests. It's not clear to me what effect that has, though...\n\nAndres seems to think it's a problem that this shows coverage only for\nfiles that were actually changed. But that's what's intended; it's\nsufficient to see if new code is being hit by tests. It would be slow\nand take a lot of extra space to upload a coverage report for every\npatch, every day. It might be nice for cfbot to show how test coverage\nchanged in the affected files: -15% / +25%.\n\n- cirrus: upload changed html docs as artifacts\n\nFixed an \"only_if\" line so cfbot will run the \"warnings\" task.\n\nMaybe this patch is waiting on Andres' patch to \"move CompilerWarnings to\nmeson\" ?\n\n> > 7e09035f588 WIP: ci/meson: allow showing only failed tests ..\n> \n> I'm not sure I like this one. I sometimes look up the logs of non-failed\n> tests to compare them with failed tests, to get context about what could\n> lead to failures. Maybe we can make this behavior adjustable. But I've not been\n> bothered by the current behavior.\n\nIt's adjustable by un/setting the environment variable.\n\nI'm surprised to hear that anyone using cirrusci (with or without cfbot)\nwouldn't prefer the behavior this patch implements. 
It's annoying to\nsearch for the logs of the (typically exactly one) failing test in\ncirrus' directory of 200-some test artifacts. We're also uploading a lot\nof logs for every failure. (But I suppose this might break cfbot's new\nclient-side parsing of things like build logs...)\n\n-- \nJustin",
"msg_date": "Mon, 13 Mar 2023 23:56:56 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On 14.03.23 05:56, Justin Pryzby wrote:\n> I'm soliciting feedback on those patches that I've sent recently - I've\n> elided patches if they have some unresolved issue.\n\n > [PATCH 1/8] cirrus/windows: add compiler_warnings_script\n\nNeeds a better description of what it actually does. (And fewer \"I'm \nnot sure how to write this ...\" comments ;-) ) It looks like it would \nfail the build if there is a compiler warning in the Windows VS task? \nShouldn't that be done in the CompilerWarnings task?\n\nAlso, I see a bunch of warnings in the current output from that task. \nThese should be cleaned up in any case before we can let a thing like \nthis loose.\n\n(The warnings are all like\n\nC:\\python\\Include\\pyconfig.h(117): warning C4005: 'MS_WIN64': macro \nredefinition\n\nso possibly a single fix can address them all.)\n\n\n > [PATCH 2/8] cirrus/freebsd: run with more CPUs+RAM and do not repartition\n\nI don't know enough about this. Maybe Andres or Thomas want to take \nthis. No concerns if it's safe.\n\n\n > [PATCH 3/8] cirrus/freebsd: define \nENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n\nLooks sensible.\n\n\n > [PATCH 4/8] pg_upgrade: tap test: exercise --link and --clone\n\nI haven't been able to get any changes to the test run times outside of \nnoise from this. But some more coverage is sensible in any case.\n\nI'm concerned that with this change, the only platform that tests --copy \nis Windows, but Windows has a separate code path for copy. So we should \nleave one Unix platform to test --copy. Maybe have FreeBSD test --link \nand macOS test --clone and leave the others with --copy?\n\n\n > [PATCH 5/8] WIP: ci/meson: allow showing only failed tests ..\n\n>>> 7e09035f588 WIP: ci/meson: allow showing only failed tests ..\n>>\n>> I'm not sure I like this one. I sometimes look up the logs of non-failed\n>> tests to compare them with failed tests, to get context about what could\n>> lead to failures. Maybe we can make this behavior adjustable. 
But I've not been\n>> bothered by the current behavior.\n> \n> It's adjustable by un/setting the environment variable.\n> \n> I'm surprised to hear that anyone using cirrusci (with or without cfbot)\n> wouldn't prefer the behavior this patch implements. It's annoying to\n> search for the logs of the (typically exactly one) failing test in\n> cirrus' directory of 200-some test artifacts. We're also uploading a lot\n> of logs for every failure. (But I suppose this might break cfbot's new\n> client-side parsing of things like build logs...)\n\nOne thing that actually annoys me is that a successful run does not \nupload any test artifacts at all. So, I guess I'm just of a different \nopinion here.\n\n\n > [PATCH 6/8] cirrus: code coverage\n\nThis adds -Db_coverage=true to the FreeBSD task. This has a significant \nimpact on the build time. (+50% at least, it appears.)\n\nI'm not sure the approach here makes sense. For example, if you add a \nnew test, the set of changed files is just that test. So you won't get \nany report on what coverage change the test has caused.\n\nAlso, I don't think I trust the numbers from the meson coverage stuff \nyet. See for example \n<https://www.postgresql.org/message-id/Y/3AI+/MqKcjLk/T@paquier.xyz>.\n\n\n > [PATCH 7/8] cirrus: upload changed html docs as artifacts\n > [PATCH 8/8] +html index file\n\nThis builds the docs twice and then analyzes the differences between the \ntwo builds. This also affects the build times quite significantly.\n\nHow useful is this actually? People who want to look at the docs can \nbuild them locally. There are no platform dependencies or anything like \nthat where having them built elsewhere is of advantage.\n\n\n\n\n",
"msg_date": "Wed, 15 Mar 2023 10:58:41 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 10:58:41AM +0100, Peter Eisentraut wrote:\n> On 14.03.23 05:56, Justin Pryzby wrote:\n> > I'm soliciting feedback on those patches that I've sent recently - I've\n> > elided patches if they have some unresolved issue.\n> \n> > [PATCH 1/8] cirrus/windows: add compiler_warnings_script\n> \n> Needs a better description of what it actually does. (And fewer \"I'm not\n> sure how to write this ...\" comments ;-) ) It looks like it would fail the\n> build if there is a compiler warning in the Windows VS task? Shouldn't that\n> be done in the CompilerWarnings task?\n\nThe goal is to fail due to warnings only after running tests.\n\nhttps://www.postgresql.org/message-id/20220212212310.f645c6vw3njkgxka%40alap3.anarazel.de\n\"Probably worth scripting something to make the windows task error out\nif there had been warnings, but only after running the tests.\"\n\nCompilerWarnings runs in a linux environment running with -Werror. \nThis patch scrapes warnings out of MSVC, since (at least historically)\nit's too slow to run a separate windows VM to compile with -Werror.\n\n> Also, I see a bunch of warnings in the current output from that task. These\n> should be cleaned up in any case before we can let a thing like this loose.\n\nYeah (and I mentioned those myself). As it stands, my patch also\n\"breaks\" every time someone else's patch introduces warnings. I\nincluded links demonstrating its failures.\n\nI agree that it's not okay to merge the patch when it's currently\nfailing, but I cannot dig into that other issue right now.\n\n> > [PATCH 6/8] cirrus: code coverage\n> \n> This adds -Db_coverage=true to the FreeBSD task. This has a significant\n> impact on the build time. (+50% at least, it appears.)\n\nYes - but with the CPUs added by the prior patch, the freebsd task is\nfaster than it is currently. And its 8min runtime would match the other\ntasks well.\n\n> I'm not sure the approach here makes sense. 
For example, if you add a new\n> test, the set of changed files is just that test. So you won't get any\n> report on what coverage change the test has caused.\n\nThe coverage report that I proposed clearly doesn't handle that case -\nit's not intended to.\n\nShowing a full coverage report is somewhat slow to generate, probably\nunreasonable to upload for every patch, every day, and not very\ninteresting since it's at least 99% duplicative. The goal is to show a\ncoverage report for new code for every patch. What fraction of the time\ndo you think the patch author, reviewer or committer has looked at a\ncoverage report? It's not a question of whether it's possible to do so\nlocally, but of whether it's actually done.\n\n> Also, I don't think I trust the numbers from the meson coverage stuff yet.\n> See for example\n> <https://www.postgresql.org/message-id/Y/3AI+/MqKcjLk/T@paquier.xyz>.\n\nI'm not using the meson coverage target. I could instead add\nCFLAGS=--coverage. Anyway, getting a scalar value like \"83%\" might be\ninteresting to show in cfbot, but it's not the main goal here.\n\n> > [PATCH 7/8] cirrus: upload changed html docs as artifacts\n> > [PATCH 8/8] +html index file\n> \n> This builds the docs twice and then analyzes the differences between the two\n> builds. This also affects the build times quite significantly.\n\nThe main goal is to upload the changed docs.\n\n> People who want to look at the docs can build them locally.\n\nThis makes the docs for every patch available for reviewers, without\nneeding a build environment. An easy goal would be if documentation for\nevery patch was reviewed by a native English speaker. Right now that's\nnot consistently true.\n\n> How useful is this actually?\n\nI'm surprised if there's any question about the merits of making\ndocumentation easily available for review. 
Several people have agreed;\none person mailed me privately specifically to ask how to show HTML docs\non cirrusci.\n\nAnyway, all this stuff is best addressed either before or after the CF.\nI'll kick the patch forward. Thanks for looking.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 15 Mar 2023 09:56:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On 15.03.23 15:56, Justin Pryzby wrote:\n> I'm surprised if there's any question about the merits of making\n> documentation easily available for review. Several people have agreed;\n> one person mailed me privately specifically to ask how to show HTML docs\n> on cirrusci.\n> \n> Anyway, all this stuff is best addressed either before or after the CF.\n> I'll kick the patch forward. Thanks for looking.\n\nI suppose this depends on what you want to use this for. If your use is \nto prepare and lay out as much information as possible about a patch for \na reviewer, some of your ideas make sense.\n\nI'm using this primarily to quickly test local work in progress. So I \nwant a quick feedback cycle. I don't need it to show me which HTML docs \nchanged, for example.\n\nSo maybe there need to be different modes.\n\n\n\n",
"msg_date": "Wed, 15 Mar 2023 16:57:34 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 04:57:34PM +0100, Peter Eisentraut wrote:\n> On 15.03.23 15:56, Justin Pryzby wrote:\n> > I'm surprised if there's any question about the merits of making\n> > documentation easily available for review. Several people have agreed;\n> > one person mailed me privately specifically to ask how to show HTML docs\n> > on cirrusci.\n> > \n> > Anyway, all this stuff is best addressed either before or after the CF.\n> > I'll kick the patch forward. Thanks for looking.\n> \n> I suppose this depends on what you want to use this for. If your use is to\n> prepare and lay out as much information as possible about a patch for a\n> reviewer, some of your ideas make sense.\n> \n> I'm using this primarily to quickly test local work in progress. So I want\n> a quick feedback cycle. I don't need it to show me which HTML docs changed,\n> for example.\n> \n> So maybe there need to be different modes.\n\nI'm open to that - for example, mingw is currently opt-in. Maybe this\nshould be a separate task - it was implemented like that based on an\nearlier suggestion (and then changed back again based on another\nsuggestion). The task could be triggered manually or by cfbot's\nmessage.\n\nBut a primary goal for cirrus.yml was to allow developers to do the same\nthings as cfbot, and without everyone needing to reimplement it for\nthemselves.\n\nYou want quick feedback, like everyone else - but I doubt you disable\nthe documentation build when you don't need it, even though that would\nshave off a whole minute. And I doubt that you'd comment it out even if\nthe documentation was built twice.\n\nAnyway - I think this patch is probably waiting on Andres' patch to\n\"convert CompilerWarnings to meson\".\n\n> > 7e09035f588 WIP: ci/meson: allow showing only failed tests ..\n> \n> I'm not sure I like this one. I sometimes look up the logs of non-failed\n> tests to compare them with failed tests, to get context about what could lead to failures. 
Maybe we can make this behavior adjustable. But I've not been\n> bothered by the current behavior.\n\nI suggest trying the patch; I doubt you'd prefer the existing behavior.\n\nThe patch is rebased now that meson is updated to avoid the windows\npython warnings (thanks Andres).\n\n-- \nJustin",
"msg_date": "Tue, 11 Apr 2023 20:05:21 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On 12.04.23 03:05, Justin Pryzby wrote:\n> The patch is rebased now that meson is updated to avoid the windows\n> python warnings (thanks Andres).\n\nTo keep this moving along, I have committed\n\n[PATCH 3/8] cirrus/freebsd: define ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 10:37:43 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 11:35:09AM -0600, Justin Pryzby wrote:\n> However, this finds two real problems and one false-positive with\n> missing regress/isolation tests:\n> \n> $ for makefile in `find src contrib -name Makefile`; do for testname in `sed -r '/^(REGRESS|ISOLATION) =/!d; s///; :l; /\\\\\\\\$/{s///; N; b l}; s/\\n//g' \"$makefile\"`; do meson=${makefile%/Makefile}/meson.build; grep -Fw \"$testname\" \"$meson\" >/dev/null || echo \"$testname is missing from $meson\"; done; done\n\nAnd, since 681d9e462:\n\nsecurity is missing from contrib/seg/meson.build\n\n\n",
"msg_date": "Wed, 12 Jul 2023 00:56:17 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 12:56:17AM -0500, Justin Pryzby wrote:\n> And, since 681d9e462:\n> \n> security is missing from contrib/seg/meson.build\n\nUgh. Good catch!\n--\nMichael",
"msg_date": "Wed, 12 Jul 2023 15:07:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Wed, 12 Jul 2023 at 11:38, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jul 12, 2023 at 12:56:17AM -0500, Justin Pryzby wrote:\n> > And, since 681d9e462:\n> >\n> > security is missing from contrib/seg/meson.build\n>\n> Ugh. Good catch!\n\nAre we planning to do anything more in this thread, the thread has\nbeen idle for more than 7 months. If nothing more is planned for this,\nI'm planning to close this commitfest entry in this commitfest.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 17 Jan 2024 17:34:00 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 05:34:00PM +0530, vignesh C wrote:\n> Are we planning to do anything more in this thread, the thread has\n> been idle for more than 7 months. If nothing more is planned for this,\n> I'm planning to close this commitfest entry in this commitfest.\n\nOops, this went through the cracks. security was still missing in\nseg's meson.build, so I've just applied a patch to take care of it.\nI am not spotting any other holes..\n--\nMichael",
"msg_date": "Thu, 18 Jan 2024 10:16:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Thu, 18 Jan 2024 at 06:46, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jan 17, 2024 at 05:34:00PM +0530, vignesh C wrote:\n> > Are we planning to do anything more in this thread, the thread has\n> > been idle for more than 7 months. If nothing more is planned for this,\n> > I'm planning to close this commitfest entry in this commitfest.\n>\n> Oops, this went through the cracks. security was still missing in\n> seg's meson.build, so I've just applied a patch to take care of it.\n> I am not spotting any other holes..\n\nAre we planning to do anything more on this? I was not sure if we\nshould move this to next commitfest or return it.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 31 Jan 2024 15:10:13 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On 2024-Jan-31, vignesh C wrote:\n\n> Are we planning to do anything more on this? I was not sure if we\n> should move this to next commitfest or return it.\n\nWell, the patches don't apply anymore since .cirrus.tasks.yml has been\ncreated. However, I'm sure we still want [some of] the improvements\nto the tests in [1]. I can volunteer to rebase the patches in time for the\nMarch commitfest, if Justin is not available to do so. If you can\nplease move it forward to the March cf and set it WoA, I'd appreciate\nthat.\n\nThanks\n\n[1] https://postgr.es/m/ZA/+mKDX9zWfhD3v@telsasoft.com\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Tiene valor aquel que admite que es un cobarde\" (Fernandel)\n\n\n",
"msg_date": "Wed, 31 Jan 2024 11:59:21 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Wed, Jan 31, 2024 at 11:59:21AM +0100, Alvaro Herrera wrote:\n> On 2024-Jan-31, vignesh C wrote:\n> \n> > Are we planning to do anything more on this? I was not sure if we\n> > should move this to next commitfest or return it.\n> \n> Well, the patches don't apply anymore since .cirrus.tasks.yml has been\n> created. However, I'm sure we still want [some of] the improvements\n> to the tests in [1]. I can volunteer to rebase the patches in time for the\n> March commitfest, if Justin is not available to do so. If you can\n> please move it forward to the March cf and set it WoA, I'd appreciate\n> that.\n\nThe patches are rebased. A couple were merged since I last rebased them\n~10 months ago. The freebsd patch will probably be obsoleted by a patch\nof Thomas.\n\nOn Mon, Mar 13, 2023 at 07:39:52AM +0100, Peter Eisentraut wrote:\n> On 03.02.23 15:26, Justin Pryzby wrote:\n> > 9baf41674ad pg_upgrade: tap test: exercise --link and --clone\n> \n> This seems like a good idea.\n\nOn Wed, Mar 15, 2023 at 10:58:41AM +0100, Peter Eisentraut wrote:\n> > [PATCH 4/8] pg_upgrade: tap test: exercise --link and --clone\n> \n> I haven't been able to get any changes to the test run times outside\n> of noise from this. But some more coverage is sensible in any case.\n> \n> I'm concerned that with this change, the only platform that tests\n> --copy is Windows, but Windows has a separate code path for copy. So\n> we should leave one Unix platform to test --copy. 
Maybe have FreeBSD\n> test --link and macOS test --clone and leave the others with --copy?\n\nI addressed Peter's comments, but haven't heard further.\n\nThe patch to show HTML docs artifacts may be waiting for Andres' patch\nto convert CompilerWarnings to meson.\n\nIt may also be waiting on cfbot to avoid squishing all the patches\ntogether.\n\nI sent various patches to cfbot but haven't heard back.\nhttps://www.postgresql.org/message-id/flat/20220409021853.GP24419@telsasoft.com\nhttps://www.postgresql.org/message-id/flat/20220623193125.GB22452@telsasoft.com\nhttps://github.com/justinpryzby/cfbot/commits/master\nhttps://github.com/macdice/cfbot/pulls\n\n-- \nJustin",
"msg_date": "Tue, 13 Feb 2024 13:10:20 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On 13.02.24 20:10, Justin Pryzby wrote:\n> On Mon, Mar 13, 2023 at 07:39:52AM +0100, Peter Eisentraut wrote:\n>> On 03.02.23 15:26, Justin Pryzby wrote:\n>>> 9baf41674ad pg_upgrade: tap test: exercise --link and --clone\n>> This seems like a good idea.\n> On Wed, Mar 15, 2023 at 10:58:41AM +0100, Peter Eisentraut wrote:\n>>> [PATCH 4/8] pg_upgrade: tap test: exercise --link and --clone\n>> I haven't been able to get any changes to the test run times outside\n>> of noise from this. But some more coverage is sensible in any case.\n>>\n>> I'm concerned that with this change, the only platform that tests\n>> --copy is Windows, but Windows has a separate code path for copy. So\n>> we should leave one Unix platform to test --copy. Maybe have FreeBSD\n>> test --link and macOS test --clone and leave the others with --copy?\n> I addressed Peter's comments, but haven't heard further.\n\nOk, I didn't see that my feedback had been addressed. I have committed \nthis patch.\n\n\n",
"msg_date": "Mon, 19 Feb 2024 09:33:54 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "\n\n> On 19 Feb 2024, at 11:33, Peter Eisentraut <peter@eisentraut.org> wrote:\n> \n> Ok, I didn't see that my feedback had been addressed. I have committed this patch.\n\nJustin, Peter, I can't determine actual status of the CF entry [0]. May I ask someone of you to move patch to next CF or close as committed?\nThanks!\n\n\nBest regards, Andrey Borodin.\n[0] https://commitfest.postgresql.org/47/3709/\n\n",
"msg_date": "Mon, 8 Apr 2024 17:54:10 +0300",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Mon, Apr 08, 2024 at 05:54:10PM +0300, Andrey M. Borodin wrote:\n> Justin, Peter, I can't determine actual status of the CF entry\n> [0]. May I ask someone of you to move patch to next CF or close as\n> committed?\n\n0002 is the only thing committed as of 21a71648d39f.\n\nI can see the value in 0001, but the implementation feels awkward.\n\n0003 is wanted.\n\nI am personally not sure about 0004 to upload doc artifacts.\nSimilarly.\n\n0005 can already be done with a few clicks on the CI, and the previous\nrun may not be the only one that matters.\n\n0006 makes the doc check phase more complex.\n\nIn all that, 0003 is something that we should move on with, at least.\n\nMoving this entry to the next CF makes sense to me now, to give more\ntime to the other patches, and there's value to be extracted at quick\nglance.\n--\nMichael",
"msg_date": "Thu, 11 Apr 2024 10:12:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Fri, Jun 24, 2022 at 08:38:50AM +1200, Thomas Munro wrote:\n> > I've also sent some patches to Thomas for cfbot to help progress some of these\n> > patches (code coverage and documentation upload as artifacts).\n> > https://github.com/justinpryzby/cfbot/commits/master\n> \n> Thanks, sorry for lack of action, will get to these soon.\n\nOn Tue, Feb 13, 2024 at 01:10:21PM -0600, Justin Pryzby wrote:\n> I sent various patches to cfbot but haven't heard back.\n\n> https://www.postgresql.org/message-id/flat/20220409021853.GP24419@telsasoft.com\n> https://www.postgresql.org/message-id/flat/20220623193125.GB22452@telsasoft.com\n> https://github.com/justinpryzby/cfbot/commits/master\n> https://github.com/macdice/cfbot/pulls\n\n@Thomas: ping\n\nI reintroduced the patch for ccache/windows -- v4.10 supports PCH, which\ncan make the builds 2x faster.",
"msg_date": "Wed, 12 Jun 2024 08:10:23 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nThanks for working on this!\n\nOn Wed, 12 Jun 2024 at 16:10, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Jun 24, 2022 at 08:38:50AM +1200, Thomas Munro wrote:\n> > > I've also sent some patches to Thomas for cfbot to help progress some of these\n> > > patches (code coverage and documentation upload as artifacts).\n> > > https://github.com/justinpryzby/cfbot/commits/master\n> >\n> > Thanks, sorry for lack of action, will get to these soon.\n>\n> On Tue, Feb 13, 2024 at 01:10:21PM -0600, Justin Pryzby wrote:\n> > I sent various patches to cfbot but haven't heard back.\n>\n> > https://www.postgresql.org/message-id/flat/20220409021853.GP24419@telsasoft.com\n> > https://www.postgresql.org/message-id/flat/20220623193125.GB22452@telsasoft.com\n> > https://github.com/justinpryzby/cfbot/commits/master\n> > https://github.com/macdice/cfbot/pulls\n>\n> @Thomas: ping\n>\n> I reintroduced the patch for ccache/windows -- v4.10 supports PCH, which\n> can make the builds 2x faster.\n\nI applied 0001 and 0002 to see ccache support on Windows but the build\nstep failed with: 'ccache: error: No stats log has been configured'.\nPerhaps you forgot to add 'CCACHE_STATSLOG: $CCACHE_DIR.stats.log' to\n0002? After adding that line, CI finished successfully. And, I confirm\nthat the build step takes ~30 seconds now; it was ~90 seconds before\nthat.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Thu, 13 Jun 2024 14:38:46 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Thu, Jun 13, 2024 at 02:38:46PM +0300, Nazir Bilal Yavuz wrote:\n> On Wed, 12 Jun 2024 at 16:10, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Fri, Jun 24, 2022 at 08:38:50AM +1200, Thomas Munro wrote:\n> > > > I've also sent some patches to Thomas for cfbot to help progress some of these\n> > > > patches (code coverage and documentation upload as artifacts).\n> > > > https://github.com/justinpryzby/cfbot/commits/master\n> > >\n> > > Thanks, sorry for lack of action, will get to these soon.\n> >\n> > On Tue, Feb 13, 2024 at 01:10:21PM -0600, Justin Pryzby wrote:\n> > > I sent various patches to cfbot but haven't heard back.\n> >\n> > > https://www.postgresql.org/message-id/flat/20220409021853.GP24419@telsasoft.com\n> > > https://www.postgresql.org/message-id/flat/20220623193125.GB22452@telsasoft.com\n> > > https://github.com/justinpryzby/cfbot/commits/master\n> > > https://github.com/macdice/cfbot/pulls\n> >\n> > @Thomas: ping\n> >\n> > I reintroduced the patch for ccache/windows -- v4.10 supports PCH, which\n> > can make the builds 2x faster.\n> \n> I applied 0001 and 0002 to see ccache support on Windows but the build\n> step failed with: 'ccache: error: No stats log has been configured'.\n> Perhaps you forgot to add 'CCACHE_STATSLOG: $CCACHE_DIR.stats.log' to\n> 0002?\n\nSomething like that - I put the line back. I don't know if statslog\nshould be included in the patch, but it's useful for demonstrating that\nit's working.\n\nccache should be installed in the image rather than re-installed on each\ninvocation.\n\n-- \nJustin",
"msg_date": "Thu, 13 Jun 2024 06:56:20 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Thu, Jun 13, 2024 at 06:56:20AM -0500, Justin Pryzby wrote:\n> On Thu, Jun 13, 2024 at 02:38:46PM +0300, Nazir Bilal Yavuz wrote:\n>>> I reintroduced the patch for ccache/windows -- v4.10 supports PCH, which\n>>> can make the builds 2x faster.\n>> \n>> I applied 0001 and 0002 to see ccache support on Windows but the build\n>> step failed with: 'ccache: error: No stats log has been configured'.\n>> Perhaps you forgot to add 'CCACHE_STATSLOG: $CCACHE_DIR.stats.log' to\n>> 0002?\n> \n> Something like that - I put the line back. I don't know if statslog\n> should be included in the patch, but it's useful for demonstrating that\n> it's working.\n> \n> ccache should be installed in the image rather than re-installed on each\n> invocation.\n\nGetting a 90s -> 30s improvement is nice. With such numbers, 0002 is\nworth considering first.\n\n+ ninja -C build |tee build.txt\n\nIn 0001, how OK is it to rely on the existence of tee for the VS2019\nenvironments? The base images include it, meaning that it is OK?\n\n- REM choco install -y --no-progress ...\n\nI'd rather keep this line in 0002, as a matter of documentation.\n\n+ set CC=c:\\ProgramData\\chocolatey\\lib\\ccache\\tools\\ccache-4.10-windows-x86_64\\ccache.exe cl.exe\n\nAs of https://docs.mesa3d.org/meson.html#compiler-specification, using\nCC is supported by meson (didn't know that), but shouldn't this be set\nin the \"env:\" part of the VS2019 task in .cirrus.tasks.yml?\n--\nMichael",
"msg_date": "Fri, 14 Jun 2024 08:05:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn Thu, 13 Jun 2024 at 14:56, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> ccache should be installed in the image rather than re-installed on each\n> invocation.\n\nccache is installed in the Windows VM images now [1]. It can be used\nas 'set CC=ccache.exe cl.exe' in the Windows CI task.\n\n[1] https://github.com/anarazel/pg-vm-images/commit/03a9225ac962fb30b5c0722c702941e2d7c1e81e\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Fri, 14 Jun 2024 17:36:54 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Fri, Jun 14, 2024 at 05:36:54PM +0300, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> On Thu, 13 Jun 2024 at 14:56, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > ccache should be installed in the image rather than re-installed on each\n> > invocation.\n> \n> ccache is installed in the Windows VM images now [1]. It can be used\n> as 'set CC=ccache.exe cl.exe' in the Windows CI task.\n> \n> [1] https://github.com/anarazel/pg-vm-images/commit/03a9225ac962fb30b5c0722c702941e2d7c1e81e\n\nThanks. I think that works by using a \"shim\" created by choco in\nC:\\ProgramData\\chocolatey\\bin.\n\nBut going through that indirection seems to incur an extra 15sec of\ncompilation time; I think we'll want to do something to avoid that.\n\nI'm not sure what the options are, like maybe installing ccache into a\nfixed path like c:\\ccache or installing a custom link, to a \"pinned\"\nversion of ccache.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 14 Jun 2024 10:22:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi, \n\nOn June 14, 2024 8:22:01 AM PDT, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>On Fri, Jun 14, 2024 at 05:36:54PM +0300, Nazir Bilal Yavuz wrote:\n>> Hi,\n>> \n>> On Thu, 13 Jun 2024 at 14:56, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> >\n>> > ccache should be installed in the image rather than re-installed on each\n>> > invocation.\n>> \n>> ccache is installed in the Windows VM images now [1]. It can be used\n>> as 'set CC=ccache.exe cl.exe' in the Windows CI task.\n>> \n>> [1] https://github.com/anarazel/pg-vm-images/commit/03a9225ac962fb30b5c0722c702941e2d7c1e81e\n>\n>Thanks. I think that works by using a \"shim\" created by choco in\n>C:\\ProgramData\\chocolatey\\bin.\n>\n>But going through that indirection seems to incur an extra 15sec of\n>compilation time; I think we'll want to do something to avoid that.\n>\n>I'm not sure what the options are, like maybe installing ccache into a\n>fixed path like c:\\ccache or installing a custom link, to a \"pinned\"\n>version of ccache.\n\n\nHm. There actually already is the mingw ccache installed. The images for mingw and msvc used to be separate (for startup performance when using containers), but we just merged them. So it might be easiest to just explicitly use the ccache from there (also an explicit path might be faster). Could you check if that's fast? If not, we can just install the binaries distributed by the project, it's just more work to keep up2date that way. \n\nAndres \n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Fri, 14 Jun 2024 08:34:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "On Fri, Jun 14, 2024 at 08:34:37AM -0700, Andres Freund wrote:\n> Hm. There actually already is the mingw ccache installed. The images for mingw and msvc used to be separate (for startup performance when using containers), but we just merged them. So it might be easiest to just explicitly use the ccache from there\n\n> (also an explicit path might be faster).\n\nI don't think the path search is significant; it's fast so long as\nthere's no choco stub involved.\n\n> Could you check if that's fast?\n\nYes, it is.\n\n> If not, we can just install the binaries distributed by the project, it's just more work to keep up2date that way. \n\nI guess you mean a separate line to copy choco's ccache.exe somewhere.\n\n-- \nJustin",
"msg_date": "Tue, 18 Jun 2024 08:36:57 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
},
{
"msg_contents": "Hi,\n\nOn 2024-06-18 08:36:57 -0500, Justin Pryzby wrote:\n> On Fri, Jun 14, 2024 at 08:34:37AM -0700, Andres Freund wrote:\n> > Hm. There actually already is the mingw ccache installed. The images for mingw and msvc used to be separate (for startup performance when using containers), but we just merged them. So it might be easiest to just explicitly use the ccache from there\n> \n> > (also an explicit path might be faster).\n> \n> I don't think the path search is significant; it's fast so long as\n> there's no choco stub involved.\n\nComparatively it's definitely small, but I've seen it make a difference on\nwindows.\n\n\n> > Could you check if that's fast?\n> \n> Yes, it is.\n\nCool.\n\n\n> > If not, we can just install the binaries distributed by the project, it's just more work to keep up2date that way. \n> \n> I guess you mean a separate line to copy choco's ccache.exe somewhere.\n\nI was thinking of alternatively just using the binaries upstream\ndistributes. But the mingw way seems easier.\n\nPerhaps we should add an environment variable pointing to ccache to the image,\nor such?\n\n\n> build_script: |\n> vcvarsall x64\n> - ninja -C build\n> + ninja -C build |tee build.txt\n> + REM Since pipes lose the exit status of the preceding command, rerun the compilation\n> + REM without the pipe, exiting now if it fails, to avoid trying to run checks\n> + ninja -C build > nul\n\nPerhaps we could do something like\n (ninja -C build && touch success) | tee build.txt\n cat success\n?\n\n\n> + CCACHE_MAXSIZE: \"500M\"\n\nDoes it have to be this big to work?\n\n\n> configure_script: |\n> vcvarsall x64\n> - meson setup --backend ninja --buildtype debug -Dc_link_args=/DEBUG:FASTLINK -Dcassert=true -Dinjection_points=true -Db_pch=true -Dextra_lib_dirs=c:\\openssl\\1.1\\lib -Dextra_include_dirs=c:\\openssl\\1.1\\include -DTAR=%TAR% -DPG_TEST_EXTRA=\"%PG_TEST_EXTRA%\" build\n> + meson setup build --backend ninja --buildtype debug -Dc_link_args=/DEBUG:FASTLINK -Dcassert=true -Dinjection_points=true -Db_pch=true -Dextra_lib_dirs=c:\\openssl\\1.1\\lib -Dextra_include_dirs=c:\\openssl\\1.1\\include -DTAR=%TAR% -DPG_TEST_EXTRA=\"%PG_TEST_EXTRA%\" -Dc_args=\"/Z7\"\n\nA comment explaining why /Z7 is necessary would be good.\n\n\n\n> From 3a399c6350ed2f341425431f184e382c3f46d981 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sat, 26 Feb 2022 19:39:10 -0600\n> Subject: [PATCH 4/7] WIP: cirrus: upload changed html docs as artifacts\n> \n> This could be done on the client side (cfbot). One advantage of doing\n> it here is that fewer docs are uploaded - many patches won't upload docs\n> at all.\n> \n> https://www.postgresql.org/message-id/flat/20220409021853.GP24419@telsasoft.com\n> https://www.postgresql.org/message-id/CAB8KJ=i4qmEuopQ+PCSMBzGd4O-Xv0FCnC+q1x7hN9hsdvkBug@mail.gmail.com\n> \n> https://cirrus-ci.com/task/5396696388599808\n\nI still think that this runs the risk of increasing space utilization and thus\nincrease frequency of caches/artifacts being purged.\n\n\n\n> + # The commit that this branch is rebased on. There's no easy way to find this.\n\nI don't think that's true, IIRC I've complained about it before. We can do\nsomething like:\n\nmajor_num=$(grep PG_MAJORVERSION_NUM build/src/include/pg_config.h|awk '{print $3}');\necho major: $major_num;\nif git rev-parse --quiet --verify origin/REL_${major_num}_STABLE > /dev/null ; then\n basebranch=origin/REL_${major_num}_STABLE;\nelse\n basebranch=origin/master;\nfi;\necho base branch: $basebranch\nbase_commit=$(git merge-base HEAD $basebranch)\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 Jun 2024 14:25:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CI and test improvements"
},
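The base-branch detection Andres sketches in the message above can be sanity-checked outside CI. The sketch below fakes all of its inputs — a scratch repository, a hard-coded major number standing in for the `grep PG_MAJORVERSION_NUM build/src/include/pg_config.h` step, and local branch names instead of `origin/REL_*_STABLE` — so none of it reflects the real cfbot environment:

```shell
# Minimal, self-contained sketch of the merge-base lookup discussed above,
# exercised in a throwaway repository.  In CI the major number would be
# parsed from pg_config.h and the branches would be origin/REL_*_STABLE.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m base
git branch REL_17_STABLE          # stand-in for origin/REL_17_STABLE
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m tip

major_num=17                      # normally: grep PG_MAJORVERSION_NUM ...
if git rev-parse --quiet --verify "REL_${major_num}_STABLE" >/dev/null; then
    basebranch="REL_${major_num}_STABLE"
else
    basebranch=master
fi
base_commit=$(git merge-base HEAD "$basebranch")
echo "base branch: $basebranch"
```

Here the merge-base resolves to the commit the stable branch was forked at, which is exactly the "commit that this branch is rebased on" the cirrus comment asks about.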
{
"msg_contents": "On Tue, Jun 18, 2024 at 02:25:46PM -0700, Andres Freund wrote:\n> > > If not, we can just install the binaries distributed by the project, it's just more work to keep up2date that way. \n> > \n> > I guess you mean a separate line to copy choco's ccache.exe somewhere.\n> \n> I was thinking of alternatively just using the binaries upstream\n> distributes. But the mingw way seems easier.\n> \n> Perhaps we should add an environment variable pointing to ccache to the image,\n> or such?\n\nI guess you mean changing the OS image so there's a $CCACHE.\nThat sounds fine...\n\n> > + CCACHE_MAXSIZE: \"500M\"\n> \n> Does it have to be this big to work?\n\nIt's using 150MB for an initial compilation, and maxsize will need to be\n20% larger for it to not evict its cache before it can be used.\n\nThe other ccaches (except for mingw) seem to be several times larger\nthan what's needed for a single compilation, which makes sense to cache\nacross multiple branches (or even commits in a single branch), and for\ncfbot.\n\n> A comment explaining why /Z7 is necessary would be good.\n\nSure\n\n> > build_script: |\n> > vcvarsall x64\n> > - ninja -C build\n> > + ninja -C build |tee build.txt\n> > + REM Since pipes lose the exit status of the preceding command, rerun the compilation\n> > + REM without the pipe, exiting now if it fails, to avoid trying to run checks\n> > + ninja -C build > nul\n> \n> Perhaps we could do something like\n> (ninja -C build && touch success) | tee build.txt\n> cat success\n> ?\n\nI don't know -- a pipe alone seems more direct than a\nsubshell+conditional+pipe written in cmd.exe, whose syntax is not well\nknown here.\n\nMaybe you're suggesting to write \n\nsh -c \"(ninja -C build && touch success) | tee build.txt ; cat ./success\"\n\nBut that's another layer of complexity .. for what ?\n\n> > Subject: [PATCH 4/7] WIP: cirrus: upload changed html docs as artifacts\n> \n> I still think that this runs the risk of increasing space utilization and thus\n> increase frequency of caches/artifacts being purged.\n\nMaybe it should run on the local macs where I think you can control\nthat.\n\n> > + # The commit that this branch is rebased on. There's no easy way to find this.\n> \n> I don't think that's true, IIRC I've complained about it before. We can do\n> something like:\n\ncfbot now exposes it, so it'd be trivial for that case (although there\nwas no response here to my inquiries about that). I'll revisit this in\nthe future, once other patches have progressed.\n\n> major_num=$(grep PG_MAJORVERSION_NUM build/src/include/pg_config.h|awk '{print $3}');\n> echo major: $major_num;\n> if git rev-parse --quiet --verify origin/REL_${major_num}_STABLE > /dev/null ; then\n> basebranch=origin/REL_${major_num}_STABLE;\n> else\n> basebranch=origin/master;\n> fi;\n> echo base branch: $basebranch\n> base_commit=$(git merge-base HEAD $basebranch)",
"msg_date": "Tue, 6 Aug 2024 14:10:15 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
}
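The exit-status problem this thread keeps circling — a pipeline reporting `tee`'s status rather than the build's, and the sentinel-file workaround Andres proposes — is easy to reproduce in a POSIX shell. (The CI task itself runs under cmd.exe, where the same pipeline semantics apply but this exact syntax does not, which is the portability objection Justin raises.)

```shell
# A pipeline reports the status of its *last* command, so a failing build
# piped through tee looks successful:
( exit 7 ) | tee /dev/null
plain_status=$?                 # 0 -- the failure was swallowed by tee

# Sentinel-file workaround from the thread: create a marker only on
# success, then check for it after the pipeline.
rm -f success
( false && touch success ) | tee /dev/null   # "false" stands in for ninja
if [ -e success ]; then build_ok=yes; else build_ok=no; fi
rm -f success
echo "plain_status=$plain_status build_ok=$build_ok"
```

Bash-style alternatives such as `set -o pipefail` or `$PIPESTATUS` would avoid the marker file, but neither exists in cmd.exe, which is why the patch falls back to re-running ninja without the pipe instead.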
]
[
{
"msg_contents": "Hi all,\n\nTesting on the PG15 beta, I'm getting new failures when trying to create a\ncollation:\n\nCREATE COLLATION some_collation (LC_COLLATE = 'en-u-ks-primary',\n LC_CTYPE = 'en-u-ks-primary',\n PROVIDER = icu,\n DETERMINISTIC = False\n);\n\nThis works on PG14, but on PG15 it errors with 'parameter \"locale\" must be\nspecified'.\n\nI wanted to make sure this breaking change is intentional (it doesn't seem\ndocumented in the release notes or in the docs for CREATE COLLATION).\n\nShay",
"msg_date": "Sat, 28 May 2022 20:16:40 +0200",
"msg_from": "Shay Rojansky <roji@roji.org>",
"msg_from_op": true,
"msg_subject": "CREATE COLLATION must be specified"
},
{
"msg_contents": "On 28.05.22 20:16, Shay Rojansky wrote:\n> CREATE COLLATION some_collation (LC_COLLATE = 'en-u-ks-primary',\n> LC_CTYPE = 'en-u-ks-primary',\n> PROVIDER = icu,\n> DETERMINISTIC = False\n> );\n> \n> This works on PG14, but on PG15 it errors with 'parameter \"locale\" must \n> be specified'.\n> \n> I wanted to make sure this breaking change is intentional (it doesn't \n> seem documented in the release notes or in the docs for CREATE COLLATION).\n\nThis change is intentional, but the documentation could be improved.\n\n\n",
"msg_date": "Sat, 28 May 2022 20:25:20 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE COLLATION must be specified"
},
{
"msg_contents": "--ok\nCREATE COLLATION some_collation (\n PROVIDER = icu,\n LOCALE = 'en-u-ks-primary',\n DETERMINISTIC = FALSE\n);\n\nCREATE COLLATION some_collation1 (\n PROVIDER = icu,\n LC_COLLATE = 'en-u-ks-primary',\n LC_CTYPE = 'en-u-ks-primary',\n DETERMINISTIC = FALSE\n);\n--ERROR: parameter \"locale\" must be specified\n\nCREATE COLLATION some_collation2 (\n LC_COLLATE = 'en-u-ks-primary',\n LC_CTYPE = 'en-u-ks-primary',\n LOCALE = 'en-u-ks-primary',\n PROVIDER = icu,\n DETERMINISTIC = FALSE\n);\n--ERROR: conflicting or redundant options\n--DETAIL: LOCALE cannot be specified together with LC_COLLATE or LC_CTYPE.\n\nSince LC_COLLATE is bundled together with LC_CTYPE.\nIn 15, If the provider is ICU then LC_COLLATE and LC_CTYPE are no longer\nrequired?\n\n\nOn Sat, May 28, 2022 at 11:55 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 28.05.22 20:16, Shay Rojansky wrote:\n> > CREATE COLLATION some_collation (LC_COLLATE = 'en-u-ks-primary',\n> > LC_CTYPE = 'en-u-ks-primary',\n> > PROVIDER = icu,\n> > DETERMINISTIC = False\n> > );\n> >\n> > This works on PG14, but on PG15 it errors with 'parameter \"locale\" must\n> > be specified'.\n> >\n> > I wanted to make sure this breaking change is intentional (it doesn't\n> > seem documented in the release notes or in the docs for CREATE\n> COLLATION).\n>\n> This change is intentional, but the documentation could be improved.\n>\n>\n>\n\n-- \n I recommend David Deutsch's <<The Beginning of Infinity>>\n\n Jian",
"msg_date": "Mon, 30 May 2022 13:47:36 +0530",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE COLLATION must be specified"
},
{
"msg_contents": "> > CREATE COLLATION some_collation (LC_COLLATE = 'en-u-ks-primary',\n> > LC_CTYPE = 'en-u-ks-primary',\n> > PROVIDER = icu,\n> > DETERMINISTIC = False\n> > );\n> >\n> > This works on PG14, but on PG15 it errors with 'parameter \"locale\" must\n> > be specified'.\n> >\n> > I wanted to make sure this breaking change is intentional (it doesn't\n> > seem documented in the release notes or in the docs for CREATE\nCOLLATION).\n>\n> This change is intentional, but the documentation could be improved.\n\nI think this is still missing in the PG15 release notes (\nhttps://www.postgresql.org/docs/15/release-15.html).",
"msg_date": "Sat, 15 Oct 2022 07:27:29 +0200",
"msg_from": "Shay Rojansky <roji@roji.org>",
"msg_from_op": true,
"msg_subject": "Re: CREATE COLLATION must be specified"
}
]
[
{
"msg_contents": "Commit 1a36bc9db (SQL/JSON query functions) introduced STRING as a\ntype_func_name_keyword. As per the complaint at [1], this broke use\nof \"string\" as a table name, column name, function parameter name, etc.\nThe restriction about column names, in particular, seems likely to\nbreak boatloads of applications --- wouldn't you think that \"string\"\nis a pretty likely column name? We have no cover to claim \"the SQL\nstandard says so\", either, because they list STRING as an unreserved\nkeyword.\n\nThis is trivial enough to fix so far as the core grammar is concerned;\nit works to just change STRING to be an unreserved_keyword. However,\nvarious ECPG tests fall over, so I surmise that somebody felt it was\nokay to break potentially thousands of applications in order to avoid\ntouching ECPG. I do not think that's an acceptable trade-off.\n\nI poked into it a bit and found the proximate cause: ECPG uses\nECPGColLabelCommon to represent user-chosen type names in some\nplaces, and that production *does not include unreserved_keyword*.\n(Sure enough, the failing test cases try to use \"string\" as a\ntype name in a variable declaration.) That's a pre-existing\nbit of awfulness, and it's indeed pretty awful, because it means\nthat any time we add a new keyword --- even a fully unreserved one\n--- we risk breaking somebody's ECPG application. We just hadn't\nhappened to add any keywords to date that conflicted with type names\nused in the ECPG test suite.\n\nI looked briefly at whether we could improve that situation.\nTwo of the four uses of ECPGColLabelCommon in ecpg.trailer can be\nconverted to the more general ECPGColLabel without difficulty,\nbut trying to change either of the uses in var_type results in\nliterally thousands of shift/reduce and reduce/reduce conflicts.\nThis seems to be because what follows ecpgstart can be either a general\nSQL statement or an ECPGVarDeclaration (beginning with var_type),\nand bison isn't smart enough to disambiguate. I have a feeling that\nthis situation could be improved with enough elbow grease, because\nplpgsql manages to solve a closely-related problem in allowing its\nassignment statements to coexist with general SQL statements. But\nI don't have the interest to tackle it personally, and anyway any\nfix would probably be more invasive than we want to put in post-beta.\n\nI also wondered briefly about just changing the affected test cases,\nreasoning that breaking some ECPG applications that use \"string\" is\nless bad than breaking everybody else that uses \"string\". But type\n\"string\" is apparently a standard ECPG and/or INFORMIX thing, so we\nprobably can't get away with that.\n\nHence, I propose the attached, which fixes the easily-fixable\nECPGColLabelCommon uses and adds a hard-wired special case for\nSTRING to handle the var_type usage.\n\nMore generally, I feel like we have a process problem: there needs to\nbe a higher bar to adding new fully- or even partially-reserved words.\nI might've missed it, but I don't recall that there was any discussion\nof the compatibility implications of this change.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/PAXPR02MB760049C92DFC2D8B5E8B8F5AE3DA9%40PAXPR02MB7600.eurprd02.prod.outlook.com",
"msg_date": "Sun, 29 May 2022 16:19:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "SQL/JSON functions vs. ECPG vs. STRING as a reserved word"
},
{
"msg_contents": "> I looked briefly at whether we could improve that situation.\n> Two of the four uses of ECPGColLabelCommon in ecpg.trailer can be\n> converted to the more general ECPGColLabel without difficulty,\n> but trying to change either of the uses in var_type results in\n> literally thousands of shift/reduce and reduce/reduce conflicts.\n> This seems to be because what follows ecpgstart can be either a general\n> SQL statement or an ECPGVarDeclaration (beginning with var_type),\n> and bison isn't smart enough to disambiguate. I have a feeling that\n> this situation could be improved with enough elbow grease, because\n> plpgsql manages to solve a closely-related problem in allowing its\n> assignment statements to coexist with general SQL statements. But\n> I don't have the interest to tackle it personally, and anyway any\n> fix would probably be more invasive than we want to put in post-beta.\n\nRight, the reason for all this is that the part after the \"exec sql\" could be\neither language, SQL or C. It has been like this for all those years. I do not\nclaim that the solution we have is the best, it's only the best I could come up\nwith when I implemented it. If anyone has a better way, please be my guest.\n\n> I also wondered briefly about just changing the affected test cases,\n> reasoning that breaking some ECPG applications that use \"string\" is\n> less bad than breaking everybody else that uses \"string\". But type\n> \"string\" is apparently a standard ECPG and/or INFORMIX thing, so we\n> probably can't get away with that.\n\nIIRC STRING is a normal datatype for INFORMIX and thus is implemented in ECPG\nfor its compatibility.\n\n> Hence, I propose the attached, which fixes the easily-fixable\n> ECPGColLabelCommon uses and adds a hard-wired special case for\n> STRING to handle the var_type usage.\n\nI'm fine with this approach.\n\nThanks Tom.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De\nMichael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\n\n\n",
"msg_date": "Mon, 30 May 2022 15:25:16 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON functions vs. ECPG vs. STRING as a reserved word"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n>> This seems to be because what follows ecpgstart can be either a general\n>> SQL statement or an ECPGVarDeclaration (beginning with var_type),\n>> and bison isn't smart enough to disambiguate. I have a feeling that\n>> this situation could be improved with enough elbow grease, because\n>> plpgsql manages to solve a closely-related problem in allowing its\n>> assignment statements to coexist with general SQL statements.\n\n> Right, the reason for all this is that the part after the \"exec sql\" could be\n> either language, SQL or C. It has been like this for all those years. I do not\n> claim that the solution we have is the best, it's only the best I could come up\n> with when I implemented it. If anyone has a better way, please be my guest.\n\nI pushed the proposed patch, but after continuing to think about\nit I have an idea for a possible solution to the older problem.\nI noticed that the problematic cases in var_type aren't really\ninterested in seeing any possible unreserved keyword: they care about\ncertain specific built-in type names (most of which are keywords\nalready) and about typedef names. Now, almost every C-parsing program\nI've ever seen has to lex typedef names specially, so what if we made\necpg do that too? After a couple of false starts, I came up with the\nattached draft patch. The key ideas are:\n\n1. In pgc.l, if an identifier is a typedef name, ignore any possible\nkeyword meaning and return it as an IDENT. (I'd originally supposed\nthat we'd want to return some new TYPEDEF token type, but that does\nnot seem to be necessary right now, and adding a new token type would\nincrease the patch footprint quite a bit.)\n\n2. In the var_type production, forget about ECPGColLabel[Common]\nand just handle the keywords we know we need, plus IDENT for the\ntypedef case. It turns out that we have to have duplicate coding\nbecause most of these keywords are not keywords in C lexing mode,\nso that they'll come through the IDENT path anyway when we're\nin a C rather than SQL context. That seemed acceptable to me.\nI thought about adding them all to the C keywords list but that\nseemed likely to have undesirable side-effects, and again it'd\nbloat the patch footprint.\n\nThis fix is not without downsides. Disabling recognition of\nkeywords that match typedefs means that, for example, if you\ndeclare a typedef named \"work\" then ECPG will fail to parse\n\"EXEC SQL BEGIN WORK\". So in a real sense this is just trading\none hazard for another. But there is an important difference:\nwith this, whether your ECPG program works depends only on what\ntypedef names and SQL commands are used in the program. If\nit compiles today it'll still compile next year, whereas with\nthe present implementation the addition of some new unreserved\nSQL keyword could break it. We'd have to document this change\nfor sure, and it wouldn't be something to back-patch, but it\nseems like it might be acceptable from the users' standpoint.\n\nWe could narrow (not eliminate) this hazard if we could get the\ntypedef lookup in pgc.l to happen only when we're about to parse\na var_type construct. But because of Bison's lookahead behavior,\nthat seems to be impossible, or at least undesirably messy\nand fragile. But perhaps somebody else will see a way.\n\nAnyway, this seems like too big a change to consider for v15,\nso I'll stick this patch into the v16 CF queue. It's only\ndraft quality anyway --- lacks documentation changes and test\ncases. There are also some coding points that could use review.\nNotably, I made the typedef lookup override SQL keywords but\nnot C keywords; this is for consistency with the C-mode lookup\nrules, but is it the right thing?\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 30 May 2022 17:20:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON functions vs. ECPG vs. STRING as a reserved word"
},
{
"msg_contents": "\nOn 2022-05-29 Su 16:19, Tom Lane wrote:\n> More generally, I feel like we have a process problem: there needs to\n> be a higher bar to adding new fully- or even partially-reserved words.\n> I might've missed it, but I don't recall that there was any discussion\n> of the compatibility implications of this change.\n>\n\nThanks for fixing this while I was away.\n\nI did in fact raise the issue on 1 Feb, see\n<https://postgr.es/m/f174a289-3274-569d-875c-2e810101df22@dunslane.net>,\nbut nobody responded that I recall. I guess I should have pushed the\ndiscussion further\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 31 May 2022 11:09:23 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON functions vs. ECPG vs. STRING as a reserved word"
},
{
"msg_contents": "On Mon, May 30, 2022 at 05:20:15PM -0400, Tom Lane wrote:\n\n[allow EXEC SQL TYPE unreserved_keyword IS ...]\n\n> 1. In pgc.l, if an identifier is a typedef name, ignore any possible\n> keyword meaning and return it as an IDENT. (I'd originally supposed\n> that we'd want to return some new TYPEDEF token type, but that does\n> not seem to be necessary right now, and adding a new token type would\n> increase the patch footprint quite a bit.)\n> \n> 2. In the var_type production, forget about ECPGColLabel[Common]\n> and just handle the keywords we know we need, plus IDENT for the\n> typedef case. It turns out that we have to have duplicate coding\n> because most of these keywords are not keywords in C lexing mode,\n> so that they'll come through the IDENT path anyway when we're\n> in a C rather than SQL context. That seemed acceptable to me.\n> I thought about adding them all to the C keywords list but that\n> seemed likely to have undesirable side-effects, and again it'd\n> bloat the patch footprint.\n> \n> This fix is not without downsides. Disabling recognition of\n> keywords that match typedefs means that, for example, if you\n> declare a typedef named \"work\" then ECPG will fail to parse\n> \"EXEC SQL BEGIN WORK\". So in a real sense this is just trading\n> one hazard for another. But there is an important difference:\n> with this, whether your ECPG program works depends only on what\n> typedef names and SQL commands are used in the program. If\n> it compiles today it'll still compile next year, whereas with\n> the present implementation the addition of some new unreserved\n> SQL keyword could break it. We'd have to document this change\n> for sure, and it wouldn't be something to back-patch, but it\n> seems like it might be acceptable from the users' standpoint.\n\nI agree this change is more likely to please a user than to harm a user. 
The\nuser benefit is slim, but the patch is also slim.\n\n> We could narrow (not eliminate) this hazard if we could get the\n> typedef lookup in pgc.l to happen only when we're about to parse\n> a var_type construct. But because of Bison's lookahead behavior,\n> that seems to be impossible, or at least undesirably messy\n> and fragile. But perhaps somebody else will see a way.\n\nI don't, though I'm not much of a Bison wizard.\n\n> Anyway, this seems like too big a change to consider for v15,\n> so I'll stick this patch into the v16 CF queue. It's only\n> draft quality anyway --- lacks documentation changes and test\n> cases. There are also some coding points that could use review.\n> Notably, I made the typedef lookup override SQL keywords but\n> not C keywords; this is for consistency with the C-mode lookup\n> rules, but is it the right thing?\n\nThat decision seems fine. ScanCKeywordLookup() covers just twenty-six\nkeywords, and that list hasn't changed since 2003. Moreover, most of them are\nkeywords of the C language itself, so allowing them would entailing mangling\nthem in the generated C to avoid C compiler errors. Given the lack of\ncomplaints, let's not go there.\n\nI didn't locate any problems beyond the test and doc gaps that you mentioned,\nso I've marked this Ready for Committer.\n\n\n",
"msg_date": "Sun, 3 Jul 2022 01:01:27 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON functions vs. ECPG vs. STRING as a reserved word"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Mon, May 30, 2022 at 05:20:15PM -0400, Tom Lane wrote:\n>> [allow EXEC SQL TYPE unreserved_keyword IS ...]\n\n> I didn't locate any problems beyond the test and doc gaps that you mentioned,\n> so I've marked this Ready for Committer.\n\nThanks! Here's a fleshed-out version with doc changes, plus adjustment\nof preproc/type.pgc so that it exposes the existing problem. (No code\nchanges from v1.) I'll push this in a few days if there are not\nobjections.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 03 Jul 2022 13:08:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON functions vs. ECPG vs. STRING as a reserved word"
}
] |
[
{
"msg_contents": "Hi,\n\n \n\nI tried it on PostgreSQL 13. If you use the Unicode Variation Selector and Combining Character\n\n, the base character and the Variation selector will be 2 in length. Since it will be one character on the display, we expect it to be one in length. Please provide a function corresponding to the unicode variasion selector. I hope It is supposed to be provided as an extension.\n\n \n\nThe functions that need to be supported are as follows:\n\n \n\nchar_length|character_length|substring|trim|btrim|left\n\n|length|lpad|ltrim|regexp_match|regexp_matches\n\n|regexp_replace|regexp_split_to_array|regexp_split_to_table\n\n|replace|reverse|right|rpad|rtrim|split_part|strpos|substr|starts_with\n\n \n\nBest regartds,\n\n\nHi, I tried it on PostgreSQL 13. If you use the Unicode Variation Selector and Combining Character, the base character and the Variation selector will be 2 in length. Since it will be one character on the display, we expect it to be one in length. Please provide a function corresponding to the unicode variasion selector. I hope It is supposed to be provided as an extension. The functions that need to be supported are as follows: char_length|character_length|substring|trim|btrim|left|length|lpad|ltrim|regexp_match|regexp_matches|regexp_replace|regexp_split_to_array|regexp_split_to_table|replace|reverse|right|rpad|rtrim|split_part|strpos|substr|starts_with Best regartds,",
"msg_date": "Mon, 30 May 2022 09:27:08 +0900",
"msg_from": "=?utf-8?B?6I2S5LqV5YWD5oiQ?= <n2029@ndensan.co.jp>",
"msg_from_op": true,
"msg_subject": "Unicode Variation Selector and Combining character"
},
{
"msg_contents": "On 30.05.22 02:27, 荒井元成 wrote:\n> I tried it on PostgreSQL 13. If you use the Unicode Variation Selector \n> and Combining Character\n> \n> , the base character and the Variation selector will be 2 in length. \n> Since it will be one character on the display, we expect it to be one in \n> length. Please provide a function corresponding to the unicode variasion \n> selector. I hope It is supposed to be provided as an extension.\n> \n> The functions that need to be supported are as follows:\n> \n> char_length|character_length|substring|trim|btrim|left\n> \n> |length|lpad|ltrim|regexp_match|regexp_matches\n> \n> |regexp_replace|regexp_split_to_array|regexp_split_to_table\n> \n> |replace|reverse|right|rpad|rtrim|split_part|strpos|substr|starts_with\n\nPlease show a test case of what you mean. For example,\n\nselect char_length(...) returns X but should return Y\n\nExamples with Unicode escapes (U&'\\NNNN...') would be the most robust.\n\n\n",
"msg_date": "Wed, 1 Jun 2022 07:27:00 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Unicode Variation Selector and Combining character"
},
{
"msg_contents": "Thank you for your reply.\n\n \n\nWe use IPAmj Mincho Font in the specifications of the Government of Japan.\n\nhttps://moji.or.jp/mojikiban/font/\n\n \n\n \n\nExsample)IVS\n\nI will attach an image.\n\n\n\n \n\n\n\n \n\nD209007=# select char_length(U&'\\+0066FE' || U&'\\+0E0103') ;\n\nchar_length\n\n-------------\n\n 2\n\n(1 行)\n\n \nI expect length 1.\n\n \n\n \n\n \n\nExsample)Combining Character\n\nI will attach an image.\n\n\n\n \n\n\n\n \n\nD209007=# select char_length(U&'\\+00304B' || U&'\\+00309A') ;\n\nchar_length\n\n-------------\n\n 2\n\n(1 行)\n\n \n\nI expect length 1.\n\n \n\n \n\nthank you.\n\n \n\n \n\n \n\n \n\n \n\n-----Original Message-----\nFrom: Peter Eisentraut <peter.eisentraut@enterprisedb.com> \nSent: Wednesday, June 1, 2022 2:27 PM\nTo: 荒井元成 <n2029@ndensan.co.jp>; pgsql-hackers@lists.postgresql.org\nSubject: Re: Unicode Variation Selector and Combining character\n\n \n\nOn 30.05.22 02:27, 荒井元成 wrote:\n\n> I tried it on PostgreSQL 13. If you use the Unicode Variation Selector \n\n> and Combining Character\n\n> \n\n> , the base character and the Variation selector will be 2 in length. \n\n> Since it will be one character on the display, we expect it to be one \n\n> in length. Please provide a function corresponding to the unicode \n\n> variasion selector. I hope It is supposed to be provided as an extension.\n\n> \n\n> The functions that need to be supported are as follows:\n\n> \n\n> char_length|character_length|substring|trim|btrim|left\n\n> \n\n> |length|lpad|ltrim|regexp_match|regexp_matches\n\n> \n\n> |regexp_replace|regexp_split_to_array|regexp_split_to_table\n\n> \n\n> |replace|reverse|right|rpad|rtrim|split_part|strpos|substr|starts_with\n\n \n\nPlease show a test case of what you mean. For example,\n\n \n\nselect char_length(...) returns X but should return Y\n\n \n\nExamples with Unicode escapes (U&'\\NNNN...') would be the most robust.",
"msg_date": "Wed, 1 Jun 2022 15:15:15 +0900",
"msg_from": "=?UTF-8?B?6I2S5LqV5YWD5oiQ?= <n2029@ndensan.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: Unicode Variation Selector and Combining character"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 6:15 PM 荒井元成 <n2029@ndensan.co.jp> wrote:\n> D209007=# select char_length(U&'\\+0066FE' || U&'\\+0E0103') ;\n> char_length\n> -------------\n> 2\n> (1 行)\n>\n> I expect length 1.\n\nNo opinion here, but I did happen to see Noriyoshi Shinoda's slides\nabout this topic a little while ago, comparing different databases:\n\nhttps://www.slideshare.net/noriyoshishinoda/postgresql-unconference-29-unicode-ivs\n\nIt's the same with Latin combining characters... we count the\nindividual codepoints of combining sequences:\n\npostgres=# select 'e' || U&'\\0301', length('e' || U&'\\0301');\n ?column? | length\n----------+--------\n é | 2\n(1 row)\n\n\n",
"msg_date": "Wed, 1 Jun 2022 19:09:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unicode Variation Selector and Combining character"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 7:09 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Jun 1, 2022 at 6:15 PM 荒井元成 <n2029@ndensan.co.jp> wrote:\n> > D209007=# select char_length(U&'\\+0066FE' || U&'\\+0E0103') ;\n> > char_length\n> > -------------\n> > 2\n> > (1 行)\n> >\n> > I expect length 1.\n>\n> No opinion here, but I did happen to see Noriyoshi Shinoda's slides\n> about this topic a little while ago, comparing different databases:\n>\n> https://www.slideshare.net/noriyoshishinoda/postgresql-unconference-29-unicode-ivs\n>\n> It's the same with Latin combining characters... we count the\n> individual codepoints of combining sequences:\n>\n> postgres=# select 'e' || U&'\\0301', length('e' || U&'\\0301');\n> ?column? | length\n> ----------+--------\n> é | 2\n> (1 row)\n\nLooking around a bit, it might be interesting to check if the\nicu_character_boundaries() function in Daniel Vérité's icu_ext treats\nIVSs as single grapheme clusters.\n\n\n",
"msg_date": "Wed, 1 Jun 2022 19:39:55 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unicode Variation Selector and Combining character"
},
{
"msg_contents": "\tThomas Munro wrote:\n\n> Looking around a bit, it might be interesting to check if the\n> icu_character_boundaries() function in Daniel Vérité's icu_ext treats\n> IVSs as single grapheme clusters.\n\nIt does.\n\nwith strings(s) as (\n values (U&'\\+0066FE' || U&'\\+0E0103'),\n\t(U&'\\+00304B' || U&'\\+00309A')\n)\nselect s,\n octet_length(s),\n char_length(s),\n (select count(*) from icu_character_boundaries(s,'en')) as graphemes\nfrom strings;\n\n\n s | octet_length | char_length | graphemes \n-----+--------------+-------------+-----------\n 曾󠄃 |\t\t7 |\t 2 |\t 1\n か゚ |\t\t 6 |\t 2 |\t 1\n\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 01 Jun 2022 11:45:47 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: Unicode Variation Selector and Combining character"
},
{
"msg_contents": "On 01.06.22 08:15, 荒井元成 wrote:\n> D209007=# select char_length(U&'\\+0066FE' || U&'\\+0E0103') ;\n> \n> char_length\n> \n> -------------\n> \n> 2\n> \n> (1 行)\n> \n> I expect length 1.\n\nThe char_length function is defined to return the length in characters, \nso 2 is the correct answer. What you appear to be looking for is length \nin glyphs or length in graphemes or display width, or something like \nthat. There is no built-in server side function for that.\n\nIt looks like psql is getting the display width wrong, but that's a \nseparate issue.\n\n\n",
"msg_date": "Wed, 1 Jun 2022 12:25:49 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Unicode Variation Selector and Combining character"
},
{
"msg_contents": "Thank you for your reply.\n\nI will check if there is any function below char_length that is realized by icu_ext.\n\nsubstring|trim|btrim|left\n|lpad|ltrim|regexp_match|regexp_matches\n|regexp_replace|regexp_split_to_array|regexp_split_to_table\n|replace|reverse|right|rpad|rtrim|split_part|strpos|substr|starts_with\n\n\nBest regards,\n\n-----Original Message-----\nFrom: Daniel Verite <daniel@manitou-mail.org> \nSent: Wednesday, June 1, 2022 6:46 PM\nTo: Thomas Munro <thomas.munro@gmail.com>\nCc: 荒井元成 <n2029@ndensan.co.jp>; Peter Eisentraut <peter.eisentraut@enterprisedb.com>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Unicode Variation Selector and Combining character\n\n\tThomas Munro wrote:\n\n> Looking around a bit, it might be interesting to check if the\n> icu_character_boundaries() function in Daniel Vérité's icu_ext treats \n> IVSs as single grapheme clusters.\n\nIt does.\n\nwith strings(s) as (\n values (U&'\\+0066FE' || U&'\\+0E0103'),\n\t(U&'\\+00304B' || U&'\\+00309A')\n)\nselect s,\n octet_length(s),\n char_length(s),\n (select count(*) from icu_character_boundaries(s,'en')) as graphemes from strings;\n\n\n s | octet_length | char_length | graphemes \n-----+--------------+-------------+-----------\n 曾󠄃 |\t\t7 |\t 2 |\t 1\n か゚ |\t\t 6 |\t 2 |\t 1\n\n\n\nBest regards,\n--\nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n\n\n",
"msg_date": "Wed, 1 Jun 2022 19:35:36 +0900",
"msg_from": "=?UTF-8?B?6I2S5LqV5YWD5oiQ?= <n2029@ndensan.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: Unicode Variation Selector and Combining character"
}
] |
[
{
"msg_contents": "While working on some patch, I saw the following error message when a\ntransaction ended successfully after a failed call to\nparse_and_validate_value().\n\nThe cause is ParseTzFile() returns leaving an open file descriptor\nunfreed in some error cases.\n\nThis happens only in a special case when the errors are ignored, but\nin principle the file descriptor should be released before exiting the\nfunction.\n\nI'm not sure it's worth fixing but the attached fixes that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 30 May 2022 17:37:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "ParseTzFile doesn't FreeFile on error"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> The cause is ParseTzFile() returns leaving an open file descriptor\n> unfreed in some error cases.\n> This happens only in a special case when the errors are ignored, but\n> in principle the file descriptor should be released before exiting the\n> function.\n> I'm not sure it's worth fixing but the attached fixes that.\n\nI agree this is worth fixing, but adding all these gotos seems a bit\ninelegant. What do you think of the attached version?\n\nBTW, my first thought about it was \"what if one of the callees throws\nelog(ERROR), eg palloc out-of-memory\"? But I think that's all right\nsince then we'll reach transaction abort cleanup, which won't whine\nabout open files. The problem is limited to the case where no error\ngets thrown.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 30 May 2022 13:11:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ParseTzFile doesn't FreeFile on error"
},
{
"msg_contents": "At Mon, 30 May 2022 13:11:04 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > The cause is ParseTzFile() returns leaving an open file descriptor\n> > unfreed in some error cases.\n> > This happens only in a special case when the errors are ignored, but\n> > in principle the file descriptor should be released before exiting the\n> > function.\n> > I'm not sure it's worth fixing but the attached fixes that.\n> \n> I agree this is worth fixing, but adding all these gotos seems a bit\n> inelegant. What do you think of the attached version?\n\nIt is what came up to me first. It is natural. So I'm fine with\nit. The point of the \"goto\"s was that repeated \"n = -1;break;\" looked\nsomewhat noisy to me in the loop.\n\n> BTW, my first thought about it was \"what if one of the callees throws\n> elog(ERROR), eg palloc out-of-memory\"? But I think that's all right\n> since then we'll reach transaction abort cleanup, which won't whine\n> about open files. The problem is limited to the case where no error\n> gets thrown.\n\nRight. This \"issue\" is not a problem unless the caller continues\nwithout throwing an exception after the function errors out, which is\nnot done by the current code.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 31 May 2022 09:22:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ParseTzFile doesn't FreeFile on error"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Mon, 30 May 2022 13:11:04 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>> BTW, my first thought about it was \"what if one of the callees throws\n>> elog(ERROR), eg palloc out-of-memory\"? But I think that's all right\n>> since then we'll reach transaction abort cleanup, which won't whine\n>> about open files. The problem is limited to the case where no error\n>> gets thrown.\n\n> Right. This \"issue\" is not a problem unless the caller continues\n> without throwing an exception after the function errors out, which is\n> not done by the current code.\n\nActually the problem *is* reachable, if you intentionally break the\nalready-active timezone abbreviation file: newly started sessions\nproduce file-leak warnings after failing to apply the setting.\nI concede that's not a likely scenario, but that's why I think it's\nworth fixing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 May 2022 14:21:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ParseTzFile doesn't FreeFile on error"
},
{
"msg_contents": "At Tue, 31 May 2022 14:21:28 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Actually the problem *is* reachable, if you intentionally break the\n> already-active timezone abbreviation file: newly started sessions\n> produce file-leak warnings after failing to apply the setting.\n> I concede that's not a likely scenario, but that's why I think it's\n> worth fixing.\n\nAh, I see. Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 01 Jun 2022 11:58:08 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ParseTzFile doesn't FreeFile on error"
}
] |
[
{
"msg_contents": "Hello world!\n\nFew years ago we had a thread with $subj [0]. A year ago Heikki put a lot of effort in improving GIN checks [1] while hunting a GIN bug.\nAnd in view of some releases with a recommendation to reindex anything that fails or lacks amcheck verification, I decided that I want to review the thread.\n\nPFA $subj incorporating all Heikki's improvements and restored GiST checks. Also I've added heapallindexed verification for GiST. I'm sure that we must add it for GIN too. Yet I do not know how to implement it. Maybe just check that every entry generated from heap present in entry tree? Or that every tids is present in the index?\n\nGiST verification does parent check despite taking only AccessShareLock. It's possible because when the key discrepancy is found we acquire parent tuple with lock coupling. I'm sure that this is correct to check keys this way. And I'm almost sure it will not deadlock, because split is doing the same locking.\n\nWhat do you think?\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/CAF3eApa07-BajjG8%2BRYx-Dr_cq28ZA0GsZmUQrGu5b2ayRhB5A%40mail.gmail.com\n[1] https://www.postgresql.org/message-id/flat/9fdbb584-1e10-6a55-ecc2-9ba8b5dca1cf%40iki.fi#fec2751faf1ca52495b0a61acc0f5532",
"msg_date": "Mon, 30 May 2022 14:40:06 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Amcheck verification of GiST and GIN"
},
{
"msg_contents": "> On 30 May 2022, at 12:40, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> What do you think?\n\nHi Andrey!\n\nHere's a version with better tests. I've made sure that GiST tests actually trigger page reuse after deletion. And enhanced comments in both GiST and GIN test scripts. I hope you'll like it.\n\nGIN heapallindexed is still a no-op check. Looking forward to hear any ideas on what it could be.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 22 Jun 2022 20:40:56 +0300",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 11:35 AM Andrey Borodin <x4mmm@yandex-team.ru>\nwrote:\n\n> > On 30 May 2022, at 12:40, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> >\n> > What do you think?\n>\n> Hi Andrey!\n>\n\nHi Andrey!\n\nSince you're talking to yourself, just wanted to support you – this is an\nimportant thing, definitely should be very useful for many projects; I hope\nto find time to test it in the next few days.\n\nThanks for working on it.\n\nOn Wed, Jun 22, 2022 at 11:35 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> On 30 May 2022, at 12:40, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> What do you think?\n\nHi Andrey!Hi Andrey!Since you're talking to yourself, just wanted to support you – this is an important thing, definitely should be very useful for many projects; I hope to find time to test it in the next few days. Thanks for working on it.",
"msg_date": "Wed, 22 Jun 2022 12:27:25 -0700",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "Hi,\n\nI think having amcheck for more indexes is great.\n\nOn 2022-06-22 20:40:56 +0300, Andrey Borodin wrote:\n>> diff --git a/contrib/amcheck/amcheck.c b/contrib/amcheck/amcheck.c\n> new file mode 100644\n> index 0000000000..7a222719dd\n> --- /dev/null\n> +++ b/contrib/amcheck/amcheck.c\n> @@ -0,0 +1,187 @@\n> +/*-------------------------------------------------------------------------\n> + *\n> + * amcheck.c\n> + *\t\tUtility functions common to all access methods.\n\nThis'd likely be easier to read if the reorganization were split into its own\ncommit.\n\nI'd also split gin / gist support. It's a large enough patch that that imo\nmakes reviewing easier.\n\n\n> +void amcheck_lock_relation_and_check(Oid indrelid, IndexCheckableCallback checkable,\n> +\t\t\t\t\t\t\t\t\t\t\t\tIndexDoCheckCallback check, LOCKMODE lockmode, void *state)\n\nMight be worth pgindenting - the void for function definitions (but not for\ndeclarations) is typically on its own line in PG code.\n\n\n> +static GistCheckState\n> +gist_init_heapallindexed(Relation rel)\n> +{\n> +\tint64\t\ttotal_pages;\n> +\tint64\t\ttotal_elems;\n> +\tuint64\t\tseed;\n> +\tGistCheckState result;\n> +\n> +\t/*\n> +\t* Size Bloom filter based on estimated number of tuples in index\n> +\t*/\n> +\ttotal_pages = RelationGetNumberOfBlocks(rel);\n> +\ttotal_elems = Max(total_pages * (MaxOffsetNumber / 5),\n> +\t\t\t\t\t\t(int64) rel->rd_rel->reltuples);\n> +\t/* Generate a random seed to avoid repetition */\n> +\tseed = pg_prng_uint64(&pg_global_prng_state);\n> +\t/* Create Bloom filter to fingerprint index */\n> +\tresult.filter = bloom_create(total_elems, maintenance_work_mem, seed);\n> +\n> +\t/*\n> +\t * Register our own snapshot\n> +\t */\n> +\tresult.snapshot = RegisterSnapshot(GetTransactionSnapshot());\n\nFWIW, comments like this, that just restate exactly what the code does, are\nimo not helpful. Also, there's a trailing space :)\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Jun 2022 16:29:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "\n\n> On 23 Jun 2022, at 00:27, Nikolay Samokhvalov <samokhvalov@gmail.com> wrote:\n> \n> Since you're talking to yourself, just wanted to support you – this is an important thing, definitely should be very useful for many projects; I hope to find time to test it in the next few days. \n\nThanks Nikolay!\n\n\n> On 23 Jun 2022, at 04:29, Andres Freund <andres@anarazel.de> wrote:\nThanks for looking into the patch, Andres!\n\n> On 2022-06-22 20:40:56 +0300, Andrey Borodin wrote:\n>>> diff --git a/contrib/amcheck/amcheck.c b/contrib/amcheck/amcheck.c\n>> new file mode 100644\n>> index 0000000000..7a222719dd\n>> --- /dev/null\n>> +++ b/contrib/amcheck/amcheck.c\n>> @@ -0,0 +1,187 @@\n>> +/*-------------------------------------------------------------------------\n>> + *\n>> + * amcheck.c\n>> + *\t\tUtility functions common to all access methods.\n> \n> This'd likely be easier to read if the reorganization were split into its own\n> commit.\n> \n> I'd also split gin / gist support. It's a large enough patch that that imo\n> makes reviewing easier.\nI will split the patch in 3 steps:\n1. extract generic functions to amcheck.c\n2. add gist functions\n3. add gin functions\nBut each this step is just adding few independent files + some lines to Makefile.\n\nI'll fix other notes too in the next version.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sun, 26 Jun 2022 00:10:11 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "> On 26 Jun 2022, at 00:10, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> I will split the patch in 3 steps:\n> 1. extract generic functions to amcheck.c\n> 2. add gist functions\n> 3. add gin functions\n> \n> I'll fix other notes too in the next version.\n\n\nDone. PFA attached patchset.\n\nThanks!\n\nBest regards, Andrey Borodin.",
"msg_date": "Sat, 23 Jul 2022 14:40:44 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "> On 23 Jul 2022, at 14:40, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> Done. PFA attached patchset.\n> \n> Best regards, Andrey Borodin.\n> <v12-0001-Refactor-amcheck-to-extract-common-locking-routi.patch><v12-0002-Add-gist_index_parent_check-function-to-verify-G.patch><v12-0003-Add-gin_index_parent_check-to-verify-GIN-index.patch>\n\nHere's v13. Changes:\n1. Fixed passing through downlink in GIN index\n2. Fixed GIN tests (one test case was not working)\n\nThanks to Vitaliy Kukharik for trying this patches.\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 17 Aug 2022 17:28:02 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-17 17:28:02 +0500, Andrey Borodin wrote:\n> Here's v13. Changes:\n> 1. Fixed passing through downlink in GIN index\n> 2. Fixed GIN tests (one test case was not working)\n> \n> Thanks to Vitaliy Kukharik for trying this patches.\n\nDue to the merge of the meson based build, this patch needs to be\nadjusted. See\nhttps://cirrus-ci.com/build/6637154947301376\n\nThe changes should be fairly simple, just mirroring the Makefile ones.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Sep 2022 08:19:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-22 08:19:09 -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-08-17 17:28:02 +0500, Andrey Borodin wrote:\n> > Here's v13. Changes:\n> > 1. Fixed passing through downlink in GIN index\n> > 2. Fixed GIN tests (one test case was not working)\n> > \n> > Thanks to Vitaliy Kukharik for trying this patches.\n> \n> Due to the merge of the meson based build, this patch needs to be\n> adjusted. See\n> https://cirrus-ci.com/build/6637154947301376\n> \n> The changes should be fairly simple, just mirroring the Makefile ones.\n\nHere's an updated patch adding meson compat.\n\nI didn't fix the following warnings:\n\n[25/28 3 89%] Compiling C object contrib/amcheck/amcheck.dll.p/amcheck.c.obj\n../../home/andres/src/postgresql/contrib/amcheck/amcheck.c: In function ‘amcheck_lock_relation_and_check’:\n../../home/andres/src/postgresql/contrib/amcheck/amcheck.c:81:20: warning: implicit declaration of function ‘NewGUCNestLevel’ [-Wimplicit-function-declaration]\n 81 | save_nestlevel = NewGUCNestLevel();\n | ^~~~~~~~~~~~~~~\n../../home/andres/src/postgresql/contrib/amcheck/amcheck.c:124:2: warning: implicit declaration of function ‘AtEOXact_GUC’; did you mean ‘AtEOXact_SMgr’? [-Wimplicit-function-declaration]\n 124 | AtEOXact_GUC(false, save_nestlevel);\n | ^~~~~~~~~~~~\n | AtEOXact_SMgr\n[26/28 2 92%] Compiling C object contrib/amcheck/amcheck.dll.p/verify_gin.c.obj\n../../home/andres/src/postgresql/contrib/amcheck/verify_gin.c: In function ‘gin_check_parent_keys_consistency’:\n../../home/andres/src/postgresql/contrib/amcheck/verify_gin.c:423:8: warning: unused variable ‘heapallindexed’ [-Wunused-variable]\n 423 | bool heapallindexed = *((bool*)callback_state);\n | ^~~~~~~~~~~~~~\n[28/28 1 100%] Linking target contrib/amcheck/amcheck.dll\n\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 2 Oct 2022 00:12:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Sun, Oct 2, 2022 at 12:12 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Here's an updated patch adding meson compat.\n\nThank you, Andres! Here's one more rebase (something was adjusted in\namcheck build).\nAlso I've fixed new warnings except warning about absent\nheapallindexed for GIN. It's a TODO.\n\nThanks!\n\nBest regards, Andrey Borodin.",
"msg_date": "Sat, 8 Oct 2022 15:36:52 -0700",
"msg_from": "Andrew Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "Hello.\n\nI reviewed this patch and I would like to share some comments.\n\nIt compiled with those 2 warnings:\n\nverify_gin.c: In function 'gin_check_parent_keys_consistency':\nverify_gin.c:481:38: warning: declaration of 'maxoff' shadows a previous \nlocal [-Wshadow=compatible-local]\n 481 | OffsetNumber maxoff = \nPageGetMaxOffsetNumber(page);\n | ^~~~~~\nverify_gin.c:453:41: note: shadowed declaration is here\n 453 | maxoff;\n | ^~~~~~\nverify_gin.c:423:25: warning: unused variable 'heapallindexed' \n[-Wunused-variable]\n 423 | bool heapallindexed = *((bool*)callback_state);\n | ^~~~~~~~~~~~~~\n\n\nAlso, I'm not sure about postgres' headers conventions, inside amcheck.h, \nthere is \"miscadmin.h\" included, and inside verify_gin.c, verify_gist.h \nand verify_nbtree.c both amcheck.h and miscadmin.h are included.\n\nAbout the documentation, the bt_index_parent_check has comments about the \nShareLock and \"SET client_min_messages = DEBUG1;\", and both \ngist_index_parent_check and gin_index_parent_check lack it. verify_gin \nuses DEBUG3, I'm not sure if it is on purpose, but it would be nice to \ndocument it or put DEBUG1 to be consistent.\n\nI lack enough context to do a deep review on the code, so in this area \nthis patch needs more eyes.\n\nI did the following test:\n\npostgres=# create table teste (t text, tv tsvector);\nCREATE TABLE\npostgres=# insert into teste values ('hello', 'hello'::tsvector);\nINSERT 0 1\npostgres=# create index teste_tv on teste using gist(tv);\nCREATE INDEX\npostgres=# select pg_relation_filepath('teste_tv');\n pg_relation_filepath\n----------------------\n base/5/16441\n(1 row)\n\npostgres=#\n\\q\n$ bin/pg_ctl -D data -l log\nwaiting for server to shut down.... done\nserver stopped\n$ okteta base/5/16441 # I couldn't figure out the dd syntax to change the \n1FE9 to '0'\n$ bin/pg_ctl -D data -l log\nwaiting for server to start.... 
done\nserver started\n$ bin/psql -U ze postgres\npsql (16devel)\nType \"help\" for help.\n\npostgres=# SET client_min_messages = DEBUG3;\nSET\npostgres=# select gist_index_parent_check('teste_tv'::regclass, true);\nDEBUG: verifying that tuples from index \"teste_tv\" are present in \"teste\"\nERROR: heap tuple (0,1) from table \"teste\" lacks matching index tuple \nwithin index \"teste_tv\"\npostgres=#\n\nA simple index corruption in gin:\n\npostgres=# CREATE TABLE \"gin_check\"(\"Column1\" int[]);\nCREATE TABLE\npostgres=# insert into gin_check values (ARRAY[1]),(ARRAY[2]);\nINSERT 0 2\npostgres=# CREATE INDEX gin_check_idx on \"gin_check\" USING GIN(\"Column1\");\nCREATE INDEX\npostgres=# select pg_relation_filepath('gin_check_idx');\n pg_relation_filepath\n----------------------\n base/5/16453\n(1 row)\n\npostgres=#\n\\q\n$ bin/pg_ctl -D data -l logfile stop\nwaiting for server to shut down.... done\nserver stopped\n$ okteta data/base/5/16453 # edited some bits near 3FCC\n$ bin/pg_ctl -D data -l logfile start\nwaiting for server to start.... done\nserver started\n$ bin/psql -U ze postgres\npsql (16devel)\nType \"help\" for help.\n\npostgres=# SET client_min_messages = DEBUG3;\nSET\npostgres=# SELECT gin_index_parent_check('gin_check_idx', true);\nERROR: number of items mismatch in GIN entry tuple, 49 in tuple header, 1 \ndecoded\npostgres=#\n\nThere are more code paths to follow to check the entire code, and I had a \nhard time to corrupt the indices. Is there any automated code to corrupt \nindex to test such code?\n\n\n--\nJose Arthur Benetasso Villanova\n\n\n\n",
"msg_date": "Thu, 24 Nov 2022 23:04:37 -0300 (-03)",
"msg_from": "Jose Arthur Benetasso Villanova <jose.arthur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "Hello!\n\nThank you for the review!\n\nOn Thu, Nov 24, 2022 at 6:04 PM Jose Arthur Benetasso Villanova\n<jose.arthur@gmail.com> wrote:\n>\n> It compiled with those 2 warnings:\n>\n> verify_gin.c: In function 'gin_check_parent_keys_consistency':\n> verify_gin.c:481:38: warning: declaration of 'maxoff' shadows a previous\n> local [-Wshadow=compatible-local]\n> 481 | OffsetNumber maxoff =\n> PageGetMaxOffsetNumber(page);\n> | ^~~~~~\n> verify_gin.c:453:41: note: shadowed declaration is here\n> 453 | maxoff;\n> | ^~~~~~\n> verify_gin.c:423:25: warning: unused variable 'heapallindexed'\n> [-Wunused-variable]\n\nFixed.\n\n> 423 | bool heapallindexed = *((bool*)callback_state);\n> | ^~~~~~~~~~~~~~\n>\n\nThis one is in progress yet, heapallindexed check is not implemented yet...\n\n\n>\n> Also, I'm not sure about postgres' headers conventions, inside amcheck.h,\n> there is \"miscadmin.h\" included, and inside verify_gin.c, verify_gist.h\n> and verify_nbtree.c both amcheck.h and miscadmin.h are included.\nFixed.\n\n>\n> About the documentation, the bt_index_parent_check has comments about the\n> ShareLock and \"SET client_min_messages = DEBUG1;\", and both\n> gist_index_parent_check and gin_index_parent_check lack it. verify_gin\n> uses DEBUG3, I'm not sure if it is on purpose, but it would be nice to\n> document it or put DEBUG1 to be consistent.\nGiST and GIN verifications do not take ShareLock for parent checks.\nB-tree check cannot verify cross-level invariants between levels when\nthe index is changing.\n\nGiST verification checks only one invariant that can be verified if\npage locks acquired the same way as page split does.\nGIN does not require ShareLock because it does not check cross-level invariants.\n\nReporting progress with DEBUG1 is a good idea, I did not know that\nthis feature exists. I'll implement something similar in following\nversions.\n\n> I did the following test:\n\nCool! 
Thank you!\n\n>\n> There are more code paths to follow to check the entire code, and I had a\n> hard time to corrupt the indices. Is there any automated code to corrupt\n> index to test such code?\n>\n\nHeapam tests do this in an automated way, look into this file\nt/001_verify_heapam.pl.\nSurely we can write these tests. At least automate what you have just\ndone in the review. However, committing similar checks is a very\ntedious work: something will inevitably turn buildfarm red as a\nwatermelon.\n\nI hope I'll post a version with DEBUG1 reporting and heapallindexed soon.\nPFA current state.\nThank you for looking into this!\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 27 Nov 2022 13:29:18 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Sun, Nov 27, 2022 at 1:29 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n>\n> GiST verification checks only one invariant that can be verified if\n> page locks acquired the same way as page split does.\n> GIN does not require ShareLock because it does not check cross-level invariants.\n>\n\nI was wrong. GIN check does similar gin_refind_parent() to lock pages\nin bottom-up manner and truly verify downlink-child_page invariant.\n\nHere's v17. The only difference is that I added progress reporting to\nGiST verification.\nI still did not implement heapallindexed for GIN. Existence of pending\nlists makes this just too difficult for a weekend coding project :(\n\nThank you!\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 27 Nov 2022 17:07:40 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "\nOn Sun, 27 Nov 2022, Andrey Borodin wrote:\n\n> On Sun, Nov 27, 2022 at 1:29 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n>>\n> I was wrong. GIN check does similar gin_refind_parent() to lock pages\n> in bottom-up manner and truly verify downlink-child_page invariant.\n\nDoes this mean that we need the adjustment in docs?\n\n> Here's v17. The only difference is that I added progress reporting to\n> GiST verification.\n> I still did not implement heapallindexed for GIN. Existence of pending\n> lists makes this just too difficult for a weekend coding project :(\n>\n> Thank you!\n>\n> Best regards, Andrey Borodin.\n>\n\nI'm a bit lost here. I tried your patch again and indeed the \nheapallindexed inside gin_check_parent_keys_consistency has a TODO \ncomment, but it's unclear to me if you are going to implement it or if the \npatch \"needs review\". Right now it's \"Waiting on Author\".\n\n-- \nJose Arthur Benetasso Villanova\n\n\n\n",
"msg_date": "Wed, 14 Dec 2022 09:18:44 -0300 (-03)",
"msg_from": "Jose Arthur Benetasso Villanova <jose.arthur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 7:19 AM Jose Arthur Benetasso Villanova\n<jose.arthur@gmail.com> wrote:\n> I'm a bit lost here. I tried your patch again and indeed the\n> heapallindexed inside gin_check_parent_keys_consistency has a TODO\n> comment, but it's unclear to me if you are going to implement it or if the\n> patch \"needs review\". Right now it's \"Waiting on Author\".\n\nFWIW, I don't think there's a hard requirement that every index AM\nneeds to support the same set of amcheck options. Where it makes sense\nand can be done in a reasonably straightforward manner, we should. But\nsometimes that may not be the case, and that seems fine, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Dec 2022 12:25:17 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "Hi Jose, thank you for review and sorry for so long delay to answer.\n\nOn Wed, Dec 14, 2022 at 4:19 AM Jose Arthur Benetasso Villanova\n<jose.arthur@gmail.com> wrote:\n>\n>\n> On Sun, 27 Nov 2022, Andrey Borodin wrote:\n>\n> > On Sun, Nov 27, 2022 at 1:29 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n> >>\n> > I was wrong. GIN check does similar gin_refind_parent() to lock pages\n> > in bottom-up manner and truly verify downlink-child_page invariant.\n>\n> Does this mean that we need the adjustment in docs?\nIt seems to me that gin_index_parent_check() docs are correct.\n\n>\n> > Here's v17. The only difference is that I added progress reporting to\n> > GiST verification.\n> > I still did not implement heapallindexed for GIN. Existence of pending\n> > lists makes this just too difficult for a weekend coding project :(\n> >\n> > Thank you!\n> >\n> > Best regards, Andrey Borodin.\n> >\n>\n> I'm a bit lost here. I tried your patch again and indeed the\n> heapallindexed inside gin_check_parent_keys_consistency has a TODO\n> comment, but it's unclear to me if you are going to implement it or if the\n> patch \"needs review\". Right now it's \"Waiting on Author\".\n>\n\nPlease find the attached new version. In this patchset heapallindexed\nflag is removed from GIN checks.\n\nThank you!\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 8 Jan 2023 20:05:25 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Sun, Jan 8, 2023 at 8:05 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n>\n> Please find the attached new version. In this patchset heapallindexed\n> flag is removed from GIN checks.\n>\nUh... sorry, git-formatted wrong branch.\nHere's the correct version. Double checked.\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 8 Jan 2023 20:08:05 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "\nOn Sun, 8 Jan 2023, Andrey Borodin wrote:\n\n> On Sun, Jan 8, 2023 at 8:05 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n>>\n>> Please find the attached new version. In this patchset heapallindexed\n>> flag is removed from GIN checks.\n>>\n> Uh... sorry, git-formatted wrong branch.\n> Here's the correct version. Double checked.\n>\n\nHello again.\n\nI applied the patch without errors / warnings and did the same tests. All \nworking as expected.\n\nThe only thing that I found is the gin_index_parent_check function in docs \nstill references the \"gin_index_parent_check(index regclass, \nheapallindexed boolean) returns void\"\n\n--\nJose Arthur Benetasso Villanova\n\n\n",
"msg_date": "Fri, 13 Jan 2023 08:46:45 -0300 (-03)",
"msg_from": "Jose Arthur Benetasso Villanova <jose.arthur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 3:46 AM Jose Arthur Benetasso Villanova\n<jose.arthur@gmail.com> wrote:\n>\n> The only thing that I found is the gin_index_parent_check function in docs\n> still references the \"gin_index_parent_check(index regclass,\n> heapallindexed boolean) returns void\"\n>\n\nCorrect! Please find the attached fixed version.\n\nThank you!\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 13 Jan 2023 16:18:23 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "\nOn Fri, 13 Jan 2023, Andrey Borodin wrote:\n\n> On Fri, Jan 13, 2023 at 3:46 AM Jose Arthur Benetasso Villanova\n> <jose.arthur@gmail.com> wrote:\n>>\n>> The only thing that I found is the gin_index_parent_check function in docs\n>> still references the \"gin_index_parent_check(index regclass,\n>> heapallindexed boolean) returns void\"\n>>\n>\n> Correct! Please find the attached fixed version.\n>\n> Thank you!\n>\n> Best regards, Andrey Borodin.\n>\n\nHello again. I see the change. Thanks\n\n--\nJose Arthur Benetasso Villanova\n\n\n",
"msg_date": "Sat, 14 Jan 2023 00:34:38 -0300 (-03)",
"msg_from": "Jose Arthur Benetasso Villanova <jose.arthur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 7:35 PM Jose Arthur Benetasso Villanova\n<jose.arthur@gmail.com> wrote:\n>\n> Hello again. I see the change. Thanks\n>\n\nThanks! I also found out that there was a CI complaint about amcheck.h\nnot including some necessary stuff. Here's a version with a fix for\nthat.\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 13 Jan 2023 20:14:46 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "Hi Andrey,\n\n> Thanks! I also found out that there was a CI complaint about amcheck.h\n> not including some necessary stuff. Here's a version with a fix for\n> that.\n\nThanks for the updated patchset.\n\nOne little nitpick I have is that the tests cover only cases when all\nthe checks pass successfully. The tests don't show that the checks\nwill fail if the indexes are corrupted. Usually we check this as well,\nsee bd807be6 and other amcheck replated patches and commits.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 30 Jan 2023 16:38:03 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 8:15 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n> (v21 of patch series)\n\nI can see why the refactoring patch is necessary overall, but I have\nsome concerns about the details. More specifically:\n\n* PageGetItemIdCareful() doesn't seem like it needs to be moved to\namcheck.c and generalized to work with GIN and GiST.\n\nIt seems better to just allow some redundancy, by having static/local\nversions of PageGetItemIdCareful() for both GIN and GiST. There are\nnumerous reasons why that seems better to me. For one thing it's\nsimpler. For another, the requirements are already a bit different,\nand may become more different in the future. I have seriously\nconsidered adding a new PageGetItemCareful() routine to nbtree in the\npast (which would work along similar lines when we access\nIndexTuples), which would have to be quite different across each index\nAM. Maybe this idea of adding a PageGetItemCareful() would totally\nsupersede the existing PageGetItemIdCareful() function.\n\nBut even now, without any of that, the rules for\nPageGetItemIdCareful() are already different. For example, with GIN\nyou cannot have LP_DEAD bits set, so ISTM that you should be checking\nfor that in its own custom version of PageGetItemIdCareful().\n\nYou can just have comments that refer the reader to the original\nnbtree version of PageGetItemIdCareful() for a high level overview.\n\n* You have distinct versions of the current btree_index_checkable()\nfunction for both GIN and GiST, which doesn't seem necessary to me --\nso this is kind of the opposite of the situation with\nPageGetItemIdCareful() IMV.\n\nThe only reason to have separate versions of these is to detect when\nthe wrong index AM is used -- the other 2 checks are 100% common to\nall index AMs. 
Why not just move that one non-generic check out of the\nfunction, to each respective index AM .c file, while keeping the other\n2 generic checks in amcheck.c?\n\nOnce things are structured this way, it would then make sense to add a\ncan't-be-LP_DEAD check to the GIN specific version of\nPageGetItemIdCareful().\n\nI also have some questions about the verification functionality itself:\n\n* Why haven't you done something like palloc_btree_page() for both\nGiST and GIN, and use that for everything?\n\nObviously this may not be possible in100% of all cases -- even\nverify_nbtree.c doesn't manage that. But I see no reason for that\nhere. Though, in general, it's not exactly clear what's going on with\nbuffer lock coupling in general.\n\n* Why does gin_refind_parent() buffer lock the parent while the child\nbuffer lock remains held?\n\nIn any case this doesn't really need to have any buffer lock coupling.\nSince you're both of the new verification functions you're adding are\n\"parent\" variants, that acquire a ShareLock to block concurrent\nmodifications and concurrent VACUUM?\n\n* Oh wait, they don't use a ShareLock at all -- they use an\nAccessShareLock. This means that there are significant inconsistencies\nwith the verify_nbtree.c scheme.\n\nI now realize that gist_index_parent_check() and\ngin_index_parent_check() are actually much closer to bt_index_check()\nthan to bt_index_parent_check(). I think that you should stick with\nthe convention of using the word \"parent\" whenever we'll need a\nShareLock, and omitting \"parent\" whenever we will only require an\nAccessShareLock. I'm not sure if that means that you should change the\nlock strength or change the name of the functions. I am sure that you\nshould follow the general convention that we have already.\n\nI feel rather pessimistic about our ability to get all the details\nright with GIN. Frankly I have serious doubts that GIN itself gets\neverything right, which makes our task just about impossible. 
The GIN\nREADME did gain a \"Concurrency\" section in 2019, at my behest, but in\ngeneral the locking protocols are still chronically under-documented,\nand have been revised in various ways as a response to bugs. So at\nleast in the case of GIN, we really need amcheck coverage, but should\ntake a very conservative approach.\n\nWith GIN I think that we need to make the most modest possible\nassumptions about concurrency, by using a ShareLock. Without that, I\nthink that we can have very little confidence in the verification\nchecks -- the concurrency rules are just too complicated right now.\nMaybe it will be possible in the future, but right now I'd rather not\ntry that. I find it very difficult to figure out the GIN locking\nprotocol, even for things that seem like they should be quite\nstraightforward. This situation would be totally unthinkable in\nnbtree, and perhaps with GiST.\n\n* Why does the GIN patch change a comment in contrib/amcheck/amcheck.c?\n\n* There is no pg_amcheck patch here, but I think that there should be,\nsince that is now the preferred and recommended way to run amcheck in\ngeneral.\n\nWe could probably do something very similar to what is already there\nfor nbtree. Maybe it would make sense to change --heapallindexed and\n--parent-check so that they call your parent check functions for GiST\nand GIN -- though the locking/naming situation must be resolved before\nwe decide what to do here, for pg_amcheck.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 2 Feb 2023 11:51:45 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Thu, Feb 2, 2023 at 11:51 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I also have some questions about the verification functionality itself:\n\nI forgot to include another big concern here:\n\n* Why are there only WARNINGs, never ERRORs here?\n\nIt's far more likely that you'll run into problems when running\namcheck this way. I understand that the heapam checks can do that, but\nthat is both more useful, and less risky. With heapam we're not\ntraversing a tree structure in logical/keyspace order. I'm not\nclaiming that this approach is impossible; just that it doesn't seem\neven remotely worth it. Indexes are never supposed to be corrupt, but\nif they are corrupt the solution always involves a REINDEX. You never\ntry to recover the data from an index, since it's redundant and less\nauthoritative, almost by definition (at least in Postgres).\n\nBy far the most important piece of information is that an index has\nsome non-zero amount of corruption. Any amount of corruption is\nsupposed to be extremely surprising. It's kind of like if you see one\ncockroach in your home. The problem is not that you have one cockroach\nin your home; the problem is that you simply have cockroaches. We can\nall agree that in some abstract sense, fewer cockroaches is better.\nBut that doesn't seem to have any practical relevance -- it's a purely\ntheoretical point. It doesn't really affect what you do about the\nproblem at that point.\n\nAdmittedly there is some value in seeing multiple WARNINGs to true\nexperts that are performing some kind of forensic analysis, but that\ndoesn't seem worth it to me -- I'm an expert, and I don't think that\nI'd do it this way for any reason other than it being more convenient\nas a way to get information about a system that I don't have access\nto. 
Even then, I think that I'd probably have serious doubts about\nmost of the extra information that I'd get, since it might very well\nbe a downstream consequence of the same basic problem.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 2 Feb 2023 12:15:32 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Thu, Feb 2, 2023 at 12:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Thu, Feb 2, 2023 at 11:51 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n...\n\n> Admittedly there is some value in seeing multiple WARNINGs to true\n> experts that are performing some kind of forensic analysis, but that\n> doesn't seem worth it to me -- I'm an expert, and I don't think that\n> I'd do it this way for any reason other than it being more convenient\n> as a way to get information about a system that I don't have access\n> to. Even then, I think that I'd probably have serious doubts about\n> most of the extra information that I'd get, since it might very well\n> be a downstream consequence of the same basic problem.\n>\n...\n\nI understand your thoughts (I think) and agree with them, but at least one\nscenario where I do want to see *all* errors is corruption prevention –\nrunning\namcheck in lower environments, not in production, to predict and prevent\nissues.\nFor example, not long ago, Ubuntu 16.04 became EOL (in phases), and people\nneeded to upgrade, with glibc version change. It was quite good to use\namcheck\non production clones (running on a new OS/glibc) to identify all indexes\nthat\nneed to be rebuilt. Being able to see only one of them would be very\ninconvenient. Rebuilding all indexes didn't seem a good idea in the case of\nlarge databases.\n\nOn Thu, Feb 2, 2023 at 12:15 PM Peter Geoghegan <pg@bowt.ie> wrote:On Thu, Feb 2, 2023 at 11:51 AM Peter Geoghegan <pg@bowt.ie> wrote:... \nAdmittedly there is some value in seeing multiple WARNINGs to true\nexperts that are performing some kind of forensic analysis, but that\ndoesn't seem worth it to me -- I'm an expert, and I don't think that\nI'd do it this way for any reason other than it being more convenient\nas a way to get information about a system that I don't have access\nto. 
Even then, I think that I'd probably have serious doubts about\nmost of the extra information that I'd get, since it might very well\nbe a downstream consequence of the same basic problem....I understand your thoughts (I think) and agree with them, but at least onescenario where I do want to see *all* errors is corruption prevention – runningamcheck in lower environments, not in production, to predict and prevent issues.For example, not long ago, Ubuntu 16.04 became EOL (in phases), and peopleneeded to upgrade, with glibc version change. It was quite good to use amcheckon production clones (running on a new OS/glibc) to identify all indexes thatneed to be rebuilt. Being able to see only one of them would be veryinconvenient. Rebuilding all indexes didn't seem a good idea in the case oflarge databases.",
"msg_date": "Thu, 2 Feb 2023 12:31:33 -0800",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Thu, Feb 2, 2023 at 12:31 PM Nikolay Samokhvalov\n<samokhvalov@gmail.com> wrote:\n> I understand your thoughts (I think) and agree with them, but at least one\n> scenario where I do want to see *all* errors is corruption prevention – running\n> amcheck in lower environments, not in production, to predict and prevent issues.\n> For example, not long ago, Ubuntu 16.04 became EOL (in phases), and people\n> needed to upgrade, with glibc version change. It was quite good to use amcheck\n> on production clones (running on a new OS/glibc) to identify all indexes that\n> need to be rebuilt. Being able to see only one of them would be very\n> inconvenient. Rebuilding all indexes didn't seem a good idea in the case of\n> large databases.\n\nI agree that this matters at the level of whole indexes. That is, if\nyou want to check every index in the database, it is unhelpful if the\nwhole process stops just because one individual index has corruption.\nAny extra information about the index that is corrupt may not be all\nthat valuable, but information about other indexes remains almost as\nvaluable.\n\nI think that that problem should be solved at a higher level, in the\nprogram that runs amcheck. Note that pg_amcheck will already do this\nfor B-Tree indexes. While verify_nbtree.c won't try to limp on with an\nindex that is known to be corrupt, pg_amcheck will continue with other\nindexes.\n\nWe should add a \"Tip\" to the amcheck documentation on 14+ about this.\nWe should clearly advise users that they should probably just use\npg_amcheck. Using the SQL interface directly should now mostly be\nsomething that only a tiny minority of experts need to do -- and even\nthe experts won't do it that way unless they have a good reason to.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 2 Feb 2023 12:42:52 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Thu, Feb 2, 2023 at 12:43 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> I agree that this matters at the level of whole indexes.\n>\n\nI already realized my mistake – indeed, having multiple errors for 1 index\ndoesn't seem to be super practically helpful.\n\n\n> I think that that problem should be solved at a higher level, in the\n> program that runs amcheck. Note that pg_amcheck will already do this\n> for B-Tree indexes.\n>\n\nThat's a great tool, and it's great it supports parallelization, very useful\non large machines.\n\n\n> We should add a \"Tip\" to the amcheck documentation on 14+ about this.\n> We should clearly advise users that they should probably just use\n> pg_amcheck.\n\n\nand with -j$N, with high $N (unless it's production)\n\nOn Thu, Feb 2, 2023 at 12:43 PM Peter Geoghegan <pg@bowt.ie> wrote:I agree that this matters at the level of whole indexes. I already realized my mistake – indeed, having multiple errors for 1 indexdoesn't seem to be super practically helpful. \n\nI think that that problem should be solved at a higher level, in the\nprogram that runs amcheck. Note that pg_amcheck will already do this\nfor B-Tree indexes.That's a great tool, and it's great it supports parallelization, very usefulon large machines. \nWe should add a \"Tip\" to the amcheck documentation on 14+ about this.\nWe should clearly advise users that they should probably just use\npg_amcheck. and with -j$N, with high $N (unless it's production)",
"msg_date": "Thu, 2 Feb 2023 12:56:47 -0800",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Thu, Feb 2, 2023 at 12:56 PM Nikolay Samokhvalov\n<samokhvalov@gmail.com> wrote:\n> I already realized my mistake – indeed, having multiple errors for 1 index\n> doesn't seem to be super practically helpful.\n\nI wouldn't mind supporting it if the cost wasn't too high. But I\nbelieve that it's not a good trade-off.\n\n>> I think that that problem should be solved at a higher level, in the\n>> program that runs amcheck. Note that pg_amcheck will already do this\n>> for B-Tree indexes.\n>\n>\n> That's a great tool, and it's great it supports parallelization, very useful\n> on large machines.\n\nAnother big advantage of just using pg_amcheck is that running each\nindex verification in a standalone query avoids needlessly holding the\nsame MVCC snapshot across all indexes verified (compared to running\none big SQL query that verifies multiple indexes). As simple as\npg_amcheck's approach is (it's doing nothing that you couldn't\nreplicate in a shell script), in practice that its standardized\napproach probably makes things a lot smoother, especially in terms of\nhow VACUUM is impacted.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 2 Feb 2023 15:16:32 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Thu, Feb 2, 2023 at 12:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> * Why are there only WARNINGs, never ERRORs here?\n\nAttached revision v22 switches all of the WARNINGs over to ERRORs. It\nhas also been re-indented, and now uses a non-generic version of\nPageGetItemIdCareful() in both verify_gin.c and verify_gist.c.\nObviously this isn't a big set of revisions, but I thought that Andrey\nwould appreciate it if I posted this much now. I haven't thought much\nmore about the locking stuff, which is my main concern for now.\n\nWho are the authors of the patch, in full? At some point we'll need to\nget the attribution right if this is going to be committed.\n\nI think that it would be good to add some comments explaining the high\nlevel control flow. Is the verification process driven by a\nbreadth-first search, or a depth-first search, or something else?\n\nI think that we should focus on getting the GiST patch into shape for\ncommit first, since that seems easier.\n\n-- \nPeter Geoghegan",
"msg_date": "Fri, 3 Feb 2023 18:49:50 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "Thanks for working on this, Peter!\n\nOn Fri, Feb 3, 2023 at 6:50 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> I think that we should focus on getting the GiST patch into shape for\n> commit first, since that seems easier.\n>\n\nHere's the next version. I've focused on the GiST part in this revision.\nChanges:\n1. Refactored index_checkable so that it is shared between all AMs.\n2. Renamed gist_index_parent_check -> gist_index_check\n3. Gathered reviewers (in no particular order). I hope I didn't forget\nanyone. The GIN patch is based on work by Grigory Kryachko, but\nessentially rewritten by Heikki. Somewhat cosmetically whacked by me.\n4. Extended comments for GistScanItem,\ngist_check_parent_keys_consistency() and gist_refind_parent().\n\nI tried adding support for GiST in pg_amcheck, but it is largely\nassuming the relation is either heap or B-tree. I hope to do that part\ntomorrow or in the near future.\n\nHere's the current version. Thank you!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Sat, 4 Feb 2023 13:37:29 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Sat, Feb 4, 2023 at 1:37 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n>\n> I tried adding support of GiST in pg_amcheck, but it is largely\n> assuming the relation is either heap or B-tree. I hope to do that part\n> tomorrow or in nearest future.\n>\n\nHere's v24 == (v23 + a step for pg_amcheck). There's a lot of\nshotgun-style changes, but I hope next index types will be easy to add\nnow.\n\nAdding Mark to cc, just in case.\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 5 Feb 2023 16:44:53 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "Hi,\n\nOn Thu, Feb 02, 2023 at 12:56:47PM -0800, Nikolay Samokhvalov wrote:\n> On Thu, Feb 2, 2023 at 12:43 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I think that that problem should be solved at a higher level, in the\n> > program that runs amcheck. Note that pg_amcheck will already do this\n> > for B-Tree indexes.\n> \n> That's a great tool, and it's great it supports parallelization, very useful\n> on large machines.\n\nRight, but unfortunately not an option on managed services. It's clear\nthat this restriction should not be a general guideline for Postgres\ndevelopment, but it makes the amcheck extension (that is now shipped\neverywhere due to being in-core, I believe) somewhat less useful for\nthe use case of checking your whole database for corruption.\n\n\nMichael\n\n\n",
"msg_date": "Wed, 22 Feb 2023 09:51:32 +0100",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Sun, Feb 5, 2023 at 4:45 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n> Here's v24 == (v23 + a step for pg_amcheck). There's a lot of\n> shotgun-style changes, but I hope next index types will be easy to add\n> now.\n\nSome feedback on the GiST patch:\n\n* You forgot to initialize GistCheckState.heaptuplespresent to 0.\n\nIt might be better to allocate GistCheckState dynamically, using\npalloc0(). That's future proof. \"Simple and obvious\" is usually the\nmost important goal for managing memory in amcheck code. It can be a\nlittle inefficient if that makes it simpler.\n\n* ISTM that gist_index_check() should allow the caller to omit a\n\"heapallindexed\" argument by specifying \"DEFAULT FALSE\", for\nconsistency with bt_index_check().\n\n(Actually there are two versions of bt_index_check(), with\noverloading, but that's just because of the way that the extension\nevolved over time).\n\n* What's the point in having a custom memory context that is never reset?\n\nI believe that gistgetadjusted() will leak memory here, so there is a\nneed for some kind of high level strategy for managing memory. The\nstrategy within verify_nbtree.c is to call MemoryContextReset() right\nafter every loop iteration within bt_check_level_from_leftmost() --\nwhich is pretty much once every call to bt_target_page_check(). 
That\nkind of approach is obviously not going to suffer any memory leaks.\n\nAgain, \"simple and obvious\" is good for memory management in amcheck.\n\n* ISTM that it would be clearer if the per-page code within\ngist_check_parent_keys_consistency() was broken out into its own\nfunction -- a little like bt_target_page_check()..\n\nThat way the control flow would be easier to understand when looking\nat the code at a high level.\n\n* ISTM that gist_refind_parent() should throw an error about\ncorruption in the event of a parent page somehow becoming a leaf page.\n\nObviously this is never supposed to happen, and likely never will\nhappen, even with corruption. But it seems like a good idea to make\nthe most conservative possible assumption by throwing an error. If it\nnever happens anyway, then the fact that we handle it with an error\nwon't matter -- so the error is harmless. If it does happen then we'll\nwant to hear about it as soon as possible -- so the error is useful.\n\n* I suggest using c99 style variable declarations in loops.\n\nEspecially for things like \"for (OffsetNumber offset =\nFirstOffsetNumber; ... ; ... )\".\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 16 Mar 2023 16:48:09 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 4:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Some feedback on the GiST patch:\n\nI see that the Bloom filter that's used to implement heapallindexed\nverification fingerprints index tuples that are formed via calls to\ngistFormTuple(), without any attempt to normalize-away differences in\nTOAST input state. In other words, there is nothing like\nverify_nbtree.c's bt_normalize_tuple() function involved in the\nfingerprinting process. Why is that safe, though? See the \"toast_bug\"\ntest case within contrib/amcheck/sql/check_btree.sql for an example of\nhow inconsistent TOAST input state confused verify_nbtree.c's\nheapallindexed verification (before bugfix commit eba775345d). I'm\nconcerned about GiST heapallindexed verification being buggy in\nexactly the same way, or in some way that is roughly analogous.\n\nI do have some concerns about there being analogous problems that are\nunique to GiST, since GiST is an AM that gives opclass authors many\nmore choices than B-Tree opclass authors have. In particular, I wonder\nif heapallindexed verification needs to account for how GiST\ncompression might end up breaking heapallindexed. I refer to the\n\"compression\" implemented by GiST support routine 3 of GiST opclasses.\nThe existence of GiST support routine 7, the \"same\" routine, also\nmakes me feel a bit squeamish about heapallindexed verification -- the\nexistence of a \"same\" routine hints at some confusion about \"equality\nversus equivalence\" issues.\n\nIn more general terms: heapallindexed verification works by\nfingerprinting index tuples during the index verification stage, and\nthen performing Bloom filter probes in a separate CREATE INDEX style\nheap-matches-index stage (obviously). 
There must be some justification\nfor our assumption that there can be no false positive corruption\nreports due only to a GiST opclass (either extant or theoretical) that\nfollows the GiST contract, and yet allows an inconsistency to arise\nthat isn't really index corruption. This justification won't be easy\nto come up with, since the GiST contract was not really designed with\nthese requirements in mind. But...we should try to come up with\nsomething.\n\nWhat are the assumptions underlying heapallindexed verification for\nGiST? It doesn't have to be provably correct or anything, but it\nshould at least be empirically falsifiable. Basically, something that\nsays: \"Here are our assumptions, if we were wrong in making these\nassumptions then you could tell that we made a mistake because of X,\nY, Z\". It's not always clear when something is corrupt. Admittedly I\nhave much less experience with GiST than other people, which likely\nincludes you (Andrey). I am likely missing some context around the\nevolution of GiST. Possibly I'm making a big deal out of something\nwithout it being unhelpful. Unsure.\n\nHere is an example of the basic definition of correctness being\nunclear, in a bad way: Is a HOT chain corrupt when its root\nLP_REDIRECT points to an LP_DEAD item, or does that not count as\ncorruption? I'm pretty sure that the answer is ambiguous even today,\nor was ambiguous until recently, at least. Hopefully the\nverify_heapam.c HOT chain verification patch will be committed,\nproviding us with a clear *definition* of HOT chain corruption -- the\ndefinition itself may not be the easy part.\n\nOn a totally unrelated note: I wonder if we should be checking that\ninternal page tuples have 0xffff as their offset number? Seems like\nit'd be a cheap enough cross-check.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 16 Mar 2023 18:22:42 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "Hi Peter,\n\nThanks for the feedback! I'll work on it during the weekend.\n\nOn Thu, Mar 16, 2023 at 6:23 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> existence of a \"same\" routine hints at some confusion about \"equality\n> versus equivalence\" issues.\n\nHmm...yes, actually, GiST deals with floats routinely. And there might\nbe some sorts of NaNs and Infs that are equal, but not binary\nequivalent.\nI'll think more about it.\n\ngist_get_adjusted() calls \"same\" routine, which for type point will\nuse FPeq(double A, double B). And this might be kind of a corruption\nout of the box. Because it's an epsilon-comparison, ε=1.0E-06.\nGiST might miss newly inserted data, because the \"adjusted\" tuple was\n\"same\" if data is in proximity of 0.000001 of any previously indexed\npoint, but out of known MBRs.\nI'll try to reproduce this tomorrow, so far no luck.\n\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Fri, 17 Mar 2023 20:40:05 -0700",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 8:40 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n>\n> On Thu, Mar 16, 2023 at 6:23 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > existence of a \"same\" routine hints at some confusion about \"equality\n> > versus equivalence\" issues.\n>\n> Hmm...yes, actually, GiST deals with floats routinely. And there might\n> be some sorts of NaNs and Infs that are equal, but not binary\n> equivalent.\n> I'll think more about it.\n>\n> gist_get_adjusted() calls \"same\" routine, which for type point will\n> use FPeq(double A, double B). And this might be kind of a corruption\n> out of the box. Because it's an epsilon-comparison, ε=1.0E-06.\n> GiST might miss newly inserted data, because the \"adjusted\" tuple was\n> \"same\" if data is in proximity of 0.000001 of any previously indexed\n> point, but out of known MBRs.\n> I'll try to reproduce this tomorrow, so far no luck.\n>\nAfter several attempts to corrupt GiST with this 0.000001 epsilon\nadjustment tolerance I think GiST indexing of points is valid.\nBecause intersection for search purposes is determined with the same epsilon!\nSo it's kind of odd\npostgres=# select point(0.0000001,0)~=point(0,0);\n?column?\n----------\n t\n(1 row)\n, yet the index works correctly.\n\n\n\nOn Thu, Mar 16, 2023 at 4:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sun, Feb 5, 2023 at 4:45 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n> > Here's v24 == (v23 + a step for pg_amcheck). There's a lot of\n> > shotgun-style changes, but I hope next index types will be easy to add\n> > now.\n>\n> Some feedback on the GiST patch:\n>\n> * You forgot to initialize GistCheckState.heaptuplespresent to 0.\n>\n> It might be better to allocate GistCheckState dynamically, using\n> palloc0(). That's future proof. \"Simple and obvious\" is usually the\n> most important goal for managing memory in amcheck code. 
It can be a\n> little inefficient if that makes it simpler.\nDone.\n\n> * ISTM that gist_index_check() should allow the caller to omit a\n> \"heapallindexed\" argument by specifying \"DEFAULT FALSE\", for\n> consistency with bt_index_check().\nDone.\n\n> * What's the point in having a custom memory context that is never reset?\nThe problem is we traverse index with depth-first scan and must retain\ninternal tuples for a whole time of the scan.\nAnd gistgetadjusted() will allocate memory only in case of suspicion\nof corruption. So, it's kind of an infrequent case.\n\nThe context is there only as an overall leak protection mechanism.\nActual memory management is done via pfree() calls.\n\n> Again, \"simple and obvious\" is good for memory management in amcheck.\nYes, that would be great to come up with some \"unit of work\" contexts.\nYet, now palloced tuples and scan items have very different lifespans.\n\n\n> * ISTM that it would be clearer if the per-page code within\n> gist_check_parent_keys_consistency() was broken out into its own\n> function -- a little like bt_target_page_check()..\n\nI've refactored page logic into gist_check_page().\n\n> * ISTM that gist_refind_parent() should throw an error about\n> corruption in the event of a parent page somehow becoming a leaf page.\nDone.\n\n> * I suggest using c99 style variable declarations in loops.\nDone.\n\n\nOn Thu, Mar 16, 2023 at 6:23 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Mar 16, 2023 at 4:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Some feedback on the GiST patch:\n>\n> I see that the Bloom filter that's used to implement heapallindexed\n> verification fingerprints index tuples that are formed via calls to\n> gistFormTuple(), without any attempt to normalize-away differences in\n> TOAST input state. In other words, there is nothing like\n> verify_nbtree.c's bt_normalize_tuple() function involved in the\n> fingerprinting process. Why is that safe, though? 
See the \"toast_bug\"\n> test case within contrib/amcheck/sql/check_btree.sql for an example of\n> how inconsistent TOAST input state confused verify_nbtree.c's\n> heapallindexed verification (before bugfix commit eba775345d). I'm\n> concerned about GiST heapallindexed verification being buggy in\n> exactly the same way, or in some way that is roughly analogous.\nFWIW contrib opclasses, AFAIK, always detoast possibly long datums,\nsee gbt_var_compress()\nhttps://github.com/postgres/postgres/blob/master/contrib/btree_gist/btree_utils_var.c#L281\nBut there might be opclasses that do not do so...\nAlso, there are INCLUDEd attributes. Right now we just put them as-is\nto the bloom filter. Does this constitute a TOAST bug as in B-tree?\nIf so, I think we should use a version of tuple formatting that omits\nincluded attributes...\nWhat do you think?\n\n>\n> I do have some concerns about there being analogous problems that are\n> unique to GiST, since GiST is an AM that gives opclass authors many\n> more choices than B-Tree opclass authors have. In particular, I wonder\n> if heapallindexed verification needs to account for how GiST\n> compression might end up breaking heapallindexed. I refer to the\n> \"compression\" implemented by GiST support routine 3 of GiST opclasses.\n> The existence of GiST support routine 7, the \"same\" routine, also\n> makes me feel a bit squeamish about heapallindexed verification -- the\n> existence of a \"same\" routine hints at some confusion about \"equality\n> versus equivalence\" issues.\n>\n> In more general terms: heapallindexed verification works by\n> fingerprinting index tuples during the index verification stage, and\n> then performing Bloom filter probes in a separate CREATE INDEX style\n> heap-matches-index stage (obviously). 
There must be some justification\n> for our assumption that there can be no false positive corruption\n> reports due only to a GiST opclass (either extant or theoretical) that\n> follows the GiST contract, and yet allows an inconsistency to arise\n> that isn't really index corruption. This justification won't be easy\n> to come up with, since the GiST contract was not really designed with\n> these requirements in mind. But...we should try to come up with\n> something.\n>\n> What are the assumptions underlying heapallindexed verification for\n> GiST? It doesn't have to be provably correct or anything, but it\n> should at least be empirically falsifiable. Basically, something that\n> says: \"Here are our assumptions, if we were wrong in making these\n> assumptions then you could tell that we made a mistake because of X,\n> Y, Z\". It's not always clear when something is corrupt. Admittedly I\n> have much less experience with GiST than other people, which likely\n> includes you (Andrey). I am likely missing some context around the\n> evolution of GiST. Possibly I'm making a big deal out of something\n> without it being unhelpful. Unsure.\n>\n> Here is an example of the basic definition of correctness being\n> unclear, in a bad way: Is a HOT chain corrupt when its root\n> LP_REDIRECT points to an LP_DEAD item, or does that not count as\n> corruption? I'm pretty sure that the answer is ambiguous even today,\n> or was ambiguous until recently, at least. Hopefully the\n> verify_heapam.c HOT chain verification patch will be committed,\n> providing us with a clear *definition* of HOT chain corruption -- the\n> definition itself may not be the easy part.\n\nRules for compression methods are not described anywhere. 
And I suspect\nthat it's intentional, to provide more flexibility.\nTo make the heapallindexed check work, we need the opclass to always return\nthe same compression result for the same input datum.\nAll opclasses known to me (built-in and PostGIS) comply with this requirement.\n\nYet another behavior might be reasonable. Consider we have a\ncompression method which learns from the data. It will observe that some datums are\nmore frequent and start using shorter versions of them.\n\nThe compression function actually is not about compression, but is kind of a\nconversion from heap format to an indexable one. Many opclasses do not have a\ncompression function at all.\nWe could require that the checked opclass not have a compression\nfunction at all. But GiST is mainly used for PostGIS, and in PostGIS\nthey use compression to convert complex geometry into a bounding box.\n\nThe \"same\" method is used only for the business of internal tuples, but not\nfor leaf tuples that we fingerprint in the bloom filter.\n\nWe can put the requirements for heapallindexed another way: \"the\nopclass compression method must be a pure function\". It's also a very\nstrict requirement, disallowing all kinds of detoasting, dictionary\ncompression etc. And the btree_gist opclasses do not comply :) But they\nseem safe to me for heapallindexed.\n\n> On a totally unrelated note: I wonder if we should be checking that\n> internal page tuples have 0xffff as their offset number? Seems like\n> it'd be a cheap enough cross-check.\n>\n\nDone.\n\nThank you!\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 19 Mar 2023 16:00:14 -0700",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Sun, Mar 19, 2023 at 4:00 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n>\n> Also, there are INCLUDEd attributes. Right now we just put them as-is\n> to the bloom filter. Does this constitute a TOAST bug as in B-tree?\n> If so, I think we should use a version of tuple formatting that omits\n> included attributes...\n> What do you think?\nI've ported the B-tree TOAST test to GiST, and, as expected, it fails.\nIt finds a non-indexed tuple for a fresh, valid index.\nI've implemented normalization, please see gistFormNormalizedTuple().\nBut there are two problems:\n1. I could not come up with a proper way to pfree() the compressed value\nafter decompressing. See TODO in gistFormNormalizedTuple().\n2. In the index, tuples seem to be normalized somewhere. They do not\nhave to be deformed and normalized. It's not clear to me how this\nhappened.\n\nThanks!\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 26 Mar 2023 15:17:02 -0700",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Sun, Mar 19, 2023 at 4:00 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n> After several attempts to corrupt GiST with this 0.000001 epsilon\n> adjustment tolerance I think GiST indexing of points is valid.\n> Because intersection for search purposes is determined with the same epsilon!\n> So it's kind of odd\n> postgres=# select point(0.0000001,0)~=point(0,0);\n> ?column?\n> ----------\n> t\n> (1 row)\n> , yet the index works correctly.\n\nI think that it's okay, provided that we can assume deterministic\nbehavior in the code that forms new index tuples. Within nbtree,\noperator classes like numeric_ops are supported by heapallindexed\nverification, without any requirement for special normalization code\nto make it work correctly as a special case. This is true even though\noperator classes such as numeric_ops have similar \"equality is not\nequivalence\" issues, which comes up in other areas (e.g., nbtree\ndeduplication, which must call support routine 4 during a CREATE INDEX\n[1]).\n\nThe important principle is that amcheck must always be able to produce\na consistent fingerprintable binary output given the same input (the\nsame heap tuple/Datum array). This must work across all operator\nclasses that play by the rules for GiST operator classes. We *can*\ntolerate some variation here. Well, we really *have* to tolerate a\nlittle of this kind of variation in order to deal with the TOAST input\nstate thing...but I hope that that's the only complicating factor\nhere, for GiST (as it is for nbtree). Note that we already rely on the\nfact that index_form_tuple() uses palloc0() (not plain palloc) in\nverify_nbtree.c, for the obvious reason.\n\nI think that there is a decent chance that it just wouldn't make sense\nfor an operator class author to ever do something that we need to\nworry about. I'm pretty sure that it's just the TOAST thing. 
But it's\nworth thinking about carefully.\n\n[1] https://www.postgresql.org/docs/devel/btree-support-funcs.html\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 26 Mar 2023 19:34:49 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "Hi Andrey,\n\n27.03.2023 01:17, Andrey Borodin wrote:\n> I've ported the B-tree TOAST test to GiST, and, as expected, it fails.\n> Finds non-indexed tuple for a fresh valid index.\n\nI've tried to use this feature with the latest patch set and discovered that\nmodified pg_amcheck doesn't find any gist indexes when running without a\nschema specification. For example:\nCREATE TABLE tbl (id integer, p point);\nINSERT INTO tbl VALUES (1, point(1, 1));\nCREATE INDEX gist_tbl_idx ON tbl USING gist (p);\nCREATE INDEX btree_tbl_idx ON tbl USING btree (id);\n\npg_amcheck -v -s public\nprints:\npg_amcheck: checking index \"regression.public.btree_tbl_idx\"\npg_amcheck: checking heap table \"regression.public.tbl\"\npg_amcheck: checking index \"regression.public.gist_tbl_idx\"\n\nbut without \"-s public\" a message about checking of gist_tbl_idx is absent.\n\nAs I can see in the server.log, the queries, that generate relation lists in\nthese cases, differ in:\n... AND ep.pattern_id IS NULL AND c.relam = 2 AND c.relkind IN ('r', 'S', 'm', 't') AND c.relnamespace != 99 ...\n\n... AND ep.pattern_id IS NULL AND c.relam IN (2, 403, 783)AND c.relkind IN ('r', 'S', 'm', 't', 'i') AND ((c.relam = 2 \nAND c.relkind IN ('r', 'S', 'm', 't')) OR ((c.relam = 403 OR c.relam = 783) AND c.relkind = 'i')) ...\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 6 Apr 2023 07:00:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On Mon, 27 Mar 2023 at 03:47, Andrey Borodin <amborodin86@gmail.com> wrote:\n>\n> On Sun, Mar 19, 2023 at 4:00 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n> >\n> > Also, there are INCLUDEd attributes. Right now we just put them as-is\n> > to the bloom filter. Does this constitute a TOAST bug as in B-tree?\n> > If so, I think we should use a version of tuple formatting that omits\n> > included attributes...\n> > What do you think?\n> I've ported the B-tree TOAST test to GiST, and, as expected, it fails.\n> Finds non-indexed tuple for a fresh valid index.\n> I've implemented normalization, plz see gistFormNormalizedTuple().\n> But there are two problems:\n> 1. I could not come up with a proper way to pfree() compressed value\n> after decompressing. See TODO in gistFormNormalizedTuple().\n> 2. In the index tuples seem to be normalized somewhere. They do not\n> have to be deformed and normalized. It's not clear to me how this\n> happened.\n\nI have changed the status of the commitfest entry to \"Waiting on\nAuthor\" as there was no follow-up on Alexander's queries. Feel free to\naddress them and change the commitfest entry accordingly.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 20 Jan 2024 08:16:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "\n\n> On 20 Jan 2024, at 07:46, vignesh C <vignesh21@gmail.com> wrote:\n> \n> I have changed the status of the commitfest entry to \"Waiting on\n> Author\" as there was no follow-up on Alexander's queries. Feel free to\n> address them and change the commitfest entry accordingly.\n\nThanks Vignesh!\n\nAt the moment it’s obvious that this change will not be in 17, but I have plans to continue work on this. So I’ll move this item to July CF.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 11 Mar 2024 11:11:33 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "> On 6 Apr 2023, at 09:00, Alexander Lakhin <exclusion@gmail.com> wrote:\n> \n> I've tried to use this feature with the latest patch set and discovered that\n> modified pg_amcheck doesn't find any gist indexes when running without a\n> schema specification.\n\nThanks, Alexander! I’ve fixed this problem and rebased on current HEAD.\nThere’s one more problem in pg_amcheck’s GiST verification. We must check that amcheck is 1.5+ and use GiST verification only in that case…\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 5 Jul 2024 17:27:55 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "> On 5 Jul 2024, at 17:27, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> There’s one more problem in pg_amcheck’s GiST verification. We must check that amcheck is 1.5+ and use GiST verification only in that case…\n\nDone. I’ll set the status to “Needs review”.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 9 Jul 2024 11:36:50 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "Hi,\n\nOn 7/9/24 08:36, Andrey M. Borodin wrote:\n> \n> \n>> On 5 Jul 2024, at 17:27, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>\n>> There’s one more problem in pg_amcheck’s GiST verification. We must\n>> check that amcheck is 1.5+ and use GiST verification only in that\n>> case …\n> \n> Done. I’ll set the status to “Needs review”.\n> \n\nI realized amcheck GIN/GiST support would be useful for testing my\npatches adding parallel builds for these index types, so I decided to\ntake a look at this and do an initial review today.\n\nAttached is a patch series with a few extra commits to keep the review\ncomments, and patches adjusting the formatting by pgindent (the patch\nseems far enough along for this).\n\nLet me quickly go through the review comments:\n\n1) Not sure I like 'amcheck.c' very much, I'd probably go with something\nlike 'verify_common.c' to match naming of the other files. But it's just\nnitpicking and I can live with it.\n\n2) amcheck_lock_relation_and_check seems to be the most important\nfunction, yet there's no comment explaining what it does :-(\n\n3) amcheck_lock_relation_and_check still has a TODO to add the correct\nname of the AM\n\n4) Do we actually need amcheck_index_mainfork_expected as a separate\nfunction, or could it be a part of index_checkable?\n\n5) The comment for heaptuplespresent says \"debug counter\" but that does\nnot really explain what it's for. (I see verify_nbtree has the same\ncomment, but maybe let's improve that.)\n\n6) I'd suggest moving the GISTSTATE + blocknum fields to the beginning\nof GistCheckState, it seems more natural to start with \"generic\" fields.\n\n7) I'd adjust the gist_check_parent_keys_consistency comment a bit, to\nexplain what the function does first, and only then explain how.\n\n8) We seem to be copying PageGetItemIdCareful() around, right? 
And the\ncopy in _gist.c still references nbtree - I guess that's not right.\n\n9) Why is the GIN function called gin_index_parent_check() and not\nsimply gin_index_check() as for the other AMs?\n\n10) The debug in gin_check_posting_tree_parent_keys_consistency triggers\nassert when running with client_min_messages='debug5', it seems to be\naccessing bogus item pointers.\n\n11) Why does it add pg_amcheck support only for GiST and not GIN?\n\n\nThat's all for now. I'll add this to the stress-testing tests of my\nindex build patches, and if that triggers more issues I'll report those.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 10 Jul 2024 18:01:40 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "On 7/10/24 18:01, Tomas Vondra wrote:\n> ...\n>\n> That's all for now. I'll add this to the stress-testing tests of my\n> index build patches, and if that triggers more issues I'll report those.\n> \n\nAs mentioned a couple days ago, I started using this patch to validate\nthe patches adding parallel builds to GIN and GiST indexes - I wrote scripts\nto stress-test the builds, and I added the new amcheck functions as\nanother validation step.\n\nFor GIN indexes it didn't find anything new (in either this or my\npatches), aside from the assert crash I already reported.\n\nBut for GiST it turned out to be very valuable - it did actually find an\nissue in my patches, or rather confirm my hypothesis that the way the\npatch generates fake LSN may not be quite right.\n\nIn particular, I've observed these two issues:\n\n ERROR: heap tuple (13315,38) from table \"planet_osm_roads\" lacks\n matching index tuple within index \"roads_7_1_idx\"\n\n ERROR: index \"roads_7_7_idx\" has inconsistent records on page 23723\n offset 113\n\nAnd those consistency issues are real - I've managed to reproduce issues\nwith incorrect query results (by comparing the results to an index built\nwithout parallelism).\n\nSo that's nice - it shows the value of this patch, and I like it.\n\nOne thing I've been wondering about is that currently amcheck (in\ngeneral, not just these new GIN/GiST functions) errors out on the first\nissue, because it does ereport(ERROR). Which is good enough to decide if\nthere is some corruption, but a bit inconvenient if you need to assess\nhow much corruption there is. For example when investigating the issue\nin my patch it would have been great to know if there's just one broken\npage, or if there are dozens/hundreds/thousands of them.\n\nI'd imagine we could have a flag which says whether to fail on the first\nissue, or keep looking at future pages. Essentially, whether to do\nereport(ERROR) or ereport(WARNING). 
But maybe that's a dead-end, and\nonce we find the first issue it's futile to inspect the rest of the\nindex, because it can be garbage. Not sure. In any case, it's not up to\nthis patch to invent that.\n\nI don't have additional comments, the patch seems to be clean and likely\nready to go. There's a couple committers already involved in this\nthread, I wonder if one of them already planned to take care of this?\nPeter and Andres, either of you interested in this?\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 12 Jul 2024 14:16:24 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "OK, one mere comment - it seems the 0001 patch has incorrect indentation\nin bt_index_check_callback, triggering this:\n\n----------------------------------------------------------------------\nverify_nbtree.c: In function ‘bt_index_check_callback’:\nverify_nbtree.c:331:25: warning: this ‘if’ clause does not guard...\n[-Wmisleading-indentation]\n 331 | if (indrel->rd_opfamily[i] ==\nINTERVAL_BTREE_FAM_OID)\n | ^~\nIn file included from ../../src/include/postgres.h:46,\n from verify_nbtree.c:24:\n../../src/include/utils/elog.h:142:9: note: ...this statement, but the\nlatter is misleadingly indented as if it were guarded by the ‘if’\n 142 | do { \\\n | ^~\n../../src/include/utils/elog.h:164:9: note: in expansion of macro\n‘ereport_domain’\n 164 | ereport_domain(elevel, TEXTDOMAIN, __VA_ARGS__)\n | ^~~~~~~~~~~~~~\nverify_nbtree.c:333:33: note: in expansion of macro ‘ereport’\n 333 | ereport(ERROR,\n | ^~~~~~~\n----------------------------------------------------------------------\n\nThis seems to be because the ereport() happens to be indented as if it\nwas in the \"if\", but should have been at the \"for\" level.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 12 Jul 2024 18:15:03 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "OK, one more issue report. I originally thought it's a bug in my patch\nadding parallel builds for GIN indexes, but it turns out it happens even\nwith serial builds on master ...\n\nIf I build any GIN index, and then do gin_index_parent_check() on it, I\nget this error:\n\ncreate index jsonb_hash on messages using gin (msg_headers jsonb_path_ops);\n\nselect gin_index_parent_check('jsonb_hash');\nERROR: index \"jsonb_hash\" has wrong tuple order, block 43932, offset 328\n\nI did try investigating usinng pageinspect - the page seems to be the\nright-most in the tree, judging by rightlink = InvalidBlockNumber:\n\ntest=# select gin_page_opaque_info(get_raw_page('jsonb_hash', 43932));\n gin_page_opaque_info\n----------------------\n (4294967295,0,{})\n(1 row)\n\nBut gin_leafpage_items() apparently only works with compressed leaf\npages, so I'm not sure what's in the page. In any case, the index seems\nto be working fine, so it seems like a bug in this patch.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 12 Jul 2024 20:16:42 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "Hi Tomas!\n\nThank you so much for your interest in the patchset.\n\n> On 10 Jul 2024, at 19:01, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> I realized amcheck GIN/GiST support would be useful for testing my\n> patches adding parallel builds for these index types, so I decided to\n> take a look at this and do an initial review today.\n\nGreat! Thank you!\n\n> Attached is a patch series with a extra commits to keep the review\n> comments and patches adjusting the formatting by pgindent (the patch\n> seems far enough for this).\n\nI was hoping to address your review comments this weekend, but unfortunately I could not. I'll do this ASAP, but at least I decided to post answers on questions.\n\n> \n> Let me quickly go through the review comments:\n> \n> 1) Not sure I like 'amcheck.c' very much, I'd probably go with something\n> like 'verify_common.c' to match naming of the other files. But it's just\n> nitpicking and I can live with it.\n\nAny name works for me. We have tens of files ending with \"common.c\", so I think that's a good way to go.\n\n> 2) amcheck_lock_relation_and_check seems to be the most important\n> function, yet there's no comment explaining what it does :-(\n\nMakes sense.\n\n> 3) amcheck_lock_relation_and_check still has a TODO to add the correct\n> name of the AM\n\nYes, I've discovered it during rebase and added TODO.\n\n> 4) Do we actually need amcheck_index_mainfork_expected as a separate\n> function, or could it be a part of index_checkable?\n\nIt was separate function before refactoring...\n\n> 5) The comment for heaptuplespresent says \"debug counter\" but that does\n> not really explain what it's for. (I see verify_nbtree has the same\n> comment, but maybe let's improve that.)\n\nIt's there for a DEBUG1 message\nereport(DEBUG1,\n (errmsg_internal(\"finished verifying presence of \" INT64_FORMAT \" tuples from table \\\"%s\\\" with bitset %.2f%% set\",\nBut the message is gone for GiST. 
Perhaps, let's restore this message?\n\n> \n> 6) I'd suggest moving the GISTSTATE + blocknum fields to the beginning\n> of GistCheckState, it seems more natural to start with \"generic\" fields.\n\nMakes sense.\n\n> 7) I'd adjust the gist_check_parent_keys_consistency comment a bit, to\n> explain what the function does first, and only then explain how.\n\nMakes sense.\n\n> 8) We seem to be copying PageGetItemIdCareful() around, right? And the\n> copy in _gist.c still references nbtree - I guess that's not right.\n\nVersion differ in two aspects:\n1. Size of opaque data may be different. But we can pass it as a parameter.\n2. GIN's line pointer verification is slightly more strict.\n\n> \n> 9) Why is the GIN function called gin_index_parent_check() and not\n> simply gin_index_check() as for the other AMs?\n\nAFAIR function should be called _parent_ if it takes ShareLock. gin_index_parent_check() does not, so I think we should rename it.\n\n> 10) The debug in gin_check_posting_tree_parent_keys_consistency triggers\n> assert when running with client_min_messages='debug5', it seems to be\n> accessing bogus item pointers.\n> \n> 11) Why does it add pg_amcheck support only for GiST and not GIN?\n\nGiST part is by far more polished. When we were discussing current implementation with Peter G, we decided that we could finish work on GiST, and then proceed to GIN. Main concern is about GIN's locking model.\n\n\n\n\n> On 12 Jul 2024, at 15:16, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> On 7/10/24 18:01, Tomas Vondra wrote:\n>> ...\n>> \n>> That's all for now. 
I'll add this to the stress-testing tests of my\n>> index build patches, and if that triggers more issues I'll report those.\n>> \n> \n> As mentioned a couple days ago, I started using this patch to validate\n> the patches adding parallel builds to GIN and GiST indexes - I scripts\n> to stress-test the builds, and I added the new amcheck functions as\n> another validation step.\n> \n> For GIN indexes it didn't find anything new (in either this or my\n> patches), aside from the assert crash I already reported.\n> \n> But for GiST it turned out to be very valuable - it did actually find an\n> issue in my patches, or rather confirm my hypothesis that the way the\n> patch generates fake LSN may not be quite right.\n> \n> In particular, I've observed these two issues:\n> \n> ERROR: heap tuple (13315,38) from table \"planet_osm_roads\" lacks\n> matching index tuple within index \"roads_7_1_idx\"\n> \n> ERROR: index \"roads_7_7_idx\" has inconsistent records on page 23723\n> offset 113\n> \n> And those consistency issues are real - I've managed to reproduce issues\n> with incorrect query results (by comparing the results to an index built\n> without parallelism).\n> \n> So that's nice - it shows the value of this patch, and I like it.\n\nThat's great!\n\n> One thing I've been wondering about is that currently amcheck (in\n> general, not just these new GIN/GiST functions) errors out on the first\n> issue, because it does ereport(ERROR). Which is good enough to decide if\n> there is some corruption, but a bit inconvenient if you need to assess\n> how much corruption is there. For example when investigating the issue\n> in my patch it would have been great to know if there's just one broken\n> page, or if there are dozens/hundreds/thousands of them.\n> \n> I'd imagine we could have a flag which says whether to fail on the first\n> issue, or keep looking at future pages. Essentially, whether to do\n> ereport(ERROR) or ereport(WARNING). 
But maybe that's a dead-end, and\n> once we find the first issue it's futile to inspect the rest of the\n> index, because it can be garbage. Not sure. In any case, it's not up to\n> this patch to invent that.\n\nThe thing is amcheck tries hard not to do a core dump. It's still possible to crash it with garbage. But if we continue checking after encountering the first corruption - an increase in SegFaults is inevitable.\n\n\nThank you! I hope I can get back to code ASAP.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sun, 14 Jul 2024 22:00:19 +0300",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
},
{
"msg_contents": "Hi,\n\nI've spent a bit more time looking at the GiST part as part of my\n\"parallel GiST build\" patch nearby, and I think there's some sort of\nmemory leak.\n\nConsider this:\n\n create table t (a text);\n\n insert into t select md5(i::text)\n from generate_series(1,25000000) s(i);\n\n create index on t using gist (a gist_trgm_ops);\n\n select gist_index_check('t_a_idx', true);\n\nThis creates a ~4GB GiST trigram index, and then checks it. But that\ngets killed, because of OOM killer. On my test machine it consumes\n~6.5GB of memory before OOM intervenes.\n\nThe memory context stats look like this:\n\n TopPortalContext: 8192 total in 1 blocks; 7680 free (0 chunks); 512 used\n PortalContext: 1024 total in 1 blocks; 616 free (0 chunks); 408\nused: <unnamed>\n ExecutorState: 8192 total in 1 blocks; 4024 free (4 chunks); 4168 used\n printtup: 8192 total in 1 blocks; 7952 free (0 chunks); 240 used\n ExprContext: 8192 total in 1 blocks; 7224 free (10 chunks); 968 used\n amcheck context: 3128950872 total in 376 blocks; 219392 free\n(1044 chunks); 3128731480 used\n ExecutorState: 8192 total in 1 blocks; 7200 free (0 chunks);\n992 used\n ExprContext: 8192 total in 1 blocks; 7952 free (0 chunks);\n240 used\n GiST scan context: 22248 total in 2 blocks; 7808 free (8\nchunks); 14440 used\n\nThis is from before the OOM kill, but it shows there's ~3GB of memory is\nthe amcheck context.\n\nSeems like a memory leak to me - I didn't look at which place leaks.\n\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Mon, 5 Aug 2024 17:05:04 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck verification of GiST and GIN"
}
] |
[
{
"msg_contents": "Hi,\n\nHere are some failures in the test sto_using_cursor, on 12, 13 and\nHEAD branches:\n\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=desmoxytes&dt=2020-03-15%2023:18:30\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-03-03%2005:59:30\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-04-20%2020:49:17\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-08-10%2004:47:08\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2021-09-16%2002:15:25\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=peripatus&dt=2022-05-16%2010:21:03\n\nUnfortunately the build farm doesn't capture regression.diffs. I\nrecall that a problem like that was fixed at some point in the BF\ncode, but peripatus is running the latest tag (REL_14) so I'm not sure\nwhy.\n\nThat reminds me: there was a discussion about whether the whole STO\nfeature should be deprecated/removed[1], starting from Andres's\ncomplaints about correctness and testability problems in STO while\nhacking on snapshot scalability. I'd be happy to come up with the\npatch for that, if we decide to do that, perhaps when the tree opens\nfor 16?\n\nOr perhaps someone wants to try to address the remaining known issues\nwith it? Robert fixed some stuff, and I had some more patches[2] that\ntried to fix a few more things relating to testability and a\nwraparound problem, not committed. I'd happily rebase them with a\nview to getting them in, but only if there is interest in reviewing\nthem and really trying to save this design. There were more problems\nthough: there isn't a systematic way to make sure that\nTestForOldSnapshot() is in all the right places, and IIRC there are\nsome known mistakes? 
And maybe more.\n\n[1] https://www.postgresql.org/message-id/flat/20200401064008.qob7bfnnbu4w5cw4%40alap3.anarazel.de\n[2] https://www.postgresql.org/message-id/CA%2BhUKGJyw%3DuJ4eL1x%3D%2BvKm16fLaxNPvKUYtnChnRkSKi024u_A%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 31 May 2022 15:33:41 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Failures in sto_using_cursor test"
}
] |
[
{
"msg_contents": "Hi,\n\nToday I hit \"ERROR: target lists can have at most 1664 entries\", and I was\nsurprised the limit was not documented.\n\nI suggest that the limit of \"1664 columns per tuple\" (or whatever is the\nright term) should be added\nto the list at https://www.postgresql.org/docs/current/limits.html e.g.\nafter \"columns per table\".\n\nCould someone please commit that 1-2 line doc improvement or do you need a\npatch for it?\n\nVladimir\n\nHi,Today I hit \"ERROR: target lists can have at most 1664 entries\", and I was surprised the limit was not documented.I suggest that the limit of \"1664 columns per tuple\" (or whatever is the right term) should be addedto the list at https://www.postgresql.org/docs/current/limits.html e.g. after \"columns per table\".Could someone please commit that 1-2 line doc improvement or do you need a patch for it?Vladimir",
"msg_date": "Tue, 31 May 2022 10:16:35 +0300",
"msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "On Tue, May 31, 2022 at 12:46 PM Vladimir Sitnikov\n<sitnikov.vladimir@gmail.com> wrote:\n>\n> Hi,\n>\n> Today I hit \"ERROR: target lists can have at most 1664 entries\", and I was surprised the limit was not documented.\n>\n> I suggest that the limit of \"1664 columns per tuple\" (or whatever is the right term) should be added\n> to the list at https://www.postgresql.org/docs/current/limits.html e.g. after \"columns per table\".\n>\n\nRather, I think the \"columns per table\" limit needs to be updated to 1664.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Tue, 31 May 2022 19:25:53 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "On Tue, 31 May 2022 at 09:56, Amul Sul <sulamul@gmail.com> wrote:\n\n> On Tue, May 31, 2022 at 12:46 PM Vladimir Sitnikov\n> <sitnikov.vladimir@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Today I hit \"ERROR: target lists can have at most 1664 entries\", and I\n> was surprised the limit was not documented.\n> >\n> > I suggest that the limit of \"1664 columns per tuple\" (or whatever is the\n> right term) should be added\n> > to the list at https://www.postgresql.org/docs/current/limits.html e.g.\n> after \"columns per table\".\n> >\n>\n> Rather, I think the \"columns per table\" limit needs to be updated to 1664.\n>\n\nActually that is correct. Columns per table is MaxHeapAttributeNumber which\nis 1600.\n\nMaxTupleAttributeNumber is 1664 and is the limit of user columns in a\ntuple.\n\nDave\n\nOn Tue, 31 May 2022 at 09:56, Amul Sul <sulamul@gmail.com> wrote:On Tue, May 31, 2022 at 12:46 PM Vladimir Sitnikov\n<sitnikov.vladimir@gmail.com> wrote:\n>\n> Hi,\n>\n> Today I hit \"ERROR: target lists can have at most 1664 entries\", and I was surprised the limit was not documented.\n>\n> I suggest that the limit of \"1664 columns per tuple\" (or whatever is the right term) should be added\n> to the list at https://www.postgresql.org/docs/current/limits.html e.g. after \"columns per table\".\n>\n\nRather, I think the \"columns per table\" limit needs to be updated to 1664.Actually that is correct. Columns per table is MaxHeapAttributeNumber which is 1600.MaxTupleAttributeNumber is 1664 and is the limit of user columns in a tuple.Dave",
"msg_date": "Tue, 31 May 2022 10:02:48 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": ">\n>\n>\n>\n> On Tue, 31 May 2022 at 09:56, Amul Sul <sulamul@gmail.com> wrote:\n>\n>> On Tue, May 31, 2022 at 12:46 PM Vladimir Sitnikov\n>> <sitnikov.vladimir@gmail.com> wrote:\n>> >\n>> > Hi,\n>> >\n>> > Today I hit \"ERROR: target lists can have at most 1664 entries\", and I\n>> was surprised the limit was not documented.\n>> >\n>> > I suggest that the limit of \"1664 columns per tuple\" (or whatever is\n>> the right term) should be added\n>> > to the list at https://www.postgresql.org/docs/current/limits.html\n>> e.g. after \"columns per table\".\n>> >\n>>\n>> Rather, I think the \"columns per table\" limit needs to be updated to 1664.\n>>\n>\n> Actually that is correct. Columns per table is MaxHeapAttributeNumber\n> which is 1600.\n>\n> MaxTupleAttributeNumber is 1664 and is the limit of user columns in a\n> tuple.\n>\n> Dave\n>\n\nAttached is a patch to limits.sgml. I'm not sure this is where it belongs,\nas it's not a physical limit per-se but I am not familiar enough with the\ndocs to propose another location.\n\nNote this was suggested by Vladimir.\n\nsee attached",
"msg_date": "Tue, 31 May 2022 10:10:13 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "Amul Sul <sulamul@gmail.com> writes:\n> On Tue, May 31, 2022 at 12:46 PM Vladimir Sitnikov\n> <sitnikov.vladimir@gmail.com> wrote:\n>> I suggest that the limit of \"1664 columns per tuple\" (or whatever is the right term) should be added\n>> to the list at https://www.postgresql.org/docs/current/limits.html e.g. after \"columns per table\".\n\nWe've generally felt that the existing \"columns per table\" limit is\nsufficient detail here.\n\n> Rather, I think the \"columns per table\" limit needs to be updated to 1664.\n\nThat number is not wrong. See MaxTupleAttributeNumber and\nMaxHeapAttributeNumber:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/include/access/htup_details.h;h=51a60eda088578188b41f4506f6053c2fb77ef0b;hb=HEAD#l23\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 May 2022 10:16:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "On Tue, 31 May 2022 at 10:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Amul Sul <sulamul@gmail.com> writes:\n> > On Tue, May 31, 2022 at 12:46 PM Vladimir Sitnikov\n> > <sitnikov.vladimir@gmail.com> wrote:\n> >> I suggest that the limit of \"1664 columns per tuple\" (or whatever is\n> the right term) should be added\n> >> to the list at https://www.postgresql.org/docs/current/limits.html\n> e.g. after \"columns per table\".\n>\n> We've generally felt that the existing \"columns per table\" limit is\n> sufficient detail here.\n>\n\nISTM that adding detail is free whereas the readers time to figure out why\nand where this number came from is not.\n\nI think it deserves mention.\n\nRegards,\nDave.\n\nOn Tue, 31 May 2022 at 10:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:Amul Sul <sulamul@gmail.com> writes:\n> On Tue, May 31, 2022 at 12:46 PM Vladimir Sitnikov\n> <sitnikov.vladimir@gmail.com> wrote:\n>> I suggest that the limit of \"1664 columns per tuple\" (or whatever is the right term) should be added\n>> to the list at https://www.postgresql.org/docs/current/limits.html e.g. after \"columns per table\".\n\nWe've generally felt that the existing \"columns per table\" limit is\nsufficient detail here.ISTM that adding detail is free whereas the readers time to figure out why and where this number came from is not.I think it deserves mention.Regards,Dave.",
"msg_date": "Tue, 31 May 2022 10:27:16 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "Dave Cramer <davecramer@postgres.rocks> writes:\n> On Tue, 31 May 2022 at 10:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> We've generally felt that the existing \"columns per table\" limit is\n>> sufficient detail here.\n\n> ISTM that adding detail is free whereas the readers time to figure out why\n> and where this number came from is not.\n\nDetail is far from \"free\". Most readers are going to spend more time\nwondering what the difference is between \"columns per table\" and \"columns\nper tuple\", and which limit applies when, than they are going to save by\nhaving the docs present them with two inconsistent numbers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 May 2022 10:49:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": ">ost readers are going to spend more time\n>wondering what the difference is between \"columns per table\" and \"columns\n>per tuple\"\n\n\"tuple\" is already mentioned 10 times on \"limits\" page, so adding \"columns\nper tuple\" is not really obscure.\nThe comment could be like \"for instance, max number of expressions in each\nSELECT clause\"\n\n\nI know I visited current/limits.html many times (mostly for things like\n\"max field length\")\nHowever, I was really surprised there's an easy to hit limit on the number\nof expressions in SELECT.\n\nI don't ask to lift the limit, however, I am sure documenting the limit\nwould make it clear\nfor the application developers that the limit exists and they should plan\nfor it in advance.\n\n----\n\nI bumped into \"target lists can have at most 1664 entries\" when I was\ntrying to execute a statement with 65535 parameters.\nI know wire format uses unsigned int2 for the number of parameters, so I\nwanted to test if the driver supports that.\n\na) My first test was like select ? c1, ? c2, ? c3, ..., ? c65535\nThen it failed with \"ERROR: target lists can have at most 1664 entries\".\nI do not think \"columns per table\" is applicable to select like that\n\nb) Then I tried select ?||?||?||?||....||?\nI wanted to verify that the driver sent all the values properly, so I don't\nwant to just ignore them and I concatenated the values.\nUnfortunately, it failed with \"stack depth limit exceeded. Increase the\nconfiguration parameter \"max_stack_depth\" (currently 2048kB), after\nensuring the platform's stack depth limit is adequate\"\n\nFinally, I settled on select ARRAY[?, ?, ... ?] which worked up to 65535\nparameters just fine.\nPlease, do not suggest me avoid 65535 parameters. 
What I wanted was just to\ntest that the driver was able to handle 65535 parameters.\n\nVladimir\n\n>ost readers are going to spend more time>wondering what the difference is between \"columns per table\" and \"columns>per tuple\"\"tuple\" is already mentioned 10 times on \"limits\" page, so adding \"columns per tuple\" is not really obscure.The comment could be like \"for instance, max number of expressions in each SELECT clause\" I know I visited current/limits.html many times (mostly for things like \"max field length\")However, I was really surprised there's an easy to hit limit on the number of expressions in SELECT.I don't ask to lift the limit, however, I am sure documenting the limit would make it clearfor the application developers that the limit exists and they should plan for it in advance.----I bumped into \"target lists can have at most 1664 entries\" when I was trying to execute a statement with 65535 parameters.I know wire format uses unsigned int2 for the number of parameters, so I wanted to test if the driver supports that.a) My first test was like select ? c1, ? c2, ? c3, ..., ? c65535Then it failed with \"ERROR: target lists can have at most 1664 entries\".I do not think \"columns per table\" is applicable to select like thatb) Then I tried select ?||?||?||?||....||?I wanted to verify that the driver sent all the values properly, so I don't want to just ignore them and I concatenated the values.Unfortunately, it failed with \"stack depth limit exceeded. Increase the configuration parameter \"max_stack_depth\" (currently 2048kB), after ensuring the platform's stack depth limit is adequate\"Finally, I settled on select ARRAY[?, ?, ... ?] which worked up to 65535 parameters just fine.Please, do not suggest me avoid 65535 parameters. What I wanted was just to test that the driver was able to handle 65535 parameters.Vladimir",
"msg_date": "Tue, 31 May 2022 18:59:23 +0300",
"msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "On Tue, 31 May 2022 at 10:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Dave Cramer <davecramer@postgres.rocks> writes:\n> > On Tue, 31 May 2022 at 10:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> We've generally felt that the existing \"columns per table\" limit is\n> >> sufficient detail here.\n>\n> > ISTM that adding detail is free whereas the readers time to figure out\n> why\n> > and where this number came from is not.\n>\n> Detail is far from \"free\". Most readers are going to spend more time\n> wondering what the difference is between \"columns per table\" and \"columns\n> per tuple\", and which limit applies when, than they are going to save by\n> having the docs present them with two inconsistent numbers.\n>\n\nSounds to me like we are discussing different sides of the same coin. On\none hand we have readers of the documentation who may be confused,\nand on the other hand we have developers who run into this and have to\nspend time digging into the code to figure out what's what.\n\nFor me, while I have some familiarity with the server code it takes me\nquite a while to load and find what I am looking for.\nThen we have the less than clear names like \"resno\" for which I still\nhaven't groked. So imagine someone who has no familiarity\nwith the backend code trying to figure out why 1664 is relevant when the\ndocs mention 1600. Surely there must be some middle ground\nwhere we can give them some clues without having to wade through the source\ncode ?\n\nDave\n\nOn Tue, 31 May 2022 at 10:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:Dave Cramer <davecramer@postgres.rocks> writes:\n> On Tue, 31 May 2022 at 10:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> We've generally felt that the existing \"columns per table\" limit is\n>> sufficient detail here.\n\n> ISTM that adding detail is free whereas the readers time to figure out why\n> and where this number came from is not.\n\nDetail is far from \"free\". 
Most readers are going to spend more time\nwondering what the difference is between \"columns per table\" and \"columns\nper tuple\", and which limit applies when, than they are going to save by\nhaving the docs present them with two inconsistent numbers.Sounds to me like we are discussing different sides of the same coin. On one hand we have readers of the documentation who may be confused, and on the other hand we have developers who run into this and have to spend time digging into the code to figure out what's what.For me, while I have some familiarity with the server code it takes me quite a while to load and find what I am looking for. Then we have the less than clear names like \"resno\" for which I still haven't groked. So imagine someone who has no familiarity with the backend code trying to figure out why 1664 is relevant when the docs mention 1600. Surely there must be some middle groundwhere we can give them some clues without having to wade through the source code ?Dave",
"msg_date": "Tue, 31 May 2022 13:22:44 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "On 2022-May-31, Tom Lane wrote:\n\n> Detail is far from \"free\". Most readers are going to spend more time\n> wondering what the difference is between \"columns per table\" and \"columns\n> per tuple\", and which limit applies when, than they are going to save by\n> having the docs present them with two inconsistent numbers.\n\nI think it's reasonable to have two adjacent rows in the table for these\ntwo closely related things, but rather than \"columns per tuple\" I would\nlabel the second one \"columns in a result set\". This is easy enough to\nunderstand and to differentiate from the other limit.\n\n(Replacing \"in a\" with \"per\" sounds OK to me but less natural, not sure\nwhy.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"The Gord often wonders why people threaten never to come back after they've\nbeen told never to return\" (www.actsofgord.com)\n\n\n",
"msg_date": "Tue, 31 May 2022 20:15:14 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I think it's reasonable to have two adjacent rows in the table for these\n> two closely related things, but rather than \"columns per tuple\" I would\n> label the second one \"columns in a result set\". This is easy enough to\n> understand and to differentiate from the other limit.\n\nOK, with that wording it's probably clear enough.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 May 2022 14:51:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "On Tue, 31 May 2022 at 14:51, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > I think it's reasonable to have two adjacent rows in the table for these\n> > two closely related things, but rather than \"columns per tuple\" I would\n> > label the second one \"columns in a result set\". This is easy enough to\n> > understand and to differentiate from the other limit.\n>\n> OK, with that wording it's probably clear enough.\n>\n> regards, tom lane\n>\n> Reworded patch attached",
"msg_date": "Tue, 31 May 2022 15:07:56 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "On Tue, May 31, 2022 at 01:22:44PM -0400, Dave Cramer wrote:\n> \n> \n> On Tue, 31 May 2022 at 10:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Dave Cramer <davecramer@postgres.rocks> writes:\n> > On Tue, 31 May 2022 at 10:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> We've generally felt that the existing \"columns per table\" limit is\n> >> sufficient detail here.\n> \n> > ISTM that adding detail is free whereas the readers time to figure out\n> why\n> > and where this number came from is not.\n> \n> Detail is far from \"free\". Most readers are going to spend more time\n> wondering what the difference is between \"columns per table\" and \"columns\n> per tuple\", and which limit applies when, than they are going to save by\n> having the docs present them with two inconsistent numbers.\n> \n> \n> Sounds to me like we are discussing different sides of the same coin. On one\n> hand we have readers of the documentation who may be confused, \n> and on the other hand we have developers who run into this and have to spend\n> time digging into the code to figure out what's what.\n\nHow many people ask about this limit. I can't remember one.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 31 May 2022 20:16:23 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "On Wed, 1 Jun 2022 at 07:08, Dave Cramer <davecramer@postgres.rocks> wrote:\n>\n> On Tue, 31 May 2022 at 14:51, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> > I think it's reasonable to have two adjacent rows in the table for these\n>> > two closely related things, but rather than \"columns per tuple\" I would\n>> > label the second one \"columns in a result set\". This is easy enough to\n>> > understand and to differentiate from the other limit.\n>>\n>> OK, with that wording it's probably clear enough.\n\n> Reworded patch attached\n\nI see the patch does not have the same text as what was proposed and\nseconded above. My personal preferences would be \"result set\ncolumns\", but \"columns in a result set\" seems fine too.\n\nI've adjusted the patch to use the wording proposed by Alvaro. See attached.\n\nI will push this shortly.\n\nDavid",
"msg_date": "Wed, 1 Jun 2022 12:32:58 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "On Tue, 31 May 2022 at 20:33, David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 1 Jun 2022 at 07:08, Dave Cramer <davecramer@postgres.rocks>\n> wrote:\n> >\n> > On Tue, 31 May 2022 at 14:51, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>\n> >> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> >> > I think it's reasonable to have two adjacent rows in the table for\n> these\n> >> > two closely related things, but rather than \"columns per tuple\" I\n> would\n> >> > label the second one \"columns in a result set\". This is easy enough\n> to\n> >> > understand and to differentiate from the other limit.\n> >>\n> >> OK, with that wording it's probably clear enough.\n>\n> > Reworded patch attached\n>\n> I see the patch does not have the same text as what was proposed and\n> seconded above. My personal preferences would be \"result set\n> columns\", but \"columns in a result set\" seems fine too.\n>\n> I've adjusted the patch to use the wording proposed by Alvaro. See\n> attached.\n>\n> I will push this shortly.\n>\n> David\n>\n\nThanks David, Apparently I am truly unable to multi-task.\n\nDave\n\nOn Tue, 31 May 2022 at 20:33, David Rowley <dgrowleyml@gmail.com> wrote:On Wed, 1 Jun 2022 at 07:08, Dave Cramer <davecramer@postgres.rocks> wrote:\n>\n> On Tue, 31 May 2022 at 14:51, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> > I think it's reasonable to have two adjacent rows in the table for these\n>> > two closely related things, but rather than \"columns per tuple\" I would\n>> > label the second one \"columns in a result set\". This is easy enough to\n>> > understand and to differentiate from the other limit.\n>>\n>> OK, with that wording it's probably clear enough.\n\n> Reworded patch attached\n\nI see the patch does not have the same text as what was proposed and\nseconded above. 
My personal preferences would be \"result set\ncolumns\", but \"columns in a result set\" seems fine too.\n\nI've adjusted the patch to use the wording proposed by Alvaro. See attached.\n\nI will push this shortly.\n\nDavidThanks David, Apparently I am truly unable to multi-task.Dave",
"msg_date": "Tue, 31 May 2022 20:37:35 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I've adjusted the patch to use the wording proposed by Alvaro. See attached.\n\nShould we also change the adjacent item to \"columns in a table\",\nfor consistency of wording? Not sure though, because s/per/in a/\nthroughout the list doesn't seem like it'd be an improvement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 May 2022 20:42:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "On 1/06/22 12:42, Tom Lane wrote:\n> David Rowley <dgrowleyml@gmail.com> writes:\n>> I've adjusted the patch to use the wording proposed by Alvaro. See attached.\n> Should we also change the adjacent item to \"columns in a table\",\n> for consistency of wording? Not sure though, because s/per/in a/\n> throughout the list doesn't seem like it'd be an improvement.\n>\n> \t\t\tregards, tom lane\n>\n>\nI like the word 'per' better than the phrase 'in a', at least in this \ncontext.\n\n(Though I'm not too worried either way!)\n\n\nCheers,\nGavin\n\n\n\n",
"msg_date": "Wed, 1 Jun 2022 12:50:53 +1200",
"msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "On Wed, 1 Jun 2022 at 12:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I've adjusted the patch to use the wording proposed by Alvaro. See attached.\n>\n> Should we also change the adjacent item to \"columns in a table\",\n> for consistency of wording? Not sure though, because s/per/in a/\n> throughout the list doesn't seem like it'd be an improvement.\n\nI might agree if there weren't so many other \"per\"s in the list.\n\nMaybe \"columns per result set\" would have been a better title for consistency.\n\nDavid\n\n\n",
"msg_date": "Wed, 1 Jun 2022 12:51:24 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Maybe \"columns per result set\" would have been a better title for consistency.\n\nI can't quite put my finger on why, but that wording seems odd to me,\neven though \"columns per table\" is natural enough. \"In a\" reads much\nbetter here IMO. Anyway, I see you committed it that way, and it's\ncertainly not worth the effort to change further.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 May 2022 20:55:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
},
{
"msg_contents": "On Tue, 31 May 2022 at 12:00, Vladimir Sitnikov\n<sitnikov.vladimir@gmail.com> wrote:\n>\n> Please, do not suggest me avoid 65535 parameters. What I wanted was just to test that the driver was able to handle 65535 parameters.\n\nI don't think we have regression tests to cover things at these\nlimits, that might be worth adding if they're not too awkward to\nmaintain.\n\n-- \ngreg\n\n\n",
"msg_date": "Thu, 2 Jun 2022 10:58:33 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Limits: maximum number of columns in SELECT result"
}
] |
[
{
"msg_contents": "PSA a patch to fix a spelling mistake that I happened upon...\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 31 May 2022 18:27:59 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix spelling mistake in README file"
},
{
"msg_contents": "On Tue, May 31, 2022 at 1:58 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> PSA a patch to fix a spelling mistake that I happened upon...\n>\n\nLGTM. I'll push this in some time. Thanks!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 31 May 2022 14:07:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix spelling mistake in README file"
}
] |
[
{
"msg_contents": "Hi, hackers!\n\nThere are not many commands in PostgreSQL for working with partitioned \ntables. This is an obstacle to their widespread use.\nAdding SPLIT PARTITION/MERGE PARTITIONS operations can make easier to \nuse partitioned tables in PostgreSQL.\n(This is especially important when migrating projects from ORACLE DBMS.)\n\nSPLIT PARTITION/MERGE PARTITIONS commands are supported for range \npartitioning (BY RANGE) and for list partitioning (BY LIST).\nFor hash partitioning (BY HASH) these operations are not supported.\n\n=================\n1 SPLIT PARTITION\n=================\nCommand for split a single partition.\n\n1.1 Syntax\n----------\n\nALTER TABLE <name> SPLIT PARTITION <partition_name> INTO\n(PARTITION <partition_name1> { FOR VALUES <partition_bound_spec> | \nDEFAULT },\n [ ... ]\n PARTITION <partition_nameN> { FOR VALUES <partition_bound_spec> | \nDEFAULT })\n\n<partition_bound_spec>:\n\tIN ( <partition_bound_expr> [, ...] ) |\n\tFROM ( { <partition_bound_expr> | MINVALUE | MAXVALUE } [, ...] )\n\tTO ( { <partition_bound_expr> | MINVALUE | MAXVALUE } [, ...] 
)\n\n1.2 Rules\n---------\n\n1.2.1 The <partition_name> partition should be split into two (or more) \npartitions.\n\n1.2.2 New partitions should have different names (with existing \npartitions too).\n\n1.2.3 Bounds of new partitions should not overlap with new and existing \npartitions.\n\n1.2.4 In case split partition is DEFAULT partition, one of new \npartitions should be DEFAULT.\n\n1.2.5 In case new partitions or existing partitions contains DEFAULT \npartition, new partitions <partition_name1>...<partition_nameN> can have \nany bounds inside split partition bound (can be spaces between \npartitions bounds).\n\n1.2.6 In case partitioned table does not have DEFAULT partition, DEFAULT \npartition can be defined as one of new partition.\n\n1.2.7 In case new partitions not contains DEFAULT partition and \npartitioned table does not have DEFAULT partition the following should \nbe true: sum bounds of new partitions \n<partition_name1>...<partition_nameN> should be equal to bound of split \npartition <partition_name>.\n\n1.2.8 One of the new partitions <partition_name1>-<partition_nameN> can \nhave the same name as split partition <partition_name> (this is suitable \nin case splitting a DEFAULT partition: we split it, but after splitting \nwe have a partition with the same name).\n\n1.2.9 Only simple (non-partitioned) partitions can be split.\n\n1.3 Examples\n------------\n\n1.3.1 Example for range partitioning (BY RANGE):\n\nCREATE TABLE sales_range (salesman_id INT, salesman_name VARCHAR(30), \nsales_amount INT, sales_date DATE) PARTITION BY RANGE (sales_date);\nCREATE TABLE sales_jan2022 PARTITION OF sales_range FOR VALUES FROM \n('2022-01-01') TO ('2022-02-01');\nCREATE TABLE sales_feb_mar_apr2022 PARTITION OF sales_range FOR VALUES \nFROM ('2022-02-01') TO ('2022-05-01');\nCREATE TABLE sales_others PARTITION OF sales_range DEFAULT;\n\nALTER TABLE sales_range SPLIT PARTITION sales_feb_mar_apr2022 INTO\n (PARTITION sales_feb2022 FOR VALUES FROM ('2022-02-01') TO 
\n('2022-03-01'),\n PARTITION sales_mar2022 FOR VALUES FROM ('2022-03-01') TO \n('2022-04-01'),\n PARTITION sales_apr2022 FOR VALUES FROM ('2022-04-01') TO \n('2022-05-01'));\n\n1.3.2 Example for list partitioning (BY LIST):\n\nCREATE TABLE sales_list\n (salesman_id INT GENERATED ALWAYS AS IDENTITY,\n salesman_name VARCHAR(30),\n sales_state VARCHAR(20),\n sales_amount INT,\n sales_date DATE)\nPARTITION BY LIST (sales_state);\n\nCREATE TABLE sales_nord PARTITION OF sales_list FOR VALUES IN \n('Murmansk', 'St. Petersburg', 'Ukhta');\nCREATE TABLE sales_all PARTITION OF sales_list FOR VALUES IN ('Moscow', \n'Voronezh', 'Smolensk', 'Bryansk', 'Magadan', 'Kazan', 'Khabarovsk', \n'Volgograd', 'Vladivostok');\nCREATE TABLE sales_others PARTITION OF sales_list DEFAULT;\n\nALTER TABLE sales_list SPLIT PARTITION sales_all INTO\n (PARTITION sales_west FOR VALUES IN ('Voronezh', 'Smolensk', 'Bryansk'),\n PARTITION sales_east FOR VALUES IN ('Magadan', 'Khabarovsk', \n'Vladivostok'),\n PARTITION sales_central FOR VALUES IN ('Moscow', 'Kazan', \n'Volgograd'));\n\n1.4 ToDo:\n---------\n\n1.4.1 Possibility to specify tablespace for each of the new partitions \n(currently new partitions are created in the same tablespace as split \npartition).\n1.4.2 Possibility to use CONCURRENTLY mode that allows (during the SPLIT \noperation) not blocking partitions that are not splitting.\n\n==================\n2 MERGE PARTITIONS\n==================\nCommand for merge several partitions into one partition.\n\n2.1 Syntax\n----------\n\nALTER TABLE <name> MERGE PARTITIONS (<partition_name1>, \n<partition_name2>[, ...]) INTO <new_partition_name>;\n\n2.2 Rules\n---------\n\n2.2.1 The number of partitions that are merged into the new partition \n<new_partition_name> should be at least two.\n\n2.2.2\nIf DEFAULT partition is not in the list of partitions <partition_name1>, \n<partition_name2>[, ...]:\n * for range partitioning (BY RANGE) is necessary that the ranges of \nthe partitions 
<partition_name1>, <partition_name2>[, ...] can be merged \ninto one range without spaces and overlaps (otherwise an error will be \ngenerated).\n The combined range will be the range for the partition \n<new_partition_name>.\n * for list partitioning (BY LIST) the values lists of all partitions \n<partition_name1>, <partition_name2>[, ...] are combined and form a list \nof values of partition <new_partition_name>.\n\nIf DEFAULT partition is in the list of partitions <partition_name1>, \n<partition_name2>[, ...]:\n * the partition <new_partition_name> will be the DEFAULT partition;\n * for both partitioning types (BY RANGE, BY LIST) the ranges and \nlists of values of the merged partitions can be any.\n\n2.2.3 The new partition <new_partition_name> can have the same name as \none of the merged partitions.\n\n2.2.4 Only simple (non-partitioned) partitions can be merged.\n\n2.3 Examples\n------------\n\n2.3.1 Example for range partitioning (BY RANGE):\n\nCREATE TABLE sales_range (salesman_id INT, salesman_name VARCHAR(30), \nsales_amount INT, sales_date DATE) PARTITION BY RANGE (sales_date);\nCREATE TABLE sales_jan2022 PARTITION OF sales_range FOR VALUES FROM \n('2022-01-01') TO ('2022-02-01');\nCREATE TABLE sales_feb2022 PARTITION OF sales_range FOR VALUES FROM \n('2022-02-01') TO ('2022-03-01');\nCREATE TABLE sales_mar2022 PARTITION OF sales_range FOR VALUES FROM \n('2022-03-01') TO ('2022-04-01');\nCREATE TABLE sales_apr2022 PARTITION OF sales_range FOR VALUES FROM \n('2022-04-01') TO ('2022-05-01');\nCREATE TABLE sales_others PARTITION OF sales_range DEFAULT;\n\nALTER TABLE sales_range MERGE PARTITIONS (sales_feb2022, sales_mar2022, \nsales_apr2022) INTO sales_feb_mar_apr2022;\n\n2.3.2 Example for list partitioning (BY LIST):\n\nCREATE TABLE sales_list\n(salesman_id INT GENERATED ALWAYS AS IDENTITY,\n salesman_name VARCHAR(30),\n sales_state VARCHAR(20),\n sales_amount INT,\n sales_date DATE)\nPARTITION BY LIST (sales_state);\n\nCREATE TABLE sales_nord PARTITION 
OF sales_list FOR VALUES IN \n('Murmansk', 'St. Petersburg', 'Ukhta');\nCREATE TABLE sales_west PARTITION OF sales_list FOR VALUES IN \n('Voronezh', 'Smolensk', 'Bryansk');\nCREATE TABLE sales_east PARTITION OF sales_list FOR VALUES IN \n('Magadan', 'Khabarovsk', 'Vladivostok');\nCREATE TABLE sales_central PARTITION OF sales_list FOR VALUES IN \n('Moscow', 'Kazan', 'Volgograd');\nCREATE TABLE sales_others PARTITION OF sales_list DEFAULT;\n\nALTER TABLE sales_list MERGE PARTITIONS (sales_west, sales_east, \nsales_central) INTO sales_all;\n\n2.4 ToDo:\n---------\n\n2.4.1 Possibility to specify tablespace for the new partition (currently \nnew partition is created in the same tablespace as partitioned table).\n2.4.2 Possibility to use CONCURRENTLY mode that allows (during the MERGE \noperation) not blocking partitions that are not merging.\n2.4.3 New syntax for ALTER TABLE ... MERGE PARTITIONS command for range \npartitioning (BY RANGE):\n\nALTER TABLE <name> MERGE PARTITIONS <partition_name1> TO \n<partition_name2> INTO <new_partition_name>;\n\nThis command can merge all partitions between <partition_name1> and \n<partition_name2> into new partition <new_partition_name>.\nThis can be useful for this example cases: need to merge all one-month \npartitions into a year partition or need to merge all one-day partitions \ninto a month partition.\n\nYour opinions are very much welcome!\n\n-- \nWith best regards,\nDmitry Koval.",
"msg_date": "Tue, 31 May 2022 12:32:43 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Tue, 31 May 2022 at 11:33, Dmitry Koval <d.koval@postgrespro.ru> wrote:\n>\n> Hi, hackers!\n>\n> There are not many commands in PostgreSQL for working with partitioned\n> tables. This is an obstacle to their widespread use.\n> Adding SPLIT PARTITION/MERGE PARTITIONS operations can make easier to\n> use partitioned tables in PostgreSQL.\n\nThat is quite a nice and useful feature to have.\n\n> (This is especially important when migrating projects from ORACLE DBMS.)\n>\n> SPLIT PARTITION/MERGE PARTITIONS commands are supported for range\n> partitioning (BY RANGE) and for list partitioning (BY LIST).\n> For hash partitioning (BY HASH) these operations are not supported.\n\nJust out of curiosity, why is SPLIT / MERGE support not included for\nHASH partitions? Because sibling partitions can have a different\nmodulus, you should be able to e.g. split a partition with (modulus,\nremainder) of (3, 1) into two partitions with (mod, rem) of (6, 1) and\n(6, 4) respectively, with the reverse being true for merge operations,\nright?\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Tue, 31 May 2022 12:30:22 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Tue, 2022-05-31 at 12:32 +0300, Dmitry Koval wrote:\n> There are not many commands in PostgreSQL for working with partitioned \n> tables. This is an obstacle to their widespread use.\n> Adding SPLIT PARTITION/MERGE PARTITIONS operations can make easier to \n> use partitioned tables in PostgreSQL.\n> (This is especially important when migrating projects from ORACLE DBMS.)\n> \n> SPLIT PARTITION/MERGE PARTITIONS commands are supported for range \n> partitioning (BY RANGE) and for list partitioning (BY LIST).\n> For hash partitioning (BY HASH) these operations are not supported.\n\n+1 on the general idea.\n\nAt least, it will makes these operations simpler, but probably also less\ninvasive (no need to detach the affected partitions).\n\n\nI didn't read the patch, but what lock level does that place on the\npartitioned table? Anything more than ACCESS SHARE?\n\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 31 May 2022 13:02:27 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": ">Just out of curiosity, why is SPLIT / MERGE support not included for\n >HASH partitions? Because sibling partitions can have a different\n >modulus, you should be able to e.g. split a partition with (modulus,\n >remainder) of (3, 1) into two partitions with (mod, rem) of (6, 1) and\n >(6, 4) respectively, with the reverse being true for merge operations,\n >right?\n\nYou are right, SPLIT/MERGE operations can be added for HASH-partitioning \nin the future. But HASH-partitioning is rarer than RANGE- and \nLIST-partitioning and I decided to skip it in the first step.\nMaybe community will say that SPLIT/MERGE commands are not needed... (At \nfirst step I would like to make sure that it is no true)\n\nP.S. I attached patch with 1-line warning fix (for cfbot).\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Tue, 31 May 2022 22:43:16 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "> I didn't read the patch, but what lock level does that place on the\n> partitioned table? Anything more than ACCESS SHARE?\n\nCurrent patch locks a partitioned table with ACCESS EXCLUSIVE lock. \nUnfortunately only this lock guarantees that other session can not work \nwith partitions that are splitting or merging.\n\nI want add CONCURRENTLY mode in future. With this mode partitioned table \nduring SPLIT/MERGE operation will be locked with SHARE UPDATE EXCLUSIVE \n(as ATTACH/DETACH PARTITION commands in CONCURRENTLY mode).\nBut in this case queries from other sessions that want to work with \npartitions that are splitting/merging at this time should receive an \nerror (like \"Partition data is moving. Repeat the operation later\") \nbecause old partitions will be deleted at the end of SPLIT/MERGE operation.\nI hope exists a better solution, but I don't know it now...\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n",
"msg_date": "Tue, 31 May 2022 23:22:32 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Tue, May 31, 2022 at 12:43 PM Dmitry Koval <d.koval@postgrespro.ru>\nwrote:\n\n> >Just out of curiosity, why is SPLIT / MERGE support not included for\n> >HASH partitions? Because sibling partitions can have a different\n> >modulus, you should be able to e.g. split a partition with (modulus,\n> >remainder) of (3, 1) into two partitions with (mod, rem) of (6, 1) and\n> >(6, 4) respectively, with the reverse being true for merge operations,\n> >right?\n>\n> You are right, SPLIT/MERGE operations can be added for HASH-partitioning\n> in the future. But HASH-partitioning is rarer than RANGE- and\n> LIST-partitioning and I decided to skip it in the first step.\n> Maybe community will say that SPLIT/MERGE commands are not needed... (At\n> first step I would like to make sure that it is no true)\n>\n> P.S. I attached patch with 1-line warning fix (for cfbot).\n> --\n> With best regards,\n> Dmitry Koval\n>\n> Postgres Professional: http://postgrespro.com\n\n\nHi,\nFor attachPartTable, the parameter wqueue is missing from comment.\nThe parameters of CloneRowTriggersToPartition are called parent\nand partition. I think it is better to name the parameters to\nattachPartTable in a similar manner.\n\nFor struct SplitPartContext, SplitPartitionContext would be better name.\n\n+ /* Store partition contect into list. */\ncontect -> context\n\nCheers\n\nOn Tue, May 31, 2022 at 12:43 PM Dmitry Koval <d.koval@postgrespro.ru> wrote: >Just out of curiosity, why is SPLIT / MERGE support not included for\n >HASH partitions? Because sibling partitions can have a different\n >modulus, you should be able to e.g. split a partition with (modulus,\n >remainder) of (3, 1) into two partitions with (mod, rem) of (6, 1) and\n >(6, 4) respectively, with the reverse being true for merge operations,\n >right?\n\nYou are right, SPLIT/MERGE operations can be added for HASH-partitioning \nin the future. 
But HASH-partitioning is rarer than RANGE- and \nLIST-partitioning and I decided to skip it in the first step.\nMaybe community will say that SPLIT/MERGE commands are not needed... (At \nfirst step I would like to make sure that it is no true)\n\nP.S. I attached patch with 1-line warning fix (for cfbot).\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.comHi,For attachPartTable, the parameter wqueue is missing from comment.The parameters of CloneRowTriggersToPartition are called parent and partition. I think it is better to name the parameters to attachPartTable in a similar manner.For struct SplitPartContext, SplitPartitionContext would be better name.+ /* Store partition contect into list. */contect -> contextCheers",
"msg_date": "Tue, 31 May 2022 13:43:26 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Tue, May 31, 2022 at 1:43 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Tue, May 31, 2022 at 12:43 PM Dmitry Koval <d.koval@postgrespro.ru>\n> wrote:\n>\n>> >Just out of curiosity, why is SPLIT / MERGE support not included for\n>> >HASH partitions? Because sibling partitions can have a different\n>> >modulus, you should be able to e.g. split a partition with (modulus,\n>> >remainder) of (3, 1) into two partitions with (mod, rem) of (6, 1) and\n>> >(6, 4) respectively, with the reverse being true for merge operations,\n>> >right?\n>>\n>> You are right, SPLIT/MERGE operations can be added for HASH-partitioning\n>> in the future. But HASH-partitioning is rarer than RANGE- and\n>> LIST-partitioning and I decided to skip it in the first step.\n>> Maybe community will say that SPLIT/MERGE commands are not needed... (At\n>> first step I would like to make sure that it is no true)\n>>\n>> P.S. I attached patch with 1-line warning fix (for cfbot).\n>> --\n>> With best regards,\n>> Dmitry Koval\n>>\n>> Postgres Professional: http://postgrespro.com\n>\n>\n> Hi,\n> For attachPartTable, the parameter wqueue is missing from comment.\n> The parameters of CloneRowTriggersToPartition are called parent\n> and partition. I think it is better to name the parameters to\n> attachPartTable in a similar manner.\n>\n> For struct SplitPartContext, SplitPartitionContext would be better name.\n>\n> + /* Store partition contect into list. 
*/\n> contect -> context\n>\n> Cheers\n>\nHi,\nFor transformPartitionCmdForMerge(), nested loop is used to detect\nduplicate names.\nIf the number of partitions in partcmd->partlist, we should utilize map to\nspeed up the check.\n\nFor check_parent_values_in_new_partitions():\n\n+ if (!find_value_in_new_partitions(&key->partsupfunc[0],\n+ key->partcollation, parts,\nnparts, datum, false))\n+ found = false;\n\nIt seems we can break out of the loop when found is false.\n\nCheers\n\nOn Tue, May 31, 2022 at 1:43 PM Zhihong Yu <zyu@yugabyte.com> wrote:On Tue, May 31, 2022 at 12:43 PM Dmitry Koval <d.koval@postgrespro.ru> wrote: >Just out of curiosity, why is SPLIT / MERGE support not included for\n >HASH partitions? Because sibling partitions can have a different\n >modulus, you should be able to e.g. split a partition with (modulus,\n >remainder) of (3, 1) into two partitions with (mod, rem) of (6, 1) and\n >(6, 4) respectively, with the reverse being true for merge operations,\n >right?\n\nYou are right, SPLIT/MERGE operations can be added for HASH-partitioning \nin the future. But HASH-partitioning is rarer than RANGE- and \nLIST-partitioning and I decided to skip it in the first step.\nMaybe community will say that SPLIT/MERGE commands are not needed... (At \nfirst step I would like to make sure that it is no true)\n\nP.S. I attached patch with 1-line warning fix (for cfbot).\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.comHi,For attachPartTable, the parameter wqueue is missing from comment.The parameters of CloneRowTriggersToPartition are called parent and partition. I think it is better to name the parameters to attachPartTable in a similar manner.For struct SplitPartContext, SplitPartitionContext would be better name.+ /* Store partition contect into list. 
*/contect -> contextCheersHi,For transformPartitionCmdForMerge(), nested loop is used to detect duplicate names.If the number of partitions in partcmd->partlist, we should utilize map to speed up the check.For check_parent_values_in_new_partitions():+ if (!find_value_in_new_partitions(&key->partsupfunc[0],+ key->partcollation, parts, nparts, datum, false))+ found = false;It seems we can break out of the loop when found is false.Cheers",
"msg_date": "Tue, 31 May 2022 15:14:25 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi,\n\n1)\n> For attachPartTable, the parameter wqueue is missing from comment.\n> The parameters of CloneRowTriggersToPartition are called parent and partition.\n> I think it is better to name the parameters to attachPartTable in a similar manner.\n> \n> For struct SplitPartContext, SplitPartitionContext would be better name.\n> \n> + /* Store partition contect into list. */\n> contect -> context\n\nThanks, changed.\n\n2)\n> For transformPartitionCmdForMerge(), nested loop is used to detect duplicate names.\n> If the number of partitions in partcmd->partlist, we should utilize map to speed up the check.\n\nI'm not sure what we should utilize map in this case because chance that \nnumber of merging partitions exceed dozens is low.\nIs there a function example that uses a map for such a small number of \nelements?\n\n3)\n> For check_parent_values_in_new_partitions():\n> \n> + if (!find_value_in_new_partitions(&key->partsupfunc[0],\n> + key->partcollation, parts, nparts, datum, false))\n> + found = false;\n> \n> It seems we can break out of the loop when found is false.\n\nWe have implicit \"break\" in \"for\" construction:\n\n+\tfor (i = 0; i < boundinfo->ndatums && found; i++)\n\nI'll change it to explicit \"break;\" to avoid confusion.\n\n\nAttached patch with the changes described above.\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Wed, 1 Jun 2022 21:58:42 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 11:58 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n\n> Hi,\n>\n> 1)\n> > For attachPartTable, the parameter wqueue is missing from comment.\n> > The parameters of CloneRowTriggersToPartition are called parent and\n> partition.\n> > I think it is better to name the parameters to attachPartTable in a\n> similar manner.\n> >\n> > For struct SplitPartContext, SplitPartitionContext would be better name.\n> >\n> > + /* Store partition contect into list. */\n> > contect -> context\n>\n> Thanks, changed.\n>\n> 2)\n> > For transformPartitionCmdForMerge(), nested loop is used to detect\n> duplicate names.\n> > If the number of partitions in partcmd->partlist, we should utilize map\n> to speed up the check.\n>\n> I'm not sure what we should utilize map in this case because chance that\n> number of merging partitions exceed dozens is low.\n> Is there a function example that uses a map for such a small number of\n> elements?\n>\n> 3)\n> > For check_parent_values_in_new_partitions():\n> >\n> > + if (!find_value_in_new_partitions(&key->partsupfunc[0],\n> > + key->partcollation, parts,\n> nparts, datum, false))\n> > + found = false;\n> >\n> > It seems we can break out of the loop when found is false.\n>\n> We have implicit \"break\" in \"for\" construction:\n>\n> + for (i = 0; i < boundinfo->ndatums && found; i++)\n>\n> I'll change it to explicit \"break;\" to avoid confusion.\n>\n>\n> Attached patch with the changes described above.\n> --\n> With best regards,\n> Dmitry Koval\n>\n> Postgres Professional: http://postgrespro.com\n\nHi,\nThanks for your response.\n\nw.r.t. 
#2, I think using nested loop is fine for now.\nIf, when this feature is merged, some user comes up with long merge list,\nwe can revisit this topic.\n\nCheers\n\nOn Wed, Jun 1, 2022 at 11:58 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:Hi,\n\n1)\n> For attachPartTable, the parameter wqueue is missing from comment.\n> The parameters of CloneRowTriggersToPartition are called parent and partition.\n> I think it is better to name the parameters to attachPartTable in a similar manner.\n> \n> For struct SplitPartContext, SplitPartitionContext would be better name.\n> \n> + /* Store partition contect into list. */\n> contect -> context\n\nThanks, changed.\n\n2)\n> For transformPartitionCmdForMerge(), nested loop is used to detect duplicate names.\n> If the number of partitions in partcmd->partlist, we should utilize map to speed up the check.\n\nI'm not sure what we should utilize map in this case because chance that \nnumber of merging partitions exceed dozens is low.\nIs there a function example that uses a map for such a small number of \nelements?\n\n3)\n> For check_parent_values_in_new_partitions():\n> \n> + if (!find_value_in_new_partitions(&key->partsupfunc[0],\n> + key->partcollation, parts, nparts, datum, false))\n> + found = false;\n> \n> It seems we can break out of the loop when found is false.\n\nWe have implicit \"break\" in \"for\" construction:\n\n+ for (i = 0; i < boundinfo->ndatums && found; i++)\n\nI'll change it to explicit \"break;\" to avoid confusion.\n\n\nAttached patch with the changes described above.\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.comHi,Thanks for your response. w.r.t. #2, I think using nested loop is fine for now.If, when this feature is merged, some user comes up with long merge list, we can revisit this topic.Cheers",
"msg_date": "Wed, 1 Jun 2022 12:10:22 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\nPatch stop applying due to changes in upstream.\nHere is a rebased version.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Wed, 13 Jul 2022 21:27:44 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Wed, Jul 13, 2022 at 11:28 AM Dmitry Koval <d.koval@postgrespro.ru>\nwrote:\n\n> Hi!\n>\n> Patch stop applying due to changes in upstream.\n> Here is a rebased version.\n>\n> --\n> With best regards,\n> Dmitry Koval\n>\n> Postgres Professional: http://postgrespro.com\n\nHi,\n\n+attachPartTable(List **wqueue, Relation rel, Relation partition,\nPartitionBoundSpec *bound)\n\nI checked naming of existing methods, such as AttachPartitionEnsureIndexes.\nI think it would be better if the above method is\nnamed attachPartitionTable.\n\n+ if (!defaultPartCtx && OidIsValid(defaultPartOid))\n+ {\n+ pc = createSplitPartitionContext(table_open(defaultPartOid,\nAccessExclusiveLock));\n\nSince the value of pc would be passed to defaultPartCtx, there is no need\nto assign to pc above. You can assign directly to defaultPartCtx.\n\n+ /* Drop splitted partition. */\n\nsplitted -> split\n\n+ /* Rename new partition if it is need. */\n\nneed -> needed.\n\nCheers\n\nOn Wed, Jul 13, 2022 at 11:28 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:Hi!\n\nPatch stop applying due to changes in upstream.\nHere is a rebased version.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.comHi,+attachPartTable(List **wqueue, Relation rel, Relation partition, PartitionBoundSpec *bound) I checked naming of existing methods, such as AttachPartitionEnsureIndexes.I think it would be better if the above method is named attachPartitionTable.+ if (!defaultPartCtx && OidIsValid(defaultPartOid))+ {+ pc = createSplitPartitionContext(table_open(defaultPartOid, AccessExclusiveLock));Since the value of pc would be passed to defaultPartCtx, there is no need to assign to pc above. You can assign directly to defaultPartCtx.+ /* Drop splitted partition. */splitted -> split+ /* Rename new partition if it is need. */need -> needed.Cheers",
"msg_date": "Wed, 13 Jul 2022 12:03:46 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Thanks you!\nI've fixed all things mentioned.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Wed, 13 Jul 2022 23:05:44 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Wed, Jul 13, 2022 at 1:05 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n\n> Thanks you!\n> I've fixed all things mentioned.\n>\n> --\n> With best regards,\n> Dmitry Koval\n>\n> Postgres Professional: http://postgrespro.com\n\nHi,\nToward the end of ATExecSplitPartition():\n\n+ /* Unlock new partition. */\n+ table_close(newPartRel, NoLock);\n\n Why is NoLock passed (instead of AccessExclusiveLock) ?\n\nCheers\n\nOn Wed, Jul 13, 2022 at 1:05 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:Thanks you!\nI've fixed all things mentioned.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.comHi,Toward the end of ATExecSplitPartition():+ /* Unlock new partition. */+ table_close(newPartRel, NoLock); Why is NoLock passed (instead of AccessExclusiveLock) ?Cheers",
"msg_date": "Wed, 13 Jul 2022 13:17:30 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "> + /* Unlock new partition. */\n> + table_close(newPartRel, NoLock);\n> \n> Why is NoLock passed (instead of AccessExclusiveLock) ?\n\nThanks!\n\nYou're right, I replaced the comment with \"Keep the lock until commit.\".\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Wed, 13 Jul 2022 23:33:45 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "This is not a review, but I think the isolation tests should be\nexpanded. At least, include the case of serializable transactions being\ninvolved.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Pensar que el espectro que vemos es ilusorio no lo despoja de espanto,\nsólo le suma el nuevo terror de la locura\" (Perelandra, C.S. Lewis)\n\n\n",
"msg_date": "Thu, 14 Jul 2022 10:12:14 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "> This is not a review, but I think the isolation tests should be\n> expanded. At least, include the case of serializable transactions being\n> involved.\n\nThanks!\nI will expand the tests for the next commitfest.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n",
"msg_date": "Fri, 15 Jul 2022 14:00:48 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\nPatch stop applying due to changes in upstream.\nHere is a rebased version.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Thu, 11 Aug 2022 09:56:37 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "> I will expand the tests for the next commitfest.\n\nHi!\n\nCombinations of isolation modes (READ COMMITTED/REPEATABLE \nREAD/SERIALIZABLE) were added to test\n\nsrc/test/isolation/specs/partition-split-merge.spec\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Mon, 29 Aug 2022 19:56:47 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\nPatch stop applying due to changes in upstream.\nHere is a rebased version.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Wed, 7 Sep 2022 20:03:09 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Wed, Sep 07, 2022 at 08:03:09PM +0300, Dmitry Koval wrote:\n> Hi!\n> \n> Patch stop applying due to changes in upstream.\n> Here is a rebased version.\n\nThis crashes on freebsd with -DRELCACHE_FORCE_RELEASE\nhttps://cirrus-ci.com/task/6565371623768064\nhttps://cirrus-ci.com/task/6145355992530944\n\nNote that that's a modified cirrus script from my CI improvements branch\nwhich also does some extra/different things.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 7 Sep 2022 13:43:34 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Thanks a lot Justin!\n\nAfter compilation PostgreSQL+patch with macros\nRELCACHE_FORCE_RELEASE,\nCOPY_PARSE_PLAN_TREES,\nWRITE_READ_PARSE_PLAN_TREES,\nRAW_EXPRESSION_COVERAGE_TEST,\nRANDOMIZE_ALLOCATED_MEMORY,\nI saw a problem on Windows 10, MSVC2019.\n\n(I hope this problem was the same as on Cirrus CI).\n\nAttached patch with fix.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Thu, 8 Sep 2022 14:35:24 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Thu, Sep 08, 2022 at 02:35:24PM +0300, Dmitry Koval wrote:\n> Thanks a lot Justin!\n> \n> After compilation PostgreSQL+patch with macros\n> RELCACHE_FORCE_RELEASE,\n> RANDOMIZE_ALLOCATED_MEMORY,\n> I saw a problem on Windows 10, MSVC2019.\n\nYes, it passes tests on my CI improvements branch.\nhttps://github.com/justinpryzby/postgres/runs/8248668269\nThanks to Alexander Pyhalov for reminding me about\nRELCACHE_FORCE_RELEASE last year ;)\n\nOn Tue, May 31, 2022 at 12:32:43PM +0300, Dmitry Koval wrote:\n> This can be useful for this example cases: \n> need to merge all one-day partitions\n> into a month partition.\n\n+1, we would use this (at least the MERGE half).\n\nI wonder if it's possible to reduce the size of this patch (I'm starting\nto try to absorb it). Is there a way to refactor/reuse existing code to\nreduce its footprint ?\n\npartbounds.c is adding 500+ LOC about checking if proposed partitions\nmeet the requirements (don't overlap, etc). But a lot of those checks\nmust already happen, no? Can you re-use/refactor the existing checks ?\n\nAn UPDATE on a partitioned table will move tuples from one partition to\nanother. Is there a way to re-use that ? Also, postgres already\nsupports concurrent DDL (CREATE+ATTACH and DETACH CONCURRENTLY). Is it \npossible to leverage that ? (Mostly to reduce the patch size, but also\nbecause maybe some cases could be concurrent?).\n\nIf the patch were split into separate parts for MERGE and SPLIT, would\nthe first patch be significantly smaller than the existing patch\n(hopefully half as big) ? That would help to review it, even if both\nhalves were ultimately squished together. (An easy way to do this is to\nopen up all the files in separate editor instances, trim out the parts\nthat aren't needed for the first patch, save the files but don't quit\nthe editors, test compilation and regression tests, then git commit\n--amend -a. 
Then in each editor, \"undo\" all the trimmed changes, save,\nand git commit -a).\n\nWould it save much code if \"default\" partitions weren't handled in the\nfirst patch ?\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 8 Sep 2022 07:26:04 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On 2022-Sep-08, Justin Pryzby wrote:\n\n> If the patch were split into separate parts for MERGE and SPLIT, would\n> the first patch be significantly smaller than the existing patch\n> (hopefully half as big) ? That would help to review it, even if both\n> halves were ultimately squished together. (An easy way to do this is to\n> open up all the files in separate editor instances, trim out the parts\n> that aren't needed for the first patch, save the files but don't quit\n> the editors, test compilation and regression tests, then git commit\n> --amend -a. Then in each editor, \"undo\" all the trimmed changes, save,\n> and git commit -a).\n\nAn easier (IMO) way to do that is to use \"git gui\" or even \"git add -p\",\nwhich allow you to selectively add changed lines/hunks to the index.\nYou add a few, commit, then add the rest, commit again. With \"git add\n-p\" you can even edit individual hunks in an editor in case you have a\nmix of both wanted and unwanted in a single hunk (after \"s\"plitting, of\ncourse), which turns out to be easier than it sounds.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El sudor es la mejor cura para un pensamiento enfermo\" (Bardia)\n\n\n",
"msg_date": "Thu, 8 Sep 2022 16:10:54 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Thanks for your advice, Justin and Alvaro!\n\nI'll try to reduce the size of this patch and split it into separate \nparts (for MERGE and SPLIT).\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n",
"msg_date": "Thu, 8 Sep 2022 17:26:51 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\nTwo separate parts for MERGE and SPLIT partitions (without refactoring; \nit will be later)\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Mon, 19 Sep 2022 22:26:28 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Tue, May 31, 2022 at 5:33 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> There are not many commands in PostgreSQL for working with partitioned\n> tables. This is an obstacle to their widespread use.\n> Adding SPLIT PARTITION/MERGE PARTITIONS operations can make easier to\n> use partitioned tables in PostgreSQL.\n> (This is especially important when migrating projects from ORACLE DBMS.)\n>\n> SPLIT PARTITION/MERGE PARTITIONS commands are supported for range\n> partitioning (BY RANGE) and for list partitioning (BY LIST).\n> For hash partitioning (BY HASH) these operations are not supported.\n\nThis may be a good idea, but I would like to point out one\ndisadvantage of this approach.\n\nIf you know that a certain partition is not changing, and you would\nlike to split it, you can create two or more new standalone tables and\npopulate them from the original partition using INSERT .. SELECT. Then\nyou can BEGIN a transaction, DETACH the existing partitions, and\nATTACH the replacement ones. By doing this, you take an ACCESS\nEXCLUSIVE lock on the partitioned table only for a brief period. The\nsame kind of idea can be used to merge partitions.\n\nIt seems hard to do something comparable with built-in DDL for SPLIT\nPARTITION and MERGE PARTITION. You could start by taking e.g. SHARE\nlock on the existing partition(s) and then wait until the end to take\nACCESS EXCLUSIVE lock on the partitions, but we typically avoid such\ncoding patterns, because the lock upgrade might deadlock and then a\nlot of work would be wasted. 
So most likely with the approach you\npropose here you will end up acquiring ACCESS EXCLUSIVE lock at the\nbeginning of the operation and then shuffle a lot of data around while\nstill holding it, which is pretty painful.\n\nBecause of this problem, I find it hard to believe that these commands\nwould get much use, except perhaps on small tables or in\nnon-production environments, unless people just didn't know about the\nalternatives. That's not to say that something like this has no value.\nAs a convenience feature, it's fine. It's just hard for me to see it\nas any more than that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Sep 2022 15:56:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Thanks for comments and advice!\nI thought about this problem and discussed about it with colleagues.\nUnfortunately, I don't know of a good general solution.\n\n19.09.2022 22:56, Robert Haas пишет:\n> If you know that a certain partition is not changing, and you would\n> like to split it, you can create two or more new standalone tables and\n> populate them from the original partition using INSERT .. SELECT. Then\n> you can BEGIN a transaction, DETACH the existing partitions, and\n> ATTACH the replacement ones. By doing this, you take an ACCESS\n> EXCLUSIVE lock on the partitioned table only for a brief period. The\n> same kind of idea can be used to merge partitions.\n\nBut for specific situation like this (certain partition is not changing) \nwe can add CONCURRENTLY modifier.\nOur DDL query can be like\n\nALTER TABLE...SPLIT PARTITION [CONCURRENTLY];\n\nWith CONCURRENTLY modifier we can lock partitioned table in \nShareUpdateExclusiveLock mode and split partition - in \nAccessExclusiveLock mode. So we don't lock partitioned table in \nAccessExclusiveLock mode and can modify other partitions during SPLIT \noperation (except split partition).\nIf smb try to modify split partition, he will receive error \"relation \ndoes not exist\" at end of operation (because split partition will be drop).\n\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n",
"msg_date": "Mon, 19 Sep 2022 23:42:38 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 4:42 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> Thanks for comments and advice!\n> I thought about this problem and discussed about it with colleagues.\n> Unfortunately, I don't know of a good general solution.\n\nYeah, me neither.\n\n> But for specific situation like this (certain partition is not changing)\n> we can add CONCURRENTLY modifier.\n> Our DDL query can be like\n>\n> ALTER TABLE...SPLIT PARTITION [CONCURRENTLY];\n>\n> With CONCURRENTLY modifier we can lock partitioned table in\n> ShareUpdateExclusiveLock mode and split partition - in\n> AccessExclusiveLock mode. So we don't lock partitioned table in\n> AccessExclusiveLock mode and can modify other partitions during SPLIT\n> operation (except split partition).\n> If smb try to modify split partition, he will receive error \"relation\n> does not exist\" at end of operation (because split partition will be drop).\n\nI think that a built-in DDL command can't really assume that the user\nwon't modify anything. You'd have to take a ShareLock.\n\nBut you might be able to have a CONCURRENTLY variant of the command\nthat does the same kind of multi-transaction thing as, e.g., CREATE\nINDEX CONCURRENTLY. You would probably have to be quite careful about\nrace conditions (e.g. you commit the first transaction and before you\nstart the second one, someone drops or detaches the partition you were\nplanning to merge or split). Might take some thought, but feels\npossibly doable. I've never been excited enough about this kind of\nthing to want to put a lot of energy into engineering it, because\ndoing it \"manually\" feels so much nicer to me, and doubly so given\nthat we now have ATTACH CONCURRENTLY and DETACH CONCURRENTLY, but it\ndoes seem like a thing some people would probably use and value.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Sep 2022 08:20:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\nFixed couple warnings (for cfbot).\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Tue, 11 Oct 2022 19:21:54 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Tue, Oct 11, 2022 at 9:22 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n\n> Hi!\n>\n> Fixed couple warnings (for cfbot).\n>\n> --\n> With best regards,\n> Dmitry Koval\n>\n> Postgres Professional: http://postgrespro.com\n\nHi,\nFor v12-0001-PGPRO-ALTER-TABLE-MERGE-PARTITIONS-command.patch:\n\n+ if (equal(name, cmd->name))\n+ /* One new partition can have the same name as merged\npartition. */\n+ isSameName = true;\n\nI think there should be a check before assigning true to isSameName - if\nisSameName is true, that means there are two partitions with this same name.\n\nCheers\n\nOn Tue, Oct 11, 2022 at 9:22 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:Hi!\n\nFixed couple warnings (for cfbot).\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.comHi,For v12-0001-PGPRO-ALTER-TABLE-MERGE-PARTITIONS-command.patch:+ if (equal(name, cmd->name))+ /* One new partition can have the same name as merged partition. */+ isSameName = true;I think there should be a check before assigning true to isSameName - if isSameName is true, that means there are two partitions with this same name.Cheers",
"msg_date": "Tue, 11 Oct 2022 09:58:01 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Tue, Oct 11, 2022 at 9:58 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Tue, Oct 11, 2022 at 9:22 AM Dmitry Koval <d.koval@postgrespro.ru>\n> wrote:\n>\n>> Hi!\n>>\n>> Fixed couple warnings (for cfbot).\n>>\n>> --\n>> With best regards,\n>> Dmitry Koval\n>>\n>> Postgres Professional: http://postgrespro.com\n>\n> Hi,\n> For v12-0001-PGPRO-ALTER-TABLE-MERGE-PARTITIONS-command.patch:\n>\n> + if (equal(name, cmd->name))\n> + /* One new partition can have the same name as merged\n> partition. */\n> + isSameName = true;\n>\n> I think there should be a check before assigning true to isSameName - if\n> isSameName is true, that means there are two partitions with this same name.\n>\n> Cheers\n>\n\nPardon - I see that transformPartitionCmdForMerge() compares the partition\nnames.\nMaybe you can add a comment in ATExecMergePartitions referring to\ntransformPartitionCmdForMerge() so that people can more easily understand\nthe logic.\n\nOn Tue, Oct 11, 2022 at 9:58 AM Zhihong Yu <zyu@yugabyte.com> wrote:On Tue, Oct 11, 2022 at 9:22 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:Hi!\n\nFixed couple warnings (for cfbot).\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.comHi,For v12-0001-PGPRO-ALTER-TABLE-MERGE-PARTITIONS-command.patch:+ if (equal(name, cmd->name))+ /* One new partition can have the same name as merged partition. */+ isSameName = true;I think there should be a check before assigning true to isSameName - if isSameName is true, that means there are two partitions with this same name.Cheers Pardon - I see that transformPartitionCmdForMerge() compares the partition names.Maybe you can add a comment in ATExecMergePartitions referring to transformPartitionCmdForMerge() so that people can more easily understand the logic.",
"msg_date": "Tue, 11 Oct 2022 10:15:05 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\n >Maybe you can add a comment in ATExecMergePartitions referring to\n >transformPartitionCmdForMerge() so that people can more easily\n >understand the logic.\n\nThanks, comment added.\n\nPatch stop applying due to changes in upstream.\nHere is a fixed version.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Thu, 13 Oct 2022 11:57:33 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "I'm sorry, I couldn't answer earlier...\n\n1.\n > partbounds.c is adding 500+ LOC about checking if proposed partitions\n > meet the requirements (don't overlap, etc). But a lot of those\n > checks must already happen, no? Can you re-use/refactor the existing\n > checks ?\n\nI a bit reduced the number of lines in partbounds.c and added comments.\nUnfortunately, it is very difficult to re-use existing checks for other \npartitioned tables operations, because mostly part of PostgreSQL \ncommands works with a single partition.\nSo for SPLIT/MERGE commands were created new checks for several partitions.\n\n2.\n > Also, postgres already supports concurrent DDL (CREATE+ATTACH and\n > DETACH CONCURRENTLY). Is it possible to leverage that ?\n > (Mostly to reduce the patch size, but also because maybe some cases\n > could be concurrent?).\n\nProbably \"ATTACH CONCURRENTLY\" is not supported?\nA few words about \"DETACH CONCURRENTLY\".\n\"DETACH CONCURRENTLY\" can works because this command not move rows \nduring detach partition (and so no reason to block detached partition).\n\"DETACH CONCURRENTLY\" do not changes data, but changes partition \ndescription (partition is marked as \"inhdetachpending = true\" etc.).\n\nFor SPLIT and MERGE the situation is completely different - these \ncommands transfer rows between sections.\nTherefore partitions must be LOCKED EXCLUSIVELY during rows transfer.\nProbably we can use concurrently partitions not participating in SPLIT \nand MERGE.\nBut now PostgreSQL has no possibilities to forbid using a part of \npartitions of a partitioned table (until the end of data transfer by \nSPLIT/MERGE commands).\nSimple locking is not quite suitable here.\nI see only one variant of SPLIT/MERGE CONCURRENTLY implementation that \ncan be realized now:\n\n* ShareUpdateExclusiveLock on partitioned table;\n* AccessExclusiveLock on partition(s) which will be deleted and will be \ncreated during SPLIT/MEGRE command;\n* transferring data 
between locked sections; operations with non-blocked \npartitions are allowed;\n* sessions which want to use partition(s) which will be deleted, waits \non locks;\n* finally we release AccessExclusiveLock on partition(s) which will be \ndeleted and delete them;\n* waiting sessions will get errors \"relation ... does not exist\" (we can \ntransform it to \"relation structure was changed ... please try again\"?).\n\nIt doesn't look pretty.\nTherefore for the SPLIT/MERGE command the partitioned table is locked \nwith AccessExclusiveLock.\n\n3.\n > An UPDATE on a partitioned table will move tuples from one partition\n > to another. Is there a way to re-use that?\n\nThis could be realized using methods that are called from \nExecCrossPartitionUpdate().\nBut using these methods is more expensive than the current \nimplementation of the SPLIT/MERGE commands.\nSPLIT/MERGE commands uses \"bulk insert\" and there is low overhead for \nfinding a partition to insert data: for MERGE is not need to search \npartition; for SPLIT need to use simple search from several partitions \n(listed in the SPLIT command).\nBelow is a test example.\n\na. Transferring data from the table \"test2\" to partitions \"partition1\" \nand \"partition2\" using the current implementation of tuple routing in \nPostgreSQL:\n\nCREATE TABLE test (a int, b char(10)) PARTITION BY RANGE (a);\nCREATE TABLE partition1 PARTITION OF test FOR VALUES FROM (10) TO (20);\nCREATE TABLE partition2 PARTITION OF test FOR VALUES FROM (20) TO (30);\nCREATE TABLE test2 (a int, b char(10));\nINSERT INTO test2 (a, b) SELECT 11, 'a' FROM generate_series(1, 1000000);\nINSERT INTO test2 (a, b) SELECT 22, 'b' FROM generate_series(1, 1000000);\nINSERT INTO test(a, b) SELECT a, b FROM test2;\nDROP TABLE test2;\nDROP TABLE test;\n\nThree attempts (the results are little different), the best result:\n\nINSERT 0 2000000\nTime: 4467,814 ms (00:04,468)\n\nb. 
Transferring data from the partition \"partition0\" to partitions \n\"partition 1\" and \"partition2\" using SPLIT command:\n\nCREATE TABLE test (a int, b char(10)) PARTITION BY RANGE (a);\nCREATE TABLE partition0 PARTITION OF test FOR VALUES FROM (0) TO (30);\nINSERT INTO test (a, b) SELECT 11, 'a' FROM generate_series(1, 1000000);\nINSERT INTO test (a, b) SELECT 22, 'b' FROM generate_series(1, 1000000);\nALTER TABLE test SPLIT PARTITION partition0 INTO\n (PARTITION partition0 FOR VALUES FROM (0) TO (10),\n PARTITION partition1 FOR VALUES FROM (10) TO (20),\n PARTITION partition2 FOR VALUES FROM (20) TO (30));\nDROP TABLE test;\n\nThree attempts (the results are little different), the best result:\n\nALTER TABLE\nTime: 3840,127 ms (00:03,840)\n\nSo the current implementation of tuple routing is ~16% slower than the \nSPLIT command.\nThat's quite a lot.\n\n\nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Tue, 29 Nov 2022 01:30:14 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, failed\n\nFeature is clearly missing with partition handling in PostgreSQL, so, this patch is very welcome (as are futur steps)\r\nCode presents good, comments are explicit\r\nPatch v14 apply nicely on 4f46f870fa56fa73d6678273f1bd059fdd93d5e6\r\nCompilation ok with meson compile\r\nLCOV after meson test shows good new code coverage.\r\nDocumentation is missing in v14.",
"msg_date": "Sun, 19 Mar 2023 20:45:13 +0000",
"msg_from": "stephane tachoires <stephane.tachoires@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\n> Documentation: tested, failed\n\nAdded documentation (as separate commit).\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Tue, 28 Mar 2023 11:28:05 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi,\r\n\r\nPatch v15-0001-ALTER-TABLE-MERGE-PARTITIONS-command.patch\r\nApply nicely.\r\nOne warning on meson compile (configure -Dssl=openssl -Dldap=enabled -Dauto_features=enabled -DPG_TEST_EXTRA='ssl,ldap,kerberos' -Dbsd_auth=disabled -Dbonjour=disabled -Dpam=disabled -Dpltcl=disabled -Dsystemd=disabled -Dzstd=disabled -Db_coverage=true)\r\n\r\n../../src/pgmergesplit/src/test/modules/test_ddl_deparse/test_ddl_deparse.c: In function ‘get_altertable_subcmdinfo’:\r\n../../src/pgmergesplit/src/test/modules/test_ddl_deparse/test_ddl_deparse.c:112:17: warning: enumeration value ‘AT_MergePartitions’ not handled in switch [-Wswitch]\r\n 112 | switch (subcmd->subtype)\r\n | ^~~~~~\r\nShould be the same with 0002...\r\n\r\nmeson test perfect, patch coverage is very good.\r\n\r\nPatch v15-0002-ALTER-TABLE-SPLIT-PARTITION-command.patch\r\nDoesn't apply on 326a33a289c7ba2dbf45f17e610b7be98dc11f67\r\n\r\nPatch v15-0003-Documentation-for-ALTER-TABLE-SPLIT-PARTITION-ME.patch\r\nApply with one warning 1 line add space error (translate from french \"warning: 1 ligne a ajouté des erreurs d'espace\").\r\nv15-0003-Documentation-for-ALTER-TABLE-SPLIT-PARTITION-ME.patch:54: trailing whitespace.\r\n One of the new partitions <replaceable class=\"parameter\">partition_name1</replaceable>, \r\nComment are ok for me. A non native english speaker.\r\nPerhaps you could add some remarks in ddl.html and alter-ddl.html\r\n\r\nStéphane\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Tue, 28 Mar 2023 19:34:33 +0000",
"msg_from": "stephane tachoires <stephane.tachoires@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Thank you!\n\nCorrected version in attachment.\nStrange that cfbot didn't show this warning ...\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Tue, 28 Mar 2023 23:43:45 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, failed\n\nHi,\r\nJust a minor warning with documentation patch \r\ngit apply ../v16-0003-Documentation-for-ALTER-TABLE-SPLIT-PARTITION-ME.patch\r\n../v16-0003-Documentation-for-ALTER-TABLE-SPLIT-PARTITION-ME.patch:54: trailing whitespace.\r\n One of the new partitions <replaceable class=\"parameter\">partition_name1</replaceable>, \r\nwarning: 1 ligne a ajouté des erreurs d'espace.\r\n(perhaps due to my Ubuntu 22.04.2 french install)\r\nEverything else is ok.\r\n\r\nThanks a lot for your work\r\nStéphane",
"msg_date": "Wed, 29 Mar 2023 10:13:37 +0000",
"msg_from": "stephane tachoires <stephane.tachoires@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Thanks!\n\nI missed the trailing whitespace.\nCorrected.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Wed, 29 Mar 2023 16:32:36 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "This patch no longer applies to master, please submit a rebased version to the\nthread. I've marked the CF entry as waiting for author in the meantime.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 18:10:28 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Thanks, Daniel!\n\n > This patch no longer applies to master, please submit a rebased\n > version to the thread.\n\nHere is a rebased version.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Thu, 6 Jul 2023 21:43:23 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nOnly documentation patch applied on 4e465aac36ce9a9533c68dbdc83e67579880e628\r\nChecked with v18\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Tue, 18 Jul 2023 12:51:41 +0000",
"msg_from": "stephane tachoires <stephane.tachoires@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Thank you, Stephane!\n\nRebased version attached to email.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Wed, 19 Jul 2023 16:43:47 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nIt is just a rebase\r\nI check with make and meson\r\nrun manual split and merge on list and range partition\r\nDoc fits\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Thu, 20 Jul 2023 11:56:33 +0000",
"msg_from": "stephane tachoires <stephane.tachoires@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Rebased version attached to email.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Sat, 11 Nov 2023 13:26:03 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hello!\n\nAdded commit v21-0004-SPLIT-PARTITION-optimization.patch.\n\nThree already existing commits did not change \n(v21-0001-ALTER-TABLE-MERGE-PARTITIONS-command.patch, \nv21-0002-ALTER-TABLE-SPLIT-PARTITION-command.patch, \nv21-0003-Documentation-for-ALTER-TABLE-SPLIT-PARTITION-ME.patch).\n\nThe new commit is an optimization for the SPLIT PARTITION command.\n\nDescription of optimization:\n1) optimization is used for the SPLIT PARTITION command for tables with \nBY RANGE partitioning in case the partitioning key has a b-tree index;\n2) the point of optimization is that, if after dividing of the old \npartition, all its records according to the range conditions must be \ninserted into ONE new partition, then instead of transferring data (from \nthe old partition to new partition), the old partition will be renamed.\n\nExample.\nSuppose we have a BY RANGE-partitioned table \"test\" (indexed by \npartitioning key) with a single partition \"test_default\", which we want \nto split into two partitions (\"test_1\" and \"test_default\"), and all \nrecords should be moved to the \"test_1\" partition.\nWhen executing the script below, the \"test_default\" partition will be \nrenamed to \"test_1\".\n\n----\nCREATE TABLE test(d date, v text) PARTITION BY RANGE (d);\nCREATE TABLE test_default PARTITION OF test DEFAULT;\n\nCREATE INDEX idx_test_d ON test USING btree (d);\n\nINSERT INTO test (d, v)\n SELECT d, 'value_' || md5(random()::text) FROM\n generate_series('2024-01-01', '2024-01-25', interval '10 seconds')\n AS d;\n\n-- Oid of table 'test_default':\nSELECT 'test_default'::regclass::oid AS previous_partition_oid;\n\nALTER TABLE test SPLIT PARTITION test_default INTO\n (PARTITION test_1 FOR VALUES FROM ('2024-01-01') TO ('2024-02-01'),\n PARTITION test_default DEFAULT);\n\n-- Oid of table 'test_1' (should be the same as \"previous_partition_oid\"):\nSELECT 'test_1'::regclass::oid AS current_partition_oid;\n\nDROP TABLE test CASCADE;\n\n-- \nWith best 
regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Mon, 4 Dec 2023 10:52:06 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Mon, 4 Dec 2023 at 13:22, Dmitry Koval <d.koval@postgrespro.ru> wrote:\n>\n> Hello!\n>\n> Added commit v21-0004-SPLIT-PARTITION-optimization.patch.\n\nCFBot shows that the patch does not apply anymore as in [1]:\n=== Applying patches on top of PostgreSQL commit ID\n8ba6fdf905d0f5aef70ced4504c6ad297bfe08ea ===\n=== applying patch ./v21-0001-ALTER-TABLE-MERGE-PARTITIONS-command.patch\npatching file src/backend/commands/tablecmds.c\n...\nHunk #7 FAILED at 18735.\nHunk #8 succeeded at 20608 (offset 315 lines).\n1 out of 8 hunks FAILED -- saving rejects to file\nsrc/backend/commands/tablecmds.c.rej\npatching file src/backend/parser/gram.y\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_3659.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 26 Jan 2024 18:20:07 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On 2024-Jan-26, vignesh C wrote:\n\n> Please post an updated version for the same.\n\nHere's a rebase. I only fixed the conflicts, didn't review.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Fri, 26 Jan 2024 15:01:52 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On 2024-Jan-26, Alvaro Herrera wrote:\n\n> On 2024-Jan-26, vignesh C wrote:\n> \n> > Please post an updated version for the same.\n> \n> Here's a rebase. I only fixed the conflicts, didn't review.\n\nHmm, but I got the attached regression.diffs with it. I didn't\ninvestigate further, but it looks like the recent changes to replication\nidentity for partitioned tables has broken the regression tests.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"This is what I like so much about PostgreSQL. Most of the surprises\nare of the \"oh wow! That's cool\" Not the \"oh shit!\" kind. :)\"\nScott Marlowe, http://archives.postgresql.org/pgsql-admin/2008-10/msg00152.php",
"msg_date": "Fri, 26 Jan 2024 17:36:33 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "git format-patch -4 HEAD -v 23\n\n=============================\n\nThanks!\n\nI excluded regression test \"Test: split partition witch identity column\" \nfrom script src/test/regress/sql/partition_split.sql because after \ncommit [1] partitions cannot contain identity columns and queries\n\nCREATE TABLE salesmans2_5(salesman_id INT GENERATED ALWAYS AS IDENTITY \nPRIMARY KEY, salesman_name VARCHAR(30));\nALTER TABLE salesmans ATTACH PARTITION salesmans2_5 FOR VALUES FROM (2) \nTO (5);\n\nreturns\n\nERROR: table \"salesmans2_5\" being attached contains an identity column \n\"salesman_id\"\nDETAIL: The new partition may not contain an identity column.\n\n[1] \nhttps://github.com/postgres/postgres/commit/699586315704a8268808e3bdba4cb5924a038c49\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Fri, 26 Jan 2024 20:08:08 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "I thought it's wrong to exclude the IDENTITY-column test, so I fixed the \ntest and return it back.\nChanges in attachment (commit \nv24-0002-ALTER-TABLE-SPLIT-PARTITION-command.patch).\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Fri, 26 Jan 2024 21:36:59 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "\n\n> On 26 Jan 2024, at 23:36, Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> \n> <v24-0001-ALTER-TABLE-MERGE-PARTITIONS-command.patch><v24-0002-ALTER-TABLE-SPLIT-PARTITION-command.patch><v24-0003-Documentation-for-ALTER-TABLE-SPLIT-PARTITION-ME.patch><v24-0004-SPLIT-PARTITION-optimization.patch>\n\nThe CF entry was in Ready for Committer state no so long ago.\nStephane, you might want to review recent version after it was rebased on current HEAD. CFbot's test passed successfully.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 8 Mar 2024 15:26:17 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\nRebased version attached to email.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Tue, 12 Mar 2024 19:45:28 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nHi,\r\nI have failing tap test after patches apply:\r\n\r\nok 201 + partition_merge 2635 ms\r\nnot ok 202 + partition_split 5719 ms\r\n\r\n@@ -805,6 +805,7 @@\r\n (PARTITION salesmans2_3 FOR VALUES FROM (2) TO (3),\r\n PARTITION salesmans3_4 FOR VALUES FROM (3) TO (4),\r\n PARTITION salesmans4_5 FOR VALUES FROM (4) TO (5));\r\n+ERROR: no owned sequence found\r\n INSERT INTO salesmans (salesman_name) VALUES ('May');\r\n INSERT INTO salesmans (salesman_name) VALUES ('Ford');\r\n SELECT * FROM salesmans1_2;\r\n@@ -814,23 +815,17 @@\r\n (1 row)\r\n\r\n SELECT * FROM salesmans2_3;\r\n- salesman_id | salesman_name \r\n--------------+---------------\r\n- 2 | Ivanov\r\n-(1 row)\r\n-\r\n+ERROR: relation \"salesmans2_3\" does not exist\r\n+LINE 1: SELECT * FROM salesmans2_3;\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Tue, 19 Mar 2024 13:06:41 +0000",
"msg_from": "stephane tachoires <stephane.tachoires@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nSorry, tests passed when applying all patches.\r\nI planned to check without optimisation first.\n\nThe new status of this patch is: Needs review\n",
"msg_date": "Tue, 19 Mar 2024 14:29:47 +0000",
"msg_from": "stephane tachoires <stephane.tachoires@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n\nThanks for info!\nI was unable to reproduce the problem and I wanted to ask for \nclarification. But your message was ahead of my question.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n",
"msg_date": "Tue, 19 Mar 2024 17:43:33 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 3:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Sep 19, 2022 at 4:42 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> > Thanks for comments and advice!\n> > I thought about this problem and discussed about it with colleagues.\n> > Unfortunately, I don't know of a good general solution.\n>\n> Yeah, me neither.\n>\n> > But for specific situation like this (certain partition is not changing)\n> > we can add CONCURRENTLY modifier.\n> > Our DDL query can be like\n> >\n> > ALTER TABLE...SPLIT PARTITION [CONCURRENTLY];\n> >\n> > With CONCURRENTLY modifier we can lock partitioned table in\n> > ShareUpdateExclusiveLock mode and split partition - in\n> > AccessExclusiveLock mode. So we don't lock partitioned table in\n> > AccessExclusiveLock mode and can modify other partitions during SPLIT\n> > operation (except split partition).\n> > If smb try to modify split partition, he will receive error \"relation\n> > does not exist\" at end of operation (because split partition will be drop).\n>\n> I think that a built-in DDL command can't really assume that the user\n> won't modify anything. You'd have to take a ShareLock.\n>\n> But you might be able to have a CONCURRENTLY variant of the command\n> that does the same kind of multi-transaction thing as, e.g., CREATE\n> INDEX CONCURRENTLY. You would probably have to be quite careful about\n> race conditions (e.g. you commit the first transaction and before you\n> start the second one, someone drops or detaches the partition you were\n> planning to merge or split). Might take some thought, but feels\n> possibly doable. 
I've never been excited enough about this kind of\n> thing to want to put a lot of energy into engineering it, because\n> doing it \"manually\" feels so much nicer to me, and doubly so given\n> that we now have ATTACH CONCURRENTLY and DETACH CONCURRENTLY, but it\n> does seem like a thing some people would probably use and value.\n\n+1\nCurrently people are using external tools to implement this kind of\ntask. However, having this functionality in core would be great.\nImplementing concurrent merge/split seems quite a difficult task,\nwhich needs careful design. It might be too hard to carry around the\nsyntax altogether. So, I think having basic syntax in-core is a good\nstep forward. But I think we need a clear notice in the documentation\nabout the concurrency to avoid wrong user expectations.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 25 Mar 2024 12:28:36 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 4:43 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> Thanks for info!\n> I was unable to reproduce the problem and I wanted to ask for\n> clarification. But your message was ahead of my question.\n\nI've revised the patchset. I mostly did some refactoring, code\nimprovements and wrote new comments.\n\nIf I apply just the first two patches, I get the same error as [1].\nThis error happens when createPartitionTable() tries to copy the\nidentity of another partition. I've fixed that by skipping a copy of\nthe identity of another partition (remove CREATE_TABLE_LIKE_IDENTITY\nfrom TableLikeClause.options). BTW, the same error happened to me\nwhen I manually ran CREATE TABLE ... (LIKE ... INCLUDING IDENTITY) for\na partition of the table with identity. So, this probably deserves a\nseparate fix, but I think not directly related to this patch.\n\nI have one question. When merging partitions you're creating a merged\npartition like the parent table. But when splitting a partition\nyou're creating new partitions like the partition being split. What\nmotivates this difference?\n\nLinks.\n1. https://www.postgresql.org/message-id/171085360143.2046436.7217841141682511557.pgcf%40coridan.postgresql.org\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 27 Mar 2024 01:39:04 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\n > I've fixed that by skipping a copy of the identity of another\n > partition (remove CREATE_TABLE_LIKE_IDENTITY from\n > TableLikeClause.options).\n\nThanks for correction!\nProbably I should have looked at the code more closely after commit [1]. \nI'm also very glad that situation [2] was reproduced.\n\n > When merging partitions you're creating a merged partition like the\n > parent table. But when splitting a partition you're creating new\n > partitions like the partition being split. What motivates this\n > difference?\n\nWhen splitting a partition, I planned to set parameters for each of the \nnew partitions (for example, tablespace parameter).\nIt would make sense if we want to transfer part of the data of splitting \npartition to a slower (archive) storage device.\nRight now I haven't seen any interest in this functionality, so it \nhasn't been implemented yet. But I think this will be needed in the future.\n\nSpecial thanks for the hint that new structures should be added to the \nlist src\\tools\\pgindent\\typedefs.list.\n\nLinks.\n[1] \nhttps://github.com/postgres/postgres/commit/699586315704a8268808e3bdba4cb5924a038c49\n\n[2] \nhttps://www.postgresql.org/message-id/171085360143.2046436.7217841141682511557.pgcf%40coridan.postgresql.org\n\n--\nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n",
"msg_date": "Wed, 27 Mar 2024 23:18:00 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi, Dmitry!\n\nThank you for your feedback!\n\nOn Wed, Mar 27, 2024 at 10:18 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> > I've fixed that by skipping a copy of the identity of another\n> > partition (remove CREATE_TABLE_LIKE_IDENTITY from\n> > TableLikeClause.options).\n>\n> Thanks for correction!\n> Probably I should have looked at the code more closely after commit [1].\n> I'm also very glad that situation [2] was reproduced.\n>\n> > When merging partitions you're creating a merged partition like the\n> > parent table. But when splitting a partition you're creating new\n> > partitions like the partition being split. What motivates this\n> > difference?\n>\n> When splitting a partition, I planned to set parameters for each of the\n> new partitions (for example, tablespace parameter).\n> It would make sense if we want to transfer part of the data of splitting\n> partition to a slower (archive) storage device.\n> Right now I haven't seen any interest in this functionality, so it\n> hasn't been implemented yet. But I think this will be needed in the future.\n\nOK, I've changed the code to use the parent table as a template for\nnew partitions in split case. So, now it's the same in both split and\nmerge cases.\n\nI also added a special note into docs about ACCESS EXCLUSIVE lock,\nbecause I believe that's a significant limitation for usage of this\nfunctionality.\n\nI think 0001, 0002 and 0003 could be considered for pg17. I will\ncontinue reviewing them.\n\n0004 might require more work. I didn't rebase it for now. I suggest\nwe can rebase it later and consider for pg18.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Sat, 30 Mar 2024 14:40:43 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi, Alexander!\n\nThank you very much for your work on refactoring the commits!\nYesterday I received an email from adjkldd@126.com <winterloo@126.com> \nwith a proposal for optimization (MERGE PARTITION command) for cases \nwhere the target partition has a name identical to one of the merging \npartition names.\nI think this optimization is worth considering.\nA simplified version of the optimization is attached to this letter \n(difference is 10-15 lines).\nAll changes made in one commit \n(v28-0001-ALTER-TABLE-MERGE-PARTITIONS-command.patch) and in one \nfunction (ATExecMergePartitions).\n\nIn your opinion, should we added this optimization now or should it be \nleft for later?\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Sun, 31 Mar 2024 03:56:50 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\nPatch stop applying due to changes in upstream.\nHere is a rebased version.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Sun, 31 Mar 2024 05:12:19 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\nOn Sun, Mar 31, 2024 at 5:12 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> Patch stop applying due to changes in upstream.\n> Here is a rebased version.\n\nI've revised the patchset. Now there are two self-contained patches\ncoming with the documentation. Also, now each command has a paragraph\nin the \"Data definition\" chapter. Also documentation and tests\ncontain geographical partitioning with all Russian cities. I think\nthat might create a country-centric feeling for the reader. I've\nedited that to make cities spread around the world to reflect the\ninternational spirit. Hope you're OK with this. Now, both merge and\nsplit commands make new partitions using the parent table as the\ntemplate. And some other edits to comments, commit messages,\ndocumentation etc.\n\nI think this patch is well-reviewed and also has quite straightforward\nimplementation. The major limitation of holding ACCESS EXCLUSIVE LOCK\non the parent table is well-documented. I'm going to push this if no\nobjections.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 4 Apr 2024 22:17:45 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\n> I've revised the patchset.\n\nThanks for the corrections (especially ddl.sgml).\nCould you also look at a small optimization for the MERGE PARTITIONS \ncommand (in a separate file \nv31-0003-Additional-patch-for-ALTER-TABLE-.-MERGE-PARTITI.patch, I wrote \nabout it in an email 2024-03-31 00:56:50)?\n\nFiles v31-0001-*.patch, v31-0002-*.patch are the same as \nv30-0001-*.patch, v30-0002-*.patch (after rebasing because patch stopped \napplying due to changes in upstream).\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Fri, 5 Apr 2024 16:00:44 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nAll three patches applied nivcely.\r\nCode fits standart, comments are relevant.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Fri, 05 Apr 2024 19:06:03 +0000",
"msg_from": "stephane tachoires <stephane.tachoires@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi, Dmitry!\n\nOn Fri, Apr 5, 2024 at 4:00 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> > I've revised the patchset.\n>\n> Thanks for the corrections (especially ddl.sgml).\n> Could you also look at a small optimization for the MERGE PARTITIONS\n> command (in a separate file\n> v31-0003-Additional-patch-for-ALTER-TABLE-.-MERGE-PARTITI.patch, I wrote\n> about it in an email 2024-03-31 00:56:50)?\n>\n> Files v31-0001-*.patch, v31-0002-*.patch are the same as\n> v30-0001-*.patch, v30-0002-*.patch (after rebasing because patch stopped\n> applying due to changes in upstream).\n\nI've pushed 0001 and 0002. I didn't push 0003 for the following reasons.\n1) This doesn't keep functionality equivalent to 0001. With 0003, the\nmerged partition will inherit indexes, constraints, and so on from the\none of merging partitions.\n2) This is not necessarily an optimization. Without 0003 indexes on\nthe merged partition are created after moving the rows in\nattachPartitionTable(). With 0003 we merge data into the existing\npartition which saves its indexes. That might cause a significant\nperformance loss because mass inserts into indexes may be much slower\nthan building indexes from scratch.\nI think both aspects need to be carefully considered. Even if we\naccept them, this needs to be documented. I think now it's too late\nfor both of these. So, this should wait for v18.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 7 Apr 2024 01:22:51 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi, Alexander!\n\n> I didn't push 0003 for the following reasons. ....\n\nThanks for clarifying. You are right, these are serious reasons.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n",
"msg_date": "Sun, 7 Apr 2024 01:38:56 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi Alexander and Dmitry,\n\n07.04.2024 01:22, Alexander Korotkov wrote:\n> I've pushed 0001 and 0002. I didn't push 0003 for the following reasons.\n\nPlease try the following (erroneous) query:\nCREATE TABLE t1(i int, t text) PARTITION BY LIST (t);\nCREATE TABLE t1pa PARTITION OF t1 FOR VALUES IN ('A');\n\nCREATE TABLE t2 (i int, t text) PARTITION BY RANGE (t);\nALTER TABLE t2 SPLIT PARTITION t1pa INTO\n (PARTITION t2a FOR VALUES FROM ('A') TO ('B'),\n PARTITION t2b FOR VALUES FROM ('B') TO ('C'));\n\nthat triggers an assertion failure:\nTRAP: failed Assert(\"datums != NIL\"), File: \"partbounds.c\", Line: 3434, PID: 1841459\n\nor a segfault (in a non-assert build):\nProgram terminated with signal SIGSEGV, Segmentation fault.\n\n#0 pg_detoast_datum_packed (datum=0x0) at fmgr.c:1866\n1866 if (VARATT_IS_COMPRESSED(datum) || VARATT_IS_EXTERNAL(datum))\n(gdb) bt\n#0 pg_detoast_datum_packed (datum=0x0) at fmgr.c:1866\n#1 0x000055f38c5d5e3f in bttextcmp (...) at varlena.c:1834\n#2 0x000055f38c6030dd in FunctionCall2Coll (...) at fmgr.c:1161\n#3 0x000055f38c417c83 in partition_rbound_cmp (...) at partbounds.c:3525\n#4 check_partition_bounds_for_split_range (...) at partbounds.c:5221\n#5 check_partitions_for_split (...) at partbounds.c:5688\n#6 0x000055f38c256c49 in transformPartitionCmdForSplit (...) at parse_utilcmd.c:3451\n#7 transformAlterTableStmt (...) at parse_utilcmd.c:3810\n#8 0x000055f38c2bdf9c in ATParseTransformCmd (...) at tablecmds.c:5650\n...\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sun, 7 Apr 2024 22:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi, Alexander!\n\nOn Sun, Apr 7, 2024 at 10:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> 07.04.2024 01:22, Alexander Korotkov wrote:\n> > I've pushed 0001 and 0002. I didn't push 0003 for the following reasons.\n>\n> Please try the following (erroneous) query:\n> CREATE TABLE t1(i int, t text) PARTITION BY LIST (t);\n> CREATE TABLE t1pa PARTITION OF t1 FOR VALUES IN ('A');\n>\n> CREATE TABLE t2 (i int, t text) PARTITION BY RANGE (t);\n> ALTER TABLE t2 SPLIT PARTITION t1pa INTO\n> (PARTITION t2a FOR VALUES FROM ('A') TO ('B'),\n> PARTITION t2b FOR VALUES FROM ('B') TO ('C'));\n>\n> that triggers an assertion failure:\n> TRAP: failed Assert(\"datums != NIL\"), File: \"partbounds.c\", Line: 3434, PID: 1841459\n>\n> or a segfault (in a non-assert build):\n> Program terminated with signal SIGSEGV, Segmentation fault.\n>\n> #0 pg_detoast_datum_packed (datum=0x0) at fmgr.c:1866\n> 1866 if (VARATT_IS_COMPRESSED(datum) || VARATT_IS_EXTERNAL(datum))\n> (gdb) bt\n> #0 pg_detoast_datum_packed (datum=0x0) at fmgr.c:1866\n> #1 0x000055f38c5d5e3f in bttextcmp (...) at varlena.c:1834\n> #2 0x000055f38c6030dd in FunctionCall2Coll (...) at fmgr.c:1161\n> #3 0x000055f38c417c83 in partition_rbound_cmp (...) at partbounds.c:3525\n> #4 check_partition_bounds_for_split_range (...) at partbounds.c:5221\n> #5 check_partitions_for_split (...) at partbounds.c:5688\n> #6 0x000055f38c256c49 in transformPartitionCmdForSplit (...) at parse_utilcmd.c:3451\n> #7 transformAlterTableStmt (...) at parse_utilcmd.c:3810\n> #8 0x000055f38c2bdf9c in ATParseTransformCmd (...) at tablecmds.c:5650\n\nThank you for spotting this. This seems like a missing check. I'm\ngoing to get a closer look at this tomorrow.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 8 Apr 2024 01:15:06 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "08.04.2024 01:15, Alexander Korotkov wrote:\n> Thank you for spotting this. This seems like a missing check. I'm\n> going to get a closer look at this tomorrow.\n>\n\nThanks!\n\nThere is also an anomaly with the MERGE command:\nCREATE TABLE t1 (i int, a int, b int, c int) PARTITION BY RANGE (a, b);\nCREATE TABLE t1p1 PARTITION OF t1 FOR VALUES FROM (1, 1) TO (1, 2);\n\nCREATE TABLE t2 (i int, t text) PARTITION BY RANGE (t);\nCREATE TABLE t2pa PARTITION OF t2 FOR VALUES FROM ('A') TO ('C');\n\nCREATE TABLE t3 (i int, t text);\n\nALTER TABLE t2 MERGE PARTITIONS (t1p1, t2pa, t3) INTO t2p;\n\nleads to:\nERROR: partition bound for relation \"t3\" is null\nWARNING: problem in alloc set PortalContext: detected write past chunk end in block 0x55f1ef42f820, chunk 0x55f1ef42ff40\nWARNING: problem in alloc set PortalContext: detected write past chunk end in block 0x55f1ef42f820, chunk 0x55f1ef42ff40\n\n(I'm also not sure that the error message is clear enough (can't we say\n\"relation X is not a partition of relation Y\" in this context, as in\nMarkInheritDetached(), for example?).)\n\nWhilst with\nALTER TABLE t2 MERGE PARTITIONS (t1p1, t2pa) INTO t2p;\n\nI get:\nProgram terminated with signal SIGSEGV, Segmentation fault.\n\n#0 pg_detoast_datum_packed (datum=0x1) at fmgr.c:1866\n1866 if (VARATT_IS_COMPRESSED(datum) || VARATT_IS_EXTERNAL(datum))\n(gdb) bt\n#0 pg_detoast_datum_packed (datum=0x1) at fmgr.c:1866\n#1 0x000055d77d00fde2 in bttextcmp (...) at ../../../../src/include/postgres.h:314\n#2 0x000055d77d03fa27 in FunctionCall2Coll (...) at fmgr.c:1161\n#3 0x000055d77ce1572f in partition_rbound_cmp (...) at partbounds.c:3525\n#4 0x000055d77ce157b9 in qsort_partition_rbound_cmp (...) at partbounds.c:3816\n#5 0x000055d77d0982ef in qsort_arg (...) at ../../src/include/lib/sort_template.h:316\n#6 0x000055d77ce1d109 in calculate_partition_bound_for_merge (...) at partbounds.c:5786\n#7 0x000055d77cc24b2b in transformPartitionCmdForMerge (...) at parse_utilcmd.c:3524\n#8 0x000055d77cc2b555 in transformAlterTableStmt (...) at parse_utilcmd.c:3812\n#9 0x000055d77ccab17c in ATParseTransformCmd (...) at tablecmds.c:5650\n#10 0x000055d77ccafd09 in ATExecCmd (...) at tablecmds.c:5589\n...\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 8 Apr 2024 07:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Alexander Lakhin, thanks for the problems you found!\n\nUnfortunately I can't watch them immediately (event [1]).\nI will try to start solving them in 12-14 hours.\n\n[1] https://pgconf.ru/2024\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n",
"msg_date": "Mon, 8 Apr 2024 09:16:54 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi all,\n I went through the MERGE/SPLIT partition code today, thanks for the\nwork. I found some grammar errors:\n i. in error messages (users can see these grammar errors; not friendly).\nii. in code comments\n\n\n\nAlexander Korotkov <aekorotkov@gmail.com> wrote on Sun, Apr 7, 2024 at 06:23:\n\n> Hi, Dmitry!\n>\n> On Fri, Apr 5, 2024 at 4:00 PM Dmitry Koval <d.koval@postgrespro.ru>\n> wrote:\n> > > I've revised the patchset.\n> >\n> > Thanks for the corrections (especially ddl.sgml).\n> > Could you also look at a small optimization for the MERGE PARTITIONS\n> > command (in a separate file\n> > v31-0003-Additional-patch-for-ALTER-TABLE-.-MERGE-PARTITI.patch, I wrote\n> > about it in an email 2024-03-31 00:56:50)?\n> >\n> > Files v31-0001-*.patch, v31-0002-*.patch are the same as\n> > v30-0001-*.patch, v30-0002-*.patch (after rebasing because patch stopped\n> > applying due to changes in upstream).\n>\n> I've pushed 0001 and 0002. I didn't push 0003 for the following reasons.\n> 1) This doesn't keep functionality equivalent to 0001. With 0003, the\n> merged partition will inherit indexes, constraints, and so on from the\n> one of merging partitions.\n> 2) This is not necessarily an optimization. Without 0003 indexes on\n> the merged partition are created after moving the rows in\n> attachPartitionTable(). With 0003 we merge data into the existing\n> partition which saves its indexes. That might cause a significant\n> performance loss because mass inserts into indexes may be much slower\n> than building indexes from scratch.\n> I think both aspects need to be carefully considered. Even if we\n> accept them, this needs to be documented. I think now it's too late\n> for both of these. So, this should wait for v18.\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>\n>\n>\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/",
"msg_date": "Mon, 8 Apr 2024 18:43:51 +0800",
"msg_from": "Tender Wang <tndrwang@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi Tender Wang,\n\n08.04.2024 13:43, Tender Wang wrote:\n> Hi all,\n> I went through the MERGE/SPLIT partition codes today, thanks for the works. I found some grammar errors:\n> i. in error messages(Users can see this grammar errors, not friendly).\n> ii. in codes comments\n>\n\nOn a quick glance, I saw also:\nNULL-value\npartitionde\nsplited\ntemparary\n\nAnd a trailing whitespace at:\n the quarter partition back to monthly partitions:\nwarning: 1 line adds whitespace errors.\n\nI'm also confused by \"administrators\" here:\nhttps://www.postgresql.org/docs/devel/ddl-partitioning.html\n\n(We can find on the same page, for instance:\n... whereas table inheritance allows data to be divided in a manner of\nthe user's choosing.\nIt seems to me, that \"users\" should work for merging partitions as well.)\n\nThough the documentation addition requires more than just a quick glance,\nof course.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 8 Apr 2024 15:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\nAttached fix for the problems found by Alexander Lakhin.\n\nAbout grammar errors.\nUnfortunately, I don't know English well.\nTherefore, I plan (in the coming days) to show the text to specialists \nwho perform technical translation of documentation.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Mon, 8 Apr 2024 23:43:21 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Mon, Apr 8, 2024 at 11:43 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> Attached fix for the problems found by Alexander Lakhin.\n>\n> About grammar errors.\n> Unfortunately, I don't know English well.\n> Therefore, I plan (in the coming days) to show the text to specialists\n> who perform technical translation of documentation.\n\nThank you. I've pushed this fix with minor corrections from me.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 10 Apr 2024 02:03:40 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hello Alexander and Dmitry,\n\n10.04.2024 02:03, Alexander Korotkov wrote:\n> On Mon, Apr 8, 2024 at 11:43 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n>> Attached fix for the problems found by Alexander Lakhin.\n>>\n>> About grammar errors.\n>> Unfortunately, I don't know English well.\n>> Therefore, I plan (in the coming days) to show the text to specialists\n>> who perform technical translation of documentation.\n> Thank you. I've pushed this fix with minor corrections from me.\n\nThank you for fixing that defect!\n\nPlease look at an error message emitted for foreign tables:\nCREATE TABLE t (i int) PARTITION BY RANGE (i);\nCREATE FOREIGN TABLE ftp_0_1 PARTITION OF t\n FOR VALUES FROM (0) TO (1)\n SERVER loopback OPTIONS (table_name 'lt_0_1');\nCREATE FOREIGN TABLE ftp_1_2 PARTITION OF t\n FOR VALUES FROM (1) TO (2)\n SERVER loopback OPTIONS (table_name 'lt_1_2');\nALTER TABLE t MERGE PARTITIONS (ftp_0_1, ftp_1_2) INTO ftp_0_2;\nERROR: \"ftp_0_1\" is not a table\n\nShouldn't it be more correct/precise?\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 10 Apr 2024 12:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "10.04.2024 12:00, Alexander Lakhin wrote:\n> Hello Alexander and Dmitry,\n>\n> 10.04.2024 02:03, Alexander Korotkov wrote:\n>> Thank you. I've pushed this fix with minor corrections from me.\n>\n\nPlease look at another anomaly with MERGE.\n\nCREATE TEMP TABLE t (i int) PARTITION BY RANGE (i);\nCREATE TABLE tp_0_2 PARTITION OF t\n FOR VALUES FROM (0) TO (2);\nfails with\nERROR: cannot create a permanent relation as partition of temporary relation \"t\"\n\nBut\nCREATE TEMP TABLE t (i int) PARTITION BY RANGE (i);\nCREATE TEMP TABLE tp_0_1 PARTITION OF t\n FOR VALUES FROM (0) TO (1);\nCREATE TEMP TABLE tp_1_2 PARTITION OF t\n FOR VALUES FROM (1) TO (2);\nALTER TABLE t MERGE PARTITIONS (tp_0_1, tp_1_2) INTO tp_0_2;\nsucceeds and we get:\nregression=# \\d+ t*\n Partitioned table \"pg_temp_1.t\"\n Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n i | integer | | | | plain | | |\nPartition key: RANGE (i)\nPartitions: tp_0_2 FOR VALUES FROM (0) TO (2)\n\n Table \"public.tp_0_2\"\n Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n i | integer | | | | plain | | |\nPartition of: t FOR VALUES FROM (0) TO (2)\nPartition constraint: ((i IS NOT NULL) AND (i >= 0) AND (i < 2))\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 10 Apr 2024 15:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\nAlexander Korotkov, thanks for the commit of previous fix.\nAlexander Lakhin, thanks for the problem you found.\n\nThere are two corrections attached to the letter:\n\n1) v1-0001-Fix-for-SPLIT-MERGE-partitions-of-temporary-table.patch - fix \nfor the problem [1].\n\n2) v1-0002-Fixes-for-english-text.patch - fixes for English text \n(comments, error messages etc.).\n\nLinks:\n[1] \nhttps://www.postgresql.org/message-id/dbc8b96c-3cf0-d1ee-860d-0e491da20485%40gmail.com\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Wed, 10 Apr 2024 20:22:35 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 1:22 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n\n> 2) v1-0002-Fixes-for-english-text.patch - fixes for English text\n> (comments, error messages etc.).\n\n\nFWIW, I also proposed a patch earlier that fixes error messages and\ncomments in the split partition code at\nhttps://www.postgresql.org/message-id/flat/CAMbWs49DDsknxyoycBqiE72VxzL_sYHF6zqL8dSeNehKPJhkKg%40mail.gmail.com\n\nThanks\nRichard",
"msg_date": "Thu, 11 Apr 2024 15:57:12 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\n> FWIW, I also proposed a patch earlier that fixes error messages and\n> comments in the split partition code\n\nSorry, I thought all the fixes you suggested were already included in \nv1-0002-Fixes-for-english-text.patch (but they are not).\nAdded missing lines to v2-0002-Fixes-for-english-text.patch.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Thu, 11 Apr 2024 11:59:10 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi Dmitry,\n\n11.04.2024 11:59, Dmitry Koval wrote:\n>\n>> FWIW, I also proposed a patch earlier that fixes error messages and\n>> comments in the split partition code\n>\n> Sorry, I thought all the fixes you suggested were already included in v1-0002-Fixes-for-english-text.patch (but they \n> are not).\n> Added missing lines to v2-0002-Fixes-for-english-text.patch.\n>\n\nIt seems to me that v2-0001-Fix-for-SPLIT-MERGE-partitions-of-temporary-table.patch\nis not complete either.\nTake a look, please:\nCREATE TABLE t (i int) PARTITION BY RANGE (i);\nSET search_path = pg_temp, public;\nCREATE TABLE tp_0_1 PARTITION OF t\n FOR VALUES FROM (0) TO (1);\n-- fails with:\nERROR: cannot create a temporary relation as partition of permanent relation \"t\"\n\nBut:\nCREATE TABLE t (i int) PARTITION BY RANGE (i);\nCREATE TABLE tp_0_1 PARTITION OF t\n FOR VALUES FROM (0) TO (1);\nCREATE TABLE tp_1_2 PARTITION OF t\n FOR VALUES FROM (1) TO (2);\nINSERT INTO t VALUES(0), (1);\nSELECT * FROM t;\n-- the expected result is:\n i\n---\n 0\n 1\n(2 rows)\n\nSET search_path = pg_temp, public;\nALTER TABLE t\nMERGE PARTITIONS (tp_0_1, tp_1_2) INTO tp_0_2;\n-- succeeds, and\n\\c -\nSELECT * FROM t;\n-- gives:\n i\n---\n(0 rows)\n\nPlease also ask your tech writers to check contents of src/test/sql/*, if\npossible (perhaps, they'll fix \"salesmans\" and improve grammar).\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 11 Apr 2024 15:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\n1.\nAlexander Lakhin sent a question about index name after MERGE (partition \nname is the same as one of the merged partitions):\n\n----start of quote----\nI'm also confused by an index name after MERGE:\nCREATE TABLE t (i int) PARTITION BY RANGE (i);\n\nCREATE TABLE tp_0_1 PARTITION OF t FOR VALUES FROM (0) TO (1);\nCREATE TABLE tp_1_2 PARTITION OF t FOR VALUES FROM (1) TO (2);\n\nCREATE INDEX tidx ON t(i);\nALTER TABLE t MERGE PARTITIONS (tp_1_2, tp_0_1) INTO tp_1_2;\n\\d+ t*\n\n Table \"public.tp_1_2\"\n Column | Type | Collation | Nullable | Default | Storage | \nCompression | Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n i | integer | | | | plain | \n | |\nPartition of: t FOR VALUES FROM (0) TO (2)\nPartition constraint: ((i IS NOT NULL) AND (i >= 0) AND (i < 2))\nIndexes:\n \"merge-16385-3A14B2-tmp_i_idx\" btree (i)\n\nIs the name \"merge-16385-3A14B2-tmp_i_idx\" valid or it's something \ntemporary?\n----end of quote----\n\nFix for this case added to file \nv3-0001-Fix-for-SPLIT-MERGE-partitions-of-temporary-table.patch.\n\n----\n\n2.\n >It seems to me that v2-0001-Fix-for-SPLIT-MERGE-partitions-of-\n >temporary-table.patch is not complete either.\n\nAdded correction (and test), see \nv3-0001-Fix-for-SPLIT-MERGE-partitions-of-temporary-table.patch.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Thu, 11 Apr 2024 16:27:40 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi, Dmitry!\n\nOn Thu, Apr 11, 2024 at 4:27 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> 1.\n> Alexander Lakhin sent a question about index name after MERGE (partition\n> name is the same as one of the merged partitions):\n>\n> ----start of quote----\n> I'm also confused by an index name after MERGE:\n> CREATE TABLE t (i int) PARTITION BY RANGE (i);\n>\n> CREATE TABLE tp_0_1 PARTITION OF t FOR VALUES FROM (0) TO (1);\n> CREATE TABLE tp_1_2 PARTITION OF t FOR VALUES FROM (1) TO (2);\n>\n> CREATE INDEX tidx ON t(i);\n> ALTER TABLE t MERGE PARTITIONS (tp_1_2, tp_0_1) INTO tp_1_2;\n> \\d+ t*\n>\n> Table \"public.tp_1_2\"\n> Column | Type | Collation | Nullable | Default | Storage |\n> Compression | Stats target | Description\n> --------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n> i | integer | | | | plain |\n> | |\n> Partition of: t FOR VALUES FROM (0) TO (2)\n> Partition constraint: ((i IS NOT NULL) AND (i >= 0) AND (i < 2))\n> Indexes:\n> \"merge-16385-3A14B2-tmp_i_idx\" btree (i)\n>\n> Is the name \"merge-16385-3A14B2-tmp_i_idx\" valid or it's something\n> temporary?\n> ----end of quote----\n>\n> Fix for this case added to file\n> v3-0001-Fix-for-SPLIT-MERGE-partitions-of-temporary-table.patch.\n>\n> ----\n>\n> 2.\n> >It seems to me that v2-0001-Fix-for-SPLIT-MERGE-partitions-of-\n> >temporary-table.patch is not complete either.\n>\n> Added correction (and test), see\n> v3-0001-Fix-for-SPLIT-MERGE-partitions-of-temporary-table.patch.\n\nThank you, I'll review this later today.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 11 Apr 2024 17:21:17 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "\n11.04.2024 16:27, Dmitry Koval wrote:\n>\n> Added correction (and test), see v3-0001-Fix-for-SPLIT-MERGE-partitions-of-temporary-table.patch.\n>\n\nThank you for the correction, but may be an attempt to merge into implicit\npg_temp should fail just like CREATE TABLE ... PARTITION OF ... does?\n\nPlease look also at another anomaly with schemas:\nCREATE SCHEMA s1;\nCREATE TABLE t (i int) PARTITION BY RANGE (i);\nCREATE TABLE tp_0_2 PARTITION OF t\n FOR VALUES FROM (0) TO (2);\nALTER TABLE t SPLIT PARTITION tp_0_2 INTO\n (PARTITION s1.tp0 FOR VALUES FROM (0) TO (1), PARTITION s1.tp1 FOR VALUES FROM (1) TO (2));\nresults in:\n\\d+ s1.*\nDid not find any relation named \"s1.*\"\n\\d+ tp*\n Table \"public.tp0\"\n...\n Table \"public.tp1\"\n...\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 11 Apr 2024 20:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 8:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> 11.04.2024 16:27, Dmitry Koval wrote:\n> >\n> > Added correction (and test), see v3-0001-Fix-for-SPLIT-MERGE-partitions-of-temporary-table.patch.\n> >\n>\n> Thank you for the correction, but may be an attempt to merge into implicit\n> pg_temp should fail just like CREATE TABLE ... PARTITION OF ... does?\n>\n> Please look also at another anomaly with schemas:\n> CREATE SCHEMA s1;\n> CREATE TABLE t (i int) PARTITION BY RANGE (i);\n> CREATE TABLE tp_0_2 PARTITION OF t\n> FOR VALUES FROM (0) TO (2);\n> ALTER TABLE t SPLIT PARTITION tp_0_2 INTO\n> (PARTITION s1.tp0 FOR VALUES FROM (0) TO (1), PARTITION s1.tp1 FOR VALUES FROM (1) TO (2));\n> results in:\n> \\d+ s1.*\n> Did not find any relation named \"s1.*\"\n> \\d+ tp*\n> Table \"public.tp0\"\n> ...\n> Table \"public.tp1\"\n\n+1\nI think we shouldn't unconditionally copy schema name and\nrelpersistence from the parent table. Instead we should throw the\nerror on a mismatch like CREATE TABLE ... PARTITION OF ... does. I'm\nworking on revising this fix.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 12 Apr 2024 04:53:43 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 9:54 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> I think we shouldn't unconditionally copy schema name and\n> relpersistence from the parent table. Instead we should throw the\n> error on a mismatch like CREATE TABLE ... PARTITION OF ... does. I'm\n> working on revising this fix.\n\nWe definitely shouldn't copy the schema name from the parent table. It\nshould be possible to schema-qualify the new partition names, and if\nyou don't, then the search_path should determine where they get\nplaced.\n\nBut I am inclined to think that relpersistence should be copied. It's\nweird that you split an unlogged partition and you get logged\npartitions.\n\nOne of the things I dislike about this type of feature -- not this\nimplementation specifically, but just this kind of idea in general --\nis that the syntax mentions a whole bunch of tables but in a way where\nyou can't set their properties. Persistence, reloptions, whatever.\nThere's just no place to mention any of that stuff - and if you wanted\nto create a place, you'd have to invent special syntax for each\nseparate thing. That's why I think it's good that the normal way of\ncreating a partition is CREATE TABLE .. PARTITION OF. Because that\nway, we know that the full power of the CREATE TABLE statement is\nalways available, and you can set anything that you could set for a\ntable that is not a partition.\n\nOf course, that is not to say that some people won't like to have a\nfeature of this sort. I expect they will. The approach does have some\ndrawbacks, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Apr 2024 22:20:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\nAttached is a patch with corrections based on comments in previous \nletters (I think these corrections are not final).\nI'll be very grateful for feedbacks and bug reports.\n\n11.04.2024 20:00, Alexander Lakhin wrote:\n > may be an attempt to merge into implicit\n > pg_temp should fail just like CREATE TABLE ... PARTITION OF ... does?\n\nCorrected. Result is:\n\n\\d+ s1.*\nTable \"s1.tp0\"\n...\nTable \"s1.tp1\"\n...\n\\d+ tp*\nDid not find any relation named \"tp*\".\n\n\n12.04.2024 4:53, Alexander Korotkov wrote:\n > I think we shouldn't unconditionally copy schema name and\n > relpersistence from the parent table. Instead we should throw the\n > error on a mismatch like CREATE TABLE ... PARTITION OF ... does.\n12.04.2024 5:20, Robert Haas wrote:\n > We definitely shouldn't copy the schema name from the parent table.\n\nFixed.\n\n12.04.2024 5:20, Robert Haas wrote:\n > One of the things I dislike about this type of feature -- not this\n > implementation specifically, but just this kind of idea in general --\n > is that the syntax mentions a whole bunch of tables but in a way where\n > you can't set their properties. Persistence, reloptions, whatever.\n\nIn next releases I want to allow specifying options (probably, first of \nall, specifying tablespace of the partitions).\nBut before that, I would like to get a users reaction - what options \nthey really need?\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Fri, 12 Apr 2024 16:04:23 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi Dmitry,\n\n12.04.2024 16:04, Dmitry Koval wrote:\n> Hi!\n>\n> Attached is a patch with corrections based on comments in previous letters (I think these corrections are not final).\n> I'll be very grateful for feedbacks and bug reports.\n>\n> 11.04.2024 20:00, Alexander Lakhin wrote:\n> > may be an attempt to merge into implicit\n> > pg_temp should fail just like CREATE TABLE ... PARTITION OF ... does?\n>\n> Corrected. Result is:\n\nThank you!\nStill now we're able to create a partition in the pg_temp schema\nexplicitly. Please try:\nALTER TABLE t\nMERGE PARTITIONS (tp_0_1, tp_1_2) INTO pg_temp.tp_0_2;\n\nin the scenario [1] and you'll get the same empty table.\n\n[1] https://www.postgresql.org/message-id/fdaa003e-919c-cbc9-4f0c-e4546e96bd65%40gmail.com\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 12 Apr 2024 20:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Thanks, Alexander!\n\n> Still now we're able to create a partition in the pg_temp schema\n> explicitly.\n\nAttached patches with fix.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Fri, 12 Apr 2024 22:59:57 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi, Dmitry!\n\nOn Fri, Apr 12, 2024 at 10:59 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n>\n> Thanks, Alexander!\n>\n> > Still now we're able to create a partition in the pg_temp schema\n> > explicitly.\n>\n> Attached patches with fix.\n\nPlease, find a my version of this fix attached. I think we need to\ncheck relpersistence in a similar way ATTACH PARTITION or CREATE TABLE\n... PARTITION OF do. I'm going to polish this a little bit more.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Sat, 13 Apr 2024 13:04:58 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Sat, Apr 13, 2024 at 6:05 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> Please, find a my version of this fix attached. I think we need to\n> check relpersistence in a similar way ATTACH PARTITION or CREATE TABLE\n> ... PARTITION OF do. I'm going to polish this a little bit more.\n\n+ errmsg(\"\\\"%s\\\" is not an ordinary table\",\n\nThis is not a phrasing that we use in any other error message. We\nalways just say \"is not a table\".\n\n+ * Open the new partition and acquire exclusive lock on it. This will\n\nA minor nitpick is that this should probably say access exclusive\nrather than exclusive. But the bigger thing that confuses me here is\nthat if we just created the partition, surely we must *already* hold\nAccessExclusiveLoc on it. No?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Apr 2024 10:30:57 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hello Robert,\n\n15.04.2024 17:30, Robert Haas wrote:\n> On Sat, Apr 13, 2024 at 6:05 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>> Please, find a my version of this fix attached. I think we need to\n>> check relpersistence in a similar way ATTACH PARTITION or CREATE TABLE\n>> ... PARTITION OF do. I'm going to polish this a little bit more.\n> + errmsg(\"\\\"%s\\\" is not an ordinary table\",\n>\n> This is not a phrasing that we use in any other error message. We\n> always just say \"is not a table\".\n\nInitially I was confused by that message, because of:\nCREATE TABLE t (i int) PARTITION BY RANGE (i);\nCREATE FOREIGN TABLE ftp_0_1 PARTITION OF t\n FOR VALUES FROM (0) TO (1)\n SERVER loopback OPTIONS (table_name 'lt_0_1');\nCREATE FOREIGN TABLE ftp_1_2 PARTITION OF t\n FOR VALUES FROM (1) TO (2)\n SERVER loopback OPTIONS (table_name 'lt_1_2');\nALTER TABLE t MERGE PARTITIONS (ftp_0_1, ftp_1_2) INTO ftp_0_2;\nERROR: \"ftp_0_1\" is not a table\n(Isn't a foreign table a table?)\n\nAnd also:\nCREATE TABLE t (i int) PARTITION BY RANGE (i);\nCREATE TABLE tp_0_1 PARTITION OF t\n FOR VALUES FROM (0) TO (1);\nCREATE TABLE t2 (i int) PARTITION BY RANGE (i);\nALTER TABLE t MERGE PARTITIONS (tp_0_1, t2) INTO tpn;\nERROR: \"t2\" is not a table\n(Isn't a partitioned table a table?)\n\nAnd in fact, an ordinary table is not suitable for MERGE anyway:\nCREATE TABLE t (i int) PARTITION BY RANGE (i);\nCREATE TABLE tp_0_1 PARTITION OF t\n FOR VALUES FROM (0) TO (1);\nCREATE TABLE t2 (i int);\nALTER TABLE t MERGE PARTITIONS (tp_0_1, t2) INTO tpn;\nERROR: \"t2\" is not a partition\n\nSo I don't think that \"an ordinary table\" is a good (unambiguous) term\neither.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 15 Apr 2024 18:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\n> Please, find a my version of this fix attached.\n\nIs it possible to make a small addition to the file v6-0001 ... .patch \n(see attachment)?\n\nMost important:\n1) Line 19:\n\n+ mergePartName = makeRangeVar(cmd->name->schemaname, tmpRelName, -1);\n\n(temporary table should use the same schema as the partition);\n\n2) Lines 116-123:\n\n+RESET search_path;\n+\n+-- Can't merge persistent partitions into a temporary partition\n+ALTER TABLE t MERGE PARTITIONS (tp_0_1, tp_1_2) INTO pg_temp.tp_0_2;\n+\n+SET search_path = pg_temp, public;\n\n(Alexandr Lakhin's test for using of pg_temp schema explicitly).\n\n\nThe rest of the changes in v6_afterfix.diff are not very important and \ncan be ignored.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Mon, 15 Apr 2024 18:26:56 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Mon, Apr 15, 2024 at 11:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> Initially I was confused by that message, because of:\n> CREATE TABLE t (i int) PARTITION BY RANGE (i);\n> CREATE FOREIGN TABLE ftp_0_1 PARTITION OF t\n> FOR VALUES FROM (0) TO (1)\n> SERVER loopback OPTIONS (table_name 'lt_0_1');\n> CREATE FOREIGN TABLE ftp_1_2 PARTITION OF t\n> FOR VALUES FROM (1) TO (2)\n> SERVER loopback OPTIONS (table_name 'lt_1_2');\n> ALTER TABLE t MERGE PARTITIONS (ftp_0_1, ftp_1_2) INTO ftp_0_2;\n> ERROR: \"ftp_0_1\" is not a table\n> (Isn't a foreign table a table?)\n\nI agree that this can be confusing, but a patch that is about adding\nSPLIT and MERGE PARTITION operations cannot decide to also invent a\nnew error message phraseology and use it only in one place. We need to\nmaintain consistency across the whole code base.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Apr 2024 11:38:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi, Dmitry!\n\nOn Mon, Apr 15, 2024 at 6:26 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n>\n> Hi!\n>\n> > Please, find a my version of this fix attached.\n>\n> Is it possible to make a small addition to the file v6-0001 ... .patch\n> (see attachment)?\n>\n> Most important:\n> 1) Line 19:\n>\n> + mergePartName = makeRangeVar(cmd->name->schemaname, tmpRelName, -1);\n>\n> (temporary table should use the same schema as the partition);\n>\n> 2) Lines 116-123:\n>\n> +RESET search_path;\n> +\n> +-- Can't merge persistent partitions into a temporary partition\n> +ALTER TABLE t MERGE PARTITIONS (tp_0_1, tp_1_2) INTO pg_temp.tp_0_2;\n> +\n> +SET search_path = pg_temp, public;\n>\n> (Alexandr Lakhin's test for using of pg_temp schema explicitly).\n>\n>\n> The rest of the changes in v6_afterfix.diff are not very important and\n> can be ignored.\n\nThank you. I've integrated your changes.\n\nThe revised patchset is attached.\n1) I've split the fix for the CommandCounterIncrement() issue and the\nfix for relation persistence issue into a separate patch.\n2) I've validated that the lock on the new partition is held in\ncreatePartitionTable() after ProcessUtility() as pointed out by\nRobert. So, no need to place the lock again.\n3) Added fix for problematic error message as a separate patch [1].\n4) Added rename \"salemans\" => \"salesmen\" for tests as a separate patch.\n\nI think these fixes are reaching committable shape, but I'd like\nsomeone to check it before I push.\n\nLinks.\n1. https://postgr.es/m/20240408.152402.1485994009160660141.horikyota.ntt%40gmail.com\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 18 Apr 2024 13:35:41 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi Alexander,\n\n18.04.2024 13:35, Alexander Korotkov wrote:\n>\n> The revised patchset is attached.\n> 1) I've split the fix for the CommandCounterIncrement() issue and the\n> fix for relation persistence issue into a separate patch.\n> 2) I've validated that the lock on the new partition is held in\n> createPartitionTable() after ProcessUtility() as pointed out by\n> Robert. So, no need to place the lock again.\n> 3) Added fix for problematic error message as a separate patch [1].\n> 4) Added rename \"salemans\" => \"salesmen\" for tests as a separate patch.\n>\n> I think these fixes are reaching committable shape, but I'd like\n> someone to check it before I push.\n\nI think the feature implementation should also provide tab completion for\nSPLIT/MERGE.\n(ALTER TABLE t S<Tab>\nfills in only SET now.)\n\nAlso, the following MERGE operation:\nCREATE TABLE t (i int, PRIMARY KEY(i)) PARTITION BY RANGE (i);\nCREATE TABLE tp_0 PARTITION OF t FOR VALUES FROM (0) TO (1);\nCREATE TABLE tp_1 PARTITION OF t FOR VALUES FROM (1) TO (2);\nALTER TABLE t MERGE PARTITIONS (tp_0, tp_1) INTO tp_0;\n\nleaves a strange constraint:\n\\d+ t*\n Table \"public.tp_0\"\n...\nNot-null constraints:\n \"merge-16385-26BCB0-tmp_i_not_null\" NOT NULL \"i\"\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 18 Apr 2024 19:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n\n> Hi Alexander,\n>\n> 18.04.2024 13:35, Alexander Korotkov wrote:\n>>\n>> The revised patchset is attached.\n>> 1) I've split the fix for the CommandCounterIncrement() issue and the\n>> fix for relation persistence issue into a separate patch.\n>> 2) I've validated that the lock on the new partition is held in\n>> createPartitionTable() after ProcessUtility() as pointed out by\n>> Robert. So, no need to place the lock again.\n>> 3) Added fix for problematic error message as a separate patch [1].\n>> 4) Added rename \"salemans\" => \"salesmen\" for tests as a separate patch.\n>>\n>> I think these fixes are reaching committable shape, but I'd like\n>> someone to check it before I push.\n>\n> I think the feature implementation should also provide tab completion for\n> SPLIT/MERGE.\n> (ALTER TABLE t S<Tab>\n> fills in only SET now.)\n\nHere's a patch for that. One thing I noticed while testing it was that\nthe tab completeion for partitions (Query_for_partition_of_table) shows\nall the schemas in the DB, even ones that don't contain any partitions\nof the table being altered.\n\n- ilmari",
"msg_date": "Thu, 18 Apr 2024 18:03:21 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On 2024-Apr-18, Alexander Lakhin wrote:\n\n> I think the feature implementation should also provide tab completion\n> for SPLIT/MERGE.\n\nI don't think that we should be imposing on feature authors or\ncommitters the task of filling in tab-completion for whatever features\nthey contribute. I mean, if they want to add that, cool; but if not,\nsomebody else can do that, too. It's not a critical piece.\n\nNow, if we're talking about whether a patch to add tab-completion to a\nfeature post feature-freeze is acceptable, I think it absolutely is\n(even though you could claim that it's a new psql feature). But for\nsure we shouldn't mandate that a feature be reverted just because it\nlacks tab-completion -- such lack is not an open-item against the\nfeature in that sense.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"That sort of implies that there are Emacs keystrokes which aren't obscure.\nI've been using it daily for 2 years now and have yet to discover any key\nsequence which makes any sense.\" (Paul Thomas)\n\n\n",
"msg_date": "Thu, 18 Apr 2024 19:49:22 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Thu, Apr 18, 2024 at 6:35 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> The revised patchset is attached.\n> 1) I've split the fix for the CommandCounterIncrement() issue and the\n> fix for relation persistence issue into a separate patch.\n> 2) I've validated that the lock on the new partition is held in\n> createPartitionTable() after ProcessUtility() as pointed out by\n> Robert. So, no need to place the lock again.\n> 3) Added fix for problematic error message as a separate patch [1].\n> 4) Added rename \"salemans\" => \"salesmen\" for tests as a separate patch.\n>\n> I think these fixes are reaching committable shape, but I'd like\n> someone to check it before I push.\n\nReviewing 0001:\n\n- Seems mostly fine. I think the comment /* Unlock and drop merged\npartitions. */ is wrong. I think it should say something like /* Drop\nthe current partitions before adding the new one. */ because (a) it\ndoesn't unlock anything, and there's another comment saying that and\n(b) we now know that the drop vs. add order matters.\n\nReviewing 0002:\n\n- Commit message typos: behavious, corresponsing\n\n- Given the change to the header comment of createPartitionTable, it's\nrather surprising to me that this patch doesn't touch the\ndocumentation. Isn't that a big change in semantics?\n\n- My previous review comment was really about the code comment, I\nbelieve, rather than the use of AccessExclusiveLock. NoLock is\nprobably fine, but if it were me I'd be tempted to write\nAccessExclusiveLock and just make the comment say something like /* We\nshould already have the lock, but do it this way just to be certain\n*/. But what you have is probably fine, too. Mostly, I want to clarify\nthe intent of my previous comment.\n\n- Do we, or can we, have a test that if you split a partition that's\nnot in the search path, the resulting partitions end up in your\ncreation namespace? And similarly for merge? 
And maybe also that\nschema-qualification works properly?\n\nI haven't exhaustively verified the patch, but these are some things I\nnoticed when scrolling through it.\n\nReviewing 0003:\n\n- Are you sure this can't dereference datum when datum is NULL, in\neither the upper or lower half? It sure looks strange to have code\nthat looks like it can make datum a null pointer, and then an\nunconditional deference just after.\n\n- In general I think the wording changes are improvements. I'm\nslightly suspicious that there might be an even better way to word it,\nbut I can't think of it right at this very moment.\n\n- I'm kind of unhappy (but not totally unhappy) with the semantics.\nSuppose I have a partition that allows values from 0 to 1000, but\nactually only contains values that are either between 0 and 99 or\nbetween 901 and 1000. If I try to to split the partition into one that\nallows 0..100 and a second that allows 900..1000, it will fail. Maybe\nthat's good, because that means that if a failure is going to happen,\nit will happen right at the beginning, rather than maybe after doing a\nlot of work. But on the other hand, it also kind of stinks, because it\nfeels like I'm being told I can't do something that I know is\nperfectly fine.\n\nReviewing 0004:\n\n- Obviously this is quite trivial and there's no real problem with it,\nbut if we're changing it anyway, how about a gender-neutral term\n(salesperson/salespeople)?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Apr 2024 15:59:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Here are some additional fixes to docs.",
"msg_date": "Thu, 18 Apr 2024 15:51:31 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\n18.04.2024 19:00, Alexander Lakhin wrote:\n> leaves a strange constraint:\n> \\d+ t*\n> Table \"public.tp_0\"\n> ...\n> Not-null constraints:\n> \"merge-16385-26BCB0-tmp_i_not_null\" NOT NULL \"i\"\n\nThanks!\nAttached fix (with test) for this case.\nThe patch should be applied after patches\nv6-0001- ... .patch ... v6-0004- ... .patch\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Fri, 19 Apr 2024 02:26:07 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "18.04.2024 20:49, Alvaro Herrera wrote:\n> On 2024-Apr-18, Alexander Lakhin wrote:\n>\n>> I think the feature implementation should also provide tab completion\n>> for SPLIT/MERGE.\n> I don't think that we should be imposing on feature authors or\n> committers the task of filling in tab-completion for whatever features\n> they contribute. I mean, if they want to add that, cool; but if not,\n> somebody else can do that, too. It's not a critical piece.\n\nI agree, I just wanted to note the lack of the current implementation.\nBut now, thanks to Dagfinn, we have the tab completion too.\n\nI have also a question regarding \"ALTER TABLE ... SET ACCESS METHOD\". The\ncurrent documentation says:\nWhen applied to a partitioned table, there is no data to rewrite, but\npartitions created afterwards will default to the given access method\nunless overridden by a USING clause.\n\nBut MERGE/SPLIT behave differently (if one can assume that MERGE/SPLIT\ncreate new partitions under the hood):\nCREATE ACCESS METHOD heap2 TYPE TABLE HANDLER heap_tableam_handler;\n\nCREATE TABLE t (i int, PRIMARY KEY(i)) PARTITION BY RANGE (i);\nALTER TABLE t SET ACCESS METHOD heap2;\nCREATE TABLE tp_0 PARTITION OF t FOR VALUES FROM (0) TO (1);\nCREATE TABLE tp_1 PARTITION OF t FOR VALUES FROM (1) TO (2);\n\\d t+\n Partitioned table \"public.t\"\n...\nAccess method: heap2\n\n Table \"public.tp_0\"\n...\nAccess method: heap2\n\n Table \"public.tp_1\"\n...\nAccess method: heap2\n\nALTER TABLE t MERGE PARTITIONS (tp_0, tp_1) INTO tp_0;\n Partitioned table \"public.t\"\n...\nAccess method: heap2\n\n Table \"public.tp_0\"\n...\nAccess method: heap\n\nShouldn't it be changed, what do you think?\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 19 Apr 2024 12:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 10:20:53PM -0400, Robert Haas wrote:\n> On Thu, Apr 11, 2024 at 9:54 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > I think we shouldn't unconditionally copy schema name and\n> > relpersistence from the parent table. Instead we should throw the\n> > error on a mismatch like CREATE TABLE ... PARTITION OF ... does. I'm\n> > working on revising this fix.\n> \n> We definitely shouldn't copy the schema name from the parent table. It\n> should be possible to schema-qualify the new partition names, and if\n> you don't, then the search_path should determine where they get\n> placed.\n\n+1. Alexander Lakhin reported an issue with schemas and SPLIT, and I\nnoticed an issue with schemas with MERGE. The issue I hit is occurs\nwhen MERGE'ing into a partition with the same name, and it's fixed like\nso:\n\n--- a/src/backend/commands/tablecmds.c\n+++ b/src/backend/commands/tablecmds.c\n@@ -21526,8 +21526,7 @@ ATExecMergePartitions(List **wqueue, AlteredTableInfo *tab, Relation rel,\n \t{\n \t\t/* Create partition table with generated temporary name. */\n \t\tsprintf(tmpRelName, \"merge-%u-%X-tmp\", RelationGetRelid(rel), MyProcPid);\n-\t\tmergePartName = makeRangeVar(get_namespace_name(RelationGetNamespace(rel)),\n-\t\t\t\t\t\t\t\t\t tmpRelName, -1);\n+\t\tmergePartName = makeRangeVar(mergePartName->schemaname, tmpRelName, -1);\n \t}\n \tcreatePartitionTable(mergePartName,\n \t\t\t\t\t\t makeRangeVar(get_namespace_name(RelationGetNamespace(rel)),\n\n> One of the things I dislike about this type of feature -- not this\n> implementation specifically, but just this kind of idea in general --\n> is that the syntax mentions a whole bunch of tables but in a way where\n> you can't set their properties. Persistence, reloptions, whatever.\n> There's just no place to mention any of that stuff - and if you wanted\n> to create a place, you'd have to invent special syntax for each\n> separate thing. 
That's why I think it's good that the normal way of\n> creating a partition is CREATE TABLE .. PARTITION OF. Because that\n> way, we know that the full power of the CREATE TABLE statement is\n> always available, and you can set anything that you could set for a\n> table that is not a partition.\n\nRight. The current feature is useful and will probably work for 90% of\npeople's partitioned tables.\n\nCurrently, CREATE TABLE .. PARTITION OF does not create stats objects on\nthe child table, but MERGE PARTITIONS does, which seems strange.\nMaybe stats should not be included on the new child ?\n\nNote that stats on parent table are not analagous to indexes -\npartitioned indexes do nothing other than cause indexes to be created on\nany new/attached partitions. But stats objects on the parent 1) cause\nextended stats to be collected and computed across the whole partition\nheirarchy, and 2) do not cause stats to be computed for the individual\npartitions.\n\nPartitions can have different column definitions, for example null\nconstraints, FKs, defaults. And currently, if you MERGE partitions,\nthose will all be lost (or rather, replaced by whatever LIKE parent\ngives). I think that's totally fine - anyone using different defaults\non child tables could either not use MERGE PARTITIONS, or fix up the\ndefaults afterwards. There's not much confusion that the details of the\ndifferences between individual partitions will be lost when the\nindividual partitions are merged and no longer exist.\nBut I think it'd be useful to document how the new partitions will be\nconstructed.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 19 Apr 2024 06:34:46 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Fri, Apr 19, 2024 at 2:26 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n>\n> Hi!\n>\n> 18.04.2024 19:00, Alexander Lakhin wrote:\n> > leaves a strange constraint:\n> > \\d+ t*\n> > Table \"public.tp_0\"\n> > ...\n> > Not-null constraints:\n> > \"merge-16385-26BCB0-tmp_i_not_null\" NOT NULL \"i\"\n>\n> Thanks!\n> Attached fix (with test) for this case.\n> The patch should be applied after patches\n> v6-0001- ... .patch ... v6-0004- ... .patch\n\nI've incorporated this fix with 0001 patch.\n\nAlso added to the patchset\n005 – tab completion by Dagfinn [1]\n006 – draft fix for table AM issue spotted by Alexander Lakhin [2]\n007 – doc review by Justin [3]\n\nI'm continuing work on this.\n\nLinks\n1. https://www.postgresql.org/message-id/87plumiox2.fsf%40wibble.ilmari.org\n2. https://www.postgresql.org/message-id/84ada05b-be5c-473e-6d1c-ebe5dd21b190%40gmail.com\n3. https://www.postgresql.org/message-id/ZiGH0xc1lxJ71ZfB%40pryzbyj2023\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Fri, 19 Apr 2024 16:29:44 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\nOn Fri, Apr 19, 2024 at 4:29 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Fri, Apr 19, 2024 at 2:26 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> > 18.04.2024 19:00, Alexander Lakhin wrote:\n> > > leaves a strange constraint:\n> > > \\d+ t*\n> > > Table \"public.tp_0\"\n> > > ...\n> > > Not-null constraints:\n> > > \"merge-16385-26BCB0-tmp_i_not_null\" NOT NULL \"i\"\n> >\n> > Thanks!\n> > Attached fix (with test) for this case.\n> > The patch should be applied after patches\n> > v6-0001- ... .patch ... v6-0004- ... .patch\n>\n> I've incorporated this fix with 0001 patch.\n>\n> Also added to the patchset\n> 005 – tab completion by Dagfinn [1]\n> 006 – draft fix for table AM issue spotted by Alexander Lakhin [2]\n> 007 – doc review by Justin [3]\n>\n> I'm continuing work on this.\n>\n> Links\n> 1. https://www.postgresql.org/message-id/87plumiox2.fsf%40wibble.ilmari.org\n> 2. https://www.postgresql.org/message-id/84ada05b-be5c-473e-6d1c-ebe5dd21b190%40gmail.com\n> 3. https://www.postgresql.org/message-id/ZiGH0xc1lxJ71ZfB%40pryzbyj2023\n\n0001\nThe way we handle name collisions during MERGE PARTITIONS operation is\nreworked by integration of patch [3]. This makes note about commit in\n[2] not relevant.\n\n0002\nThe persistence of the new partition is copied as suggested in [1].\nBut the checks are in-place, because search_path could influence new\ntable persistence. Per review [2], commit message typos are fixed,\ndocumentation is revised, revised tests to cover schema-qualification,\nusage of search_path.\n\n0003\nMaking code more clear that we're not going to dereference the NULL\ndatum per note in [2].\n\n0004\nGender-neutral terms are used per suggestions in [2].\n\n0005\nCommit message revised\n\n0006\nRevise documentation mentioning we're going to copy the parent's table\nAM. Regression tests are added. Commit message revised.\n\n0007\nCommit message revised\n\nLinks\n1. 
https://www.postgresql.org/message-id/CA%2BTgmoYcjL%2Bw2BQzku5iNXKR5fyxJMSP3avQta8xngioTX7D7A%40mail.gmail.com\n2. https://www.postgresql.org/message-id/CA%2BTgmoY_4r6BeeSCTim04nAiCmmXg-1pG1toxQovZOP2qaFJ0A%40mail.gmail.com\n3. https://www.postgresql.org/message-id/f8b5cbf5-965e-4e5b-b506-33bbf41b0d50%40postgrespro.ru\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 22 Apr 2024 13:31:48 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Mon, Apr 22, 2024 at 01:31:48PM +0300, Alexander Korotkov wrote:\n> Hi!\n> \n> On Fri, Apr 19, 2024 at 4:29 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Fri, Apr 19, 2024 at 2:26 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> > > 18.04.2024 19:00, Alexander Lakhin wrote:\n> > > > leaves a strange constraint:\n> > > > \\d+ t*\n> > > > Table \"public.tp_0\"\n> > > > ...\n> > > > Not-null constraints:\n> > > > \"merge-16385-26BCB0-tmp_i_not_null\" NOT NULL \"i\"\n> > >\n> > > Thanks!\n> > > Attached fix (with test) for this case.\n> > > The patch should be applied after patches\n> > > v6-0001- ... .patch ... v6-0004- ... .patch\n> >\n> > I've incorporated this fix with 0001 patch.\n> >\n> > Also added to the patchset\n> > 005 – tab completion by Dagfinn [1]\n> > 006 – draft fix for table AM issue spotted by Alexander Lakhin [2]\n> > 007 – doc review by Justin [3]\n> >\n> > I'm continuing work on this.\n> >\n> > Links\n> > 1. https://www.postgresql.org/message-id/87plumiox2.fsf%40wibble.ilmari.org\n> > 2. https://www.postgresql.org/message-id/84ada05b-be5c-473e-6d1c-ebe5dd21b190%40gmail.com\n> > 3. https://www.postgresql.org/message-id/ZiGH0xc1lxJ71ZfB%40pryzbyj2023\n> \n> 0001\n> The way we handle name collisions during MERGE PARTITIONS operation is\n> reworked by integration of patch [3]. This makes note about commit in\n> [2] not relevant.\n\nThis patch also/already fixes the schema issue I reported. 
Thanks.\n\nIf you wanted to include a test case for that:\n\nbegin;\nCREATE SCHEMA s;\nCREATE SCHEMA t;\nCREATE TABLE p(i int) PARTITION BY RANGE(i);\nCREATE TABLE s.c1 PARTITION OF p FOR VALUES FROM (1)TO(2);\nCREATE TABLE s.c2 PARTITION OF p FOR VALUES FROM (2)TO(3);\nALTER TABLE p MERGE PARTITIONS (s.c1, s.c2) INTO s.c1; -- misbehaves if merging into the same name as an existing partition\n\\d+ p\n...\nPartitions: c1 FOR VALUES FROM (1) TO (3)\n\n> 0002\n> The persistence of the new partition is copied as suggested in [1].\n> But the checks are in-place, because search_path could influence new\n> table persistence. Per review [2], commit message typos are fixed,\n> documentation is revised, revised tests to cover schema-qualification,\n> usage of search_path.\n\nSubject: [PATCH v8 2/7] Make new partitions with parent's persistence during MERGE/SPLIT operations\n\nThis patch adds documentation saying:\n+ Any indexes, constraints and user-defined row-level triggers that exist\n+ in the parent table are cloned on new partitions [...]\n\nWhich is good to say, and addresses part of my message [0]\n[0] ZiJW1g2nbQs9ekwK@pryzbyj2023\n\nBut it doesn't have anything to do with \"creating new partitions with\nparent's persistence\". Maybe there was a merge conflict and the docs\nended up in the wrong patch ?\n\nAlso, defaults, storage options, compression are also copied. As will\nbe anything else from LIKE. And since anything added in the future will\nalso be copied, maybe it's better to just say that the tables will be\ncreated the same way as \"LIKE .. INCLUDING ALL EXCLUDING ..\", or\nsimilar. Otherwise, the next person who adds a new option for LIKE\nwould have to remember to update this paragraph...\n\nAlso, extended stats objects are currently cloned to new child tables.\nBut I suggested in [0] that they probably shouldn't be.\n\n> 007 – doc review by Justin [3]\n\nI suggest to drop this patch for now. 
I'll send some more minor fixes to\ndocs and code comments once the other patches are settled.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 24 Apr 2024 15:26:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi, Hackers!\n\nOn Thu, 25 Apr 2024 at 00:26, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Mon, Apr 22, 2024 at 01:31:48PM +0300, Alexander Korotkov wrote:\n> > Hi!\n> >\n> > On Fri, Apr 19, 2024 at 4:29 PM Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n> > > On Fri, Apr 19, 2024 at 2:26 AM Dmitry Koval <d.koval@postgrespro.ru>\n> wrote:\n> > > > 18.04.2024 19:00, Alexander Lakhin wrote:\n> > > > > leaves a strange constraint:\n> > > > > \\d+ t*\n> > > > > Table \"public.tp_0\"\n> > > > > ...\n> > > > > Not-null constraints:\n> > > > > \"merge-16385-26BCB0-tmp_i_not_null\" NOT NULL \"i\"\n> > > >\n> > > > Thanks!\n> > > > Attached fix (with test) for this case.\n> > > > The patch should be applied after patches\n> > > > v6-0001- ... .patch ... v6-0004- ... .patch\n> > >\n> > > I've incorporated this fix with 0001 patch.\n> > >\n> > > Also added to the patchset\n> > > 005 – tab completion by Dagfinn [1]\n> > > 006 – draft fix for table AM issue spotted by Alexander Lakhin [2]\n> > > 007 – doc review by Justin [3]\n> > >\n> > > I'm continuing work on this.\n> > >\n> > > Links\n> > > 1.\n> https://www.postgresql.org/message-id/87plumiox2.fsf%40wibble.ilmari.org\n> > > 2.\n> https://www.postgresql.org/message-id/84ada05b-be5c-473e-6d1c-ebe5dd21b190%40gmail.com\n> > > 3.\n> https://www.postgresql.org/message-id/ZiGH0xc1lxJ71ZfB%40pryzbyj2023\n> >\n> > 0001\n> > The way we handle name collisions during MERGE PARTITIONS operation is\n> > reworked by integration of patch [3]. This makes note about commit in\n> > [2] not relevant.\n>\n> This patch also/already fixes the schema issue I reported. 
Thanks.\n>\n> If you wanted to include a test case for that:\n>\n> begin;\n> CREATE SCHEMA s;\n> CREATE SCHEMA t;\n> CREATE TABLE p(i int) PARTITION BY RANGE(i);\n> CREATE TABLE s.c1 PARTITION OF p FOR VALUES FROM (1)TO(2);\n> CREATE TABLE s.c2 PARTITION OF p FOR VALUES FROM (2)TO(3);\n> ALTER TABLE p MERGE PARTITIONS (s.c1, s.c2) INTO s.c1; -- misbehaves if\n> merging into the same name as an existing partition\n> \\d+ p\n> ...\n> Partitions: c1 FOR VALUES FROM (1) TO (3)\n>\n> > 0002\n> > The persistence of the new partition is copied as suggested in [1].\n> > But the checks are in-place, because search_path could influence new\n> > table persistence. Per review [2], commit message typos are fixed,\n> > documentation is revised, revised tests to cover schema-qualification,\n> > usage of search_path.\n>\n> Subject: [PATCH v8 2/7] Make new partitions with parent's persistence\n> during MERGE/SPLIT operations\n>\n> This patch adds documentation saying:\n> + Any indexes, constraints and user-defined row-level triggers that\n> exist\n> + in the parent table are cloned on new partitions [...]\n>\n> Which is good to say, and addresses part of my message [0]\n> [0] ZiJW1g2nbQs9ekwK@pryzbyj2023\n>\n> But it doesn't have anything to do with \"creating new partitions with\n> parent's persistence\". Maybe there was a merge conflict and the docs\n> ended up in the wrong patch ?\n>\n> Also, defaults, storage options, compression are also copied. As will\n> be anything else from LIKE. And since anything added in the future will\n> also be copied, maybe it's better to just say that the tables will be\n> created the same way as \"LIKE .. INCLUDING ALL EXCLUDING ..\", or\n> similar. 
Otherwise, the next person who adds a new option for LIKE\n> would have to remember to update this paragraph...\n>\n> Also, extended stats objects are currently cloned to new child tables.\n> But I suggested in [0] that they probably shouldn't be.\n>\n> > 007 – doc review by Justin [3]\n>\n> I suggest to drop this patch for now. I'll send some more minor fixes to\n> docs and code comments once the other patches are settled.\n>\nI've looked at the patchset:\n\n0001 Look good.\n0002 Also right with docs modification proposed by Justin.\n0003:\nLooks like unused code\n5268 datum = cmpval ? list_nth(spec->lowerdatums, abs(cmpval) -\n1) : NULL;\noverridden by\n5278 datum = list_nth(spec->upperdatums, abs(cmpval) -\n1);\nand\n5290 datum = list_nth(spec->upperdatums, abs(cmpval) -\n1);\n\nOtherwise - good.\n\n0004:\nI suggest also getting rid of thee-noun compound words like:\nsalesperson_name. Maybe salesperson -> clerk? Or maybe use the same terms\nlike in pgbench: branches, tellers, accounts, balance.\n\n0005: Good\n0006: Patch is right\nIn comments:\n+ New partitions will have the same table access method,\n+ same column names and types as the partitioned table to which they\nbelong.\n(I'd suggest to remove second \"same\")\n\nTests are passed. I suppose that it's better to add similar tests for\nSPLIT/MERGE PARTITION(S) to those covering ATTACH/DETACH PARTITION (e.g.:\nsubscription/t/013_partition.pl and regression tests)\n\nOverall, great work! 
Thanks!\n\nRegards,\nPavel Borisov,\nSupabase.\n\nHi, Hackers!On Thu, 25 Apr 2024 at 00:26, Justin Pryzby <pryzby@telsasoft.com> wrote:On Mon, Apr 22, 2024 at 01:31:48PM +0300, Alexander Korotkov wrote:\n> Hi!\n> \n> On Fri, Apr 19, 2024 at 4:29 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Fri, Apr 19, 2024 at 2:26 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> > > 18.04.2024 19:00, Alexander Lakhin wrote:\n> > > > leaves a strange constraint:\n> > > > \\d+ t*\n> > > > Table \"public.tp_0\"\n> > > > ...\n> > > > Not-null constraints:\n> > > > \"merge-16385-26BCB0-tmp_i_not_null\" NOT NULL \"i\"\n> > >\n> > > Thanks!\n> > > Attached fix (with test) for this case.\n> > > The patch should be applied after patches\n> > > v6-0001- ... .patch ... v6-0004- ... .patch\n> >\n> > I've incorporated this fix with 0001 patch.\n> >\n> > Also added to the patchset\n> > 005 – tab completion by Dagfinn [1]\n> > 006 – draft fix for table AM issue spotted by Alexander Lakhin [2]\n> > 007 – doc review by Justin [3]\n> >\n> > I'm continuing work on this.\n> >\n> > Links\n> > 1. https://www.postgresql.org/message-id/87plumiox2.fsf%40wibble.ilmari.org\n> > 2. https://www.postgresql.org/message-id/84ada05b-be5c-473e-6d1c-ebe5dd21b190%40gmail.com\n> > 3. https://www.postgresql.org/message-id/ZiGH0xc1lxJ71ZfB%40pryzbyj2023\n> \n> 0001\n> The way we handle name collisions during MERGE PARTITIONS operation is\n> reworked by integration of patch [3]. This makes note about commit in\n> [2] not relevant.\n\nThis patch also/already fixes the schema issue I reported. 
Thanks.\n\nIf you wanted to include a test case for that:\n\nbegin;\nCREATE SCHEMA s;\nCREATE SCHEMA t;\nCREATE TABLE p(i int) PARTITION BY RANGE(i);\nCREATE TABLE s.c1 PARTITION OF p FOR VALUES FROM (1)TO(2);\nCREATE TABLE s.c2 PARTITION OF p FOR VALUES FROM (2)TO(3);\nALTER TABLE p MERGE PARTITIONS (s.c1, s.c2) INTO s.c1; -- misbehaves if merging into the same name as an existing partition\n\\d+ p\n...\nPartitions: c1 FOR VALUES FROM (1) TO (3)\n\n> 0002\n> The persistence of the new partition is copied as suggested in [1].\n> But the checks are in-place, because search_path could influence new\n> table persistence. Per review [2], commit message typos are fixed,\n> documentation is revised, revised tests to cover schema-qualification,\n> usage of search_path.\n\nSubject: [PATCH v8 2/7] Make new partitions with parent's persistence during MERGE/SPLIT operations\n\nThis patch adds documentation saying:\n+ Any indexes, constraints and user-defined row-level triggers that exist\n+ in the parent table are cloned on new partitions [...]\n\nWhich is good to say, and addresses part of my message [0]\n[0] ZiJW1g2nbQs9ekwK@pryzbyj2023\n\nBut it doesn't have anything to do with \"creating new partitions with\nparent's persistence\". Maybe there was a merge conflict and the docs\nended up in the wrong patch ?\n\nAlso, defaults, storage options, compression are also copied. As will\nbe anything else from LIKE. And since anything added in the future will\nalso be copied, maybe it's better to just say that the tables will be\ncreated the same way as \"LIKE .. INCLUDING ALL EXCLUDING ..\", or\nsimilar. Otherwise, the next person who adds a new option for LIKE\nwould have to remember to update this paragraph...\n\nAlso, extended stats objects are currently cloned to new child tables.\nBut I suggested in [0] that they probably shouldn't be.\n\n> 007 – doc review by Justin [3]\n\nI suggest to drop this patch for now. 
I'll send some more minor fixes to\ndocs and code comments once the other patches are settled.I've looked at the patchset:0001 Look good.0002 Also right with docs modification proposed by Justin.0003:Looks like unused code5268 datum = cmpval ? list_nth(spec->lowerdatums, abs(cmpval) - 1) : NULL;overridden by5278 datum = list_nth(spec->upperdatums, abs(cmpval) - 1);and5290 datum = list_nth(spec->upperdatums, abs(cmpval) - 1);Otherwise - good.0004:I suggest also getting rid of thee-noun compound words like: salesperson_name. Maybe salesperson -> clerk? Or maybe use the same terms like in pgbench: branches, tellers, accounts, balance.0005: Good0006: Patch is rightIn comments:+ New partitions will have the same table access method,+ same column names and types as the partitioned table to which they belong.(I'd suggest to remove second \"same\")Tests are passed. I suppose that it's better to add similar tests for SPLIT/MERGE PARTITION(S) to those covering ATTACH/DETACH PARTITION (e.g.: subscription/t/013_partition.pl and regression tests)Overall, great work! Thanks!Regards,Pavel Borisov,Supabase.",
"msg_date": "Fri, 26 Apr 2024 17:33:33 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi, Pavel.\n\nThank you for the review.\n\nOn Fri, Apr 26, 2024 at 4:33 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> I've looked at the patchset:\n>\n> 0001 Look good.\n> 0002 Also right with docs modification proposed by Justin.\n\nModified as proposed by Justin. The documentation for the way new\npartitions are created is now in separate patch.\n\n> 0003:\n> Looks like unused code\n> 5268 datum = cmpval ? list_nth(spec->lowerdatums, abs(cmpval) - 1) : NULL;\n> overridden by\n> 5278 datum = list_nth(spec->upperdatums, abs(cmpval) - 1);\n> and\n> 5290 datum = list_nth(spec->upperdatums, abs(cmpval) - 1);\n>\n> Otherwise - good.\n\nFixed, thanks.\n\n> 0004:\n> I suggest also getting rid of thee-noun compound words like: salesperson_name. Maybe salesperson -> clerk? Or maybe use the same terms like in pgbench: branches, tellers, accounts, balance.\n\nThank you, but I'd like to prefer keeping these modifications simple.\nIt's just regression tests, we don't need to have perfect naming here.\nMy intention is to fix just obvious errors.\n\n> 0005: Good\n> 0006: Patch is right\n> In comments:\n> + New partitions will have the same table access method,\n> + same column names and types as the partitioned table to which they belong.\n> (I'd suggest to remove second \"same\")\n\nDocumentation is modified per proposal by Justin. Thus double \"same\"\nis already gone.\n\n> Tests are passed. I suppose that it's better to add similar tests for SPLIT/MERGE PARTITION(S) to those covering ATTACH/DETACH PARTITION (e.g.: subscription/t/013_partition.pl and regression tests)\n\nThe revised patchset is attached. I'm going to push it if there are\nno objections.\n\nThank you for your suggestions about adding tests similar to\nsubscription/t/013_partition.pl. I will work on this after pushing\nthis patchset.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Sun, 28 Apr 2024 03:59:37 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi Justin,\n\nThank you for your review. Please check v9 of the patchset [1].\n\nOn Wed, Apr 24, 2024 at 11:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> This patch also/already fixes the schema issue I reported. Thanks.\n>\n> If you wanted to include a test case for that:\n>\n> begin;\n> CREATE SCHEMA s;\n> CREATE SCHEMA t;\n> CREATE TABLE p(i int) PARTITION BY RANGE(i);\n> CREATE TABLE s.c1 PARTITION OF p FOR VALUES FROM (1)TO(2);\n> CREATE TABLE s.c2 PARTITION OF p FOR VALUES FROM (2)TO(3);\n> ALTER TABLE p MERGE PARTITIONS (s.c1, s.c2) INTO s.c1; -- misbehaves if merging into the same name as an existing partition\n> \\d+ p\n> ...\n> Partitions: c1 FOR VALUES FROM (1) TO (3)\n\nThere is already a test which checks merging into the same name as an\nexisting partition. And there are tests with schema-qualified names.\nI'm not yet convinced we need a test with both these properties\ntogether.\n\n> > 0002\n> > The persistence of the new partition is copied as suggested in [1].\n> > But the checks are in-place, because search_path could influence new\n> > table persistence. Per review [2], commit message typos are fixed,\n> > documentation is revised, revised tests to cover schema-qualification,\n> > usage of search_path.\n>\n> Subject: [PATCH v8 2/7] Make new partitions with parent's persistence during MERGE/SPLIT operations\n>\n> This patch adds documentation saying:\n> + Any indexes, constraints and user-defined row-level triggers that exist\n> + in the parent table are cloned on new partitions [...]\n>\n> Which is good to say, and addresses part of my message [0]\n> [0] ZiJW1g2nbQs9ekwK@pryzbyj2023\n>\n> But it doesn't have anything to do with \"creating new partitions with\n> parent's persistence\". Maybe there was a merge conflict and the docs\n> ended up in the wrong patch ?\n\nMakes sense. Extracted this into a separate patch in v10.\n\n> Also, defaults, storage options, compression are also copied. 
As will\n> be anything else from LIKE. And since anything added in the future will\n> also be copied, maybe it's better to just say that the tables will be\n> created the same way as \"LIKE .. INCLUDING ALL EXCLUDING ..\", or\n> similar. Otherwise, the next person who adds a new option for LIKE\n> would have to remember to update this paragraph...\n\nReworded that way. Thank you.\n\n> Also, extended stats objects are currently cloned to new child tables.\n> But I suggested in [0] that they probably shouldn't be.\n\nI will explore this. Do we copy extended stats when we do CREATE\nTABLE ... PARTITION OF? I think we need to do the same here.\n\n> > 007 – doc review by Justin [3]\n>\n> I suggest to drop this patch for now. I'll send some more minor fixes to\n> docs and code comments once the other patches are settled.\n\nYour edits are welcome. Dropped this for now. And waiting for the\nnext revision from you.\n\nLinks.\n1. https://www.postgresql.org/message-id/CAPpHfduYuYECrqpHMgcOsNr%2B4j3uJK%2BJPUJ_zDBn-tqjjh3p1Q%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Sun, 28 Apr 2024 04:04:54 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hello,\n\n28.04.2024 03:59, Alexander Korotkov wrote:\n> The revised patchset is attached. I'm going to push it if there are\n> no objections.\n\nI have one additional question regarding security, if you don't mind:\nWhat permissions should a user have to perform split/merge?\n\nWhen we deal with mixed ownership, say, bob is an owner of a\npartitioned table, but not an owner of a partition, should we\nallow him to perform merge with that partition?\nConsider the following script:\nCREATE ROLE alice;\nGRANT CREATE ON SCHEMA public TO alice;\n\nSET SESSION AUTHORIZATION alice;\nCREATE TABLE t (i int PRIMARY KEY, t text, u text) PARTITION BY RANGE (i);\nCREATE TABLE tp_00 PARTITION OF t FOR VALUES FROM (0) TO (10);\nCREATE TABLE tp_10 PARTITION OF t FOR VALUES FROM (10) TO (20);\n\nCREATE POLICY p1 ON tp_00 USING (u = current_user);\nALTER TABLE tp_00 ENABLE ROW LEVEL SECURITY;\n\nINSERT INTO t(i, t, u) VALUES (0, 'info for bob', 'bob');\nINSERT INTO t(i, t, u) VALUES (1, 'info for alice', 'alice');\nRESET SESSION AUTHORIZATION;\n\nCREATE ROLE bob;\nGRANT CREATE ON SCHEMA public TO bob;\nALTER TABLE t OWNER TO bob;\nGRANT SELECT ON TABLE tp_00 TO bob;\n\nSET SESSION AUTHORIZATION bob;\nSELECT * FROM tp_00;\n--- here bob can see his info only\n\\d\n Schema | Name | Type | Owner\n--------+-------+-------------------+-------\n public | t | partitioned table | bob\n public | tp_00 | table | alice\n public | tp_10 | table | alice\n\n-- but then bob can do:\nALTER TABLE t MERGE PARTITIONS (tp_00, tp_10) INTO tp_00;\n-- (yes, he also can detach the partition tp_00, but then he couldn't\n-- re-attach nor read it)\n\n\\d\n Schema | Name | Type | Owner\n--------+-------+-------------------+-------\n public | t | partitioned table | bob\n public | tp_00 | table | bob\n\nThus bob effectively have captured the partition with the data.\n\nWhat do you think, does this create a new security risk?\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sun, 28 Apr 2024 14:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Sun, Apr 28, 2024 at 2:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> 28.04.2024 03:59, Alexander Korotkov wrote:\n> > The revised patchset is attached. I'm going to push it if there are\n> > no objections.\n>\n> I have one additional question regarding security, if you don't mind:\n> What permissions should a user have to perform split/merge?\n>\n> When we deal with mixed ownership, say, bob is an owner of a\n> partitioned table, but not an owner of a partition, should we\n> allow him to perform merge with that partition?\n> Consider the following script:\n> CREATE ROLE alice;\n> GRANT CREATE ON SCHEMA public TO alice;\n>\n> SET SESSION AUTHORIZATION alice;\n> CREATE TABLE t (i int PRIMARY KEY, t text, u text) PARTITION BY RANGE (i);\n> CREATE TABLE tp_00 PARTITION OF t FOR VALUES FROM (0) TO (10);\n> CREATE TABLE tp_10 PARTITION OF t FOR VALUES FROM (10) TO (20);\n>\n> CREATE POLICY p1 ON tp_00 USING (u = current_user);\n> ALTER TABLE tp_00 ENABLE ROW LEVEL SECURITY;\n>\n> INSERT INTO t(i, t, u) VALUES (0, 'info for bob', 'bob');\n> INSERT INTO t(i, t, u) VALUES (1, 'info for alice', 'alice');\n> RESET SESSION AUTHORIZATION;\n>\n> CREATE ROLE bob;\n> GRANT CREATE ON SCHEMA public TO bob;\n> ALTER TABLE t OWNER TO bob;\n> GRANT SELECT ON TABLE tp_00 TO bob;\n>\n> SET SESSION AUTHORIZATION bob;\n> SELECT * FROM tp_00;\n> --- here bob can see his info only\n> \\d\n> Schema | Name | Type | Owner\n> --------+-------+-------------------+-------\n> public | t | partitioned table | bob\n> public | tp_00 | table | alice\n> public | tp_10 | table | alice\n>\n> -- but then bob can do:\n> ALTER TABLE t MERGE PARTITIONS (tp_00, tp_10) INTO tp_00;\n> -- (yes, he also can detach the partition tp_00, but then he couldn't\n> -- re-attach nor read it)\n>\n> \\d\n> Schema | Name | Type | Owner\n> --------+-------+-------------------+-------\n> public | t | partitioned table | bob\n> public | tp_00 | table | bob\n>\n> Thus bob effectively have captured 
the partition with the data.\n>\n> What do you think, does this create a new security risk?\n\nAlexander, thank you for discovering this. I believe that the one who\nmerges partitions should have permissions for all the partitions\nmerged. I'll recheck this and provide the patch.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 28 Apr 2024 14:36:51 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Sun, Apr 28, 2024 at 04:04:54AM +0300, Alexander Korotkov wrote:\n> Hi Justin,\n> \n> Thank you for your review. Please check v9 of the patchset [1].\n> \n> On Wed, Apr 24, 2024 at 11:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > This patch also/already fixes the schema issue I reported. Thanks.\n> >\n> > If you wanted to include a test case for that:\n> >\n> > begin;\n> > CREATE SCHEMA s;\n> > CREATE SCHEMA t;\n> > CREATE TABLE p(i int) PARTITION BY RANGE(i);\n> > CREATE TABLE s.c1 PARTITION OF p FOR VALUES FROM (1)TO(2);\n> > CREATE TABLE s.c2 PARTITION OF p FOR VALUES FROM (2)TO(3);\n> > ALTER TABLE p MERGE PARTITIONS (s.c1, s.c2) INTO s.c1; -- misbehaves if merging into the same name as an existing partition\n> > \\d+ p\n> > ...\n> > Partitions: c1 FOR VALUES FROM (1) TO (3)\n> \n> There is already a test which checks merging into the same name as an\n> existing partition. And there are tests with schema-qualified names.\n> I'm not yet convinced we need a test with both these properties\n> together.\n\nI mentioned that the combination of schemas and merge-into-same-name is\nwhat currently doesn't work right.\n\n> > Also, extended stats objects are currently cloned to new child tables.\n> > But I suggested in [0] that they probably shouldn't be.\n> \n> I will explore this. Do we copy extended stats when we do CREATE\n> TABLE ... PARTITION OF? 
I think we need to do the same here.\n\nRight, they're not copied because an extended stats objs on the parent\ndoes something different than putting stats objects on each child.\nI've convinced myself that it's wrong to copy the parent's stats obj.\nIf someone wants stats objects on each child, they'll have to handle\nthem specially after MERGE/SPLIT, just as they would for per-child\ndefaults/constraints/etc.\n\nOn Sun, Apr 28, 2024 at 04:04:54AM +0300, Alexander Korotkov wrote:\n> On Wed, Apr 24, 2024 at 11:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > This patch adds documentation saying:\n> > + Any indexes, constraints and user-defined row-level triggers that exist\n> > + in the parent table are cloned on new partitions [...]\n> >\n> > Which is good to say, and addresses part of my message [0]\n> > [0] ZiJW1g2nbQs9ekwK@pryzbyj2023\n> \n> Makes sense. Extracted this into a separate patch in v10.\n\nI adjusted the language some and fixed a typo in the commit message.\n\ns/parition/partition/\n\n-- \nJustin",
"msg_date": "Sun, 28 Apr 2024 08:18:42 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Sunday, April 28, 2024, Alexander Lakhin <exclusion@gmail.com> wrote:\n\n>\n> When we deal with mixed ownership, say, bob is an owner of a\n> partitioned table, but not an owner of a partition, should we\n> allow him to perform merge with that partition?\n>\n>\nIIUC Merge causes the source tables to be dropped, their data having been\neffectively moved into the new partition. bob must not be allowed to drop\nAlice’s tables. Only an owner may do that. So if we do allow bob to build\na new partition using his select access, the tables he selected from would\nhave to remain behind if he is not an owner of them.\n\nDavid J.\n\nOn Sunday, April 28, 2024, Alexander Lakhin <exclusion@gmail.com> wrote:\nWhen we deal with mixed ownership, say, bob is an owner of a\npartitioned table, but not an owner of a partition, should we\nallow him to perform merge with that partition?\nIIUC Merge causes the source tables to be dropped, their data having been effectively moved into the new partition. bob must not be allowed to drop Alice’s tables. Only an owner may do that. So if we do allow bob to build a new partition using his select access, the tables he selected from would have to remain behind if he is not an owner of them.David J.",
"msg_date": "Sun, 28 Apr 2024 06:42:59 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Sunday, April 28, 2024, Alexander Lakhin <exclusion@gmail.com> wrote:\n\n>\n> When we deal with mixed ownership, say, bob is an owner of a\n> partitioned table, but not an owner of a partition, should we\n> allow him to perform merge with that partition?\n>\n>\nAttaching via alter table requires the user to own both the partitioned\ntable and the table being acted upon. Merge needs to behave similarly.\n\nThe fact that we let the superuser break the requirement of common\nownership is unfortunate but I guess understandable. But given the\nexisting behavior of attach merge should likewise fail if it find the user\ndoesn’t own the partitions being merged. The fact that the user can select\nfrom those tables can be acted upon manually if desired; these\nadministrative commands should all ensure common ownership and fail if that\nprecondition is not met.\n\nDavid J.\n\nOn Sunday, April 28, 2024, Alexander Lakhin <exclusion@gmail.com> wrote:\nWhen we deal with mixed ownership, say, bob is an owner of a\npartitioned table, but not an owner of a partition, should we\nallow him to perform merge with that partition?\nAttaching via alter table requires the user to own both the partitioned table and the table being acted upon. Merge needs to behave similarly.The fact that we let the superuser break the requirement of common ownership is unfortunate but I guess understandable. But given the existing behavior of attach merge should likewise fail if it find the user doesn’t own the partitions being merged. The fact that the user can select from those tables can be acted upon manually if desired; these administrative commands should all ensure common ownership and fail if that precondition is not met.David J.",
"msg_date": "Sun, 28 Apr 2024 07:09:09 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Sun, Apr 28, 2024 at 08:18:42AM -0500, Justin Pryzby wrote:\n> > I will explore this. Do we copy extended stats when we do CREATE\n> > TABLE ... PARTITION OF? I think we need to do the same here.\n> \n> Right, they're not copied because an extended stats objs on the parent\n> does something different than putting stats objects on each child.\n> I've convinced myself that it's wrong to copy the parent's stats obj.\n> If someone wants stats objects on each child, they'll have to handle\n> them specially after MERGE/SPLIT, just as they would for per-child\n> defaults/constraints/etc.\n\nI dug up this thread, in which the idea of copying extended stats from\nparent to child was considered some 6 years ago, but never implemented;\nfor consistency, MERGE/SPLIT shouldn't copy extended stats, either.\n\nhttps://www.postgresql.org/message-id/20180305195750.aecbpihhcvuskzba%40alvherre.pgsql\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 28 Apr 2024 09:54:16 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi Dmitry,\n\n19.04.2024 02:26, Dmitry Koval wrote:\n>\n> 18.04.2024 19:00, Alexander Lakhin wrote:\n>> leaves a strange constraint:\n>> \\d+ t*\n>> Table \"public.tp_0\"\n>> ...\n>> Not-null constraints:\n>> \"merge-16385-26BCB0-tmp_i_not_null\" NOT NULL \"i\"\n>\n> Thanks!\n> Attached fix (with test) for this case.\n> The patch should be applied after patches\n> v6-0001- ... .patch ... v6-0004- ... .patch\n\nI still wonder, why that constraint (now with a less questionable name) is\ncreated during MERGE?\n\nThat is, before MERGE, two partitions have only PRIMARY KEY indexes,\nwith no not-null constraint, and you can manually remove the constraint\nafter MERGE, so maybe it's not necessary...\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 29 Apr 2024 21:00:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\n1.\n29.04.2024 21:00, Alexander Lakhin wrote:\n> I still wonder, why that constraint (now with a less questionable name) is\n> created during MERGE?\n\nThe SPLIT/MERGE PARTITION(S) commands for creating partitions reuse the \nexisting code of CREATE TABLE .. LIKE ... command. A new partition was \ncreated with the name \"merge-16385-26BCB0-tmp\" (since there was an old \npartition with the same name). The constraint \n\"merge-16385-26BCB0-tmp_i_not_null\" was created too together with the \npartition. Subsequently, the table was renamed, but the constraint was not.\nNow a new partition is immediately created with the correct name (the \nold partition is renamed).\n\n2.\nJust in case, I am attaching a small fix v9_fix.diff for situation [1].\n\n[1] \nhttps://www.postgresql.org/message-id/0520c72e-8d97-245e-53f9-173beca2ab2e%40gmail.com\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Tue, 30 Apr 2024 03:10:47 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "30.04.2024 03:10, Dmitry Koval wrote:\n> Hi!\n>\n> 1.\n> 29.04.2024 21:00, Alexander Lakhin wrote:\n>> I still wonder, why that constraint (now with a less questionable name) is\n>> created during MERGE?\n>\n> The SPLIT/MERGE PARTITION(S) commands for creating partitions reuse the existing code of CREATE TABLE .. LIKE ... \n> command. A new partition was created with the name \"merge-16385-26BCB0-tmp\" (since there was an old partition with the \n> same name). The constraint \"merge-16385-26BCB0-tmp_i_not_null\" was created too together with the partition. \n> Subsequently, the table was renamed, but the constraint was not.\n> Now a new partition is immediately created with the correct name (the old partition is renamed).\n\nMaybe I'm doing something wrong, but the following script:\nCREATE TABLE t (i int, PRIMARY KEY(i)) PARTITION BY RANGE (i);\nCREATE TABLE tp_0 PARTITION OF t FOR VALUES FROM (0) TO (1);\nCREATE TABLE tp_1 PARTITION OF t FOR VALUES FROM (1) TO (2);\n\nCREATE TABLE t2 (LIKE t INCLUDING ALL);\nCREATE TABLE tp2 (LIKE tp_0 INCLUDING ALL);\ncreates tables t2, tp2 without not-null constraints.\n\nBut after\nALTER TABLE t MERGE PARTITIONS (tp_0, tp_1) INTO tp_0;\nI see:\n\\d+ tp_0\n...\nIndexes:\n \"tp_0_pkey\" PRIMARY KEY, btree (i)\nNot-null constraints:\n \"tp_0_i_not_null\" NOT NULL \"i\"\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 30 Apr 2024 06:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 08:00:00PM +0300, Alexander Lakhin wrote:\n> 11.04.2024 16:27, Dmitry Koval wrote:\n> > \n> > Added correction (and test), see v3-0001-Fix-for-SPLIT-MERGE-partitions-of-temporary-table.patch.\n> \n> Thank you for the correction, but may be an attempt to merge into implicit\n> pg_temp should fail just like CREATE TABLE ... PARTITION OF ... does?\n> \n> Please look also at another anomaly with schemas:\n> CREATE SCHEMA s1;\n> CREATE TABLE t (i int) PARTITION BY RANGE (i);\n> CREATE TABLE tp_0_2 PARTITION OF t\n> � FOR VALUES FROM (0) TO (2);\n> ALTER TABLE t SPLIT PARTITION tp_0_2 INTO\n> � (PARTITION s1.tp0 FOR VALUES FROM (0) TO (1), PARTITION s1.tp1 FOR VALUES FROM (1) TO (2));\n> results in:\n> \\d+ s1.*\n> Did not find any relation named \"s1.*\"\n> \\d+ tp*\n> ����������������������������������������� Table \"public.tp0\"\n\nHi,\n\nIs this issue already fixed ?\n\nI wasn't able to reproduce it. Maybe it only happened with earlier\npatch versions applied ?\n\nThanks,\n-- \nJustin\n\n\n",
"msg_date": "Tue, 30 Apr 2024 15:15:05 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\n30.04.2024 6:00, Alexander Lakhin пишет:\n> Maybe I'm doing something wrong, but the following script:\n> CREATE TABLE t (i int, PRIMARY KEY(i)) PARTITION BY RANGE (i);\n> CREATE TABLE tp_0 PARTITION OF t FOR VALUES FROM (0) TO (1);\n> CREATE TABLE tp_1 PARTITION OF t FOR VALUES FROM (1) TO (2);\n> \n> CREATE TABLE t2 (LIKE t INCLUDING ALL);\n> CREATE TABLE tp2 (LIKE tp_0 INCLUDING ALL);\n> creates tables t2, tp2 without not-null constraints.\n\nTo create partitions is used the \"CREATE TABLE ... LIKE ...\" command \nwith the \"EXCLUDING INDEXES\" modifier (to speed up the insertion of values).\n\nCREATE TABLE t (i int, PRIMARY KEY(i)) PARTITION BY RANGE(i);\nCREATE TABLE t2 (LIKE t INCLUDING ALL EXCLUDING INDEXES EXCLUDING IDENTITY);\n\\d+ t2;\n...\nNot-null constraints:\n \"t2_i_not_null\" NOT NULL \"i\"\nAccess method: heap\n\n\n[1] \nhttps://github.com/postgres/postgres/blob/d12b4ba1bd3eedd862064cf1dad5ff107c5cba90/src/backend/commands/tablecmds.c#L21215\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n",
"msg_date": "Wed, 1 May 2024 00:14:07 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\n30.04.2024 23:15, Justin Pryzby пишет:\n> Is this issue already fixed ?\n> I wasn't able to reproduce it. Maybe it only happened with earlier\n> patch versions applied ?\n\nI think this was fixed in commit [1].\n\n[1] \nhttps://github.com/postgres/postgres/commit/fcf80c5d5f0f3787e70fca8fd029d2e08a923f91\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n",
"msg_date": "Wed, 1 May 2024 22:51:24 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Wed, May 01, 2024 at 10:51:24PM +0300, Dmitry Koval wrote:\n> Hi!\n> \n> 30.04.2024 23:15, Justin Pryzby пишет:\n> > Is this issue already fixed ?\n> > I wasn't able to reproduce it. Maybe it only happened with earlier\n> > patch versions applied ?\n> \n> I think this was fixed in commit [1].\n> \n> [1] https://github.com/postgres/postgres/commit/fcf80c5d5f0f3787e70fca8fd029d2e08a923f91\n\nI tried to reproduce it at fcf80c5d5f~, but couldn't. \nI don't see how that patch would fix it anyway.\nI'm hoping Alexander can confirm what happened.\n\nThe other remaining issues I'm aware of are for EXCLUDING STATISTICS and\nrefusing to ALTER if the owners don't match.\n\nNote that the error that led to \"EXCLUDING IDENTITY\" is being discused\nover here:\nhttps://www.postgresql.org/message-id/3b8a9dc1-bbc7-0ef5-6863-c432afac7d59@gmail.com\n\nIt's possible that once that's addressed, the exclusion should be\nremoved here, too.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 3 May 2024 08:23:14 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Fri, May 3, 2024 at 4:23 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Wed, May 01, 2024 at 10:51:24PM +0300, Dmitry Koval wrote:\n> > 30.04.2024 23:15, Justin Pryzby пишет:\n> > > Is this issue already fixed ?\n> > > I wasn't able to reproduce it. Maybe it only happened with earlier\n> > > patch versions applied ?\n> >\n> > I think this was fixed in commit [1].\n> >\n> > [1] https://github.com/postgres/postgres/commit/fcf80c5d5f0f3787e70fca8fd029d2e08a923f91\n>\n> I tried to reproduce it at fcf80c5d5f~, but couldn't.\n> I don't see how that patch would fix it anyway.\n> I'm hoping Alexander can confirm what happened.\n\nThis problem is only relevant for an old version of fix [1], which\noverrides schemas for new partitions. That version was never\ncommitted.\n\n> The other remaining issues I'm aware of are for EXCLUDING STATISTICS and\n> refusing to ALTER if the owners don't match.\n\nThese two are in my list. I'm planning to work on them in the next few days.\n\n> Note that the error that led to \"EXCLUDING IDENTITY\" is being discused\n> over here:\n> https://www.postgresql.org/message-id/3b8a9dc1-bbc7-0ef5-6863-c432afac7d59@gmail.com\n>\n> It's possible that once that's addressed, the exclusion should be\n> removed here, too.\n\n+1\n\nLinks.\n1. https://www.postgresql.org/message-id/edfbd846-dcc1-42d1-ac26-715691b687d3%40postgrespro.ru\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Fri, 3 May 2024 16:32:25 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Fri, May 3, 2024 at 4:32 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Fri, May 3, 2024 at 4:23 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Wed, May 01, 2024 at 10:51:24PM +0300, Dmitry Koval wrote:\n> > > 30.04.2024 23:15, Justin Pryzby пишет:\n> > > > Is this issue already fixed ?\n> > > > I wasn't able to reproduce it. Maybe it only happened with earlier\n> > > > patch versions applied ?\n> > >\n> > > I think this was fixed in commit [1].\n> > >\n> > > [1] https://github.com/postgres/postgres/commit/fcf80c5d5f0f3787e70fca8fd029d2e08a923f91\n> >\n> > I tried to reproduce it at fcf80c5d5f~, but couldn't.\n> > I don't see how that patch would fix it anyway.\n> > I'm hoping Alexander can confirm what happened.\n>\n> This problem is only relevant for an old version of fix [1], which\n> overrides schemas for new partitions. That version was never\n> committed.\n\nHere are the patches.\n0001 Adds permission checks on the partitions before doing MERGE/SPLIT\n0002 Skips copying extended statistics while creating new partitions\nin MERGE/SPLIT\n\n0001 looks quite simple and trivial for me. I'm going to push it if\nno objections.\nFor 0002 I'd like to hear some feedback on wordings used in docs and comments.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Wed, 8 May 2024 21:00:10 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Wed, May 1, 2024 at 12:14 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> 30.04.2024 6:00, Alexander Lakhin пишет:\n> > Maybe I'm doing something wrong, but the following script:\n> > CREATE TABLE t (i int, PRIMARY KEY(i)) PARTITION BY RANGE (i);\n> > CREATE TABLE tp_0 PARTITION OF t FOR VALUES FROM (0) TO (1);\n> > CREATE TABLE tp_1 PARTITION OF t FOR VALUES FROM (1) TO (2);\n> >\n> > CREATE TABLE t2 (LIKE t INCLUDING ALL);\n> > CREATE TABLE tp2 (LIKE tp_0 INCLUDING ALL);\n> > creates tables t2, tp2 without not-null constraints.\n>\n> To create partitions is used the \"CREATE TABLE ... LIKE ...\" command\n> with the \"EXCLUDING INDEXES\" modifier (to speed up the insertion of\nvalues).\n>\n> CREATE TABLE t (i int, PRIMARY KEY(i)) PARTITION BY RANGE(i);\n> CREATE TABLE t2 (LIKE t INCLUDING ALL EXCLUDING INDEXES EXCLUDING\nIDENTITY);\n> \\d+ t2;\n> ...\n> Not-null constraints:\n> \"t2_i_not_null\" NOT NULL \"i\"\n> Access method: heap\n\nI've explored this a little bit more.\n\nIf the parent table has explicit not null constraint than results of\nMERGE/SPLIT look the same as result of CREATE TABLE ... PARTITION OF. 
In\nevery case there is explicit not null constraint in all the cases.\n\n# CREATE TABLE t (i int not null, PRIMARY KEY(i)) PARTITION BY RANGE(i);\n# \\d+ t\n Partitioned table \"public.t\"\n Column | Type | Collation | Nullable | Default | Storage | Compression\n| Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n i | integer | | not null | | plain |\n| |\nPartition key: RANGE (i)\nIndexes:\n \"t_pkey\" PRIMARY KEY, btree (i)\nNot-null constraints:\n \"t_i_not_null\" NOT NULL \"i\"\nNumber of partitions: 0\n# CREATE TABLE tp_0_2 PARTITION OF t FOR VALUES FROM (0) TO (2);\n# \\d+ tp_0_2\n Table \"public.tp_0_2\"\n Column | Type | Collation | Nullable | Default | Storage | Compression\n| Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n i | integer | | not null | | plain |\n| |\nPartition of: t FOR VALUES FROM (0) TO (2)\nPartition constraint: ((i IS NOT NULL) AND (i >= 0) AND (i < 2))\nIndexes:\n \"tp_0_2_pkey\" PRIMARY KEY, btree (i)\nNot-null constraints:\n \"t_i_not_null\" NOT NULL \"i\" (inherited)\nAccess method: heap\n# ALTER TABLE t SPLIT PARTITION tp_0_2 INTO\n# (PARTITION tp_0_1 FOR VALUES FROM (0) TO (1),\n# PARTITION tp_1_2 FOR VALUES FROM (1) TO (2))\n# \\d+ tp_0_1\n Table \"public.tp_0_1\"\n Column | Type | Collation | Nullable | Default | Storage | Compression\n| Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n i | integer | | not null | | plain |\n| |\nPartition of: t FOR VALUES FROM (0) TO (1)\nPartition constraint: ((i IS NOT NULL) AND (i >= 0) AND (i < 1))\nIndexes:\n \"tp_0_1_pkey\" PRIMARY KEY, btree (i)\nNot-null constraints:\n \"t_i_not_null\" NOT NULL \"i\" (inherited)\nAccess method: heap\n\nHowever, if not null constraint is implicit and derived from primary key,\nthe situation is 
different. The partition created by CREATE TABLE ...\nPARTITION OF doesn't have explicit not null constraint just like the\nparent. But the partition created by MERGE/SPLIT has explicit not null\ncontraint.\n\n# CREATE TABLE t (i int not null, PRIMARY KEY(i)) PARTITION BY RANGE(i);\n# \\d+ t\n Partitioned table \"public.t\"\n Column | Type | Collation | Nullable | Default | Storage | Compression\n| Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n i | integer | | not null | | plain |\n| |\nPartition key: RANGE (i)\nIndexes:\n \"t_pkey\" PRIMARY KEY, btree (i)\nNumber of partitions: 0\n# CREATE TABLE tp_0_2 PARTITION OF t FOR VALUES FROM (0) TO (2);\n# \\d+ tp_0_2\n Table \"public.tp_0_2\"\n Column | Type | Collation | Nullable | Default | Storage | Compression\n| Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n i | integer | | not null | | plain |\n| |\nPartition of: t FOR VALUES FROM (0) TO (2)\nPartition constraint: ((i IS NOT NULL) AND (i >= 0) AND (i < 2))\nIndexes:\n \"tp_0_2_pkey\" PRIMARY KEY, btree (i)\nAccess method: heap\n# ALTER TABLE t SPLIT PARTITION tp_0_2 INTO\n# (PARTITION tp_0_1 FOR VALUES FROM (0) TO (1),\n# PARTITION tp_1_2 FOR VALUES FROM (1) TO (2))\n# \\d+ tp_0_1\n Table \"public.tp_0_1\"\n Column | Type | Collation | Nullable | Default | Storage | Compression\n| Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n i | integer | | not null | | plain |\n| |\nPartition of: t FOR VALUES FROM (0) TO (1)\nPartition constraint: ((i IS NOT NULL) AND (i >= 0) AND (i < 1))\nIndexes:\n \"tp_0_1_pkey\" PRIMARY KEY, btree (i)\nNot-null constraints:\n \"tp_0_1_i_not_null\" NOT NULL \"i\"\nAccess method: heap\n\nI think this is related to the fact that we create indexes later. 
The same\napplies to CREATE TABLE ... LIKE. If we create indexes immediately, not\nexplicit not null contraints are created. Not if we do without indexes, we\nhave an explicit not null constraint.\n\n# CREATE TABLE t2 (LIKE t INCLUDING ALL);\n# \\d+ t2\n Table \"public.t2\"\n Column | Type | Collation | Nullable | Default | Storage | Compression\n| Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n i | integer | | not null | | plain |\n| |\nNot-null constraints:\n \"t2_i_not_null\" NOT NULL \"i\"\nAccess method: heap\n# CREATE TABLE t3 (LIKE t INCLUDING ALL EXCLUDING IDENTITY);\n# \\d+ t3\n Table \"public.t3\"\n Column | Type | Collation | Nullable | Default | Storage | Compression\n| Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n i | integer | | not null | | plain |\n| |\nIndexes:\n \"t3_pkey\" PRIMARY KEY, btree (i)\nAccess method: heap\n\nI think this is feasible to avoid. However, it's minor and we exactly\ndocumented how we create new partitions. So, I think it works \"as\ndocumented\" and we don't have to fix this for v17.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Wed, 8 May 2024 22:19:08 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Wed, May 08, 2024 at 09:00:10PM +0300, Alexander Korotkov wrote:\n> On Fri, May 3, 2024 at 4:32 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Fri, May 3, 2024 at 4:23 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > On Wed, May 01, 2024 at 10:51:24PM +0300, Dmitry Koval wrote:\n> > > > 30.04.2024 23:15, Justin Pryzby пишет:\n> > > > > Is this issue already fixed ?\n> > > > > I wasn't able to reproduce it. Maybe it only happened with earlier\n> > > > > patch versions applied ?\n> > > >\n> > > > I think this was fixed in commit [1].\n> > > >\n> > > > [1] https://github.com/postgres/postgres/commit/fcf80c5d5f0f3787e70fca8fd029d2e08a923f91\n> > >\n> > > I tried to reproduce it at fcf80c5d5f~, but couldn't.\n> > > I don't see how that patch would fix it anyway.\n> > > I'm hoping Alexander can confirm what happened.\n> >\n> > This problem is only relevant for an old version of fix [1], which\n> > overrides schemas for new partitions. That version was never\n> > committed.\n> \n> Here are the patches.\n> 0002 Skips copying extended statistics while creating new partitions in MERGE/SPLIT\n> \n> For 0002 I'd like to hear some feedback on wordings used in docs and comments.\n\ncommit message:\n\nCurrenlty => Currently\npartiions => partitios\ncopying => by copying\n\n> However, parent's table extended statistics already covers all its\n> children.\n\n=> That's the wrong explanation. It's not that \"stats on the parent\ntable cover its children\". It's that there are two types of stats:\nstats for the \"table hierarchy\" and stats for the individual table.\nThat's true for single-column stats as well as for extended stats.\nIn both cases, that's indicated by the inh flag in the code and in the\ncatalog.\n\nThe right explanation is that extended stats on partitioned tables are\nnot similar to indexes. Indexes on parent table are nothing other than\na mechanism to create indexes on the child tables. 
That's not true for\nstats.\n\nSee also my prior messages\nZiJW1g2nbQs9ekwK@pryzbyj2023\nZi5Msg74C61DjJKW@pryzbyj2023\n\nI think EXCLUDE IDENTITY can/should now also be removed - see 509199587.\nI'm not able to reproduce that problem anyway, even before that...\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 8 May 2024 16:37:46 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Thu, May 9, 2024 at 12:37 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, May 08, 2024 at 09:00:10PM +0300, Alexander Korotkov wrote:\n> > On Fri, May 3, 2024 at 4:32 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > > On Fri, May 3, 2024 at 4:23 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > On Wed, May 01, 2024 at 10:51:24PM +0300, Dmitry Koval wrote:\n> > > > > 30.04.2024 23:15, Justin Pryzby пишет:\n> > > > > > Is this issue already fixed ?\n> > > > > > I wasn't able to reproduce it. Maybe it only happened with earlier\n> > > > > > patch versions applied ?\n> > > > >\n> > > > > I think this was fixed in commit [1].\n> > > > >\n> > > > > [1] https://github.com/postgres/postgres/commit/fcf80c5d5f0f3787e70fca8fd029d2e08a923f91\n> > > >\n> > > > I tried to reproduce it at fcf80c5d5f~, but couldn't.\n> > > > I don't see how that patch would fix it anyway.\n> > > > I'm hoping Alexander can confirm what happened.\n> > >\n> > > This problem is only relevant for an old version of fix [1], which\n> > > overrides schemas for new partitions. That version was never\n> > > committed.\n> >\n> > Here are the patches.\n> > 0002 Skips copying extended statistics while creating new partitions in MERGE/SPLIT\n> >\n> > For 0002 I'd like to hear some feedback on wordings used in docs and comments.\n>\n> commit message:\n>\n> Currenlty => Currently\n> partiions => partitios\n> copying => by copying\n\n\nThank you!\n\n>\n> > However, parent's table extended statistics already covers all its\n> > children.\n>\n> => That's the wrong explanation. It's not that \"stats on the parent\n> table cover its children\". 
It's that there are two types of stats:\n> stats for the \"table hierarchy\" and stats for the individual table.\n> That's true for single-column stats as well as for extended stats.\n> In both cases, that's indicated by the inh flag in the code and in the\n> catalog.\n>\n> The right explanation is that extended stats on partitioned tables are\n> not similar to indexes. Indexes on parent table are nothing other than\n> a mechanism to create indexes on the child tables. That's not true for\n> stats.\n>\n> See also my prior messages\n> ZiJW1g2nbQs9ekwK@pryzbyj2023\n> Zi5Msg74C61DjJKW@pryzbyj2023\n\nYes, I understand that parents pg_statistic entry with stainherit ==\ntrue includes statistics for the children. I tried to express this by\nword \"covers\". But you're right, this is the wrong explanation.\n\nCan I, please, ask you to revise the patch?\n\n> I think EXCLUDE IDENTITY can/should now also be removed - see 509199587.\n> I'm not able to reproduce that problem anyway, even before that...\n\nI will check this.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Thu, 9 May 2024 00:51:32 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hello Dmitry and Alexander,\n\nPlease look at one more anomaly with temporary tables:\nCREATE TEMP TABLE t (a int) PARTITION BY RANGE (a);\nCREATE TEMP TABLE tp_0 PARTITION OF t FOR VALUES FROM (0) TO (1) ;\nCREATE TEMP TABLE tp_1 PARTITION OF t FOR VALUES FROM (1) TO (2);\nALTER TABLE t MERGE PARTITIONS (tp_0, tp_1) INTO tp_0;\n-- succeeds, but:\nALTER TABLE t SPLIT PARTITION tp_0 INTO\n (PARTITION tp_0 FOR VALUES FROM (0) TO (1), PARTITION tp_1 FOR VALUES FROM (1) TO (2));\n-- fails with:\nERROR: relation \"tp_0\" already exists\n\nThough the same SPLIT succeeds with non-temporary tables...\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sat, 11 May 2024 12:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\n11.05.2024 12:00, Alexander Lakhin wrote:\n> Please look at one more anomaly with temporary tables:\n\nThank you, Alexander!\n\nThe problem affects the SPLIT PARTITION command.\n\nCREATE TEMP TABLE t (a int) PARTITION BY RANGE (a);\nCREATE TEMP TABLE tp_0 PARTITION OF t FOR VALUES FROM (0) TO (2) ;\n-- ERROR: relation \"tp_0\" already exists\nALTER TABLE t SPLIT PARTITION tp_0 INTO\n (PARTITION tp_0 FOR VALUES FROM (0) TO (1), PARTITION tp_1 FOR \nVALUES FROM (1) TO (2));\n\nI'll try to fix it soon.\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n",
"msg_date": "Sat, 11 May 2024 16:19:38 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\nAttached draft version of fix for [1].\n\n[1] \nhttps://www.postgresql.org/message-id/86b4f1e3-0b5d-315c-9225-19860d64d685%40gmail.com\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Sun, 12 May 2024 17:43:40 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Commit 3ca43dbbb67f which adds the permission checks seems to cause conflicts\nin the pg_upgrade tests:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2024-05-13%2008%3A36%3A37\n\nThere is an issue with dropping and creating roles which seems to stem from\nthis commit:\n\n CREATE ROLE regress_partition_merge_alice;\n+ERROR: role \"regress_partition_merge_alice\" already exists\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 13 May 2024 10:45:57 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\n13.05.2024 11:45, Daniel Gustafsson пишет:\n> Commit 3ca43dbbb67f which adds the permission checks seems to cause conflicts\n> in the pg_upgrade tests\n\nThanks!\n\nIt will probably be enough to rename the roles:\n\nregress_partition_merge_alice -> regress_partition_split_alice\nregress_partition_merge_bob -> regress_partition_split_bob\n\n(changes in attachment)\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Mon, 13 May 2024 12:45:49 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Mon, May 13, 2024 at 12:45 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> 13.05.2024 11:45, Daniel Gustafsson пишет:\n> > Commit 3ca43dbbb67f which adds the permission checks seems to cause conflicts\n> > in the pg_upgrade tests\n>\n> Thanks!\n>\n> It will probably be enough to rename the roles:\n>\n> regress_partition_merge_alice -> regress_partition_split_alice\n> regress_partition_merge_bob -> regress_partition_split_bob\n\nThanks to Danial for spotting this.\nThanks to Dmitry for the proposed fix.\n\nThe actual problem appears to be a bit more complex. Additionally to\nthe role names, the lack of permissions on schemas lead to creation of\ntables in public schema and potential conflict there. Fixed in\n2a679ae94e.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 13 May 2024 13:37:31 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Thu, May 09, 2024 at 12:51:32AM +0300, Alexander Korotkov wrote:\n> > > However, parent's table extended statistics already covers all its\n> > > children.\n> >\n> > => That's the wrong explanation. It's not that \"stats on the parent\n> > table cover its children\". It's that there are two types of stats:\n> > stats for the \"table hierarchy\" and stats for the individual table.\n> > That's true for single-column stats as well as for extended stats.\n> > In both cases, that's indicated by the inh flag in the code and in the\n> > catalog.\n> >\n> > The right explanation is that extended stats on partitioned tables are\n> > not similar to indexes. Indexes on parent table are nothing other than\n> > a mechanism to create indexes on the child tables. That's not true for\n> > stats.\n> >\n> > See also my prior messages\n> > ZiJW1g2nbQs9ekwK@pryzbyj2023\n> > Zi5Msg74C61DjJKW@pryzbyj2023\n> \n> Yes, I understand that parents pg_statistic entry with stainherit ==\n> true includes statistics for the children. I tried to express this by\n> word \"covers\". But you're right, this is the wrong explanation.\n> \n> Can I, please, ask you to revise the patch?\n\nI tried to make this clear but it'd be nice if someone (Tomas/Alvaro?)\nwould check that this says what's wanted.\n\n-- \nJustin",
"msg_date": "Tue, 14 May 2024 09:49:53 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Tue, May 14, 2024 at 5:49 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Thu, May 09, 2024 at 12:51:32AM +0300, Alexander Korotkov wrote:\n> > > > However, parent's table extended statistics already covers all its\n> > > > children.\n> > >\n> > > => That's the wrong explanation. It's not that \"stats on the parent\n> > > table cover its children\". It's that there are two types of stats:\n> > > stats for the \"table hierarchy\" and stats for the individual table.\n> > > That's true for single-column stats as well as for extended stats.\n> > > In both cases, that's indicated by the inh flag in the code and in the\n> > > catalog.\n> > >\n> > > The right explanation is that extended stats on partitioned tables are\n> > > not similar to indexes. Indexes on parent table are nothing other than\n> > > a mechanism to create indexes on the child tables. That's not true for\n> > > stats.\n> > >\n> > > See also my prior messages\n> > > ZiJW1g2nbQs9ekwK@pryzbyj2023\n> > > Zi5Msg74C61DjJKW@pryzbyj2023\n> >\n> > Yes, I understand that parents pg_statistic entry with stainherit ==\n> > true includes statistics for the children. I tried to express this by\n> > word \"covers\". But you're right, this is the wrong explanation.\n> >\n> > Can I, please, ask you to revise the patch?\n>\n> I tried to make this clear but it'd be nice if someone (Tomas/Alvaro?)\n> would check that this says what's wanted.\n\nThank you!\n\nI've assembled the patches with the pending fixes.\n0001 – The patch by Dmitry Koval for fixing detection of name\ncollision in SPLIT partition operation. Also, I found that name\ncollision detection doesn't work well for MERGE partitions. I've\nadded fix for that to this patch as well.\n0002 -– Patch for skipping copy of extended statistics. I would\nappreciate more feedback about wording, but I'd like to get a correct\nbehavior into the source tree sooner. 
If the docs and/or comments\nneed further improvements, we can fix that later.\n\nI'm going to push both if no objections.\n\nLinks.\n1. https://www.postgresql.org/message-id/147426d9-b793-4571-a5e5-7438affeeb5a%40postgrespro.ru\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Fri, 17 May 2024 13:05:01 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi, Alexander:\n\nOn Fri, 17 May 2024 at 14:05, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Tue, May 14, 2024 at 5:49 PM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> > On Thu, May 09, 2024 at 12:51:32AM +0300, Alexander Korotkov wrote:\n> > > > > However, parent's table extended statistics already covers all its\n> > > > > children.\n> > > >\n> > > > => That's the wrong explanation. It's not that \"stats on the parent\n> > > > table cover its children\". It's that there are two types of stats:\n> > > > stats for the \"table hierarchy\" and stats for the individual table.\n> > > > That's true for single-column stats as well as for extended stats.\n> > > > In both cases, that's indicated by the inh flag in the code and in\n> the\n> > > > catalog.\n> > > >\n> > > > The right explanation is that extended stats on partitioned tables\n> are\n> > > > not similar to indexes. Indexes on parent table are nothing other\n> than\n> > > > a mechanism to create indexes on the child tables. That's not true\n> for\n> > > > stats.\n> > > >\n> > > > See also my prior messages\n> > > > ZiJW1g2nbQs9ekwK@pryzbyj2023\n> > > > Zi5Msg74C61DjJKW@pryzbyj2023\n> > >\n> > > Yes, I understand that parents pg_statistic entry with stainherit ==\n> > > true includes statistics for the children. I tried to express this by\n> > > word \"covers\". But you're right, this is the wrong explanation.\n> > >\n> > > Can I, please, ask you to revise the patch?\n> >\n> > I tried to make this clear but it'd be nice if someone (Tomas/Alvaro?)\n> > would check that this says what's wanted.\n>\n> Thank you!\n>\n> I've assembled the patches with the pending fixes.\n> 0001 – The patch by Dmitry Koval for fixing detection of name\n> collision in SPLIT partition operation. Also, I found that name\n> collision detection doesn't work well for MERGE partitions. I've\n> added fix for that to this patch as well.\n> 0002 -– Patch for skipping copy of extended statistics. 
I would\n> appreciate more feedback about wording, but I'd like to get a correct\n> behavior into the source tree sooner. If the docs and/or comments\n> need further improvements, we can fix that later.\n>\n> I'm going to push both if no objections.\n>\nThank you for working on this patch set!\n\nSome minor things:\n0001:\npartition_split.sql\n157 +-- Check that detection, that the new partition has the same name as\none of\n158 +-- the merged partitions, works correctly for temporary partitions\nTest for split with comment for merge. Maybe better something like:\n\"Split partition of a temporary table when one of the partitions after\nsplit has the same name as the partition being split\"\n\n0002:\nanalgous -> analogous (maybe better using \"like\" instead of \"analogous to\")\nheirarchy -> hierarchy\n\nalter_table.sgml:\nMaybe in documentation it's better not to provide reasoning, just state how\nit works:\nfor consistency with <command>CREATE TABLE PARTITION OF</command> ->\nsimilar to <command>CREATE TABLE PARTITION OF</command>\n\nRegards,\nPavel Borisov",
"msg_date": "Fri, 17 May 2024 15:02:40 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi, Pavel!\n\nOn Fri, May 17, 2024 at 2:02 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> On Fri, 17 May 2024 at 14:05, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>>\n>> On Tue, May 14, 2024 at 5:49 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> > On Thu, May 09, 2024 at 12:51:32AM +0300, Alexander Korotkov wrote:\n>> > > > > However, parent's table extended statistics already covers all its\n>> > > > > children.\n>> > > >\n>> > > > => That's the wrong explanation. It's not that \"stats on the parent\n>> > > > table cover its children\". It's that there are two types of stats:\n>> > > > stats for the \"table hierarchy\" and stats for the individual table.\n>> > > > That's true for single-column stats as well as for extended stats.\n>> > > > In both cases, that's indicated by the inh flag in the code and in the\n>> > > > catalog.\n>> > > >\n>> > > > The right explanation is that extended stats on partitioned tables are\n>> > > > not similar to indexes. Indexes on parent table are nothing other than\n>> > > > a mechanism to create indexes on the child tables. That's not true for\n>> > > > stats.\n>> > > >\n>> > > > See also my prior messages\n>> > > > ZiJW1g2nbQs9ekwK@pryzbyj2023\n>> > > > Zi5Msg74C61DjJKW@pryzbyj2023\n>> > >\n>> > > Yes, I understand that parents pg_statistic entry with stainherit ==\n>> > > true includes statistics for the children. I tried to express this by\n>> > > word \"covers\". But you're right, this is the wrong explanation.\n>> > >\n>> > > Can I, please, ask you to revise the patch?\n>> >\n>> > I tried to make this clear but it'd be nice if someone (Tomas/Alvaro?)\n>> > would check that this says what's wanted.\n>>\n>> Thank you!\n>>\n>> I've assembled the patches with the pending fixes.\n>> 0001 – The patch by Dmitry Koval for fixing detection of name\n>> collision in SPLIT partition operation. Also, I found that name\n>> collision detection doesn't work well for MERGE partitions. 
I've\n>> added fix for that to this patch as well.\n>> 0002 -– Patch for skipping copy of extended statistics. I would\n>> appreciate more feedback about wording, but I'd like to get a correct\n>> behavior into the source tree sooner. If the docs and/or comments\n>> need further improvements, we can fix that later.\n>>\n>> I'm going to push both if no objections.\n>\n> Thank you for working on this patch set!\n>\n> Some minor things:\n> 0001:\n> partition_split.sql\n> 157 +-- Check that detection, that the new partition has the same name as one of\n> 158 +-- the merged partitions, works correctly for temporary partitions\n> Test for split with comment for merge. Maybe better something like:\n> \"Split partition of a temporary table when one of the partitions after split has the same name as the partition being split\"\n\nThank you, fixed as proposed.\n\n> 0002:\n> analgous -> analogous (maybe better using \"like\" instead of \"analogous to\")\n> heirarchy -> hierarchy\n\nChanged \"are not analgous to\" to \"don't behave like\".\n\n> alter_table.sgml:\n> Maybe in documentation it's better not to provide reasoning, just state how it works:\n> for consistency with <command>CREATE TABLE PARTITION OF</command> -> similar to <command>CREATE TABLE PARTITION OF</command>\n\nI'd like to keep this. This is the question, which should naturally\narise when you read: \"Why this is not just INCLUDING ALL?\"\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Fri, 17 May 2024 14:33:40 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "The partition_split test has unstable results, as shown at [1].\nI suggest adding \"ORDER BY conname\" to the two queries shown\nto fail there. Better look at other queries in the test for\npossible similar problems, too.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jackdaw&dt=2024-05-24%2015%3A58%3A17\n\n\n",
"msg_date": "Fri, 24 May 2024 15:29:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hello,\n\n24.05.2024 22:29, Tom Lane wrote:\n> The partition_split test has unstable results, as shown at [1].\n> I suggest adding \"ORDER BY conname\" to the two queries shown\n> to fail there. Better look at other queries in the test for\n> possible similar problems, too.\n\nYes, I've just reproduced it on an aarch64 device as follows:\necho \"autovacuum_naptime = 1\nautovacuum_vacuum_threshold = 1\nautovacuum_analyze_threshold = 1\n\" > ~/temp.config\nTEMP_CONFIG=~/temp.config TESTS=\"$(printf 'partition_split %.0s' `seq 100`)\" make -s check-tests\n...\nok 80 - partition_split 749 ms\nnot ok 81 - partition_split 728 ms\nok 82 - partition_split 732 ms\n\n$ cat src/test/regress/regression.diffs\ndiff -U3 .../src/test/regress/expected/partition_split.out .../src/test/regress/results/partition_split.out\n--- .../src/test/regress/expected/partition_split.out 2024-05-15 17:15:57.171999830 +0000\n+++ .../src/test/regress/results/partition_split.out 2024-05-24 19:28:37.329999749 +0000\n@@ -625,8 +625,8 @@\n SELECT pg_get_constraintdef(oid), conname, conkey FROM pg_constraint WHERE conrelid = \n'sales_feb_mar_apr2022'::regclass::oid;\npg_get_constraintdef | conname | conkey\n ---------------------------------------------------------------------+---------------------------------+--------\n- CHECK ((sales_amount > 1)) | sales_range_sales_amount_check | {2}\n FOREIGN KEY (salesperson_id) REFERENCES salespeople(salesperson_id) | sales_range_salesperson_id_fkey | {1}\n+ CHECK ((sales_amount > 1)) | sales_range_sales_amount_check | {2}\n (2 rows)\n\n ALTER TABLE sales_range SPLIT PARTITION sales_feb_mar_apr2022 INTO\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 24 May 2024 23:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Fri, May 24, 2024 at 11:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n>\n> 24.05.2024 22:29, Tom Lane wrote:\n> > The partition_split test has unstable results, as shown at [1].\n> > I suggest adding \"ORDER BY conname\" to the two queries shown\n> > to fail there. Better look at other queries in the test for\n> > possible similar problems, too.\n>\n> Yes, I've just reproduced it on an aarch64 device as follows:\n> echo \"autovacuum_naptime = 1\n> autovacuum_vacuum_threshold = 1\n> autovacuum_analyze_threshold = 1\n> \" > ~/temp.config\n> TEMP_CONFIG=~/temp.config TESTS=\"$(printf 'partition_split %.0s' `seq 100`)\" make -s check-tests\n> ...\n> ok 80 - partition_split 749 ms\n> not ok 81 - partition_split 728 ms\n> ok 82 - partition_split 732 ms\n>\n> $ cat src/test/regress/regression.diffs\n> diff -U3 .../src/test/regress/expected/partition_split.out .../src/test/regress/results/partition_split.out\n> --- .../src/test/regress/expected/partition_split.out 2024-05-15 17:15:57.171999830 +0000\n> +++ .../src/test/regress/results/partition_split.out 2024-05-24 19:28:37.329999749 +0000\n> @@ -625,8 +625,8 @@\n> SELECT pg_get_constraintdef(oid), conname, conkey FROM pg_constraint WHERE conrelid =\n> 'sales_feb_mar_apr2022'::regclass::oid;\n> pg_get_constraintdef | conname | conkey\n> ---------------------------------------------------------------------+---------------------------------+--------\n> - CHECK ((sales_amount > 1)) | sales_range_sales_amount_check | {2}\n> FOREIGN KEY (salesperson_id) REFERENCES salespeople(salesperson_id) | sales_range_salesperson_id_fkey | {1}\n> + CHECK ((sales_amount > 1)) | sales_range_sales_amount_check | {2}\n> (2 rows)\n>\n> ALTER TABLE sales_range SPLIT PARTITION sales_feb_mar_apr2022 INTO\n\nTom, Alexander, thank you for spotting this.\nI'm going to care about it later today.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Sat, 25 May 2024 15:53:11 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Fri, May 03, 2024 at 04:32:25PM +0300, Alexander Korotkov wrote:\n> On Fri, May 3, 2024 at 4:23 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Note that the error that led to \"EXCLUDING IDENTITY\" is being discused\n> > over here:\n> > https://www.postgresql.org/message-id/3b8a9dc1-bbc7-0ef5-6863-c432afac7d59@gmail.com\n> >\n> > It's possible that once that's addressed, the exclusion should be\n> > removed here, too.\n> \n> +1\n\nCan EXCLUDING IDENTITY be removed now ?\n\nI wasn't able to find why it was needed - at one point, I think there\nwas a test case that threw an error, but now when I remove the EXCLUDE,\nnothing goes wrong.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 25 May 2024 12:53:17 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Sat, May 25, 2024 at 8:53 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Fri, May 03, 2024 at 04:32:25PM +0300, Alexander Korotkov wrote:\n> > On Fri, May 3, 2024 at 4:23 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > Note that the error that led to \"EXCLUDING IDENTITY\" is being discused\n> > > over here:\n> > > https://www.postgresql.org/message-id/3b8a9dc1-bbc7-0ef5-6863-c432afac7d59@gmail.com\n> > >\n> > > It's possible that once that's addressed, the exclusion should be\n> > > removed here, too.\n> >\n> > +1\n>\n> Can EXCLUDING IDENTITY be removed now ?\n>\n> I wasn't able to find why it was needed - at one point, I think there\n> was a test case that threw an error, but now when I remove the EXCLUDE,\n> nothing goes wrong.\n\nYes, it was broken before [1][2], but now it seems to work. At the\nsame time, I'm not sure if we need to remove the EXCLUDE now.\nIDENTITY is anyway successfully created when the new partition gets\nattached.\n\nLinks.\n1. https://www.postgresql.org/message-id/171085360143.2046436.7217841141682511557.pgcf@coridan.postgresql.org\n2. https://www.postgresql.org/message-id/flat/ZiGH0xc1lxJ71ZfB%40pryzbyj2023#297b6aef85cb089abb38e9b1a9a7ffff\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Sun, 26 May 2024 06:56:33 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Sat, May 25, 2024 at 3:53 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Fri, May 24, 2024 at 11:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> >\n> > 24.05.2024 22:29, Tom Lane wrote:\n> > > The partition_split test has unstable results, as shown at [1].\n> > > I suggest adding \"ORDER BY conname\" to the two queries shown\n> > > to fail there. Better look at other queries in the test for\n> > > possible similar problems, too.\n> >\n> > Yes, I've just reproduced it on an aarch64 device as follows:\n> > echo \"autovacuum_naptime = 1\n> > autovacuum_vacuum_threshold = 1\n> > autovacuum_analyze_threshold = 1\n> > \" > ~/temp.config\n> > TEMP_CONFIG=~/temp.config TESTS=\"$(printf 'partition_split %.0s' `seq 100`)\" make -s check-tests\n> > ...\n> > ok 80 - partition_split 749 ms\n> > not ok 81 - partition_split 728 ms\n> > ok 82 - partition_split 732 ms\n> >\n> > $ cat src/test/regress/regression.diffs\n> > diff -U3 .../src/test/regress/expected/partition_split.out .../src/test/regress/results/partition_split.out\n> > --- .../src/test/regress/expected/partition_split.out 2024-05-15 17:15:57.171999830 +0000\n> > +++ .../src/test/regress/results/partition_split.out 2024-05-24 19:28:37.329999749 +0000\n> > @@ -625,8 +625,8 @@\n> > SELECT pg_get_constraintdef(oid), conname, conkey FROM pg_constraint WHERE conrelid =\n> > 'sales_feb_mar_apr2022'::regclass::oid;\n> > pg_get_constraintdef | conname | conkey\n> > ---------------------------------------------------------------------+---------------------------------+--------\n> > - CHECK ((sales_amount > 1)) | sales_range_sales_amount_check | {2}\n> > FOREIGN KEY (salesperson_id) REFERENCES salespeople(salesperson_id) | sales_range_salesperson_id_fkey | {1}\n> > + CHECK ((sales_amount > 1)) | sales_range_sales_amount_check | {2}\n> > (2 rows)\n> >\n> > ALTER TABLE sales_range SPLIT PARTITION sales_feb_mar_apr2022 INTO\n>\n> Tom, Alexander, thank you for spotting this.\n> I'm going to care 
about it later today.\n\nORDER BY was added in d53a4286d7 to these queries, together with other\ncatalog queries with potentially unstable results.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Sun, 26 May 2024 06:58:11 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Sun, Apr 07, 2024 at 01:22:51AM +0300, Alexander Korotkov wrote:\n> I've pushed 0001 and 0002\n\nThe partition MERGE (1adf16b8f) and SPLIT (87c21bb94) v17 patches introduced\ncreatePartitionTable() with this code:\n\n\tcreateStmt->relation = newPartName;\n...\n\twrapper->utilityStmt = (Node *) createStmt;\n...\n\tProcessUtility(wrapper,\n...\n\tnewRel = table_openrv(newPartName, NoLock);\n\nThis breaks from the CVE-2014-0062 (commit 5f17304) principle of not repeating\nname lookups. The attached demo uses this defect to make one partition have\ntwo parents.",
"msg_date": "Thu, 8 Aug 2024 10:13:51 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Thu, Aug 8, 2024 at 8:14 PM Noah Misch <noah@leadboat.com> wrote:\n> On Sun, Apr 07, 2024 at 01:22:51AM +0300, Alexander Korotkov wrote:\n> > I've pushed 0001 and 0002\n>\n> The partition MERGE (1adf16b8f) and SPLIT (87c21bb94) v17 patches introduced\n> createPartitionTable() with this code:\n>\n> createStmt->relation = newPartName;\n> ...\n> wrapper->utilityStmt = (Node *) createStmt;\n> ...\n> ProcessUtility(wrapper,\n> ...\n> newRel = table_openrv(newPartName, NoLock);\n>\n> This breaks from the CVE-2014-0062 (commit 5f17304) principle of not repeating\n> name lookups. The attached demo uses this defect to make one partition have\n> two parents.\n\nThank you for a valuable report. I will dig into and fix that.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Fri, 9 Aug 2024 01:43:11 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "> This breaks from the CVE-2014-0062 (commit 5f17304) principle of not repeating\n> name lookups. The attached demo uses this defect to make one partition have\n> two parents.\n\nThank you very much for information (especially for the demo)!\n\nI'm not sure that we can get the identifier of the newly created \npartition from the ProcessUtility() function...\nMaybe it would be enough to check that the new partition is located in \nthe namespace in which we created it (see attachment)?\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Fri, 9 Aug 2024 10:18:29 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Fri, Aug 9, 2024 at 10:18 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> > This breaks from the CVE-2014-0062 (commit 5f17304) principle of not repeating\n> > name lookups. The attached demo uses this defect to make one partition have\n> > two parents.\n>\n> Thank you very much for information (especially for the demo)!\n>\n> I'm not sure that we can get the identifier of the newly created\n> partition from the ProcessUtility() function...\n> Maybe it would be enough to check that the new partition is located in\n> the namespace in which we created it (see attachment)?\n\nThe new partition doesn't necessarily get created in the same\nnamespace as parent partition. I think it would be better to somehow\nopen partition by its oid.\n\nIt would be quite unfortunate to replicate significant part of\nProcessUtilitySlow(). So, the question is how to get the oid of newly\ncreated relation from ProcessUtility(). I don't like to change the\nsignature of ProcessUtility() especially as a part of backpatch. So,\nI tried to fit this into existing parameters. Probably\nQueryCompletion struct fits this purpose best from the existing\nparameters. Attached draft patch implements returning oid of newly\ncreated relation as part of QueryCompletion. Thoughts?\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Sat, 10 Aug 2024 18:43:59 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "> Probably\n> QueryCompletion struct fits this purpose best from the existing\n> parameters. Attached draft patch implements returning oid of newly\n> created relation as part of QueryCompletion. Thoughts?\n\nI agree, returning the oid of the newly created relation is the best way \nto solve the problem.\n(Excuse me, I won't have access to a laptop for the next week - and \nwon't be able to look at the source code).\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n",
"msg_date": "Sat, 10 Aug 2024 18:57:48 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Sat, Aug 10, 2024 at 6:57 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> > Probably\n> > QueryCompletion struct fits this purpose best from the existing\n> > parameters. Attached draft patch implements returning oid of newly\n> > created relation as part of QueryCompletion. Thoughts?\n>\n> I agree, returning the oid of the newly created relation is the best way\n> to solve the problem.\n> (Excuse me, I won't have access to a laptop for the next week - and\n> won't be able to look at the source code).\n\nThank you for your feedback. Although, I decided QueryCompletion is\nnot a good place for this new field. It looks more appropriate to\nplace it to TableLikeClause, which already contains one relation oid\ninside. The revised patch is attached.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Mon, 19 Aug 2024 01:24:02 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi, Alexander!\n\nOn Mon, 19 Aug 2024 at 02:24, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Sat, Aug 10, 2024 at 6:57 PM Dmitry Koval <d.koval@postgrespro.ru>\n> wrote:\n> > > Probably\n> > > QueryCompletion struct fits this purpose best from the existing\n> > > parameters. Attached draft patch implements returning oid of newly\n> > > created relation as part of QueryCompletion. Thoughts?\n> >\n> > I agree, returning the oid of the newly created relation is the best way\n> > to solve the problem.\n> > (Excuse me, I won't have access to a laptop for the next week - and\n> > won't be able to look at the source code).\n>\n> Thank you for your feedback. Although, I decided QueryCompletion is\n> not a good place for this new field. It looks more appropriate to\n> place it to TableLikeClause, which already contains one relation oid\n> inside. The revised patch is attached.\n>\n\nI've looked at the patch v2. Remembering the OID of a relation newly\ncreated with LIKE in TableLikeClause seems good to me.\nCheck-world passes sucessfully.\n\nShouldn't we also modify the TableLikeClause node in gram.y accordingly?\n\nFor the comments:\nPut the Oid -> Store the OID\nso caller might use it -> for the caller to use it.\n(Maybe also caller -> table create function)\n\nRegards,\nPavel Borisov\nSupabase\n\nHi, Alexander!On Mon, 19 Aug 2024 at 02:24, Alexander Korotkov <aekorotkov@gmail.com> wrote:On Sat, Aug 10, 2024 at 6:57 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> > Probably\n> > QueryCompletion struct fits this purpose best from the existing\n> > parameters. Attached draft patch implements returning oid of newly\n> > created relation as part of QueryCompletion. Thoughts?\n>\n> I agree, returning the oid of the newly created relation is the best way\n> to solve the problem.\n> (Excuse me, I won't have access to a laptop for the next week - and\n> won't be able to look at the source code).\n\nThank you for your feedback. 
Although, I decided QueryCompletion is\nnot a good place for this new field. It looks more appropriate to\nplace it to TableLikeClause, which already contains one relation oid\ninside. The revised patch is attached.I've looked at the patch v2. Remembering the OID of a relation newly created with LIKE in TableLikeClause seems good to me.Check-world passes sucessfully.Shouldn't we also modify the TableLikeClause node in gram.y accordingly?For the comments: Put the Oid -> Store the OIDso caller might use it -> for the caller to use it. (Maybe also caller -> table create function)Regards, Pavel BorisovSupabase",
"msg_date": "Wed, 21 Aug 2024 14:48:45 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi, Pavel!\n\nOn Wed, Aug 21, 2024 at 1:48 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> On Mon, 19 Aug 2024 at 02:24, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>>\n>> On Sat, Aug 10, 2024 at 6:57 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n>> > > Probably\n>> > > QueryCompletion struct fits this purpose best from the existing\n>> > > parameters. Attached draft patch implements returning oid of newly\n>> > > created relation as part of QueryCompletion. Thoughts?\n>> >\n>> > I agree, returning the oid of the newly created relation is the best way\n>> > to solve the problem.\n>> > (Excuse me, I won't have access to a laptop for the next week - and\n>> > won't be able to look at the source code).\n>>\n>> Thank you for your feedback. Although, I decided QueryCompletion is\n>> not a good place for this new field. It looks more appropriate to\n>> place it to TableLikeClause, which already contains one relation oid\n>> inside. The revised patch is attached.\n>\n>\n> I've looked at the patch v2. Remembering the OID of a relation newly created with LIKE in TableLikeClause seems good to me.\n> Check-world passes sucessfully.\n\nThank you.\n\n> Shouldn't we also modify the TableLikeClause node in gram.y accordingly?\n\nOn the one hand, makeNode() uses palloc0() and initializes all fields\nwith zero anyway. On the other hand, there is already assignment of\nrelationOid. So, yes I'll add assignment of newRelationOid for the\nsake of uniformity.\n\n> For the comments:\n> Put the Oid -> Store the OID\n> so caller might use it -> for the caller to use it.\n\nAccepted.\n\n> (Maybe also caller -> table create function)\n\nI'll prefer to leave it \"caller\" as more generic term, which could\nalso fit potential future usages.\n\nThe revised patch is attached. I'm going to push it if no objections.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Wed, 21 Aug 2024 14:55:04 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi, Alexander!\n\nOn Wed, 21 Aug 2024 at 15:55, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> Hi, Pavel!\n>\n> On Wed, Aug 21, 2024 at 1:48 PM Pavel Borisov <pashkin.elfe@gmail.com>\n> wrote:\n> > On Mon, 19 Aug 2024 at 02:24, Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n> >>\n> >> On Sat, Aug 10, 2024 at 6:57 PM Dmitry Koval <d.koval@postgrespro.ru>\n> wrote:\n> >> > > Probably\n> >> > > QueryCompletion struct fits this purpose best from the existing\n> >> > > parameters. Attached draft patch implements returning oid of newly\n> >> > > created relation as part of QueryCompletion. Thoughts?\n> >> >\n> >> > I agree, returning the oid of the newly created relation is the best\n> way\n> >> > to solve the problem.\n> >> > (Excuse me, I won't have access to a laptop for the next week - and\n> >> > won't be able to look at the source code).\n> >>\n> >> Thank you for your feedback. Although, I decided QueryCompletion is\n> >> not a good place for this new field. It looks more appropriate to\n> >> place it to TableLikeClause, which already contains one relation oid\n> >> inside. The revised patch is attached.\n> >\n> >\n> > I've looked at the patch v2. Remembering the OID of a relation newly\n> created with LIKE in TableLikeClause seems good to me.\n> > Check-world passes sucessfully.\n>\n> Thank you.\n>\n> > Shouldn't we also modify the TableLikeClause node in gram.y accordingly?\n>\n> On the one hand, makeNode() uses palloc0() and initializes all fields\n> with zero anyway. On the other hand, there is already assignment of\n> relationOid. 
So, yes I'll add assignment of newRelationOid for the\n> sake of uniformity.\n>\n> > For the comments:\n> > Put the Oid -> Store the OID\n\n> so caller might use it -> for the caller to use it.\n>\n> Accepted.\n>\n> > (Maybe also caller -> table create function)\n>\n> I'll prefer to leave it \"caller\" as more generic term, which could\n> also fit potential future usages.\n>\n> The revised patch is attached. I'm going to push it if no objections.\n>\nLooked at v3\nAll good except the patch has \"Oid\" and \"OID\" in two comments. I suppose\n\"OID\" is preferred elsewhere in the PG comments.\n\nRegards,\nPavel.\n\nHi, Alexander!On Wed, 21 Aug 2024 at 15:55, Alexander Korotkov <aekorotkov@gmail.com> wrote:Hi, Pavel!\n\nOn Wed, Aug 21, 2024 at 1:48 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> On Mon, 19 Aug 2024 at 02:24, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>>\n>> On Sat, Aug 10, 2024 at 6:57 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n>> > > Probably\n>> > > QueryCompletion struct fits this purpose best from the existing\n>> > > parameters. Attached draft patch implements returning oid of newly\n>> > > created relation as part of QueryCompletion. Thoughts?\n>> >\n>> > I agree, returning the oid of the newly created relation is the best way\n>> > to solve the problem.\n>> > (Excuse me, I won't have access to a laptop for the next week - and\n>> > won't be able to look at the source code).\n>>\n>> Thank you for your feedback. Although, I decided QueryCompletion is\n>> not a good place for this new field. It looks more appropriate to\n>> place it to TableLikeClause, which already contains one relation oid\n>> inside. The revised patch is attached.\n>\n>\n> I've looked at the patch v2. 
Remembering the OID of a relation newly created with LIKE in TableLikeClause seems good to me.\n> Check-world passes sucessfully.\n\nThank you.\n\n> Shouldn't we also modify the TableLikeClause node in gram.y accordingly?\n\nOn the one hand, makeNode() uses palloc0() and initializes all fields\nwith zero anyway. On the other hand, there is already assignment of\nrelationOid. So, yes I'll add assignment of newRelationOid for the\nsake of uniformity.\n\n> For the comments:\n> Put the Oid -> Store the OID \n> so caller might use it -> for the caller to use it.\n\nAccepted.\n\n> (Maybe also caller -> table create function)\n\nI'll prefer to leave it \"caller\" as more generic term, which could\nalso fit potential future usages.\n\nThe revised patch is attached. I'm going to push it if no objections.Looked at v3All good except the patch has \"Oid\" and \"OID\" in two comments. I suppose \"OID\" is preferred elsewhere in the PG comments.Regards,Pavel.",
"msg_date": "Wed, 21 Aug 2024 16:06:31 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Wed, Aug 21, 2024 at 3:06 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> On Wed, 21 Aug 2024 at 15:55, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>>\n>> Hi, Pavel!\n>>\n>> On Wed, Aug 21, 2024 at 1:48 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>> > On Mon, 19 Aug 2024 at 02:24, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>> >>\n>> >> On Sat, Aug 10, 2024 at 6:57 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n>> >> > > Probably\n>> >> > > QueryCompletion struct fits this purpose best from the existing\n>> >> > > parameters. Attached draft patch implements returning oid of newly\n>> >> > > created relation as part of QueryCompletion. Thoughts?\n>> >> >\n>> >> > I agree, returning the oid of the newly created relation is the best way\n>> >> > to solve the problem.\n>> >> > (Excuse me, I won't have access to a laptop for the next week - and\n>> >> > won't be able to look at the source code).\n>> >>\n>> >> Thank you for your feedback. Although, I decided QueryCompletion is\n>> >> not a good place for this new field. It looks more appropriate to\n>> >> place it to TableLikeClause, which already contains one relation oid\n>> >> inside. The revised patch is attached.\n>> >\n>> >\n>> > I've looked at the patch v2. Remembering the OID of a relation newly created with LIKE in TableLikeClause seems good to me.\n>> > Check-world passes sucessfully.\n>>\n>> Thank you.\n>>\n>> > Shouldn't we also modify the TableLikeClause node in gram.y accordingly?\n>>\n>> On the one hand, makeNode() uses palloc0() and initializes all fields\n>> with zero anyway. On the other hand, there is already assignment of\n>> relationOid. 
So, yes I'll add assignment of newRelationOid for the\n>> sake of uniformity.\n>>\n>> > For the comments:\n>> > Put the Oid -> Store the OID\n>>\n>> > so caller might use it -> for the caller to use it.\n>>\n>> Accepted.\n>>\n>> > (Maybe also caller -> table create function)\n>>\n>> I'll prefer to leave it \"caller\" as more generic term, which could\n>> also fit potential future usages.\n>>\n>> The revised patch is attached. I'm going to push it if no objections.\n>\n> Looked at v3\n> All good except the patch has \"Oid\" and \"OID\" in two comments. I suppose \"OID\" is preferred elsewhere in the PG comments.\n\nCorrect, the same file contains \"OID\" multiple times. Revised version\nis attached.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Wed, 21 Aug 2024 15:40:44 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi,\n\nIn response to some concerns raised about this fix on the\npgsql-release list today, I spent some time investigating this patch.\nUnfortunately, I think there are too many problems here to be\nreasonably fixed before release, and I think all of SPLIT/MERGE\nPARTITION needs to be reverted.\n\nI focused my investigation on createPartitionTable(), which is a\nhelper for both SPLIT PARTITION and MERGE PARTITION, and it works by\nconsing up a CREATE TABLE AS statement and then feeding that back\nthrough\nProcessUtility. I think it's bad design to use such a high-level\nfacility here; it is unlike what we do elsewhere in tablecmds.c and\nopens us up to a variety of problems. The first thing that I\ndiscovered is that this patch does not fix all of the repeated name\nlookup problems. There is still this:\n\n tlc->relation =\nmakeRangeVar(get_namespace_name(RelationGetNamespace(modelRel)),\n RelationGetRelationName(modelRel), -1);\n\nAnd also this:\n\n createStmt->tablespacename =\nget_tablespace_name(modelRel->rd_rel->reltablespace);\n\nIn both cases, we do a reverse lookup on an OID to get a name which\nthe CREATE TABLE code will later turn back into an OID. If we don't\nget the same value, that's at least a bug and probably a security\nvulnerability, and there is no way to be certain that we will get the\nsame value. The only remedy is to not repeat the lookup in the first\nplace.\n\nThen I got to looking at this:\n\n tlc->options = CREATE_TABLE_LIKE_ALL &\n ~(CREATE_TABLE_LIKE_INDEXES | CREATE_TABLE_LIKE_IDENTITY |\nCREATE_TABLE_LIKE_STATISTICS);\n\nIt's not obvious at first glance that there is a critical problem\nhere, but there are reasons to be nervous. We're deploying a lot of\nmachinery here to copy a lot of stuff and, while that's efficient from\na coding perspective, it means that stuff you might not expect can\njust kind of happen. 
For instance:\n\nrobert.haas=# \\d+\n List of relations\n Schema | Name | Type | Owner | Persistence |\nAccess method | Size | Description\n--------+------+-------------------+-------------+-------------+---------------+------------+-------------\n public | foo | partitioned table | robert.haas | permanent |\n | 0 bytes |\n public | foo1 | table | robert.haas | permanent | heap\n | 8192 bytes |\n public | foo2 | table | bob | permanent | heap\n | 8192 bytes |\n(3 rows)\nrobert.haas=# alter table foo split partition foo2 into (partition\nfoo3 for values from (10) to (15), partition foo4 for values from (15)\nto (20));\nALTER TABLE\nrobert.haas=# \\d+\n List of relations\n Schema | Name | Type | Owner | Persistence |\nAccess method | Size | Description\n--------+------+-------------------+-------------+-------------+---------------+------------+-------------\n public | foo | partitioned table | robert.haas | permanent |\n | 0 bytes |\n public | foo1 | table | robert.haas | permanent | heap\n | 8192 bytes |\n public | foo3 | table | robert.haas | permanent | heap\n | 8192 bytes |\n public | foo4 | table | robert.haas | permanent | heap\n | 8192 bytes |\n(4 rows)\n\nI've split a partition owned by bob into two partitions owned by\nrobert.haas. That's rather surprising. It doesn't work to split a\npartition that I don't own (and thus gain access to it) but if the\nsuperuser splits a non-superuser's partition, the superuser ends\nupowning the new partitions. I don't know if that's a vulnerability or\njust unexpected. 
However, then I found this, which I'm pretty well\ncertain is a vulnerability:\n\nrobert.haas=# set role bob;\nSET\nrobert.haas=> create table foo (a int, b text) partition by range (a);\nCREATE TABLE\nrobert.haas=> create table foo1 partition of foo for values from (0) to (10);\nCREATE TABLE\nrobert.haas=> create table foo2 partition of foo for values from (10) to (20);\nCREATE TABLE\nrobert.haas=> insert into foo values (11, 'carrots'), (16, 'pineapple');\nINSERT 0 2\nrobert.haas=> create or replace function run_me(integer) returns\ninteger as $$begin raise notice 'you are running me as %',\ncurrent_user; return $1; end$$ language plpgsql immutable;\nCREATE FUNCTION\nrobert.haas=> create index on foo (run_me(a));\nNOTICE: you are running me as bob\nNOTICE: you are running me as bob\nCREATE INDEX\nrobert.haas=> reset role;\nRESET\nrobert.haas=# alter table foo split partition foo2 into (partition\nfoo3 for values from (10) to (15), partition foo4 for values from (15)\nto (20));\nNOTICE: you are running me as robert.haas\nNOTICE: you are running me as robert.haas\nALTER TABLE\n\nI think it is very unlikely that the problems mentioned above are the\nonly ones. They're just what I found in an hour or two of testing.\nEven if they were, we're probably too close to release to be rushing\nout last minute fixes to multiple unanticipated security problems. But\nbecause of the design that was chosen here, I think there is probably\nmore stuff here that is not right, some of which is security relevant\nand some of which is just a question of whether we're really getting\nthe behavior that we want. And I don't think we can fix all that\nwithout either a very large number of grotty hacks similar to the one\ninstalled by 04158e7fa37c2dda9c3421ca922d02807b86df19, or a complete\nredesign of the feature. I believe the latter is probably a wiser\ncourse of action.\n\n...Robert\n\n\n",
"msg_date": "Thu, 22 Aug 2024 12:33:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On 8/22/24 12:33 PM, Robert Haas wrote:\r\n\r\n> I think it is very unlikely that the problems mentioned above are the\r\n> only ones. They're just what I found in an hour or two of testing.\r\n> Even if they were, we're probably too close to release to be rushing\r\n> out last minute fixes to multiple unanticipated security problems. But\r\n> because of the design that was chosen here, I think there is probably\r\n> more stuff here that is not right, some of which is security relevant\r\n> and some of which is just a question of whether we're really getting\r\n> the behavior that we want. And I don't think we can fix all that\r\n> without either a very large number of grotty hacks similar to the one\r\n> installed by 04158e7fa37c2dda9c3421ca922d02807b86df19, or a complete\r\n> redesign of the feature. I believe the latter is probably a wiser\r\n> course of action.\r\n\r\nI can't comment on the design as much, but from a release standpoint, \r\nbut security concerns this close to the RC/GA period do concern me.\r\n\r\nApplying the lessons from PG15 + SQL/JSON where we (and I'll own that I \r\nwas the one who pushed hard to include it) let it stay too long when it \r\nshould have been reverted, I think we should take more time to work on \r\nthis feature, revert it for PG17, and target it for PG18.\r\n\r\nI understand it's disappointing to do a late revert of a feature, but I \r\nthink it's better to be safer, particularly if we believe there's a an \r\nelevated risk of releasing something with vulnerabilities. As we saw \r\nwith SQL/JSON, this we'll give us more time to come up with design we \r\nagree with, further test, and then promote as part of PG18.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Thu, 22 Aug 2024 12:43:22 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\nOn Thu, Aug 22, 2024 at 7:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> In response to some concerns raised about this fix on the\n> pgsql-release list today, I spent some time investigating this patch.\n> Unfortunately, I think there are too many problems here to be\n> reasonably fixed before release, and I think all of SPLIT/MERGE\n> PARTITION needs to be reverted.\n>\n> I focused my investigation on createPartitionTable(), which is a\n> helper for both SPLIT PARTITION and MERGE PARTITION, and it works by\n> consing up a CREATE TABLE AS statement and then feeding that back\n> through\n> ProcessUtility. I think it's bad design to use such a high-level\n> facility here; it is unlike what we do elsewhere in tablecmds.c and\n> opens us up to a variety of problems. The first thing that I\n> discovered is that this patch does not fix all of the repeated name\n> lookup problems. There is still this:\n>\n> tlc->relation =\n> makeRangeVar(get_namespace_name(RelationGetNamespace(modelRel)),\n> RelationGetRelationName(modelRel), -1);\n>\n> And also this:\n>\n> createStmt->tablespacename =\n> get_tablespace_name(modelRel->rd_rel->reltablespace);\n>\n> In both cases, we do a reverse lookup on an OID to get a name which\n> the CREATE TABLE code will later turn back into an OID. If we don't\n> get the same value, that's at least a bug and probably a security\n> vulnerability, and there is no way to be certain that we will get the\n> same value. The only remedy is to not repeat the lookup in the first\n> place.\n>\n> Then I got to looking at this:\n>\n> tlc->options = CREATE_TABLE_LIKE_ALL &\n> ~(CREATE_TABLE_LIKE_INDEXES | CREATE_TABLE_LIKE_IDENTITY |\n> CREATE_TABLE_LIKE_STATISTICS);\n>\n> It's not obvious at first glance that there is a critical problem\n> here, but there are reasons to be nervous. 
We're deploying a lot of\n> machinery here to copy a lot of stuff and, while that's efficient from\n> a coding perspective, it means that stuff you might not expect can\n> just kind of happen. For instance:\n>\n> robert.haas=# \\d+\n> List of relations\n> Schema | Name | Type | Owner | Persistence |\n> Access method | Size | Description\n> --------+------+-------------------+-------------+-------------+---------------+------------+-------------\n> public | foo | partitioned table | robert.haas | permanent |\n> | 0 bytes |\n> public | foo1 | table | robert.haas | permanent | heap\n> | 8192 bytes |\n> public | foo2 | table | bob | permanent | heap\n> | 8192 bytes |\n> (3 rows)\n> robert.haas=# alter table foo split partition foo2 into (partition\n> foo3 for values from (10) to (15), partition foo4 for values from (15)\n> to (20));\n> ALTER TABLE\n> robert.haas=# \\d+\n> List of relations\n> Schema | Name | Type | Owner | Persistence |\n> Access method | Size | Description\n> --------+------+-------------------+-------------+-------------+---------------+------------+-------------\n> public | foo | partitioned table | robert.haas | permanent |\n> | 0 bytes |\n> public | foo1 | table | robert.haas | permanent | heap\n> | 8192 bytes |\n> public | foo3 | table | robert.haas | permanent | heap\n> | 8192 bytes |\n> public | foo4 | table | robert.haas | permanent | heap\n> | 8192 bytes |\n> (4 rows)\n>\n> I've split a partition owned by bob into two partitions owned by\n> robert.haas. That's rather surprising. It doesn't work to split a\n> partition that I don't own (and thus gain access to it) but if the\n> superuser splits a non-superuser's partition, the superuser ends\n> upowning the new partitions. I don't know if that's a vulnerability or\n> just unexpected. 
However, then I found this, which I'm pretty well\n> certain is a vulnerability:\n>\n> robert.haas=# set role bob;\n> SET\n> robert.haas=> create table foo (a int, b text) partition by range (a);\n> CREATE TABLE\n> robert.haas=> create table foo1 partition of foo for values from (0) to (10);\n> CREATE TABLE\n> robert.haas=> create table foo2 partition of foo for values from (10) to (20);\n> CREATE TABLE\n> robert.haas=> insert into foo values (11, 'carrots'), (16, 'pineapple');\n> INSERT 0 2\n> robert.haas=> create or replace function run_me(integer) returns\n> integer as $$begin raise notice 'you are running me as %',\n> current_user; return $1; end$$ language plpgsql immutable;\n> CREATE FUNCTION\n> robert.haas=> create index on foo (run_me(a));\n> NOTICE: you are running me as bob\n> NOTICE: you are running me as bob\n> CREATE INDEX\n> robert.haas=> reset role;\n> RESET\n> robert.haas=# alter table foo split partition foo2 into (partition\n> foo3 for values from (10) to (15), partition foo4 for values from (15)\n> to (20));\n> NOTICE: you are running me as robert.haas\n> NOTICE: you are running me as robert.haas\n> ALTER TABLE\n>\n> I think it is very unlikely that the problems mentioned above are the\n> only ones. They're just what I found in an hour or two of testing.\n> Even if they were, we're probably too close to release to be rushing\n> out last minute fixes to multiple unanticipated security problems. But\n> because of the design that was chosen here, I think there is probably\n> more stuff here that is not right, some of which is security relevant\n> and some of which is just a question of whether we're really getting\n> the behavior that we want. And I don't think we can fix all that\n> without either a very large number of grotty hacks similar to the one\n> installed by 04158e7fa37c2dda9c3421ca922d02807b86df19, or a complete\n> redesign of the feature. I believe the latter is probably a wiser\n> course of action.\n\nThank you for your feedback. 
Yes, it seems that there is not enough\ntime to even carefully analyze all the issues in these features. The\nrule of thumb I can get from this experience is \"think multiple times\nbefore accessing something already opened by its name\". I'm going to\nrevert these features during next couple days.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Thu, 22 Aug 2024 19:43:35 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Thu, Aug 22, 2024 at 12:43 PM Alexander Korotkov\n<aekorotkov@gmail.com> wrote:\n> Thank you for your feedback. Yes, it seems that there is not enough\n> time to even carefully analyze all the issues in these features. The\n> rule of thumb I can get from this experience is \"think multiple times\n> before accessing something already opened by its name\". I'm going to\n> revert these features during next couple days.\n\nThanks, and sorry about that. I would say even \"think multiple times\"\nis possibly not strong enough -- it might almost be \"just don't ever\ndo it\". Even if (in some particular case) the invalidation mechanism\nseems to protect you from getting wrong answers, there are often holes\nin that, specifically around search_path = foo, bar and you're\noperating on an object in schema bar and an identically-named object\nis created in schema foo at just the wrong time. Sometimes there are\nproblems even when search_path is not involved, but when it is, there\nare more.\n\nHere, aside from the name lookup issues, there are also problems with\nexpression evaluation: we can't split partitions without reindexing\nrows that those partitions contain, and it is critical to think\nthrough which is going to do the evaluation and make sure it's\nproperly sandboxed. I think we might need\nSECURITY_RESTRICTED_OPERATION here.\n\nAnother thing I want to highlight if you do have another go at this\npatch is that it's really critical to think about where every single\nproperty of the newly-created tables comes from. The original patch\ndidn't consider relpersistence or tableam, and here I just discovered\nthat owner is also an issue that probably needs more consideration,\nbut it goes way beyond that. For example, I was surprised to discover\nthat if I put per-partition constraints or triggers on a partition and\nthen split it, they were not duplicated to the new partitions. 
Now,\nmaybe that's actually the behavior we want -- I'm not 100% positive --\nbut it sure wasn't what I was expecting. If we did duplicate them when\nsplitting, then what's supposed to happen when merging occurs? That is\nnot at all obvious, at least to me, but it needs careful thought. ACLs\nand rules and default values and foreign keys (both outbond and\ninbound) all need to be considered too, along with 27 other things\nthat I'm sure I'm not thinking about right now. Some of this behavior\nshould probably be explicitly documented, but all of it should be\nconsidered carefully enough before commit to avoid surprises later. I\nsay that both from a security point of view and also just from a user\nexperience point of view. Even if things aren't insecure, they can\nstill be annoying, but it's not uncommon in cases like this for\nannoying things to turn out to also be insecure.\n\nFinally, if you do revisit this, I believe it would be a good idea to\nthink a bit harder about how data is moved around. My impression (and\nplease correct me if I am mistaken) is that currently, any split or\nmerge operation rewrites all the data in the source partition(s). If a\nlarge partition is being split nearly equally, I think that has a good\nchance of being optimal, but I think that might be the only case. If\nwe're merging partitions, wouldn't it be better to adjust the\nconstraints on the first partition -- or perhaps the largest partition\nif we want to be clever -- and insert the data from all of the others\ninto it? Maybe that would even have syntax that puts the user in\ncontrol of which partition survives, e.g. ALTER TABLE tab1 MERGE\nPARTITION part1 WITH part2, part3, .... 
That would also make it really\nobvious to the user what all of the properties of part1 will be after\nthe merge: they will be exactly the same as they were before the\nmerge, except that the partition constraint will have been adjusted.\nYou basically dodge everything in the previous paragraph in one shot,\nand it seems like it would also be faster. Splitting there's no\nsimilar get-out-of-jail free card, at least not that I can see. Even\nif you add syntax that splits a partition by using INSERT/DELETE to\nmove some rows to a newly-created partition, you still have to make at\nleast one new partition. But possibly that syntax is worth having\nanyway, because it would be a lot quicker in the case of a highly\nasymmetric split. On the other hand, maybe even splits are much more\nlikely and we don't really need it. I don't know.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 22 Aug 2024 13:25:07 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Thu, Aug 22, 2024 at 8:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Aug 22, 2024 at 12:43 PM Alexander Korotkov\n> <aekorotkov@gmail.com> wrote:\n> > Thank you for your feedback. Yes, it seems that there is not enough\n> > time to even carefully analyze all the issues in these features. The\n> > rule of thumb I can get from this experience is \"think multiple times\n> > before accessing something already opened by its name\". I'm going to\n> > revert these features during next couple days.\n>\n> Thanks, and sorry about that. I would say even \"think multiple times\"\n> is possibly not strong enough -- it might almost be \"just don't ever\n> do it\". Even if (in some particular case) the invalidation mechanism\n> seems to protect you from getting wrong answers, there are often holes\n> in that, specifically around search_path = foo, bar and you're\n> operating on an object in schema bar and an identically-named object\n> is created in schema foo at just the wrong time. Sometimes there are\n> problems even when search_path is not involved, but when it is, there\n> are more.\n>\n> Here, aside from the name lookup issues, there are also problems with\n> expression evaluation: we can't split partitions without reindexing\n> rows that those partitions contain, and it is critical to think\n> through which is going to do the evaluation and make sure it's\n> properly sandboxed. I think we might need\n> SECURITY_RESTRICTED_OPERATION here.\n>\n> Another thing I want to highlight if you do have another go at this\n> patch is that it's really critical to think about where every single\n> property of the newly-created tables comes from. The original patch\n> didn't consider relpersistence or tableam, and here I just discovered\n> that owner is also an issue that probably needs more consideration,\n> but it goes way beyond that. 
For example, I was surprised to discover\n> that if I put per-partition constraints or triggers on a partition and\n> then split it, they were not duplicated to the new partitions. Now,\n> maybe that's actually the behavior we want -- I'm not 100% positive --\n> but it sure wasn't what I was expecting. If we did duplicate them when\n> splitting, then what's supposed to happen when merging occurs? That is\n> not at all obvious, at least to me, but it needs careful thought. ACLs\n> and rules and default values and foreign keys (both outbond and\n> inbound) all need to be considered too, along with 27 other things\n> that I'm sure I'm not thinking about right now. Some of this behavior\n> should probably be explicitly documented, but all of it should be\n> considered carefully enough before commit to avoid surprises later. I\n> say that both from a security point of view and also just from a user\n> experience point of view. Even if things aren't insecure, they can\n> still be annoying, but it's not uncommon in cases like this for\n> annoying things to turn out to also be insecure.\n>\n> Finally, if you do revisit this, I believe it would be a good idea to\n> think a bit harder about how data is moved around. My impression (and\n> please correct me if I am mistaken) is that currently, any split or\n> merge operation rewrites all the data in the source partition(s). If a\n> large partition is being split nearly equally, I think that has a good\n> chance of being optimal, but I think that might be the only case. If\n> we're merging partitions, wouldn't it be better to adjust the\n> constraints on the first partition -- or perhaps the largest partition\n> if we want to be clever -- and insert the data from all of the others\n> into it? Maybe that would even have syntax that puts the user in\n> control of which partition survives, e.g. ALTER TABLE tab1 MERGE\n> PARTITION part1 WITH part2, part3, .... 
That would also make it really\n> obvious to the user what all of the properties of part1 will be after\n> the merge: they will be exactly the same as they were before the\n> merge, except that the partition constraint will have been adjusted.\n> You basically dodge everything in the previous paragraph in one shot,\n> and it seems like it would also be faster. Splitting there's no\n> similar get-out-of-jail free card, at least not that I can see. Even\n> if you add syntax that splits a partition by using INSERT/DELETE to\n> move some rows to a newly-created partition, you still have to make at\n> least one new partition. But possibly that syntax is worth having\n> anyway, because it would be a lot quicker in the case of a highly\n> asymmetric split. On the other hand, maybe even splits are much more\n> likely and we don't really need it. I don't know.\n\nThank you for so valuable feedback! When I have another go over this\npatch I will ensure this is addressed.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Fri, 23 Aug 2024 03:56:23 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\nAlexander Korotkov, Robert Haas - thanks for fixes and feedbacks!\n\nThis email is a starting point for further work.\nThere are two files attached to this email:\nv32-0001-Implement-ALTER-TABLE-.-MERGE-PARTITIONS-.-comma.patch,\nv32-0002-Implement-ALTER-TABLE-.-SPLIT-PARTITION-.-comman.patch.\n\nThey contains changes from reverted commits 1adf16b8fb, 87c21bb941, and\nsubsequent fixes and improvements including df64c81ca9, c99ef1811a,\n9dfcac8e15, 885742b9f8, 842c9b2705, fcf80c5d5f, 96c7381c4c, f4fc7cb54b,\n60ae37a8bc, 259c96fa8f, 449cdcd486, 3ca43dbbb6, 2a679ae94e, 3a82c689fd,\nfbd4321fd5, d53a4286d7, c086896625, 4e5d6c4091.\nI didn't include fix 04158e7fa3 into patches because Robert Haas \nobjected to its use.\n\nA short list of known issues and questions (see more details in [1] and \n[2]):\n\n1. Function createPartitionTable() should be rewritten using partitioned \ntable OID (not name) and without using ProcessUtility().\n\n2. Should it be considered an error when we split a partition owned by \nanother user and get partitions that owned by our user?\n(I think this is not a problem. Perhaps disallow merging other users' \npartitions would be too strict a restriction.)\n\n3. About the functional index \"create index on foo (run_me(a));\".\n(Should we disallow merging of another user's partitions when \npartitioned table has functional indexes? SECURITY_RESTRICTED_OPERATION?)\n\n4. Need to decide what is correct in case there are per-partition \nconstraints or triggers on a split partition. They not duplicated to the \nnew partitions now. (But might be in this case we should have an error \nor warning?)\n\n5. \"If we're merging partitions, wouldn't it be better to adjust the \nconstraints on the first partition - or perhaps the largest partition if \nwe want to be clever -- and insert the data from all of the others into \nit? Maybe that would even have syntax that puts the user in control of \nwhich partition survives, e.g. 
ALTER TABLE tab1 MERGE PARTITION part1 \nWITH part2, part3, .... That would also make it really obvious to the \nuser what all of the properties of part1 will be after the merge: they \nwill be exactly the same as they were before the merge, except that the \npartition constraint will have been adjusted.\"\n(Similar optimization was proposed in [3] but was rejected [4]).\n\nLinks.\n[1] \nhttps://www.postgresql.org/message-id/CA%2BTgmobHYix%3DNn8D4RUHa6fhUVPR88KGAMq1pBfnGfOfEjRixA%40mail.gmail.com.\n[2] \nhttps://www.postgresql.org/message-id/CA%2BTgmoY0%3DbT_xBP8csR%3DMFE%3DFxGE2n2-me2-31jBOgEcLvW7ug%40mail.gmail.com\n[3] \nhttps://www.postgresql.org/message-id/c3730d78-6081-4c41-9715-d1d192734576%40postgrespro.ru, \nsee v31-0003-Additional-patch-for-ALTER-TABLE-.-MERGE-PARTITI.patch\n[4] \nhttps://www.postgresql.org/message-id/CAPpHfdtj7YsPaASoVPN%2BN3H4_Ct%2BkQw8QY1d_9u7FPnbghkicw%40mail.gmail.com\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Tue, 27 Aug 2024 21:24:35 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "On Tue, Aug 27, 2024 at 2:24 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> They contains changes from reverted commits 1adf16b8fb, 87c21bb941, and\n> subsequent fixes and improvements including df64c81ca9, c99ef1811a,\n> 9dfcac8e15, 885742b9f8, 842c9b2705, fcf80c5d5f, 96c7381c4c, f4fc7cb54b,\n> 60ae37a8bc, 259c96fa8f, 449cdcd486, 3ca43dbbb6, 2a679ae94e, 3a82c689fd,\n> fbd4321fd5, d53a4286d7, c086896625, 4e5d6c4091.\n> I didn't include fix 04158e7fa3 into patches because Robert Haas\n> objected to its use.\n\nTo be clear, I'm not against 04158e7fa3. I just don't think it fixes everything.\n\n> 1. Function createPartitionTable() should be rewritten using partitioned\n> table OID (not name) and without using ProcessUtility().\n\nAgree.\n\n> 2. Should it be considered an error when we split a partition owned by\n> another user and get partitions that owned by our user?\n> (I think this is not a problem. Perhaps disallow merging other users'\n> partitions would be too strict a restriction.)\n>\n> 3. About the functional index \"create index on foo (run_me(a));\".\n> (Should we disallow merging of another user's partitions when\n> partitioned table has functional indexes? SECURITY_RESTRICTED_OPERATION?)\n>\n> 4. Need to decide what is correct in case there are per-partition\n> constraints or triggers on a split partition. They not duplicated to the\n> new partitions now. (But might be in this case we should have an error\n> or warning?)\n\nI think we want to avoid giving errors or warnings. For all of these\ncases, and others, we need to consider what the expected behavior is,\nand have test cases and documentation as appropriate. But we shouldn't\nthink of it as \"let's make it fail if the user does something that's\nnot safe\" but rather \"let's figure out how to make it safe.\"\n\n> 5. 
\"If we're merging partitions, wouldn't it be better to adjust the\n> constraints on the first partition - or perhaps the largest partition if\n> we want to be clever -- and insert the data from all of the others into\n> it? Maybe that would even have syntax that puts the user in control of\n> which partition survives, e.g. ALTER TABLE tab1 MERGE PARTITION part1\n> WITH part2, part3, .... That would also make it really obvious to the\n> user what all of the properties of part1 will be after the merge: they\n> will be exactly the same as they were before the merge, except that the\n> partition constraint will have been adjusted.\"\n> (Similar optimization was proposed in [3] but was rejected [4]).\n\nInteresting. Maybe it would be a good idea to set up some test cases\nto see which approach is better in different cases. Like try moving\ndata from foo1 to foo2 with DELETE..INSERT vs. creating a new table\nwith CTAS from foo1 UNION ALL foo2 and then indexing it. I think\nAlexander has a good point there, but I think my point is good too so\nI'm not sure which way wins.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Aug 2024 09:45:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
},
{
"msg_contents": "Hi!\n\nI plan to prepare fixes for issues from email [1] as separate commits \n(for better code readability). Attachment in this email is a variant of \nfix for the issue:\n\n > 1. Function createPartitionTable() should be rewritten using\n > partitioned table OID (not name) and without using ProcessUtility().\n\nPatch \"Refactor createPartitionTable to remove ProcessUtility call\" \ncontains code changes + test (see file \nv33-0003-Refactor-createPartitionTable-to-remove-ProcessU.patch).\n\nBut I'm not sure that refactoring createPartitionTable is the best \nsolution. PostgreSQL code has issue CVE-2014-0062 (commit 5f17304) - see \nrelation_openrv() call in expandTableLikeClause() function [2] (opening \nrelation by name after we got relation Oid).\nExample for reproduce relation_openrv() call:\n\nCREATE TABLE t (b bigint, i int DEFAULT 100);\nCREATE TABLE t1 (LIKE t_bigint INCLUDING ALL);\n\nCommit 04158e7fa3 [3] (by Alexander Korotkov) might be a good fix for \nthis issue. But if we keep commit 04158e7fa3, do we need to refactor the \ncreatePartitionTable function (for removing ProcessUtility)?\nPerhaps the existing code\n1) v33-0002-Implement-ALTER-TABLE-.-SPLIT-PARTITION-.-comman.patch\n2) v33-0003-Refactor-createPartitionTable-to-remove-ProcessU.patch +\nwith patch 04158e7fa3 will look better.\n\n\nI would be very grateful for comments and suggestions.\n\nLinks.\n[1] \nhttps://www.postgresql.org/message-id/859476bf-3cb0-455e-b093-b8ab5ef17f0e%40postgrespro.ru\n[2] \nhttps://github.com/postgres/postgres/blob/c39afc38cfec7c34b883095062a89a63b221521a/src/backend/parser/parse_utilcmd.c#L1171\n[3] \nhttps://github.com/postgres/postgres/commit/04158e7fa37c2dda9c3421ca922d02807b86df19\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com",
"msg_date": "Fri, 30 Aug 2024 11:43:10 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add SPLIT PARTITION/MERGE PARTITIONS commands"
}
]
[
{
"msg_contents": "|generate_series| ( /|start|/ |timestamp with time zone|, /|stop|/ \n|timestamp with time zone|, /|step|/ |interval| )\nproduces results depending on the timezone value set:\n\nSET timezone = 'UTC';\nSELECT ts, ts AT TIME ZONE 'UTC'\nFROM generate_series('2022-03-26 00:00:00+01'::timestamptz, '2022-03-30 \n00:00:00+01'::timestamptz, '1 day') AS ts;\n\nSET timezone = 'Europe/Warsaw';\nSELECT ts, ts AT TIME ZONE 'UTC'\nFROM generate_series('2022-03-26 00:00:00+01'::timestamptz, '2022-03-30 \n00:00:00+01'::timestamptz, '1 day') AS ts;\n\nSometimes this is a very big problem.\n\nThe fourth argument with the time zone will be very useful:\n|generate_series| ( /|start|/ |timestamp with time zone|, /|stop|/ \n|timestamp with time zone|, /|step|/ |interval| [, zone text] )\n\nThe situation is similar with the function timestamptz_pl_interval. The \nthird parameter for specifying the zone would be very useful.\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Tue, 31 May 2022 21:54:05 +0200",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl> writes:\n> |generate_series| ( /|start|/ |timestamp with time zone|, /|stop|/ \n> |timestamp with time zone|, /|step|/ |interval| )\n> produces results depending on the timezone value set:\n\nThat's intentional. If you don't want it, maybe you should be using\ngenerate_series on timestamp without time zone?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 May 2022 16:54:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "Tom Lane wrote on 31.05.2022 22:54:\n> =?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl> writes:\n>> |generate_series| ( /|start|/ |timestamp with time zone|, /|stop|/\n>> |timestamp with time zone|, /|step|/ |interval| )\n>> produces results depending on the timezone value set:\n> That's intentional. If you don't want it, maybe you should be using\n> generate_series on timestamp without time zone?\n>\n> \t\t\tregards, tom lane\n1. Of course it is intentional. And usually everything works as it should.\n\nBut with multi-zone applications, using timestamptz generates a lot of \ntrouble.\nIt would be appropriate to supplement a few functions with the \npossibility of specifying a zone (of course, for timestamptz variants):\n- generate_series\n- date_bin (additionally with support for months and years)\n- timestamptz_plus_interval (the key issue is adding months and years, \n\"+\" operator only does this in the local zone)\n\nNot everything can be solved by converting the time between timestamptz \nand timestamp (e.g. using the timezone function).\nDaylight saving time reveals additional problems that are not visible at \nfirst glance.\n\nJust if DST did not exist, a simple conversion (AT TIME ZONE '...') \nwould have been enough.\nUnfortunately, DST is popular and, additionally, countries modify their \ntime zones from time to time.\n\n2. Because I lack the necessary experience, I want to introduce changes \nin parts.\nThere is patch for first function timestamptz_plus_interval.\n\nI don't know how to properly correct pg_proc.dat and add a variant of \nthis function with 3 arguments now.\n\nPlease comment on the patch and provide tips for pg_proc.\nIf it works for me, I will improve generate_series.\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Wed, 1 Jun 2022 16:45:16 +0200",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "Dear colleagues,\nPlease let me know what is the convention (procedure) of adding new \nfunctions to pg_proc. Specifically how oid is allocated.\nThis will allow me to continue working on the patch.\n\nI have to extend the timestamptz_pl_interval function, which is in fact \nan addition operator. But an additional parameter is needed to specify \nthe timezone.\nTherefore, should I add a second function timestamptz_pl_interval with \nthree arguments, or should a function with a different name be added so \nthat it does not get confused with operator functions (which only have \ntwo arguments)?\nWhat is the proposed name for such a function (add(timestamptz, \ninterval, timezone), date_add(timestamptz, interval, timezone), ...)?\n\nPrzemysław Sztoch wrote on 01.06.2022 16:45:\n>\n>\n> Tom Lane wrote on 31.05.2022 22:54:\n>> =?UTF-8?Q?Przemys=c5=82aw_Sztoch?=<przemyslaw@sztoch.pl> writes:\n>>> |generate_series| ( /|start|/ |timestamp with time zone|, /|stop|/\n>>> |timestamp with time zone|, /|step|/ |interval| )\n>>> produces results depending on the timezone value set:\n>> That's intentional. If you don't want it, maybe you should be using\n>> generate_series on timestamp without time zone?\n>>\n>> \t\t\tregards, tom lane\n> 1. Of course it is intentional. And usually everything works as it \n> should.\n>\n> But with multi-zone applications, using timestamptz generates a lot of \n> trouble.\n> It would be appropriate to supplement a few functions with the \n> possibility of specifying a zone (of course, for timestamptz variants):\n> - generate_series\n> - date_bin (additionally with support for months and years)\n> - timestamptz_plus_interval (the key issue is adding months and years, \n> \"+\" operator only does this in the local zone)\n>\n> Not everything can be solved by converting the time between \n> timestamptz and timestamp (e.g. 
using the timezone function).\n> Daylight saving time reveals additional problems that are not visible \n> at first glance.\n>\n> Just if DST did not exist, a simple conversion (AT TIME ZONE '...') \n> would have been enough.\n> Unfortunately, DST is popular and, additionally, countries modify \n> their time zones from time to time.\n>\n> 2. Because I lack the necessary experience, I want to introduce \n> changes in parts.\n> There is patch for first function timestamptz_plus_interval.\n>\n> I don't know how to properly correct pg_proc.dat and add a variant of \n> this function with 3 arguments now.\n>\n> Please comment on the patch and provide tips for pg_proc.\n> If it works for me, I will improve generate_series.\n>\n> -- \n> Przemysław Sztoch | Mobile +48 509 99 00 66\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Tue, 14 Jun 2022 15:18:07 +0200",
    "msg_from": "Przemysław Sztoch <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl> writes:\n> Please let me know what is the convention (procedure) of adding new \n> functions to pg_proc. Specifically how oid is allocated.\n\nSee\nhttps://www.postgresql.org/docs/devel/system-catalog-initial-data.html#SYSTEM-CATALOG-OID-ASSIGNMENT\n(you should probably read that whole chapter for context).\n\n> Therefore, should I add a second function timestamptz_pl_interval with \n> three arguments, or should a function with a different name be added so \n> that it does not get confused with operator functions (which only have \n> two arguments)?\n\nThat's where you get into beauty-is-in-the-eye-of-the-beholder\nterritory. There's some value in naming related functions alike,\nbut on the other hand I doubt timestamptz_pl_interval would have\nbeen named so verbosely if anyone expected it to be called by\nname rather than via an operator. Coming up with good names is\npart of the work of preparing a patch like this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jun 2022 09:43:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "Tom Lane wrote on 14.06.2022 15:43:\n> =?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl> writes:\n>> Please let me know what is the convention (procedure) of adding new\n>> functions to pg_proc. Specifically how oid is allocated.\n> See\n> https://www.postgresql.org/docs/devel/system-catalog-initial-data.html#SYSTEM-CATALOG-OID-ASSIGNMENT\n> (you should probably read that whole chapter for context).\nThx.\n\nThere is another patch.\nIt works, but one thing is wrongly done because I lack knowledge.\n\nWhere I'm using DirectFunctionCall3 I need to pass the timezone name, \nbut I'm using cstring_to_text and I'm pretty sure there's a memory leak \nhere. But I need help to fix this.\nI don't know how best to store the timezone in the generate_series \ncontext. Please, help.\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Tue, 14 Jun 2022 21:46:26 +0200",
    "msg_from": "Przemysław Sztoch <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "Przemysław Sztoch wrote on 14.06.2022 21:46:\n> Tom Lane wrote on 14.06.2022 15:43:\n>> =?UTF-8?Q?Przemys=c5=82aw_Sztoch?=<przemyslaw@sztoch.pl> writes:\n>>> Please let me know what is the convention (procedure) of adding new\n>>> functions to pg_proc. Specifically how oid is allocated.\n>> See\n>> https://www.postgresql.org/docs/devel/system-catalog-initial-data.html#SYSTEM-CATALOG-OID-ASSIGNMENT\n>> (you should probably read that whole chapter for context).\n> Thx.\n>\n> There is another patch.\n> It works, but one thing is wrongly done because I lack knowledge.\n>\n> Where I'm using DirectFunctionCall3 I need to pass the timezone name, \n> but I'm using cstring_to_text and I'm pretty sure there's a memory \n> leak here. But I need help to fix this.\n> I don't know how best to store the timezone in the generate_series \n> context. Please, help.\nPlease give me feedback on how to properly store the timezone name in \nthe function context structure. I can't finish my work without it.\n\nAdditionally, I added a new variant of the date_trunc function that \ntakes intervals as an argument.\nIt enables functionality similar to date_bin, but supports monthly, \nquarterly, annual, etc. periods.\nIn addition, it is resistant to the problems of different time zones and \ndaylight saving time (DST).\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Tue, 21 Jun 2022 16:55:40 +0200",
    "msg_from": "Przemysław Sztoch <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "On Tue, Jun 21, 2022 at 7:56 AM Przemysław Sztoch <przemyslaw@sztoch.pl> wrote:\n> There is another patch.\n> It works, but one thing is wrongly done because I lack knowledge.\n\nThank you for continuing to work on it despite this being your first\ntime contributing, and despite the difficulties. I'll try to help as\nmuch as I can.\n\n> Where I'm using DirectFunctionCall3 I need to pass the timezone name, but I'm using cstring_to_text and I'm pretty sure there's a memory leak here. But I need help to fix this.\n> I don't know how best to store the timezone in the generate_series context. Please, help.\n\nIn Postgres code we generally don't worry about memory leaks (a few\ncaveats apply). The MemoryContext infrastructure (see aset.c) enables\nus to be fast and loose with memory allocations. A good way to know if\nyou should be worried about your allocations is to look in the\nneighboring code, and see what it does with the memory it\nallocates.\n\nI think your use of cstring_to_text() is safe.\n\n> Please give me feedback on how to properly store the timezone name in the function context structure. I can't finish my work without it.\n\nThe way I see it, I don't think you need to store the tz-name in the\nfunction context structure, like you're currently doing. I think you\ncan remove the additional member from the\ngenerate_series_timestamptz_fctx struct, and refactor your code in\ngenerate_series_timestamptz() to work without it; you seem to be\nusing the tzname member almost as a boolean flag, because the actual\nvalue you pass to DFCall3() can be calculated without first storing\nanything in the struct.\n\n> Additionally, I added a new variant of the date_trunc function that takes intervals as an argument.\n> It enables functionality similar to date_bin, but supports monthly, quarterly, annual, etc. periods.\n> In addition, it is resistant to the problems of different time zones and daylight saving time (DST).\n\nThis addition is beyond the original scope (add TZ param), so I think\nit would be considered a separate change/feature. But for now, we can\nkeep it in.\n\nAlthough not necessary, it'd be nice to have changes that can be\npresented as single units, be a patch of their own. If you're\nproficient with Git, can you please maintain each SQL-callable\nfunction as a separate commit in your branch, and use `git\nformat-patch` to generate a series for submission.\n\nCan you please explain why you chose to remove the provolatile\nattribute from the existing entry of date_trunc in pg_proc.dat.\n\nIt seems like you've picked/reused code from neighboring functions\n(e.g. from timestamptz_trunc_zone()). Can you please see if you can\nturn such code into a function, and call the function, instead of\ncopying it.\n\nAlso, according to the comment at the top of pg_proc.dat,\n\n # Once upon a time these entries were ordered by OID. Lately it's often\n # been the custom to insert new entries adjacent to related older entries.\n\nSo instead of adding your entries at the bottom of the file, please place\neach entry closer to an existing entry that's relevant to it.\n\nI'm glad that you're following the advice on the patch-submission wiki\npage [1]. When submitting a patch for committers' consideration,\nthough, the submission needs to cross quite a few hurdles. So I have\nprepared a markdown doc [2]. Let's fill in as much detail there as\npossible, before we mark it 'Ready for Committer' in the CF app.\n\n[1]: https://wiki.postgresql.org/wiki/Submitting_a_Patch\n[2]: https://wiki.postgresql.org/wiki/Patch_Reviews\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 30 Jun 2022 21:35:56 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "Gurjeet Singh wrote on 01.07.2022 06:35:\n> On Tue, Jun 21, 2022 at 7:56 AM Przemysław Sztoch <przemyslaw@sztoch.pl> wrote:\n>> Please give me feedback on how to properly store the timezone name in the function context structure. I can't finish my work without it.\n> The way I see it, I don't think you need to store the tz-name in the\n> function context structure, like you're currently doing. I think you\n> can remove the additional member from the\n> generate_series_timestamptz_fctx struct, and refactor your code in\n> generate_series_timestamptz() to work without it.; you seem to be\n> using the tzname member almost as a boolean flag, because the actual\n> value you pass to DFCall3() can be calculated without first storing\n> anything in the struct.\nDo I understand correctly that functions that return SET are executed \nmultiple times?\nIs access to arguments available all the time?\nI thought PG_GETARG_ could only be used when SRF_IS_FIRSTCALL() is true \n- was I right or wrong?\n> Can you please explain why you chose to remove the provolatile\n> attribute from the existing entry of date_trunc in pg_proc.dat.\nI believe it was a mistake in PG code.\nAll timestamptz functions must be STABLE as they depend on the current: \nSHOW timezone.\nIf new functions are created that pass the zone as a parameter, they \nbecome IMMUTABLE.\nFirst date_trunc function implementation was without time zone parameter \nand someone who\nadded second variant (with timezone as parameter) copied the definition \nwithout removing the STABLE flag.\n> It seems like you've picked/reused code from neighboring functions\n> (e.g. from timestamptz_trunc_zone()). Can you please see if you can\n> turn such code into a function, and call the function, instead of\n> copying it.\nOk. Changed.\n> Also, according to the comment at the top of pg_proc.dat,\n>\n> # Once upon a time these entries were ordered by OID. Lately it's often\n> # been the custom to insert new entries adjacent to related older entries.\n>\n> So instead of adding your entries at the bottom of the file, please\n> each entry closer to an existing entry that's relevant to it.\nOk. Changed.\n\nSome regression tests have been added.\n\nI have a problem with this:\n-- Considering only built-in procs (prolang = 12), look for multiple uses\n-- of the same internal function (ie, matching prosrc fields). It's OK to\n-- have several entries with different pronames for the same internal \nfunction,\n-- but conflicts in the number of arguments and other critical items should\n-- be complained of. (We don't check data types here; see next query.)\n-- Note: ignore aggregate functions here, since they all point to the same\n-- dummy built-in function.\nSELECT p1.oid, p1.proname, p2.oid, p2.proname (...):\n oid | proname | oid | proname\n------+-------------------------+------+-----------------\n 1189 | timestamptz_pl_interval | 8800 | date_add\n 939 | generate_series | 8801 | generate_series\n(2 rows)\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Fri, 1 Jul 2022 15:43:05 +0200",
    "msg_from": "Przemysław Sztoch <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl> writes:\n> I have problem with this:\n> -- Considering only built-in procs (prolang = 12), look for multiple uses\n> -- of the same internal function (ie, matching prosrc fields). It's OK to\n> -- have several entries with different pronames for the same internal \n> function,\n> -- but conflicts in the number of arguments and other critical items should\n> -- be complained of. (We don't check data types here; see next query.)\n\nIt's telling you you're violating project style. Don't make multiple\npg_proc entries point at the same C function and then use PG_NARGS\nto disambiguate; instead point at two separate functions. The functions\ncan share code at the next level down, if they want. (Just looking\nat the patch, though, I wonder if sharing code is really beneficial\nin this case. It seems quite messy, and I wouldn't be surprised\nif it hurts performance in the existing case.)\n\nYou also need to expend some more effort on refactoring code, to\neliminate silliness like looking up the timezone name each time\nthrough the SRF. That's got to be pretty awful performance-wise.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 03 Jul 2022 18:31:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "Tom Lane wrote on 04.07.2022 00:31:\n> =?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl> writes:\n>> I have problem with this:\n>> -- Considering only built-in procs (prolang = 12), look for multiple uses\n>> -- of the same internal function (ie, matching prosrc fields). It's OK to\n>> -- have several entries with different pronames for the same internal\n>> function,\n>> -- but conflicts in the number of arguments and other critical items should\n>> -- be complained of. (We don't check data types here; see next query.)\n> It's telling you you're violating project style. Don't make multiple\n> pg_proc entries point at the same C function and then use PG_NARGS\n> to disambiguate; instead point at two separate functions. The functions\n> can share code at the next level down, if they want. (Just looking\n> at the patch, though, I wonder if sharing code is really beneficial\n> in this case. It seems quite messy, and I wouldn't be surprised\n> if it hurts performance in the existing case.)\n>\n> You also need to expend some more effort on refactoring code, to\n> eliminate silliness like looking up the timezone name each time\n> through the SRF. That's got to be pretty awful performance-wise.\n>\n> \t\t\tregards, tom lane\nThx. Code is refactored. It is better, now.\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Mon, 4 Jul 2022 13:00:03 +0200",
    "msg_from": "Przemysław Sztoch <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "Przemysław Sztoch wrote on 01.07.2022 15:43:\n> Gurjeet Singh wrote on 01.07.2022 06:35:\n>> On Tue, Jun 21, 2022 at 7:56 AM Przemysław Sztoch<przemyslaw@sztoch.pl> wrote:\n>>> Please give me feedback on how to properly store the timezone name in the function context structure. I can't finish my work without it.\n>> The way I see it, I don't think you need to store the tz-name in the\n>> function context structure, like you're currently doing. I think you\n>> can remove the additional member from the\n>> generate_series_timestamptz_fctx struct, and refactor your code in\n>> generate_series_timestamptz() to work without it.; you seem to be\n>> using the tzname member almost as a boolean flag, because the actual\n>> value you pass to DFCall3() can be calculated without first storing\n>> anything in the struct.\n> Do I understand correctly that functions that return SET are executed \n> multiple times?\n> Is access to arguments available all the time?\n> I thought PG_GETARG_ could only be used when SRF_IS_FIRSTCALL () is \n> true - was I right or wrong?\nDear Gurjeet,\nI thought a bit after riding the bikes and the code repaired itself. :-)\nThanks for the clarification. Please check if patch v5 is satisfactory \nfor you.\n>> Can you please explain why you chose to remove the provolatile\n>> attribute from the existing entry of date_trunc in pg_proc.dat.\n> I believe it was a mistake in PG code.\n> All timestamptz functions must be STABLE as they depend on the \n> current: SHOW timezone.\n> If new functions are created that pass the zone as a parameter, they \n> become IMMUTABLE.\n> FIrst date_trunc function implementaion was without time zone \n> parameter and someone who\n> added second variant (with timezone as parameter) copied the \n> definition without removing the STABLE flag.\nHave I convinced everyone that this change is right? 
I assume I'm right \nand the mistake will be fatal.\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Mon, 4 Jul 2022 13:08:33 +0200",
    "msg_from": "Przemysław Sztoch <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl> writes:\n> Gurjeet Singh wrote on 01.07.2022 06:35:\n>> Can you please explain why you chose to remove the provolatile\n>> attribute from the existing entry of date_trunc in pg_proc.dat.\n\n> I believe it was a mistake in PG code.\n> All timestamptz functions must be STABLE as they depend on the current: \n> SHOW timezone.\n> If new functions are created that pass the zone as a parameter, they \n> become IMMUTABLE.\n> FIrst date_trunc function implementaion was without time zone parameter \n> and someone who\n> added second variant (with timezone as parameter) copied the definition \n> without removing the STABLE flag.\n\nYeah, I think you are right, and the someone was me :-( (see 600b04d6b).\n\nI think what I was thinking is that timezone definitions do change\nfairly often and maybe we shouldn't risk treating them as immutable.\nHowever, we've not taken that into account in other volatility\nmarkings; for example the timezone() functions that underly AT TIME\nZONE are marked immutable, which is surely wrong if you are worried\nabout zone definitions changing. Given how long that's stood without\ncomplaint, I think marking timestamptz_trunc_zone as immutable\nshould be fine.\n\nHowever, what it shouldn't be is part of this patch. It's worth\npushing it separately to have a record of that decision. I've\nnow done that, so you'll need to rebase to remove that delta.\n\nI looked over the v5 patch very briefly, and have two main\ncomplaints:\n\n* There's no documentation additions. You can't add a user-visible\nfunction without adding an appropriate entry to func.sgml.\n\n* I'm pretty unimpressed with the whole truncate-to-interval thing\nand would recommend you drop it. 
I don't think it's adding much\nuseful functionality beyond what we can already do with the existing\ndate_trunc variants; and the definition seems excessively messy\n(too many restrictions and special cases).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 12 Nov 2022 13:44:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
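Tom's volatility point above can be made concrete: date_trunc('day', timestamptz) depends on the session's TimeZone setting (hence STABLE), while a variant taking an explicit zone is a pure function of its arguments (hence IMMUTABLE). A rough Python analogue of day truncation, using a simple truncate-to-local-midnight helper (illustrative only, not PostgreSQL code):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def trunc_day(ts, tz_name):
    """Truncate an absolute instant to local midnight in tz_name."""
    local = ts.astimezone(ZoneInfo(tz_name))
    return local.replace(hour=0, minute=0, second=0, microsecond=0)

instant = datetime(2022, 6, 14, 23, 30, tzinfo=timezone.utc)

# The same instant falls on different calendar days in different zones,
# so a result that implicitly uses the session zone cannot be immutable.
day_utc = trunc_day(instant, "UTC").day               # 14
day_warsaw = trunc_day(instant, "Europe/Warsaw").day  # 15 (01:30 local)
```

With the zone passed in explicitly, the result is fully determined by the inputs, which is exactly the condition for marking the SQL function IMMUTABLE.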
{
"msg_contents": "On Sat, Nov 12, 2022 at 10:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> However, what it shouldn't be is part of this patch. It's worth\n> pushing it separately to have a record of that decision. I've\n> now done that, so you'll need to rebase to remove that delta.\n>\n> I looked over the v5 patch very briefly, and have two main\n> complaints:\n>\n> * There's no documentation additions. You can't add a user-visible\n> function without adding an appropriate entry to func.sgml.\n>\n> * I'm pretty unimpressed with the whole truncate-to-interval thing\n> and would recommend you drop it. I don't think it's adding much\n> useful functionality beyond what we can already do with the existing\n> date_trunc variants; and the definition seems excessively messy\n> (too many restrictions and special cases).\n\nPlease see attached v6 of the patch.\n\nThe changes since v5 are:\n.) Rebased and resolved conflicts caused by commit 533e02e92.\n.) Removed code and tests related to new date_trunc() functions, as\nsuggested by Tom.\n.) Added 3 more variants to accompany with date_add(tstz, interval, zone).\n date_add(tstz, interval)\n date_subtract(tstz, interval)\n date_subtract(tstz, interval, zone)\n\n.) Eliminate duplication of code; use common function to implement\ngenerate_series_timestamptz[_at_zone]() functions.\n.) Fixed bug where in one of the new code paths,\ngenerate_series_timestamptz_with_zone(), did not perform\nTIMESTAMP_NOT_FINITE() check.\n.) Replaced some DirectFunctionCall?() with direct calls to the\nrelevant *_internal() function; should be better for performance.\n.) Added documentation all 5 functions (2 date_add(), 2\ndate_subtract(), 1 overloaded version of generate_series()).\n\nI'm not sure of the convention around authorship. But since this was\nnot an insignificant amount of work, would this patch be considered as\nco-authored by Przemyslaw and myself? 
Should I add myself to Authors\nfield in the Commitfest app?\n\nHi Przemyslaw,\n\n I started working on this patch based on Tom's review a few days\nago, since you hadn't responded in a while, and I presumed you're not\nworking on this anymore. I should've consulted with/notified you of my\nintent before starting to work on it, to avoid duplication of work.\nSorry if this submission obviates any work you have in progress.\nPlease feel free to provide your feedback on the v6 of the patch.\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Sun, 29 Jan 2023 23:18:53 -0800",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "Gurjeet Singh <gurjeet@singh.im> writes:\n> On Sat, Nov 12, 2022 at 10:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I looked over the v5 patch very briefly, and have two main\n>> complaints:\n>> ...\n\n> Please see attached v6 of the patch.\n\nThanks for updating that!\n\n> I'm not sure of the convention around authorship. But since this was\n> not an insignificant amount of work, would this patch be considered as\n> co-authored by Przemyslaw and myself? Should I add myself to Authors\n> field in the Commitfest app?\n\nWhile I'm not promising to commit this, if I were doing so I would\ncite both of you as authors. So feel free to change the CF entry.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Jan 2023 02:28:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "Gurjeet Singh wrote on 30.01.2023 08:18:\n> On Sat, Nov 12, 2022 at 10:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> However, what it shouldn't be is part of this patch. It's worth\n>> pushing it separately to have a record of that decision. I've\n>> now done that, so you'll need to rebase to remove that delta.\n>>\n>> I looked over the v5 patch very briefly, and have two main\n>> complaints:\n>>\n>> * There's no documentation additions. You can't add a user-visible\n>> function without adding an appropriate entry to func.sgml.\n>>\n>> * I'm pretty unimpressed with the whole truncate-to-interval thing\n>> and would recommend you drop it. I don't think it's adding much\n>> useful functionality beyond what we can already do with the existing\n>> date_trunc variants; and the definition seems excessively messy\n>> (too many restrictions and special cases).\n> Please see attached v6 of the patch.\n>\n> The changes since v5 are:\n> .) Rebased and resolved conflicts caused by commit 533e02e92.\n> .) Removed code and tests related to new date_trunc() functions, as\n> suggested by Tom.\n> .) Added 3 more variants to accompany with date_add(tstz, interval, zone).\n> date_add(tstz, interval)\n> date_subtract(tstz, interval)\n> date_subtract(tstz, interval, zone)\n>\n> .) Eliminate duplication of code; use common function to implement\n> generate_series_timestamptz[_at_zone]() functions.\n> .) Fixed bug where in one of the new code paths,\n> generate_series_timestamptz_with_zone(), did not perform\n> TIMESTAMP_NOT_FINITE() check.\n> .) Replaced some DirectFunctionCall?() with direct calls to the\n> relevant *_internal() function; should be better for performance.\n> .) Added documentation all 5 functions (2 date_add(), 2\n> date_subtract(), 1 overloaded version of generate_series()).\nOther work distracted me from this patch.\nI looked at your update v6 and it looks ok.\nFor me the date_trunc function is important and I still have some corner \ncases. 
Now I will continue working with date_trunc in a separate patch.\n> I'm not sure of the convention around authorship. But since this was\n> not an insignificant amount of work, would this patch be considered as\n> co-authored by Przemyslaw and myself? Should I add myself to Authors\n> field in the Commitfest app?\nI see no obstacles for us to be co-authors.\n> Hi Przemyslaw,\n> I started working on this patch based on Tom's review a few days\n> ago, since you hadn't responded in a while, and I presumed you're not\n> working on this anymore. I should've consulted with/notified you of my\n> intent before starting to work on it, to avoid duplication of work.\n> Sorry if this submission obviates any work you have in progress.\n> Please feel free to provide your feedback on the v6 of the patch.\nI propose to get the approval of the current truncated version of the \npatch. As I wrote above, I will continue work on date_trunc later and as \na separate patch.\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Mon, 30 Jan 2023 13:21:01 +0100",
    "msg_from": "Przemysław Sztoch <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "Gurjeet Singh <gurjeet@singh.im> writes:\n> [ generate_series_with_timezone.v6.patch ]\n\nThe cfbot isn't terribly happy with this. It looks like UBSan\nis detecting some undefined behavior. Possibly an uninitialized\nvariable?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Jan 2023 19:07:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 4:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Gurjeet Singh <gurjeet@singh.im> writes:\n> > [ generate_series_with_timezone.v6.patch ]\n>\n> The cfbot isn't terribly happy with this. It looks like UBSan\n> is detecting some undefined behavior. Possibly an uninitialized\n> variable?\n\nIt was the classical case of out-of-bounds access. I was trying to\naccess 4th argument, even in the case where the 3-argument variant of\ngenerate_series() was called.\n\nPlease see attached v7 of the patch. It now checks PG_NARGS() before\naccessing the optional parameter.\n\nThis mistake would've been caught early if there were assertions\npreventing access beyond the number of arguments passed to the\nfunction. I'll send the assert_enough_args.patch, that adds these\nchecks, in a separate thread to avoid potentially confusing cfbot.\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Mon, 30 Jan 2023 23:50:46 -0800",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
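The bug described above is a general pattern: code that reads an optional trailing argument must first check how many arguments the call actually supplied, which is what the PG_NARGS() guard does in the C fix. A Python sketch of the same guard (the function name and argument order mirror the thread's generate_series(start, stop, step [, timezone]), but are illustrative only):

```python
def generate_series_args(*args):
    """Unpack (start, stop, step[, timezone]); the optional 4th argument
    is read only when the caller actually supplied it."""
    if len(args) not in (3, 4):
        raise TypeError("expected 3 or 4 arguments")
    start, stop, step = args[:3]
    tz = args[3] if len(args) > 3 else None   # the PG_NARGS()-style guard
    return start, stop, step, tz
```

Dropping the length check and always reading the fourth slot reproduces the kind of out-of-bounds access that UBSan flagged in v6 of the patch.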
{
"msg_contents": "On 31/01/2023 08:50, Gurjeet Singh wrote:\n> On Mon, Jan 30, 2023 at 4:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Gurjeet Singh <gurjeet@singh.im> writes:\n>>> [ generate_series_with_timezone.v6.patch ]\n>> The cfbot isn't terribly happy with this. It looks like UBSan\n>> is detecting some undefined behavior. Possibly an uninitialized\n>> variable?\n> It was the classical case of out-of-bounds access. I was trying to\n> access 4th argument, even in the case where the 3-argument variant of\n> generate_series() was called.\n>\n> Please see attached v7 of the patch. It now checks PG_NARGS() before\n> accessing the optional parameter.\n>\n> This mistake would've been caught early if there were assertions\n> preventing access beyond the number of arguments passed to the\n> function. I'll send the assert_enough_args.patch, that adds these\n> checks, in a separate thread to avoid potentially confusing cfbot.\n\nTested this patch on current head.\nThe patch applies, with a few offsets.\n\nFunctionality wise it works as documented, also tried with \n\"America/New_York\" and \"Europe/Berlin\" as time zone.\nThe included tests cover both an entire year (including a new year), and \nalso a DST switch (date_add() for 2021-10-31 in Europe/Warsaw, which is \nthe date the country switches to standard time).\n\nMinor nitpick: the texts use both \"time zone\" and \"timezone\".\n\n\nRegards,\n\n-- \n\t\t\t\tAndreas 'ads' Scherbaum\nGerman PostgreSQL User Group\nEuropean PostgreSQL User Group - Board of Directors\nVolunteer Regional Contact, Germany - PostgreSQL Project\n\n\n\n",
"msg_date": "Sat, 4 Mar 2023 01:28:26 +0100",
"msg_from": "Andreas 'ads' Scherbaum <ads@pgug.de>",
"msg_from_op": false,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
},
{
"msg_contents": "Pushed v7 after making a bunch of cosmetic changes. One gripe\nI had was that rearranging the logic in timestamptz_pl_interval[_internal]\nmade it nearly impossible to see what functional changes you'd made\nthere, while not really buying anything in return. I undid that to\nmake the diff readable.\n\nI did not push the fmgr.h changes. Maybe that is worthwhile (although\nI'd vote against it), but it certainly does not belong in a localized\nfeature patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 18 Mar 2023 14:18:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: generate_series for timestamptz and time zone problem"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nInspired by a question on IRC, I noticed that while the core statement\nlogging system gained the option to log statement parameters in PG 13,\nauto_explain was left out.\n\nHere's a patch that adds a corresponding\nauto_explain.log_parameter_max_length config setting, which controls the\n\"Query Parameters\" node in the logged plan. Just like in core, the\ndefault is -1, which logs the parameters in full, and 0 disables\nparameter logging, while any other value truncates each parameter to\nthat many bytes.\n\n- ilmari",
"msg_date": "Tue, 31 May 2022 21:33:20 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Logging query parmeters in auto_explain"
},
{
"msg_contents": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n\n> Hi hackers,\n>\n> Inspired by a question on IRC, I noticed that while the core statement\n> logging system gained the option to log statement parameters in PG 13,\n> auto_explain was left out.\n>\n> Here's a patch that adds a corresponding\n> auto_explain.log_parameter_max_length config setting, which controls the\n> \"Query Parameters\" node in the logged plan. Just like in core, the\n> default is -1, which logs the parameters in full, and 0 disables\n> parameter logging, while any other value truncates each parameter to\n> that many bytes.\n\nI've added added it to the upcoming commitfest:\n\nhttps://commitfest.postgresql.org/38/3660/\n\n- ilmari\n\n\n",
"msg_date": "Mon, 06 Jun 2022 18:12:02 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Logging query parmeters in auto_explain"
},
{
"msg_contents": "On Tue, May 31, 2022 at 09:33:20PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Here's a patch that adds a corresponding\n> auto_explain.log_parameter_max_length config setting, which controls the\n> \"Query Parameters\" node in the logged plan. Just like in core, the\n> default is -1, which logs the parameters in full, and 0 disables\n> parameter logging, while any other value truncates each parameter to\n> that many bytes.\n\nWith a behavior similar to the in-core log_parameter_max_length, this\nlooks rather sane to me. This is consistent with the assumptions of\nerrdetail_params().\n\n+$node->append_conf('postgresql.conf', \"auto_explain.log_parameter_max_length = -1\");\nNit. You don't need this change in the TAP test, as this is the\ndefault value to log everything.\n--\nMichael",
"msg_date": "Tue, 7 Jun 2022 16:53:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Logging query parmeters in auto_explain"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Tue, May 31, 2022 at 09:33:20PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>> Here's a patch that adds a corresponding\n>> auto_explain.log_parameter_max_length config setting, which controls the\n>> \"Query Parameters\" node in the logged plan. Just like in core, the\n>> default is -1, which logs the parameters in full, and 0 disables\n>> parameter logging, while any other value truncates each parameter to\n>> that many bytes.\n>\n> With a behavior similar to the in-core log_parameter_max_length, this\n> looks rather sane to me. This is consistent with the assumptions of\n> errdetail_params().\n\nThat was the intention, yes.\n\n> +$node->append_conf('postgresql.conf', \"auto_explain.log_parameter_max_length = -1\");\n> Nit. You don't need this change in the TAP test, as this is the\n> default value to log everything.\n\nPoint, fixed in the attached v2. I've also added a test that truncation\nand disabling works.\n\n- ilmari",
"msg_date": "Tue, 07 Jun 2022 12:18:52 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Logging query parmeters in auto_explain"
},
{
"msg_contents": "On Tue, Jun 07, 2022 at 12:18:52PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Point, fixed in the attached v2. I've also added a test that truncation\n> and disabling works.\n\nThe tests are structured so as all the queries are run first, then the\nfull set of logs is slurped and scanned. With things like tests for\ntruncation, it becomes easier to have a given test overlap with\nanother one in terms of pattern matching, so we could silently lose\ncoverage. Wouldn't it be better to reorganize things so we save the\ncurrent size of the log file (as of -s $node->logfile), run one query,\nand then slurp the log file based on the position saved previously?\nThe GUC updates had better be localized in each sub-section of the\ntests, as well. \n--\nMichael",
"msg_date": "Thu, 9 Jun 2022 10:44:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Logging query parmeters in auto_explain"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Tue, Jun 07, 2022 at 12:18:52PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>> Point, fixed in the attached v2. I've also added a test that truncation\n>> and disabling works.\n>\n> The tests are structured so as all the queries are run first, then the\n> full set of logs is slurped and scanned. With things like tests for\n> truncation, it becomes easier to have a given test overlap with\n> another one in terms of pattern matching, so we could silently lose\n> coverage. Wouldn't it be better to reorganize things so we save the\n> current size of the log file (as of -s $node->logfile), run one query,\n> and then slurp the log file based on the position saved previously?\n> The GUC updates had better be localized in each sub-section of the\n> tests, as well. \n\nDone (and more tests added), v3 attached.\n\n- ilmari",
"msg_date": "Thu, 09 Jun 2022 23:55:11 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Logging query parmeters in auto_explain"
},
{
"msg_contents": "On Thu, Jun 09, 2022 at 11:55:11PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Done (and more tests added), v3 attached.\n\nOne thing I am wondering is if we'd better mention errdetail_params()\nat the top of the initial check done in ExplainQueryParameters().\nThat's a minor issue, though.\n\n+sub query_log\n+{\nPerhaps a short description at the top of this routine to explain it\nis worth it? The test still does a set of like() and unlike() after\nrunning each query when the parameter updates are done. One thing I\nwould have played with is to group the set of logs expected or not\nexpected as parameters of the centralized routine, but that would\nreduce the customization of the test names, so at the end the approach\nyou have taken for query_log() looks good to me.\n\n+$node->stop('fast');\nThere is no need for that. The END block of Cluster.pm does that\nalready.\n--\nMichael",
"msg_date": "Tue, 14 Jun 2022 16:50:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Logging query parmeters in auto_explain"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Thu, Jun 09, 2022 at 11:55:11PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>> Done (and more tests added), v3 attached.\n>\n> One thing I am wondering is if we'd better mention errdetail_params()\n> at the top of the initial check done in ExplainQueryParameters().\n> That's a minor issue, though.\n>\n> +sub query_log\n> +{\n> Perhaps a short description at the top of this routine to explain it\n> is worth it?\n\nDone. I also moved the function to the bottom of the file, to avoid\ndistracting from the actual test queries.\n\n> The test still does a set of like() and unlike() after running each\n> query when the parameter updates are done. One thing I would have\n> played with is to group the set of logs expected or not expected as\n> parameters of the centralized routine, but that would reduce the\n> customization of the test names, so at the end the approach you have\n> taken for query_log() looks good to me.\n\nI did consider passing the tests as a data structure to the function,\nbut that would amount to specifying exactly the same things but as a\ndata structure, and then calling the appropriate function by reference,\nwhich just makes things more cluttered.\n\nIf we were using TAP subtests, it might make a sense to have the\nfunction run each set of related tests in a subtest, rather than having\nmultiple subtest calls at the top level, but we don't, so it doesn't.\n\n> +$node->stop('fast');\n> There is no need for that. The END block of Cluster.pm does that\n> already.\n\nAh, I was not aware of that. The call was there in the original version,\nso I had just left it in. Removed.\n\nv4 patch attached.\n\n- ilmari",
"msg_date": "Mon, 27 Jun 2022 12:22:42 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Logging query parmeters in auto_explain"
},
{
"msg_contents": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n>\n>> On Thu, Jun 09, 2022 at 11:55:11PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>>> Done (and more tests added), v3 attached.\n>>\n>> One thing I am wondering is if we'd better mention errdetail_params()\n>> at the top of the initial check done in ExplainQueryParameters().\n>> That's a minor issue, though.\n>>\n>> +sub query_log\n>> +{\n>> Perhaps a short description at the top of this routine to explain it\n>> is worth it?\n>\n> Done. I also moved the function to the bottom of the file, to avoid\n> distracting from the actual test queries.\n\nI forgot to mention, I also changed the order of the query and\nparameters, so that they can more naturally be left out when no changes\nare needed.\n\n- ilmari\n\n\n",
"msg_date": "Mon, 27 Jun 2022 12:27:57 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Logging query parmeters in auto_explain"
},
{
"msg_contents": "On Mon, Jun 27, 2022 at 12:27:57PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> I forgot to mention, I also changed the order of the query and\n> parameters, so that they can more naturally be left out when no changes\n> are needed.\n\nI can see that, and I have added $node as an extra parameter of the\nroutine. After putting my hands on that, it also felt a bit unnatural\nto do the refactoring of the tests and the addition of the new GUC in\nthe same patch, so I have split things as the attached. The amount of\ncoverage is still the same but it makes the integration of the feature\neasier to follow. \n--\nMichael",
"msg_date": "Tue, 28 Jun 2022 16:54:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Logging query parmeters in auto_explain"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Mon, Jun 27, 2022 at 12:27:57PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>> I forgot to mention, I also changed the order of the query and\n>> parameters, so that they can more naturally be left out when no changes\n>> are needed.\n>\n> I can see that, and I have added $node as an extra parameter of the\n> routine. After putting my hands on that, it also felt a bit unnatural\n> to do the refactoring of the tests and the addition of the new GUC in\n> the same patch, so I have split things as the attached. The amount of\n> coverage is still the same but it makes the integration of the feature\n> easier to follow. \n\nThat makes sense, but I still think the query_log() function definition\nshould go at the end (after done_testing()), so the machinery doesn't\ndistract from what's actually being tested.\n\nAlso, the second paragraph of the second commit message now belongs in\nthe first commit (without the word \"Also\").\n\nThanks,\nilmari\n\n\n",
"msg_date": "Tue, 28 Jun 2022 10:12:18 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Logging query parmeters in auto_explain"
},
{
"msg_contents": "On Tue, Jun 28, 2022 at 10:12:18AM +0100, Dagfinn Ilmari Mannsåker wrote:\n> That makes sense, but I still think the query_log() function definition\n> should go at the end (after done_testing()), so the machinery doesn't\n> distract from what's actually being tested.\n\nThe majority of TAP scripts have their subroutines at the beginning of\nthe script, and there are few having that at the end. I won't fight\nyou on that, but the former is more consistent.\n\n> Also, the second paragraph of the second commit message now belongs in\n> the first commit (without the word \"Also\").\n\nYep, will fix. I usually rewrite commit messages before merging them.\n--\nMichael",
"msg_date": "Wed, 29 Jun 2022 09:17:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Logging query parmeters in auto_explain"
},
{
"msg_contents": "On Wed, Jun 29, 2022 at 09:17:49AM +0900, Michael Paquier wrote:\n> The majority of TAP scripts have their subroutines at the beginning of\n> the script, and there are few having that at the end. I won't fight\n> you on that, but the former is more consistent.\n\nI have kept things as I originally intended, and applied 0001 for the\nrefactoring pieces.\n--\nMichael",
"msg_date": "Fri, 1 Jul 2022 09:58:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Logging query parmeters in auto_explain"
},
{
"msg_contents": "On Fri, Jul 01, 2022 at 09:58:52AM +0900, Michael Paquier wrote:\n> I have kept things as I originally intended, and applied 0001 for the\n> refactoring pieces.\n\nAnd done as well with 0002. So we are good for this thread.\n--\nMichael",
"msg_date": "Wed, 6 Jul 2022 10:02:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Logging query parmeters in auto_explain"
},
{
"msg_contents": "On Wed, 6 Jul 2022, at 02:02, Michael Paquier wrote:\n> On Fri, Jul 01, 2022 at 09:58:52AM +0900, Michael Paquier wrote:\n>> I have kept things as I originally intended, and applied 0001 for the\n>> refactoring pieces.\n>\n> And done as well with 0002. So we are good for this thread.\n\nThanks!\n\n- ilmari\n\n\n",
"msg_date": "Wed, 06 Jul 2022 10:57:53 +0100",
"msg_from": "=?UTF-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Logging query parmeters in auto_explain"
}
] |
[
{
"msg_contents": "Apparently 5.36 rejiggers warning classifications in a way that breaks\none of our test cases. Perhaps we should switch it to some other\nwarning-triggering condition.\n\n\t\t\tregards, tom lane\n\n------- Forwarded Message\n\nDate: Wed, 01 Jun 2022 14:08:46 +0000\nFrom: bugzilla@redhat.com\nTo: tgl@sss.pgh.pa.us\nSubject: [Bug 2092426] New: postgresql-14.3-1.fc37: FTBFS with Perl 5.36\n\nhttps://bugzilla.redhat.com/show_bug.cgi?id=2092426\n\n Bug ID: 2092426\n Summary: postgresql-14.3-1.fc37: FTBFS with Perl 5.36\n Product: Fedora\n Version: rawhide\n URL: https://koji.fedoraproject.org/koji/buildinfo?buildID=\n 1974481\n Status: NEW\n Component: postgresql\n Assignee: fjanus@redhat.com\n Reporter: jplesnik@redhat.com\n QA Contact: extras-qa@fedoraproject.org\n CC: anon.amish@gmail.com, devrim@gunduz.org,\n fjanus@redhat.com, hhorak@redhat.com,\n jmlich83@gmail.com, mkulik@redhat.com,\n panovotn@redhat.com, pkubat@redhat.com,\n praiskup@redhat.com, tgl@sss.pgh.pa.us\n Target Milestone: ---\n Classification: Fedora\n\n\n\nI am working on adding Perl 5.36 to Fedora Rawhide/37 (not done yet).\n\nThe rebuild of postgresql failed with this version in side tag f37-perl:\n\n=== make failure: src/pl/plperl/regression.diffs ===\ndiff -U3\n/builddir/build/BUILD/postgresql-14.3/src/pl/plperl/expected/plperl.out\n/builddir/build/BUILD/postgresql-14.3/src/pl/plperl/results/plperl.out\n--- /builddir/build/BUILD/postgresql-14.3/src/pl/plperl/expected/plperl.out \n2022-05-09 21:14:45.000000000 +0000\n+++ /builddir/build/BUILD/postgresql-14.3/src/pl/plperl/results/plperl.out \n2022-06-01 11:23:50.925042793 +0000\n@@ -726,8 +726,6 @@\n -- check that we can \"use warnings\" (in this case to turn a warn into an\nerror)\n -- yields \"ERROR: Useless use of sort in scalar context.\"\n DO $do$ use warnings FATAL => qw(void) ; my @y; my $x = sort @y; 1; $do$\nLANGUAGE plperl;\n-ERROR: Useless use of sort in scalar context at line 1.\n-CONTEXT: PL/Perl anonymous code 
block\n -- make sure functions marked as VOID without an explicit return work\n CREATE OR REPLACE FUNCTION myfuncs() RETURNS void AS $$\n $_SHARED{myquote} = sub {\n\nThe reason of the failure is a change to existing diagnostics[1]:\n\n\"Useless use of sort in scalar context is now in the new scalar category.\n\nWhen sort is used in scalar context, it provokes a warning that doing this is\nnot useful. This warning used to be in the void category. A new category for\nwarnings about scalar context has now been added, called scalar.\"\n\n\nSolution is replacing\nuse warnings FATAL => qw(void)\nby\nuse warnings FATAL => qw(scalar)\nfor this case.\n\n[1]\nhttps://metacpan.org/dist/perl/view/pod/perldelta.pod#Changes-to-Existing-Diagnostics\n\n\n-- \nYou are receiving this mail because:\nYou are on the CC list for the bug.\nhttps://bugzilla.redhat.com/show_bug.cgi?id=2092426\n\n\n------- End of Forwarded Message\n\n\n",
"msg_date": "Wed, 01 Jun 2022 10:22:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "plperl tests fail with latest Perl 5.36"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Apparently 5.36 rejiggers warning classifications in a way that breaks\n> one of our test cases. Perhaps we should switch it to some other\n> warning-triggering condition.\n\nThe simplest thing is to actually use sort in void context,\ni.e. removing the `my $x = ` part from the test, see the attached.\n\nTested on 5.36.0 and 5.8.9.\n\n- ilmari",
"msg_date": "Wed, 01 Jun 2022 16:11:53 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: plperl tests fail with latest Perl 5.36"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> Apparently 5.36 rejiggers warning classifications in a way that breaks\n>> one of our test cases. Perhaps we should switch it to some other\n>> warning-triggering condition.\n\n> The simplest thing is to actually use sort in void context,\n> i.e. removing the `my $x = ` part from the test, see the attached.\n\nLooks reasonable to me, but I'm hardly a Perl monk. Anybody have\na different opinion?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Jun 2022 11:40:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: plperl tests fail with latest Perl 5.36"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 11:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Looks reasonable to me, but I'm hardly a Perl monk. Anybody have\n> a different opinion?\n\nWell, it falsifies the immediately preceding comment, but I think it's\nfine otherwise.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 1 Jun 2022 12:52:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: plperl tests fail with latest Perl 5.36"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jun 1, 2022 at 11:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Looks reasonable to me, but I'm hardly a Perl monk. Anybody have\n>> a different opinion?\n\n> Well, it falsifies the immediately preceding comment, but I think it's\n> fine otherwise.\n\nDuh, right, will fix.\n\nThis seems appropriate to back-patch as far as 9.2, since AFAIK there's\nnot currently anything that breaks plperl in the out-of-support branches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Jun 2022 13:30:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: plperl tests fail with latest Perl 5.36"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI'm seeing a compiler warning in brin.c with an older version of gcc.\nSpecifically, it seems worried that a variable might not be initialized.\nAFAICT there is no real risk, so I've attached a small patch to silence the\nwarning.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 1 Jun 2022 09:35:37 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "silence compiler warning in brin.c"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 9:35 AM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> Hi hackers,\n>\n> I'm seeing a compiler warning in brin.c with an older version of gcc.\n> Specifically, it seems worried that a variable might not be initialized.\n> AFAICT there is no real risk, so I've attached a small patch to silence the\n> warning.\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\nHi,\nIt seems the variable can be initialized to the value of GUCNestLevel since\nlater in the func:\n\n /* Roll back any GUC changes executed by index functions */\n AtEOXact_GUC(false, save_nestlevel);\n\nCheers\n\nOn Wed, Jun 1, 2022 at 9:35 AM Nathan Bossart <nathandbossart@gmail.com> wrote:Hi hackers,\n\nI'm seeing a compiler warning in brin.c with an older version of gcc.\nSpecifically, it seems worried that a variable might not be initialized.\nAFAICT there is no real risk, so I've attached a small patch to silence the\nwarning.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.comHi,It seems the variable can be initialized to the value of GUCNestLevel since later in the func: /* Roll back any GUC changes executed by index functions */ AtEOXact_GUC(false, save_nestlevel);Cheers",
"msg_date": "Wed, 1 Jun 2022 09:46:52 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: silence compiler warning in brin.c"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> On Wed, Jun 1, 2022 at 9:35 AM Nathan Bossart <nathandbossart@gmail.com>\n> wrote:\n>> I'm seeing a compiler warning in brin.c with an older version of gcc.\n>> Specifically, it seems worried that a variable might not be initialized.\n>> AFAICT there is no real risk, so I've attached a small patch to silence the\n>> warning.\n\nYeah, I noticed the other day that a couple of older buildfarm members\n(curculio, gaur) are grousing about this too. We don't really have a\nhard-n-fast rule about how old a compiler needs to be before we stop\nworrying about its notions about uninitialized variables, but these are\nkind of old. Still, since this is the only such warning from these\nanimals, I'm inclined to silence it.\n\n> It seems the variable can be initialized to the value of GUCNestLevel since\n> later in the func:\n> /* Roll back any GUC changes executed by index functions */\n> AtEOXact_GUC(false, save_nestlevel);\n\nThat seems pretty inappropriate. If, thanks to some future thinko,\ncontrol were able to reach the AtEOXact_GUC call despite not having\ncalled NewGUCNestLevel, we'd want that to fail. It looks like\nAtEOXact_GUC asserts nestLevel > 0, so that either 0 or -1 would\ndo as an \"invalid\" value; I'd lean a bit to using 0.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Jun 2022 13:06:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: silence compiler warning in brin.c"
},
{
"msg_contents": "On Wed, Jun 01, 2022 at 01:06:21PM -0400, Tom Lane wrote:\n> Zhihong Yu <zyu@yugabyte.com> writes:\n>> It seems the variable can be initialized to the value of GUCNestLevel since\n>> later in the func:\n>> /* Roll back any GUC changes executed by index functions */\n>> AtEOXact_GUC(false, save_nestlevel);\n> \n> That seems pretty inappropriate. If, thanks to some future thinko,\n> control were able to reach the AtEOXact_GUC call despite not having\n> called NewGUCNestLevel, we'd want that to fail.\n\n+1\n\n> It looks like\n> AtEOXact_GUC asserts nestLevel > 0, so that either 0 or -1 would\n> do as an \"invalid\" value; I'd lean a bit to using 0.\n\nI only chose -1 to follow a117ceb's example in amcheck. I have no\npreference.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 1 Jun 2022 10:38:24 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: silence compiler warning in brin.c"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> Hi,\n\n> if (heapRel == NULL || heapoid != IndexGetRelation(indexoid, false))\n> ereport(ERROR,\n\n> I wonder why the above check is not placed in the else block:\n\n> else\n> heapRel = NULL;\n\nBecause we don't want to throw that error until we've exhausted the\npossibilities for throwing other errors.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Jun 2022 13:55:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: silence compiler warning in brin.c"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 10:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Zhihong Yu <zyu@yugabyte.com> writes:\n> > On Wed, Jun 1, 2022 at 9:35 AM Nathan Bossart <nathandbossart@gmail.com>\n> > wrote:\n> >> I'm seeing a compiler warning in brin.c with an older version of gcc.\n> >> Specifically, it seems worried that a variable might not be initialized.\n> >> AFAICT there is no real risk, so I've attached a small patch to silence\n> the\n> >> warning.\n>\n> Yeah, I noticed the other day that a couple of older buildfarm members\n> (curculio, gaur) are grousing about this too. We don't really have a\n> hard-n-fast rule about how old a compiler needs to be before we stop\n> worrying about its notions about uninitialized variables, but these are\n> kind of old. Still, since this is the only such warning from these\n> animals, I'm inclined to silence it.\n>\n> > It seems the variable can be initialized to the value of GUCNestLevel\n> since\n> > later in the func:\n> > /* Roll back any GUC changes executed by index functions */\n> > AtEOXact_GUC(false, save_nestlevel);\n>\n> That seems pretty inappropriate. If, thanks to some future thinko,\n> control were able to reach the AtEOXact_GUC call despite not having\n> called NewGUCNestLevel, we'd want that to fail. 
It looks like\n> AtEOXact_GUC asserts nestLevel > 0, so that either 0 or -1 would\n> do as an \"invalid\" value; I'd lean a bit to using 0.\n>\n> regards, tom lane\n>\nHi,\n\n if (heapRel == NULL || heapoid != IndexGetRelation(indexoid, false))\n ereport(ERROR,\n\nI wonder why the above check is not placed in the else block:\n\n else\n heapRel = NULL;\n\nbecause heapRel is not modified between the else and the above check.\nIf the check is placed in the else block, we can potentially save the call\nto index_open().\n\nCheers\n\nOn Wed, Jun 1, 2022 at 10:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Zhihong Yu <zyu@yugabyte.com> writes:\n> On Wed, Jun 1, 2022 at 9:35 AM Nathan Bossart <nathandbossart@gmail.com>\n> wrote:\n>> I'm seeing a compiler warning in brin.c with an older version of gcc.\n>> Specifically, it seems worried that a variable might not be initialized.\n>> AFAICT there is no real risk, so I've attached a small patch to silence the\n>> warning.\n\nYeah, I noticed the other day that a couple of older buildfarm members\n(curculio, gaur) are grousing about this too. We don't really have a\nhard-n-fast rule about how old a compiler needs to be before we stop\nworrying about its notions about uninitialized variables, but these are\nkind of old. Still, since this is the only such warning from these\nanimals, I'm inclined to silence it.\n\n> It seems the variable can be initialized to the value of GUCNestLevel since\n> later in the func:\n> /* Roll back any GUC changes executed by index functions */\n> AtEOXact_GUC(false, save_nestlevel);\n\nThat seems pretty inappropriate. If, thanks to some future thinko,\ncontrol were able to reach the AtEOXact_GUC call despite not having\ncalled NewGUCNestLevel, we'd want that to fail. 
It looks like\nAtEOXact_GUC asserts nestLevel > 0, so that either 0 or -1 would\ndo as an \"invalid\" value; I'd lean a bit to using 0.\n\n regards, tom laneHi, if (heapRel == NULL || heapoid != IndexGetRelation(indexoid, false)) ereport(ERROR,I wonder why the above check is not placed in the else block: else heapRel = NULL;because heapRel is not modified between the else and the above check.If the check is placed in the else block, we can potentially save the call to index_open().Cheers",
"msg_date": "Wed, 1 Jun 2022 10:58:20 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: silence compiler warning in brin.c"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Wed, Jun 01, 2022 at 01:06:21PM -0400, Tom Lane wrote:\n>> It looks like\n>> AtEOXact_GUC asserts nestLevel > 0, so that either 0 or -1 would\n>> do as an \"invalid\" value; I'd lean a bit to using 0.\n\n> I only chose -1 to follow a117ceb's example in amcheck. I have no\n> preference.\n\nHmm, if we're following amcheck's example it should be more like this:\n\ndiff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c\nindex 52f171772d..0de1441dc6 100644\n--- a/src/backend/access/brin/brin.c\n+++ b/src/backend/access/brin/brin.c\n@@ -1051,7 +1051,13 @@ brin_summarize_range(PG_FUNCTION_ARGS)\n save_nestlevel = NewGUCNestLevel();\n }\n else\n+ {\n heapRel = NULL;\n+ /* Set these just to suppress \"uninitialized variable\" warnings */\n+ save_userid = InvalidOid;\n+ save_sec_context = -1;\n+ save_nestlevel = -1;\n+ }\n \n indexRel = index_open(indexoid, ShareUpdateExclusiveLock);\n \nI like this better anyway since the fact that the other two variables\naren't warned about seems like an implementation artifact.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Jun 2022 17:08:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: silence compiler warning in brin.c"
},
{
"msg_contents": "On Wed, Jun 01, 2022 at 05:08:03PM -0400, Tom Lane wrote:\n> Hmm, if we're following amcheck's example it should be more like this:\n> \n> diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c\n> index 52f171772d..0de1441dc6 100644\n> --- a/src/backend/access/brin/brin.c\n> +++ b/src/backend/access/brin/brin.c\n> @@ -1051,7 +1051,13 @@ brin_summarize_range(PG_FUNCTION_ARGS)\n> save_nestlevel = NewGUCNestLevel();\n> }\n> else\n> + {\n> heapRel = NULL;\n> + /* Set these just to suppress \"uninitialized variable\" warnings */\n> + save_userid = InvalidOid;\n> + save_sec_context = -1;\n> + save_nestlevel = -1;\n> + }\n> \n> indexRel = index_open(indexoid, ShareUpdateExclusiveLock);\n> \n> I like this better anyway since the fact that the other two variables\n> aren't warned about seems like an implementation artifact.\n\nYeah, that is better. It's not clear why the other variables aren't\nsubject to the same warnings, so we might as well cover our bases.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 1 Jun 2022 14:30:13 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: silence compiler warning in brin.c"
}
] |
[
{
"msg_contents": "Hi,\nFor non superusers, the max connections would be lower than what\nmax_connections\nspecifies.\n\nShould we display the effective value when non superuser issues `SHOW\nmax_connections` ?\n\nThanks\n\nHi,For non superusers, the max connections would be lower than what max_connections specifies.Should we display the effective value when non superuser issues `SHOW max_connections` ?Thanks",
"msg_date": "Wed, 1 Jun 2022 14:22:48 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "showing effective max_connections"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> Hi,\n> For non superusers, the max connections would be lower than what\n> max_connections\n> specifies.\n\n> Should we display the effective value when non superuser issues `SHOW\n> max_connections` ?\n\nThat seems more likely to add confusion than remove it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Jun 2022 17:24:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: showing effective max_connections"
}
] |
[
{
"msg_contents": "Hello!\n\nFound out that test for pg_upgrade (test.sh for 11-14 and \n002_pg_upgrade.pl for 15+) doesn't work from 10th versions to higher \nones due to incompatible options for initdb and default PGDATA permissions.\n\nHere are the patches that may solve this problem.\n\nWould be glad to your comments and concerns.\n\n\nWith best regards,\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 2 Jun 2022 04:22:52 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "[PATCH] Fix pg_upgrade test from v10"
},
{
"msg_contents": "On Thu, Jun 02, 2022 at 04:22:52AM +0300, Anton A. Melnikov wrote:\n> Found out that test for pg_upgrade (test.sh for 11-14 and 002_pg_upgrade.pl\n> for 15+) doesn't work from 10th versions to higher ones due to incompatible\n> options for initdb and default PGDATA permissions.\n\nYeah, there are still TODOs in this stuff. Those tests can also break\neasily depending on the dump you are pushing to the old node when\ndoing cross-version upgrades. Perl makes it a bit easier to reason\nabout improving this area in the future, though, and MSVC is able to\ncatch up on that.\n\n> # To increase coverage of non-standard segment size and group access without\n> # increasing test runtime, run these tests with a custom setting.\n> # --allow-group-access and --wal-segsize have been added in v11.\n> -$oldnode->init(extra => [ '--wal-segsize', '1', '--allow-group-access' ]);\n> +my ($oldverstr) = `$ENV{oldinstall}/bin/pg_ctl --version` =~ /(\\d+\\.\\d+)/;\n> +my ($oldver) = (version->parse(${oldverstr}));\n> +$oldnode->init(extra => [ '--wal-segsize', '1', '--allow-group-access' ])\n> +\t\tif $oldver >= version->parse('11.0');\n> +$oldnode->init()\n> +\t\tif $oldver < version->parse('11.0');\n\nA node's pg_version is assigned via _set_pg_version() when creating it\nusing PostgreSQL::Test::Cluster::new(). In order to make the\ndifference with the set of initdb options to use when setting up the\nold node, it would be simpler to rely on that, no? Version.pm is able\nto handle integer as well as string comparisons for the version\nstrings.\n--\nMichael",
"msg_date": "Thu, 2 Jun 2022 10:37:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix pg_upgrade test from v10"
},
{
"msg_contents": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru> writes:\n> Found out that test for pg_upgrade (test.sh for 11-14 and \n> 002_pg_upgrade.pl for 15+) doesn't work from 10th versions to higher \n> ones due to incompatible options for initdb and default PGDATA permissions.\n\nThe buildfarm animals that test cross-version upgrades are not\nunhappy, so please be more specific about what problem you\nare trying to solve.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Jun 2022 00:36:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix pg_upgrade test from v10"
},
{
"msg_contents": "On Thu, Jun 02, 2022 at 12:36:30AM -0400, Tom Lane wrote:\n> The buildfarm animals that test cross-version upgrades are not\n> unhappy, so please be more specific about what problem you\n> are trying to solve.\n\nAnton is complaining about the case where you try to use the in-core\nupgrade tests with a set of binaries/dump/source tree older that the\ncurrent version tested passed down as environment variables. test.sh\nand the new TAP tests authorize that but they have their limits in\nportability, which is what Anton is proposing to improve here. The\nclient buildfarm does not make use of the in-core facility, as it has\nits own module and logic to check after the case of cross-version\nupgrades (see PGBuild/Modules/TestUpgradeXversion.pm).\n\nMy 2c.\n--\nMichael",
"msg_date": "Thu, 2 Jun 2022 13:48:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix pg_upgrade test from v10"
},
{
"msg_contents": "\nOn 2022-06-01 We 21:37, Michael Paquier wrote:\n> On Thu, Jun 02, 2022 at 04:22:52AM +0300, Anton A. Melnikov wrote:\n>> Found out that test for pg_upgrade (test.sh for 11-14 and 002_pg_upgrade.pl\n>> for 15+) doesn't work from 10th versions to higher ones due to incompatible\n>> options for initdb and default PGDATA permissions.\n> Yeah, there are still TODOs in this stuff. Those tests can also break\n> easily depending on the dump you are pushing to the old node when\n> doing cross-version upgrades. Perl makes it a bit easier to reason\n> about improving this area in the future, though, and MSVC is able to\n> catch up on that.\n>\n>> # To increase coverage of non-standard segment size and group access without\n>> # increasing test runtime, run these tests with a custom setting.\n>> # --allow-group-access and --wal-segsize have been added in v11.\n>> -$oldnode->init(extra => [ '--wal-segsize', '1', '--allow-group-access' ]);\n>> +my ($oldverstr) = `$ENV{oldinstall}/bin/pg_ctl --version` =~ /(\\d+\\.\\d+)/;\n>> +my ($oldver) = (version->parse(${oldverstr}));\n>> +$oldnode->init(extra => [ '--wal-segsize', '1', '--allow-group-access' ])\n>> +\t\tif $oldver >= version->parse('11.0');\n>> +$oldnode->init()\n>> +\t\tif $oldver < version->parse('11.0');\n> A node's pg_version is assigned via _set_pg_version() when creating it\n> using PostgreSQL::Test::Cluster::new(). In order to make the\n> difference with the set of initdb options to use when setting up the\n> old node, it would be simpler to rely on that, no? Version.pm is able\n> to handle integer as well as string comparisons for the version\n> strings.\n\n\nBoth these patches look dubious.\n\n\n1. There is no mention of why there's a change w.r.t. Cygwin and \npermissions checks. Maybe it's ok, but it seems off topic and is\ncertainly not referred to in the patch submission.\n\n2. 
As Michael says, we should not be using perl's version module, we\nshould be using the version object built into each\nPostgreSQL::Test::Cluster instance.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 2 Jun 2022 16:57:12 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix pg_upgrade test from v10"
},
{
"msg_contents": "Hello!\n\nOn 02.06.2022 23:57, Andrew Dunstan wrote:\n\n> \n> 1. There is no mention of why there's a change w.r.t. Cygwin and\n> permissions checks. Maybe it's ok, but it seems off topic and is\n> certainly not referred to in the patch submission.\n> \nThanks for the comments!\nIt was my error to change w.r.t. Cygwin. I've fixed it in the second \nversion of the patch. But change in permissons check is correct. If we \nfix the error with initdb options, we've got the next one while testing \nupgrade from v10:\n\"files in PGDATA with permission != 640\"\nand the test.sh will end immediately.\nThe thing is that the default permissions have changed in v11+ due to \nthis commit: \nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=c37b3d08ca6873f9d4eaf24c72a90a550970cbb8.\nChanges of permissions checks in test.sh fix this error.\n\n > On 2022-06-01 We 21:37, Michael Paquier wrote:\n >> A node's pg_version is assigned via _set_pg_version() when creating it\n >> using PostgreSQL::Test::Cluster::new(). In order to make the\n >> difference with the set of initdb options to use when setting up the\n >> old node, it would be simpler to rely on that, no? Version.pm is able\n >> to handle integer as well as string comparisons for the version\n >> strings.\n >\n> 2. As Michael says, we should not be using perl's version module, we\n> should be using the version object built into each\n> PostgreSQL::Test::Cluster instance.\n> \nSure, very valuable note. Fixed it in the 2nd version of the patch attached.\n\nAlso find that i forgot to adjust initdb keys for new node in v15. So \nthere was an error due to wal-segsize mismatch. Fixed it in the 2nd \nversion too. 
And added patches for other versions.\n\n > The client buildfarm does not make use of the in-core facility, as it \n > has its own module and logic to check after the case of cross-version \n > upgrades (see PGBuild/Modules/TestUpgradeXversion.pm)..\n\nMichael, thanks a lot for your 2c.\n\nWith best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 5 Jun 2022 13:38:01 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix pg_upgrade test from v10"
},
{
"msg_contents": "Would you add this to to the (next) CF ?\n\nIt's silly to say that v9.2 will be supported potentially for a handful more\nyears, but that the upgrade-testing script itself doesn't support that, so\ndevelopers each have to reinvent its fixups.\n\nSee also 20220122183749.GO23027@telsasoft.com, where I proposed some of the\nsame things.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 1 Jul 2022 12:07:27 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix pg_upgrade test from v10"
},
{
"msg_contents": "Hello!\n\nOn 01.07.2022 20:07, Justin Pryzby wrote:\n> Would you add this to to the (next) CF ?\n\nYes, i've put it on september CF.\n\n> It's silly to say that v9.2 will be supported potentially for a handful more\n> years, but that the upgrade-testing script itself doesn't support that, so\n> developers each have to reinvent its fixups.\n\nI've test the attached patch in all variants from v9.5..15 to supported \nversions 10..master. The script test.sh for 9.5->10 and 9.6->10 upgrades \nworks fine without any patch.\nIn 9.4 there is a regress test largeobject to be patched to allow \nupgrade test from this version.So i've stopped at 9.5.\nThis is clear that we limit the destination version for upgrade test to \nthe supported versions only. In our case destination versions\nstarting from the 10th inclusively.\nBut is there are a limit for the source version for upgrade test from?\n\n> See also 20220122183749.GO23027@telsasoft.com, where I proposed some of the\n> same things.\n> \nThanks a lot, i've add some code for 14+ from \nhttps://www.postgresql.org/message-id/flat/20220122183749.GO23027%40telsasoft.com\nto the attached patch.\n\n\nWith best regards,\n\n--\nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 5 Jul 2022 09:01:49 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix pg_upgrade test from v10"
},
{
"msg_contents": "On Tue, Jul 05, 2022 at 09:01:49AM +0300, Anton A. Melnikov wrote:\n> On 01.07.2022 20:07, Justin Pryzby wrote:\n> > It's silly to say that v9.2 will be supported potentially for a handful more\n> > years, but that the upgrade-testing script itself doesn't support that, so\n> > developers each have to reinvent its fixups.\n> \n> I've test the attached patch in all variants from v9.5..15 to supported\n> versions 10..master. The script test.sh for 9.5->10 and 9.6->10 upgrades\n> works fine without any patch.\n> In 9.4 there is a regress test largeobject to be patched to allow upgrade\n> test from this version.So i've stopped at 9.5.\n> This is clear that we limit the destination version for upgrade test to the\n> supported versions only. In our case destination versions\n> starting from the 10th inclusively.\n> But is there are a limit for the source version for upgrade test from?\n\nAs of last year, there's a reasonably clear policy for support of old versions:\n\nhttps://www.postgresql.org/docs/devel/pgupgrade.html\n|pg_upgrade supports upgrades from 9.2.X and later to the current major release of PostgreSQL, including snapshot and beta releases.\n\nSee: e469f0aaf3c586c8390bd65923f97d4b1683cd9f\n\nSo it'd be ideal if this were to support versions down to 9.2.\n\nThis is failing in cfbot:\nhttp://cfbot.cputube.org/anton-melnikov.html\n\n..since it tries to apply all the *.patch files to the master branch, one after\nanother. For branches other than master, I suggest to name the patches *.txt\nor similar. Or, just focus for now on allowing upgrades *to* master. I'm not\nsure if anyone is interested in patching test.sh in backbranches. I'm not\nsure, but there may be more interest to backpatch the conversion to TAP\n(322becb60).\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 5 Jul 2022 14:08:24 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix pg_upgrade test from v10"
},
{
"msg_contents": "On Tue, Jul 05, 2022 at 02:08:24PM -0500, Justin Pryzby wrote:\n> ..since it tries to apply all the *.patch files to the master branch, one after\n> another. For branches other than master, I suggest to name the patches *.txt\n> or similar. Or, just focus for now on allowing upgrades *to* master. I'm not\n> sure if anyone is interested in patching test.sh in backbranches. I'm not\n> sure, but there may be more interest to backpatch the conversion to TAP\n> (322becb60).\n\nI am fine to do something for v15 and HEAD, as TAP makes that slightly\neasier. Now, this patch is far from being complete, as it would still\ngenerate a lot of diffs. Some of them cannot be easily avoided, but\nthere are areas where it is straightforward to do so:\n- pg_dump needs to use --extra-float-digits=0 when dumping from a\nversion strictly older than v12.\n- This does nothing for the headers ond footers of the logical dumps\nthat are version-dependent. One simple thing that can be done here is\nto remove the comments from the logical dumps, something that the\nbuildfarm code already does.\n\nThat's the kind of things I already proposed on this thread, aimed at\nimproving the coverage, and this takes care of more issues than what's\nproposed here:\nhttps://www.postgresql.org/message-id/flat/Yox1ME99GhAemMq1@paquier.xyz\n\nI'll rebase my patch to include fixes for --wal-segsize and\n--allow-group-access when using versions older than v11.\n--\nMichael",
"msg_date": "Wed, 6 Jul 2022 14:58:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix pg_upgrade test from v10"
},
{
"msg_contents": "Hello!\n\nOn 06.07.2022 08:58, Michael Paquier wrote:\n\n> That's the kind of things I already proposed on this thread, aimed at\n> improving the coverage, and this takes care of more issues than what's\n> proposed here:\n> https://www.postgresql.org/message-id/flat/Yox1ME99GhAemMq1(at)paquier(dot)xyz\n> I'll rebase my patch to include fixes for --wal-segsize and\n> --allow-group-access when using versions older than v11.\n> --\n> Michael\n\nThanks!\nI looked at this thread and tried to apply some changes from it in practice.\nAnd found one strange error and describe it in a comment here:\nhttps://www.postgresql.org/message-id/cc7e961a-d5ad-8c6d-574b-478aacc11cf7%40inbox.ru\nIt would be interesting to know if it occures on\nmy PC only or somewhere else.\n\nOn 05.07.2022 22:08, Justin Pryzby wrote:\n> \n> ..since it tries to apply all the *.patch files to the master branch, one after\n> another. For branches other than master, I suggest to name the patches *.txt\n> or similar. Or, just focus for now on allowing upgrades *to* master. I'm not\n> sure if anyone is interested in patching test.sh in backbranches. I'm not\n> sure, but there may be more interest to backpatch the conversion to TAP\n> (322becb60).\n> \n\nYes, the backport idea seems to be interesting. I wrote more about this in a new thread:\nhttps://www.postgresql.org/message-id/e2b1f3a0-4fda-ba72-5535-2d0395b9e68f%40inbox.ru\nas the current topic has nothing to do with the backport of TAP tests.\n\n\nWith best regards,\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 1 Aug 2022 01:04:39 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix pg_upgrade test from v10"
},
{
"msg_contents": "On Mon, Aug 01, 2022 at 01:04:39AM +0300, Anton A. Melnikov wrote:\n> I looked at this thread and tried to apply some changes from it in practice.\n> And found one strange error and describe it in a comment here:\n> https://www.postgresql.org/message-id/cc7e961a-d5ad-8c6d-574b-478aacc11cf7%40inbox.ru\n> It would be interesting to know if it occures on\n> my PC only or somewhere else.\n\nAs mentioned upthread, please note that I'll send my arguments on the\nother thread where I have sent my patch.\n--\nMichael",
"msg_date": "Tue, 2 Aug 2022 19:28:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix pg_upgrade test from v10"
}
] |
[
{
"msg_contents": "forking: <20220302205058.GJ15744@telsasoft.com>: Re: Adding CI to our tree\n\nOn Wed, Mar 02, 2022 at 02:50:58PM -0600, Justin Pryzby wrote:\n> BTW (regarding the last patch), I just noticed that -Og optimization can cause\n> warnings with gcc-4.8.5-39.el7.x86_64.\n> \n> be-fsstubs.c: In function 'be_lo_export':\n> be-fsstubs.c:522:24: warning: 'fd' may be used uninitialized in this function [-Wmaybe-uninitialized]\n> if (CloseTransientFile(fd) != 0)\n> ^\n> trigger.c: In function 'ExecCallTriggerFunc':\n> trigger.c:2400:2: warning: 'result' may be used uninitialized in this function [-Wmaybe-uninitialized]\n> return (HeapTuple) DatumGetPointer(result);\n> ^\n> xml.c: In function 'xml_pstrdup_and_free':\n> xml.c:1205:2: warning: 'result' may be used uninitialized in this function [-Wmaybe-uninitialized]\n> return result;\n\nToday's \"warnings\" thread suggests to me that these are worth fixing - it seems\nreasonable to compile postgres 14 on centos7 (as I sometimes have done), and\nthe patch seems even more reasonable when backpatched to older versions.\n(Also, I wonder if there's any consideration to backpatch cirrus.yaml, which\nuses -Og)\n\nThe buildfarm has old GCC, but they all use -O2, so the warnings are not seen\nthere.\n\nThe patch below applies and fixes warnings back to v13.\n\nIn v13, pl_handler.c has another warning, which suggests to backpatch\n7292fd8f1.\n\nIn v12, there's a disparate separate set of warnings which could be dealt with\nseparately.\n\nv9.3-v11 have no warnings on c7 with -Og.\n\nThomas mentioned [0] that cfbot's linux (which is using gcc 10) gives other\nwarnings since using -Og, which (in addition to being unpleasant to look at) is\nhard to accept, seeing as there's a whole separate task just for\n\"CompilerWarnings\"... 
But I don't know what to do about those.\n\n[0] https://www.postgresql.org/message-id/CA+hUKGK1cF+TMW1cyoujoDAX5FBdoA59C--1HT7yCQGBbq1ddQ@mail.gmail.com\n\ndiff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c\nindex 40441fdb4c..bb64de2843 100644\n--- a/src/backend/commands/trigger.c\n+++ b/src/backend/commands/trigger.c\n@@ -2105,7 +2105,7 @@ ExecCallTriggerFunc(TriggerData *trigdata,\n {\n \tLOCAL_FCINFO(fcinfo, 0);\n \tPgStat_FunctionCallUsage fcusage;\n-\tDatum\t\tresult;\n+\tDatum\t\tresult = 0;\n \tMemoryContext oldContext;\n \n \t/*\ndiff --git a/src/backend/libpq/be-fsstubs.c b/src/backend/libpq/be-fsstubs.c\nindex 63eaccc80a..3e2c094e1e 100644\n--- a/src/backend/libpq/be-fsstubs.c\n+++ b/src/backend/libpq/be-fsstubs.c\n@@ -467,7 +467,7 @@ be_lo_export(PG_FUNCTION_ARGS)\n {\n \tOid\t\t\tlobjId = PG_GETARG_OID(0);\n \ttext\t *filename = PG_GETARG_TEXT_PP(1);\n-\tint\t\t\tfd;\n+\tint\t\t\tfd = -1;\n \tint\t\t\tnbytes,\n \t\t\t\ttmp;\n \tchar\t\tbuf[BUFSIZE];\ndiff --git a/src/backend/utils/adt/xml.c b/src/backend/utils/adt/xml.c\nindex f90a9424d4..7ffbae5a09 100644\n--- a/src/backend/utils/adt/xml.c\n+++ b/src/backend/utils/adt/xml.c\n@@ -1185,7 +1185,7 @@ pg_xmlCharStrndup(const char *str, size_t len)\n static char *\n xml_pstrdup_and_free(xmlChar *str)\n {\n-\tchar\t *result;\n+\tchar\t *result = NULL;\n \n \tif (str)\n \t{\n@@ -1199,8 +1199,6 @@ xml_pstrdup_and_free(xmlChar *str)\n \t\t}\n \t\tPG_END_TRY();\n \t}\n-\telse\n-\t\tresult = NULL;\n \n \treturn result;\n }\n\n\n",
"msg_date": "Wed, 1 Jun 2022 21:42:44 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "compiler warnings with gcc 4.8 and -Og"
},
{
"msg_contents": "On Wed, Jun 01, 2022 at 09:42:44PM -0500, Justin Pryzby wrote:\n> Today's \"warnings\" thread suggests to me that these are worth fixing - it seems\n> reasonable to compile postgres 14 on centos7 (as I sometimes have done), and\n> the patch seems even more reasonable when backpatched to older versions.\n> (Also, I wonder if there's any consideration to backpatch cirrus.yaml, which\n> uses -Og)\n\nhttps://en.wikipedia.org/wiki/CentOS#CentOS_releases tells that centos\n7 will be supported until the end of 2024, so I would fix that.\n\n> The patch below applies and fixes warnings back to v13.\n\nI don't mind fixing what you have here, as a first step. All those\ncases are telling us that the compiler does not see PG_TRY() as\nsomething is can rely on to set up each variable. 7292fd8 is\ncomplaining about the same point, actually, aka setjmp clobberring the\nvariable, isn't it? So wouldn't it be better to initialize them, as\nyour patch does, but also mark them as volatile? In short, what\nhappens with -Wclobber and a non-optimized compilation?\n--\nMichael",
"msg_date": "Thu, 2 Jun 2022 13:24:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings with gcc 4.8 and -Og"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> forking: <20220302205058.GJ15744@telsasoft.com>: Re: Adding CI to our tree\n> On Wed, Mar 02, 2022 at 02:50:58PM -0600, Justin Pryzby wrote:\n>> BTW (regarding the last patch), I just noticed that -Og optimization can cause\n>> warnings with gcc-4.8.5-39.el7.x86_64.\n\nI'm a little dubious about whether -Og is a case we should pay special\nattention to? Our standard optimization setting for gcc is -O2, and\nonce you go away from that there are any number of weird cases that\nmay or may not produce warnings. I'm not entirely willing to buy\nthe proposition that we must suppress warnings on\nany-random-gcc-version combined with any-random-options.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Jun 2022 01:09:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings with gcc 4.8 and -Og"
},
{
"msg_contents": "> On 2 Jun 2022, at 07:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I'm a little dubious about whether -Og is a case we should pay special\n> attention to? Our standard optimization setting for gcc is -O2, and\n> once you go away from that there are any number of weird cases that\n> may or may not produce warnings.\n\nI think we should pick one level to keep warning free, and stick to that. In\nlight of that, -O2 seems a lot more appealing than -Og.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 2 Jun 2022 15:32:10 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings with gcc 4.8 and -Og"
},
{
"msg_contents": "On Thu, 2 Jun 2022, 07:10 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n>\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > forking: <20220302205058.GJ15744@telsasoft.com>: Re: Adding CI to our tree\n> > On Wed, Mar 02, 2022 at 02:50:58PM -0600, Justin Pryzby wrote:\n> >> BTW (regarding the last patch), I just noticed that -Og optimization can cause\n> >> warnings with gcc-4.8.5-39.el7.x86_64.\n>\n> I'm a little dubious about whether -Og is a case we should pay special\n> attention to? Our standard optimization setting for gcc is -O2, and\n> once you go away from that there are any number of weird cases that\n> may or may not produce warnings. I'm not entirely willing to buy\n> the proposition that we must suppress warnings on\n> any-random-gcc-version combined with any-random-options.\n\nThe \"Developer FAQ\" page on the wiki suggests that when you develop\nwith gcc that you use CFLAGS=\"-ggdb -Og -g3 -fno-omit-frame-pointer\"\nduring development, so I'd hardly call -Og \"any random option\".\n\n-Matthias\n\n\n",
"msg_date": "Thu, 2 Jun 2022 16:27:25 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings with gcc 4.8 and -Og"
},
{
"msg_contents": "Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> On Thu, 2 Jun 2022, 07:10 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n>> I'm a little dubious about whether -Og is a case we should pay special\n>> attention to?\n\n> The \"Developer FAQ\" page on the wiki suggests that when you develop\n> with gcc that you use CFLAGS=\"-ggdb -Og -g3 -fno-omit-frame-pointer\"\n> during development, so I'd hardly call -Og \"any random option\".\n\nI have no idea who wrote that FAQ entry, and I'd certainly not\naccept it as being project policy. I'd actually say that's an\nexcellent example of adding some random compiler options.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Jun 2022 10:33:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings with gcc 4.8 and -Og"
},
{
"msg_contents": "Hi,\n\nOn 2022-06-02 13:24:28 +0900, Michael Paquier wrote:\n> On Wed, Jun 01, 2022 at 09:42:44PM -0500, Justin Pryzby wrote:\n> > Today's \"warnings\" thread suggests to me that these are worth fixing - it seems\n> > reasonable to compile postgres 14 on centos7 (as I sometimes have done), and\n> > the patch seems even more reasonable when backpatched to older versions.\n> > (Also, I wonder if there's any consideration to backpatch cirrus.yaml, which\n> > uses -Og)\n> \n> https://en.wikipedia.org/wiki/CentOS#CentOS_releases tells that centos\n> 7 will be supported until the end of 2024, so I would fix that.\n\nTo me fixing gcc 4.8 warnings feels like a fools errand, unless they're\nverbose enough to make compilation exceedingly verbose (e.g. warnings in a\nheader).\n\n\n> > The patch below applies and fixes warnings back to v13.\n> \n> I don't mind fixing what you have here, as a first step. All those\n> cases are telling us that the compiler does not see PG_TRY() as\n> something is can rely on to set up each variable. 7292fd8 is\n> complaining about the same point, actually, aka setjmp clobberring the\n> variable, isn't it? So wouldn't it be better to initialize them, as\n> your patch does, but also mark them as volatile? In short, what\n> happens with -Wclobber and a non-optimized compilation?\n\nFWIW, I found -Wclobber to be so buggy as to be pointless.\n\nI don't think it needs be volatile because it'll not be accessed if we error\nout? At least in the first instances in Justin's patch.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 Jun 2022 08:01:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings with gcc 4.8 and -Og"
},
{
"msg_contents": "Hi,\n\nOn 2022-06-02 10:33:52 -0400, Tom Lane wrote:\n> Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> > On Thu, 2 Jun 2022, 07:10 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n> >> I'm a little dubious about whether -Og is a case we should pay special\n> >> attention to?\n> \n> > The \"Developer FAQ\" page on the wiki suggests that when you develop\n> > with gcc that you use CFLAGS=\"-ggdb -Og -g3 -fno-omit-frame-pointer\"\n> > during development, so I'd hardly call -Og \"any random option\".\n> \n> I have no idea who wrote that FAQ entry, and I'd certainly not\n> accept it as being project policy.\n\nI don't know either. However:\n\n> I'd actually say that's an excellent example of adding some random compiler\n> options.\n\nTo me they mostly make sense. -g3 with -ggdb makes gcc emit enough information\nabout macros that the debugger can interpret them. -fno-omit-frame-pointer\nmakes profiling with call graphs much much smaller.\n\nI tried to use -Og many times, but in the end mostly gave up, because it still\nmakes debugging harder compared to -O0.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 Jun 2022 08:04:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings with gcc 4.8 and -Og"
},
{
"msg_contents": "Hi,\n\nOn 2022-06-02 01:09:58 -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > forking: <20220302205058.GJ15744@telsasoft.com>: Re: Adding CI to our tree\n> > On Wed, Mar 02, 2022 at 02:50:58PM -0600, Justin Pryzby wrote:\n> >> BTW (regarding the last patch), I just noticed that -Og optimization can cause\n> >> warnings with gcc-4.8.5-39.el7.x86_64.\n> \n> I'm a little dubious about whether -Og is a case we should pay special\n> attention to? Our standard optimization setting for gcc is -O2, and\n> once you go away from that there are any number of weird cases that\n> may or may not produce warnings. I'm not entirely willing to buy\n> the proposition that we must suppress warnings on\n> any-random-gcc-version combined with any-random-options.\n\nI think it'd be useful to have -Og in a usable state, despite my nearby\ngriping about it. It makes our tests use noticably fewer CPU cycles, and\ndebugging is less annoying than with -O2. It's also faster to compile.\n\nHowever, making that effort for compiler versions for a compiler that went out\nof support in 2015 doesn't seem useful. It may be useful to pay some attention\nto not producint too many warnings on LTS distribution compilers when\ncompiling with production oriented flags, but nobody should develop on them.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 Jun 2022 08:13:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings with gcc 4.8 and -Og"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I tried to use -Og many times, but in the end mostly gave up, because it still\n> makes debugging harder compared to -O0.\n\nYeah. My own habit is to build with -O2 normally. If I'm trying to\ndebug some bit of code and find that I can't follow things adequately\nin gdb, I recompile just the relevant file(s) with -O0.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Jun 2022 12:26:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings with gcc 4.8 and -Og"
}
] |
[
{
"msg_contents": "Pretty trivial since this is documenting something that Postgres\n*doesn't* do, but it incorrectly reversed only the bits of each\nnibble, not the whole byte. See e.g.\nhttps://www.ibm.com/docs/en/csfdcd/7.1?topic=ls-bit-ordering-in-mac-addresses\nfor a handy table.",
"msg_date": "Wed, 1 Jun 2022 23:34:19 -0700",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] fix doc example of bit-reversed MAC address"
},
{
"msg_contents": "Will Mortensen <will@extrahop.com> writes:\n> Pretty trivial since this is documenting something that Postgres\n> *doesn't* do, but it incorrectly reversed only the bits of each\n> nibble, not the whole byte.\n\nDuh, right. Will fix, thanks for noticing!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jun 2022 11:37:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix doc example of bit-reversed MAC address"
}
] |
[
{
"msg_contents": "Hello,\n\nAttached is a small patch to add a description to the meta commands \nregarding\nlarge objects.\n\n\nthe actual description when using psql --help=commands is :\n\nLarge Objects\n \\lo_export LOBOID FILE\n \\lo_import FILE [COMMENT]\n \\lo_list\n \\lo_unlink LOBOID large object operations\n\nthe proposed description is :\n\nLarge Objects\n \\lo_export LOBOID FILE export large object to a file\n \\lo_import FILE [COMMENT] import large object from a file\n \\lo_list list large objects\n \\lo_unlink LOBOID delete a large object\n\n\nI tried to make an alignment on the description of other meta-commands.\n\nThanks.\nRegards.\n--\nThibaud W.",
"msg_date": "Thu, 2 Jun 2022 11:12:46 +0200",
"msg_from": "\"Thibaud W.\" <thibaud.walkowiak@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Proposal: adding a better description in psql command about large\n objects"
},
{
"msg_contents": "On Thu, Jun 02, 2022 at 11:12:46AM +0200, Thibaud W. wrote:\n> Attached is a small patch to add a description to the meta commands\n> regarding\n> large objects.\n\nThis seems reasonable to me. Your patch wasn't applying for some reason,\nso I created a new one with a commit message and some small adjustments.\nWhat do you think?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 2 Jun 2022 14:46:25 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: adding a better description in psql command about\n large objects"
},
{
"msg_contents": "On 6/2/22 23:46, Nathan Bossart wrote:\n> On Thu, Jun 02, 2022 at 11:12:46AM +0200, Thibaud W. wrote:\n>> Attached is a small patch to add a description to the meta commands\n>> regarding\n>> large objects.\n> This seems reasonable to me. Your patch wasn't applying for some reason,\n> so I created a new one with a commit message and some small adjustments.\n> What do you think?\nThanks for reading and fixing.\n\nIn fact the original tabs were missing in the first file.\nIn version v2, it seems interesting to keep calls to the fprintf \nfunction for translation. I attached a new file.\n\nThanks.\nRegards.\n-- \nThibaud W.",
"msg_date": "Fri, 3 Jun 2022 10:12:30 +0200",
"msg_from": "\"Thibaud W.\" <thibaud.walkowiak@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: adding a better description in psql command about large\n objects"
},
{
"msg_contents": "On Fri, Jun 03, 2022 at 10:12:30AM +0200, Thibaud W. wrote:\n> In fact the original tabs were missing in the first file.\n> In version v2, it seems interesting to keep calls to the fprintf function\n> for translation. I attached a new file.\n\nYes, it looks like the precedent is to have an fprintf() per command. I\nstill think the indentation needs some adjustment for readability. In the\nattached, I've lined up all the large object commands. This is offset from\nmost other commands, but IMO this is far easier to read, and something\nsimilar was done for the operator class/family commands. Thoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 3 Jun 2022 07:39:21 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: adding a better description in psql command about\n large objects"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> Yes, it looks like the precedent is to have an fprintf() per command. I\n> still think the indentation needs some adjustment for readability. In the\n> attached, I've lined up all the large object commands. This is offset from\n> most other commands, but IMO this is far easier to read, and something\n> similar was done for the operator class/family commands. Thoughts?\n\nGenerally +1 here. The other style that is used in some places is to\nput the description on a separate line, but given that we're setting\nthe indent for a whole command group I think this looks better.\n\nA couple of other random thoughts:\n\n* How about \"write large object to file\" and \"read large object from\nfile\"? As it stands, if you are not totally sure which direction is\nexport and which is import, this description teaches you little.\n\n* While we're here, it seems like this whole group was placed at the\nend because of add-it-to-the-end-itis, not because that was the\nmost logical place for it. The other commands that interact with\nthe server are mostly further up. My first thought is to move it\nto just after the \"Informational\" group, but I'm not especially\nset on that. Making it not-last might make it harder to get away\nwith the inconsistent indentation, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jun 2022 11:12:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: adding a better description in psql command about large\n objects"
},
{
"msg_contents": "On Fri, Jun 03, 2022 at 11:12:11AM -0400, Tom Lane wrote:\n> * How about \"write large object to file\" and \"read large object from\n> file\"? As it stands, if you are not totally sure which direction is\n> export and which is import, this description teaches you little.\n\n+1\n\n> * While we're here, it seems like this whole group was placed at the\n> end because of add-it-to-the-end-itis, not because that was the\n> most logical place for it. The other commands that interact with\n> the server are mostly further up. My first thought is to move it\n> to just after the \"Informational\" group, but I'm not especially\n> set on that. Making it not-last might make it harder to get away\n> with the inconsistent indentation, though.\n\nAnother option could be to move it after the \"Input/Output\" section so that\nit's closer to some other commands that involve files. I can't say I have\na strong opinion about whether/where to move it, though.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 3 Jun 2022 08:23:19 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: adding a better description in psql command about\n large objects"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Fri, Jun 03, 2022 at 11:12:11AM -0400, Tom Lane wrote:\n>> * While we're here, it seems like this whole group was placed at the\n>> end because of add-it-to-the-end-itis, not because that was the\n>> most logical place for it. The other commands that interact with\n>> the server are mostly further up. My first thought is to move it\n>> to just after the \"Informational\" group, but I'm not especially\n>> set on that. Making it not-last might make it harder to get away\n>> with the inconsistent indentation, though.\n\n> Another option could be to move it after the \"Input/Output\" section so that\n> it's closer to some other commands that involve files. I can't say I have\n> a strong opinion about whether/where to move it, though.\n\nYeah, I thought of that choice too, but it ends up placing the\nLarge Objects section higher up the list than seems warranted on\nfrequency-of-use grounds.\n\nAfter looking at the output I concluded that we'd be better off to\nstick with the normal indentation amount, and break the lo_import\nentry into two lines to make that work. One reason for this is\nthat some translators might've already settled on a different\nindentation amount in order to cope with translated parameter names,\nand deviating from the normal here will just complicate their lives.\nSo that leaves me proposing v5.\n\n(I also fixed the out-of-date line count in helpVariables.)\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 03 Jun 2022 12:56:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: adding a better description in psql command about large\n objects"
},
{
"msg_contents": "On Fri, Jun 03, 2022 at 12:56:20PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> Another option could be to move it after the \"Input/Output\" section so that\n>> it's closer to some other commands that involve files. I can't say I have\n>> a strong opinion about whether/where to move it, though.\n> \n> Yeah, I thought of that choice too, but it ends up placing the\n> Large Objects section higher up the list than seems warranted on\n> frequency-of-use grounds.\n\nFair point.\n\n> After looking at the output I concluded that we'd be better off to\n> stick with the normal indentation amount, and break the lo_import\n> entry into two lines to make that work. One reason for this is\n> that some translators might've already settled on a different\n> indentation amount in order to cope with translated parameter names,\n> and deviating from the normal here will just complicate their lives.\n> So that leaves me proposing v5.\n\nI see. As you noted earlier, moving the entries higher makes the\ninconsistent indentation less appealing, too. So this LGTM.\n\n> (I also fixed the out-of-date line count in helpVariables.)\n\nYeah, it looks like 7844c99 missed this.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 3 Jun 2022 10:29:11 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: adding a better description in psql command about\n large objects"
},
{
"msg_contents": "Le ven. 3 juin 2022 à 19:29, Nathan Bossart <nathandbossart@gmail.com> a\nécrit :\n\n> On Fri, Jun 03, 2022 at 12:56:20PM -0400, Tom Lane wrote:\n> > Nathan Bossart <nathandbossart@gmail.com> writes:\n> >> Another option could be to move it after the \"Input/Output\" section so\n> that\n> >> it's closer to some other commands that involve files. I can't say I\n> have\n> >> a strong opinion about whether/where to move it, though.\n> >\n> > Yeah, I thought of that choice too, but it ends up placing the\n> > Large Objects section higher up the list than seems warranted on\n> > frequency-of-use grounds.\n>\n> Fair point.\n>\n> > After looking at the output I concluded that we'd be better off to\n> > stick with the normal indentation amount, and break the lo_import\n> > entry into two lines to make that work. One reason for this is\n> > that some translators might've already settled on a different\n> > indentation amount in order to cope with translated parameter names,\n> > and deviating from the normal here will just complicate their lives.\n> > So that leaves me proposing v5.\n>\n> I see. As you noted earlier, moving the entries higher makes the\n> inconsistent indentation less appealing, too. So this LGTM.\n>\n>\nSounds good to me too.\n\nThanks.\n\n\n-- \nGuillaume.\n\nLe ven. 3 juin 2022 à 19:29, Nathan Bossart <nathandbossart@gmail.com> a écrit :On Fri, Jun 03, 2022 at 12:56:20PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> Another option could be to move it after the \"Input/Output\" section so that\n>> it's closer to some other commands that involve files. 
I can't say I have\n>> a strong opinion about whether/where to move it, though.\n> \n> Yeah, I thought of that choice too, but it ends up placing the\n> Large Objects section higher up the list than seems warranted on\n> frequency-of-use grounds.\n\nFair point.\n\n> After looking at the output I concluded that we'd be better off to\n> stick with the normal indentation amount, and break the lo_import\n> entry into two lines to make that work. One reason for this is\n> that some translators might've already settled on a different\n> indentation amount in order to cope with translated parameter names,\n> and deviating from the normal here will just complicate their lives.\n> So that leaves me proposing v5.\n\nI see. As you noted earlier, moving the entries higher makes the\ninconsistent indentation less appealing, too. So this LGTM.\nSounds good to me too.Thanks.-- Guillaume.",
"msg_date": "Sun, 5 Jun 2022 09:03:47 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: adding a better description in psql command about large\n objects"
},
{
"msg_contents": "On 6/3/22 19:29, Nathan Bossart wrote:\n> On Fri, Jun 03, 2022 at 12:56:20PM -0400, Tom Lane wrote:\n>> Nathan Bossart <nathandbossart@gmail.com> writes:\n>>> Another option could be to move it after the \"Input/Output\" section so that\n>>> it's closer to some other commands that involve files. I can't say I have\n>>> a strong opinion about whether/where to move it, though.\n>> Yeah, I thought of that choice too, but it ends up placing the\n>> Large Objects section higher up the list than seems warranted on\n>> frequency-of-use grounds.\n> Fair point.\n>\n>> After looking at the output I concluded that we'd be better off to\n>> stick with the normal indentation amount, and break the lo_import\n>> entry into two lines to make that work. One reason for this is\n>> that some translators might've already settled on a different\n>> indentation amount in order to cope with translated parameter names,\n>> and deviating from the normal here will just complicate their lives.\n>> So that leaves me proposing v5.\n> I see. As you noted earlier, moving the entries higher makes the\n> inconsistent indentation less appealing, too. So this LGTM.\n>\n>> (I also fixed the out-of-date line count in helpVariables.)\n> Yeah, it looks like 7844c99 missed this.\nThanks, output is more readable this way.\n\nBest regards.\n-- \nThibaud W.\n\n\n",
"msg_date": "Tue, 7 Jun 2022 11:49:21 +0200",
"msg_from": "\"Thibaud W.\" <thibaud.walkowiak@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: adding a better description in psql command about large\n objects"
}
] |
[
{
"msg_contents": "Hello,\n\nI was using an object access hook for oat_post_create access while creating\nan extension and expected that I would be able to query for the newly\ncreated extension with get_extension_oid(), but it was returning\nInvalidOid. However, the same process works for triggers, so I was\nwondering what the expected behavior is?\n From the documentation in objectaccess.h, it doesn't mention anything about\nCommandCounterIncrement() for POST_CREATE which implied to me that it\nwasn't something I would need to worry about.\nOne option I thought of was this patch where CCI is called before the\naccess hook so that the new tuple is visible in the hook. Another option\nwould be to revise the documentation to reflect the expected behavior.\n\nThanks,\n\nMary Xu",
"msg_date": "Thu, 2 Jun 2022 15:37:01 -0700",
"msg_from": "Mary Xu <yxu2162@gmail.com>",
"msg_from_op": true,
"msg_subject": "oat_post_create expected behavior"
},
{
"msg_contents": "On Thu, Jun 2, 2022 at 6:37 PM Mary Xu <yxu2162@gmail.com> wrote:\n> I was using an object access hook for oat_post_create access while creating an extension and expected that I would be able to query for the newly created extension with get_extension_oid(), but it was returning InvalidOid. However, the same process works for triggers, so I was wondering what the expected behavior is?\n> From the documentation in objectaccess.h, it doesn't mention anything about CommandCounterIncrement() for POST_CREATE which implied to me that it wasn't something I would need to worry about.\n> One option I thought of was this patch where CCI is called before the access hook so that the new tuple is visible in the hook. Another option would be to revise the documentation to reflect the expected behavior.\n\nI don't think a proposal to add CommandCounterIncrement() calls just\nfor the convenience of object access hooks has much chance of being\naccepted. Possibly there is some work that could be done to ensure\nconsistent placement of the calls to post-create hooks so that either\nall of them happen before, or all of them happen after, a CCI has\noccurred, but I'm not sure whether or not that is feasible. Currently,\nI don't think we promise anything about whether a post-create hook\ncall will occur before or after a CCI, which is why\nsepgsql_schema_post_create(), sepgsql_schema_post_create(), and\nsepgsql_attribute_post_create() perform a catalog scan using\nSnapshotSelf, while sepgsql_database_post_create() uses\nget_database_oid(). You might want to adopt a similar technique.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 6 Jun 2022 10:51:12 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: oat_post_create expected behavior"
},
{
"msg_contents": "On Mon, 2022-06-06 at 10:51 -0400, Robert Haas wrote:\n> I don't think a proposal to add CommandCounterIncrement() calls just\n> for the convenience of object access hooks has much chance of being\n> accepted.\n\nOut of curiosity, why not? The proposed patch only runs it if the\nobject access hook is set. Do you see a situation where it would be\nconfusing that an earlier DDL change is visible? And if so, would it\nmake more sense to call CCI unconditionally?\n\nAlso, would it ever be reasonable for such a hook to call CCI itself?\nAs you say, it could use SnapshotSelf, but sometimes you might want to\ncall routines that assume they can use an MVCC snapshot. This question\napplies to the OAT_POST_ALTER hook as well as OAT_POST_CREATE.\n\n> Possibly there is some work that could be done to ensure\n> consistent placement of the calls to post-create hooks so that either\n> all of them happen before, or all of them happen after, a CCI has\n> occurred, but I'm not sure whether or not that is feasible. \n\nI like the idea of having a test in place so that we at least know when\nsomething changes. Otherwise it would be pretty easy to break an\nextension by adjusting some code.\n\n> Currently,\n> I don't think we promise anything about whether a post-create hook\n> call will occur before or after a CCI, which is why\n> sepgsql_schema_post_create(), sepgsql_schema_post_create(), and\n> sepgsql_attribute_post_create() perform a catalog scan using\n> SnapshotSelf, while sepgsql_database_post_create() uses\n> get_database_oid(). 
You might want to adopt a similar technique.\n\nIt would be good to document this a little better though:\n\n * OAT_POST_CREATE should be invoked just after the object is created.\n * Typically, this is done after inserting the primary catalog records\nand\n * associated dependencies.\n\ndoesn't really give any guidance, while the comment for alter does:\n\n * OAT_POST_ALTER should be invoked just after the object is altered,\n * but before the command counter is incremented. An extension using\nthe\n * hook can use a current MVCC snapshot to get the old version of the\ntuple,\n * and can use SnapshotSelf to get the new version of the tuple.\n\n\nRegards,\n\tJeff Davis\n\n\nPS: I added this to the July CF: \nhttps://commitfest.postgresql.org/38/3676/\n\n\n\n",
"msg_date": "Mon, 06 Jun 2022 10:34:58 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: oat_post_create expected behavior"
},
{
"msg_contents": "On Mon, Jun 6, 2022 at 1:35 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Out of curiosity, why not? The proposed patch only runs it if the\n> object access hook is set. Do you see a situation where it would be\n> confusing that an earlier DDL change is visible? And if so, would it\n> make more sense to call CCI unconditionally?\n\nWell, I think that a fair amount of work has been done previously to\ncut down on unnecessary CCIs. I suspect Tom in particular is likely to\nobject to adding a whole bunch more of them, and I think that\nobjection would have some merit.\n\nI definitely think if we were going to do it, it would need to be\nunconditional. Otherwise I think we'll end up with bugs, because\nnobody's going to go test all of that code with and without an object\naccess hook installed every time they tweak some DDL-related code.\n\n> Also, would it ever be reasonable for such a hook to call CCI itself?\n> As you say, it could use SnapshotSelf, but sometimes you might want to\n> call routines that assume they can use an MVCC snapshot. This question\n> applies to the OAT_POST_ALTER hook as well as OAT_POST_CREATE.\n\nI definitely wouldn't want to warrant that it doesn't break anything.\nI think the extension can do it at its own risk, but I wouldn't\nrecommend it.\n\nOAT_POST_ALTER is unlike OAT_POST_CREATE in that OAT_POST_ALTER\ndocuments that it should be called after a CCI, whereas\nOAT_POST_CREATE does not make a representation either way.\n\n> > Possibly there is some work that could be done to ensure\n> > consistent placement of the calls to post-create hooks so that either\n> > all of them happen before, or all of them happen after, a CCI has\n> > occurred, but I'm not sure whether or not that is feasible.\n>\n> I like the idea of having a test in place so that we at least know when\n> something changes. Otherwise it would be pretty easy to break an\n> extension by adjusting some code.\n\nSure. 
I find writing meaningful tests for this kind of stuff hard, but\nthere are plenty of people around here who are better at figuring out\nhow to test obscure scenarios than I.\n\n> > Currently,\n> > I don't think we promise anything about whether a post-create hook\n> > call will occur before or after a CCI, which is why\n> > sepgsql_schema_post_create(), sepgsql_schema_post_create(), and\n> > sepgsql_attribute_post_create() perform a catalog scan using\n> > SnapshotSelf, while sepgsql_database_post_create() uses\n> > get_database_oid(). You might want to adopt a similar technique.\n>\n> It would be good to document this a little better though:\n>\n> * OAT_POST_CREATE should be invoked just after the object is created.\n> * Typically, this is done after inserting the primary catalog records\n> and\n> * associated dependencies.\n>\n> doesn't really give any guidance, while the comment for alter does:\n>\n> * OAT_POST_ALTER should be invoked just after the object is altered,\n> * but before the command counter is incremented. An extension using\n> the\n> * hook can use a current MVCC snapshot to get the old version of the\n> tuple,\n> * and can use SnapshotSelf to get the new version of the tuple.\n\nYeah, that comment could be made more clear.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 6 Jun 2022 13:43:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: oat_post_create expected behavior"
},
{
"msg_contents": "On Mon, 2022-06-06 at 13:43 -0400, Robert Haas wrote:\n> Yeah, that comment could be made more clear.\n\nI still don't understand what the rule is.\n\nIs the rule that OAT_POST_CREATE must always use SnapshotSelf for any\ncatalog access? And if so, do we need to update code in contrib\nextensions to follow that rule?\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Mon, 06 Jun 2022 12:46:25 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: oat_post_create expected behavior"
},
{
"msg_contents": "On Mon, Jun 6, 2022 at 3:46 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Mon, 2022-06-06 at 13:43 -0400, Robert Haas wrote:\n> > Yeah, that comment could be made more clear.\n>\n> I still don't understand what the rule is.\n>\n> Is the rule that OAT_POST_CREATE must always use SnapshotSelf for any\n> catalog access? And if so, do we need to update code in contrib\n> extensions to follow that rule?\n\nI don't think there is a rule in the sense that you want there to be\none. We sometimes call the object access hook before the CCI, and\nsometimes after, and the sepgsql code knows which cases are handled\nwhich ways and proceeds differently on that basis. If we went and\nchanged the sepgsql code that uses system catalog lookups to use\nSnapshotSelf instead, I think it would still work, but it would be\nless efficient, so that doesn't seem like a desirable change to me. If\nit's possible to make the hook placement always happen after a CCI,\nthen we could change the sepgsql code to always use catalog lookups,\nwhich would probably be more efficient but likely require adding some\nCCI calls, which may attract objections from Tom --- or maybe it\nwon't. Absent either of those things, I'm inclined to just make the\ncomment clearly state that we're not consistent about it. That's not\ngreat, but it may be the best we can do.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 6 Jun 2022 15:55:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: oat_post_create expected behavior"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jun 6, 2022 at 1:35 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>> Out of curiosity, why not? The proposed patch only runs it if the\n>> object access hook is set. Do you see a situation where it would be\n>> confusing that an earlier DDL change is visible? And if so, would it\n>> make more sense to call CCI unconditionally?\n\n> Well, I think that a fair amount of work has been done previously to\n> cut down on unnecessary CCIs. I suspect Tom in particular is likely to\n> object to adding a whole bunch more of them, and I think that\n> objection would have some merit.\n\nWe've gotten things to the point where a no-op CCI is pretty cheap,\nso I'm not sure there is a performance concern here. I do wonder\nthough if there are semantic or bug-hazard considerations. A CCI\nthat occurs only if a particular hook is loaded seems pretty scary\nfrom a testability standpoint.\n\n> I definitely think if we were going to do it, it would need to be\n> unconditional. Otherwise I think we'll end up with bugs, because\n> nobody's going to go test all of that code with and without an object\n> access hook installed every time they tweak some DDL-related code.\n\nRight, same thing I'm saying. I also think we should discourage\npeople from doing cowboy CCIs inside their OAT hooks, because that\nmakes the testability problem even worse. Maybe that means we\nneed to uniformly move the CREATE hooks to after a system-provided\nCCI, but I've not thought hard about the implications of that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Jun 2022 17:11:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: oat_post_create expected behavior"
},
{
"msg_contents": "On Mon, 2022-06-06 at 17:11 -0400, Tom Lane wrote:\n> Right, same thing I'm saying. I also think we should discourage\n> people from doing cowboy CCIs inside their OAT hooks, because that\n> makes the testability problem even worse. Maybe that means we\n> need to uniformly move the CREATE hooks to after a system-provided\n> CCI, but I've not thought hard about the implications of that.\n\nUniformly moving the post-create hooks after CCI might not be as\nconvenient as I thought at first. Many extensions using post-create\nhooks will also want to use post-alter hooks, and it would be difficult\nto reuse extension code between those two hooks. It's probably better\nto just always specify the snapshot unless you're sure you won't need a\npost-alter hook.\n\nIt would be nice if it was easier to enforce that these hooks do the\nright thing, though.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 01 Jul 2022 11:12:52 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: oat_post_create expected behavior"
},
{
"msg_contents": "> Right, same thing I'm saying. I also think we should discourage\n> people from doing cowboy CCIs inside their OAT hooks, because that\n> makes the testability problem even worse. Maybe that means we\n> need to uniformly move the CREATE hooks to after a system-provided\n> CCI, but I've not thought hard about the implications of that.\n\nI like this approach, however, I am relatively new to the PG scene and\nam not sure how or what I should look into in terms of the\nimplications that Tom mentioned. Are there any tips? What should be\nthe next course of action here? I could update my patch to always call\nCCI before the create hooks.\n\nThanks,\n\nMary Xu\n\nOn Fri, Jul 1, 2022 at 11:12 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Mon, 2022-06-06 at 17:11 -0400, Tom Lane wrote:\n> > Right, same thing I'm saying. I also think we should discourage\n> > people from doing cowboy CCIs inside their OAT hooks, because that\n> > makes the testability problem even worse. Maybe that means we\n> > need to uniformly move the CREATE hooks to after a system-provided\n> > CCI, but I've not thought hard about the implications of that.\n>\n> Uniformly moving the post-create hooks after CCI might not be as\n> convenient as I thought at first. Many extensions using post-create\n> hooks will also want to use post-alter hooks, and it would be difficult\n> to reuse extension code between those two hooks. It's probably better\n> to just always specify the snapshot unless you're sure you won't need a\n> post-alter hook.\n>\n> It would be nice if it was easier to enforce that these hooks do the\n> right thing, though.\n>\n> Regards,\n> Jeff Davis\n>\n>\n\n\n",
"msg_date": "Tue, 2 Aug 2022 13:30:52 -0700",
"msg_from": "Mary Xu <yxu2162@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: oat_post_create expected behavior"
},
{
"msg_contents": "On Tue, 2022-08-02 at 13:30 -0700, Mary Xu wrote:\n> > Right, same thing I'm saying. I also think we should discourage\n> > people from doing cowboy CCIs inside their OAT hooks, because that\n> > makes the testability problem even worse. Maybe that means we\n> > need to uniformly move the CREATE hooks to after a system-provided\n> > CCI, but I've not thought hard about the implications of that.\n> \n> I like this approach, however, I am relatively new to the PG scene\n> and\n> am not sure how or what I should look into in terms of the\n> implications that Tom mentioned. Are there any tips? What should be\n> the next course of action here? I could update my patch to always\n> call\n> CCI before the create hooks.\n\nI didn't see a clear consensus that we should call OAT_POST_CREATE\nafter CCI, so I went ahead and updated the comment. We can always\nupdate the behavior later when we do have consensus, but until that\ntime, at least the comment will be more helpful.\n\nIf you are satisfied you can mark the CF issue as \"committed\", or you\ncan leave it open if you think it's still unresolved.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Tue, 20 Sep 2022 10:58:17 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: oat_post_create expected behavior"
}
] |
[
{
"msg_contents": "Hi,\n\nBF animal margay (a newly started Solaris 11.4/Sparc/GCC 11.2 box) is\nsometimes failing with:\n\nTRAP: FailedAssertion(\"seg->mapped_address != NULL\", File: \"dsm.c\",\nLine: 1069, PID: 9038)\n\nI can't immediately see why it's doing this, but my tool that looks\nfor assertion failures hasn't seen that on any other system. Example\nstack (trimmed from log), in this case a regular backend, other times\nit was a parallel worker:\n\nTRAP: FailedAssertion(\"seg->mapped_address != NULL\", File: \"dsm.c\",\nLine: 1069, PID: 3944)\nExceptionalCondition+0x64 [0x1008bb348]\ndsm_segment_address+0x44 [0x1006ff7d0]\nget_segment_by_index+0x7c [0x1008ee960]\ndsa_get_address+0x9c [0x1008ef754]\npgstat_get_entry_ref+0x1068 [0x10075f348]\npgstat_prep_pending_entry+0x58 [0x100758424]\npgstat_assoc_relation+0x44 [0x10075b314]\n_bt_first+0x9ac [0x10036cd78]\nbtgettuple+0x10c [0x1003653a8]\nindex_getnext_tid+0x4c [0x1003531c4]\nindex_getnext_slot+0x78 [0x100353564]\nsystable_getnext+0x18 [0x1003519b4]\nSearchCatCacheMiss+0x74 [0x10089ce18]\nSearchCatCacheInternal+0x1c0 [0x10089d0a4]\nGetSysCacheOid+0x34 [0x1008b5ca4]\nget_role_oid+0x18 [0x100767444]\nhba_getauthmethod+0x8 [0x100599da4]\nClientAuthentication+0x1c [0x10058cb68]\nInitPostgres+0xacc [0x1008d26b8]\nPostgresMain+0x94 [0x1007397f8]\nServerLoop+0x1184 [0x1006739e8]\nPostmasterMain+0x1400 [0x10067520c]\nmain+0x2e0 [0x1005a28c0]\n_start+0x64 [0x1002c5c44]\n\nI know that on Solaris we use dynamic_shared_memory=posix. The other\nSolaris/Sparc system is wrasse, and it's not doing this. I don't see\nit yet, but figured I'd report this much to the list in case someone\nelse does.\n\n\n",
"msg_date": "Fri, 3 Jun 2022 12:05:30 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "margay fails assertion in stats/dsa/dsm code"
},
{
"msg_contents": "On Fri, Jun 3, 2022 at 12:05 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> BF animal margay (a newly started Solaris 11.4/Sparc/GCC 11.2 box) is\n> sometimes failing with:\n>\n> TRAP: FailedAssertion(\"seg->mapped_address != NULL\", File: \"dsm.c\",\n> Line: 1069, PID: 9038)\n\nI spent some time on the GCC farm machine gcc211 (Sol 11.3, GCC 5.5),\nbut could not repro this. It's also not happening on wrasse (Sol\n11.3, Sun Studio compiler). I don't have access to a Sol 11.4\nCBE/Sparc system like margay, but I have learned that CBE is the name\nof a very recently announced rolling release intended for open source\ndevelopers[1]. I still have no idea if the active thing here is\nSparc, Sol 11.4, \"CBE\", GCC 11.2 or just timing conditions that reveal\nbugs in our dsm/dsa/dshash/pgstat code that show up here in about 1/4\nof make check runs on this stack, but miraculously nowhere else.\nPerhaps margay's owner could shed some light, or has a way to provide\nssh access to a similar zone with a debugger etc installed?\n\n[1] https://blogs.oracle.com/solaris/post/announcing-the-first-oracle-solaris-114-cbe\n\n\n",
"msg_date": "Tue, 28 Jun 2022 18:27:21 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: margay fails assertion in stats/dsa/dsm code"
},
{
"msg_contents": "Am 28.06.2022 um 08:27 schrieb Thomas Munro:\n> On Fri, Jun 3, 2022 at 12:05 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> BF animal margay (a newly started Solaris 11.4/Sparc/GCC 11.2 box) is\n>> sometimes failing with:\n>>\n>> TRAP: FailedAssertion(\"seg->mapped_address != NULL\", File: \"dsm.c\",\n>> Line: 1069, PID: 9038)\n> \n> I spent some time on the GCC farm machine gcc211 (Sol 11.3, GCC 5.5),\n> but could not repro this. It's also not happening on wrasse (Sol\n> 11.3, Sun Studio compiler). I don't have access to a Sol 11.4\n> CBE/Sparc system like margay, but I have learned that CBE is the name\n> of a very recently announced rolling release intended for open source\n> developers[1]. I still have no idea if the active thing here is\n> Sparc, Sol 11.4, \"CBE\", GCC 11.2 or just timing conditions that reveal\n> bugs in our dsm/dsa/dshash/pgstat code that show up here in about 1/4\n> of make check runs on this stack, but miraculously nowhere else.\n> Perhaps margay's owner could shed some light, or has a way to provide\n> ssh access to a similar zone with a debugger etc installed?\n> \n> [1] https://blogs.oracle.com/solaris/post/announcing-the-first-oracle-solaris-114-cbe\n\n\nLooks like a timing issue for me, because it happens only sometimes.\nNo problems with versions 14 and 13.\n\nI can provide ssh access to this system.\n\n\n\n",
"msg_date": "Tue, 28 Jun 2022 09:22:23 +0200",
"msg_from": "Marcel Hofstetter <hofstetter@jomasoft.ch>",
"msg_from_op": false,
"msg_subject": "Re: margay fails assertion in stats/dsa/dsm code"
},
{
"msg_contents": "On Thu, Jun 2, 2022 at 8:06 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I know that on Solaris we use dynamic_shared_memory=posix. The other\n> Solaris/Sparc system is wrasse, and it's not doing this. I don't see\n> it yet, but figured I'd report this much to the list in case someone\n> else does.\n\nMy first thought was that the return value of the call to\ndsm_impl_op() at the end of dsm_attach() is not checked and that maybe\nit was returning NULL, but it seems like whoever wrote\ndsm_impl_posix() was pretty careful to ereport(elevel, ...) in every\nfailure path, and elevel is ERROR here, so I don't see any issue. My\nsecond thought was that maybe control had escaped from dsm_attach()\ndue to an error before we got to the step where we actually map the\nsegment, but then the dsm_segment * would be returned to the caller.\nMaybe they could retrieve it later using dsm_find_mapping(), but that\nfunction has no callers in core.\n\nSo I'm kind of stumped too, but did you by any chance check whether\nthere are any DSM-related messages in the logs before the assertion\nfailure?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 28 Jun 2022 14:04:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: margay fails assertion in stats/dsa/dsm code"
},
{
"msg_contents": "On Wed, Jun 29, 2022 at 6:04 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> My first thought was that the return value of the call to\n> dsm_impl_op() at the end of dsm_attach() is not checked and that maybe\n> it was returning NULL, but it seems like whoever wrote\n> dsm_impl_posix() was pretty careful to ereport(elevel, ...) in every\n> failure path, and elevel is ERROR here, so I don't see any issue. My\n> second thought was that maybe control had escaped from dsm_attach()\n> due to an error before we got to the step where we actually map the\n> segment, but then the dsm_segment * would be returned to the caller.\n> Maybe they could retrieve it later using dsm_find_mapping(), but that\n> function has no callers in core.\n\nThanks for looking. Yeah. I also read through that code many times\nand drew the same conclusion.\n\n> So I'm kind of stumped too, but did you by any chance check whether\n> there are any DSM-related messages in the logs before the assertion\n> failure?\n\nMarcel kindly granted me access to his test machine, where the failure\ncan be reproduced by running make check lots of times. I eventually\nfigured out that the problem control flow is ... of course ... the one\npath that doesn't ereport(), and that's when errno == EEXIST. That is\na path that is intended to handle DSM_OP_CREATE. Here we are handling\nDSM_OP_ATTACH, and I have verified that we're passing in just O_RDWR.\nEEXIST is a nonsensical error for shm_open() without flags containing\nO_CREAT | O_EXCL (according to POSIX and Solaris's man page).\n\nOn this OS, shm_open() opens plain files in /tmp (normally a RAM disk,\nso kinda like /dev/shm on Linux), that much I can tell with a plain\nold \"ls\" command. 
We can also read its long lost open source cousin\n(which may be completely different for all I know, but I'd doubt it):\n\nhttps://github.com/illumos/illumos-gate/blob/master/usr/src/lib/libc/port/rt/shm.c\nhttps://github.com/illumos/illumos-gate/blob/master/usr/src/lib/libc/port/rt/pos4obj.c\n\nErm. It looks like __pos4obj_lock() could possibly return -1 and\nleave errno == EEXIST, if it runs out of retries? Then shm_open()\nwould return -1, and we'd blow up. However, for that to happen, one\nof those \"SHM_LOCK_TYPE\" files would have to linger for 64 sleep\nloops, and I'm not sure why that'd happen, or what to do about it. (I\ndon't immediately grok what that lock file is even for.)\n\nI suppose this could indicate that the machine and/or RAM disk is\noverloaded/swapping and one of those open() or unlink() calls is\ntaking a really long time, and that could be fixed with some system\ntuning. I suppose it's also remotely possible that the process is\ngetting peppered with signals so that funky shell script-style locking\nscheme is interrupted and doesn't really wait very long. Or maybe I\nguessed wrong and some other closed source path is to blame *shrug*.\n\nAs for whether PostgreSQL needs to do anything, perhaps we should\nereport for this unexpected error as a matter of self-preservation, to\navoid the NULL dereference you'd presumably get on a non-cassert build\nwith the current coding? Maybe just:\n\n- if (errno != EEXIST)\n+ if (op == DSM_OP_ATTACH || errno != EEXIST)\n ereport(elevel,\n (errcode_for_dynamic_shared_memory(),\n errmsg(\"could not open shared\nmemory segment \\\"%s\\\": %m\",\n\nmargay would probably still fail until that underlying problem is\naddressed, but less mysteriously on our side at least.\n\n\n",
"msg_date": "Wed, 29 Jun 2022 16:00:32 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: margay fails assertion in stats/dsa/dsm code"
},
{
"msg_contents": "On Wed, Jun 29, 2022 at 4:00 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I suppose this could indicate that the machine and/or RAM disk is\n> overloaded/swapping and one of those open() or unlink() calls is\n> taking a really long time, and that could be fixed with some system\n> tuning.\n\nHmm, I take that bit back. Every backend that starts up is trying to\nattach to the same segment, the one with the new pgstats stuff in it\n(once the small space in the main shmem segment is used up and we\ncreate a DSM segment). There's no fairness/queue, random back-off or\nguarantee of progress in that librt lock code, so you can get into\nlock-step with other backends retrying, and although some waiter\nalways gets to make progress, any given backend can lose every round\nand run out of retries. Even when you're lucky and don't fail with an\nundocumented incomprehensible error, it's very slow, and I'd\nconsidering filing a bug report about that. A work-around on\nPostgreSQL would be to set dynamic_shared_memory_type to mmap (= we\njust open our own files and map them directly), and making pg_dynshmem\na symlink to something under /tmp (or some other RAM disk) to avoid\ntouch regular disk file systems.\n\n\n",
"msg_date": "Wed, 29 Jun 2022 22:17:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: margay fails assertion in stats/dsa/dsm code"
},
{
"msg_contents": "On Wed, Jun 29, 2022 at 12:01 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> As for whether PostgreSQL needs to do anything, perhaps we should\n> ereport for this unexpected error as a matter of self-preservation, to\n> avoid the NULL dereference you'd presumably get on a non-cassert build\n> with the current coding? Maybe just:\n>\n> - if (errno != EEXIST)\n> + if (op == DSM_OP_ATTACH || errno != EEXIST)\n> ereport(elevel,\n> (errcode_for_dynamic_shared_memory(),\n> errmsg(\"could not open shared\n> memory segment \\\"%s\\\": %m\",\n>\n> margay would probably still fail until that underlying problem is\n> addressed, but less mysteriously on our side at least.\n\nThat seems like a correct fix, but maybe we should also be checking\nthe return value of dsm_impl_op() e.g. define dsm_impl_op_error() as\nan inline function that does if (!dsm_impl_op(..., ERROR)) elog(ERROR,\n\"the author of dsm.c is not as clever as he thinks he is\").\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 Jun 2022 12:02:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: margay fails assertion in stats/dsa/dsm code"
},
{
"msg_contents": "On Fri, Jul 1, 2022 at 4:02 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Jun 29, 2022 at 12:01 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > - if (errno != EEXIST)\n> > + if (op == DSM_OP_ATTACH || errno != EEXIST)\n> > ereport(elevel,\n> > (errcode_for_dynamic_shared_memory(),\n> > errmsg(\"could not open shared\n> > memory segment \\\"%s\\\": %m\",\n> >\n> > margay would probably still fail until that underlying problem is\n> > addressed, but less mysteriously on our side at least.\n>\n> That seems like a correct fix, but maybe we should also be checking\n> the return value of dsm_impl_op() e.g. define dsm_impl_op_error() as\n> an inline function that does if (!dsm_impl_op(..., ERROR)) elog(ERROR,\n> \"the author of dsm.c is not as clever as he thinks he is\").\n\nThanks. Also the mmap and sysv paths do something similar, so I also\nmade the same change there just on principle. I didn't make the extra\nbelt-and-braces check you suggested for now, preferring minimalism. I\nthink the author of dsm.c was pretty clever, it's just that the world\nturned out to be more hostile than expected, in one very specific way.\n\nPushed.\n\nSo that should get us to a state where margay still fails\noccasionally, but now with an ERROR rather than a crash.\n\nNext up, I confirmed my theory about what's happening on closed\nSolaris by tracing syscalls. It is indeed that clunky sleep(1) code\nthat gives up after 64 tries. Even in pre-shmem-stats releases that\ndon't contend enough to reach the bogus EEXIST error, I'm pretty sure\npeople must be getting random sleeps injected into their parallel\nqueries in the wild by this code.\n\nI have concluded that that implementation of shm_open() is not really\nusable for our purposes. We'll have to change *something* to turn\nmargay reliably green, not to mention bogus error reports we can\nexpect from 15 in the wild, and performance woes that I cannot now\nunsee.\n\nSo... 
I think we should select a different default\ndynamic_shared_memory_type in initdb.c if defined(__sun__). Which is\nthe least terrible? For sysv, it looks like all the relevant sysctls\nthat used to be required to use sysv memory became obsolete/automatic\nin Sol 10 (note: Sol 9 is long EOL'd), so it should just work AFAICT,\nwhereas for mmap mode your shared memory data is likely to cause file\nI/O because we put the temporary files in your data directory. I'm\nthinking perhaps we should default to dynamic_shared_memory_type=sysv\nfor 15+. I don't really want to change it in the back branches, since\nnobody has actually complained about \"posix\" performance and it might\nupset someone if we change it for newly initdb'd DBs in a major\nrelease series. But I'm not an expert or even user of this OS, I'm\njust trying to fix the build farm; better ideas welcome.\n\nThoughts?\n\n\n",
"msg_date": "Fri, 1 Jul 2022 14:33:28 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: margay fails assertion in stats/dsa/dsm code"
},
{
"msg_contents": "On Thu, Jun 30, 2022 at 10:34 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> So... I think we should select a different default\n> dynamic_shared_memory_type in initdb.c if defined(__sun__). Which is\n> the least terrible? For sysv, it looks like all the relevant sysctls\n> that used to be required to use sysv memory became obsolete/automatic\n> in Sol 10 (note: Sol 9 is long EOL'd), so it should just work AFAICT,\n> whereas for mmap mode your shared memory data is likely to cause file\n> I/O because we put the temporary files in your data directory. I'm\n> thinking perhaps we should default to dynamic_shared_memory_type=sysv\n> for 15+. I don't really want to change it in the back branches, since\n> nobody has actually complained about \"posix\" performance and it might\n> upset someone if we change it for newly initdb'd DBs in a major\n> release series. But I'm not an expert or even user of this OS, I'm\n> just trying to fix the build farm; better ideas welcome.\n\nBoy, relying on DSM for critical stuff sure is a lot of fun! This is\nexactly why I hate adding new facilities that have to be implemented\nin OS-dependent ways.\n\nChanging the default on certain platforms to 'posix' or 'sysv'\naccording to what works best on that platform seems reasonable to me.\nI agree that defaulting to 'mmap' doesn't seem like a lot of fun,\nalthough I think it could be a reasonable choice on a platform where\neverything else is broken. You could alternatively try to fix 'posix'\nby adding some kind of code to work around that platform's\ndeficiencies. Insert handwaving here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 1 Jul 2022 09:14:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: margay fails assertion in stats/dsa/dsm code"
},
{
"msg_contents": "On Sat, Jul 2, 2022 at 1:15 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Changing the default on certain platforms to 'posix' or 'sysv'\n> according to what works best on that platform seems reasonable to me.\n\nOk, I'm going to make that change in 15 + master.\n\n> I agree that defaulting to 'mmap' doesn't seem like a lot of fun,\n> although I think it could be a reasonable choice on a platform where\n> everything else is broken. You could alternatively try to fix 'posix'\n> by adding some kind of code to work around that platform's\n> deficiencies. Insert handwaving here.\n\nI don't think that 'posix' mode is salvageable on Solaris, but a new\nGUC to control where 'mmap' mode puts its files would be nice. Then\nyou could set it to '/tmp' (or some other RAM disk), and you'd have\nthe same end result as shm_open() on that platform, without the lock\nproblem. Perhaps someone could propose a patch for 16.\n\nAs for the commit I already made, we can now see the new error:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=margay&dt=2022-07-01%2016%3A00%3A07\n\n2022-07-01 18:25:25.848 CEST [27784:1] ERROR: could not open shared\nmemory segment \"/PostgreSQL.499018794\": File exists\n\nUnfortunately this particular run crashed anyway, for a new reason:\none backend didn't like the state the new error left the dshash in,\nduring shmem_exit:\n\n2022-07-01 18:25:25.848 CEST [27738:21] pg_regress/prepared_xacts\nERROR: could not open shared memory segment \"/PostgreSQL.499018794\":\nFile exists\n2022-07-01 18:25:25.848 CEST [27738:22] pg_regress/prepared_xacts\nSTATEMENT: SELECT * FROM pxtest1;\nTRAP: FailedAssertion(\"!hash_table->find_locked\", File: \"dshash.c\",\nLine: 312, PID: 
27784)\n/home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'ExceptionalCondition+0x64\n[0x1008bb8b0]\n/home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'dshash_detach+0x48\n[0x10058674c]\n/home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'pgstat_detach_shmem+0x68\n[0x10075e630]\n/home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'pgstat_shutdown_hook+0x94\n[0x10075989c]\n/home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'shmem_exit+0x84\n[0x100701198]\n/home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'proc_exit_prepare+0x88\n[0x100701394]\n/home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'proc_exit+0x4\n[0x10070148c]\n/home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'StartBackgroundWorker+0x150\n[0x10066957c]\n/home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'maybe_start_bgworkers+0x604\n[0x1006717ec]\n/home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'sigusr1_handler+0x190\n[0x100672510]\n\nSo that's an exception safety problem in dshash or pgstat's new usage\nthereof, which is arguably independent of Solaris and probably\ndeserves a new thread. You don't need Solaris to see it, you can just\nadd in some random fault injection.\n\n\n",
"msg_date": "Sat, 2 Jul 2022 11:10:07 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: margay fails assertion in stats/dsa/dsm code"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-02 11:10:07 +1200, Thomas Munro wrote:\n> 2022-07-01 18:25:25.848 CEST [27738:21] pg_regress/prepared_xacts\n> ERROR: could not open shared memory segment \"/PostgreSQL.499018794\":\n> File exists\n> 2022-07-01 18:25:25.848 CEST [27738:22] pg_regress/prepared_xacts\n> STATEMENT: SELECT * FROM pxtest1;\n> TRAP: FailedAssertion(\"!hash_table->find_locked\", File: \"dshash.c\",\n> Line: 312, PID: 27784)\n> /home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'ExceptionalCondition+0x64\n> [0x1008bb8b0]\n> /home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'dshash_detach+0x48\n> [0x10058674c]\n> /home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'pgstat_detach_shmem+0x68\n> [0x10075e630]\n> /home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'pgstat_shutdown_hook+0x94\n> [0x10075989c]\n> /home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'shmem_exit+0x84\n> [0x100701198]\n> /home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'proc_exit_prepare+0x88\n> [0x100701394]\n> /home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'proc_exit+0x4\n> [0x10070148c]\n> /home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'StartBackgroundWorker+0x150\n> [0x10066957c]\n> /home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'maybe_start_bgworkers+0x604\n> [0x1006717ec]\n> 
/home/marcel/build-farm-14/buildroot/HEAD/pgsql.build/tmp_install/home/marcel/build-farm-14/buildroot/HEAD/inst/bin/postgres'sigusr1_handler+0x190\n> [0x100672510]\n> \n> So that's an exception safety problem in dshash or pgstat's new usage\n> thereof, which is arguably independent of Solaris and probably\n> deserves a new thread. You don't need Solaris to see it, you can just\n> add in some random fault injection.\n\nFWIW potentially relevant thread for that aspect: https://postgr.es/m/20220311012712.botrpsikaufzteyt%40alap3.anarazel.de\n\nWhat do you think about the proposal at the end of that email?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 1 Jul 2022 16:20:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: margay fails assertion in stats/dsa/dsm code"
},
{
"msg_contents": "On Sat, Jul 2, 2022 at 11:10 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Jul 2, 2022 at 1:15 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Changing the default on certain platforms to 'posix' or 'sysv'\n> > according to what works best on that platform seems reasonable to me.\n>\n> Ok, I'm going to make that change in 15 + master.\n\nFor the record, I asked a Solaris kernel engineer about that\nshm_open() problem and learned that a fix shipped about a month after\nwe had this discussion (though I haven't tested it myself):\n\nhttps://twitter.com/casperdik/status/1730288613722562986\n\nI also reported the issue to illumos, since I'd like to be able to\nrevert 94ebf811 eventually...:\n\nhttps://www.illumos.org/issues/16093\n\n\n",
"msg_date": "Thu, 22 Feb 2024 10:10:10 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: margay fails assertion in stats/dsa/dsm code"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile performing pg_upgrade from v15Beta binaries/source,\nI got this error below error\n\ncould not create directory \"d2/pg_upgrade_output.d\": File exists\nFailure, exiting\n\n\n*Steps to reproduce *\nv15 Beta sources\ninitalize a cluster ( ./initdb -D d1)\ninitalize another cluster ( ./initdb -D d2)\nrun pg_upgrade with -c option ( ./pg_upgrade -d d1 -D d2 -b . -B . -c -v)\nrun pg_upgrade without -c option ( ./pg_upgrade -d d1 -D d2 -b . -B .)\n--\n--\n--\nError\n\n\nThis behavior was not there in earlier released versions, i guess.\nIs it expected behavior now onwards?\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n\n\n Hi,\n\n While performing pg_upgrade from v15Beta binaries/source, \n I got this error below error \n\n could not create directory \"d2/pg_upgrade_output.d\": File exists\n Failure, exiting\n\n\nSteps to reproduce \n v15 Beta sources\n initalize a cluster ( ./initdb -D d1)\n initalize another cluster ( ./initdb -D d2)\n run pg_upgrade with -c option ( ./pg_upgrade -d d1 -D d2 -b . -B .\n -c -v)\n run pg_upgrade without -c option ( ./pg_upgrade -d d1 -D d2 -b . -B\n .) \n --\n --\n --\n Error \n\n\n This behavior was not there in earlier released versions, i guess. \n Is it expected behavior now onwards?\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 3 Jun 2022 16:49:36 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "[v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "> On 3 Jun 2022, at 13:19, tushar <tushar.ahuja@enterprisedb.com> wrote:\n\n> This behavior was not there in earlier released versions, i guess. \n> Is it expected behavior now onwards?\n\nThat's an unfortunate side effect which AFAICT was overlooked in the original\nthread. Having a predictable name was defined as important for CI/BF, but I\nagree that the above is likely to be a common user pattern (first running -c is\nexactly what I did when managing databases and upgraded them with pg_upgrade).\n\nThis might break a few automated upgrade scripts out there (but they might also\nalready need changes to cope with the moved file locations).\n\nWe can address this by documentation, and specifically highlight under the -c\noption in the manual that the folder need to removed/renamed (and possibly to\nSTDOUT aswell when run with -c).\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 3 Jun 2022 14:01:18 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "On Fri, Jun 03, 2022 at 02:01:18PM +0200, Daniel Gustafsson wrote:\n> > On 3 Jun 2022, at 13:19, tushar <tushar.ahuja@enterprisedb.com> wrote:\n> \n> > This behavior was not there in earlier released versions, i guess. \n> > Is it expected behavior now onwards?\n> \n> That's an unfortunate side effect which AFAICT was overlooked in the original\n> thread. Having a predictable name was defined as important for CI/BF, but I\n> agree that the above is likely to be a common user pattern (first running -c is\n> exactly what I did when managing databases and upgraded them with pg_upgrade).\n\nI agree that it's an problem, but it's not limited to -c.\n\nFor example, I ran this:\n\n|$ time /usr/pgsql-15/bin/pg_upgrade -b /usr/pgsql-14/bin/initdb -d ./pgsql14.dat -D ./pgsql15.dat \n|\"/usr/pgsql-14/bin/initdb\" is not a directory\n|Failure, exiting\n\nAnd then reran with the correct \"-b\" option, but then it failed because it had\nfailed before...\n\n|$ time /usr/pgsql-15/bin/pg_upgrade -b /usr/pgsql-14/bin -d ./pgsql14.dat -D ./pgsql15.dat\n|could not create directory \"pgsql15.dat/pg_upgrade_output.d\": File exists\n|Failure, exiting\n\nThis is a kind of geometric circle of errors - an error at point A requires\nfirst re-running after fixing A's issue, and then an error at B requires\nre-running after fixing B's issue, hitting the \"A\" error again, and then\nrerunning again again. It's the same kind of problem that led to 3c0471b5f.\n\n-c could use a different output directory, but that means it would fail if\npg_upgrade -c were run multiple times, which seems undesirable for a \"check\"\ncommand.\n\nWe could call cleanup() if -c was successful. 
But that doesn't help the case\nthat -c fails; the new dir would still need to be manually removed, which seems\nlike imposing useless busywork on the user.\n\nWe could allow mkdir to fail with EEXIST, except that breaks the original\nmotivation for the patch: the logs are appended to and any old errors are still\nin the logs after re-running pg_upgrade.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 3 Jun 2022 08:53:54 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "> On 3 Jun 2022, at 15:53, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> On Fri, Jun 03, 2022 at 02:01:18PM +0200, Daniel Gustafsson wrote:\n>>> On 3 Jun 2022, at 13:19, tushar <tushar.ahuja@enterprisedb.com> wrote:\n>> \n>>> This behavior was not there in earlier released versions, i guess. \n>>> Is it expected behavior now onwards?\n>> \n>> That's an unfortunate side effect which AFAICT was overlooked in the original\n>> thread. Having a predictable name was defined as important for CI/BF, but I\n>> agree that the above is likely to be a common user pattern (first running -c is\n>> exactly what I did when managing databases and upgraded them with pg_upgrade).\n> \n> I agree that it's an problem, but it's not limited to -c.\n\nIndeed.\n\n> For example, I ran this:\n> \n> |$ time /usr/pgsql-15/bin/pg_upgrade -b /usr/pgsql-14/bin/initdb -d ./pgsql14.dat -D ./pgsql15.dat \n> |\"/usr/pgsql-14/bin/initdb\" is not a directory\n> |Failure, exiting\n> \n> And then reran with the correct \"-b\" option, but then it failed because it had\n> failed before...\n\nThats, not ideal.\n\n> We could call cleanup() if -c was successful. But that doesn't help the case\n> that -c fails; the new dir would still need to be manually removed, which seems\n> like imposing useless busywork on the user.\n> \n> We could allow mkdir to fail with EEXIST, except that breaks the original\n> motivation for the patch: the logs are appended to and any old errors are still\n> in the logs after re-running pg_upgrade.\n\nOr we could revisit Tom's proposal in the thread that implemented the feature:\nto have timestamped directory names to get around this very problem? I think\nwe should be able to figure out a way to make it easy enough for the BF code to\nfigure out (and clean up).\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 3 Jun 2022 16:52:09 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Or we could revisit Tom's proposal in the thread that implemented the feature:\n> to have timestamped directory names to get around this very problem? I think\n> we should be able to figure out a way to make it easy enough for the BF code to\n> figure out (and clean up).\n\nHow about inserting an additional level of subdirectory?\n\npg_upgrade_output.d/20220603122528/foo.log\n\nThen code doing \"rm -rf pg_upgrade_output.d\" needs no changes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jun 2022 12:26:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "> On 3 Jun 2022, at 18:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> Or we could revisit Tom's proposal in the thread that implemented the feature:\n>> to have timestamped directory names to get around this very problem? I think\n>> we should be able to figure out a way to make it easy enough for the BF code to\n>> figure out (and clean up).\n> \n> How about inserting an additional level of subdirectory?\n> \n> pg_upgrade_output.d/20220603122528/foo.log\n> \n> Then code doing \"rm -rf pg_upgrade_output.d\" needs no changes.\n\nOff the cuff that seems like a good compromise. Adding Andrew on CC: for input\non how that affects the buildfarm.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 3 Jun 2022 18:55:28 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "On Fri, Jun 03, 2022 at 06:55:28PM +0200, Daniel Gustafsson wrote:\n> On 3 Jun 2022, at 18:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> How about inserting an additional level of subdirectory?\n>> \n>> pg_upgrade_output.d/20220603122528/foo.log\n>> \n>> Then code doing \"rm -rf pg_upgrade_output.d\" needs no changes.\n> \n> Off the cuff that seems like a good compromise. Adding Andrew on CC: for input\n> on how that affects the buildfarm.\n\nI am not so sure. My first reaction was actually to bypass the\ncreation of the directories on EEXIST. But, isn't the problem\ndifferent and actually older here? In the set of commands given by\nTushar, he uses the --check option without --retain, but the logs are\nkept around after the execution of the command. It seems to me that\nthere is an argument for also removing the logs if the caller of the\ncommand does not want to retain them.\n\nSeeing the top of the thread, I think that it would be a good idea to\nadd an extra pg_upgrade --check before the real upgrade run. I've\nalso relied on --check as a safety measure in the past for automated\nworkflows.\n--\nMichael",
"msg_date": "Sat, 4 Jun 2022 12:13:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "On Sat, Jun 04, 2022 at 12:13:19PM +0900, Michael Paquier wrote:\n> On Fri, Jun 03, 2022 at 06:55:28PM +0200, Daniel Gustafsson wrote:\n> > On 3 Jun 2022, at 18:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> How about inserting an additional level of subdirectory?\n> >> \n> >> pg_upgrade_output.d/20220603122528/foo.log\n> >> \n> >> Then code doing \"rm -rf pg_upgrade_output.d\" needs no changes.\n> > \n> > Off the cuff that seems like a good compromise. Adding Andrew on CC: for input\n> > on how that affects the buildfarm.\n> \n> I am not so sure. My first reaction was actually to bypass the\n> creation of the directories on EEXIST.\n\nBut that breaks the original motive behind the patch I wrote - logfiles are\nappended to, even if they're full of errors that were fixed before re-running\npg_upgrade.\n\n> But, isn't the problem different and actually older here? In the set of\n> commands given by Tushar, he uses the --check option without --retain, but\n> the logs are kept around after the execution of the command. It seems to me\n> that there is an argument for also removing the logs if the caller of the\n> command does not want to retain them.\n\nYou're right that --check is a bit inconsistent, but I don't think we could\nbackpatch any change to fix it. It wouldn't matter much anyway, since the\nusual workflow would be \"pg_upgrade --check && pg_upgrade\". In which case the\nlogs would end up being removed anyway.\n\nOn Sat, Jun 04, 2022 at 12:13:19PM +0900, Michael Paquier wrote:\n> Seeing the top of the thread, I think that it would be a good idea to\n> add an extra pg_upgrade --check before the real upgrade run. I've\n> also relied on --check as a safety measure in the past for automated\n> workflows.\n\nIt already does this; --check really means \"stop-after-checking\".\n\nHmm .. maybe what you mean is that the *tap test* should first run with\n--check?\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 3 Jun 2022 22:32:27 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "On Fri, Jun 03, 2022 at 10:32:27PM -0500, Justin Pryzby wrote:\n> On Sat, Jun 04, 2022 at 12:13:19PM +0900, Michael Paquier wrote:\n>> I am not so sure. My first reaction was actually to bypass the\n>> creation of the directories on EEXIST.\n> \n> But that breaks the original motive behind the patch I wrote - logfiles are\n> appended to, even if they're full of errors that were fixed before re-running\n> pg_upgrade.\n\nYep.\n\n>> But, isn't the problem different and actually older here? In the set of\n>> commands given by Tushar, he uses the --check option without --retain, but\n>> the logs are kept around after the execution of the command. It seems to me\n>> that there is an argument for also removing the logs if the caller of the\n>> command does not want to retain them.\n> \n> You're right that --check is a bit inconsistent, but I don't think we could\n> backpatch any change to fix it. It wouldn't matter much anyway, since the\n> usual workflow would be \"pg_upgrade --check && pg_upgrade\". In which case the\n> logs would end up being removed anyway.\n\nExactly, the inconsistency in the log handling is annoying, and\ncleaning up the logs when --retain is not used makes sense to me when\nthe --check command succeeds, but we should keep them if the --check\nfails. I don't see an argument in backpatching that either.\n\n> Hmm .. maybe what you mean is that the *tap test* should first run with\n> --check?\n\nSorry for the confusion. I meant to add an extra command in the TAP\ntest itself.\n\nI would suggest the attached patch then, to add a --check command in\nthe test suite, with a change to clean up the logs when --check is\nused without --retain.\n--\nMichael",
"msg_date": "Sat, 4 Jun 2022 18:48:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "On Sat, Jun 04, 2022 at 06:48:19PM +0900, Michael Paquier wrote:\n> I would suggest the attached patch then, to add a --check command in\n> the test suite, with a change to clean up the logs when --check is\n> used without --retain.\n\nThis doesn't address one of the problems that I already enumerated.\n\n./tmp_install/usr/local/pgsql/bin/initdb -D pgsql15.dat\n./tmp_install/usr/local/pgsql/bin/initdb -D pgsql15.dat-2\n\n$ ./tmp_install/usr/local/pgsql/bin/pg_upgrade -b ./tmp_install/usr/local/pgsql/bin/bad -d pgsql15.dat-2 -D pgsql15.dat-2 \ncheck for \"tmp_install/usr/local/pgsql/bin/bad\" failed: No such file or directory\nFailure, exiting\n\n$ ./tmp_install/usr/local/pgsql/bin/pg_upgrade -b ./tmp_install/usr/local/pgsql/bin/bad -d pgsql15.dat-2 -D pgsql15.dat-2 \ncould not create directory \"pgsql15.dat-2/pg_upgrade_output.d\": File exists\nFailure, exiting\n\n..failing the 2nd time because it failed the 1st time (even if I fix the bad\nargument).\n\nMaybe that's easy enough to fix just be rearranging verify_directories() or\nmake_outputdirs().\n\nBut actually it seems annoying to have to remove the failed outputdir.\nIt's true that those logs *can* be useful to fix whatever underlying problem,\nbut I'm afraid the *requirement* to remove the failed outputdir is a nuisance,\neven outside of check mode.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 4 Jun 2022 09:13:46 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "On Sat, Jun 04, 2022 at 09:13:46AM -0500, Justin Pryzby wrote:\n> Maybe that's easy enough to fix just be rearranging verify_directories() or\n> make_outputdirs().\n\nFor the case, I mentioned, yes.\n\n> But actually it seems annoying to have to remove the failed outputdir.\n> It's true that those logs *can* be useful to fix whatever underlying problem,\n> but I'm afraid the *requirement* to remove the failed outputdir is a nuisance,\n> even outside of check mode.\n\nWell, another error that could happen in the early code paths is\nEACCES on a custom socket directory specified, and we'd still face the\nsame problem on a follow-up restart. Using a sub-directory structure\nas Daniel and Tom mention would address all that (if ignoring EEXIST\nfor the BASE_OUTPUTDIR), removing any existing content from the base\npath when not using --retain. This comes with the disadvantage of\nbloating the disk on repeated errors, but this last bit would not\nreally be a huge problem, I guess, as it could be more useful to keep\nthe error information around.\n--\nMichael",
"msg_date": "Sun, 5 Jun 2022 09:24:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "On Sun, Jun 05, 2022 at 09:24:25AM +0900, Michael Paquier wrote:\n> Well, another error that could happen in the early code paths is\n> EACCES on a custom socket directory specified, and we'd still face the\n> same problem on a follow-up restart. Using a sub-directory structure\n> as Daniel and Tom mention would address all that (if ignoring EEXIST\n> for the BASE_OUTPUTDIR), removing any existing content from the base\n> path when not using --retain. This comes with the disadvantage of\n> bloating the disk on repeated errors, but this last bit would not\n> really be a huge problem, I guess, as it could be more useful to keep\n> the error information around.\n\nI have been toying with the idea of a sub-directory named with a\ntimestamp (Unix time, like log_line_prefix's %n but this could be\nany format) under pg_upgrade_output.d/ and finished with the\nattached. The logs are removed from the root path when --check is\nused without --retain, like for a non-check command. I have added a\nset of tests to provide some coverage for the whole:\n- Failure of --check where the binary path does not exist, and\npg_upgrade_output.d/ is not removed.\n- Follow-up run of pg_upgrade --check, where pg_upgrade_output.d/ is\nremoved.\n- Check that pg_upgrade_output.d/ is also removed after the main\nupgrade command completes.\n\nThe logic in charge of cleaning up the logs has been moved to a single\nroutine, aka cleanup_logs().\n\nThoughts?\n--\nMichael",
"msg_date": "Sun, 5 Jun 2022 18:19:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "> On 5 Jun 2022, at 11:19, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Sun, Jun 05, 2022 at 09:24:25AM +0900, Michael Paquier wrote:\n>> Well, another error that could happen in the early code paths is\n>> EACCES on a custom socket directory specified, and we'd still face the\n>> same problem on a follow-up restart. Using a sub-directory structure\n>> as Daniel and Tom mention would address all that (if ignoring EEXIST\n>> for the BASE_OUTPUTDIR), removing any existing content from the base\n>> path when not using --retain. This comes with the disadvantage of\n>> bloating the disk on repeated errors, but this last bit would not\n>> really be a huge problem, I guess, as it could be more useful to keep\n>> the error information around.\n> \n> I have been toying with the idea of a sub-directory named with a\n> timestamp (Unix time, like log_line_prefix's %n but this could be\n> any format) under pg_upgrade_output.d/ and finished with the\n> attached. \n\nI was thinking more along the lines of %m to make it (more) human readable, but\nI'm certainly not wedded to any format.\n\n> The logs are removed from the root path when --check is\n> used without --retain, like for a non-check command.\n\nThis removes all logs after a command without --retain, even if a previous\ncommand used --retain to keep the logs around.\n\nAs a user I would expect the logs from this current invocation to be removed\nwithout --retain, and any other older log entries be kept.
I think we should\nremove log_opts.logdir and only remove log_opts.rootdir if it is left empty\nafter .logdir is removed.\n\n> The logic in charge of cleaning up the logs has been moved to a single\n> routine, aka cleanup_logs().\n\n+\t\tcleanup_logs();\n\nMaybe we should register cleanup_logs() as an atexit() handler once we're done\nwith option processing?\n\n+\tsnprintf(log_opts.logdir, MAXPGPATH, \"%s/%s/%s\", log_opts.rootdir,\n+\t\t\t timebuf, LOG_OUTPUTDIR);\n\nWhile not introduced by this patch, it does make me uneasy that we create paths\nwithout checking for buffer overflows..\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 6 Jun 2022 02:38:03 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "On Mon, Jun 06, 2022 at 02:38:03AM +0200, Daniel Gustafsson wrote:\n> On 5 Jun 2022, at 11:19, Michael Paquier <michael@paquier.xyz> wrote:\n>> On Sun, Jun 05, 2022 at 09:24:25AM +0900, Michael Paquier wrote:\n>>> Well, another error that could happen in the early code paths is\n>>> EACCES on a custom socket directory specified, and we'd still face the\n>>> same problem on a follow-up restart. Using a sub-directory structure\n>>> as Daniel and Tom mention would address all that (if ignoring EEXIST\n>>> for the BASE_OUTPUTDIR), removing any existing content from the base\n>>> path when not using --retain. This comes with the disadvantage of\n>>> bloating the disk on repeated errors, but this last bit would not\n>>> really be a huge problem, I guess, as it could be more useful to keep\n>>> the error information around.\n>> \n>> I have been toying with the idea of a sub-directory named with a\n>> timestamp (Unix time, like log_line_prefix's %n but this could be\n>> any format) under pg_upgrade_output.d/ and finished with the\n>> attached. \n> \n> I was thinking more along the lines of %m to make it (more) human readable, but\n> I'm certainly not wedded to any format.\n\nNeither am I. I would not map exactly to %m as it uses whitespaces,\nbut something like %Y%m%d_%H%M%S.%03d (3-digit ms for last part) would\nbe fine? If there are other ideas for the format, just let me know.\n\n> As a user I would expect the logs from this current invocation to be removed\n> without --retain, and any other older log entries be kept.
I think we should\n> remove log_opts.logdir and only remove log_opts.rootdir if it is left empty\n> after .logdir is removed.\n\nOkay, however I think you mean log_opts.basedir rather than logdir?\nThat's simple enough to switch around as pg_check_dir() does this\njob.\n\n>> The logic in charge of cleaning up the logs has been moved to a single\n>> routine, aka cleanup_logs().\n> \n> +\t\tcleanup_logs();\n> \n> Maybe we should register cleanup_logs() as an atexit() handler once we're done\n> with option processing?\n\nIt seems to me that the original intention is to keep the logs around\non failure, hence we should only clean up things on a clean exit().\nThat's why I didn't add an exit callback for that.\n\n> +\tsnprintf(log_opts.logdir, MAXPGPATH, \"%s/%s/%s\", log_opts.rootdir,\n> +\t\t\t timebuf, LOG_OUTPUTDIR);\n> \n> While not introduced by this patch, it does make me uneasy that we create paths\n> without checking for buffer overflows..\n\nI don't mind adding such checks in those code paths. You are right\nthat they tend to produce longer path strings than others.\n--\nMichael",
"msg_date": "Mon, 6 Jun 2022 13:17:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "> On 6 Jun 2022, at 06:17, Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Jun 06, 2022 at 02:38:03AM +0200, Daniel Gustafsson wrote:\n>> On 5 Jun 2022, at 11:19, Michael Paquier <michael@paquier.xyz> wrote:\n\n>>> I have been toying with the idea of a sub-directory named with a\n>>> timestamp (Unix time, like log_line_prefix's %n but this could be\n>>> any format) under pg_upgrade_output.d/ and finished with the\n>>> attached. \n>> \n>> I was thinking more along the lines of %m to make it (more) human readable, but\n>> I'm certainly not wedded to any format.\n> \n> Neither am I. I would not map exactly to %m as it uses whitespaces,\n> but something like %Y%m%d_%H%M%S.%03d (3-digit ms for last part) would\n> be fine? If there are other ideas for the format, just let me know.\n\nI think this makes more sense from an end-user perspective.\n\n>> As a user I would expect the logs from this current invocation to be removed\n>> without --retain, and any other older log entries be kept. I think we should\n>> remove log_opts.logdir and only remove log_opts.rootdir if it is left empty\n>> after .logdir is removed.\n> \n> Okay, however I think you mean log_opts.basedir rather than logdir?\n> That's simple enough to switch around as pg_check_dir() does this\n> job.\n\nCorrect, I mistyped. The cleanup in this version of the patch looks sane to\nme.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 6 Jun 2022 19:43:53 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "On Mon, Jun 06, 2022 at 07:43:53PM +0200, Daniel Gustafsson wrote:\n> > On 6 Jun 2022, at 06:17, Michael Paquier <michael@paquier.xyz> wrote:\n> > On Mon, Jun 06, 2022 at 02:38:03AM +0200, Daniel Gustafsson wrote:\n> >> On 5 Jun 2022, at 11:19, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> >>> I have been toying with the idea of a sub-directory named with a\n> >>> timestamp (Unix time, like log_line_prefix's %n but this could be\n> >>> any format) under pg_upgrade_output.d/ and finished with the\n> >>> attached. \n> >> \n> >> I was thinking more along the lines of %m to make it (more) human readable, but\n> >> I'm certainly not wedded to any format.\n\nIt seems important to use a format in most-significant-parts-first which sorts\nnicely by filename, but otherwise anything could be okay.\n\n> > Neither am I. I would not map exactly to %m as it uses whitespaces,\n> > but something like %Y%m%d_%H%M%S.%03d (3-digit ms for last part) would\n> > be fine? If there are other ideas for the format, just let me know.\n> \n> I think this makes more sense from an end-user perspective.\n\nIs it better to use \"T\" instead of \"_\" ?\n\nApparently, that's ISO 8601, which can optionally use separators\n(YYYY-MM-DDTHH:MM:SS).\n\nhttps://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations\n\nI was thinking this would not include fractional seconds. Maybe that would\nmean that the TAP tests would need to sleep(1) at some points...\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 6 Jun 2022 13:53:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "On Mon, Jun 06, 2022 at 01:53:35PM -0500, Justin Pryzby wrote:\n> It seems important to use a format in most-significant-parts-first which sorts\n> nicely by filename, but otherwise anything could be okay.\n\nAgreed.\n\n> Apparently, that's ISO 8601, which can optionally use separators\n> (YYYY-MM-DDTHH:MM:SS).\n\nOK, let's use a T, with the basic format and a minimal number of\nseparators then, we get 20220603T082255.\n\n> I was thinking this would not include fractional seconds. Maybe that would\n> mean that the TAP tests would need to sleep(1) at some points...\n\nIf we don't split by the millisecond, we would come back to the\nproblems of the original report. On my laptop, the --check phase\nthat passes takes more than 1s, but the one that fails takes 0.1s, so\na follow-up run would complain with the path conflicts. So at the end\nI would reduce the format to be YYYYMMDDTHHMMSS_ms (we could also use\na logic that checks for conflicts and appends an extra number if\nneeded, though the addition of the extra ms is a bit shorter).\n--\nMichael",
"msg_date": "Tue, 7 Jun 2022 08:30:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "On Tue, Jun 07, 2022 at 08:30:47AM +0900, Michael Paquier wrote:\n> If we don't split by the millisecond, we would come back to the\n> problems of the original report. On my laptop, the --check phase\n> that passes takes more than 1s, but the one that fails takes 0.1s, so\n> a follow-up run would complain with the path conflicts. So at the end\n> I would reduce the format to be YYYYMMDDTHHMMSS_ms (we could also use\n> a logic that checks for conflicts and appends an extra number if\n> needed, though the addition of the extra ms is a bit shorter).\n\nSo, attached is the patch I would like to apply for all that (commit\nmessage included). One issue I missed previously is that the TAP test\nmissed the log files on failure, so I had to tweak that with a find\nroutine. I have fixed a few comments, and improved the docs to\ndescribe the directory structure.\n\nWe are still need a refresh of the buildfarm client for the case where\npg_upgrade is tested without TAP, like that I guess:\n--- a/PGBuild/Modules/TestUpgrade.pm\n+++ b/PGBuild/Modules/TestUpgrade.pm\n@@ -140,6 +140,7 @@ sub check\n $self->{pgsql}/src/bin/pg_upgrade/log/*\n $self->{pgsql}/src/bin/pg_upgrade/tmp_check/*/*.diffs\n $self->{pgsql}/src/bin/pg_upgrade/tmp_check/data/pg_upgrade_output.d/log/*\n+ $self->{pgsql}/src/bin/pg_upgrade/tmp_check/data/pg_upgrade_output.d/*/log/*\n $self->{pgsql}/src/test/regress/*.diffs\"\n \t);\n \t$log->add_log($_) foreach (@logfiles);\n--\nMichael",
"msg_date": "Tue, 7 Jun 2022 11:42:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "tather => rather\nis charge => in charge\n\nOn Mon, Jun 06, 2022 at 01:17:52PM +0900, Michael Paquier wrote:\n> but something like %Y%m%d_%H%M%S.%03d (3-digit ms for last part) would\n\nOn Tue, Jun 07, 2022 at 08:30:47AM +0900, Michael Paquier wrote:\n> I would reduce the format to be YYYYMMDDTHHMMSS_ms (we could also use\n\nI think it's better with a dot (HHMMSS.ms) rather than underscore (HHMMSS_ms).\n\nOn Tue, Jun 07, 2022 at 11:42:37AM +0900, Michael Paquier wrote:\n> +\t/* append milliseconds */\n> +\tsnprintf(timebuf, sizeof(timebuf), \"%s_%03d\",\n> +\t\t\t timebuf, (int) (time.tv_usec / 1000));\n\n> + with a timestamp formatted as per ISO 8601\n> + (<literal>%Y%m%dT%H%M%S</literal>) appended by an underscore and\n> + the timestamp's milliseconds, where all the generated files are stored.\n\nThe ISO timestamp can include milliseconds (or apparently fractional parts of\nthe \"lowest-order\" unit), so the \"appended by\" part doesn't need to be\nexplained here.\n\n+ snprintf(timebuf, sizeof(timebuf), \"%s_%03d\",\n+ timebuf, (int) (time.tv_usec / 1000));\n\nIs it really allowed to sprintf a buffer onto itself ?\nI can't find any existing cases doing that.\n\nIt seems useless in any case - you could instead\nsnprintf(timebuf+strlen(timebuf), or increment len+=snprintf()...\n\nOr append the milliseconds here:\n\n+ len = snprintf(log_opts.basedir, MAXPGPATH, \"%s/%s\", log_opts.rootdir,\n+ timebuf);\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 6 Jun 2022 22:11:48 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "On Mon, Jun 06, 2022 at 10:11:48PM -0500, Justin Pryzby wrote:\n> tather => rather\n> is charge => in charge\n\nThanks for the extra read. Fixed. There was an extra one in the\ncomments, as of s/thier/their/.\n\n> I think it's better with a dot (HHMMSS.ms) rather than underscore (HHMMSS_ms).\n>\n> The ISO timestamp can include milliseconds (or apparently fractional parts of\n> the \"lowest-order\" unit), so the \"appended by\" part doesn't need to be\n> explained here.\n> \n> + snprintf(timebuf, sizeof(timebuf), \"%s_%03d\",\n> + timebuf, (int) (time.tv_usec / 1000));\n> \n> Is it really allowed to sprintf a buffer onto itself ?\n> I can't find any existing cases doing that.\n\nYes, there is no need to do that, so I have just appended the ms\ndigits to the end of the string.\n\nAnd applied, to take care of this open item.\n--\nMichael",
"msg_date": "Wed, 8 Jun 2022 10:55:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "On Wed, Jun 08, 2022 at 10:55:29AM +0900, Michael Paquier wrote:\n> And applied, to take care of this open item.\n\nShouldn't this wait for the buildfarm to be updated again ?\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 8 Jun 2022 16:13:37 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "On Wed, Jun 08, 2022 at 04:13:37PM -0500, Justin Pryzby wrote:\n> On Wed, Jun 08, 2022 at 10:55:29AM +0900, Michael Paquier wrote:\n>> And applied, to take care of this open item.\n> \n> Shouldn't this wait for the buildfarm to be updated again ?\n\nThe TAP logic is able to find any logs by itself on failure, so what\nwould be impacted is the case of the tests running pg_upgrade via the\npast route in TestUpgrade.pm (it had better not run in the buildfarm\nclient for 15~ and I am wondering if it would be worth backpatching\nthe TAP test once it brews a bit more). Anyway, seeing my time sheet\nfor the next couple of days coupled with a potential beta2 in the very\nshort term and with the broken upgrade workflow, I have given priority\nto fix the issue because that's what impacts directly people looking\nat 15 and testing their upgrades, which is what Tushar did.\n\nSaying that, I have already sent a pull request to the buildfarm repo\nto refresh the set of logs, as of the patch attached. This updates\nthe logic so as this would work for any changes in the structure of\npg_upgrade_output.d/, fetching any files prefixed by \".log\".\n--\nMichael",
"msg_date": "Thu, 9 Jun 2022 09:53:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "\nOn 2022-06-08 We 20:53, Michael Paquier wrote:\n> On Wed, Jun 08, 2022 at 04:13:37PM -0500, Justin Pryzby wrote:\n>> On Wed, Jun 08, 2022 at 10:55:29AM +0900, Michael Paquier wrote:\n>>> And applied, to take care of this open item.\n>> Shouldn't this wait for the buildfarm to be updated again ?\n> The TAP logic is able to find any logs by itself on failure, so what\n> would be impacted is the case of the tests running pg_upgrade via the\n> past route in TestUpgrade.pm (it had better not run in the buildfarm\n> client for 15~ and I am wondering if it would be worth backpatching\n> the TAP test once it brews a bit more). Anyway, seeing my time sheet\n> for the next couple of days coupled with a potential beta2 in the very\n> short term and with the broken upgrade workflow, I have given priority\n> to fix the issue because that's what impacts directly people looking\n> at 15 and testing their upgrades, which is what Tushar did.\n>\n> Saying that, I have already sent a pull request to the buildfarm repo\n> to refresh the set of logs, as of the patch attached. This updates\n> the logic so as this would work for any changes in the structure of\n> pg_upgrade_output.d/, fetching any files prefixed by \".log\".\n\n\n\n\nThe module is already a noop if there's a TAP test for pg_upgrade. So I\ndon't understand the point of the PR at all.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 10 Jun 2022 17:45:11 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "On Fri, Jun 10, 2022 at 05:45:11PM -0400, Andrew Dunstan wrote:\n> The module is already a noop if there's a TAP test for pg_upgrade. So I\n> don't understand the point of the PR at all.\n\nOh. I thought that the old path was still taken as long as\n--enable-tap-tests was not used. I was wrong, then. I'll go and\nremove the pull request.\n--\nMichael",
"msg_date": "Tue, 14 Jun 2022 11:50:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
},
{
"msg_contents": "\nOn 2022-06-13 Mo 22:50, Michael Paquier wrote:\n> On Fri, Jun 10, 2022 at 05:45:11PM -0400, Andrew Dunstan wrote:\n>> The module is already a noop if there's a TAP test for pg_upgrade. So I\n>> don't understand the point of the PR at all.\n> Oh. I thought that the old path was still taken as long as\n> --enable-tap-tests was not used. I was wrong, then. I'll go and\n> remove the pull request.\n\n\nIt did that from 2018 (826d450), but from 2021(691e649) all it does is\nlook for the TAP test subdirectory. The old logic is still there\nredundantly, so I'll remove it to clean up confusion.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 14 Jun 2022 08:42:55 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [v15 beta] pg_upgrade failed if earlier executed with -c switch"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile studying the issue discussed in thread \"Detaching a partition with a FK\non itself is not possible\"[1], I stumbled across an oddity while attaching a\npartition having the same multiple self-FK than the parent table.\n\nOnly one of the self-FK is found as a duplicate. Find in attachment some SQL to\nreproduce the scenario. Below the result of this scenario (constant from v12 to\ncommit 7e367924e3). Why \"child1_id_abc_no_part_fkey\" is found duplicated but not\nthe three others? From pg_constraint, only \"child1_id_abc_no_part_fkey\" has a\n\"conparentid\" set.\n\n\n conname | conparentid | conrelid | confrelid \n -----------------------------+-------------+----------+-----------\n child1_id_abc_no_part_fkey | 16901 | 16921 | 16921\n child1_id_def_no_part_fkey | 0 | 16921 | 16921\n child1_id_ghi_no_part_fkey | 0 | 16921 | 16921\n child1_id_jkl_no_part_fkey | 0 | 16921 | 16921\n parent_id_abc_no_part_fkey | 16901 | 16921 | 16894\n parent_id_abc_no_part_fkey | 0 | 16894 | 16894\n parent_id_abc_no_part_fkey1 | 16901 | 16894 | 16921\n parent_id_def_no_part_fkey | 16906 | 16921 | 16894\n parent_id_def_no_part_fkey | 0 | 16894 | 16894\n parent_id_def_no_part_fkey1 | 16906 | 16894 | 16921\n parent_id_ghi_no_part_fkey | 0 | 16894 | 16894\n parent_id_ghi_no_part_fkey | 16911 | 16921 | 16894\n parent_id_ghi_no_part_fkey1 | 16911 | 16894 | 16921\n parent_id_jkl_no_part_fkey | 0 | 16894 | 16894\n parent_id_jkl_no_part_fkey | 16916 | 16921 | 16894\n parent_id_jkl_no_part_fkey1 | 16916 | 16894 | 16921\n (16 rows)\n\n\n Table \"public.child1\"\n [...]\n Partition of: parent FOR VALUES IN ('1')\n Partition constraint: ((no_part IS NOT NULL) AND (no_part = '1'::smallint))\n Indexes:\n \"child1_pkey\" PRIMARY KEY, btree (id, no_part)\n Check constraints:\n \"child1\" CHECK (no_part = 1)\n Foreign-key constraints:\n \"child1_id_def_no_part_fkey\"\n FOREIGN KEY (id_def, no_part)\n REFERENCES child1(id, no_part) ON UPDATE RESTRICT ON DELETE
RESTRICT\n \"child1_id_ghi_no_part_fkey\"\n FOREIGN KEY (id_ghi, no_part)\n REFERENCES child1(id, no_part) ON UPDATE RESTRICT ON DELETE RESTRICT\n \"child1_id_jkl_no_part_fkey\"\n FOREIGN KEY (id_jkl, no_part)\n REFERENCES child1(id, no_part) ON UPDATE RESTRICT ON DELETE RESTRICT\n TABLE \"parent\" CONSTRAINT \"parent_id_abc_no_part_fkey\"\n FOREIGN KEY (id_abc, no_part)\n REFERENCES parent(id, no_part) ON UPDATE RESTRICT ON DELETE RESTRICT\n TABLE \"parent\" CONSTRAINT \"parent_id_def_no_part_fkey\"\n FOREIGN KEY (id_def, no_part)\n REFERENCES parent(id, no_part) ON UPDATE RESTRICT ON DELETE RESTRICT\n TABLE \"parent\" CONSTRAINT \"parent_id_ghi_no_part_fkey\"\n FOREIGN KEY (id_ghi, no_part)\n REFERENCES parent(id, no_part) ON UPDATE RESTRICT ON DELETE RESTRICT\n TABLE \"parent\" CONSTRAINT \"parent_id_jkl_no_part_fkey\"\n FOREIGN KEY (id_jkl, no_part)\n REFERENCES parent(id, no_part) ON UPDATE RESTRICT ON DELETE RESTRICT\n Referenced by:\n TABLE \"child1\" CONSTRAINT \"child1_id_def_no_part_fkey\"\n FOREIGN KEY (id_def, no_part)\n REFERENCES child1(id, no_part) ON UPDATE RESTRICT ON DELETE RESTRICT\n TABLE \"child1\" CONSTRAINT \"child1_id_ghi_no_part_fkey\"\n FOREIGN KEY (id_ghi, no_part)\n REFERENCES child1(id, no_part) ON UPDATE RESTRICT ON DELETE RESTRICT\n TABLE \"child1\" CONSTRAINT \"child1_id_jkl_no_part_fkey\"\n FOREIGN KEY (id_jkl, no_part)\n REFERENCES child1(id, no_part) ON UPDATE RESTRICT ON DELETE RESTRICT\n TABLE \"parent\" CONSTRAINT \"parent_id_abc_no_part_fkey\"\n FOREIGN KEY (id_abc, no_part)\n REFERENCES parent(id, no_part) ON UPDATE RESTRICT ON DELETE RESTRICT\n TABLE \"parent\" CONSTRAINT \"parent_id_def_no_part_fkey\"\n FOREIGN KEY (id_def, no_part)\n REFERENCES parent(id, no_part) ON UPDATE RESTRICT ON DELETE RESTRICT\n TABLE \"parent\" CONSTRAINT \"parent_id_ghi_no_part_fkey\"\n FOREIGN KEY (id_ghi, no_part)\n REFERENCES parent(id, no_part) ON UPDATE RESTRICT ON DELETE RESTRICT\n TABLE \"parent\" CONSTRAINT
\"parent_id_jkl_no_part_fkey\"\n FOREIGN KEY (id_jkl, no_part)\n REFERENCES parent(id, no_part) ON UPDATE RESTRICT ON DELETE RESTRICT\n\nRegards,\n\n[1]\nhttps://www.postgresql.org/message-id/flat/20220321113634.68c09d4b%40karst#83c0880a1b4921fcd00d836d4e6bceb3",
"msg_date": "Fri, 3 Jun 2022 15:42:32 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Self FK oddity when attaching a partition"
},
{
"msg_contents": "Hi all,\n\nI've been able to work on this issue and isolate where in the code the oddity\nis laying.\n\nDuring ATExecAttachPartition(), AttachPartitionEnsureIndexes() look for existing\nrequired index on the partition to attach. It creates missing index, or sets the\nparent's index when a matching one exists on the partition. Good.\n\nWhen a matching index is found, if the parent index enforce a constraint, the\nfunction look for the similar constraint in the partition-to-be, and set the\nconstraint parent as well:\n\n\tconstraintOid = get_relation_idx_constraint_oid(RelationGetRelid(rel),\n\t\t\t\t\t\t\tidx);\n\n\t[...]\n\n\t/*\n\t * If this index is being created in the parent because of a\n\t * constraint, then the child needs to have a constraint also,\n\t * so look for one. If there is no such constraint, this\n\t * index is no good, so keep looking.\n\t */\n\tif (OidIsValid(constraintOid))\n\t{\n\t\tcldConstrOid = get_relation_idx_constraint_oid(\n\t\t\t\t\tRelationGetRelid(attachrel),\n\t\t\t\t\tcldIdxId);\n\t\t/* no dice */\n\t\tif (!OidIsValid(cldConstrOid))\n\t\t\tcontinue;\n\t }\n\t /* bingo. */\n\t IndexSetParentIndex(attachrelIdxRels[i], idx);\n\t if (OidIsValid(constraintOid))\n\t\tConstraintSetParentConstraint(cldConstrOid, constraintOid,\n\t\t\t\t\t RelationGetRelid(attachrel));\n\nHowever, it seems get_relation_idx_constraint_oid(), introduced in eb7ed3f3063,\nassume there could be only ONE constraint depending to an index. But in fact,\nmultiple constraints can rely on the same index, eg.: the PK and a self\nreferencing FK. In consequence, when looking for a constraint depending on an\nindex for the given relation, either the FK or a PK can appears first depending\non various conditions. 
It is then possible to trick it make a FK constraint a\nparent of a PK...\n\nIn the following little scenario, when looking at the constraint linked to\nthe PK unique index using the same index than get_relation_idx_constraint_oid\nuse, this is the self-FK that is actually returned first by\nget_relation_idx_constraint_oid(), NOT the PK:\n\n postgres=# DROP TABLE IF EXISTS parent, child1;\n \n CREATE TABLE parent (\n id bigint NOT NULL default 1,\n no_part smallint NOT NULL,\n id_abc bigint,\n FOREIGN KEY (id_abc, no_part) REFERENCES parent(id, no_part)\n ON UPDATE RESTRICT ON DELETE RESTRICT,\n PRIMARY KEY (id, no_part)\n )\n PARTITION BY LIST (no_part);\n \n CREATE TABLE child1 (\n id bigint NOT NULL default 1,\n no_part smallint NOT NULL,\n id_abc bigint,\n PRIMARY KEY (id, no_part),\n CONSTRAINT child1 CHECK ((no_part = 1))\n );\n \n -- force an indexscan as get_relation_idx_constraint_oid() use the unique\n -- index on (conrelid, contypid, conname) to scan pg_cosntraint\n set enable_seqscan TO off;\n set enable_bitmapscan TO off;\n \n SELECT conname\n FROM pg_constraint\n WHERE conrelid = 'parent'::regclass <=== parent\n AND conindid = 'parent_pkey'::regclass; <=== PK index\n \n DROP TABLE\n CREATE TABLE\n CREATE TABLE\n SET\n SET\n conname \n ----------------------------\n parent_id_abc_no_part_fkey <==== WOOPS!\n parent_pkey\n (2 rows)\n\nIn consequence, when attaching the partition, the PK of child1 is not marked as\npartition of the parent's PK, which is wrong. 
WORST, the PK of child1 is\nactually unexpectedly marked as a partition of the parent's **self-FK**:\n\n postgres=# ALTER TABLE ONLY parent ATTACH PARTITION child1 \n FOR VALUES IN ('1');\n \n SELECT oid, conname, conparentid, conrelid, confrelid\n FROM pg_constraint\n WHERE conrelid in ('parent'::regclass, 'child1'::regclass) \n ORDER BY 1;\n \n ALTER TABLE\n oid | conname | conparentid | conrelid | confrelid \n -------+-----------------------------+-------------+----------+-----------\n 16700 | parent_pkey | 0 | 16695 | 0\n 16701 | parent_id_abc_no_part_fkey | 0 | 16695 | 16695\n 16706 | child1 | 0 | 16702 | 0\n 16708 | **child1_pkey** | **16701** | 16702 | 0\n 16709 | parent_id_abc_no_part_fkey1 | 16701 | 16695 | 16702\n 16712 | parent_id_abc_no_part_fkey | 16701 | 16702 | 16695\n (6 rows)\n\nThe expected result should probably be something like:\n\n oid | conname | conparentid | conrelid | confrelid \n -------+-----------------------------+-------------+----------+-----------\n 16700 | parent_pkey | 0 | 16695 | 0\n ...\n 16708 | child1_pkey | 16700 | 16702 | 0\n\n\nI suppose this bug might exists in ATExecAttachPartitionIdx(),\nDetachPartitionFinalize() and DefineIndex() where there's similar code and logic\nusing get_relation_idx_constraint_oid(). I didn't check for potential bugs there\nthough.\n\nI'm not sure yet of how this bug should be fixed. Any comment?\n\nRegards,\n\n\n",
"msg_date": "Tue, 23 Aug 2022 17:07:37 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "[BUG] parenting a PK constraint to a self-FK one (Was: Self FK oddity when attaching a partition)"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 8:07 AM Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\nwrote:\n\n> Hi all,\n>\n> I've been able to work on this issue and isolate where in the code the\n> oddity\n> is laying.\n>\n> During ATExecAttachPartition(), AttachPartitionEnsureIndexes() look for\n> existing\n> required index on the partition to attach. It creates missing index, or\n> sets the\n> parent's index when a matching one exists on the partition. Good.\n>\n> When a matching index is found, if the parent index enforce a constraint,\n> the\n> function look for the similar constraint in the partition-to-be, and set\n> the\n> constraint parent as well:\n>\n> constraintOid =\n> get_relation_idx_constraint_oid(RelationGetRelid(rel),\n> idx);\n>\n> [...]\n>\n> /*\n> * If this index is being created in the parent because of a\n> * constraint, then the child needs to have a constraint also,\n> * so look for one. If there is no such constraint, this\n> * index is no good, so keep looking.\n> */\n> if (OidIsValid(constraintOid))\n> {\n> cldConstrOid = get_relation_idx_constraint_oid(\n> RelationGetRelid(attachrel),\n> cldIdxId);\n> /* no dice */\n> if (!OidIsValid(cldConstrOid))\n> continue;\n> }\n> /* bingo. */\n> IndexSetParentIndex(attachrelIdxRels[i], idx);\n> if (OidIsValid(constraintOid))\n> ConstraintSetParentConstraint(cldConstrOid, constraintOid,\n> RelationGetRelid(attachrel));\n>\n> However, it seems get_relation_idx_constraint_oid(), introduced in\n> eb7ed3f3063,\n> assume there could be only ONE constraint depending to an index. But in\n> fact,\n> multiple constraints can rely on the same index, eg.: the PK and a self\n> referencing FK. In consequence, when looking for a constraint depending on\n> an\n> index for the given relation, either the FK or a PK can appears first\n> depending\n> on various conditions. 
It is then possible to trick it make a FK\n> constraint a\n> parent of a PK...\n>\n> In the following little scenario, when looking at the constraint linked to\n> the PK unique index using the same index than\n> get_relation_idx_constraint_oid\n> use, this is the self-FK that is actually returned first by\n> get_relation_idx_constraint_oid(), NOT the PK:\n>\n> postgres=# DROP TABLE IF EXISTS parent, child1;\n>\n> CREATE TABLE parent (\n> id bigint NOT NULL default 1,\n> no_part smallint NOT NULL,\n> id_abc bigint,\n> FOREIGN KEY (id_abc, no_part) REFERENCES parent(id, no_part)\n> ON UPDATE RESTRICT ON DELETE RESTRICT,\n> PRIMARY KEY (id, no_part)\n> )\n> PARTITION BY LIST (no_part);\n>\n> CREATE TABLE child1 (\n> id bigint NOT NULL default 1,\n> no_part smallint NOT NULL,\n> id_abc bigint,\n> PRIMARY KEY (id, no_part),\n> CONSTRAINT child1 CHECK ((no_part = 1))\n> );\n>\n> -- force an indexscan as get_relation_idx_constraint_oid() use the\n> unique\n> -- index on (conrelid, contypid, conname) to scan pg_cosntraint\n> set enable_seqscan TO off;\n> set enable_bitmapscan TO off;\n>\n> SELECT conname\n> FROM pg_constraint\n> WHERE conrelid = 'parent'::regclass <=== parent\n> AND conindid = 'parent_pkey'::regclass; <=== PK index\n>\n> DROP TABLE\n> CREATE TABLE\n> CREATE TABLE\n> SET\n> SET\n> conname\n> ----------------------------\n> parent_id_abc_no_part_fkey <==== WOOPS!\n> parent_pkey\n> (2 rows)\n>\n> In consequence, when attaching the partition, the PK of child1 is not\n> marked as\n> partition of the parent's PK, which is wrong. 
WORST, the PK of child1 is\n> actually unexpectedly marked as a partition of the parent's **self-FK**:\n>\n> postgres=# ALTER TABLE ONLY parent ATTACH PARTITION child1\n> FOR VALUES IN ('1');\n>\n> SELECT oid, conname, conparentid, conrelid, confrelid\n> FROM pg_constraint\n> WHERE conrelid in ('parent'::regclass, 'child1'::regclass)\n> ORDER BY 1;\n>\n> ALTER TABLE\n> oid | conname | conparentid | conrelid |\n> confrelid\n>\n> -------+-----------------------------+-------------+----------+-----------\n> 16700 | parent_pkey | 0 | 16695 | 0\n> 16701 | parent_id_abc_no_part_fkey | 0 | 16695 | 16695\n> 16706 | child1 | 0 | 16702 | 0\n> 16708 | **child1_pkey** | **16701** | 16702 | 0\n> 16709 | parent_id_abc_no_part_fkey1 | 16701 | 16695 | 16702\n> 16712 | parent_id_abc_no_part_fkey | 16701 | 16702 | 16695\n> (6 rows)\n>\n> The expected result should probably be something like:\n>\n> oid | conname | conparentid | conrelid |\n> confrelid\n>\n> -------+-----------------------------+-------------+----------+-----------\n> 16700 | parent_pkey | 0 | 16695 | 0\n> ...\n> 16708 | child1_pkey | 16700 | 16702 | 0\n>\n>\n> I suppose this bug might exists in ATExecAttachPartitionIdx(),\n> DetachPartitionFinalize() and DefineIndex() where there's similar code and\n> logic\n> using get_relation_idx_constraint_oid(). I didn't check for potential bugs\n> there\n> though.\n>\n> I'm not sure yet of how this bug should be fixed. 
Any comment?\n>\n> Regards,\n>\n> Hi,\nIn this case the confrelid field of FormData_pg_constraint for the first\nconstraint would carry a valid Oid.\nCan we use this information and continue searching in\nget_relation_idx_constraint_oid() until an entry with 0 confrelid is found ?\nIf there is no such (secondary) entry, we return the first entry.\n\nCheers",
"msg_date": "Tue, 23 Aug 2022 08:56:59 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK oddity when attaching a partition)"
},
{
"msg_contents": "On 2022-Aug-23, Jehan-Guillaume de Rorthais wrote:\n\nHi,\n\n[...]\n\n> However, it seems get_relation_idx_constraint_oid(), introduced in eb7ed3f3063,\n> assume there could be only ONE constraint depending to an index. But in fact,\n> multiple constraints can rely on the same index, eg.: the PK and a self\n> referencing FK. In consequence, when looking for a constraint depending on an\n> index for the given relation, either the FK or a PK can appears first depending\n> on various conditions. It is then possible to trick it make a FK constraint a\n> parent of a PK...\n\nHmm, wow, that sounds extremely stupid. I think a sufficient fix might\nbe to have get_relation_idx_constraint_oid ignore any constraints that\nare not unique or primary keys. I tried your scenario with the attached\nand it seems to work correctly. Can you confirm? (I only ran the\npg_regress tests, not anything else for now.)\n\nIf this is OK, we should make this API quirkiness very explicit in the\ncomments, so the patch needs to be a few lines larger in order to be\ncommittable. Also, perhaps the check should be that contype equals\neither primary or unique, rather than it doesn't equal foreign.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 23 Aug 2022 18:30:06 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK oddity when attaching a partition)"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 9:30 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Aug-23, Jehan-Guillaume de Rorthais wrote:\n>\n> Hi,\n>\n> [...]\n>\n> > However, it seems get_relation_idx_constraint_oid(), introduced in\n> eb7ed3f3063,\n> > assume there could be only ONE constraint depending to an index. But in\n> fact,\n> > multiple constraints can rely on the same index, eg.: the PK and a self\n> > referencing FK. In consequence, when looking for a constraint depending\n> on an\n> > index for the given relation, either the FK or a PK can appears first\n> depending\n> > on various conditions. It is then possible to trick it make a FK\n> constraint a\n> > parent of a PK...\n>\n> Hmm, wow, that sounds extremely stupid. I think a sufficient fix might\n> be to have get_relation_idx_constraint_oid ignore any constraints that\n> are not unique or primary keys. I tried your scenario with the attached\n> and it seems to work correctly. Can you confirm? (I only ran the\n> pg_regress tests, not anything else for now.)\n>\n> If this is OK, we should make this API quirkiness very explicit in the\n> comments, so the patch needs to be a few lines larger in order to be\n> committable. Also, perhaps the check should be that contype equals\n> either primary or unique, rather than it doesn't equal foreign.\n>\n> --\n> Álvaro Herrera 48°01'N 7°57'E —\n> https://www.EnterpriseDB.com/\n\n\nI was thinking of the following patch.\nBasically, if there is only one matching constraint. 
we still return it.\n\ndiff --git a/src/postgres/src/backend/catalog/pg_constraint.c\nb/src/postgres/src/backend/catalog/pg_constraint.c\nindex f0726e9aa0..ddade138b4 100644\n--- a/src/postgres/src/backend/catalog/pg_constraint.c\n+++ b/src/postgres/src/backend/catalog/pg_constraint.c\n@@ -1003,7 +1003,8 @@ get_relation_idx_constraint_oid(Oid relationId, Oid\nindexId)\n constrForm = (Form_pg_constraint) GETSTRUCT(tuple);\n if (constrForm->conindid == indexId)\n {\n- constraintId = HeapTupleGetOid(tuple);\n+ if (constraintId == InvalidOid || constrForm->confrelid == 0)\n+ constraintId = HeapTupleGetOid(tuple);\n break;\n }\n }",
"msg_date": "Tue, 23 Aug 2022 09:42:06 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK oddity when attaching a partition)"
},
{
"msg_contents": "On 2022-Aug-23, Zhihong Yu wrote:\n\n> I was thinking of the following patch.\n> Basically, if there is only one matching constraint. we still return it.\n> \n> diff --git a/src/postgres/src/backend/catalog/pg_constraint.c\n> b/src/postgres/src/backend/catalog/pg_constraint.c\n> index f0726e9aa0..ddade138b4 100644\n> --- a/src/postgres/src/backend/catalog/pg_constraint.c\n> +++ b/src/postgres/src/backend/catalog/pg_constraint.c\n> @@ -1003,7 +1003,8 @@ get_relation_idx_constraint_oid(Oid relationId, Oid\n> indexId)\n> constrForm = (Form_pg_constraint) GETSTRUCT(tuple);\n> if (constrForm->conindid == indexId)\n> {\n> - constraintId = HeapTupleGetOid(tuple);\n> + if (constraintId == InvalidOid || constrForm->confrelid == 0)\n> + constraintId = HeapTupleGetOid(tuple);\n> break;\n> }\n> }\n\nWe could do this, but what do we gain by doing so? It seems to me that\nmy proposed formulation achieves the same and is less fuzzy about what\nthe returned constraint is. Please try to write a code comment that\nexplains what this does and see if it makes sense.\n\nFor my proposal, it would be \"return the OID of a primary key or unique\nconstraint associated with the given index in the given relation, or OID\nif no such index is catalogued\". This definition is clearly useful for\npartitioned tables, on which the unique and primary key constraints are\nuseful elements. There's nothing that cares about foreign keys.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La virtud es el justo medio entre dos defectos\" (Aristóteles)\n\n\n",
"msg_date": "Tue, 23 Aug 2022 18:47:12 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK oddity when attaching a partition)"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 9:47 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Aug-23, Zhihong Yu wrote:\n>\n> > I was thinking of the following patch.\n> > Basically, if there is only one matching constraint. we still return it.\n> >\n> > diff --git a/src/postgres/src/backend/catalog/pg_constraint.c\n> > b/src/postgres/src/backend/catalog/pg_constraint.c\n> > index f0726e9aa0..ddade138b4 100644\n> > --- a/src/postgres/src/backend/catalog/pg_constraint.c\n> > +++ b/src/postgres/src/backend/catalog/pg_constraint.c\n> > @@ -1003,7 +1003,8 @@ get_relation_idx_constraint_oid(Oid relationId, Oid\n> > indexId)\n> > constrForm = (Form_pg_constraint) GETSTRUCT(tuple);\n> > if (constrForm->conindid == indexId)\n> > {\n> > - constraintId = HeapTupleGetOid(tuple);\n> > + if (constraintId == InvalidOid || constrForm->confrelid == 0)\n> > + constraintId = HeapTupleGetOid(tuple);\n> > break;\n> > }\n> > }\n>\n> We could do this, but what do we gain by doing so? It seems to me that\n> my proposed formulation achieves the same and is less fuzzy about what\n> the returned constraint is. Please try to write a code comment that\n> explains what this does and see if it makes sense.\n>\n> For my proposal, it would be \"return the OID of a primary key or unique\n> constraint associated with the given index in the given relation, or OID\n> if no such index is catalogued\". This definition is clearly useful for\n> partitioned tables, on which the unique and primary key constraints are\n> useful elements. 
There's nothing that cares about foreign keys.\n>\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n> \"La virtud es el justo medio entre dos defectos\" (Aristóteles)\n>\n\nA bigger question I have, even with the additional filtering, is what if\nthere are multiple constraints ?\nHow do we decide which unique / primary key constraint to return ?\n\nLooks like there is no known SQL statements leading to such state, but\nshould we consider such possibility ?\n\nCheers",
"msg_date": "Tue, 23 Aug 2022 09:57:35 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK oddity when attaching a partition)"
},
{
"msg_contents": "On 2022-Aug-23, Zhihong Yu wrote:\n\n> A bigger question I have, even with the additional filtering, is what if\n> there are multiple constraints ?\n> How do we decide which unique / primary key constraint to return ?\n> \n> Looks like there is no known SQL statements leading to such state, but\n> should we consider such possibility ?\n\nI don't think we care, but feel free to experiment and report any\nproblems. You should be able to have multiple UNIQUE constraints on the\nsame column, for example.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Postgres is bloatware by design: it was built to house\n PhD theses.\" (Joey Hellerstein, SIGMOD annual conference 2002)\n\n\n",
"msg_date": "Tue, 23 Aug 2022 19:50:59 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK oddity when attaching a partition)"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> If this is OK, we should make this API quirkiness very explicit in the\n> comments, so the patch needs to be a few lines larger in order to be\n> committable. Also, perhaps the check should be that contype equals\n> either primary or unique, rather than it doesn't equal foreign.\n\nYeah. See lsyscache.c's get_constraint_index(), as well as commit\n641f3dffc which fixed a mighty similar-seeming bug. One question\nthat precedent raises is whether to also include CONSTRAINT_EXCLUSION.\nBut in any case a positive test for the constraint types to allow\nseems best.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Aug 2022 14:00:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK oddity when attaching a partition)"
},
{
"msg_contents": "On Tue, 23 Aug 2022 18:30:06 +0200\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2022-Aug-23, Jehan-Guillaume de Rorthais wrote:\n> \n> Hi,\n> \n> [...]\n> \n> > However, it seems get_relation_idx_constraint_oid(), introduced in\n> > eb7ed3f3063, assume there could be only ONE constraint depending to an\n> > index. But in fact, multiple constraints can rely on the same index, eg.:\n> > the PK and a self referencing FK. In consequence, when looking for a\n> > constraint depending on an index for the given relation, either the FK or a\n> > PK can appears first depending on various conditions. It is then possible\n> > to trick it make a FK constraint a parent of a PK... \n> \n> Hmm, wow, that sounds extremely stupid. I think a sufficient fix might\n> be to have get_relation_idx_constraint_oid ignore any constraints that\n> are not unique or primary keys. I tried your scenario with the attached\n> and it seems to work correctly. Can you confirm? (I only ran the\n> pg_regress tests, not anything else for now.)\n>\n> If this is OK, we should make this API quirkiness very explicit in the\n> comments, so the patch needs to be a few lines larger in order to be\n> committable. Also, perhaps the check should be that contype equals\n> either primary or unique, rather than it doesn't equal foreign.\n\nI was naively wondering about such a patch, but was worrying about potential\nside effects on ATExecAttachPartitionIdx(), DetachPartitionFinalize() and\nDefineIndex() where I didn't had a single glance. 
Did you had a look?\n\nI did a quick ATTACH + DETACH test, and it seems DETACH partly fails with its\nhousecleaning:\n\n DROP TABLE IF EXISTS parent, child1;\n \n CREATE TABLE parent (\n id bigint NOT NULL default 1,\n no_part smallint NOT NULL,\n id_abc bigint,\n FOREIGN KEY (id_abc, no_part) REFERENCES parent(id, no_part)\n ON UPDATE RESTRICT ON DELETE RESTRICT,\n PRIMARY KEY (id, no_part)\n )\n PARTITION BY LIST (no_part);\n \n CREATE TABLE child1 (\n id bigint NOT NULL default 1,\n no_part smallint NOT NULL,\n id_abc bigint,\n PRIMARY KEY (id, no_part),\n CONSTRAINT child1 CHECK ((no_part = 1))\n );\n\n \\C 'Before ATTACH'\n SELECT oid, conname, conparentid, conrelid, confrelid\n FROM pg_constraint\n WHERE conrelid in ('parent'::regclass, 'child1'::regclass)\n ORDER BY 1;\n\n ALTER TABLE parent ATTACH PARTITION child1 FOR VALUES IN ('1');\n\n \\C 'After ATTACH'\n SELECT oid, conname, conparentid, conrelid, confrelid\n FROM pg_constraint\n WHERE conrelid in ('parent'::regclass, 'child1'::regclass)\n ORDER BY 1;\n\n ALTER TABLE parent DETACH PARTITION child1;\n\n \\C 'After DETACH'\n SELECT oid, conname, conparentid, conrelid, confrelid\n FROM pg_constraint\n WHERE conrelid in ('parent'::regclass, 'child1'::regclass)\n ORDER BY 1;\n\n\n Before ATTACH\n oid | conname | conparentid | conrelid | confrelid \n -------+----------------------------+-------------+----------+-----------\n 24711 | parent_pkey | 0 | 24706 | 0\n 24712 | parent_id_abc_no_part_fkey | 0 | 24706 | 24706\n 24721 | child1 | 0 | 24717 | 0\n 24723 | child1_pkey | 0 | 24717 | 0\n (4 rows)\n \n After ATTACH\n oid | conname | conparentid | conrelid | confrelid \n -------+-----------------------------+-------------+----------+-----------\n 24711 | parent_pkey | 0 | 24706 | 0\n 24712 | parent_id_abc_no_part_fkey | 0 | 24706 | 24706\n 24721 | child1 | 0 | 24717 | 0\n 24723 | child1_pkey | 24711 | 24717 | 0\n 24724 | parent_id_abc_no_part_fkey1 | 24712 | 24706 | 24717\n 24727 | parent_id_abc_no_part_fkey | 
24712 | 24717 | 24706\n (6 rows)\n \n After DETACH\n oid | conname | conparentid | conrelid | confrelid \n -------+----------------------------+-------------+----------+-----------\n 24711 | parent_pkey | 0 | 24706 | 0\n 24712 | parent_id_abc_no_part_fkey | 0 | 24706 | 24706\n 24721 | child1 | 0 | 24717 | 0\n 24723 | child1_pkey | 0 | 24717 | 0\n 24727 | parent_id_abc_no_part_fkey | 0 | 24717 | 24706\n (5 rows)\n\nLooking for few minutes in ATExecDetachPartitionFinalize(), it seems it only\nsupport removing the parental link on FK, not to clean the FKs added during the\nATTACH DDL anyway. That explains the FK child1->parent left behind. But in\nfact, this let me wonder if this part of the code ever considered implication\nof self-FK during the ATTACH and DETACH process? Why in the first place TWO FK\nare created during the ATTACH DDL?\n\nRegards,\n\n\n",
"msg_date": "Wed, 24 Aug 2022 12:28:50 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK\n oddity when attaching a partition)"
},
{
"msg_contents": "On 2022-Aug-24, Jehan-Guillaume de Rorthais wrote:\n\n> I was naively wondering about such a patch, but was worrying about potential\n> side effects on ATExecAttachPartitionIdx(), DetachPartitionFinalize() and\n> DefineIndex() where I didn't had a single glance. Did you had a look?\n\nNo. But AFAIR all the code there is supposed to worry about unique\nconstraints and PK only, not FKs. So if something changes, then most \nlikely it was wrong to begin with.\n\n> I did a quick ATTACH + DETACH test, and it seems DETACH partly fails with its\n> housecleaning:\n\nUgh. More fixes required, then.\n\n> Looking for few minutes in ATExecDetachPartitionFinalize(), it seems it only\n> support removing the parental link on FK, not to clean the FKs added during the\n> ATTACH DDL anyway. That explains the FK child1->parent left behind. But in\n> fact, this let me wonder if this part of the code ever considered implication\n> of self-FK during the ATTACH and DETACH process?\n\nNo, or at least I don't remember thinking about self-referencing FKs.\nIf there are no tests for it, then that's likely what happened.\n\n> Why in the first place TWO FK are created during the ATTACH DDL?\n\nThat's probably a bug too.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The eagle never lost so much time, as\nwhen he submitted to learn of the crow.\" (William Blake)\n\n\n",
"msg_date": "Wed, 24 Aug 2022 12:49:13 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK\n oddity when attaching a partition)"
},
{
"msg_contents": "Hi,\n\nWhile studying and hacking on the parenting constraint issue, I found an\nincoherent piece of code leading to badly chosen fk name. If a constraint\nname collision is detected, while choosing a new name for the constraint,\nthe code uses fkconstraint->fk_attrs which is not yet populated:\n\n /* No dice. Set up to create our own constraint */\n fkconstraint = makeNode(Constraint);\n if (ConstraintNameIsUsed(CONSTRAINT_RELATION,\n RelationGetRelid(partRel),\n NameStr(constrForm->conname)))\n fkconstraint->conname =\n ChooseConstraintName(RelationGetRelationName(partRel),\n ChooseForeignKeyConstraintNameAddition(\n fkconstraint->fk_attrs), // <= WOO000OPS\n \"fkey\",\n RelationGetNamespace(partRel), NIL);\n else\n fkconstraint->conname = pstrdup(NameStr(constrForm->conname));\n fkconstraint->fk_upd_action = constrForm->confupdtype;\n fkconstraint->fk_del_action = constrForm->confdeltype;\n fkconstraint->deferrable = constrForm->condeferrable;\n fkconstraint->initdeferred = constrForm->condeferred;\n fkconstraint->fk_matchtype = constrForm->confmatchtype;\n for (int i = 0; i < numfks; i++)\n {\n Form_pg_attribute att;\n \n att = TupleDescAttr(RelationGetDescr(partRel),\n mapped_conkey[i] - 1);\n fkconstraint->fk_attrs = lappend(fkconstraint->fk_attrs, // <= POPULATING\n makeString(NameStr(att->attname)));\n }\n\nThe following SQL script showcase the bad constraint name:\n\n DROP TABLE IF EXISTS parent, child1;\n \n CREATE TABLE parent (\n id bigint NOT NULL default 1,\n no_part smallint NOT NULL,\n id_abc bigint,\n CONSTRAINT dummy_constr FOREIGN KEY (id_abc, no_part)\n REFERENCES parent(id, no_part) ON UPDATE RESTRICT ON DELETE RESTRICT,\n PRIMARY KEY (id, no_part)\n )\n PARTITION BY LIST (no_part);\n \n CREATE TABLE child1 (\n id bigint NOT NULL default 1,\n no_part smallint NOT NULL,\n id_abc bigint,\n PRIMARY KEY (id, no_part),\n CONSTRAINT dummy_constr CHECK ((no_part = 1))\n );\n\n ALTER TABLE parent ATTACH PARTITION child1 FOR VALUES IN 
('1');\n\n SELECT conname\n FROM pg_constraint\n WHERE conrelid = 'child1'::regclass\n AND contype = 'f';\n\n DROP TABLE\n CREATE TABLE\n CREATE TABLE\n ALTER TABLE\n \n conname \n --------------\n child1__fkey\n (1 row)\n\nThe resulting constraint name \"child1__fkey\" is missing the attributes name the\noriginal code wanted to add. The expected name is \"child1_id_abc_no_part_fkey\".\n\nFind in attachment a simple fix, moving the name assignation after the\nFK attributes are populated.\n\nRegards,",
"msg_date": "Thu, 1 Sep 2022 18:41:56 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "[BUG] wrong FK constraint name when colliding name on ATTACH"
},
{
"msg_contents": "Hi there,\n\nI believe this very small bug and its fix are really trivial and could be push\nout of the way quite quickly. It's just about a bad constraint name fixed by\nmoving one assignation after the next one. This could easily be fixed for next\nround of releases.\n\nWell, I hope I'm not wrong :)\n\nRegards,\n\nOn Thu, 1 Sep 2022 18:41:56 +0200\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n\n> While studying and hacking on the parenting constraint issue, I found an\n> incoherent piece of code leading to badly chosen fk name. If a constraint\n> name collision is detected, while choosing a new name for the constraint,\n> the code uses fkconstraint->fk_attrs which is not yet populated:\n> \n> /* No dice. Set up to create our own constraint */\n> fkconstraint = makeNode(Constraint);\n> if (ConstraintNameIsUsed(CONSTRAINT_RELATION,\n> RelationGetRelid(partRel),\n> NameStr(constrForm->conname)))\n> fkconstraint->conname =\n> ChooseConstraintName(RelationGetRelationName(partRel),\n> ChooseForeignKeyConstraintNameAddition(\n> fkconstraint->fk_attrs), // <= WOO000OPS\n> \"fkey\",\n> RelationGetNamespace(partRel), NIL);\n> else\n> fkconstraint->conname = pstrdup(NameStr(constrForm->conname));\n> fkconstraint->fk_upd_action = constrForm->confupdtype;\n> fkconstraint->fk_del_action = constrForm->confdeltype;\n> fkconstraint->deferrable = constrForm->condeferrable;\n> fkconstraint->initdeferred = constrForm->condeferred;\n> fkconstraint->fk_matchtype = constrForm->confmatchtype;\n> for (int i = 0; i < numfks; i++)\n> {\n> Form_pg_attribute att;\n> \n> att = TupleDescAttr(RelationGetDescr(partRel),\n> mapped_conkey[i] - 1);\n> fkconstraint->fk_attrs = lappend(fkconstraint->fk_attrs, // <=\n> POPULATING makeString(NameStr(att->attname)));\n> }\n> \n> The following SQL script showcase the bad constraint name:\n> \n> DROP TABLE IF EXISTS parent, child1;\n> \n> CREATE TABLE parent (\n> id bigint NOT NULL default 1,\n> no_part smallint NOT NULL,\n> 
id_abc bigint,\n> CONSTRAINT dummy_constr FOREIGN KEY (id_abc, no_part)\n> REFERENCES parent(id, no_part) ON UPDATE RESTRICT ON DELETE\n> RESTRICT, PRIMARY KEY (id, no_part)\n> )\n> PARTITION BY LIST (no_part);\n> \n> CREATE TABLE child1 (\n> id bigint NOT NULL default 1,\n> no_part smallint NOT NULL,\n> id_abc bigint,\n> PRIMARY KEY (id, no_part),\n> CONSTRAINT dummy_constr CHECK ((no_part = 1))\n> );\n> \n> ALTER TABLE parent ATTACH PARTITION child1 FOR VALUES IN ('1');\n> \n> SELECT conname\n> FROM pg_constraint\n> WHERE conrelid = 'child1'::regclass\n> AND contype = 'f';\n> \n> DROP TABLE\n> CREATE TABLE\n> CREATE TABLE\n> ALTER TABLE\n> \n> conname \n> --------------\n> child1__fkey\n> (1 row)\n> \n> The resulting constraint name \"child1__fkey\" is missing the attributes name\n> the original code wanted to add. The expected name is\n> \"child1_id_abc_no_part_fkey\".\n> \n> Find in attachment a simple fix, moving the name assignation after the\n> FK attributes are populated.\n> \n> Regards,\n\n\n\n",
"msg_date": "Thu, 8 Sep 2022 09:40:26 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] wrong FK constraint name when colliding name on ATTACH"
},
{
"msg_contents": "On 2022-Sep-08, Jehan-Guillaume de Rorthais wrote:\n\n> Hi there,\n> \n> I believe this very small bug and its fix are really trivial and could be push\n> out of the way quite quickly. It's just about a bad constraint name fixed by\n> moving one assignation after the next one. This could easily be fixed for next\n> round of releases.\n> \n> Well, I hope I'm not wrong :)\n\nI think you're right, so pushed, and backpatched to 12. I added the\ntest case to regression also.\n\nFor 11, I adjusted the test case so that it didn't depend on an FK\npointing to a partitioned table (which is not supported there); it turns\nout that the old code is not smart enough to get into the problem in the\nfirst place. Setup is\n\nCREATE TABLE parted_fk_naming_pk (id bigint primary key);\nCREATE TABLE parted_fk_naming (\n id_abc bigint,\n CONSTRAINT dummy_constr FOREIGN KEY (id_abc)\n REFERENCES parted_fk_naming_pk (id)\n)\nPARTITION BY LIST (id_abc);\nCREATE TABLE parted_fk_naming_1 (\n id_abc bigint,\n CONSTRAINT dummy_constr CHECK (true)\n);\n\nand then\nALTER TABLE parted_fk_naming ATTACH PARTITION parted_fk_naming_1 FOR VALUES IN ('1');\nthrows this error:\n\nERROR: duplicate key value violates unique constraint \"pg_constraint_conrelid_contypid_conname_index\"\nDETALLE: Key (conrelid, contypid, conname)=(686125, 0, dummy_constr) already exists.\n\nIt seems fair to say that this case, with pg11, is unsupported and\npeople should upgrade if they want better behavior.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Those who use electric razors are infidels destined to burn in hell while\nwe drink from rivers of beer, download free vids and mingle with naked\nwell shaved babes.\" (http://slashdot.org/comments.pl?sid=44793&cid=4647152)\n\n\n",
"msg_date": "Thu, 8 Sep 2022 13:25:15 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] wrong FK constraint name when colliding name on ATTACH"
},
{
"msg_contents": "On Thu, 8 Sep 2022 13:25:15 +0200\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2022-Sep-08, Jehan-Guillaume de Rorthais wrote:\n> \n> > Hi there,\n> > \n> > I believe this very small bug and its fix are really trivial and could be\n> > push out of the way quite quickly. It's just about a bad constraint name\n> > fixed by moving one assignation after the next one. This could easily be\n> > fixed for next round of releases.\n> > \n> > Well, I hope I'm not wrong :) \n> \n> I think you're right, so pushed, and backpatched to 12. I added the\n> test case to regression also.\n\nGreat, thank you for the additional work on the regression test and the commit!\n\n> For 11, I adjusted the test case so that it didn't depend on an FK\n> pointing to a partitioned table (which is not supported there); it turns\n> out that the old code is not smart enough to get into the problem in the\n> first place. [...]\n> It seems fair to say that this case, with pg11, is unsupported and\n> people should upgrade if they want better behavior.\n\nThat works for me.\n\nThanks!\n\n\n",
"msg_date": "Thu, 8 Sep 2022 14:07:16 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] wrong FK constraint name when colliding name on ATTACH"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-08 13:25:15 +0200, Alvaro Herrera wrote:\n> I think you're right, so pushed, and backpatched to 12. I added the\n> test case to regression also.\n\nSomething here doesn't look to be quite right. Starting with this commit CI\n[1] started to fail on freebsd (stack trace [2]), and in the meson branch I've\nalso seen the crash on windows (CI run[3], stack trace [4]).\n\nThere does appear to be some probabilistic aspect, I also saw a run succeed.\n\nGreetings,\n\nAndres Freund\n\n[1] https://cirrus-ci.com/github/postgres/postgres/\n[2] https://api.cirrus-ci.com/v1/task/6180840047640576/logs/cores.log\n[3] https://cirrus-ci.com/task/6629440791773184\n[4] https://api.cirrus-ci.com/v1/artifact/task/6629440791773184/crashlog/crashlog-postgres.exe_1468_2022-09-08_17-05-24-591.txt\n\n\n",
"msg_date": "Thu, 8 Sep 2022 10:20:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] wrong FK constraint name when colliding name on ATTACH"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Something here doesn't look to be quite right. Starting with this commit CI\n> [1] started to fail on freebsd (stack trace [2]), and in the meson branch I've\n> also seen the crash on windows (CI run[3], stack trace [4]).\n\nThe crash seems 100% reproducible if I remove the early-exit optimization\nfrom GetForeignKeyActionTriggers:\n\ndiff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c\nindex 53b0f3a9c1..112ca77d97 100644\n--- a/src/backend/commands/tablecmds.c\n+++ b/src/backend/commands/tablecmds.c\n@@ -10591,8 +10591,6 @@ GetForeignKeyActionTriggers(Relation trigrel,\n Assert(*updateTriggerOid == InvalidOid);\n *updateTriggerOid = trgform->oid;\n }\n- if (OidIsValid(*deleteTriggerOid) && OidIsValid(*updateTriggerOid))\n- break;\n }\n \n if (!OidIsValid(*deleteTriggerOid))\n\nWith that in place, it's probabilistic whether the Asserts notice anything\nwrong, and mostly they don't. But there are multiple matching triggers:\n\nregression=# select oid, tgconstraint, tgrelid,tgconstrrelid, tgtype, tgname from pg_trigger where tgconstraint = 104301;\n oid | tgconstraint | tgrelid | tgconstrrelid | tgtype | tgname \n--------+--------------+---------+---------------+--------+-------------------------------\n 104302 | 104301 | 104294 | 104294 | 9 | RI_ConstraintTrigger_a_104302\n 104303 | 104301 | 104294 | 104294 | 17 | RI_ConstraintTrigger_a_104303\n 104304 | 104301 | 104294 | 104294 | 5 | RI_ConstraintTrigger_c_104304\n 104305 | 104301 | 104294 | 104294 | 17 | RI_ConstraintTrigger_c_104305\n(4 rows)\n\nI suspect that the filter conditions being applied are inadequate\nfor the case of a self-referential FK, which this evidently is\ngiven that tgrelid and tgconstrrelid are equal.\n\nI'd counsel dropping the early-exit optimization; it doesn't\nsave much I expect, and it evidently hides bugs. Or maybe\nmake it conditional on !USE_ASSERT_CHECKING.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Sep 2022 15:54:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] wrong FK constraint name when colliding name on ATTACH"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 4:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Something here doesn't look to be quite right. Starting with this commit CI\n> > [1] started to fail on freebsd (stack trace [2]), and in the meson branch I've\n> > also seen the crash on windows (CI run[3], stack trace [4]).\n>\n> The crash seems 100% reproducible if I remove the early-exit optimization\n> from GetForeignKeyActionTriggers:\n\nIndeed, reproduced here.\n\n> diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c\n> index 53b0f3a9c1..112ca77d97 100644\n> --- a/src/backend/commands/tablecmds.c\n> +++ b/src/backend/commands/tablecmds.c\n> @@ -10591,8 +10591,6 @@ GetForeignKeyActionTriggers(Relation trigrel,\n> Assert(*updateTriggerOid == InvalidOid);\n> *updateTriggerOid = trgform->oid;\n> }\n> - if (OidIsValid(*deleteTriggerOid) && OidIsValid(*updateTriggerOid))\n> - break;\n> }\n>\n> if (!OidIsValid(*deleteTriggerOid))\n>\n> With that in place, it's probabilistic whether the Asserts notice anything\n> wrong, and mostly they don't. 
But there are multiple matching triggers:\n>\n> regression=# select oid, tgconstraint, tgrelid,tgconstrrelid, tgtype, tgname from pg_trigger where tgconstraint = 104301;\n> oid | tgconstraint | tgrelid | tgconstrrelid | tgtype | tgname\n> --------+--------------+---------+---------------+--------+-------------------------------\n> 104302 | 104301 | 104294 | 104294 | 9 | RI_ConstraintTrigger_a_104302\n> 104303 | 104301 | 104294 | 104294 | 17 | RI_ConstraintTrigger_a_104303\n> 104304 | 104301 | 104294 | 104294 | 5 | RI_ConstraintTrigger_c_104304\n> 104305 | 104301 | 104294 | 104294 | 17 | RI_ConstraintTrigger_c_104305\n> (4 rows)\n>\n> I suspect that the filter conditions being applied are inadequate\n> for the case of a self-referential FK, which this evidently is\n> given that tgrelid and tgconstrrelid are equal.\n\nYes, the loop in GetForeignKeyActionTriggers() needs this:\n\n+ /* Only ever look at \"action\" triggers on the PK side. */\n+ if (RI_FKey_trigger_type(trgform->tgfoid) != RI_TRIGGER_PK)\n+ continue;\n\nLikewise, GetForeignKeyActionTriggers() needs this:\n\n+ /* Only ever look at \"check\" triggers on the FK side. */\n+ if (RI_FKey_trigger_type(trgform->tgfoid) != RI_TRIGGER_FK)\n+ continue;\n\nWe evidently missed this in f4566345cf40b0.\n\n> I'd counsel dropping the early-exit optimization; it doesn't\n> save much I expect, and it evidently hides bugs. Or maybe\n> make it conditional on !USE_ASSERT_CHECKING.\n\nWhile neither of these functions are called in hot paths, I am\ninclined to keep the early-exit bit in non-assert builds.\n\nAttached a patch.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 9 Sep 2022 16:16:09 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] wrong FK constraint name when colliding name on ATTACH"
},
{
"msg_contents": "On 2022-Sep-09, Amit Langote wrote:\n\n> Yes, the loop in GetForeignKeyActionTriggers() needs this:\n> \n> + /* Only ever look at \"action\" triggers on the PK side. */\n> + if (RI_FKey_trigger_type(trgform->tgfoid) != RI_TRIGGER_PK)\n> + continue;\n> \n> Likewise, GetForeignKeyActionTriggers() needs this:\n> \n> + /* Only ever look at \"check\" triggers on the FK side. */\n> + if (RI_FKey_trigger_type(trgform->tgfoid) != RI_TRIGGER_FK)\n> + continue;\n> \n> We evidently missed this in f4566345cf40b0.\n\nOuch. Thank you, pushed.\n\n> On Fri, Sep 9, 2022 at 4:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > I'd counsel dropping the early-exit optimization; it doesn't\n> > save much I expect, and it evidently hides bugs. Or maybe\n> > make it conditional on !USE_ASSERT_CHECKING.\n> \n> While neither of these functions are called in hot paths, I am\n> inclined to keep the early-exit bit in non-assert builds.\n\nI kept it that way.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"El hombre nunca sabe de lo que es capaz hasta que lo intenta\" (C. Dickens)\n\n\n",
"msg_date": "Fri, 9 Sep 2022 12:31:49 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] wrong FK constraint name when colliding name on ATTACH"
},
{
"msg_contents": "Hi,\n\nPlease, find in attachment a small serie of patch:\n\n 0001 fix the constraint parenting bug. Not much to say. It's basically your\n patch we discussed with some more comments and the check on contype equals to\n either primary, unique or exclusion.\n\n 0002 fix the self-FK being cloned twice on partitions\n\n 0003 add a regression test validating both fix.\n\nI should confess than even with these fix, I'm still wondering about this code\nsanity as we could still end up with a PK on a partition being parented with a\nsimple unique constraint from the table, on a field not even NOT NULL:\n\n DROP TABLE IF EXISTS parted_self_fk, part_with_pk;\n\n CREATE TABLE parted_self_fk (\n id bigint,\n id_abc bigint,\n FOREIGN KEY (id_abc) REFERENCES parted_self_fk(id),\n UNIQUE (id)\n )\n PARTITION BY RANGE (id);\n\n CREATE TABLE part_with_pk (\n id bigint PRIMARY KEY,\n id_abc bigint,\n CHECK ((id >= 0 AND id < 10))\n );\n\n ALTER TABLE parted_self_fk ATTACH\n PARTITION part_with_pk FOR VALUES FROM (0) TO (10);\n\n SELECT cr.relname, co.conname, co.contype, p.conname AS conparentrelname\n FROM pg_catalog.pg_constraint co\n JOIN pg_catalog.pg_class cr ON cr.oid = co.conrelid\n LEFT JOIN pg_catalog.pg_constraint p ON p.oid = co.conparentid\n WHERE cr.relname IN ('parted_self_fk', 'part_with_pk')\n AND co.contype IN ('u', 'p');\n \n DROP TABLE parted_self_fk;\n\n DROP TABLE\n CREATE TABLE\n CREATE TABLE\n ALTER TABLE\n relname | conname | contype | conparentrelname \n ----------------+-----------------------+---------+-----------------------\n parted_self_fk | parted_self_fk_id_key | u | \n part_with_pk | part_with_pk_pkey | p | parted_self_fk_id_key\n (2 rows)\n\nNothing forbid the partition to have stricter constraints than the parent\ntable, but it feels weird, so it might worth noting it here.\n\nI wonder if AttachPartitionEnsureConstraints() should exists and take care of\ncomparing/cloning constraints before calling 
AttachPartitionEnsureIndexes()\nwhich would handle missing index without paying attention to related\nconstraints?\n\nRegards,\n\nOn Wed, 24 Aug 2022 12:49:13 +0200\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2022-Aug-24, Jehan-Guillaume de Rorthais wrote:\n> \n> > I was naively wondering about such a patch, but was worrying about potential\n> > side effects on ATExecAttachPartitionIdx(), DetachPartitionFinalize() and\n> > DefineIndex() where I didn't had a single glance. Did you had a look? \n> \n> No. But AFAIR all the code there is supposed to worry about unique\n> constraints and PK only, not FKs. So if something changes, then most \n> likely it was wrong to begin with.\n> \n> > I did a quick ATTACH + DETACH test, and it seems DETACH partly fails with\n> > its housecleaning: \n> \n> Ugh. More fixes required, then.\n> \n> > Looking for few minutes in ATExecDetachPartitionFinalize(), it seems it only\n> > support removing the parental link on FK, not to clean the FKs added during\n> > the ATTACH DDL anyway. That explains the FK child1->parent left behind. But\n> > in fact, this let me wonder if this part of the code ever considered\n> > implication of self-FK during the ATTACH and DETACH process? \n> \n> No, or at least I don't remember thinking about self-referencing FKs.\n> If there are no tests for it, then that's likely what happened.\n> \n> > Why in the first place TWO FK are created during the ATTACH DDL? \n> \n> That's probably a bug too.\n>",
"msg_date": "Sat, 1 Oct 2022 00:30:10 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK\n oddity when attaching a partition)"
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 3:30 PM Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\nwrote:\n\n> Hi,\n>\n> Please, find in attachment a small serie of patch:\n>\n> 0001 fix the constraint parenting bug. Not much to say. It's basically\n> your\n> patch we discussed with some more comments and the check on contype\n> equals to\n> either primary, unique or exclusion.\n>\n> 0002 fix the self-FK being cloned twice on partitions\n>\n> 0003 add a regression test validating both fix.\n>\n> I should confess than even with these fix, I'm still wondering about this\n> code\n> sanity as we could still end up with a PK on a partition being parented\n> with a\n> simple unique constraint from the table, on a field not even NOT NULL:\n>\n> DROP TABLE IF EXISTS parted_self_fk, part_with_pk;\n>\n> CREATE TABLE parted_self_fk (\n> id bigint,\n> id_abc bigint,\n> FOREIGN KEY (id_abc) REFERENCES parted_self_fk(id),\n> UNIQUE (id)\n> )\n> PARTITION BY RANGE (id);\n>\n> CREATE TABLE part_with_pk (\n> id bigint PRIMARY KEY,\n> id_abc bigint,\n> CHECK ((id >= 0 AND id < 10))\n> );\n>\n> ALTER TABLE parted_self_fk ATTACH\n> PARTITION part_with_pk FOR VALUES FROM (0) TO (10);\n>\n> SELECT cr.relname, co.conname, co.contype, p.conname AS conparentrelname\n> FROM pg_catalog.pg_constraint co\n> JOIN pg_catalog.pg_class cr ON cr.oid = co.conrelid\n> LEFT JOIN pg_catalog.pg_constraint p ON p.oid = co.conparentid\n> WHERE cr.relname IN ('parted_self_fk', 'part_with_pk')\n> AND co.contype IN ('u', 'p');\n>\n> DROP TABLE parted_self_fk;\n>\n> DROP TABLE\n> CREATE TABLE\n> CREATE TABLE\n> ALTER TABLE\n> relname | conname | contype | conparentrelname\n>\n>\n> ----------------+-----------------------+---------+-----------------------\n> parted_self_fk | parted_self_fk_id_key | u |\n> part_with_pk | part_with_pk_pkey | p | parted_self_fk_id_key\n> (2 rows)\n>\n> Nothing forbid the partition to have stricter constraints than the parent\n> table, but it feels weird, so it might worth noting it 
here.\n>\n> I wonder if AttachPartitionEnsureConstraints() should exists and take care\n> of\n> comparing/cloning constraints before calling AttachPartitionEnsureIndexes()\n> which would handle missing index without paying attention to related\n> constraints?\n>\n> Regards,\n>\n> On Wed, 24 Aug 2022 12:49:13 +0200\n> Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > On 2022-Aug-24, Jehan-Guillaume de Rorthais wrote:\n> >\n> > > I was naively wondering about such a patch, but was worrying about\n> potential\n> > > side effects on ATExecAttachPartitionIdx(), DetachPartitionFinalize()\n> and\n> > > DefineIndex() where I didn't had a single glance. Did you had a look?\n> >\n> > No. But AFAIR all the code there is supposed to worry about unique\n> > constraints and PK only, not FKs. So if something changes, then most\n> > likely it was wrong to begin with.\n> >\n> > > I did a quick ATTACH + DETACH test, and it seems DETACH partly fails\n> with\n> > > its housecleaning:\n> >\n> > Ugh. More fixes required, then.\n> >\n> > > Looking for few minutes in ATExecDetachPartitionFinalize(), it seems\n> it only\n> > > support removing the parental link on FK, not to clean the FKs added\n> during\n> > > the ATTACH DDL anyway. That explains the FK child1->parent left\n> behind. 
But\n> > > in fact, this let me wonder if this part of the code ever considered\n> > > implication of self-FK during the ATTACH and DETACH process?\n> >\n> > No, or at least I don't remember thinking about self-referencing FKs.\n> > If there are no tests for it, then that's likely what happened.\n> >\n> > > Why in the first place TWO FK are created during the ATTACH DDL?\n> >\n> > That's probably a bug too.\n> >\n>\n> Hi,\n\n+ * Self-Foreign keys are ignored as the index was preliminary\ncreated\n\npreliminary created -> primarily created\n\n Cheers",
"msg_date": "Fri, 30 Sep 2022 16:11:09 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK\n oddity when attaching a partition)"
},
{
"msg_contents": "On Fri, 30 Sep 2022 16:11:09 -0700\nZhihong Yu <zyu@yugabyte.com> wrote:\n\n> On Fri, Sep 30, 2022 at 3:30 PM Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\n> wrote:\n...\n> \n> + * Self-Foreign keys are ignored as the index was preliminary\n> created\n> \n> preliminary created -> primarily created\n\nThank you! This is fixed and rebased on current master branch in patches\nattached.\n\nRegards,",
"msg_date": "Mon, 3 Oct 2022 14:47:57 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK\n oddity when attaching a partition)"
},
{
"msg_contents": "On 2022-Oct-03, Jehan-Guillaume de Rorthais wrote:\n\n> Thank you! This is fixed and rebased on current master branch in patches\n> attached.\n\nThanks. As far as I can see this fixes the bugs that were reported.\nI've been giving the patches a look and it caused me to notice two\nadditional bugs in the same area:\n\n- FKs in partitions are sometimes marked NOT VALID. This is because of\n missing initialization when faking up a Constraint node in\n CloneFkReferencing. Easy to fix, have patch, running tests now.\n\n- The feature added by d6f96ed94e73 (ON DELETE SET NULL (...)) is not\n correctly propagated. This should be an easy fix also, haven't tried,\n need to add a test case.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La primera ley de las demostraciones en vivo es: no trate de usar el sistema.\nEscriba un guión que no toque nada para no causar daños.\" (Jakob Nielsen)\n\n\n",
"msg_date": "Wed, 5 Oct 2022 12:55:23 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK\n oddity when attaching a partition)"
},
{
"msg_contents": "Backpatching this to 12 shows yet another problem -- the topmost\nrelation acquires additional FK constraints, not yet sure why. I think\nwe must have fixed something in 13 that wasn't backpatched, but I can't\nremember what it is and whether it was intentionally not backpatched.\n\nLooking ...\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"I can see support will not be a problem. 10 out of 10.\" (Simon Wittber)\n (http://archives.postgresql.org/pgsql-general/2004-12/msg00159.php)\n\n\n",
"msg_date": "Wed, 5 Oct 2022 18:40:48 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK\n oddity when attaching a partition)"
},
{
"msg_contents": "On 2022-Oct-05, Alvaro Herrera wrote:\n\n> Backpatching this to 12 shows yet another problem -- the topmost\n> relation acquires additional FK constraints, not yet sure why. I think\n> we must have fixed something in 13 that wasn't backpatched, but I can't\n> remember what it is and whether it was intentionally not backpatched.\n\nThis was actually a mismerge. Once I fixed that, it worked properly.\n\nHowever, there was another bug, which only showed up when I did a\nDETACH, ATTACH, and repeat. The problem is that when we detach, the\nno-longer-partition retains an FK constraint to the partitioned table.\nThis is good -- we want that one -- but when we reattach, then we see\nthat the partitioned table is being referenced from outside, so we\nconsider that another constraint that we need to add the partition to,\n*in addition to the constraint that we need to clone*. So we need to\nignore both a self-referencing FK that goes to the partitioned table, as\nwell as a self-referencing one that comes from the partition-to-be.\nWhen we do that, then the clone correctly uses that one as the\nconstraint to retain and attach into the hierarchy of constraints, and\neverything [appears to] work correctly.\n\nSo I've pushed this, and things are now mostly good. Two problems\nremain, though I don't think either of them is terribly serious:\n\n1. one of the constraints in the final hierarchy is marked as not\nvalidated. I mentioned this before.\n\n2. (only in 15) There are useless pg_depend rows for the pg_trigger\nrows, which make them depend on their parent pg_trigger rows. This is\nnot related to self-referencing foreign keys, but I just happened to\nnotice because I was examining the catalog contents with the added test\ncase. I think this breakage is due to f4566345cf40. 
I couldn't find\nany actual misbehavior caused by these extra pg_depend entries, but we\nshould not be creating them anyway.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Por suerte hoy explotó el califont porque si no me habría muerto\n de aburrido\" (Papelucho)\n\n\n",
"msg_date": "Fri, 7 Oct 2022 19:53:55 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK\n oddity when attaching a partition)"
},
{
"msg_contents": "On 2022-Oct-05, Alvaro Herrera wrote:\n\n> I've been giving the patches a look and it caused me to notice two\n> additional bugs in the same area:\n> \n> - FKs in partitions are sometimes marked NOT VALID. This is because of\n> missing initialization when faking up a Constraint node in\n> CloneFkReferencing. Easy to fix, have patch, running tests now.\n\nI have pushed the fix for this now.\n\n> - The feature added by d6f96ed94e73 (ON DELETE SET NULL (...)) is not\n> correctly propagated. This should be an easy fix also, haven't tried,\n> need to add a test case.\n\nThere was no bug here actually: it's true that the struct member is left\nuninitialized, but in practice that doesn't matter, because the set of\ncolumns is propagated separately from the node.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Las navajas y los monos deben estar siempre distantes\" (Germán Poo)\n\n\n",
"msg_date": "Thu, 3 Nov 2022 20:44:16 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK\n oddity when attaching a partition)"
},
{
"msg_contents": "On Thu, 3 Nov 2022 20:44:16 +0100\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2022-Oct-05, Alvaro Herrera wrote:\n> \n> > I've been giving the patches a look and it caused me to notice two\n> > additional bugs in the same area:\n> > \n> > - FKs in partitions are sometimes marked NOT VALID. This is because of\n> > missing initialization when faking up a Constraint node in\n> > CloneFkReferencing. Easy to fix, have patch, running tests now. \n> \n> I have pushed the fix for this now.\n\nThank you Alvaro!\n\n\n",
"msg_date": "Fri, 4 Nov 2022 10:29:10 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] parenting a PK constraint to a self-FK one (Was: Self FK\n oddity when attaching a partition)"
}
] |
[
{
"msg_contents": "I finally reached the point of being fed up with our inability\nto maintain the number of lines output by psql's usage() and\nsibling functions. Almost every year, we find ourselves updating\nthose magic constants sometime late in the dev cycle, and I just\nhad to do it again today.\n\nSo, attached is a patch to remove that maintenance chore by\nconstructing the output in a PQExpBuffer and then counting the\nlines automatically. While I was at it, I introduced a couple of\nmacros to make the code shorter rather than longer.\n\nWe could alternatively decide that we've blown past whatever\nvertical screen space anybody has and just use 1000 or something\nlike that as the PageOutput count. However, that's a somewhat\ndicey proposition for usage() itself, which is at 63 lines today;\nthat's well within reach of larger monitors.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 03 Jun 2022 16:51:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Count output lines automatically in psql/help.c"
},
{
"msg_contents": "On Fri, Jun 3, 2022 at 4:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thoughts?\n\n+1 from me. Wish we'd done this years ago.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 4 Jun 2022 08:55:25 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Count output lines automatically in psql/help.c"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Jun 3, 2022 at 4:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Thoughts?\n\n> +1 from me. Wish we'd done this years ago.\n\nPushed, thanks for looking at it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Jun 2022 11:54:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Count output lines automatically in psql/help.c"
},
{
"msg_contents": "On 2022-Jun-03, Tom Lane wrote:\n\n> So, attached is a patch to remove that maintenance chore by\n> constructing the output in a PQExpBuffer and then counting the\n> lines automatically. While I was at it, I introduced a couple of\n> macros to make the code shorter rather than longer.\n\nWhat about adding stringInfoCountLines or something like that?\n\n-- \nÁlvaro Herrera\n\n\n",
"msg_date": "Sat, 4 Jun 2022 18:18:20 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Count output lines automatically in psql/help.c"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> What about adding stringInfoCountLines or something like that?\n\nIf we have other use-cases, maybe that'd be worthwhile.\n\n(In the committed patch, I dumbed it down to a plain per-char\nloop without the strchr() complication. So it's very little code.\nI'm not real sure that strchr would make it faster.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Jun 2022 12:56:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Count output lines automatically in psql/help.c"
},
{
"msg_contents": "On 03.06.22 22:51, Tom Lane wrote:\n> +\tHELP0(\" -c, --command=COMMAND run only single command (SQL or internal) and exit\\n\");\n> +\tHELP(\" -d, --dbname=DBNAME database name to connect to (default: \\\"%s\\\")\\n\",\n> +\t\t env);\n\nI wonder whether this mix of HELP0 and HELP is necessary. The original \ncode didn't care about calling fprintf even if there are no \nsubstitutions. I think this could lead to misalignment errors. I \nvaguely recall we once had mixes of fprintf and fputs and got rid of \nthem for this reason.\n\n\n\n",
"msg_date": "Fri, 10 Jun 2022 12:16:32 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Count output lines automatically in psql/help.c"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I wonder whether this mix of HELP0 and HELP is necessary. The original \n> code didn't care about calling fprintf even if there are no \n> substitutions. I think this could lead to misalignment errors. I \n> vaguely recall we once had mixes of fprintf and fputs and got rid of \n> them for this reason.\n\nIn the committed patch, I changed HELP to HELPN exactly so that\nthe strings would still line up.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Jun 2022 07:50:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Count output lines automatically in psql/help.c"
}
] |
[
{
"msg_contents": "A few weeks back I sent a bug report [1] directly to the -bugs mailing\nlist, and I haven't seen any activity on it (maybe this is because I\nemailed directly instead of using the form?), but I got some time to\ntake a look and concluded that a first-level fix is pretty simple.\n\nA quick background refresher: after promoting a standby rewinding the\nformer primary requires that a checkpoint have been completed on the\nnew primary after promotion. This is correctly documented. However\npg_rewind incorrectly reports to the user that a rewind isn't\nnecessary because the source and target are on the same timeline.\n\nSpecifically, this happens when the control file on the newly promoted\nserver looks like:\n\n Latest checkpoint's TimeLineID: 4\n Latest checkpoint's PrevTimeLineID: 4\n ...\n Min recovery ending loc's timeline: 5\n\nAttached is a patch that detects this condition and reports it as an\nerror to the user.\n\nIn the spirit of the new-ish \"ensure shutdown\" functionality I could\nimagine extending this to automatically issue a checkpoint when this\nsituation is detected. I haven't started to code that up, however,\nwanting to first get buy-in on that.\n\nThanks,\nJames Coleman\n\n1: https://www.postgresql.org/message-id/CAAaqYe8b2DBbooTprY4v=BiZEd9qBqVLq+FD9j617eQFjk1KvQ@mail.gmail.com",
"msg_date": "Sat, 4 Jun 2022 08:59:12 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "On Sat, Jun 4, 2022 at 6:29 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> A few weeks back I sent a bug report [1] directly to the -bugs mailing\n> list, and I haven't seen any activity on it (maybe this is because I\n> emailed directly instead of using the form?), but I got some time to\n> take a look and concluded that a first-level fix is pretty simple.\n>\n> A quick background refresher: after promoting a standby rewinding the\n> former primary requires that a checkpoint have been completed on the\n> new primary after promotion. This is correctly documented. However\n> pg_rewind incorrectly reports to the user that a rewind isn't\n> necessary because the source and target are on the same timeline.\n>\n> Specifically, this happens when the control file on the newly promoted\n> server looks like:\n>\n> Latest checkpoint's TimeLineID: 4\n> Latest checkpoint's PrevTimeLineID: 4\n> ...\n> Min recovery ending loc's timeline: 5\n>\n> Attached is a patch that detects this condition and reports it as an\n> error to the user.\n>\n> In the spirit of the new-ish \"ensure shutdown\" functionality I could\n> imagine extending this to automatically issue a checkpoint when this\n> situation is detected. I haven't started to code that up, however,\n> wanting to first get buy-in on that.\n>\n> 1: https://www.postgresql.org/message-id/CAAaqYe8b2DBbooTprY4v=BiZEd9qBqVLq+FD9j617eQFjk1KvQ@mail.gmail.com\n\nThanks. I had a quick look over the issue and patch - just a thought -\ncan't we let pg_rewind issue a checkpoint on the new primary instead\nof erroring out, maybe optionally? It might sound too much, but helps\npg_rewind to be self-reliant i.e. avoiding an external actor to detect\nthe error and issue a checkpoint on the new primary to be able to\nsuccessfully run pg_rewind on the old primary and repair it to use it\nas a new standby.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 4 Jun 2022 19:09:41 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "At Sat, 4 Jun 2022 19:09:41 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Sat, Jun 4, 2022 at 6:29 PM James Coleman <jtc331@gmail.com> wrote:\n> >\n> > A few weeks back I sent a bug report [1] directly to the -bugs mailing\n> > list, and I haven't seen any activity on it (maybe this is because I\n> > emailed directly instead of using the form?), but I got some time to\n> > take a look and concluded that a first-level fix is pretty simple.\n> >\n> > A quick background refresher: after promoting a standby rewinding the\n> > former primary requires that a checkpoint have been completed on the\n> > new primary after promotion. This is correctly documented. However\n> > pg_rewind incorrectly reports to the user that a rewind isn't\n> > necessary because the source and target are on the same timeline.\n...\n> > Attached is a patch that detects this condition and reports it as an\n> > error to the user.\n\nI have some random thoughts on this.\n\nThere could be a problem in the case of a gracefully shut-down\nold primary, so I think it is worth doing something if it can be in a\nsimple way.\n\nHowever, I don't think we can simply rely on minRecoveryPoint to\ndetect that situation, since it won't be reset on a standby. A standby\nalso still can be the upstream of a cascading standby. So, as\ndiscussed in the thread for the comment [2], what we can do here would be\nsimply waiting for the timelineID to advance, maybe having a timeout.\n\nIn the case of a single-step replication setup, a checkpoint request to the\nprimary makes the end-of-recovery checkpoint fast. It won't work as\nexpected in cascading replicas, but it might be acceptable.\n\n\n> > In the spirit of the new-ish \"ensure shutdown\" functionality I could\n> > imagine extending this to automatically issue a checkpoint when this\n> > situation is detected. 
I haven't started to code that up, however,\n> > wanting to first get buy-in on that.\n> >\n> > 1: https://www.postgresql.org/message-id/CAAaqYe8b2DBbooTprY4v=BiZEd9qBqVLq+FD9j617eQFjk1KvQ@mail.gmail.com\n> \n> Thanks. I had a quick look over the issue and patch - just a thought -\n> can't we let pg_rewind issue a checkpoint on the new primary instead\n> of erroring out, maybe optionally? It might sound too much, but helps\n> pg_rewind to be self-reliant i.e. avoiding external actor to detect\n> the error and issue checkpoint the new primary to be able to\n> successfully run pg_rewind on the pld primary and repair it to use it\n> as a new standby.\n\nAt the time of the discussion [2], the hindrance was that\nit required superuser privileges. Now that has been narrowed down\nto the pg_checkpointer privileges.\n\nIf we know that the timeline IDs are different, we don't need to wait\nfor a checkpoint.\n\nIt seems to me that the exit status is significant. pg_rewind exits\nwith 1 when an invalid option is given. I don't think it is great if\nwe report this state by the same code.\n\nI don't think we always want to request a non-spreading checkpoint.\n\n[2] https://www.postgresql.org/message-id/flat/CABUevEz5bpvbwVsYCaSMV80CBZ5-82nkMzbb%2BBu%3Dh1m%3DrLdn%3Dg%40mail.gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 06 Jun 2022 14:26:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "On Sat, Jun 4, 2022 at 9:39 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sat, Jun 4, 2022 at 6:29 PM James Coleman <jtc331@gmail.com> wrote:\n> >\n> > A few weeks back I sent a bug report [1] directly to the -bugs mailing\n> > list, and I haven't seen any activity on it (maybe this is because I\n> > emailed directly instead of using the form?), but I got some time to\n> > take a look and concluded that a first-level fix is pretty simple.\n> >\n> > A quick background refresher: after promoting a standby rewinding the\n> > former primary requires that a checkpoint have been completed on the\n> > new primary after promotion. This is correctly documented. However\n> > pg_rewind incorrectly reports to the user that a rewind isn't\n> > necessary because the source and target are on the same timeline.\n> >\n> > Specifically, this happens when the control file on the newly promoted\n> > server looks like:\n> >\n> > Latest checkpoint's TimeLineID: 4\n> > Latest checkpoint's PrevTimeLineID: 4\n> > ...\n> > Min recovery ending loc's timeline: 5\n> >\n> > Attached is a patch that detects this condition and reports it as an\n> > error to the user.\n> >\n> > In the spirit of the new-ish \"ensure shutdown\" functionality I could\n> > imagine extending this to automatically issue a checkpoint when this\n> > situation is detected. I haven't started to code that up, however,\n> > wanting to first get buy-in on that.\n> >\n> > 1: https://www.postgresql.org/message-id/CAAaqYe8b2DBbooTprY4v=BiZEd9qBqVLq+FD9j617eQFjk1KvQ@mail.gmail.com\n>\n> Thanks. I had a quick look over the issue and patch - just a thought -\n> can't we let pg_rewind issue a checkpoint on the new primary instead\n> of erroring out, maybe optionally? It might sound too much, but helps\n> pg_rewind to be self-reliant i.e. 
avoiding external actor to detect\n> the error and issue checkpoint the new primary to be able to\n> successfully run pg_rewind on the pld primary and repair it to use it\n> as a new standby.\n\nThat's what I had suggested as a \"further improvement\" option in the\nlast paragraph :)\n\nBut I think agreement on this more basic solution would still be good\n(even if I add the automatic checkpointing in this thread); given we\ncurrently explicitly mis-inform the user of pg_rewind, I think this is\na bug that should be considered for backpatching, and the simpler\n\"fail if detected\" patch is probably the only thing we could\nbackpatch.\n\nThanks for taking a look,\nJames Coleman\n\n\n",
"msg_date": "Mon, 6 Jun 2022 08:10:19 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "On Mon, Jun 6, 2022 at 1:26 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Sat, 4 Jun 2022 19:09:41 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > On Sat, Jun 4, 2022 at 6:29 PM James Coleman <jtc331@gmail.com> wrote:\n> > >\n> > > A few weeks back I sent a bug report [1] directly to the -bugs mailing\n> > > list, and I haven't seen any activity on it (maybe this is because I\n> > > emailed directly instead of using the form?), but I got some time to\n> > > take a look and concluded that a first-level fix is pretty simple.\n> > >\n> > > A quick background refresher: after promoting a standby rewinding the\n> > > former primary requires that a checkpoint have been completed on the\n> > > new primary after promotion. This is correctly documented. However\n> > > pg_rewind incorrectly reports to the user that a rewind isn't\n> > > necessary because the source and target are on the same timeline.\n> ...\n> > > Attached is a patch that detects this condition and reports it as an\n> > > error to the user.\n>\n> I have some random thoughts on this.\n>\n> There could be a problem in the case of gracefully shutdowned\n> old-primary, so I think it is worth doing something if it can be in a\n> simple way.\n>\n> However, I don't think we can simply rely on minRecoveryPoint to\n> detect that situation, since it won't be reset on a standby. A standby\n> also still can be the upstream of a cascading standby. 
So, as\n> discussed in the thread for the comment [2], what we can do here would be\n> simply waiting for the timelineID to advance, maybe having a timeout.\n\nTo confirm I'm following you correctly, you're envisioning a situation like:\n\n- Primary A\n- Replica B replicating from primary\n- Replica C replicating from replica B\n\nthen on failover from A to B you end up with:\n\n- Primary B\n- Replica C replicating from primary\n- [needs rewind] A\n\nand you try to rewind A from C as the source?\n\n> In a case of single-step replication set, a checkpoint request to the\n> primary makes the end-of-recovery checkpoint fast. It won't work as\n> expected in cascading replicas, but it might be acceptable.\n\n\"Won't work as expected\" because there's no way to guarantee\nreplication is caught up or even advancing?\n\n> > > In the spirit of the new-ish \"ensure shutdown\" functionality I could\n> > > imagine extending this to automatically issue a checkpoint when this\n> > > situation is detected. I haven't started to code that up, however,\n> > > wanting to first get buy-in on that.\n> > >\n> > > 1: https://www.postgresql.org/message-id/CAAaqYe8b2DBbooTprY4v=BiZEd9qBqVLq+FD9j617eQFjk1KvQ@mail.gmail.com\n> >\n> > Thanks. I had a quick look over the issue and patch - just a thought -\n> > can't we let pg_rewind issue a checkpoint on the new primary instead\n> > of erroring out, maybe optionally? It might sound too much, but helps\n> > pg_rewind to be self-reliant i.e. 
Now that has been narrowed down\n> to the pg_checkpointer privileges.\n>\n> If we know that the timeline IDs are different, we don't need to wait\n> for a checkpoint.\n\nCorrect.\n\n> It seems to me that the exit status is significant. pg_rewind exits\n> with 1 when an invalid option is given. I don't think it is great if\n> we report this state by the same code.\n\nI'm happy to change that; I only chose \"1\" as a placeholder for\n\"non-zero exit status\".\n\n> I don't think we always want to request a non-spreading checkpoint.\n\nI'm not familiar with the terminology \"non-spreading checkpoint\".\n\n> [2] https://www.postgresql.org/message-id/flat/CABUevEz5bpvbwVsYCaSMV80CBZ5-82nkMzbb%2BBu%3Dh1m%3DrLdn%3Dg%40mail.gmail.com\n\nI read through that thread, and one interesting idea stuck out to me:\nmaking \"tiimeline IDs are the same\" an error exit status. On the one\nhand that makes a certain amount of sense because it's unexpected. But\non the other hand there are entirely legitimate situations where upon\nfailover the timeline IDs happen to match (e.g., for use it happens\nsome percentage of the time naturally as we are using sync replication\nand failovers often involve STONITHing the original primary, so it's\nentirely possible that the promoted replica begins with exactly the\nsame WAL ending LSN from the primary before it stopped).\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Mon, 6 Jun 2022 08:32:01 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "At Mon, 6 Jun 2022 08:32:01 -0400, James Coleman <jtc331@gmail.com> wrote in \n> On Mon, Jun 6, 2022 at 1:26 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Sat, 4 Jun 2022 19:09:41 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > > On Sat, Jun 4, 2022 at 6:29 PM James Coleman <jtc331@gmail.com> wrote:\n> > > >\n> > > > A few weeks back I sent a bug report [1] directly to the -bugs mailing\n> > > > list, and I haven't seen any activity on it (maybe this is because I\n> > > > emailed directly instead of using the form?), but I got some time to\n> > > > take a look and concluded that a first-level fix is pretty simple.\n> > > >\n> > > > A quick background refresher: after promoting a standby rewinding the\n> > > > former primary requires that a checkpoint have been completed on the\n> > > > new primary after promotion. This is correctly documented. However\n> > > > pg_rewind incorrectly reports to the user that a rewind isn't\n> > > > necessary because the source and target are on the same timeline.\n> > ...\n> > > > Attached is a patch that detects this condition and reports it as an\n> > > > error to the user.\n> >\n> > I have some random thoughts on this.\n> >\n> > There could be a problem in the case of gracefully shutdowned\n> > old-primary, so I think it is worth doing something if it can be in a\n> > simple way.\n> >\n> > However, I don't think we can simply rely on minRecoveryPoint to\n> > detect that situation, since it won't be reset on a standby. A standby\n> > also still can be the upstream of a cascading standby. 
So, as\n> > discussed in the thread for the comment [2], what we can do here would be\n> > simply waiting for the timelineID to advance, maybe having a timeout.\n> \n> To confirm I'm following you correctly, you're envisioning a situation like:\n> \n> - Primary A\n> - Replica B replicating from primary\n> - Replica C replicating from replica B\n> \n> then on failover from A to B you end up with:\n> \n> - Primary B\n> - Replica C replication from primary\n> - [needs rewind] A\n> \n> and you try to rewind A from C as the source?\n\nYes. I think it is a legit use case. That being said, like other\npoints, it might be acceptable.\n\n> > In a case of single-step replication set, a checkpoint request to the\n> > primary makes the end-of-recovery checkpoint fast. It won't work as\n> > expected in cascading replicas, but it might be acceptable.\n> \n> \"Won't work as expected\" because there's no way to guarantee\n> replication is caught up or even advancing?\n\nMaybe no. I meant that restartpoints don't run more frequently than\nthe intervals of checkpoint_timeout even if checkpoint records come\nmore frequently.\n\n> > > > In the spirit of the new-ish \"ensure shutdown\" functionality I could\n> > > > imagine extending this to automatically issue a checkpoint when this\n> > > > situation is detected. I haven't started to code that up, however,\n> > > > wanting to first get buy-in on that.\n> > > >\n> > > > 1: https://www.postgresql.org/message-id/CAAaqYe8b2DBbooTprY4v=BiZEd9qBqVLq+FD9j617eQFjk1KvQ@mail.gmail.com\n> > >\n> > > Thanks. I had a quick look over the issue and patch - just a thought -\n> > > can't we let pg_rewind issue a checkpoint on the new primary instead\n> > > of erroring out, maybe optionally? It might sound too much, but helps\n> > > pg_rewind to be self-reliant i.e. 
avoiding external actor to detect\n> > > the error and issue checkpoint the new primary to be able to\n> > > successfully run pg_rewind on the pld primary and repair it to use it\n> > > as a new standby.\n> >\n> > At the time of the discussion [2] for the it was the hinderance that\n> > that requires superuser privileges. Now that has been narrowed down\n> > to the pg_checkpointer privileges.\n> >\n> > If we know that the timeline IDs are different, we don't need to wait\n> > for a checkpoint.\n> \n> Correct.\n> \n> > It seems to me that the exit status is significant. pg_rewind exits\n> > with 1 when an invalid option is given. I don't think it is great if\n> > we report this state by the same code.\n> \n> I'm happy to change that; I only chose \"1\" as a placeholder for\n> \"non-zero exit status\".\n> \n> > I don't think we always want to request a non-spreading checkpoint.\n> \n> I'm not familiar with the terminology \"non-spreading checkpoint\".\n\nDoes \"immediate checkpoint\" work? That is, a checkpoint that runs at\nfull-speed (i.e. with no delays between writes).\n\n> > [2] https://www.postgresql.org/message-id/flat/CABUevEz5bpvbwVsYCaSMV80CBZ5-82nkMzbb%2BBu%3Dh1m%3DrLdn%3Dg%40mail.gmail.com\n> \n> I read through that thread, and one interesting idea stuck out to me:\n> making \"tiimeline IDs are the same\" an error exit status. On the one\n> hand that makes a certain amount of sense because it's unexpected. But\n> on the other hand there are entirely legitimate situations where upon\n> failover the timeline IDs happen to match (e.g., for use it happens\n> some percentage of the time naturally as we are using sync replication\n> and failovers often involve STONITHing the original primary, so it's\n> entirely possible that the promoted replica begins with exactly the\n> same WAL ending LSN from the primary before it stopped).\n\nYes that is true for most cases unless the old primary wrote some\nrecords that had not been sent to the standby before its death. 
So if we\ndon't inspect WAL records (on the target cluster), we should always\nrun rewinding even in the STONITH-killed (or immediate-shutdown)\ncases.\n\nOne possible way to detect promotion reliably is to look into timeline\nhistory files. It is written immediately at promotion even on\nstandbys.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 07 Jun 2022 12:39:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "At Tue, 07 Jun 2022 12:39:38 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> One possible way to detect promotion reliably is to look into timeline\n> history files. It is written immediately at promotion even on\n> standbys.\n\nThe attached seems to work. It uses timeline history files to identify\nthe source timeline. With this change pg_rewind no longer needs to\nwait for end-of-recovery to finish.\n\n(It lacks the doc part and tests. But I'm not sure how we can test this\nbehavior.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 07 Jun 2022 16:05:47 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "On Tue, Jun 07, 2022 at 12:39:38PM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 6 Jun 2022 08:32:01 -0400, James Coleman <jtc331@gmail.com> wrote in \n>> To confirm I'm following you correctly, you're envisioning a situation like:\n>> \n>> - Primary A\n>> - Replica B replicating from primary\n>> - Replica C replicating from replica B\n>> \n>> then on failover from A to B you end up with:\n>> \n>> - Primary B\n>> - Replica C replication from primary\n>> - [needs rewind] A\n>> \n>> and you try to rewind A from C as the source?\n> \n> Yes. I think it is a legit use case. That being said, like other\n> points, it might be acceptable.\n\nThis configuration is a case supported by pg_rewind, meaning that your\npatch to check after minRecoveryPointTLI would be confusing when using\na standby as a source because the checkpoint needs to apply on its\nprimary to allow the TLI of the standby to go up. If you want to\nprovide to the user more context, a more meaningful way may be to rely\non an extra check for ControlFileData.state, I guess, as a promoted \ncluster is marked as DB_IN_PRODUCTION before minRecoveryPoint is\ncleared by the first post-promotion checkpoint, with\nDB_IN_ARCHIVE_RECOVERY for a cascading standby.\n--\nMichael",
"msg_date": "Tue, 7 Jun 2022 16:16:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "At Tue, 7 Jun 2022 16:16:09 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Jun 07, 2022 at 12:39:38PM +0900, Kyotaro Horiguchi wrote:\n> > At Mon, 6 Jun 2022 08:32:01 -0400, James Coleman <jtc331@gmail.com> wrote in \n> >> To confirm I'm following you correctly, you're envisioning a situation like:\n> >> \n> >> - Primary A\n> >> - Replica B replicating from primary\n> >> - Replica C replicating from replica B\n> >> \n> >> then on failover from A to B you end up with:\n> >> \n> >> - Primary B\n> >> - Replica C replication from primary\n> >> - [needs rewind] A\n> >> \n> >> and you try to rewind A from C as the source?\n> > \n> > Yes. I think it is a legit use case. That being said, like other\n> > points, it might be acceptable.\n> \n> This configuration is a case supported by pg_rewind, meaning that your\n> patch to check after minRecoveryPointTLI would be confusing when using\n> a standby as a source because the checkpoint needs to apply on its\n> primary to allow the TLI of the standby to go up. If you want to\n\nYeah, that what I meant.\n\n> provide to the user more context, a more meaningful way may be to rely\n> on an extra check for ControlFileData.state, I guess, as a promoted \n> cluster is marked as DB_IN_PRODUCTION before recoveryMinPoint is\n> cleared by the first post-promotion checkpoint, with\n> DB_IN_ARCHIVE_RECOVERY for a cascading standby.\n\nRight. However, IIUC, checkpoint LSN/TLI is not updated at the\ntime. The point of the minRecoveryPoint check is to confirm that we\ncan read the timeline ID of the promoted source cluster from\ncheckPointCopy.ThisTimeLineID. But we cannot do that yet at the time\nthe cluster state moves to DB_IN_PRODUCTION. And a standby is in\nDB_IN_ARCHIVE_RECOVERY since before the upstream promotes. It also\ndoesn't signal the reliability of checkPointCopy.ThisTimeLineID..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 07 Jun 2022 16:54:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "I think this is a good improvement and also like the option (on pg_rewind) to potentially send checkpoints to the source.\n\nPersonal anecdote. I was using stolon and frequently failing over. For sometime the rewind was failing that it wasn't required. Only learnt that it's the checkpoint on the source which was missing. \n\nReferences https://github.com/sorintlab/stolon/issues/601\nAnd the fix https://github.com/sorintlab/stolon/pull/644\n\n---- On Sat, 04 Jun 2022 05:59:12 -0700 James Coleman <jtc331@gmail.com> wrote ----\n\nA few weeks back I sent a bug report [1] directly to the -bugs mailing \nlist, and I haven't seen any activity on it (maybe this is because I \nemailed directly instead of using the form?), but I got some time to \ntake a look and concluded that a first-level fix is pretty simple. \n \nA quick background refresher: after promoting a standby rewinding the \nformer primary requires that a checkpoint have been completed on the \nnew primary after promotion. This is correctly documented. However \npg_rewind incorrectly reports to the user that a rewind isn't \nnecessary because the source and target are on the same timeline. \n \nSpecifically, this happens when the control file on the newly promoted \nserver looks like: \n \n Latest checkpoint's TimeLineID: 4 \n Latest checkpoint's PrevTimeLineID: 4 \n ... \n Min recovery ending loc's timeline: 5 \n \nAttached is a patch that detects this condition and reports it as an \nerror to the user. \n \nIn the spirit of the new-ish \"ensure shutdown\" functionality I could \nimagine extending this to automatically issue a checkpoint when this \nsituation is detected. I haven't started to code that up, however, \nwanting to first get buy-in on that. \n \nThanks, \nJames Coleman \n \n1: https://www.postgresql.org/message-id/CAAaqYe8b2DBbooTprY4v=BiZEd9qBqVLq+FD9j617eQFjk1KvQ@mail.gmail.com",
"msg_date": "Tue, 07 Jun 2022 07:41:11 -0700",
"msg_from": "vignesh ravichandran <admin@viggy28.dev>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "At Tue, 07 Jun 2022 16:05:47 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Tue, 07 Jun 2022 12:39:38 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > One possible way to detect promotion reliably is to look into timeline\n> > history files. It is written immediately at promotion even on\n> > standbys.\n> \n> The attached seems to work. It uses timeline history files to identify\n> the source timeline. With this change pg_waldump no longer need to\n> wait for end-of-recovery to finish.\n> \n> (It lacks doc part and test.. But I'm not sure how we can test this\n> behavior.)\n\nThis is a revised version.\n\nRevised getTimelineHistory()'s logic (refactored, and changed so that\nit doesn't pick-up the wrong history files).\n\nperform_rewind always identify endtli based on source's timeline\nhistory.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 08 Jun 2022 18:15:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "At Wed, 08 Jun 2022 18:15:09 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Tue, 07 Jun 2022 16:05:47 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > At Tue, 07 Jun 2022 12:39:38 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > One possible way to detect promotion reliably is to look into timeline\n> > > history files. It is written immediately at promotion even on\n> > > standbys.\n> > \n> > The attached seems to work. It uses timeline history files to identify\n> > the source timeline. With this change pg_waldump no longer need to\n> > wait for end-of-recovery to finish.\n> > \n> > (It lacks doc part and test.. But I'm not sure how we can test this\n> > behavior.)\n> \n> This is a revised version.\n> \n> Revised getTimelineHistory()'s logic (refactored, and changed so that\n> it doesn't pick-up the wrong history files).\n> \n> perform_rewind always identify endtli based on source's timeline\n> history.\n\nNo need to \"search\" history file to identify it. The latest timeline\nmust be that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 08 Jun 2022 18:36:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "On Sat, Jun 4, 2022 at 8:59 AM James Coleman <jtc331@gmail.com> wrote:\n> A quick background refresher: after promoting a standby rewinding the\n> former primary requires that a checkpoint have been completed on the\n> new primary after promotion. This is correctly documented. However\n> pg_rewind incorrectly reports to the user that a rewind isn't\n> necessary because the source and target are on the same timeline.\n\nIs there anything intrinsic to the mechanism of operation of pg_rewind\nthat requires a timeline change, or could we just rewind within the\nsame timeline to an earlier LSN? In other words, maybe we could just\nremove this limitation of pg_rewind, and then perhaps it wouldn't be\nnecessary to determine what the new timeline is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Jul 2022 14:39:11 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 2:39 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Jun 4, 2022 at 8:59 AM James Coleman <jtc331@gmail.com> wrote:\n> > A quick background refresher: after promoting a standby rewinding the\n> > former primary requires that a checkpoint have been completed on the\n> > new primary after promotion. This is correctly documented. However\n> > pg_rewind incorrectly reports to the user that a rewind isn't\n> > necessary because the source and target are on the same timeline.\n>\n> Is there anything intrinsic to the mechanism of operation of pg_rewind\n> that requires a timeline change, or could we just rewind within the\n> same timeline to an earlier LSN? In other words, maybe we could just\n> remove this limitation of pg_rewind, and then perhaps it wouldn't be\n> necessary to determine what the new timeline is.\n\nI think (someone can correct me if I'm wrong) that in theory the\nmechanisms would support the source and target being on the same\ntimeline, but in practice that presents problems since you'd not have\nan LSN you could detect as the divergence point. If we allowed passing\n\"rewind to\" point LSN value, then that (again, as far as I understand\nit) would work, but it's a different use case. Specifically I wouldn't\nwant that option to need to be used for this particular case since in\nmy example there is in fact a real divergence point that we should be\ndetecting automatically.\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Tue, 5 Jul 2022 14:46:13 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Is there anything intrinsic to the mechanism of operation of pg_rewind\n> that requires a timeline change, or could we just rewind within the\n> same timeline to an earlier LSN? In other words, maybe we could just\n> remove this limitation of pg_rewind, and then perhaps it wouldn't be\n> necessary to determine what the new timeline is.\n\nThat seems like a fairly bad idea. For example, if you've already\narchived some WAL segments past the rewind target, there will shortly\nbe two versions of truth about what that part of the WAL space contains,\nand your archiver will either spit up or do probably-the-wrong-thing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Jul 2022 14:47:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 2:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Is there anything intrinsic to the mechanism of operation of pg_rewind\n> > that requires a timeline change, or could we just rewind within the\n> > same timeline to an earlier LSN? In other words, maybe we could just\n> > remove this limitation of pg_rewind, and then perhaps it wouldn't be\n> > necessary to determine what the new timeline is.\n>\n> That seems like a fairly bad idea. For example, if you've already\n> archived some WAL segments past the rewind target, there will shortly\n> be two versions of truth about what that part of the WAL space contains,\n> and your archiver will either spit up or do probably-the-wrong-thing.\n\nWell, only if you void the warranty. If you rewind the ex-primary to\nthe LSN where the new primary is replaying and tell it to start\nreplaying from there and follow the new primary's subsequent switch\nonto a new timeline, there's no split-brain problem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Jul 2022 14:51:35 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "At Tue, 5 Jul 2022 14:46:13 -0400, James Coleman <jtc331@gmail.com> wrote in \n> On Tue, Jul 5, 2022 at 2:39 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Sat, Jun 4, 2022 at 8:59 AM James Coleman <jtc331@gmail.com> wrote:\n> > > A quick background refresher: after promoting a standby rewinding the\n> > > former primary requires that a checkpoint have been completed on the\n> > > new primary after promotion. This is correctly documented. However\n> > > pg_rewind incorrectly reports to the user that a rewind isn't\n> > > necessary because the source and target are on the same timeline.\n> >\n> > Is there anything intrinsic to the mechanism of operation of pg_rewind\n> > that requires a timeline change, or could we just rewind within the\n> > same timeline to an earlier LSN? In other words, maybe we could just\n> > remove this limitation of pg_rewind, and then perhaps it wouldn't be\n> > necessary to determine what the new timeline is.\n>\n> I think (someone can correct me if I'm wrong) that in theory the\n> mechanisms would support the source and target being on the same\n> timeline, but in practice that presents problems since you'd not have\n> an LSN you could detect as the divergence point. If we allowed passing\n> \"rewind to\" point LSN value, then that (again, as far as I understand\n> it) would work, but it's a different use case. Specifically I wouldn't\n> want that option to need to be used for this particular case since in\n> my example there is in fact a real divergence point that we should be\n> detecting automatically.\n\nThe point of pg_rewind is finding diverging point then finding all\nblocks modified in the dead history (from the diverging point) and\n\"replace\" them with those of the live history. In that sense, to be\nexact, pg_rewind does not \"rewind\" a cluster. If no diverging point,\nthe last LSN of the cluster getting behind (as target cluster?) is\nthat and just no need to replace a block at all because no WAL exists\n(on the cluster being behind) after the last LSN.\n\nThe issue here is pg_rewind looks into control file to determine the\nsoruce timeline, because the control file is not updated until the\nfirst checkpoint ends after promotion finishes, even though file\nblocks are already diverged.\n\nEven in that case history file for the new timeline is already\ncreated, so searching for the latest history file works.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 06 Jul 2022 11:38:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "Hi, hackers\n\n> The issue here is pg_rewind looks into control file to determine the\n> soruce timeline, because the control file is not updated until the\n> first checkpoint ends after promotion finishes, even though file\n> blocks are already diverged.\n> \n> Even in that case history file for the new timeline is already\n> created, so searching for the latest history file works.\n\nI think this change is a good one because if I want\npg_rewind to run automatically after a promotion,\nI don't have to wait for the checkpoint to complete.\n\nThe attached patch is Horiguchi-san's patch with\nadditional tests. The tests are based on James's tests,\n\"010_no_checkpoint_after_promotion.pl\" tests that\npg_rewind is successfully executed without running\ncheckpoint after promote.\n\nBest Regards,\nKeisuke Kuroda\nNTT COMWARE",
"msg_date": "Wed, 16 Nov 2022 14:17:59 +0900",
"msg_from": "kuroda.keisuke@nttcom.co.jp",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "On 16/11/2022 07:17, kuroda.keisuke@nttcom.co.jp wrote:\n>> The issue here is pg_rewind looks into control file to determine the\n>> soruce timeline, because the control file is not updated until the\n>> first checkpoint ends after promotion finishes, even though file\n>> blocks are already diverged.\n>>\n>> Even in that case history file for the new timeline is already\n>> created, so searching for the latest history file works.\n> \n> I think this change is a good one because if I want\n> pg_rewind to run automatically after a promotion,\n> I don't have to wait for the checkpoint to complete.\n> \n> The attached patch is Horiguchi-san's patch with\n> additional tests. The tests are based on James's tests,\n> \"010_no_checkpoint_after_promotion.pl\" tests that\n> pg_rewind is successfully executed without running\n> checkpoint after promote.\n\nI fixed this last week in commit 009eeee746, see thread [1]. I'm sorry I \ndidn't notice this thread earlier.\n\nI didn't realize that we had a notice about this in the docs. I'll go \nand remove that. Thanks!\n\n- Heikki\n\n\n\n",
"msg_date": "Mon, 27 Feb 2023 09:33:13 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "hi Heikki,\n\nThanks for the mail, and thanks also for the commit (0a0500207a)\nto fix the document.\nI'm glad the problem was solved.\n\nBest Regards,\nKeisuke Kuroda\nNTT COMWARE\n\nOn 2023-02-27 16:33, Heikki Linnakangas wrote:\n> On 16/11/2022 07:17, kuroda.keisuke@nttcom.co.jp wrote:\n> \n> I fixed this last week in commit 009eeee746, see thread [1]. I'm sorry\n> I didn't notice this thread earlier.\n> \n> I didn't realize that we had a notice about this in the docs. I'll go\n> and remove that. Thanks!\n> \n> - Heikki\n\n\n\n",
"msg_date": "Tue, 28 Feb 2023 11:07:52 +0900",
"msg_from": "kuroda.keisuke@nttcom.co.jp",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 2:33 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 16/11/2022 07:17, kuroda.keisuke@nttcom.co.jp wrote:\n> >> The issue here is pg_rewind looks into control file to determine the\n> >> soruce timeline, because the control file is not updated until the\n> >> first checkpoint ends after promotion finishes, even though file\n> >> blocks are already diverged.\n> >>\n> >> Even in that case history file for the new timeline is already\n> >> created, so searching for the latest history file works.\n> >\n> > I think this change is a good one because if I want\n> > pg_rewind to run automatically after a promotion,\n> > I don't have to wait for the checkpoint to complete.\n> >\n> > The attached patch is Horiguchi-san's patch with\n> > additional tests. The tests are based on James's tests,\n> > \"010_no_checkpoint_after_promotion.pl\" tests that\n> > pg_rewind is successfully executed without running\n> > checkpoint after promote.\n>\n> I fixed this last week in commit 009eeee746, see thread [1]. I'm sorry I\n> didn't notice this thread earlier.\n>\n> I didn't realize that we had a notice about this in the docs. I'll go\n> and remove that. Thanks!\n>\n> - Heikki\n>\n\nThanks; I think the missing [1] (for reference) is:\nhttps://www.postgresql.org/message-id/9f568c97-87fe-a716-bd39-65299b8a60f4%40iki.fi\n\nJames\n\n\n",
"msg_date": "Tue, 28 Feb 2023 07:37:53 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind: warn when checkpoint hasn't happened after promotion"
}
] |
[
{
"msg_contents": "Hi,\r\nI opened an issue with an attached code on oracle_fdw git page : https://github.com/laurenz/oracle_fdw/issues/534\r\nBasically I expected to obtain a \"no privilege\" error from PostgreSQL when I have no read privilege on the postgres foreign table but I obtained an Oracle error instead.\r\nLaurenz investigated and closed the issue but he suggested perhaps I should post that on the hackers list since it also occurs with postgres-fdw on some occasion (I have investigated some more, and postgres_fdw does the same thing when you turn on use_remote_estimate.). Hence I do...\r\n\r\nBest regards,\r\nPhil",
"msg_date": "Sat, 4 Jun 2022 21:18:02 +0000",
"msg_from": "Phil Florent <philflorent@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Error from the foreign RDBMS on a foreign table I have no privilege on"
},
{
"msg_contents": "On Sat, 2022-06-04 at 21:18 +0000, Phil Florent wrote:\n> I opened an issue with an attached code on oracle_fdw git page : https://github.com/laurenz/oracle_fdw/issues/534 \n> Basically I expected to obtain a \"no privilege\" error from PostgreSQL when I have no read privilege\n> on the postgres foreign table but I obtained an Oracle error instead.\n> Laurenz investigated and closed the issue but he suggested perhaps I should post that on\n> the hackers list since it also occurs with postgres-fdw on some occasion(I have investigated some more,\n> and postgres_fdw does the same thing when you turn onuse_remote_estimate.). Hence I do...\n\nTo add more detais: permissions are checked at query execution time, but if \"use_remote_estimate\"\nis used, the planner already accesses the remote table, even if the user has no permissions\non the foreign table.\n\nI feel that that is no bug, but I'd be curious to know if others disagree.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 07 Jun 2022 05:03:00 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no privilege on"
},
{
"msg_contents": "On Tue, Jun 7, 2022, at 12:03 AM, Laurenz Albe wrote:\n> On Sat, 2022-06-04 at 21:18 +0000, Phil Florent wrote:\n> > I opened an issue with an attached code on oracle_fdw git page : https://github.com/laurenz/oracle_fdw/issues/534 \n> > Basically I expected to obtain a \"no privilege\" error from PostgreSQL when I have no read privilege\n> > on the postgres foreign table but I obtained an Oracle error instead.\n> > Laurenz investigated and closed the issue but he suggested perhaps I should post that on\n> > the hackers list since it also occurs with postgres-fdw on some occasion(I have investigated some more,\n> > and postgres_fdw does the same thing when you turn onuse_remote_estimate.). Hence I do...\n> \n> To add more detais: permissions are checked at query execution time, but if \"use_remote_estimate\"\n> is used, the planner already accesses the remote table, even if the user has no permissions\n> on the foreign table.\n> \n> I feel that that is no bug, but I'd be curious to know if others disagree.\nYou should expect an error (like in the example) -- probably not at that point.\nIt is behaving accordingly. However, that error is exposing an implementation\ndetail (FDW has to access the remote table at that phase). I don't think that\nchanging the current design (permission check after planning) for FDWs to\nprovide a good UX is worth it. IMO it is up to the FDW author to hide such\ncases if it doesn't cost much to do it.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Tue, 07 Jun 2022 11:24:55 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no privilege on"
},
{
"msg_contents": "At Tue, 07 Jun 2022 11:24:55 -0300, \"Euler Taveira\" <euler@eulerto.com> wrote in \n> \n> \n> On Tue, Jun 7, 2022, at 12:03 AM, Laurenz Albe wrote:\n> > On Sat, 2022-06-04 at 21:18 +0000, Phil Florent wrote:\n> > > I opened an issue with an attached code on oracle_fdw git page : https://github.com/laurenz/oracle_fdw/issues/534 \n> > > Basically I expected to obtain a \"no privilege\" error from PostgreSQL when I have no read privilege\n> > > on the postgres foreign table but I obtained an Oracle error instead.\n> > > Laurenz investigated and closed the issue but he suggested perhaps I should post that on\n> > > the hackers list since it also occurs with postgres-fdw on some occasion(I have investigated some more,\n> > > and postgres_fdw does the same thing when you turn onuse_remote_estimate.). Hence I do...\n> > \n> > To add more detais: permissions are checked at query execution time, but if \"use_remote_estimate\"\n> > is used, the planner already accesses the remote table, even if the user has no permissions\n> > on the foreign table.\n> > \n> > I feel that that is no bug, but I'd be curious to know if others disagree.\n> You should expect an error (like in the example) -- probably not at that point.\n> It is behaving accordingly. However, that error is exposing an implementation\n> detail (FDW has to access the remote table at that phase). I don't think that\n> changing the current design (permission check after planning) for FDWs to\n> provide a good UX is worth it. IMO it is up to the FDW author to hide such\n> cases if it doesn't cost much to do it.\n\nIt is few lines of code.\n\n>\ti = -1;\n>\twhile ((i = bms_next_member(rel->relids, i)) >= 0)\n>\t{\n>\t\tRangeTblEntry *rte = root->simple_rte_array[i];\n>\t\taclcheck_error(ACLCHECK_NO_PRIV,\n>\t\t\t\t\t get_relkind_objtype(rte->relkind),\n>\t\t\t\t\t get_rel_name(rte->relid));\n>\t}\n\nIt can be done in GetForeignRelSize callback by individual FDW, but it\nalso can be done in set_foreign_size() in core.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 08 Jun 2022 11:12:10 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no privilege on"
},
{
"msg_contents": "On Wed, 2022-06-08 at 11:12 +0900, Kyotaro Horiguchi wrote:\n> At Tue, 07 Jun 2022 11:24:55 -0300, \"Euler Taveira\" <euler@eulerto.com> wrote in \n> > On Tue, Jun 7, 2022, at 12:03 AM, Laurenz Albe wrote:\n> > > On Sat, 2022-06-04 at 21:18 +0000, Phil Florent wrote:\n> > > > I opened an issue with an attached code on oracle_fdw git page : https://github.com/laurenz/oracle_fdw/issues/534 \n> > > > Basically I expected to obtain a \"no privilege\" error from PostgreSQL when I have no read privilege\n> > > > on the postgres foreign table but I obtained an Oracle error instead.\n> > > > Laurenz investigated and closed the issue but he suggested perhaps I should post that on\n> > > > the hackers list since it also occurs with postgres-fdw on some occasion(I have investigated some more,\n> > > > and postgres_fdw does the same thing when you turn onuse_remote_estimate.). Hence I do...\n> > > \n> > > To add more detais: permissions are checked at query execution time, but if \"use_remote_estimate\"\n> > > is used, the planner already accesses the remote table, even if the user has no permissions\n> > > on the foreign table.\n> > > \n> > > I feel that that is no bug, but I'd be curious to know if others disagree.\n> > You should expect an error (like in the example) -- probably not at that point.\n> > It is behaving accordingly. However, that error is exposing an implementation\n> > detail (FDW has to access the remote table at that phase). I don't think that\n> > changing the current design (permission check after planning) for FDWs to\n> > provide a good UX is worth it. IMO it is up to the FDW author to hide such\n> > cases if it doesn't cost much to do it.\n> \n> It is few lines of code.\n> \n> > \ti = -1;\n> > \twhile ((i = bms_next_member(rel->relids, i)) >= 0)\n> > \t{\n> > \t\tRangeTblEntry *rte = root->simple_rte_array[i];\n> > \t\taclcheck_error(ACLCHECK_NO_PRIV,\n> > \t\t\t\t\t get_relkind_objtype(rte->relkind),\n> > \t\t\t\t\t get_rel_name(rte->relid));\n> > \t}\n> \n> It can be done in GetForeignRelSize callback by individual FDW, but it\n> also can be done in set_foreign_size() in core.\n\nIf anything, it should be done in the FDW, because it is only necessary if the\nFDW calls the remote site during planning.\n\nThe question is: is this a bug in postgres_fdw that should be fixed?\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 08 Jun 2022 04:38:02 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no\n privilege on"
},
{
"msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Wed, 2022-06-08 at 11:12 +0900, Kyotaro Horiguchi wrote:\n> RangeTblEntry *rte = root->simple_rte_array[i];\n> aclcheck_error(ACLCHECK_NO_PRIV,\n> get_relkind_objtype(rte->relkind),\n> get_rel_name(rte->relid));\n\nI think it's completely inappropriate for FDWs to be taking it on\nthemselves to inject privilege checks. The system design is that\nthat is checked at executor start; not before, not after.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Jun 2022 23:04:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no\n privilege on"
},
{
"msg_contents": "At Wed, 08 Jun 2022 04:38:02 +0200, Laurenz Albe <laurenz.albe@cybertec.at> wrote in \n> If anything, it should be done in the FDW, because it is only necessary if the\n> FDW calls the remote site during planning.\n> \n> The question is: is this a bug in postgres_fdw that should be fixed?\n\nIt's depends on what we think about allowing remote access trials\nthrough unprivileged foreign table in any style. It won't be a\nproblem if the system is configured appropriately but too-frequent\nestimate accesses via unprivileged foreign tables might be regarded as\nan attack attempt.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 08 Jun 2022 12:09:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no\n privilege on"
},
{
"msg_contents": "At Wed, 08 Jun 2022 12:09:27 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 08 Jun 2022 04:38:02 +0200, Laurenz Albe <laurenz.albe@cybertec.at> wrote in \n> > If anything, it should be done in the FDW, because it is only necessary if the\n> > FDW calls the remote site during planning.\n> > \n> > The question is: is this a bug in postgres_fdw that should be fixed?\n> \n> It's depends on what we think about allowing remote access trials\n> through unprivileged foreign table in any style. It won't be a\n> problem if the system is configured appropriately but too-frequent\n> estimate accesses via unprivileged foreign tables might be regarded as\n> an attack attempt.\n\nIn other words, I don't think it's not a bug and no need to fix. If\none want to prevent such estimate accesses via unprivileged foreign\ntables, it is enough to prevent non-privileged users from having a\nuser mapping. This might be worth documenting?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 08 Jun 2022 13:06:25 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no\n privilege on"
},
{
"msg_contents": "At Tue, 07 Jun 2022 23:04:52 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Wed, 2022-06-08 at 11:12 +0900, Kyotaro Horiguchi wrote:\n> > RangeTblEntry *rte = root->simple_rte_array[i];\n> > aclcheck_error(ACLCHECK_NO_PRIV,\n> > get_relkind_objtype(rte->relkind),\n> > get_rel_name(rte->relid));\n> \n> I think it's completely inappropriate for FDWs to be taking it on\n> themselves to inject privilege checks. The system design is that\n> that is checked at executor start; not before, not after.\n\nAh, yes. It's not good that checking it at multiple stages, and the\nonly one place should be executor start.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 08 Jun 2022 13:08:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no\n privilege on"
},
{
"msg_contents": "On Wed, 2022-06-08 at 13:06 +0900, Kyotaro Horiguchi wrote:\n> At Wed, 08 Jun 2022 12:09:27 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > At Wed, 08 Jun 2022 04:38:02 +0200, Laurenz Albe <laurenz.albe@cybertec.at> wrote in \n> > > If anything, it should be done in the FDW, because it is only necessary if the\n> > > FDW calls the remote site during planning.\n> > > \n> > > The question is: is this a bug in postgres_fdw that should be fixed?\n> > \n> > It's depends on what we think about allowing remote access trials\n> > through unprivileged foreign table in any style. It won't be a\n> > problem if the system is configured appropriately but too-frequent\n> > estimate accesses via unprivileged foreign tables might be regarded as\n> > an attack attempt.\n> \n> In other words, I don't think it's not a bug and no need to fix. If\n> one want to prevent such estimate accesses via unprivileged foreign\n> tables, it is enough to prevent non-privileged users from having a\n> user mapping. This might be worth documenting?\n\nI take Tom's comment above as saying that the current behavior is fine.\nSo yes, perhaps some documentation would be in order:\n\ndiff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml\nindex b43d0aecba..b4b7e36d28 100644\n--- a/doc/src/sgml/postgres-fdw.sgml\n+++ b/doc/src/sgml/postgres-fdw.sgml\n@@ -274,6 +274,14 @@ OPTIONS (ADD password_required 'false');\n but only for that table.\n The default is <literal>false</literal>.\n </para>\n+\n+ <para>\n+ Note that <command>EXPLAIN</command> will be run on the remote server\n+ at query planning time, <emphasis>before</emphasis> permissions on the\n+ foreign table are checked. This is not a security problem, since the\n+ subsequent error from the permission check will prevent the user from\n+ seeing any of the resulting data.\n+ </para>\n </listitem>\n </varlistentry>\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 08 Jun 2022 07:05:09 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no\n privilege on"
},
{
"msg_contents": "At Wed, 08 Jun 2022 07:05:09 +0200, Laurenz Albe <laurenz.albe@cybertec.at> wrote in \n> I take Tom's comment above as saying that the current behavior is fine.\n> So yes, perhaps some documentation would be in order:\n> \n> diff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml\n> index b43d0aecba..b4b7e36d28 100644\n> --- a/doc/src/sgml/postgres-fdw.sgml\n> +++ b/doc/src/sgml/postgres-fdw.sgml\n> @@ -274,6 +274,14 @@ OPTIONS (ADD password_required 'false');\n> but only for that table.\n> The default is <literal>false</literal>.\n> </para>\n> +\n> + <para>\n> + Note that <command>EXPLAIN</command> will be run on the remote server\n> + at query planning time, <emphasis>before</emphasis> permissions on the\n> + foreign table are checked. This is not a security problem, since the\n> + subsequent error from the permission check will prevent the user from\n> + seeing any of the resulting data.\n> + </para>\n> </listitem>\n> </varlistentry>\n\nLooks fine. I'd like to add something like \"If needed, depriving\nunprivileged users of relevant user mappings will prevent such remote\nexecutions that happen at planning-time.\"\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 08 Jun 2022 14:51:39 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no\n privilege on"
},
{
"msg_contents": "On Wed, Jun 8, 2022 at 2:51 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Wed, 08 Jun 2022 07:05:09 +0200, Laurenz Albe <laurenz.albe@cybertec.at> wrote in\n> > I take Tom's comment above as saying that the current behavior is fine.\n> > So yes, perhaps some documentation would be in order:\n> >\n> > diff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml\n> > index b43d0aecba..b4b7e36d28 100644\n> > --- a/doc/src/sgml/postgres-fdw.sgml\n> > +++ b/doc/src/sgml/postgres-fdw.sgml\n> > @@ -274,6 +274,14 @@ OPTIONS (ADD password_required 'false');\n> > but only for that table.\n> > The default is <literal>false</literal>.\n> > </para>\n> > +\n> > + <para>\n> > + Note that <command>EXPLAIN</command> will be run on the remote server\n> > + at query planning time, <emphasis>before</emphasis> permissions on the\n> > + foreign table are checked. This is not a security problem, since the\n> > + subsequent error from the permission check will prevent the user from\n> > + seeing any of the resulting data.\n> > + </para>\n> > </listitem>\n> > </varlistentry>\n>\n> Looks fine. I'd like to add something like \"If needed, depriving\n> unprivileged users of relevant user mappings will prevent such remote\n> executions that happen at planning-time.\"\n\nI agree on that point; if the EXPLAIN done on the remote side is\nreally a problem, I think the user should revoke privileges from the\nremote user specified in the user mapping, to prevent it. I’d rather\nrecommend granting to the remote user privileges consistent with those\ngranted to the local user.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 8 Jun 2022 19:06:11 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no\n privilege on"
},
{
"msg_contents": "On Wed, 2022-06-08 at 19:06 +0900, Etsuro Fujita wrote:\n> On Wed, Jun 8, 2022 at 2:51 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > At Wed, 08 Jun 2022 07:05:09 +0200, Laurenz Albe <laurenz.albe@cybertec.at> wrote in\n> > > diff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml\n> > > index b43d0aecba..b4b7e36d28 100644\n> > > --- a/doc/src/sgml/postgres-fdw.sgml\n> > > +++ b/doc/src/sgml/postgres-fdw.sgml\n> > > @@ -274,6 +274,14 @@ OPTIONS (ADD password_required 'false');\n> > > but only for that table.\n> > > The default is <literal>false</literal>.\n> > > </para>\n> > > +\n> > > + <para>\n> > > + Note that <command>EXPLAIN</command> will be run on the remote server\n> > > + at query planning time, <emphasis>before</emphasis> permissions on the\n> > > + foreign table are checked. This is not a security problem, since the\n> > > + subsequent error from the permission check will prevent the user from\n> > > + seeing any of the resulting data.\n> > > + </para>\n> > > </listitem>\n> > > </varlistentry>\n> > \n> > Looks fine. I'd like to add something like \"If needed, depriving\n> > unprivileged users of relevant user mappings will prevent such remote\n> > executions that happen at planning-time.\"\n> \n> I agree on that point; if the EXPLAIN done on the remote side is\n> really a problem, I think the user should revoke privileges from the\n> remote user specified in the user mapping, to prevent it. I’d rather\n> recommend granting to the remote user privileges consistent with those\n> granted to the local user.\n\nI don't think that is better. Even if the local and remote privileges are\nconsistent, you will get an error from the *remote* table access when trying\nto use a foreign table on which you don't have permissions.\nThe above paragraph describes why.\nNote that the original complaint against oracle_fdw that led to this thread\nwas just such a case.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 09 Jun 2022 02:49:02 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no\n privilege on"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 9:49 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> On Wed, 2022-06-08 at 19:06 +0900, Etsuro Fujita wrote:\n> > On Wed, Jun 8, 2022 at 2:51 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > > At Wed, 08 Jun 2022 07:05:09 +0200, Laurenz Albe <laurenz.albe@cybertec.at> wrote in\n> > > > diff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml\n> > > > index b43d0aecba..b4b7e36d28 100644\n> > > > --- a/doc/src/sgml/postgres-fdw.sgml\n> > > > +++ b/doc/src/sgml/postgres-fdw.sgml\n> > > > @@ -274,6 +274,14 @@ OPTIONS (ADD password_required 'false');\n> > > > but only for that table.\n> > > > The default is <literal>false</literal>.\n> > > > </para>\n> > > > +\n> > > > + <para>\n> > > > + Note that <command>EXPLAIN</command> will be run on the remote server\n> > > > + at query planning time, <emphasis>before</emphasis> permissions on the\n> > > > + foreign table are checked. This is not a security problem, since the\n> > > > + subsequent error from the permission check will prevent the user from\n> > > > + seeing any of the resulting data.\n> > > > + </para>\n> > > > </listitem>\n> > > > </varlistentry>\n> > >\n> > > Looks fine. I'd like to add something like \"If needed, depriving\n> > > unprivileged users of relevant user mappings will prevent such remote\n> > > executions that happen at planning-time.\"\n> >\n> > I agree on that point; if the EXPLAIN done on the remote side is\n> > really a problem, I think the user should revoke privileges from the\n> > remote user specified in the user mapping, to prevent it. I’d rather\n> > recommend granting to the remote user privileges consistent with those\n> > granted to the local user.\n>\n> I don't think that is better. Even if the local and remote privileges are\n> consistent, you will get an error from the *remote* table access when trying\n> to use a foreign table on which you don't have permissions.\n> The above paragraph describes why.\n> Note that the original complaint against oracle_fdw that led to this thread\n> was just such a case.\n\nI thought you were worried about security, so I thought that that\nwould be a good practice becasue that would reduce such risks, but I\ngot the point. However, I'm not 100% sure we really need to document\nsomething about this, because 1) this doesn't cause any actual\nproblems, as you described, and 2) this is a pretty-exceptional case\nIMO.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 9 Jun 2022 21:55:46 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no\n privilege on"
},
{
"msg_contents": "On Thu, 2022-06-09 at 21:55 +0900, Etsuro Fujita wrote:\n> On Thu, Jun 9, 2022 at 9:49 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > On Wed, 2022-06-08 at 19:06 +0900, Etsuro Fujita wrote:\n> > > On Wed, Jun 8, 2022 at 2:51 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > > > At Wed, 08 Jun 2022 07:05:09 +0200, Laurenz Albe <laurenz.albe@cybertec.at> wrote in\n> > > > > diff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml\n> > > > > index b43d0aecba..b4b7e36d28 100644\n> > > > > --- a/doc/src/sgml/postgres-fdw.sgml\n> > > > > +++ b/doc/src/sgml/postgres-fdw.sgml\n> > > > > @@ -274,6 +274,14 @@ OPTIONS (ADD password_required 'false');\n> > > > > but only for that table.\n> > > > > The default is <literal>false</literal>.\n> > > > > </para>\n> > > > > +\n> > > > > + <para>\n> > > > > + Note that <command>EXPLAIN</command> will be run on the remote server\n> > > > > + at query planning time, <emphasis>before</emphasis> permissions on the\n> > > > > + foreign table are checked. This is not a security problem, since the\n> > > > > + subsequent error from the permission check will prevent the user from\n> > > > > + seeing any of the resulting data.\n> > > > > + </para>\n> > > > > </listitem>\n> > > > > </varlistentry>\n> > > > \n> > > > Looks fine. I'd like to add something like \"If needed, depriving\n> > > > unprivileged users of relevant user mappings will prevent such remote\n> > > > executions that happen at planning-time.\"\n> > > \n> > > I agree on that point; if the EXPLAIN done on the remote side is\n> > > really a problem, I think the user should revoke privileges from the\n> > > remote user specified in the user mapping, to prevent it. I’d rather\n> > > recommend granting to the remote user privileges consistent with those\n> > > granted to the local user.\n> > \n> > I don't think that is better. Even if the local and remote privileges are\n> > consistent, you will get an error from the *remote* table access when trying\n> > to use a foreign table on which you don't have permissions.\n> > The above paragraph describes why.\n> > Note that the original complaint against oracle_fdw that led to this thread\n> > was just such a case.\n> \n> I thought you were worried about security, so I thought that that\n> would be a good practice becasue that would reduce such risks, but I\n> got the point. However, I'm not 100% sure we really need to document\n> something about this, because 1) this doesn't cause any actual\n> problems, as you described, and 2) this is a pretty-exceptional case\n> IMO.\n\nI am not sure if it worth adding to the documentation. I would never have thought\nof the problem if Phil hadn't brought it up. On the other hand, I was surprised\nto learn that permissions aren't checked until the executor kicks in.\nIt makes sense, but some documentation might help others in that situation.\n\nI'll gladly leave the decision to your judgement as a committer.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 09 Jun 2022 18:26:29 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no\n privilege on"
},
{
"msg_contents": "On Fri, Jun 10, 2022 at 1:26 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> On Thu, 2022-06-09 at 21:55 +0900, Etsuro Fujita wrote:\n> > However, I'm not 100% sure we really need to document\n> > something about this, because 1) this doesn't cause any actual\n> > problems, as you described, and 2) this is a pretty-exceptional case\n> > IMO.\n>\n> I am not sure if it worth adding to the documentation. I would never have thought\n> of the problem if Phil hadn't brought it up. On the other hand, I was surprised\n> to learn that permissions aren't checked until the executor kicks in.\n> It makes sense, but some documentation might help others in that situation.\n\n+1 for adding such a document.\n\n> I'll gladly leave the decision to your judgement as a committer.\n\nIIRC, there are no reports about this from the postgres_fdw users, so\nmy inclination would be to leave the documentation alone, for now.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 10 Jun 2022 17:17:22 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no\n privilege on"
},
{
"msg_contents": "On Fri, 2022-06-10 at 17:17 +0900, Etsuro Fujita wrote:\n> > I am not sure if it worth adding to the documentation. I would never have thought\n> > of the problem if Phil hadn't brought it up. On the other hand, I was surprised\n> > to learn that permissions aren't checked until the executor kicks in.\n> > It makes sense, but some documentation might help others in that situation.\n> \n> +1 for adding such a document.\n> \n> > I'll gladly leave the decision to your judgement as a committer.\n> \n> IIRC, there are no reports about this from the postgres_fdw users, so\n> my inclination would be to leave the documentation alone, for now.\n\nI understand that you are for documenting the timing of permission checks,\nbut not in the postgres_fdw documentation. However, this is the only occasion\nwhere the user might notice unexpected behavior on account of the timing of\npermission checks. Other than that, I consider this below the threshold for\nuser-facing documentation.\n\nI'm ok with just doing nothing here, I just wanted it discussed in public.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 10 Jun 2022 11:17:07 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no\n privilege on"
},
{
"msg_contents": "Hi,\nThanks for your explanations.\nTest case had no real-world logic anyway. It was just an oversight in a one-time use legacy migration script.\nRegards,\nPhil\n________________________________\nFrom: Laurenz Albe <laurenz.albe@cybertec.at>\nSent: Friday, June 10, 2022 11:17:07 AM\nTo: Etsuro Fujita <etsuro.fujita@gmail.com>\nCc: Kyotaro Horiguchi <horikyota.ntt@gmail.com>; euler@eulerto.com <euler@eulerto.com>; philflorent@hotmail.com <philflorent@hotmail.com>; pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Error from the foreign RDBMS on a foreign table I have no privilege on\n\nOn Fri, 2022-06-10 at 17:17 +0900, Etsuro Fujita wrote:\n> > I am not sure if it worth adding to the documentation. I would never have thought\n> > of the problem if Phil hadn't brought it up. On the other hand, I was surprised\n> > to learn that permissions aren't checked until the executor kicks in.\n> > It makes sense, but some documentation might help others in that situation.\n>\n> +1 for adding such a document.\n>\n> > I'll gladly leave the decision to your judgement as a committer.\n>\n> IIRC, there are no reports about this from the postgres_fdw users, so\n> my inclination would be to leave the documentation alone, for now.\n\nI understand that you are for documenting the timing of permission checks,\nbut not in the postgres_fdw documentation. However, this is the only occasion\nwhere the user might notice unexpected behavior on account of the timing of\npermission checks. Other than that, I consider this below the threshold for\nuser-facing documentation.\n\nI'm ok with just doing nothing here, I just wanted it discussed in public.\n\nYours,\nLaurenz Albe\n\n\n\n\n\n\nHi,\nThanks for your explanations.\nTest case had no real-world logic anyway. It was just an oversight in a one-time use legacy migration script.\nRegards,\nPhil\n\nFrom: Laurenz Albe <laurenz.albe@cybertec.at>\nSent: Friday, June 10, 2022 11:17:07 AM\nTo: Etsuro Fujita <etsuro.fujita@gmail.com>\nCc: Kyotaro Horiguchi <horikyota.ntt@gmail.com>; euler@eulerto.com <euler@eulerto.com>; philflorent@hotmail.com <philflorent@hotmail.com>; pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Error from the foreign RDBMS on a foreign table I have no privilege on\n \n\n\nOn Fri, 2022-06-10 at 17:17 +0900, Etsuro Fujita wrote:\n> > I am not sure if it worth adding to the documentation. I would never have thought\n> > of the problem if Phil hadn't brought it up. On the other hand, I was surprised\n> > to learn that permissions aren't checked until the executor kicks in.\n> > It makes sense, but some documentation might help others in that situation.\n> \n> +1 for adding such a document.\n> \n> > I'll gladly leave the decision to your judgement as a committer.\n> \n> IIRC, there are no reports about this from the postgres_fdw users, so\n> my inclination would be to leave the documentation alone, for now.\n\nI understand that you are for documenting the timing of permission checks,\nbut not in the postgres_fdw documentation. However, this is the only occasion\nwhere the user might notice unexpected behavior on account of the timing of\npermission checks. Other than that, I consider this below the threshold for\nuser-facing documentation.\n\nI'm ok with just doing nothing here, I just wanted it discussed in public.\n\nYours,\nLaurenz Albe",
"msg_date": "Fri, 10 Jun 2022 16:20:09 +0000",
"msg_from": "Phil Florent <philflorent@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no\n privilege on"
},
{
"msg_contents": "On Fri, Jun 10, 2022 at 6:17 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> On Fri, 2022-06-10 at 17:17 +0900, Etsuro Fujita wrote:\n> > > I am not sure if it worth adding to the documentation. I would never have thought\n> > > of the problem if Phil hadn't brought it up. On the other hand, I was surprised\n> > > to learn that permissions aren't checked until the executor kicks in.\n> > > It makes sense, but some documentation might help others in that situation.\n> >\n> > +1 for adding such a document.\n> >\n> > > I'll gladly leave the decision to your judgement as a committer.\n> >\n> > IIRC, there are no reports about this from the postgres_fdw users, so\n> > my inclination would be to leave the documentation alone, for now.\n>\n> I understand that you are for documenting the timing of permission checks,\n> but not in the postgres_fdw documentation.\n\nYes, I think so.\n\n> However, this is the only occasion\n> where the user might notice unexpected behavior on account of the timing of\n> permission checks. Other than that, I consider this below the threshold for\n> user-facing documentation.\n\nI think PREPARE/EXECUTE have a similar issue:\n\npostgres=# create table t1 (a int, b int);\nCREATE TABLE\npostgres=# create user foouser;\nCREATE ROLE\npostgres=# set role foouser;\nSET\npostgres=> prepare fooplan (int, int) as insert into t1 values ($1, $2);\nPREPARE\npostgres=> execute fooplan (9999, 9999);\nERROR: permission denied for table t1\n\nThe user foouser is allowed to PREPARE the insert statement, without\nthe insert privilege on the table t1, as the permission check is\ndelayed until EXECUTE.\n\nSo I thought it would be good to add a note about the timing to the\ndocumentation about the Postgres core, such as arch-dev.sgml (the\n\"Overview of PostgreSQL Internals\" chapter). But as far as I know,\nthere aren’t any reports on the PREPARE/EXECUTE behavior, either, so\nthere might be less need to do so, I think.\n\nThanks for the discussion!\n\nSorry for the delay.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 4 Jul 2022 19:59:45 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error from the foreign RDBMS on a foreign table I have no\n privilege on"
}
] |
[
{
"msg_contents": "Hi,\n\nAt on of the pgcon unconference sessions a couple days ago, I presented\na bunch of benchmark results comparing performance with different\ndata/WAL block size. Most of the OLTP results showed significant gains\n(up to 50%) with smaller (4k) data pages.\n\nThis opened a long discussion about possible explanations - I claimed\none of the main factors is the adoption of flash storage, due to pretty\nfundamental differences between HDD and SSD systems. But the discussion\nconcluded with an agreement to continue investigating this, so here's an\nattempt to support the claim with some measurements/data.\n\nLet me present results of low-level fio benchmarks on a couple different\nHDD and SSD drives. This should eliminate any postgres-related influence\n(e.g. FPW), and demonstrates inherent HDD/SSD differences.\n\nEach of the PDF pages shows results for five basic workloads:\n\n - random read\n - random write\n - random r/w\n - sequential read\n - sequential write\n\nThe chars on the left show IOPS, charts on right bandwidth. The x-axis\nshows I/O depth - number of concurrent I/O requests or queue length,\nwith values 1, 2, 4, 8, 64, 128. And each \"group\" shows results for\ndifferent page size (1K, 2K, 4K, 8K, 16K, 32K). The colored page size is\nthe default value (8K).\n\nThis makes it clear how a page size affects performance (IOPS and BW)\nfor a given I/O depth, and also the impact of higher I/O depth.\n\n\nI do think the difference between HDD and SSD storage is pretty clearly\nvisible (even though there is some variability between the SSD devices).\n\nIMHO the crucial difference is that for HDD, the page size has almost no\nimpact on IOPS (in the random workloads). If you look at the random read\nresults, the page size does not matter - once you fix the I/O depth, the\nresult are pretty much exactly the same. For the random write test it's\neven clearer, because the I/O depth does not matter and you get 350 IOPS\nno matter the page size or I/O depth.\n\nThis makes perfect sense, because for \"spinning rust\" the dominant part\nis seeking to the right part of the platter. And once you've seeked to\nthe right place, it does not matter much if you read 1K or 32K - the\ncost is much lower than the seek.\n\nAnd the economy is pretty simple - you can't really improve IOPS, but\nyou can improve bandwidth by using larger pages. If you do 350 IOPS, it\ncan be either 350kB/s with 1K pages or 11MB/s with 32KB pages).\n\nSo we'd gain very little by using smaller pages, and larger pages\nimprove bandwidth - not just for random tests, but sequential too. And\n8KB seems like a reasonable compromise - bandwidth with 32KB pages is\nbetter, but with higher I/O depths (8 or more) we get pretty close,\nlikely due to hitting SATA limits.\n\n\nNow, compare this to the SSD. There are some differences between the\nmodels, manufacturers, interface etc. but the impact of page size on\nIOPS is pretty clear. On the Optane you can get +20-30% by using 4K\npages, on the Samsung it's even more, etc. This means that workloads\ndominated by random I/O get significant benefit from smaller pages.\n\nAnother consequence of this is that for sequential workloads, the\ndifference between page sizes is smaller, because when smaller pages\nreach better IOPS this reduces the difference in bandwidth.\n\n\nIf you imagine two extremes:\n\n 1) different pages yield the same IOPS\n\n 2) different pages yield the same bandwidth\n\nthen old-school HDDs are pretty close to (1), while future storage\nsystems (persistent memory) is likely close to (2).\n\nThis matters, because various trade-offs we've made in the past are\nreasonable for (1), but will be inefficient for (2). And as the results\nI shared during the pgcon session suggest, we might do so much better\neven for current SSDs, which are somewhere between (1) and (2).\n\n\nThe other important factor is the native SSD page, which is similar to\nsectors on HDD. SSDs however don't allow in-place updates, and have to\nreset/rewrite of the whole native page. It's actually more complicated,\nbecause the reset happens at a much larger scale (~8MB block), so it\ndoes matter how quickly we \"dirty\" the data. The consequence is that\nusing data pages smaller than the native page (depends on the device,\nbut seems 4K is the common value) either does not help or actually hurts\nthe write performance.\n\nAll the SSD results show this behavior - the Optane and Samsung nicely\nshow that 4K is much better (in random write IOPS) than 8K, but 1-2K\npages make it worse.\n\n\nI'm sure there are other important factors - for example, eliminating\nthe very expensive \"seek\" cost (SSDs can do 10k-100k IOPS easily, while\nHDDs did ~100-400 IOPS), other steps start to play much bigger role. I\nwouldn't be surprised if memcpy() started to matter, for example.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 5 Jun 2022 01:22:48 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "pgcon unconference / impact of block size on performance"
},
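The two extremes described in the message above - (1) page size leaves IOPS unchanged, (2) page size leaves bandwidth unchanged - can be sketched with a trivial back-of-the-envelope model. The 350 IOPS HDD figure is taken from the numbers in the message; the ~1 GB/s ceiling and the helper names are illustrative assumptions, not part of any benchmark script.

```python
# Extreme (1): a seek-bound device (old-school HDD) delivers a fixed
# number of IOPS, so bandwidth grows linearly with the page size.
def hdd_bandwidth_kb(iops, page_kb):
    return iops * page_kb

# Extreme (2): a bandwidth-bound device (the persistent-memory end of
# the spectrum) delivers fixed throughput, so IOPS shrinks as pages grow.
def pmem_iops(bandwidth_kb, page_kb):
    return bandwidth_kb // page_kb

if __name__ == "__main__":
    for page_kb in (1, 2, 4, 8, 16, 32):
        print(page_kb,
              hdd_bandwidth_kb(350, page_kb),   # HDD: 350 kB/s up to 11200 kB/s
              pmem_iops(1_000_000, page_kb))    # assumed ~1 GB/s ceiling
```

At 350 IOPS this gives 350 kB/s with 1K pages and ~11 MB/s with 32K pages, matching the HDD observation; real SSDs sit somewhere between the two curves.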
{
"msg_contents": "On Sat, Jun 4, 2022 at 5:23 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> At on of the pgcon unconference sessions a couple days ago, I presented\n> a bunch of benchmark results comparing performance with different\n> data/WAL block size. Most of the OLTP results showed significant gains\n> (up to 50%) with smaller (4k) data pages.\n\n\nThanks for sharing this Thomas.\n\nWe’ve been doing similar tests with different storage classes in kubernetes\nclusters.\n\nRoberto",
"msg_date": "Sat, 4 Jun 2022 18:21:07 -0600",
"msg_from": "Roberto Mello <roberto.mello@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgcon unconference / impact of block size on performance"
},
{
"msg_contents": "\n\nOn 6/5/22 02:21, Roberto Mello wrote:\n> On Sat, Jun 4, 2022 at 5:23 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> wrote:\n> \n> Hi,\n> \n> At on of the pgcon unconference sessions a couple days ago, I presented\n> a bunch of benchmark results comparing performance with different\n> data/WAL block size. Most of the OLTP results showed significant gains\n> (up to 50%) with smaller (4k) data pages.\n> \n> \n> Thanks for sharing this Thomas.\n> \n> We’ve been doing similar tests with different storage classes in\n> kubernetes clusters. \n> \n\nCan you share some of the results? Might be interesting, particularly if\nyou use network-attached storage (like EBS, etc.).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 5 Jun 2022 11:51:32 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pgcon unconference / impact of block size on performance"
},
{
"msg_contents": "Hi Tomas,\n\n> Hi,\n> \n> At on of the pgcon unconference sessions a couple days ago, I presented a\n> bunch of benchmark results comparing performance with different data/WAL\n> block size. Most of the OLTP results showed significant gains (up to 50%) with\n> smaller (4k) data pages.\n\nNice. I just saw this https://wiki.postgresql.org/wiki/PgCon_2022_Developer_Unconference , do you have any plans for publishing those other graphs too (e.g. WAL block size impact)?\n\n> This opened a long discussion about possible explanations - I claimed one of the\n> main factors is the adoption of flash storage, due to pretty fundamental\n> differences between HDD and SSD systems. But the discussion concluded with an\n> agreement to continue investigating this, so here's an attempt to support the\n> claim with some measurements/data.\n> \n> Let me present results of low-level fio benchmarks on a couple different HDD\n> and SSD drives. This should eliminate any postgres-related influence (e.g. FPW),\n> and demonstrates inherent HDD/SSD differences.\n> All the SSD results show this behavior - the Optane and Samsung nicely show\n> that 4K is much better (in random write IOPS) than 8K, but 1-2K pages make it\n> worse.\n> \n[..]\nCan you share what Linux kernel version, what filesystem, its mount options and the LVM setup you were using, if any?\n\nI've hastily tried your script on 4VCPU/32GB RAM/1xNVMe device @ ~900GB (AWS i3.xlarge), kernel 5.x, ext4 defaults, no LVM, libaio only, fio deviations: runtime -> 1min, 64GB file, 1 iteration only. Results are attached, w/o graphs. \n\n> Now, compare this to the SSD. There are some differences between the models, manufacturers, interface etc. but the impact of page size on IOPS is pretty clear. On the Optane you can get +20-30% by using 4K pages, on the Samsung it's even more, etc. 
This means that workloads dominated by random I/O get significant benefit from smaller pages.\n\nYup, same here, reproduced, 1.42x faster on writes:\n[root@x ~]# cd libaio/nvme/randwrite/128/ # 128=queue depth\n[root@x 128]# grep -r \"write:\" * | awk '{print $1, $4, $5}' | sort -n\n1k/1.txt: bw=24162KB/s, iops=24161,\n2k/1.txt: bw=47164KB/s, iops=23582,\n4k/1.txt: bw=280450KB/s, iops=70112, <<<\n8k/1.txt: bw=393082KB/s, iops=49135,\n16k/1.txt: bw=393103KB/s, iops=24568,\n32k/1.txt: bw=393283KB/s, iops=12290,\nBTW it's interesting to compare to your Optane 900P result (same two high bars for IOPS @ 4,8kB), but in my case it's even more important to select 4kB so it behaves more like the Samsung 860 in your case\n\n# 1.41x on randreads\n[root@x ~]# cd libaio/nvme/randread/128/ # 128=queue depth\n[root@x 128]# grep -r \"read :\" | awk '{print $1, $5, $6}' | sort -n\n1k/1.txt: bw=169938KB/s, iops=169937,\n2k/1.txt: bw=376653KB/s, iops=188326,\n4k/1.txt: bw=691529KB/s, iops=172882, <<<\n8k/1.txt: bw=976916KB/s, iops=122114,\n16k/1.txt: bw=990524KB/s, iops=61907,\n32k/1.txt: bw=974318KB/s, iops=30447,\n\nI think that the above is just a demonstration of device bandwidth saturation: 32k*30k IOPS =~ 1GB/s random reads. Given that the DB would be tuned @ 4kB for the app (OLTP), the occasional Parallel Seq Scan \"critical reports\" could then only achieve 70% of what they could achieve on 8kB, correct? (I'm assuming most real systems are really OLTP but with some reporting/data exporting needs). One way or another it would be very nice to be able to select the tradeoff using initdb(1) without the need to recompile, which then begs for some initdb --calibrate /mnt/nvme (effective_io_concurrency, DB page size, ...).\n\nDo you envision any plans for this, or are we still in need of gathering more info on exactly why this happens? (perf reports?)\n\nAlso, have you guys discussed at that meeting any long-term future plans for the storage layer, by any chance? 
If sticking to 4kB pages on DB/page size/hardware sector size, wouldn't it be possible to also win by disabling FPWs in the longer run using uring (assuming O_DIRECT | O_ATOMIC one day?)\nI recall that Thomas M. was researching O_ATOMIC, I think he wrote some of that pretty nicely in [1] \n\n[1] - https://wiki.postgresql.org/wiki/FreeBSD/AtomicIO",
"msg_date": "Mon, 6 Jun 2022 14:27:06 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: pgcon unconference / impact of block size on performance"
},
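Jakub's "32k*30k IOPS =~ 1GB/s" observation can be sanity-checked by recomputing bandwidth as IOPS times block size for the randread results quoted above. This is a quick illustrative sketch; the numbers are copied from the mail and nothing here is part of fio itself.

```python
# block size (KB) -> observed randread IOPS at queue depth 128 (ext4),
# copied from the results quoted in the thread
randread = {1: 169937, 2: 188326, 4: 172882, 8: 122114, 16: 61907, 32: 30447}

def bandwidth_mb_s(block_kb, iops):
    # payload bandwidth implied by an IOPS figure at a given block size
    return block_kb * iops / 1024.0

for bs, iops in sorted(randread.items()):
    print(f"{bs:2d}k: {bandwidth_mb_s(bs, iops):7.1f} MB/s")
```

The 8-32K rows all land near ~950 MB/s (the device's read ceiling), while the 1K row reaches only ~166 MB/s - small pages are IOPS-limited and large pages are bandwidth-limited, which is exactly the saturation effect described.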
{
"msg_contents": "\nOn 6/6/22 16:27, Jakub Wartak wrote:\n> Hi Tomas,\n> \n>> Hi,\n>>\n>> At on of the pgcon unconference sessions a couple days ago, I presented a\n>> bunch of benchmark results comparing performance with different data/WAL\n>> block size. Most of the OLTP results showed significant gains (up to 50%) with\n>> smaller (4k) data pages.\n> \n> Nice. I just saw this\nhttps://wiki.postgresql.org/wiki/PgCon_2022_Developer_Unconference , do\nyou have any plans for publishing those other graphs too (e.g. WAL block\nsize impact)?\n> \n\nWell, there's plenty of charts in the github repositories, including the\ncharts I think you're asking for:\n\nhttps://github.com/tvondra/pg-block-bench-pgbench/blob/master/process/heatmaps/xeon/20220406-fpw/16/heatmap-tps.png\n\nhttps://github.com/tvondra/pg-block-bench-pgbench/blob/master/process/heatmaps/i5/20220427-fpw/16/heatmap-io-tps.png\n\n\nI admit the charts may not be documented very clearly :-(\n\n>> This opened a long discussion about possible explanations - I claimed one of the\n>> main factors is the adoption of flash storage, due to pretty fundamental\n>> differences between HDD and SSD systems. But the discussion concluded with an\n>> agreement to continue investigating this, so here's an attempt to support the\n>> claim with some measurements/data.\n>>\n>> Let me present results of low-level fio benchmarks on a couple different HDD\n>> and SSD drives. This should eliminate any postgres-related influence (e.g. 
FPW),\n>> and demonstrates inherent HDD/SSD differences.\n>> All the SSD results show this behavior - the Optane and Samsung nicely show\n>> that 4K is much better (in random write IOPS) than 8K, but 1-2K pages make it\n>> worse.\n>>\n> [..]\n> Can you share what Linux kernel version, what filesystem , it's\n> mount options and LVM setup were you using if any(?)\n> \n\nThe PostgreSQL benchmarks were with 5.14.x kernels, with either ext4 or\nxfs filesystems.\n\ni5 uses LVM on the 6x SATA SSD devices, with this config:\n\nbench ~ # mdadm --detail /dev/md0\n/dev/md0:\n Version : 0.90\n Creation Time : Thu Feb 8 15:05:49 2018\n Raid Level : raid0\n Array Size : 586106880 (558.96 GiB 600.17 GB)\n Raid Devices : 6\n Total Devices : 6\n Preferred Minor : 0\n Persistence : Superblock is persistent\n\n Update Time : Thu Feb 8 15:05:49 2018\n State : clean\n Active Devices : 6\n Working Devices : 6\n Failed Devices : 0\n Spare Devices : 0\n\n Chunk Size : 512K\n\nConsistency Policy : none\n\n UUID : 24c6158c:36454b38:529cc8e5:b4b9cc9d (local to host\nbench)\n Events : 0.1\n\n Number Major Minor RaidDevice State\n 0 8 1 0 active sync /dev/sda1\n 1 8 17 1 active sync /dev/sdb1\n 2 8 33 2 active sync /dev/sdc1\n 3 8 49 3 active sync /dev/sdd1\n 4 8 65 4 active sync /dev/sde1\n 5 8 81 5 active sync /dev/sdf1\n\nbench ~ # mount | grep md0\n/dev/md0 on /mnt/raid type xfs\n(rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=16,swidth=96,noquota)\n\n\nand the xeon just uses ext4 on the device directly:\n\n/dev/nvme0n1p1 on /mnt/data type ext4 (rw,relatime)\n\n\n> I've hastily tried your script on 4VCPU/32GB RAM/1xNVMe device @ \n> ~900GB (AWS i3.xlarge), kernel 5.x, ext4 defaults, no LVM, libaio\n> only, fio deviations: runtime -> 1min, 64GB file, 1 iteration only.\n> Results are attached, w/o graphs.\n> \n>> Now, compare this to the SSD. There are some differences between \n>> the models, manufacturers, interface etc. but the impact of page\n>> size on IOPS is pretty clear. 
On the Optane you can get +20-30% by\n>> using 4K pages, on the Samsung it's even more, etc. This means that\n>> workloads dominated by random I/O get significant benefit from\n>> smaller pages.\n> \n> Yup, same here, reproduced, 1.42x faster on writes:\n> [root@x ~]# cd libaio/nvme/randwrite/128/ # 128=queue depth\n> [root@x 128]# grep -r \"write:\" * | awk '{print $1, $4, $5}' | sort -n\n> 1k/1.txt: bw=24162KB/s, iops=24161,\n> 2k/1.txt: bw=47164KB/s, iops=23582,\n> 4k/1.txt: bw=280450KB/s, iops=70112, <<<\n> 8k/1.txt: bw=393082KB/s, iops=49135,\n> 16k/1.txt: bw=393103KB/s, iops=24568,\n> 32k/1.txt: bw=393283KB/s, iops=12290,\n>\n> BTW it's interesting to compare to your's Optane 900P result (same \n> two high bars for IOPS @ 4,8kB), but in my case it's even more import\n> to select 4kB so it behaves more like Samsung 860 in your case\n> \n\nThanks. Interesting!\n\n> # 1.41x on randreads\n> [root@x ~]# cd libaio/nvme/randread/128/ # 128=queue depth\n> [root@x 128]# grep -r \"read :\" | awk '{print $1, $5, $6}' | sort -n\n> 1k/1.txt: bw=169938KB/s, iops=169937,\n> 2k/1.txt: bw=376653KB/s, iops=188326,\n> 4k/1.txt: bw=691529KB/s, iops=172882, <<<\n> 8k/1.txt: bw=976916KB/s, iops=122114,\n> 16k/1.txt: bw=990524KB/s, iops=61907,\n> 32k/1.txt: bw=974318KB/s, iops=30447,\n> \n> I think that the above just a demonstration of device bandwidth \n> saturation: 32k*30k IOPS =~ 1GB/s random reads. Given that DB would\n> be tuned @ 4kB for app(OLTP), but once upon a time Parallel Seq\n> Scans \"critical reports\" could only achieve 70% of what it could\n> achieve on 8kB, correct? (I'm assuming most real systems are really\n> OLTP but with some reporting/data exporting needs).\n> \n\nRight, that's roughly my thinking too. 
Also, OLAP queries often do a lot\nof random I/O, due to index scans etc.\n\nI also wonder how this is related to filesystem page size - in all the\nbenchmarks I did I used the default (4k), but maybe it'd behave differently\nif the filesystem page matched the data page.\n\n> One way or another it would be very nice to be able to select the\n> tradeoff using initdb(1) without the need to recompile, which then\n> begs for some initdb --calibrate /mnt/nvme (effective_io_concurrency,\n> DB page size, ...).>\n> Do you envision any plans for this we still in a\n> need to gather more\n> info exactly why this happens? (perf reports?)\n>\n\nNot sure I follow. Plans for what? Something that calibrates cost\nparameters? That might be useful, but that's a rather separate issue\nfrom what's discussed here - page size, which needs to happen before\ninitdb (at least with how things work currently).\n\nThe other issue (e.g. with effective_io_concurrency) is that it very\nmuch depends on the access pattern - random pages and sequential pages\nwill require very different e_i_c values. But again, that's something to\ndiscuss in a separate thread (e.g. [1])\n\n[1]: https://postgr.es/m/Yl92RVoXVfs+z2Yj@momjian.us\n\n> Also have you guys discussed on that meeting any long-term future \n> plans on storage layer by any chance ? If sticking to 4kB pages on \n> DB/page size/hardware sector size, wouldn't it be possible to win\n> also disabling FPWs in the longer run using uring (assuming O_DIRECT\n> | O_ATOMIC one day?)>\n> I recall that Thomas M. was researching O_ATOMIC, I think he wrote\n> some of that pretty nicely in [1] \n> \n> [1] - https://wiki.postgresql.org/wiki/FreeBSD/AtomicIO\n\nNo, no such discussion - at least not in this unconference slot.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 Jun 2022 17:00:56 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pgcon unconference / impact of block size on performance"
},
{
"msg_contents": "\n\nOn 6/6/22 17:00, Tomas Vondra wrote:\n> \n> On 6/6/22 16:27, Jakub Wartak wrote:\n>> Hi Tomas,\n>>\n>>> Hi,\n>>>\n>>> At on of the pgcon unconference sessions a couple days ago, I presented a\n>>> bunch of benchmark results comparing performance with different data/WAL\n>>> block size. Most of the OLTP results showed significant gains (up to 50%) with\n>>> smaller (4k) data pages.\n>>\n>> Nice. I just saw this\n> https://wiki.postgresql.org/wiki/PgCon_2022_Developer_Unconference , do\n> you have any plans for publishing those other graphs too (e.g. WAL block\n> size impact)?\n>>\n> \n> Well, there's plenty of charts in the github repositories, including the\n> charts I think you're asking for:\n> \n> https://github.com/tvondra/pg-block-bench-pgbench/blob/master/process/heatmaps/xeon/20220406-fpw/16/heatmap-tps.png\n> \n> https://github.com/tvondra/pg-block-bench-pgbench/blob/master/process/heatmaps/i5/20220427-fpw/16/heatmap-io-tps.png\n> \n> \n> I admit the charts may not be documented very clearly :-(\n> \n>>> This opened a long discussion about possible explanations - I claimed one of the\n>>> main factors is the adoption of flash storage, due to pretty fundamental\n>>> differences between HDD and SSD systems. But the discussion concluded with an\n>>> agreement to continue investigating this, so here's an attempt to support the\n>>> claim with some measurements/data.\n>>>\n>>> Let me present results of low-level fio benchmarks on a couple different HDD\n>>> and SSD drives. This should eliminate any postgres-related influence (e.g. 
FPW),\n>>> and demonstrates inherent HDD/SSD differences.\n>>> All the SSD results show this behavior - the Optane and Samsung nicely show\n>>> that 4K is much better (in random write IOPS) than 8K, but 1-2K pages make it\n>>> worse.\n>>>\n>> [..]\n>> Can you share what Linux kernel version, what filesystem , it's\n>> mount options and LVM setup were you using if any(?)\n>>\n> \n> The PostgreSQL benchmarks were with 5.14.x kernels, with either ext4 or\n> xfs filesystems.\n> \n\nI realized I mentioned just two of the devices, used for the postgres\ntest, but this thread is dealing mostly with about fio results. So let\nme list info about all the devices/filesystems:\n\ni5\n--\n\nIntel SSD 320 120GB SATA (SSDSA2CW12)\n/dev/sdh1 on /mnt/data type ext4 (rw,noatime)\n\n6x Intel SSD DC S3700 100GB SATA (SSDSC2BA10), LVM RAID0\n/dev/md0 on /mnt/raid type xfs\n(rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=16,swidth=96,noquota)\n\n\nxeon\n----\n\nSamsung SSD 860 EVO 2TB SATA (RVT04B6Q)\n/dev/sde1 on /mnt/samsung type ext4 (rw,relatime)\n\nIntel Optane 900P 280GB NVMe (SSDPED1D280GA)\n/dev/nvme0n1p1 on /mnt/data type ext4 (rw,relatime)\n\n3x Maxtor DiamondMax 21 500B 7.2k SATA (STM350063), LVM RAID0\n/dev/md0 on /mnt/raid type ext4 (rw,relatime,stripe=48)\n\n# mdadm --detail /dev/md0\n/dev/md0:\n Version : 1.2\n Creation Time : Fri Aug 31 21:11:48 2018\n Raid Level : raid0\n Array Size : 1464763392 (1396.91 GiB 1499.92 GB)\n Raid Devices : 3\n Total Devices : 3\n Persistence : Superblock is persistent\n\n Update Time : Fri Aug 31 21:11:48 2018\n State : clean\n Active Devices : 3\n Working Devices : 3\n Failed Devices : 0\n Spare Devices : 0\n\n Chunk Size : 64K\n\nConsistency Policy : none\n\n Name : bench2:0 (local to host bench2)\n UUID : 72e48e7b:a75554ea:05952b34:810ed6bc\n Events : 0\n\n Number Major Minor RaidDevice State\n 0 8 17 0 active sync /dev/sdb1\n 1 8 33 1 active sync /dev/sdc1\n 2 8 49 2 active sync /dev/sdd1\n\n\nHopefully this is more complete 
...\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 Jun 2022 17:52:46 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pgcon unconference / impact of block size on performance"
},
{
"msg_contents": "\nHello Tomas,\n\n> At on of the pgcon unconference sessions a couple days ago, I presented\n> a bunch of benchmark results comparing performance with different\n> data/WAL block size. Most of the OLTP results showed significant gains\n> (up to 50%) with smaller (4k) data pages.\n\nYou wrote something about SSD a long time ago, but the link is now dead:\n\nhttp://www.fuzzy.cz/en/articles/ssd-benchmark-results-read-write-pgbench/\n\nSee also:\n\nhttp://www.cybertec.at/postgresql-block-sizes-getting-started/\nhttp://blog.coelho.net/database/2014/08/08/postgresql-page-size-for-SSD.html\n\n[...]\n\n> The other important factor is the native SSD page, which is similar to\n> sectors on HDD. SSDs however don't allow in-place updates, and have to\n> reset/rewrite of the whole native page. It's actually more complicated,\n> because the reset happens at a much larger scale (~8MB block), so it\n> does matter how quickly we \"dirty\" the data. The consequence is that\n> using data pages smaller than the native page (depends on the device,\n> but seems 4K is the common value) either does not help or actually hurts\n> the write performance.\n>\n> All the SSD results show this behavior - the Optane and Samsung nicely\n> show that 4K is much better (in random write IOPS) than 8K, but 1-2K\n> pages make it worse.\n\nYep. ISTM that you should also consider the underlying FS block size. Ext4 \nuses 4 KiB by default, so if you write 2 KiB it will write 4 KiB anyway.\n\nThere is not much doubt that with SSD we should reduce the default page \nsize. There are some negative impacts (eg more space is lost because of \nheaders and the number of tuples that can be fitted), but I guess there \nshould be an overall benefit. It would help a lot if it would be possible \nto initdb with a different block size, without recompiling.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 6 Jun 2022 22:39:17 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgcon unconference / impact of block size on performance"
},
{
"msg_contents": "On Sun, 5 Jun 2022 at 11:23, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> At on of the pgcon unconference sessions a couple days ago, I presented\n> a bunch of benchmark results comparing performance with different\n> data/WAL block size. Most of the OLTP results showed significant gains\n> (up to 50%) with smaller (4k) data pages.\n\nA few years ago when you and I were doing analysis into the TPC-H\nbenchmark, we found that larger page sizes helped various queries,\nespecially Q1. It would be good to see how the block size changes the\nperformance of a query such as: SELECT sum(value) FROM\ntable_with_tuples_of_several_hundred_bytes;. I don't recall the\nreason why 32k pages helped there, but it seems reasonable that doing\nmore work for each lookup in shared buffers might be 1 reason.\n\nMaybe some deeper analysis into various workloads might convince us\nthat it might be worth having an initdb option to specify the\nblocksize. There'd be various hurdles to get over in the code to make\nthat work. I doubt we could ever make the default smaller than it is\ntoday, as nobody would be able to insert rows larger than 4\nkilobytes into a table anymore [1]. Plus pg_upgrade issues.\n\nDavid\n\n[1] https://www.postgresql.org/docs/current/limits.html",
"msg_date": "Tue, 7 Jun 2022 10:39:53 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgcon unconference / impact of block size on performance"
},
{
"msg_contents": "Hi Tomas,\r\n\r\n> Well, there's plenty of charts in the github repositories, including the charts I\r\n> think you're asking for:\r\n\r\nThanks.\r\n\r\n> I also wonder how is this related to filesystem page size - in all the benchmarks I\r\n> did I used the default (4k), but maybe it'd behave if the filesystem page matched\r\n> the data page.\r\n\r\nThat may be it - using fio on raw NVMe device (without fs/VFS at all) shows:\r\n\r\n[root@x libaio-raw]# grep -r -e 'write:' -e 'read :' *\r\nnvme/randread/128/1k/1.txt: read : io=7721.9MB, bw=131783KB/s, iops=131783, runt= 60001msec [b]\r\nnvme/randread/128/2k/1.txt: read : io=15468MB, bw=263991KB/s, iops=131995, runt= 60001msec [b] \r\nnvme/randread/128/4k/1.txt: read : io=30142MB, bw=514408KB/s, iops=128602, runt= 60001msec [b]\r\nnvme/randread/128/8k/1.txt: read : io=56698MB, bw=967635KB/s, iops=120954, runt= 60001msec\r\nnvme/randwrite/128/1k/1.txt: write: io=4140.9MB, bw=70242KB/s, iops=70241, runt= 60366msec [a]\r\nnvme/randwrite/128/2k/1.txt: write: io=8271.5MB, bw=141161KB/s, iops=70580, runt= 60002msec [a]\r\nnvme/randwrite/128/4k/1.txt: write: io=16543MB, bw=281164KB/s, iops=70291, runt= 60248msec\r\nnvme/randwrite/128/8k/1.txt: write: io=22924MB, bw=390930KB/s, iops=48866, runt= 60047msec\r\n\r\nSo, I've found out two interesting things while playing with raw vs ext4:\r\na) I've got 70k IOPS on randwrite always, even at 1k,2k,4k, without ext4 (so as expected, this was the ext4 4kB default fs page size impact you were thinking about, when fio 1k was hitting the ext4 4kB block)\r\nb) Another thing that you could also include in testing is that I've spotted a couple of times that single-threaded fio might be a limiting factor (numjobs=1 by default), so I've tried with numjobs=2,group_reporting=1 and got the output below on ext4 defaults even while dropping caches (echo 3) each loop iteration -- something that I cannot explain (ext4 direct I/O caching effect? how's that even possible? 
reproduced several times even with numjobs=1) - the point being 206643 1kB IOPS @ ext4 direct-io > 131783 1kB IOPS @ raw, smells like some caching effect because for randwrite it does not happen. I've triple-checked with iostat -x... it cannot be any internal device cache as with direct I/O that doesn't happen:\r\n\r\n[root@x libaio-ext4]# grep -r -e 'write:' -e 'read :' *\r\nnvme/randread/128/1k/1.txt: read : io=12108MB, bw=206644KB/s, iops=206643, runt= 60001msec [b]\r\nnvme/randread/128/2k/1.txt: read : io=18821MB, bw=321210KB/s, iops=160604, runt= 60001msec [b]\r\nnvme/randread/128/4k/1.txt: read : io=36985MB, bw=631208KB/s, iops=157802, runt= 60001msec [b]\r\nnvme/randread/128/8k/1.txt: read : io=57364MB, bw=976923KB/s, iops=122115, runt= 60128msec\r\nnvme/randwrite/128/1k/1.txt: write: io=1036.2MB, bw=17683KB/s, iops=17683, runt= 60001msec [a, as before]\r\nnvme/randwrite/128/2k/1.txt: write: io=2023.2MB, bw=34528KB/s, iops=17263, runt= 60001msec [a, as before]\r\nnvme/randwrite/128/4k/1.txt: write: io=16667MB, bw=282977KB/s, iops=70744, runt= 60311msec [reproduced benefit, as per earlier email]\r\nnvme/randwrite/128/8k/1.txt: write: io=22997MB, bw=391839KB/s, iops=48979, runt= 60099msec\r\n\r\n> > One way or another it would be very nice to be able to select the\r\n> > tradeoff using initdb(1) without the need to recompile, which then\r\n> > begs for some initdb --calibrate /mnt/nvme (effective_io_concurrency,\r\n> > DB page size, ...).> Do you envision any plans for this we still in a\r\n> > need to gather more info exactly why this happens? (perf reports?)\r\n> >\r\n> \r\n> Not sure I follow. Plans for what? Something that calibrates cost parameters?\r\n> That might be useful, but that's a rather separate issue from what's discussed\r\n> here - page size, which needs to happen before initdb (at least with how things\r\n> work currently).\r\n[..]\r\n\r\nSorry, I got ahead of myself and assumed you guys were talking very long term. \r\n\r\n-J.\r\n\r\n",
"msg_date": "Tue, 7 Jun 2022 09:46:51 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: pgcon unconference / impact of block size on performance"
},
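The ad-hoc "grep ... | awk ... | sort -n" pipelines used to summarize the fio runs above can be replicated with a short parser. This is purely an illustrative stand-in: the sample lines are copied from the results in this thread, and the regex only handles fio's classic human-readable output format.

```python
import re

# match "<bs>k/1.txt:" path fragments and the "iops=<n>" field
LINE_RE = re.compile(r"(\d+)k/1\.txt:.*iops=(\d+)")

def parse_fio(lines):
    """Extract (block size in KB, IOPS) pairs, sorted numerically by
    block size - the same thing 'sort -n' did in the shell pipeline."""
    pairs = []
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            pairs.append((int(m.group(1)), int(m.group(2))))
    return sorted(pairs)

sample = [
    "nvme/randwrite/128/8k/1.txt: write: io=22924MB, bw=390930KB/s, iops=48866, runt= 60047msec",
    "nvme/randwrite/128/1k/1.txt: write: io=4140.9MB, bw=70242KB/s, iops=70241, runt= 60366msec",
    "nvme/randwrite/128/4k/1.txt: write: io=16543MB, bw=281164KB/s, iops=70291, runt= 60248msec",
]

print(parse_fio(sample))  # -> [(1, 70241), (4, 70291), (8, 48866)]
```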
{
"msg_contents": "[..]\n>I doubt we could ever\n> make the default smaller than it is today as it would nobody would be able to\n> insert rows larger than 4 kilobytes into a table anymore. \n\nAdd error \"values larger than 1/3 of a buffer page cannot be indexed\" to that list...\n\n-J.\n\n\n",
"msg_date": "Tue, 7 Jun 2022 09:46:53 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: pgcon unconference / impact of block size on performance"
},
{
"msg_contents": "On 6/7/22 11:46, Jakub Wartak wrote:\n> Hi Tomas,\n> \n>> Well, there's plenty of charts in the github repositories, including the charts I\n>> think you're asking for:\n> \n> Thanks.\n> \n>> I also wonder how is this related to filesystem page size - in all the benchmarks I\n>> did I used the default (4k), but maybe it'd behave if the filesystem page matched\n>> the data page.\n> \n> That may be it - using fio on raw NVMe device (without fs/VFS at all) shows:\n> \n> [root@x libaio-raw]# grep -r -e 'write:' -e 'read :' *\n> nvme/randread/128/1k/1.txt: read : io=7721.9MB, bw=131783KB/s, iops=131783, runt= 60001msec [b]\n> nvme/randread/128/2k/1.txt: read : io=15468MB, bw=263991KB/s, iops=131995, runt= 60001msec [b] \n> nvme/randread/128/4k/1.txt: read : io=30142MB, bw=514408KB/s, iops=128602, runt= 60001msec [b]\n> nvme/randread/128/8k/1.txt: read : io=56698MB, bw=967635KB/s, iops=120954, runt= 60001msec\n> nvme/randwrite/128/1k/1.txt: write: io=4140.9MB, bw=70242KB/s, iops=70241, runt= 60366msec [a]\n> nvme/randwrite/128/2k/1.txt: write: io=8271.5MB, bw=141161KB/s, iops=70580, runt= 60002msec [a]\n> nvme/randwrite/128/4k/1.txt: write: io=16543MB, bw=281164KB/s, iops=70291, runt= 60248msec\n> nvme/randwrite/128/8k/1.txt: write: io=22924MB, bw=390930KB/s, iops=48866, runt= 60047msec\n> \n> So, I've found out two interesting things while playing with raw vs ext4:\n> a) I've got 70k IOPS always randwrite even on 1k,2k,4k without ext4 (so as expected, this was ext4 4kb default fs page size impact as you was thinking about when fio 1k was hitting ext4 4kB block)\n\nRight. Interesting, so for randread we get a consistent +30% speedup on\nraw devices with all page sizes, while on randwrite it's about 1.0x for\n4K. The really puzzling thing is why is the filesystem so much slower\nfor smaller pages. 
I mean, why would writing 1K be 1/3 of writing 4K?\nWhy would a filesystem have such effect?\n\n> b) Another thing that you could also include in testing is that I've spotted a couple of times single-threaded fio might could be limiting factor (numjobs=1 by default), so I've tried with numjobs=2,group_reporting=1 and got this below ouput on ext4 defaults even while dropping caches (echo 3) each loop iteration -- something that I cannot explain (ext4 direct I/O caching effect? how's that even possible? reproduced several times even with numjobs=1) - the point being 206643 1kb IOPS @ ext4 direct-io > 131783 1kB IOPS @ raw, smells like some caching effect because for randwrite it does not happen. I've triple-checked with iostat -x... it cannot be any internal device cache as with direct I/O that doesn't happen:\n> \n> [root@x libaio-ext4]# grep -r -e 'write:' -e 'read :' *\n> nvme/randread/128/1k/1.txt: read : io=12108MB, bw=206644KB/s, iops=206643, runt= 60001msec [b]\n> nvme/randread/128/2k/1.txt: read : io=18821MB, bw=321210KB/s, iops=160604, runt= 60001msec [b]\n> nvme/randread/128/4k/1.txt: read : io=36985MB, bw=631208KB/s, iops=157802, runt= 60001msec [b]\n> nvme/randread/128/8k/1.txt: read : io=57364MB, bw=976923KB/s, iops=122115, runt= 60128msec\n> nvme/randwrite/128/1k/1.txt: write: io=1036.2MB, bw=17683KB/s, iops=17683, runt= 60001msec [a, as before]\n> nvme/randwrite/128/2k/1.txt: write: io=2023.2MB, bw=34528KB/s, iops=17263, runt= 60001msec [a, as before]\n> nvme/randwrite/128/4k/1.txt: write: io=16667MB, bw=282977KB/s, iops=70744, runt= 60311msec [reproduced benefit, as per earlier email]\n> nvme/randwrite/128/8k/1.txt: write: io=22997MB, bw=391839KB/s, iops=48979, runt= 60099msec\n> \n\nNo idea what might be causing this. BTW so you're not using direct-io to\naccess the raw device? 
Or am I just misreading this?\n\n>>> One way or another it would be very nice to be able to select the\n>>> tradeoff using initdb(1) without the need to recompile, which then\n>>> begs for some initdb --calibrate /mnt/nvme (effective_io_concurrency,\n>>> DB page size, ...).> Do you envision any plans for this we still in a\n>>> need to gather more info exactly why this happens? (perf reports?)\n>>>\n>>\n>> Not sure I follow. Plans for what? Something that calibrates cost parameters?\n>> That might be useful, but that's a rather separate issue from what's discussed\n>> here - page size, which needs to happen before initdb (at least with how things\n>> work currently).\n> [..]\n> \n> Sorry, I was too far teched and assumed you guys were talking very long term. \n> \n\nNp, I think that'd be a useful tool, but it seems more like a\ncompletely separate discussion.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 7 Jun 2022 15:29:46 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pgcon unconference / impact of block size on performance"
},
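One speculative explanation for the sub-4K direct-write penalty being puzzled over above: with a 4 KiB filesystem block, a smaller write cannot simply be passed through - the filesystem would have to read the covering 4 KiB block, merge the payload, and write the whole block back. The sketch below computes the I/O amplification implied by that hypothesis; it is a model, not a measurement, and whether ext4 actually behaves this way for direct I/O is exactly the open question in this thread.

```python
import math

FS_BLOCK = 4096  # ext4 default block size in bytes

def device_io_bytes(payload):
    """Bytes the device would move for one payload write under a naive
    read-modify-write model for sub-block writes."""
    blocks = math.ceil(payload / FS_BLOCK)
    if payload % FS_BLOCK == 0:
        return blocks * FS_BLOCK      # block-aligned: write only
    return 2 * blocks * FS_BLOCK      # sub-block: read block + write back

for payload in (1024, 2048, 4096, 8192):
    amp = device_io_bytes(payload) / payload
    print(f"{payload:5d} B payload -> {amp:.0f}x device I/O")
```

The model predicts 8x amplification for 1K writes and 4x for 2K, which at least points in the same direction as the observed collapse from ~70k randwrite IOPS at 4K to ~17k at 1-2K on ext4.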
{
"msg_contents": "Hi,\n\n> The really\n> puzzling thing is why is the filesystem so much slower for smaller pages. I mean,\n> why would writing 1K be 1/3 of writing 4K?\n> Why would a filesystem have such effect?\n\nHa! I don't care at this point as 1 or 2kB seems too small to handle many real world scenarios ;)\n\n> > b) Another thing that you could also include in testing is that I've spotted a\n> couple of times single-threaded fio might could be limiting factor (numjobs=1 by\n> default), so I've tried with numjobs=2,group_reporting=1 and got this below\n> ouput on ext4 defaults even while dropping caches (echo 3) each loop iteration -\n> - something that I cannot explain (ext4 direct I/O caching effect? how's that\n> even possible? reproduced several times even with numjobs=1) - the point being\n> 206643 1kb IOPS @ ext4 direct-io > 131783 1kB IOPS @ raw, smells like some\n> caching effect because for randwrite it does not happen. I've triple-checked with\n> iostat -x... it cannot be any internal device cache as with direct I/O that doesn't\n> happen:\n> >\n> > [root@x libaio-ext4]# grep -r -e 'write:' -e 'read :' *\n> > nvme/randread/128/1k/1.txt: read : io=12108MB, bw=206644KB/s,\n> > iops=206643, runt= 60001msec [b]\n> > nvme/randread/128/2k/1.txt: read : io=18821MB, bw=321210KB/s,\n> > iops=160604, runt= 60001msec [b]\n> > nvme/randread/128/4k/1.txt: read : io=36985MB, bw=631208KB/s,\n> > iops=157802, runt= 60001msec [b]\n> > nvme/randread/128/8k/1.txt: read : io=57364MB, bw=976923KB/s,\n> > iops=122115, runt= 60128msec\n> > nvme/randwrite/128/1k/1.txt: write: io=1036.2MB, bw=17683KB/s,\n> > iops=17683, runt= 60001msec [a, as before]\n> > nvme/randwrite/128/2k/1.txt: write: io=2023.2MB, bw=34528KB/s,\n> > iops=17263, runt= 60001msec [a, as before]\n> > nvme/randwrite/128/4k/1.txt: write: io=16667MB, bw=282977KB/s,\n> > iops=70744, runt= 60311msec [reproduced benefit, as per earlier email]\n> > nvme/randwrite/128/8k/1.txt: write: io=22997MB, bw=391839KB/s,\n> 
> iops=48979, runt= 60099msec\n> >\n> \n> No idea what might be causing this. BTW so you're not using direct-io to access\n> the raw device? Or am I just misreading this?\n\nBoth scenarios (raw and fs) have had direct=1 set. I just cannot understand how having direct I/O enabled (which disables caching) achieves better read IOPS on ext4 than on the raw device... isn't that a contradiction?\n\n-J.\n\n\n\n",
"msg_date": "Tue, 7 Jun 2022 13:48:09 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: pgcon unconference / impact of block size on performance"
},
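A quick cross-check of the fio figures quoted above: for fixed-size I/O, reported bandwidth should equal IOPS × block size (fio's KB is the 1024-byte kind). A minimal sketch, using the ext4 direct-I/O randread numbers from the message:

```python
# Cross-check fio's summary lines: for fixed-size I/O, bandwidth ~= IOPS * bs.
# Rows: (block size in KiB, reported IOPS, reported bandwidth in KB/s),
# copied from the ext4 direct-I/O randread results quoted above.
results = [
    (1, 206643, 206644),
    (2, 160604, 321210),
    (4, 157802, 631208),
    (8, 122115, 976923),
]

for bs_kib, iops, bw_kbs in results:
    derived = bs_kib * iops
    # Allow ~1% slack for rounding and runtimes slightly over 60s.
    assert abs(derived - bw_kbs) <= 0.01 * bw_kbs
    print(f"{bs_kib}k: {iops} IOPS -> {derived} KB/s (fio reported {bw_kbs})")
```

So the surprising part is not a reporting artifact: the per-block-size IOPS numbers and bandwidths are internally consistent.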
{
"msg_contents": "\n\nOn 6/7/22 15:48, Jakub Wartak wrote:\n> Hi,\n> \n>> The really\n>> puzzling thing is why is the filesystem so much slower for smaller pages. I mean,\n>> why would writing 1K be 1/3 of writing 4K?\n>> Why would a filesystem have such effect?\n> \n> Ha! I don't care at this point as 1 or 2kB seems too small to handle many real world scenarios ;)\n> \n\nI think that's not quite true - a lot of OLTP works with fairly narrow\nrows, and if they use more data, it's probably in TOAST, so again split\ninto smaller rows. It's true smaller pages would cut some of the limits\n(columns, index tuple, ...) of course, and that might be an issue.\n\nIndependently of that, it seems like an interesting behavior and it\nmight tell us something about how to optimize for larger pages.\n\n>>> b) Another thing that you could also include in testing is that I've spotted a\n>> couple of times single-threaded fio might could be limiting factor (numjobs=1 by\n>> default), so I've tried with numjobs=2,group_reporting=1 and got this below\n>> ouput on ext4 defaults even while dropping caches (echo 3) each loop iteration -\n>> - something that I cannot explain (ext4 direct I/O caching effect? how's that\n>> even possible? reproduced several times even with numjobs=1) - the point being\n>> 206643 1kb IOPS @ ext4 direct-io > 131783 1kB IOPS @ raw, smells like some\n>> caching effect because for randwrite it does not happen. I've triple-checked with\n>> iostat -x... 
it cannot be any internal device cache as with direct I/O that doesn't\n>> happen:\n>>>\n>>> [root@x libaio-ext4]# grep -r -e 'write:' -e 'read :' *\n>>> nvme/randread/128/1k/1.txt: read : io=12108MB, bw=206644KB/s,\n>>> iops=206643, runt= 60001msec [b]\n>>> nvme/randread/128/2k/1.txt: read : io=18821MB, bw=321210KB/s,\n>>> iops=160604, runt= 60001msec [b]\n>>> nvme/randread/128/4k/1.txt: read : io=36985MB, bw=631208KB/s,\n>>> iops=157802, runt= 60001msec [b]\n>>> nvme/randread/128/8k/1.txt: read : io=57364MB, bw=976923KB/s,\n>>> iops=122115, runt= 60128msec\n>>> nvme/randwrite/128/1k/1.txt: write: io=1036.2MB, bw=17683KB/s,\n>>> iops=17683, runt= 60001msec [a, as before]\n>>> nvme/randwrite/128/2k/1.txt: write: io=2023.2MB, bw=34528KB/s,\n>>> iops=17263, runt= 60001msec [a, as before]\n>>> nvme/randwrite/128/4k/1.txt: write: io=16667MB, bw=282977KB/s,\n>>> iops=70744, runt= 60311msec [reproduced benefit, as per earlier email]\n>>> nvme/randwrite/128/8k/1.txt: write: io=22997MB, bw=391839KB/s,\n>>> iops=48979, runt= 60099msec\n>>>\n>>\n>> No idea what might be causing this. BTW so you're not using direct-io to access\n>> the raw device? Or am I just misreading this?\n> \n> Both scenarios (raw and fs) have had direct=1 set. I just cannot understand how having direct I/O enabled (which disables caching) achieves better read IOPS on ext4 than on raw device... isn't it contradiction?\n> \n\nThanks for the clarification. Not sure what might be causing this. Did\nyou use the same parameters (e.g. iodepth) in both cases?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 7 Jun 2022 16:00:18 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pgcon unconference / impact of block size on performance"
},
{
"msg_contents": "On Sat, Jun 4, 2022 at 7:23 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> This opened a long discussion about possible explanations - I claimed\n> one of the main factors is the adoption of flash storage, due to pretty\n> fundamental differences between HDD and SSD systems. But the discussion\n> concluded with an agreement to continue investigating this, so here's an\n> attempt to support the claim with some measurements/data.\n\nInteresting. I wonder if the fact that x86 machines have a 4kB page\nsize matters here. It seems hard to be sure because it's not something\nyou can really change. But there are a few of your graphs where 4kB\nspikes up above any higher or lower value, and maybe that's why?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 7 Jun 2022 12:26:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgcon unconference / impact of block size on performance"
},
{
"msg_contents": "On 6/7/22 18:26, Robert Haas wrote:\n> On Sat, Jun 4, 2022 at 7:23 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> This opened a long discussion about possible explanations - I claimed\n>> one of the main factors is the adoption of flash storage, due to pretty\n>> fundamental differences between HDD and SSD systems. But the discussion\n>> concluded with an agreement to continue investigating this, so here's an\n>> attempt to support the claim with some measurements/data.\n> \n> Interesting. I wonder if the fact that x86 machines have a 4kB page\n> size matters here. It seems hard to be sure because it's not something\n> you can really change. But there are a few of your graphs where 4kB\n> spikes up above any higher or lower value, and maybe that's why?\n>\n\nPossibly, but why would that be the case? Maybe there are places that do\nstuff with memory and have different optimizations based on length? I'd\nbet the 4k page is way more optimized than the other cases.\n\nBut honestly, I think the SSD page size matters much more, and the main\nbump between 4k and 8k comes from having to deal with just a single\npage. Imagine you write 8k postgres page - the filesystem splits that\ninto two 4k pages, and then eventually writes them to storage. It may\nhappen the writeback flushes them separately, possibly even to different\nplaces on the device. Which might be more expensive to read later, etc.\n\nI'm just speculating, of course. Maybe the storage is smarter and can\nfigure some of this internally, or maybe the locality will remain high.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 7 Jun 2022 19:47:11 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pgcon unconference / impact of block size on performance"
},
{
"msg_contents": "On Tue, Jun 7, 2022 at 1:47 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Possibly, but why would that be the case? Maybe there are places that do\n> stuff with memory and have different optimizations based on length? I'd\n> bet the 4k page is way more optimized than the other cases.\n\nI don't really know. It was just a thought. It feels like the fact\nthat the page sizes are different could be hurting us somehow, but I\ndon't really know what the mechanism would be.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 7 Jun 2022 15:15:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgcon unconference / impact of block size on performance"
},
{
"msg_contents": "Hi, got some answers!\n\nTL;DR for fio it would make sense to use many stressfiles (instead of 1) and same for numjobs ~ VCPU to avoid various pitfails.\n\n> >> The really\n> >> puzzling thing is why is the filesystem so much slower for smaller\n> >> pages. I mean, why would writing 1K be 1/3 of writing 4K?\n> >> Why would a filesystem have such effect?\n> >\n> > Ha! I don't care at this point as 1 or 2kB seems too small to handle\n> > many real world scenarios ;)\n[..]\n> Independently of that, it seems like an interesting behavior and it might tell us\n> something about how to optimize for larger pages.\n\nOK, curiosity won:\n\nWith randwrite on ext4 directio using 4kb the avgqu-sz reaches ~90-100 (close to fio's 128 queue depth?) and I'm getting ~70k IOPS [with maxdepth=128]\nWith randwrite on ext4 directio using 1kb the avgqu-sz is just 0.7 and I'm getting just ~17-22k IOPS [with maxdepth=128] -> conclusion: something is being locked thus preventing queue to build up\nWith randwrite on ext4 directio using 4kb the avgqu-sz reaches ~2.3 (so something is queued) and I'm also getting ~70k IOPS with minimal possible maxdepth=4 -> conclusion: I just need to split the lock contention by 4.\n\nThe 1kB (slow) profile top function is aio_write() -> .... -> iov_iter_get_pages() -> internal_get_user_pages_fast() and there's sadly plenty of \"lock\" keywords inside {related to memory manager, padding to full page size, inode locking} also one can find some articles / commits related to it [1] which didn't made a good feeling to be honest as the fio is using just 1 file (even while I'm on kernel 5.10.x). So I've switched to 4x files and numjobs=4 and got easily 60k IOPS, contention solved whatever it was :) So I would assume PostgreSQL (with it's splitting data files by default on 1GB boundaries and multiprocess architecture) should be relatively safe from such ext4 inode(?)/mm(?) contentions even with smallest 1kb block sizes on Direct I/O some day. 
\n\n[1] - https://www.phoronix.com/scan.php?page=news_item&px=EXT4-DIO-Faster-DBs\n\n> > Both scenarios (raw and fs) have had direct=1 set. I just cannot understand\n> how having direct I/O enabled (which disables caching) achieves better read\n> IOPS on ext4 than on raw device... isn't it contradiction?\n> >\n> \n> Thanks for the clarification. Not sure what might be causing this. Did you use the\n> same parameters (e.g. iodepth) in both cases?\n\nExplanation: it's the CPU scheduler migrations mixing up the performance results during the fio runs (as you have in your framework). Various VCPUs seem to have varying max IOPS characteristics (sic!) and the CPU scheduler seems to be unaware of it. This happens at least on 1kB and 4kB blocksize; also notice that some VCPUs [XXXX marker] don't reach 100% CPU yet achieve almost twice the result, while cores 0 and 3 do reach 100% and lack the CPU power to perform more. The only thing that I don't get is that it doesn't make sense from the extended lscpu output (but maybe it's AWS Xen mixing real CPU mappings, who knows). 
\n\n[root@x ~]# for((x=0; x<=3; x++)) ; do echo \"$x:\"; taskset -c $x fio fio.ext4 | grep -e 'read :' -e 'cpu '; done\n0:\n read : io=2416.8MB, bw=123730KB/s, iops=123730, runt= 20001msec\n cpu : usr=42.98%, sys=56.52%, ctx=2317, majf=0, minf=41 [XXXX: 100% cpu bottleneck and just 123k IOPS]\n1:\n read : io=4077.9MB, bw=208774KB/s, iops=208773, runt= 20001msec\n cpu : usr=29.47%, sys=51.43%, ctx=2993, majf=0, minf=42 [XXXX, some idle power and 208k IOPS just by switching to core1...]\n2:\n read : io=4036.7MB, bw=206636KB/s, iops=206636, runt= 20001msec\n cpu : usr=31.00%, sys=52.41%, ctx=2815, majf=0, minf=42 [XXXX]\n3:\n read : io=2398.4MB, bw=122791KB/s, iops=122791, runt= 20001msec\n cpu : usr=44.20%, sys=55.20%, ctx=2522, majf=0, minf=41\n[root@x ~]# for((x=0; x<=3; x++)) ; do echo \"$x:\"; taskset -c $x fio fio.raw | grep -e 'read :' -e 'cpu '; done\n0:\n read : io=2512.3MB, bw=128621KB/s, iops=128620, runt= 20001msec\n cpu : usr=47.62%, sys=51.58%, ctx=2365, majf=0, minf=42\n1:\n read : io=4070.2MB, bw=206748KB/s, iops=206748, runt= 20159msec\n cpu : usr=29.52%, sys=42.86%, ctx=2808, majf=0, minf=42 [XXXX]\n2:\n read : io=4101.3MB, bw=209975KB/s, iops=209975, runt= 20001msec\n cpu : usr=28.05%, sys=45.09%, ctx=3419, majf=0, minf=42 [XXXX]\n3:\n read : io=2519.4MB, bw=128985KB/s, iops=128985, runt= 20001msec\n cpu : usr=46.59%, sys=52.70%, ctx=2371, majf=0, minf=41\n\n[root@x ~]# lscpu --extended\nCPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ\n0 0 0 0 0:0:0:0 yes 3000.0000 1200.0000\n1 0 0 1 1:1:1:0 yes 3000.0000 1200.0000\n2 0 0 0 0:0:0:0 yes 3000.0000 1200.0000\n3 0 0 1 1:1:1:0 yes 3000.0000 1200.0000\n[root@x ~]# lscpu | grep -e ^Model -e ^NUMA -e ^Hyper\nNUMA node(s): 1\nModel: 79\nModel name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz\nHypervisor vendor: Xen\nNUMA node0 CPU(s): 0-3\n[root@x ~]# diff -u fio.raw fio.ext4\n--- fio.raw 2022-06-08 12:32:26.603482453 +0000\n+++ fio.ext4 2022-06-08 12:32:36.071621708 +0000\n@@ -1,5 +1,5 @@\n 
[global]\n-filename=/dev/nvme0n1\n+filename=/mnt/nvme/fio/data.file\n size=256GB\n direct=1\n ioengine=libaio\n[root@x ~]# cat fio.raw\n[global]\nfilename=/dev/nvme0n1\nsize=256GB\ndirect=1\nioengine=libaio\nruntime=20\nnumjobs=1\ngroup_reporting=1\n\n[job]\nrw=randread\niodepth=128\nbs=1k\nsize=64GB\n[root@x ~]#\n\n-J.\n\n\n",
"msg_date": "Wed, 8 Jun 2022 14:15:17 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: pgcon unconference / impact of block size on performance"
},
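The per-core sweep above (`taskset -c $x fio fio.ext4`) can be scripted rather than typed by hand. A small sketch that just builds the pinned command lines; the helper name is mine, and the job-file names mirror the fio.raw/fio.ext4 files shown in the message:

```python
# Pin one fio run per VCPU so per-core IOPS differences show up in isolation,
# mirroring the manual "taskset -c $x fio fio.ext4" loop above.
def pinned_fio_commands(job_file: str, cpus: range) -> list[str]:
    return [f"taskset -c {cpu} fio {job_file}" for cpu in cpus]

for cmd in pinned_fio_commands("fio.ext4", range(4)):
    print(cmd)  # e.g. "taskset -c 0 fio fio.ext4"
```

Running each command and grepping the `iops=` line, as done above, then makes the fast/slow VCPU split obvious.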
{
"msg_contents": "On 6/8/22 16:15, Jakub Wartak wrote:\n> Hi, got some answers!\n> \n> TL;DR for fio it would make sense to use many stressfiles (instead of 1) and same for numjobs ~ VCPU to avoid various pitfails.\n> >>>> The really\n>>>> puzzling thing is why is the filesystem so much slower for smaller\n>>>> pages. I mean, why would writing 1K be 1/3 of writing 4K?\n>>>> Why would a filesystem have such effect?\n>>>\n>>> Ha! I don't care at this point as 1 or 2kB seems too small to handle\n>>> many real world scenarios ;)\n> [..]\n>> Independently of that, it seems like an interesting behavior and it might tell us\n>> something about how to optimize for larger pages.\n> \n> OK, curiosity won:\n> \n> With randwrite on ext4 directio using 4kb the avgqu-sz reaches ~90-100 (close to fio's 128 queue depth?) and I'm getting ~70k IOPS [with maxdepth=128]\n> With randwrite on ext4 directio using 1kb the avgqu-sz is just 0.7 and I'm getting just ~17-22k IOPS [with maxdepth=128] -> conclusion: something is being locked thus preventing queue to build up\n> With randwrite on ext4 directio using 4kb the avgqu-sz reaches ~2.3 (so something is queued) and I'm also getting ~70k IOPS with minimal possible maxdepth=4 -> conclusion: I just need to split the lock contention by 4.\n> \n> The 1kB (slow) profile top function is aio_write() -> .... -> iov_iter_get_pages() -> internal_get_user_pages_fast() and there's sadly plenty of \"lock\" keywords inside {related to memory manager, padding to full page size, inode locking} also one can find some articles / commits related to it [1] which didn't made a good feeling to be honest as the fio is using just 1 file (even while I'm on kernel 5.10.x). So I've switched to 4x files and numjobs=4 and got easily 60k IOPS, contention solved whatever it was :) So I would assume PostgreSQL (with it's splitting data files by default on 1GB boundaries and multiprocess architecture) should be relatively safe from such ext4 inode(?)/mm(?) 
contentions even with smallest 1kb block sizes on Direct I/O some day. \n> \n\nInteresting. So what parameter values would you suggest?\n\nFWIW some of the tests I did were on xfs, so I wonder if that might be\nhitting similar/other bottlenecks.\n\n> [1] - https://www.phoronix.com/scan.php?page=news_item&px=EXT4-DIO-Faster-DBs\n> \n>>> Both scenarios (raw and fs) have had direct=1 set. I just cannot understand\n>> how having direct I/O enabled (which disables caching) achieves better read\n>> IOPS on ext4 than on raw device... isn't it contradiction?\n>>>\n>>\n>> Thanks for the clarification. Not sure what might be causing this. Did you use the\n>> same parameters (e.g. iodepth) in both cases?\n> \n> Explanation: it's the CPU scheduler migrations mixing the performance result during the runs of fio (as you have in your framework). Various VCPUs seem to be having varying max IOPS characteristics (sic!) and CPU scheduler seems to be unaware of it. At least on 1kB and 4kB blocksize this happens also notice that some VCPUs [XXXX marker] don't reach 100% CPU reaching almost twice the result; while cores 0, 3 do reach 100% and lack CPU power to perform more. The only thing that I don't get is that it doesn't make sense from extened lscpu output (but maybe it's AWS XEN mixing real CPU mappings, who knows).\n\nUh, that's strange. I haven't seen anything like that, but I'm running\non physical HW and not AWS, so it's either that or maybe I just didn't\ndo the same test.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 8 Jun 2022 16:51:41 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pgcon unconference / impact of block size on performance"
},
{
"msg_contents": "> > >>>> The really\r\n> >>>> puzzling thing is why is the filesystem so much slower for smaller\r\n> >>>> pages. I mean, why would writing 1K be 1/3 of writing 4K?\r\n> >>>> Why would a filesystem have such effect?\r\n> >>>\r\n> >>> Ha! I don't care at this point as 1 or 2kB seems too small to handle\r\n> >>> many real world scenarios ;)\r\n> > [..]\r\n> >> Independently of that, it seems like an interesting behavior and it\r\n> >> might tell us something about how to optimize for larger pages.\r\n> >\r\n> > OK, curiosity won:\r\n> >\r\n> > With randwrite on ext4 directio using 4kb the avgqu-sz reaches ~90-100\r\n> > (close to fio's 128 queue depth?) and I'm getting ~70k IOPS [with\r\n> > maxdepth=128] With randwrite on ext4 directio using 1kb the avgqu-sz is just\r\n> 0.7 and I'm getting just ~17-22k IOPS [with maxdepth=128] -> conclusion:\r\n> something is being locked thus preventing queue to build up With randwrite on\r\n> ext4 directio using 4kb the avgqu-sz reaches ~2.3 (so something is queued) and\r\n> I'm also getting ~70k IOPS with minimal possible maxdepth=4 -> conclusion: I\r\n> just need to split the lock contention by 4.\r\n> >\r\n> > The 1kB (slow) profile top function is aio_write() -> .... -> iov_iter_get_pages()\r\n> -> internal_get_user_pages_fast() and there's sadly plenty of \"lock\" keywords\r\n> inside {related to memory manager, padding to full page size, inode locking}\r\n> also one can find some articles / commits related to it [1] which didn't made a\r\n> good feeling to be honest as the fio is using just 1 file (even while I'm on kernel\r\n> 5.10.x). So I've switched to 4x files and numjobs=4 and got easily 60k IOPS,\r\n> contention solved whatever it was :) So I would assume PostgreSQL (with it's\r\n> splitting data files by default on 1GB boundaries and multiprocess architecture)\r\n> should be relatively safe from such ext4 inode(?)/mm(?) 
contentions even with\r\n> smallest 1kb block sizes on Direct I/O some day.\r\n> >\r\n> \r\n> Interesting. So what parameter values would you suggest?\r\n\r\nAt least have 4x filename= entries and numjobs=4\r\n\r\n> FWIW some of the tests I did were on xfs, so I wonder if that might be hitting\r\n> similar/other bottlenecks.\r\n\r\nApparently XFS also shows same contention on single file for 1..2kb randwrite, see [ZZZ]. \r\n\r\n[root@x ~]# mount|grep /mnt/nvme\r\n/dev/nvme0n1 on /mnt/nvme type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)\r\n\r\n# using 1 fio job and 1 file\r\n[root@x ~]# grep -r -e 'read :' -e 'write:' libaio\r\nlibaio/nvme/randread/128/1k/1.txt: read : io=5779.1MB, bw=196573KB/s, iops=196573, runt= 30109msec\r\nlibaio/nvme/randread/128/2k/1.txt: read : io=10335MB, bw=352758KB/s, iops=176379, runt= 30001msec\r\nlibaio/nvme/randread/128/4k/1.txt: read : io=22220MB, bw=758408KB/s, iops=189601, runt= 30001msec\r\nlibaio/nvme/randread/128/8k/1.txt: read : io=28914MB, bw=986896KB/s, iops=123361, runt= 30001msec\r\nlibaio/nvme/randwrite/128/1k/1.txt: write: io=694856KB, bw=23161KB/s, iops=23161, runt= 30001msec [ZZZ]\r\nlibaio/nvme/randwrite/128/2k/1.txt: write: io=1370.7MB, bw=46782KB/s, iops=23390, runt= 30001msec [ZZZ]\r\nlibaio/nvme/randwrite/128/4k/1.txt: write: io=8261.3MB, bw=281272KB/s, iops=70318, runt= 30076msec [OK]\r\nlibaio/nvme/randwrite/128/8k/1.txt: write: io=11598MB, bw=394320KB/s, iops=49289, runt= 30118msec\r\n\r\n# but it's all ok using 4 fio jobs and 4 files\r\n[root@x ~]# grep -r -e 'read :' -e 'write:' libaio\r\nlibaio/nvme/randread/128/1k/1.txt: read : io=6174.6MB, bw=210750KB/s, iops=210750, runt= 30001msec\r\nlibaio/nvme/randread/128/2k/1.txt: read : io=12152MB, bw=413275KB/s, iops=206637, runt= 30110msec\r\nlibaio/nvme/randread/128/4k/1.txt: read : io=24382MB, bw=832116KB/s, iops=208028, runt= 30005msec\r\nlibaio/nvme/randread/128/8k/1.txt: read : io=29281MB, bw=985831KB/s, iops=123228, runt= 
30415msec\r\nlibaio/nvme/randwrite/128/1k/1.txt: write: io=1692.2MB, bw=57748KB/s, iops=57748, runt= 30003msec\r\nlibaio/nvme/randwrite/128/2k/1.txt: write: io=3601.9MB, bw=122940KB/s, iops=61469, runt= 30001msec\r\nlibaio/nvme/randwrite/128/4k/1.txt: write: io=8470.8MB, bw=285857KB/s, iops=71464, runt= 30344msec\r\nlibaio/nvme/randwrite/128/8k/1.txt: write: io=11449MB, bw=390603KB/s, iops=48825, runt= 30014msec\r\n \r\n\r\n> >>> Both scenarios (raw and fs) have had direct=1 set. I just cannot\r\n> >>> understand\r\n> >> how having direct I/O enabled (which disables caching) achieves\r\n> >> better read IOPS on ext4 than on raw device... isn't it contradiction?\r\n> >>>\r\n> >>\r\n> >> Thanks for the clarification. Not sure what might be causing this.\r\n> >> Did you use the same parameters (e.g. iodepth) in both cases?\r\n> >\r\n> > Explanation: it's the CPU scheduler migrations mixing the performance result\r\n> during the runs of fio (as you have in your framework). Various VCPUs seem to\r\n> be having varying max IOPS characteristics (sic!) and CPU scheduler seems to be\r\n> unaware of it. At least on 1kB and 4kB blocksize this happens also notice that\r\n> some VCPUs [XXXX marker] don't reach 100% CPU reaching almost twice the\r\n> result; while cores 0, 3 do reach 100% and lack CPU power to perform more.\r\n> The only thing that I don't get is that it doesn't make sense from extened lscpu\r\n> output (but maybe it's AWS XEN mixing real CPU mappings, who knows).\r\n> \r\n> Uh, that's strange. I haven't seen anything like that, but I'm running on physical\r\n> HW and not AWS, so it's either that or maybe I just didn't do the same test.\r\n\r\nI couldn't believe it until I checked via taskset 😊 BTW: I don't have real HW with NVMe, but it might be worth checking whether placing fio (taskset -c ...) on a hyperthreading VCPU is what causes this (there's /sys/devices/system/cpu/cpu0/topology/thread_siblings and maybe lscpu(1) output to identify them). On AWS I have a feeling that lscpu might simply lie and I cannot identify which VCPU is HT and which isn't.\r\n\r\n-J.\r\n",
"msg_date": "Thu, 9 Jun 2022 11:23:36 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: pgcon unconference / impact of block size on performance"
},
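The "at least 4x filename= entries and numjobs=4" recipe above can be captured in a small job-file generator. A sketch only: the paths and section names are illustrative, and it spreads the load by emitting one fio job section per data file (one inode per job), which is the effect the recommendation is after:

```python
# Emit a fio job file with one job section per data file, spreading direct-I/O
# writes across several inodes as suggested above. Paths are illustrative.
def make_job_file(n_jobs: int, bs: str = "1k", iodepth: int = 128) -> str:
    lines = [
        "[global]",
        "direct=1",
        "ioengine=libaio",
        "runtime=30",
        "group_reporting=1",
        f"bs={bs}",
        "rw=randwrite",
        f"iodepth={iodepth}",
        "size=16GB",
    ]
    for i in range(n_jobs):
        lines += ["", f"[job{i}]", f"filename=/mnt/nvme/fio/data{i}.file"]
    return "\n".join(lines)

print(make_job_file(4))
```

Writing the result to a file and passing it to fio reproduces the "4 files, 4 jobs" configuration that avoided the single-inode contention.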
{
"msg_contents": "\n\n\nOn 6/9/22 13:23, Jakub Wartak wrote:\n>>>>>>> The really\n>>>>>> puzzling thing is why is the filesystem so much slower for smaller\n>>>>>> pages. I mean, why would writing 1K be 1/3 of writing 4K?\n>>>>>> Why would a filesystem have such effect?\n>>>>>\n>>>>> Ha! I don't care at this point as 1 or 2kB seems too small to handle\n>>>>> many real world scenarios ;)\n>>> [..]\n>>>> Independently of that, it seems like an interesting behavior and it\n>>>> might tell us something about how to optimize for larger pages.\n>>>\n>>> OK, curiosity won:\n>>>\n>>> With randwrite on ext4 directio using 4kb the avgqu-sz reaches ~90-100\n>>> (close to fio's 128 queue depth?) and I'm getting ~70k IOPS [with\n>>> maxdepth=128] With randwrite on ext4 directio using 1kb the avgqu-sz is just\n>> 0.7 and I'm getting just ~17-22k IOPS [with maxdepth=128] -> conclusion:\n>> something is being locked thus preventing queue to build up With randwrite on\n>> ext4 directio using 4kb the avgqu-sz reaches ~2.3 (so something is queued) and\n>> I'm also getting ~70k IOPS with minimal possible maxdepth=4 -> conclusion: I\n>> just need to split the lock contention by 4.\n>>>\n>>> The 1kB (slow) profile top function is aio_write() -> .... -> iov_iter_get_pages()\n>> -> internal_get_user_pages_fast() and there's sadly plenty of \"lock\" keywords\n>> inside {related to memory manager, padding to full page size, inode locking}\n>> also one can find some articles / commits related to it [1] which didn't made a\n>> good feeling to be honest as the fio is using just 1 file (even while I'm on kernel\n>> 5.10.x). So I've switched to 4x files and numjobs=4 and got easily 60k IOPS,\n>> contention solved whatever it was :) So I would assume PostgreSQL (with it's\n>> splitting data files by default on 1GB boundaries and multiprocess architecture)\n>> should be relatively safe from such ext4 inode(?)/mm(?) 
contentions even with\n>> smallest 1kb block sizes on Direct I/O some day.\n>>>\n>>\n>> Interesting. So what parameter values would you suggest?\n> \n> At least have 4x filename= entries and numjobs=4\n> \n>> FWIW some of the tests I did were on xfs, so I wonder if that might be hitting\n>> similar/other bottlenecks.\n> \n> Apparently XFS also shows same contention on single file for 1..2kb randwrite, see [ZZZ]. \n> \n\nI don't have any results yet, but after thinking about this a bit I find\nthis really strange. Why would there be any contention with a single fio\njob? Doesn't contention imply multiple processes competing for the same\nresource/lock etc.?\n\nIsn't this simply due to the iodepth increase? IIUC with multiple fio\njobs, each will use a separate iodepth value. So with numjobs=4, we'll\nreally use iodepth*4, which can make a big difference.\n\n\n>>>\n>>> Explanation: it's the CPU scheduler migrations mixing the performance result\n>> during the runs of fio (as you have in your framework). Various VCPUs seem to\n>> be having varying max IOPS characteristics (sic!) and CPU scheduler seems to be\n>> unaware of it. At least on 1kB and 4kB blocksize this happens also notice that\n>> some VCPUs [XXXX marker] don't reach 100% CPU reaching almost twice the\n>> result; while cores 0, 3 do reach 100% and lack CPU power to perform more.\n>> The only thing that I don't get is that it doesn't make sense from extened lscpu\n>> output (but maybe it's AWS XEN mixing real CPU mappings, who knows).\n>>\n>> Uh, that's strange. I haven't seen anything like that, but I'm running on physical\n>> HW and not AWS, so it's either that or maybe I just didn't do the same test.\n> \n> I couldn't belived it until I've checked via taskset 😊 BTW: I don't \n> have real HW with NVMe , but we might be with worth checking if\n> placing (taskset -c ...) 
fio on hyperthreading VCPU is not causing\n> (there's /sys/devices/system/cpu/cpu0/topology/thread_siblings and\n> maybe lscpu(1) output). On AWS I have feeling that lscpu might simply\n> lie and I cannot identify which VCPU is HT and which isn't.\n\nDid you see the same issue with io_uring?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 10 Jun 2022 00:24:29 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pgcon unconference / impact of block size on performance"
},
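The iodepth-vs-contention question can be made a bit more concrete with Little's law (outstanding I/Os = IOPS × per-I/O time in queue). A sketch using the avgqu-sz and IOPS figures reported earlier in the thread; the 20k/70k IOPS values are approximate midpoints and the interpretation is a rough estimate, not a measurement:

```python
# Little's law: outstanding I/Os L = IOPS * W, so per-I/O queue time W = L / IOPS.
# avgqu-sz / IOPS pairs are the ext4 direct-I/O randwrite observations above.
cases = {
    "1k bs, iodepth=128": (0.7, 20_000),   # queue never builds up
    "4k bs, iodepth=128": (95.0, 70_000),  # avgqu-sz ~90-100
    "4k bs, iodepth=4":   (2.3, 70_000),
}

for label, (avgqu_sz, iops) in cases.items():
    w_us = avgqu_sz / iops * 1_000_000
    print(f"{label}: ~{w_us:.0f} us per queued I/O")

# The 1k case sustains only ~0.7 outstanding I/Os despite iodepth=128, and its
# per-I/O time (~35 us) matches the uncontended 4k/iodepth=4 case (~33 us) --
# consistent with submission-side serialization rather than a slow device.
```

That is, the device is not slower at 1k; the queue simply never fills, which fits the single-file locking explanation.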
{
"msg_contents": "> On 6/9/22 13:23, Jakub Wartak wrote:\r\n> >>>>>>> The really\r\n> >>>>>> puzzling thing is why is the filesystem so much slower for\r\n> >>>>>> smaller pages. I mean, why would writing 1K be 1/3 of writing 4K?\r\n> >>>>>> Why would a filesystem have such effect?\r\n> >>>>>\r\n> >>>>> Ha! I don't care at this point as 1 or 2kB seems too small to\r\n> >>>>> handle many real world scenarios ;)\r\n> >>> [..]\r\n> >>>> Independently of that, it seems like an interesting behavior and it\r\n> >>>> might tell us something about how to optimize for larger pages.\r\n> >>>\r\n> >>> OK, curiosity won:\r\n> >>>\r\n> >>> With randwrite on ext4 directio using 4kb the avgqu-sz reaches\r\n> >>> ~90-100 (close to fio's 128 queue depth?) and I'm getting ~70k IOPS\r\n> >>> [with maxdepth=128] With randwrite on ext4 directio using 1kb the\r\n> >>> avgqu-sz is just\r\n> >> 0.7 and I'm getting just ~17-22k IOPS [with maxdepth=128] -> conclusion:\r\n> >> something is being locked thus preventing queue to build up With\r\n> >> randwrite on\r\n> >> ext4 directio using 4kb the avgqu-sz reaches ~2.3 (so something is\r\n> >> queued) and I'm also getting ~70k IOPS with minimal possible\r\n> >> maxdepth=4 -> conclusion: I just need to split the lock contention by 4.\r\n> >>>\r\n> >>> The 1kB (slow) profile top function is aio_write() -> .... ->\r\n> >>> iov_iter_get_pages()\r\n> >> -> internal_get_user_pages_fast() and there's sadly plenty of \"lock\"\r\n> >> -> keywords\r\n> >> inside {related to memory manager, padding to full page size, inode\r\n> >> locking} also one can find some articles / commits related to it [1]\r\n> >> which didn't made a good feeling to be honest as the fio is using\r\n> >> just 1 file (even while I'm on kernel 5.10.x). 
So I've switched to 4x\r\n> >> files and numjobs=4 and got easily 60k IOPS, contention solved\r\n> >> whatever it was :) So I would assume PostgreSQL (with it's splitting\r\n> >> data files by default on 1GB boundaries and multiprocess\r\n> >> architecture) should be relatively safe from such ext4 inode(?)/mm(?)\r\n> contentions even with smallest 1kb block sizes on Direct I/O some day.\r\n> >>>\r\n> >>\r\n> >> Interesting. So what parameter values would you suggest?\r\n> >\r\n> > At least have 4x filename= entries and numjobs=4\r\n> >\r\n> >> FWIW some of the tests I did were on xfs, so I wonder if that might\r\n> >> be hitting similar/other bottlenecks.\r\n> >\r\n> > Apparently XFS also shows same contention on single file for 1..2kb randwrite,\r\n> see [ZZZ].\r\n> >\r\n> \r\n> I don't have any results yet, but after thinking about this a bit I find this really\r\n> strange. Why would there be any contention with a single fio job? Doesn't contention \r\n> imply multiple processes competing for the same resource/lock etc.?\r\n\r\nMaybe 1 job throws a lot of concurrent random I/Os that contend against the same inodes / pages (?) \r\n \r\n> Isn't this simply due to the iodepth increase? IIUC with multiple fio jobs, each\r\n> will use a separate iodepth value. So with numjobs=4, we'll really use iodepth*4,\r\n> which can make a big difference.\r\n\r\nI was thinking the same (it should be enough to have big queue depth), but apparently one needs many files (inodes?) 
too:\r\n\r\nOn 1 file I'm not getting a lot of IOPS on small blocksize (even with numjobs), < 20k IOPS always:\r\nnumjobs=1/ext4/io_uring/nvme/randwrite/128/1k/1.txt: write: IOPS=13.5k, BW=13.2MiB/s (13.8MB/s)(396MiB/30008msec); 0 zone resets\r\nnumjobs=1/ext4/io_uring/nvme/randwrite/128/4k/1.txt: write: IOPS=49.1k, BW=192MiB/s (201MB/s)(5759MiB/30001msec); 0 zone resets\r\nnumjobs=1/ext4/libaio/nvme/randwrite/128/1k/1.txt: write: IOPS=16.8k, BW=16.4MiB/s (17.2MB/s)(494MiB/30001msec); 0 zone resets\r\nnumjobs=1/ext4/libaio/nvme/randwrite/128/4k/1.txt: write: IOPS=62.5k, BW=244MiB/s (256MB/s)(7324MiB/30001msec); 0 zone resets\r\nnumjobs=1/xfs/io_uring/nvme/randwrite/128/1k/1.txt: write: IOPS=14.7k, BW=14.3MiB/s (15.0MB/s)(429MiB/30008msec); 0 zone resets\r\nnumjobs=1/xfs/io_uring/nvme/randwrite/128/4k/1.txt: write: IOPS=46.4k, BW=181MiB/s (190MB/s)(5442MiB/30002msec); 0 zone resets\r\nnumjobs=1/xfs/libaio/nvme/randwrite/128/1k/1.txt: write: IOPS=22.3k, BW=21.8MiB/s (22.9MB/s)(654MiB/30001msec); 0 zone resets\r\nnumjobs=1/xfs/libaio/nvme/randwrite/128/4k/1.txt: write: IOPS=59.6k, BW=233MiB/s (244MB/s)(6988MiB/30001msec); 0 zone resets\r\nnumjobs=4/ext4/io_uring/nvme/randwrite/128/1k/1.txt: write: IOPS=13.9k, BW=13.6MiB/s (14.2MB/s)(407MiB/30035msec); 0 zone resets [FAIL 4*qdepth]\r\nnumjobs=4/ext4/io_uring/nvme/randwrite/128/4k/1.txt: write: IOPS=52.9k, BW=207MiB/s (217MB/s)(6204MiB/30010msec); 0 zone resets\r\nnumjobs=4/ext4/libaio/nvme/randwrite/128/1k/1.txt: write: IOPS=17.9k, BW=17.5MiB/s (18.4MB/s)(525MiB/30001msec); 0 zone resets [FAIL 4*qdepth]\r\nnumjobs=4/ext4/libaio/nvme/randwrite/128/4k/1.txt: write: IOPS=63.3k, BW=247MiB/s (259MB/s)(7417MiB/30001msec); 0 zone resets\r\nnumjobs=4/xfs/io_uring/nvme/randwrite/128/1k/1.txt: write: IOPS=14.3k, BW=13.9MiB/s (14.6MB/s)(419MiB/30033msec); 0 zone resets [FAIL 4*qdepth]\r\nnumjobs=4/xfs/io_uring/nvme/randwrite/128/4k/1.txt: write: IOPS=50.5k, BW=197MiB/s (207MB/s)(5917MiB/30010msec); 0 zone 
resets\r\nnumjobs=4/xfs/libaio/nvme/randwrite/128/1k/1.txt: write: IOPS=19.6k, BW=19.1MiB/s (20.1MB/s)(574MiB/30001msec); 0 zone resets [FAIL 4*qdepth]\r\nnumjobs=4/xfs/libaio/nvme/randwrite/128/4k/1.txt: write: IOPS=63.6k, BW=248MiB/s (260MB/s)(7448MiB/30001msec); 0 zone resets\r\n\r\nNow with 4 files: It is necessary to have *both* 4 files and bigger processes to get the result, irrespective of IO interface and fs to get closer to at least half of IOPS max\r\nnumjobs=1/ext4/io_uring/nvme/randwrite/128/1k/1.txt: write: IOPS=28.3k, BW=27.6MiB/s (28.9MB/s)(834MiB/30230msec); 0 zone resets\r\nnumjobs=1/ext4/io_uring/nvme/randwrite/128/4k/1.txt: write: IOPS=57.8k, BW=226MiB/s (237MB/s)(6772MiB/30001msec); 0 zone resets\r\nnumjobs=1/ext4/libaio/nvme/randwrite/128/1k/1.txt: write: IOPS=17.3k, BW=16.9MiB/s (17.7MB/s)(506MiB/30001msec); 0 zone resets\r\nnumjobs=1/ext4/libaio/nvme/randwrite/128/4k/1.txt: write: IOPS=61.6k, BW=240MiB/s (252MB/s)(7215MiB/30001msec); 0 zone resets\r\nnumjobs=1/xfs/io_uring/nvme/randwrite/128/1k/1.txt: write: IOPS=24.3k, BW=23.8MiB/s (24.9MB/s)(713MiB/30008msec); 0 zone resets\r\nnumjobs=1/xfs/io_uring/nvme/randwrite/128/4k/1.txt: write: IOPS=54.7k, BW=214MiB/s (224MB/s)(6408MiB/30002msec); 0 zone resets\r\nnumjobs=1/xfs/libaio/nvme/randwrite/128/1k/1.txt: write: IOPS=22.1k, BW=21.6MiB/s (22.6MB/s)(648MiB/30001msec); 0 zone resets\r\nnumjobs=1/xfs/libaio/nvme/randwrite/128/4k/1.txt: write: IOPS=65.7k, BW=257MiB/s (269MB/s)(7705MiB/30001msec); 0 zone resets\r\nnumjobs=4/ext4/io_uring/nvme/randwrite/128/1k/1.txt: write: IOPS=34.1k, BW=33.3MiB/s (34.9MB/s)(999MiB/30020msec); 0 zone resets [OK?]\r\nnumjobs=4/ext4/io_uring/nvme/randwrite/128/4k/1.txt: write: IOPS=64.5k, BW=252MiB/s (264MB/s)(7565MiB/30003msec); 0 zone resets\r\nnumjobs=4/ext4/libaio/nvme/randwrite/128/1k/1.txt: write: IOPS=49.7k, BW=48.5MiB/s (50.9MB/s)(1456MiB/30001msec); 0 zone resets [OK]\r\nnumjobs=4/ext4/libaio/nvme/randwrite/128/4k/1.txt: write: IOPS=67.1k, BW=262MiB/s 
(275MB/s)(7874MiB/30037msec); 0 zone resets\r\nnumjobs=4/xfs/io_uring/nvme/randwrite/128/1k/1.txt: write: IOPS=33.9k, BW=33.1MiB/s (34.7MB/s)(994MiB/30026msec); 0 zone resets [OK?]\r\nnumjobs=4/xfs/io_uring/nvme/randwrite/128/4k/1.txt: write: IOPS=67.7k, BW=264MiB/s (277MB/s)(7933MiB/30007msec); 0 zone resets\r\nnumjobs=4/xfs/libaio/nvme/randwrite/128/1k/1.txt: write: IOPS=61.0k, BW=59.5MiB/s (62.4MB/s)(1786MiB/30001msec); 0 zone resets [OK]\r\nnumjobs=4/xfs/libaio/nvme/randwrite/128/4k/1.txt: write: IOPS=69.2k, BW=270MiB/s (283MB/s)(8111MiB/30004msec); 0 zone resets\r\n\r\nIt makes me thing this looks like some file/inode<->process kind of a locking (reminder: Direct I/O case) -- note that even with files=4 and numjobs=1 it doesn't reach those levels it should. One way or another PostgreSQL should be safe on OLTP - that's the first though, but on 2nd thought - when thinking about extreme IOPS and single-threaded checkpointer / bgwriter / walrecovery on standbys I'm not so sure. In potential future IO API implementations - with Direct I/O (???) - the 1kb, 2kb apparently would seem to be limited unless you parallelize those processes due to some internal kernel locking (sigh! - at least that's what the result the 4 files/numjobs=1/../1k cases indicate; this may vary across kernel versions as per earlier link). \r\n \r\n> >>>\r\n> >>> Explanation: it's the CPU scheduler migrations mixing the\r\n> >>> performance result\r\n> >> during the runs of fio (as you have in your framework). Various\r\n> >> VCPUs seem to be having varying max IOPS characteristics (sic!) and\r\n> >> CPU scheduler seems to be unaware of it. 
At least on 1kB and 4kB\r\n> >> blocksize this happens also notice that some VCPUs [XXXX marker]\r\n> >> don't reach 100% CPU reaching almost twice the result; while cores 0, 3 do\r\n> reach 100% and lack CPU power to perform more.\r\n> >> The only thing that I don't get is that it doesn't make sense from\r\n> >> extened lscpu output (but maybe it's AWS XEN mixing real CPU mappings,\r\n> who knows).\r\n> >>\r\n> >> Uh, that's strange. I haven't seen anything like that, but I'm\r\n> >> running on physical HW and not AWS, so it's either that or maybe I just didn't\r\n> do the same test.\r\n> >\r\n> > I couldn't belived it until I've checked via taskset 😊 BTW: I don't\r\n> > have real HW with NVMe , but we might be with worth checking if\r\n> > placing (taskset -c ...) fio on hyperthreading VCPU is not causing\r\n> > (there's /sys/devices/system/cpu/cpu0/topology/thread_siblings and\r\n> > maybe lscpu(1) output). On AWS I have feeling that lscpu might simply\r\n> > lie and I cannot identify which VCPU is HT and which isn't.\r\n> \r\n> Did you see the same issue with io_uring?\r\n\r\nYes, tested today, got similar results (io_uring doesn’t change a thing and BTW it looks like hypervisor shifts real HW CPUs to logical VCPUs ) After reading this https://wiki.xenproject.org/wiki/Hyperthreading (section: Is Xen hyperthreading aware), I think solid NVMe testing shouldn't be conducted on anything virtualized - I have no control over potentially noisy CPU-heavy neighbors. So please take my results with a grain of salt, unless somebody reproduces these taskset -c .. fio tests on proper isolated HW, but another thing: that's where PostgreSQL runs in reality.\r\n\r\n-J.\r\n",
"msg_date": "Fri, 10 Jun 2022 08:52:49 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: pgcon unconference / impact of block size on performance"
},
{
"msg_contents": "I did a couple tests to evaluate the impact of filesystem overhead and\nblock size, so here are some preliminary results. I'm running a more\nextensive set of tests, but some of this seems interesting.\n\nI did two sets of tests:\n\n1) fio test on raw devices\n\n2) fio tests on ext4/xfs with different fs block size\n\nBoth sets of tests were executed with varying iodepth (1, 2, 4, ...) and\nnumber of processes (1, 8).\n\nThe results are attached - CSV file with results, and PDF with pivot\ntables showing them in more readable format.\n\n\n1) raw device tests\n\nThe results for raw devices have regular patterns, with smaller blocks\ngiving better performance - particularly for read workloads. For write\nworkloads, it's similar, except that 4K blocks perform better than 1-2K\nones (this applies especially to the NVMe device).\n\n\n2) fs tests\n\nThis shows how the tests perform on ext4/xfs filesystems with different\nblock sizes (1K-4K). Overall the patterns are fairly similar to raw\ndevices. There are a couple strange things, though.\n\nFor example, ext4 often behaves like this on the \"write\" (i.e.\nsequential write) benchmark:\n\n fs block 1K 2K 4K 8K 16K 32K\n --------------------------------------------------------------\n 1024 33374 28290 27286 26453 22341 19568\n 2048 33420 38595 75741 63790 48474 33474\n 4096 33959 38913 73949 63940 49217 33017\n\nIt's somewhat expected that 1-2K blocks perform worse than 4K (the raw\ndevice behaves the same way), but notice how the behavior differs\ndepending on the fs block. For 2k and 4K fs blocks the throughput\nimproves, but for 1K blocks it just goes down. 
For higher iodepth values\nthis is even more visible:\n\n fs block 1K 2K 4K 8K 16K 32K\n ------------------------------------------------------------\n 1024 34879 25708 24744 23937 22527 19357\n 2048 31648 50348 282696 236118 121750 60646\n 4096 34273 39890 273395 214817 135072 66943\n\nThe interesting thing is xfs does not have this issue.\n\nFurthermore, it seems interesting to compare iops on a filesystem to the\nraw device, which might be seen as \"best case\" without the fs overhead.\nThe \"comparison\" attachments do exactly that.\n\nThere are two interesting observations, here:\n\n1) ext4 seems to have some issue with 1-2K random writes (randrw and\nrandwrite tests) with larger 2-4K filesystem blocks. Consider for\nexample this:\n\n fs block 1K 2K 4K 8K 16K 32K\n ------------------------------------------------------------------\n 1024 214765 143564 108075 83098 58238 38569\n 2048 66010 216287 260116 214541 113848 57045\n 4096 66656 64155 268141 215860 109175 54877\n\nAgain, the xfs does not behave like this.\n\n2) Interestingly enough, some cases can actually perform better on a\nfilesystem than directly on the raw device - I'm not sure what's the\nexplanation, but it only happens on the SSD RAID (not on the NVMe), and\nwith higher iodepth values.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 13 Jun 2022 16:06:56 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pgcon unconference / impact of block size on performance"
},
{
"msg_contents": "On Sat, Jun 4, 2022 at 6:23 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> At on of the pgcon unconference sessions a couple days ago, I presented\n> a bunch of benchmark results comparing performance with different\n> data/WAL block size. Most of the OLTP results showed significant gains\n> (up to 50%) with smaller (4k) data pages.\n>\n\nWow. Random numbers are fantastic, Significant reduction in sequential\nthroughput is a little painful though, I see 40% reduction in some cases if\nI'm reading that right. Any thoughts on why that's the case? Are there\nmitigations possible?\n\nmerlin",
"msg_date": "Mon, 13 Jun 2022 10:42:31 -0500",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgcon unconference / impact of block size on performance"
},
{
"msg_contents": "On 6/13/22 17:42, Merlin Moncure wrote:\n> On Sat, Jun 4, 2022 at 6:23 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> wrote:\n> \n> Hi,\n> \n> At on of the pgcon unconference sessions a couple days ago, I presented\n> a bunch of benchmark results comparing performance with different\n> data/WAL block size. Most of the OLTP results showed significant gains\n> (up to 50%) with smaller (4k) data pages.\n> \n> \n> Wow. Random numbers are fantastic, Significant reduction in sequential\n> throughput is a little painful though, I see 40% reduction in some cases\n> if I'm reading that right. Any thoughts on why that's the case? Are\n> there mitigations possible?\n> \n\nI think you read that right - given a fixed I/O depth, the throughput\nfor sequential access gets reduced. Consider for example the attached\nchart with sequential read/write results for the Optane 900P. The IOPS\nincreases for smaller blocks, but not enough to compensate for the\nbandwidth drop.\n\nRegarding the mitigations - I think prefetching (read-ahead) should do\nthe trick. Just going to iodepth=2 mostly makes up for the bandwidth\ndifference. You might argue prefetching would improve the random I/O\nresults too, but I don't think that's the same thing - read-ahead for\nsequential workloads is much easier to implement (even transparently).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 13 Jun 2022 18:05:38 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pgcon unconference / impact of block size on performance"
}
] |
[
{
"msg_contents": "HI hackers,\n\nI thought it would be better to start a new thread to discuss.\nWhile working with sorting patch, and read others threads,\nI have some ideas to reduces memory consumption by aset and generation\nmemory modules.\n\nI have done basic benchmarks, and it seems to improve performance.\nI think it's really worth it, if it really is possible to reduce memory\nconsumption.\n\nLinux Ubuntu 64 bits\nwork_mem = 64MB\n\nset max_parallel_workers_per_gather = 0;\ncreate table t (a bigint not null, b bigint not null, c bigint not\nnull, d bigint not null, e bigint not null, f bigint not null);\n\ninsert into t select x,x,x,x,x,x from generate_Series(1,140247142) x; --\n10GB!\nvacuum freeze t;\n\nselect * from t order by a offset 140247142;\n\nHEAD:\npostgres=# select * from t order by a offset 140247142;\n a | b | c | d | e | f\n---+---+---+---+---+---\n(0 rows)\n\nwork_mem=64MB\nTime: 99603,544 ms (01:39,604)\nTime: 94000,342 ms (01:34,000)\n\npostgres=# set work_mem=\"64.2MB\";\nSET\nTime: 0,210 ms\npostgres=# select * from t order by a offset 140247142;\n a | b | c | d | e | f\n---+---+---+---+---+---\n(0 rows)\n\nTime: 95306,254 ms (01:35,306)\n\n\nPATCHED:\npostgres=# explain analyze select * from t order by a offset 140247142;\n a | b | c | d | e | f\n---+---+---+---+---+---\n(0 rows)\n\nwork_mem=64MB\nTime: 90946,482 ms (01:30,946)\n\npostgres=# set work_mem=\"64.2MB\";\nSET\nTime: 0,210 ms\npostgres=# select * from t order by a offset 140247142;\n a | b | c | d | e | f\n---+---+---+---+---+---\n(0 rows)\n\nTime: 91817,533 ms (01:31,818)\n\n\nThere is still room for further improvements, and at this point I need help.\n\nRegarding the patches we have:\n1) 001-aset-reduces-memory-consumption.patch\nReduces memory used by struct AllocBlockData by minus 8 bits,\nreducing the total size to 32 bits, which leads to \"fitting\" two structs in\na 64bit cache.\n\nMove some stores to fields struct, for the order of declaration, within 
the\nstructures.\n\nRemove tests elog(ERROR, \"could not find block containing chunk %p\" and\nelog(ERROR, \"could not find block containing chunk %p\", moving them to\nMEMORY_CONTEXT_CHECKING context.\n\nSince 8.2 versions, nobody complains about these tests.\nBut if is not acceptable, have the option (3)\n003-aset-reduces-memory-consumption.patch\n\n2) 002-generation-reduces-memory-consumption.patch\nReduces memory used by struct GenerationBlock, by minus 8 bits,\nreducing the total size to 32 bits, which leads to \"fitting\" two structs in\na 64bit cache.\n\nRemove all references to the field *block* used by struct GenerationChunk,\nenabling its removal! (not done yet).\nWhat would take the final size to 16 bits, which leads to \"fitting\" four\nstructs in a 64bit cache.\nUnfortunately, everything works only for the size 24, see the (4).\n\nMove some stores to fields struct, for the order of declaration, within the\nstructures.\n\n3) 003-aset-reduces-memory-consumption.patch\nSame to the (1), but without remove the tests:\nelog(ERROR, \"could not find block containing chunk %p\" and\nelog(ERROR, \"could not find block containing chunk %p\",\nBut at the cost of removing a one tiny part of the tests.\n\nSince 8.2 versions, nobody complains about these tests.\n\n4) 004-generation-reduces-memory-consumption-BUG.patch\nSame to the (2), but with BUG.\nIt only takes a few tweaks to completely remove the field block.\n\n@@ -117,9 +116,9 @@ struct GenerationChunk\n /* this is zero in a free chunk */\n Size requested_size;\n\n-#define GENERATIONCHUNK_RAWSIZE (SIZEOF_SIZE_T * 2 + SIZEOF_VOID_P * 2)\n+#define GENERATIONCHUNK_RAWSIZE (SIZEOF_SIZE_T * 2 + SIZEOF_VOID_P)\n #else\n-#define GENERATIONCHUNK_RAWSIZE (SIZEOF_SIZE_T + SIZEOF_VOID_P * 2)\n+#define GENERATIONCHUNK_RAWSIZE (SIZEOF_SIZE_T + SIZEOF_VOID_P)\n #endif /* MEMORY_CONTEXT_CHECKING */\n\n /* ensure proper alignment by adding padding if needed */\n@@ -127,7 +126,6 @@ struct GenerationChunk\n char 
padding[MAXIMUM_ALIGNOF - GENERATIONCHUNK_RAWSIZE % MAXIMUM_ALIGNOF];\n #endif\n\n- GenerationBlock *block; /* block owning this chunk */\n GenerationContext *context; /* owning context, or NULL if freed chunk */\n /* there must not be any padding to reach a MAXALIGN boundary here! */\n};\n\nThis fails with make check.\nI couldn't figure out why it doesn't work with 16 bits (struct\nGenerationChunk).",
"msg_date": "Sun, 5 Jun 2022 16:28:02 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Reducing Memory Consumption (aset and generation)"
},
{
"msg_contents": "On Mon, 6 Jun 2022 at 07:28, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> 4) 004-generation-reduces-memory-consumption-BUG.patch\n> Same to the (2), but with BUG.\n> It only takes a few tweaks to completely remove the field block.\n\n> This fails with make check.\n> I couldn't figure out why it doesn't work with 16 bits (struct GenerationChunk).\n\nI think you're misunderstanding how blocks and chunks work here. A\nblock can have many chunks. You can't find the block that a chunk is\non by subtracting Generation_BLOCKHDRSZ from the pointer given to\nGenerationFree(). That would only work if the chunk happened to be the\nfirst chunk on a block. If it's anything apart from that then you'll\nbe making adjustments to the memory of some prior chunk on the block.\nI imagine this is the reason you can't get the tests to pass.\n\nCan you also explain why you think moving code around randomly or\nadding unlikely() macros helps reduce the memory consumption overheads\nof generation contexts? I imagine you think that's helping to further\nimprove performance, but you've not offered any evidence of that\nseparately from the other changes you've made. If you think those are\nuseful changes then I recommend you run individual benchmarks and\noffer those as proof that those changes are worthwhile.\n\nDavid\n\n\n",
"msg_date": "Tue, 7 Jun 2022 11:36:55 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing Memory Consumption (aset and generation)"
},
{
"msg_contents": "Em seg., 6 de jun. de 2022 às 20:37, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Mon, 6 Jun 2022 at 07:28, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > 4) 004-generation-reduces-memory-consumption-BUG.patch\n> > Same to the (2), but with BUG.\n> > It only takes a few tweaks to completely remove the field block.\n>\n> > This fails with make check.\n> > I couldn't figure out why it doesn't work with 16 bits (struct\n> GenerationChunk).\n>\n> Hi David, thanks for taking a look at this.\n\n\n> I think you're misunderstanding how blocks and chunks work here. A\n> block can have many chunks. You can't find the block that a chunk is\n> on by subtracting Generation_BLOCKHDRSZ from the pointer given to\n> GenerationFree(). That would only work if the chunk happened to be the\n> first chunk on a block. If it's anything apart from that then you'll\n> be making adjustments to the memory of some prior chunk on the block.\n> I imagine this is the reason you can't get the tests to pass.\n>\nOk, I am still learning about this.\nCan you explain why subtracting Generation_BLOCKHDRSZ from the pointer,\nworks for sizeof(struct GenerationChunk) = 24 bits,\nWhen all references for the block field have been removed.\nThis pass check-world.\n\n\n>\n> Can you also explain why you think moving code around randomly or\n> adding unlikely() macros helps reduce the memory consumption overheads\n> of generation contexts?\n\nOf course, those changes do not reduce memory consumption.\nBut, IMO, I think those changes improve the access to memory regions,\nbecause of the locality of the data.\n\nAbout \"unlikely macros\", this helps the branchs prediction, when most of\nthe time,\nmalloc and related functions, will not fail.\n\n\n> I imagine you think that's helping to further\n> improve performance, but you've not offered any evidence of that\n> separately from the other changes you've made. 
If you think those are\n> useful changes then I recommend you run individual benchmarks and\n> offer those as proof that those changes are worthwhile.\n>\nOk, I can understand, are changes unrelated.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 6 Jun 2022 21:14:35 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing Memory Consumption (aset and generation)"
},
{
"msg_contents": "Em seg., 6 de jun. de 2022 às 21:14, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em seg., 6 de jun. de 2022 às 20:37, David Rowley <dgrowleyml@gmail.com>\n> escreveu:\n>\n>> On Mon, 6 Jun 2022 at 07:28, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> > 4) 004-generation-reduces-memory-consumption-BUG.patch\n>> > Same to the (2), but with BUG.\n>> > It only takes a few tweaks to completely remove the field block.\n>>\n>> > This fails with make check.\n>> > I couldn't figure out why it doesn't work with 16 bits (struct\n>> GenerationChunk).\n>>\n>> Hi David, thanks for taking a look at this.\n>\n>\n>> I think you're misunderstanding how blocks and chunks work here. A\n>> block can have many chunks. You can't find the block that a chunk is\n>> on by subtracting Generation_BLOCKHDRSZ from the pointer given to\n>> GenerationFree(). That would only work if the chunk happened to be the\n>> first chunk on a block. If it's anything apart from that then you'll\n>> be making adjustments to the memory of some prior chunk on the block.\n>> I imagine this is the reason you can't get the tests to pass.\n>>\n> Ok, I am still learning about this.\n> Can you explain why subtracting Generation_BLOCKHDRSZ from the pointer,\n> works for sizeof(struct GenerationChunk) = 24 bits,\n> When all references for the block field have been removed.\n> This pass check-world.\n>\n>\n>>\n>> Can you also explain why you think moving code around randomly or\n>> adding unlikely() macros helps reduce the memory consumption overheads\n>> of generation contexts?\n>\n> Of course, those changes do not reduce memory consumption.\n> But, IMO, I think those changes improve the access to memory regions,\n> because of the locality of the data.\n>\n> About \"unlikely macros\", this helps the branchs prediction, when most of\n> the time,\n> malloc and related functions, will not fail.\n>\n>\n>> I imagine you think that's helping to further\n>> improve performance, but you've not offered 
any evidence of that\n>> separately from the other changes you've made. If you think those are\n>> useful changes then I recommend you run individual benchmarks and\n>> offer those as proof that those changes are worthwhile.\n>>\n> Ok, I can understand, are changes unrelated.\n>\nLet's restart this, to simplify the review and commit work.\n\nRegarding the patches now, we have:\n1) v1-001-aset-reduces-memory-consumption.patch\nReduces memory used by struct AllocBlockData by minus 8 bits,\nreducing the total size to 32 bits, which leads to \"fitting\" two structs in\na 64bit cache.\n\nRemove tests elog(ERROR, \"could not find block containing chunk %p\" and\nelog(ERROR, \"could not find block containing chunk %p\", moving them to\nMEMORY_CONTEXT_CHECKING context.\n\nSince 8.2 versions, nobody complains about these tests.\n\nBut if is not acceptable, have the option (3)\nv1-003-aset-reduces-memory-consumption.patch\n\n2) v1-002-generation-reduces-memory-consumption.patch\nReduces memory used by struct GenerationBlock, by minus 8 bits,\nreducing the total size to 32 bits, which leads to \"fitting\" two structs in\na 64bit cache.\n\n3) v1-003-aset-reduces-memory-consumption.patch\nSame to the (1), but without remove the tests:\nelog(ERROR, \"could not find block containing chunk %p\" and\nelog(ERROR, \"could not find block containing chunk %p\",\nBut at the cost of removing a one tiny part of the tests and\nmoving them to MEMORY_CONTEXT_CHECKING context.\n\nSince 8.2 versions, nobody complains about these tests.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 6 Jun 2022 22:09:06 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing Memory Consumption (aset and generation)"
},
{
"msg_contents": "On Tue, 7 Jun 2022 at 03:09, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Let's restart this, to simplify the review and commit work.\n\nThe patchset fails to apply. Could you send an updated version that\napplies to the current master branch?\n\n> Regarding the patches now, we have:\n> 1) v1-001-aset-reduces-memory-consumption.patch\n> Reduces memory used by struct AllocBlockData by minus 8 bits,\n\nThis seems reasonable, considering we don't generally use the field\nfor anything but validation.\n\n> reducing the total size to 32 bits, which leads to \"fitting\" two structs in a 64bit cache.\n\nBy bits, you mean bytes, right?\n\nRegarding fitting 2 structs in 64 bytes, that point is moot, as each\nof these structs are stored at the front of each malloc-ed block, so\nyou will never see more than one of these in the same cache line. Less\nspace used is nice, but not as critical there IMO.\n\n> Remove tests elog(ERROR, \"could not find block containing chunk %p\" and\n> elog(ERROR, \"could not find block containing chunk %p\", moving them to\n> MEMORY_CONTEXT_CHECKING context.\n>\n> Since 8.2 versions, nobody complains about these tests.\n>\n> But if is not acceptable, have the option (3) v1-003-aset-reduces-memory-consumption.patch\n>\n> 2) v1-002-generation-reduces-memory-consumption.patch\n> Reduces memory used by struct GenerationBlock, by minus 8 bits,\n\nThat seems fairly straight-forward -- 8 bytes saved on each page isn't\na lot, but it's something.\n\n> reducing the total size to 32 bits, which leads to \"fitting\" two structs in a 64bit cache.\n\nYour size accounting seems wrong. On 64-bit architectures, we have\ndlist_node (=16) + Size (=8) + 2*int (=8) + 2 * (char*) (=16) = 48\nbytes. 
Shaving off the Size field reduces that by 8 bytes to 40 bytes.\n\nThe argument of fitting 2 of these structures into one cache line is\nmoot again, because here, too, two of this struct will not share a\ncache line (unless somehow we allocate 0-sized blocks, which would be\na bug).\n\n> 3) v1-003-aset-reduces-memory-consumption.patch\n> Same to the (1), but without remove the tests:\n> elog(ERROR, \"could not find block containing chunk %p\" and\n> elog(ERROR, \"could not find block containing chunk %p\",\n> But at the cost of removing a one tiny part of the tests and\n> moving them to MEMORY_CONTEXT_CHECKING context.\n\nI like this patch over 001 due to allowing less corruption to occur in\nthe memory context code. This allows for detecting some issues in 003,\nas opposed to none in 001.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 11 Jul 2022 10:47:53 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing Memory Consumption (aset and generation)"
},
{
"msg_contents": "Hi,\nThanks for take a look.\n\nEm seg., 11 de jul. de 2022 às 05:48, Matthias van de Meent <\nboekewurm+postgres@gmail.com> escreveu:\n\n> On Tue, 7 Jun 2022 at 03:09, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > Let's restart this, to simplify the review and commit work.\n>\n> The patchset fails to apply. Could you send an updated version that\n> applies to the current master branch?\n>\nSure.\n\n\n>\n> > Regarding the patches now, we have:\n> > 1) v1-001-aset-reduces-memory-consumption.patch\n> > Reduces memory used by struct AllocBlockData by minus 8 bits,\n>\n> This seems reasonable, considering we don't generally use the field\n> for anything but validation.\n>\n> > reducing the total size to 32 bits, which leads to \"fitting\" two structs\n> in a 64bit cache.\n>\n> By bits, you mean bytes, right?\n>\nCorrect.\n\n\n>\n> Regarding fitting 2 structs in 64 bytes, that point is moot, as each\n> of these structs are stored at the front of each malloc-ed block, so\n> you will never see more than one of these in the same cache line. Less\n> space used is nice, but not as critical there IMO.\n>\n> > Remove tests elog(ERROR, \"could not find block containing chunk %p\" and\n> > elog(ERROR, \"could not find block containing chunk %p\", moving them to\n> > MEMORY_CONTEXT_CHECKING context.\n> >\n> > Since 8.2 versions, nobody complains about these tests.\n> >\n> > But if is not acceptable, have the option (3)\n> v1-003-aset-reduces-memory-consumption.patch\n> >\n> > 2) v1-002-generation-reduces-memory-consumption.patch\n> > Reduces memory used by struct GenerationBlock, by minus 8 bits,\n>\n> That seems fairly straight-forward -- 8 bytes saved on each page isn't\n> a lot, but it's something.\n>\n> > reducing the total size to 32 bits, which leads to \"fitting\" two structs\n> in a 64bit cache.\n>\n> Your size accounting seems wrong. On 64-bit architectures, we have\n> dlist_node (=16) + Size (=8) + 2*int (=8) + 2 * (char*) (=16) = 48\n> bytes. 
Shaving off the Size field reduces that by 8 bytes to 40 bytes.\n>\n> The argument of fitting 2 of these structures into one cache line is\n> moot again, because here, too, two of this struct will not share a\n> cache line (unless somehow we allocate 0-sized blocks, which would be\n> a bug).\n>\nRight. I think I was very tired.\n\n\n>\n> > 3) v1-003-aset-reduces-memory-consumption.patch\n> > Same to the (1), but without remove the tests:\n> > elog(ERROR, \"could not find block containing chunk %p\" and\n> > elog(ERROR, \"could not find block containing chunk %p\",\n> > But at the cost of removing a one tiny part of the tests and\n> > moving them to MEMORY_CONTEXT_CHECKING context.\n>\n> I like this patch over 001 due to allowing less corruption to occur in\n> the memory context code. This allows for detecting some issues in 003,\n> as opposed to none in 001.\n>\nI understand.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 11 Jul 2022 09:25:15 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing Memory Consumption (aset and generation)"
},
{
"msg_contents": "Em seg., 11 de jul. de 2022 às 09:25, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Hi,\n> Thanks for take a look.\n>\n> Em seg., 11 de jul. de 2022 às 05:48, Matthias van de Meent <\n> boekewurm+postgres@gmail.com> escreveu:\n>\n>> On Tue, 7 Jun 2022 at 03:09, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> >\n>> > Let's restart this, to simplify the review and commit work.\n>>\n>> The patchset fails to apply. Could you send an updated version that\n>> applies to the current master branch?\n>>\n> Sure.\n>\nHere the patchs updated.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 11 Jul 2022 11:18:22 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing Memory Consumption (aset and generation)"
},
{
"msg_contents": "On Mon, 11 Jul 2022 at 20:48, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> > 2) v1-002-generation-reduces-memory-consumption.patch\n> > Reduces memory used by struct GenerationBlock, by minus 8 bits,\n>\n> That seems fairly straight-forward -- 8 bytes saved on each page isn't\n> a lot, but it's something.\n\nI think 002 is likely the only patch here that has some merit.\nHowever, it's hard to imagine any measurable performance gains from\nit. I think the smallest generation block we have today is 8192\nbytes. Saving 8 bytes in that equates to a saving of 0.1% of memory.\nFor an 8MB page, it's 1024 times less than that.\n\nI imagine Ranier has been working on this due the performance\nregression mentioned in [1]. I think it'll be much more worthwhile to\naim to reduce the memory chunk overheads rather than the block\noverheads, as Ranier is doing here. I posted a patch in [2] which does\nthat. To make that work, I need to have the owning context in the\nblock. The 001 and 003 patch seems to remove those here.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvqXpLzav6dUeR5vO_RBh_feHrHMLhigVQXw9jHCyKP9PA@mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAApHDvpjauCRXcgcaL6+e3eqecEHoeRm9D-kcbuvBitgPnW=vw@mail.gmail.com\n\n\n",
"msg_date": "Tue, 12 Jul 2022 17:34:57 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing Memory Consumption (aset and generation)"
},
{
"msg_contents": "Em ter., 12 de jul. de 2022 às 02:35, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Mon, 11 Jul 2022 at 20:48, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > > 2) v1-002-generation-reduces-memory-consumption.patch\n> > > Reduces memory used by struct GenerationBlock, by minus 8 bits,\n> >\n> > That seems fairly straight-forward -- 8 bytes saved on each page isn't\n> > a lot, but it's something.\n>\n> I think 002 is likely the only patch here that has some merit.\n> However, it's hard to imagine any measurable performance gains from\n> it. I think the smallest generation block we have today is 8192\n> bytes. Saving 8 bytes in that equates to a saving of 0.1% of memory.\n> For an 8MB page, it's 1024 times less than that.\n>\n\n> I imagine Ranier has been working on this due the performance\n> regression mentioned in [1]. I think it'll be much more worthwhile to\n> aim to reduce the memory chunk overheads rather than the block\n> overheads, as Ranier is doing here. I posted a patch in [2] which does\n> that. To make that work, I need to have the owning context in the\n> block. The 001 and 003 patch seems to remove those here.\n>\nI saw the numbers at [2], 17% is very impressive.\nHow you need the context in the block, 001 and 003,\nthey are more of a hindrance than a help.\n\nSo, feel free to incorporate 002 into your patch if you wish.\nThe best thing to do here is to close and withdraw from commitfest.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 12 Jul 2022 21:23:45 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing Memory Consumption (aset and generation)"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nUsing placeholders for application variables allows the use of RLS for\napplication users as shown in this blog post\nhttps://www.2ndquadrant.com/en/blog/application-users-vs-row-level-security/\n.\n\n SET my.username = 'tomas'\n CREATE POLICY chat_policy ON chat\n USING (current_setting('my.username') IN (message_from, message_to))\n WITH CHECK (message_from = current_setting('my.username'))\n\nThis technique has enabled postgres sidecar services(PostgREST,\nPostGraphQL, etc) to keep the application security at the database level,\nwhich has worked great.\n\nHowever, defining placeholders at the role level require superuser:\n\n alter role myrole set my.username to 'tomas';\n ERROR: permission denied to set parameter \"my.username\"\n\nWhich is inconsistent and surprising behavior. I think it doesn't make\nsense since you can already set them at the session or transaction\nlevel(SET LOCAL my.username = 'tomas'). Enabling this would allow sidecar\nservices to store metadata scoped to its pertaining role.\n\nI've attached a patch that removes this restriction. From my testing, this\ndoesn't affect permission checking when an extension defines its custom GUC\nvariables.\n\n DefineCustomStringVariable(\"my.custom\", NULL, NULL, &my_custom, NULL,\n PGC_SUSET, ..);\n\nUsing PGC_SUSET or PGC_SIGHUP will fail accordingly. Also no tests fail\nwhen doing \"make installcheck\".\n\n---\nSteve Chavez\nEngineering at https://supabase.com/",
"msg_date": "Sun, 5 Jun 2022 23:20:38 -0500",
"msg_from": "Steve Chavez <steve@supabase.io>",
"msg_from_op": true,
"msg_subject": "Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Sun, Jun 05, 2022 at 11:20:38PM -0500, Steve Chavez wrote:\n> However, defining placeholders at the role level require superuser:\n> \n> alter role myrole set my.username to 'tomas';\n> ERROR: permission denied to set parameter \"my.username\"\n> \n> Which is inconsistent and surprising behavior. I think it doesn't make\n> sense since you can already set them at the session or transaction\n> level(SET LOCAL my.username = 'tomas'). Enabling this would allow sidecar\n> services to store metadata scoped to its pertaining role.\n> \n> I've attached a patch that removes this restriction. From my testing, this\n> doesn't affect permission checking when an extension defines its custom GUC\n> variables.\n> \n> DefineCustomStringVariable(\"my.custom\", NULL, NULL, &my_custom, NULL,\n> PGC_SUSET, ..);\n> \n> Using PGC_SUSET or PGC_SIGHUP will fail accordingly. Also no tests fail\n> when doing \"make installcheck\".\n\nIIUC you are basically proposing to revert a6dcd19 [0], but it is not clear\nto me why that is safe. Am I missing something?\n\n[0] https://www.postgresql.org/message-id/flat/4090.1258042387%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Jul 2022 16:40:27 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Fri, Jul 01, 2022 at 04:40:27PM -0700, Nathan Bossart wrote:\n> On Sun, Jun 05, 2022 at 11:20:38PM -0500, Steve Chavez wrote:\n>> However, defining placeholders at the role level require superuser:\n>> \n>> alter role myrole set my.username to 'tomas';\n>> ERROR: permission denied to set parameter \"my.username\"\n>> \n>> Which is inconsistent and surprising behavior. I think it doesn't make\n>> sense since you can already set them at the session or transaction\n>> level(SET LOCAL my.username = 'tomas'). Enabling this would allow sidecar\n>> services to store metadata scoped to its pertaining role.\n>> \n>> I've attached a patch that removes this restriction. From my testing, this\n>> doesn't affect permission checking when an extension defines its custom GUC\n>> variables.\n>> \n>> DefineCustomStringVariable(\"my.custom\", NULL, NULL, &my_custom, NULL,\n>> PGC_SUSET, ..);\n>> \n>> Using PGC_SUSET or PGC_SIGHUP will fail accordingly. Also no tests fail\n>> when doing \"make installcheck\".\n> \n> IIUC you are basically proposing to revert a6dcd19 [0], but it is not clear\n> to me why that is safe. Am I missing something?\n\nI spent some more time looking into this, and I think I've constructed a\nsimple example that demonstrates the problem with removing this\nrestriction.\n\n\tpostgres=# CREATE ROLE test CREATEROLE;\n\tCREATE ROLE\n\tpostgres=# CREATE ROLE other LOGIN;\n\tCREATE ROLE\n\tpostgres=# GRANT CREATE ON DATABASE postgres TO other;\n\tGRANT\n\tpostgres=# SET ROLE test;\n\tSET\n\tpostgres=> ALTER ROLE other SET plperl.on_plperl_init = 'test';\n\tALTER ROLE\n\tpostgres=> \\c postgres other\n\tYou are now connected to database \"postgres\" as user \"other\".\n\tpostgres=> CREATE EXTENSION plperl;\n\tCREATE EXTENSION\n\tpostgres=> SHOW plperl.on_plperl_init;\n\t plperl.on_plperl_init \n\t-----------------------\n\t test\n\t(1 row)\n\nIn this example, the non-superuser role sets a placeholder GUC for another\nrole. 
This GUC becomes a PGC_SUSET GUC when plperl is loaded, so a\nnon-superuser role will have successfully set a PGC_SUSET GUC. If we had a\nrecord of who ran ALTER ROLE, we might be able to apply appropriate\npermissions checking when the module is loaded, but this information\ndoesn't exist in pg_db_role_setting. IIUC we have the following options:\n\n\t1. Store information about who ran ALTER ROLE. I think there are a\n\t couple of problems with this. For example, what happens if the\n\t grantor was dropped or its privileges were altered? Should we\n\t instead store the context of the user (i.e., PGC_USERSET or\n\t PGC_SUSET)? Do we need to add entries to pg_depend?\n\t2. Ignore or ERROR for any ALTER ROLE settings for custom GUCs. Since\n\t we don't know who ran ALTER ROLE, we can't trust the value.\n\t3. Require superuser to use ALTER ROLE for a placeholder. This is what\n\t we do today. Since we know a superuser set the value, we can always\n\t apply it when the custom GUC is finally defined.\n\nIf this is an accurate representation of the options, it seems clear why\nthe superuser restriction is in place.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 18 Jul 2022 17:03:05 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "Thanks a lot for the feedback Nathan.\n\nTaking your options into consideration, for me the correct behaviour should\nbe:\n\n- The ALTER ROLE placeholder should always be stored with a PGC_USERSET\nGucContext. It's a placeholder anyway, so it should be the less restrictive\none. If the user wants to define it as PGC_SUSET or other this should be\ndone through a custom extension.\n- When an extension claims the placeholder, we should check the\nDefineCustomXXXVariable GucContext with PGC_USERSET. If there's a match,\nthen the value gets applied, otherwise WARN or ERR.\n The role GUCs get applied at login time right? So at this point we can\nWARN or ERR about the defined role GUCs.\n\nWhat do you think?\n\nOn Mon, 18 Jul 2022 at 19:03, Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Fri, Jul 01, 2022 at 04:40:27PM -0700, Nathan Bossart wrote:\n> > On Sun, Jun 05, 2022 at 11:20:38PM -0500, Steve Chavez wrote:\n> >> However, defining placeholders at the role level require superuser:\n> >>\n> >> alter role myrole set my.username to 'tomas';\n> >> ERROR: permission denied to set parameter \"my.username\"\n> >>\n> >> Which is inconsistent and surprising behavior. I think it doesn't make\n> >> sense since you can already set them at the session or transaction\n> >> level(SET LOCAL my.username = 'tomas'). Enabling this would allow\n> sidecar\n> >> services to store metadata scoped to its pertaining role.\n> >>\n> >> I've attached a patch that removes this restriction. From my testing,\n> this\n> >> doesn't affect permission checking when an extension defines its custom\n> GUC\n> >> variables.\n> >>\n> >> DefineCustomStringVariable(\"my.custom\", NULL, NULL, &my_custom,\n> NULL,\n> >> PGC_SUSET, ..);\n> >>\n> >> Using PGC_SUSET or PGC_SIGHUP will fail accordingly. Also no tests fail\n> >> when doing \"make installcheck\".\n> >\n> > IIUC you are basically proposing to revert a6dcd19 [0], but it is not\n> clear\n> > to me why that is safe. Am I missing something?\n>\n> I spent some more time looking into this, and I think I've constructed a\n> simple example that demonstrates the problem with removing this\n> restriction.\n>\n> postgres=# CREATE ROLE test CREATEROLE;\n> CREATE ROLE\n> postgres=# CREATE ROLE other LOGIN;\n> CREATE ROLE\n> postgres=# GRANT CREATE ON DATABASE postgres TO other;\n> GRANT\n> postgres=# SET ROLE test;\n> SET\n> postgres=> ALTER ROLE other SET plperl.on_plperl_init = 'test';\n> ALTER ROLE\n> postgres=> \\c postgres other\n> You are now connected to database \"postgres\" as user \"other\".\n> postgres=> CREATE EXTENSION plperl;\n> CREATE EXTENSION\n> postgres=> SHOW plperl.on_plperl_init;\n> plperl.on_plperl_init\n> -----------------------\n> test\n> (1 row)\n>\n> In this example, the non-superuser role sets a placeholder GUC for another\n> role. This GUC becomes a PGC_SUSET GUC when plperl is loaded, so a\n> non-superuser role will have successfully set a PGC_SUSET GUC. If we had a\n> record of who ran ALTER ROLE, we might be able to apply appropriate\n> permissions checking when the module is loaded, but this information\n> doesn't exist in pg_db_role_setting. IIUC we have the following options:\n>\n> 1. Store information about who ran ALTER ROLE. I think there are a\n> couple of problems with this. For example, what happens if the\n> grantor was dropped or its privileges were altered? Should we\n> instead store the context of the user (i.e., PGC_USERSET or\n> PGC_SUSET)? Do we need to add entries to pg_depend?\n> 2. Ignore or ERROR for any ALTER ROLE settings for custom GUCs.\n> Since\n> we don't know who ran ALTER ROLE, we can't trust the value.\n> 3. Require superuser to use ALTER ROLE for a placeholder. This is\n> what\n> we do today. Since we know a superuser set the value, we can\n> always\n> apply it when the custom GUC is finally defined.\n>\n> If this is an accurate representation of the options, it seems clear why\n> the superuser restriction is in place.\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n>",
"msg_date": "Tue, 19 Jul 2022 00:55:14 -0500",
"msg_from": "Steve Chavez <steve@supabase.io>",
"msg_from_op": true,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 12:55:14AM -0500, Steve Chavez wrote:\n> Taking your options into consideration, for me the correct behaviour should\n> be:\n> \n> - The ALTER ROLE placeholder should always be stored with a PGC_USERSET\n> GucContext. It's a placeholder anyway, so it should be the less restrictive\n> one. If the user wants to define it as PGC_SUSET or other this should be\n> done through a custom extension.\n> - When an extension claims the placeholder, we should check the\n> DefineCustomXXXVariable GucContext with PGC_USERSET. If there's a match,\n> then the value gets applied, otherwise WARN or ERR.\n> The role GUCs get applied at login time right? So at this point we can\n> WARN or ERR about the defined role GUCs.\n> \n> What do you think?\n\nHm. I would expect ALTER ROLE to store the PGC_SUSET context when executed\nby a superuser or a role with privileges via pg_parameter_acl. Storing all\nplaceholder GUC settings as PGC_USERSET would make things more restrictive\nthan they are today. For example, it would no longer be possible to apply\nany ALTER ROLE settings from superusers for placeholders that later become\ncustom GUCS.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 19 Jul 2022 09:53:39 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "At Tue, 19 Jul 2022 09:53:39 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Tue, Jul 19, 2022 at 12:55:14AM -0500, Steve Chavez wrote:\n> > Taking your options into consideration, for me the correct behaviour should\n> > be:\n> > \n> > - The ALTER ROLE placeholder should always be stored with a PGC_USERSET\n> > GucContext. It's a placeholder anyway, so it should be the less restrictive\n> > one. If the user wants to define it as PGC_SUSET or other this should be\n> > done through a custom extension.\n> > - When an extension claims the placeholder, we should check the\n> > DefineCustomXXXVariable GucContext with PGC_USERSET. If there's a match,\n> > then the value gets applied, otherwise WARN or ERR.\n> > The role GUCs get applied at login time right? So at this point we can\n> > WARN or ERR about the defined role GUCs.\n> > \n> > What do you think?\n> \n> Hm. I would expect ALTER ROLE to store the PGC_SUSET context when executed\n> by a superuser or a role with privileges via pg_parameter_acl. Storing all\n> placeholder GUC settings as PGC_USERSET would make things more restrictive\n> than they are today. For example, it would no longer be possible to apply\n> any ALTER ROLE settings from superusers for placeholders that later become\n> custom GUCS.\n\nCurrently placehoders are always created PGC_USERSET, thus\nnon-superuser can set it. But if loaded module defines the custom\nvariable as PGC_SUSET, the value set by the user is refused then the\nvalue from ALTER-ROLE-SET or otherwise the default value from\nDefineCustom*Variable is used. If the module defines it as\nPGC_USERSET, the last value is accepted.\n\nIf a placehoders were created PGC_SUSET, non-superusers cannot set it\non-session. 
But that behavior is not needed since loadable modules\nreject PGC_USERSET values as above.\n\n\nReturning to the topic, that operation can be allowed in PG15, having\nbeing granted by superuser using the GRANT SET ON PARMETER command.\n\n=# GRANT SET ON PARAMETER my.username TO r1;\n\nr1=> ALTER ROLE r1 SET my.username = 'hoge_user_x';\n<success>\nr1=> \\c\nr1=> => show my.username;\n my.username \n-------------\n hoge_user_x\n(1 row)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 20 Jul 2022 15:28:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Tue, 19 Jul 2022 09:53:39 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n>> Hm. I would expect ALTER ROLE to store the PGC_SUSET context when executed\n>> by a superuser or a role with privileges via pg_parameter_acl. Storing all\n>> placeholder GUC settings as PGC_USERSET would make things more restrictive\n>> than they are today. For example, it would no longer be possible to apply\n>> any ALTER ROLE settings from superusers for placeholders that later become\n>> custom GUCS.\n\n> Returning to the topic, that operation can be allowed in PG15, having\n> being granted by superuser using the GRANT SET ON PARMETER command.\n\nI think that 13d838815 has completely changed the terms that this\ndiscussion needs to be conducted under. It seems clear to me now\nthat if you want to relax this only-superusers restriction, what\nyou have to do is store the OID of the role that issued ALTER ROLE/DB SET,\nand then apply the same checks that would be used in the ordinary case\nwhere a placeholder is being filled in after being set intra-session.\nThat is, we'd no longer assume that a value coming from pg_db_role_setting\nwas set with superuser privileges, but we'd know exactly who did set it.\n\nThis might also tie into Nathan's question in another thread about\nexactly what permissions should be required to issue ALTER ROLE/DB SET.\nIn particular I'm wondering if different permissions should be needed to\noverride an existing entry than if there is no existing entry. 
If not,\nwe could find ourselves downgrading a superuser-set entry to a\nnon-superuser-set entry, which might have bad consequences later\n(eg, by rendering the entry nonfunctional because when we actually\nload the extension we find out the GUC is SUSET).\n\nPossibly related to this: I felt while working on 13d838815 that\nPGC_SUSET and PGC_SU_BACKEND should be usable as GucContext\nvalues for GUC variables, indicating that the GUC requires special\nprivileges to be set, but we should no longer use them as passed-in\nGucContext values. That is, we should remove privilege tests from\nthe call sites, like this:\n\n (void) set_config_option(stmt->name,\n ExtractSetVariableArgs(stmt),\n- (superuser() ? PGC_SUSET : PGC_USERSET),\n+ PGC_USERSET,\n PGC_S_SESSION,\n action, true, 0, false);\n\nand instead put that behavior inside set_config_option_ext, which\nwould want to look at superuser_arg(srole) instead, and indeed might\nnot need to do anything because pg_parameter_aclcheck would subsume\nthe test. I didn't pursue this further because it wasn't essential\nto fixing the bug. But it seems relevant here, because that line of\nthought leads to the conclusion that storing PGC_SUSET vs PGC_USERSET\nis entirely the wrong approach.\n\nThere is a bunch of infrastructure work that has to be done if anyone\nwants to make this happen:\n\n* redesign physical representation of pg_db_role_setting\n\n* be sure to clean up if a role mentioned in pg_db_role_setting is dropped\n\n* pg_dump would need to be taught to dump the state of affairs correctly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Jul 2022 11:50:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 11:50:10AM -0400, Tom Lane wrote:\n> I think that 13d838815 has completely changed the terms that this\n> discussion needs to be conducted under. It seems clear to me now\n> that if you want to relax this only-superusers restriction, what\n> you have to do is store the OID of the role that issued ALTER ROLE/DB SET,\n> and then apply the same checks that would be used in the ordinary case\n> where a placeholder is being filled in after being set intra-session.\n> That is, we'd no longer assume that a value coming from pg_db_role_setting\n> was set with superuser privileges, but we'd know exactly who did set it.\n\nI was imagining that the permissions checks would apply at ALTER ROLE/DB\nSET time, not at login time. Otherwise, changing a role's privileges might\nimpact other roles' parameters, and it's not clear (at least to me) what\nshould happen when the role is dropped. Another reason I imagined it this\nway is because that's basically how it works today. We assume that the\npg_db_role_setting entry was added by a superuser, but we don't check that\nthe user that ran ALTER ROLE/DB SET is still superuser every time you log\nin.\n\n> This might also tie into Nathan's question in another thread about\n> exactly what permissions should be required to issue ALTER ROLE/DB SET.\n> In particular I'm wondering if different permissions should be needed to\n> override an existing entry than if there is no existing entry. 
If not,\n> we could find ourselves downgrading a superuser-set entry to a\n> non-superuser-set entry, which might have bad consequences later\n> (eg, by rendering the entry nonfunctional because when we actually\n> load the extension we find out the GUC is SUSET).\n\nYeah, this is why I suggested storing something that equates to PGC_SUSET\nany time a role is superuser or has grantable GUC permissions.\n\n> Possibly related to this: I felt while working on 13d838815 that\n> PGC_SUSET and PGC_SU_BACKEND should be usable as GucContext\n> values for GUC variables, indicating that the GUC requires special\n> privileges to be set, but we should no longer use them as passed-in\n> GucContext values. That is, we should remove privilege tests from\n> the call sites, like this:\n> \n> (void) set_config_option(stmt->name,\n> ExtractSetVariableArgs(stmt),\n> - (superuser() ? PGC_SUSET : PGC_USERSET),\n> + PGC_USERSET,\n> PGC_S_SESSION,\n> action, true, 0, false);\n> \n> and instead put that behavior inside set_config_option_ext, which\n> would want to look at superuser_arg(srole) instead, and indeed might\n> not need to do anything because pg_parameter_aclcheck would subsume\n> the test. I didn't pursue this further because it wasn't essential\n> to fixing the bug. But it seems relevant here, because that line of\n> thought leads to the conclusion that storing PGC_SUSET vs PGC_USERSET\n> is entirely the wrong approach.\n\nCouldn't ProcessGUCArray() use set_config_option_ext() with the context\nindicated by pg_db_role_setting? Also, instead of using PGC_USERSET in all\nthe set_config_option() call sites, shouldn't we remove the \"context\"\nargument altogether? I am likely misunderstanding your proposal, but while\nI think simplifying set_config_option() is worthwhile, I don't see why it\nwould preclude storing the context in pg_db_role_setting.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 21 Jul 2022 10:56:41 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 10:56:41AM -0700, Nathan Bossart wrote:\n> Couldn't ProcessGUCArray() use set_config_option_ext() with the context\n> indicated by pg_db_role_setting? Also, instead of using PGC_USERSET in all\n> the set_config_option() call sites, shouldn't we remove the \"context\"\n> argument altogether? I am likely misunderstanding your proposal, but while\n> I think simplifying set_config_option() is worthwhile, I don't see why it\n> would preclude storing the context in pg_db_role_setting.\n\nThis thread has remained idle for a bit more than two months, so I\nhave marked its CF entry as returned with feedback.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 14:48:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "Hi!\n\nI'd like to resume this discussion.\n\nOn Wed, Jul 20, 2022 at 6:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > At Tue, 19 Jul 2022 09:53:39 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in\n> >> Hm. I would expect ALTER ROLE to store the PGC_SUSET context when executed\n> >> by a superuser or a role with privileges via pg_parameter_acl. Storing all\n> >> placeholder GUC settings as PGC_USERSET would make things more restrictive\n> >> than they are today. For example, it would no longer be possible to apply\n> >> any ALTER ROLE settings from superusers for placeholders that later become\n> >> custom GUCS.\n>\n> > Returning to the topic, that operation can be allowed in PG15, having\n> > being granted by superuser using the GRANT SET ON PARMETER command.\n>\n> I think that 13d838815 has completely changed the terms that this\n> discussion needs to be conducted under. It seems clear to me now\n> that if you want to relax this only-superusers restriction, what\n> you have to do is store the OID of the role that issued ALTER ROLE/DB SET,\n> and then apply the same checks that would be used in the ordinary case\n> where a placeholder is being filled in after being set intra-session.\n> That is, we'd no longer assume that a value coming from pg_db_role_setting\n> was set with superuser privileges, but we'd know exactly who did set it.\n\nThis makes sense. But do we really need to store the OID of the role?\n validate_option_array_item() already checks if the placeholder option\npasses validation for PGC_SUSET. So, we can just save a flag\nindicating that this check was not successful. If so, then the value\nstored can be only used for PGC_USERSET. Do you think this would be\ncorrect?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 19 Nov 2022 00:26:27 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> This makes sense. But do we really need to store the OID of the role?\n> validate_option_array_item() already checks if the placeholder option\n> passes validation for PGC_SUSET. So, we can just save a flag\n> indicating that this check was not successful. If so, then the value\n> stored can be only used for PGC_USERSET. Do you think this would be\n> correct?\n\nMeh ... doesn't seem like much of an improvement. You still need\nto store something that's not there now. This also seems to require\nsome shaky assumptions about decisions having been made when storing\nstill being valid later on. Given the possibility of granting or\nrevoking permissions for SET, I think we don't really want it to act\nthat way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 18 Nov 2022 16:33:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "... BTW, re-reading the commit message for a0ffa885e:\n\n One caveat is that PGC_USERSET GUCs are unaffected by the SET privilege\n --- one could wish that those were handled by a revocable grant to\n PUBLIC, but they are not, because we couldn't make it robust enough\n for GUCs defined by extensions.\n\nit suddenly struck me to wonder if the later 13d838815 changed the\nsituation enough to allow revisiting that problem, and/or if storing\nthe source role's OID in pg_db_role_setting would help.\n\nI don't immediately recall all the problems that led us to leave USERSET\nGUCs out of the feature, so maybe this is nuts; but maybe it isn't.\nIt'd be worth considering if we're trying to improve matters here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 18 Nov 2022 16:41:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Sat, Nov 19, 2022 at 12:33 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > This makes sense. But do we really need to store the OID of the role?\n> > validate_option_array_item() already checks if the placeholder option\n> > passes validation for PGC_SUSET. So, we can just save a flag\n> > indicating that this check was not successful. If so, then the value\n> > stored can be only used for PGC_USERSET. Do you think this would be\n> > correct?\n>\n> Meh ... doesn't seem like much of an improvement. You still need\n> to store something that's not there now.\n\nYes, but it wouldn't be needed to track dependencies of pg_role\nmentions in pg_db_role_setting. That seems to be a significant\nsimplification.\n\n> This also seems to require\n> some shaky assumptions about decisions having been made when storing\n> still being valid later on. Given the possibility of granting or\n> revoking permissions for SET, I think we don't really want it to act\n> that way.\n\nYes, it might be shaky. Consider user sets parameter\npg_db_role_setting, and that appears to be capable only for\nPGC_USERSET. Next this user gets the SET permissions. Then this\nparameter needs to be set again in order for the new permission to\ntake effect.\n\nBut consider the other side. How should we handle stored OID of a\nrole? Should the privilege checking be moved from \"set time\" to \"run\ntime\"? Therefore, revoking SET permission from role may affect\nexisting parameters in pg_db_role_setting. It feels like revoke of\nSET permission also aborts changes previously made with that\npermission. This is not how we normally do, and that seems confusing.\n\nI think if we implement the flag and make it user-visible, e.g.\nimplement something like \"ALTER ROLE ... SET ... USERSET;\", then it\nmight be the lesser confusing option.\n\nThoughts?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 19 Nov 2022 03:56:35 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Sat, Nov 19, 2022 at 12:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> ... BTW, re-reading the commit message for a0ffa885e:\n>\n> One caveat is that PGC_USERSET GUCs are unaffected by the SET privilege\n> --- one could wish that those were handled by a revocable grant to\n> PUBLIC, but they are not, because we couldn't make it robust enough\n> for GUCs defined by extensions.\n>\n> it suddenly struck me to wonder if the later 13d838815 changed the\n> situation enough to allow revisiting that problem, and/or if storing\n> the source role's OID in pg_db_role_setting would help.\n>\n> I don't immediately recall all the problems that led us to leave USERSET\n> GUCs out of the feature, so maybe this is nuts; but maybe it isn't.\n> It'd be worth considering if we're trying to improve matters here.\n\nI think if we implement the user-visible USERSET flag for ALTER ROLE,\nthen we might just check permissions for such parameters from the\ntarget role.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 19 Nov 2022 04:02:04 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Sat, Nov 19, 2022 at 4:02 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Sat, Nov 19, 2022 at 12:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > ... BTW, re-reading the commit message for a0ffa885e:\n> >\n> > One caveat is that PGC_USERSET GUCs are unaffected by the SET privilege\n> > --- one could wish that those were handled by a revocable grant to\n> > PUBLIC, but they are not, because we couldn't make it robust enough\n> > for GUCs defined by extensions.\n> >\n> > it suddenly struck me to wonder if the later 13d838815 changed the\n> > situation enough to allow revisiting that problem, and/or if storing\n> > the source role's OID in pg_db_role_setting would help.\n> >\n> > I don't immediately recall all the problems that led us to leave USERSET\n> > GUCs out of the feature, so maybe this is nuts; but maybe it isn't.\n> > It'd be worth considering if we're trying to improve matters here.\n>\n> I think if we implement the user-visible USERSET flag for ALTER ROLE,\n> then we might just check permissions for such parameters from the\n> target role.\n\nI've drafted a patch implementing ALTER ROLE ... SET ... TO ... USER SET syntax.\n\nThese options are working only for USERSET GUC variables, but require\nless privileges to set. I think there is no problem to implement\n\nAlso it seems that this approach doesn't conflict with future\nprivileges for USERSET GUCs [1]. I expect that USERSET GUCs should be\navailable unless explicitly REVOKEd. That mean we should be able to\ncheck those privileges during ALTER ROLE.\n\nOpinions on the patch draft?\n\nLinks\n1. https://mail.google.com/mail/u/0/?ik=a20b091faa&view=om&permmsgid=msg-f%3A1749871710745577015\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Sun, 20 Nov 2022 20:48:04 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": ".On Sun, Nov 20, 2022 at 8:48 PM Alexander Korotkov\n<aekorotkov@gmail.com> wrote:\n> I've drafted a patch implementing ALTER ROLE ... SET ... TO ... USER SET syntax.\n>\n> These options are working only for USERSET GUC variables, but require\n> less privileges to set. I think there is no problem to implement\n>\n> Also it seems that this approach doesn't conflict with future\n> privileges for USERSET GUCs [1]. I expect that USERSET GUCs should be\n> available unless explicitly REVOKEd. That mean we should be able to\n> check those privileges during ALTER ROLE.\n>\n> Opinions on the patch draft?\n>\n> Links\n> 1. https://mail.google.com/mail/u/0/?ik=a20b091faa&view=om&permmsgid=msg-f%3A1749871710745577015\n\nUh, sorry for the wrong link. I meant\nhttps://www.postgresql.org/message-id/2271988.1668807706@sss.pgh.pa.us\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 20 Nov 2022 20:50:19 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "Hey Alexander,\n\nLooks like your latest patch addresses the original issue I posted!\n\nSo now I can create a placeholder with the USERSET modifier without a\nsuperuser, while non-USERSET placeholders still require superuser:\n\n```sql\ncreate role foo noinherit;\nset role to foo;\n\nalter role foo set prefix.bar to true user set;\nALTER ROLE\n\nalter role foo set prefix.baz to true;\nERROR: permission denied to set parameter \"prefix.baz\"\n\nset role to postgres;\nalter role foo set prefix.baz to true;\nALTER ROLE\n```\n\nAlso USERSET gucs are marked(`(u)`) on `pg_db_role_setting`:\n\n```sql\nselect * from pg_db_role_setting ;\n setdatabase | setrole | setconfig\n-------------+---------+--------------------------------------\n 0 | 16384 | {prefix.bar(u)=true,prefix.baz=true}\n```\n\nWhich I guess avoids the need for adding columns to `pg_catalog` and makes\nthe \"fix\" simpler.\n\nSo from my side this all looks good!\n\nBest regards,\nSteve\n\nOn Sun, 20 Nov 2022 at 12:50, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> .On Sun, Nov 20, 2022 at 8:48 PM Alexander Korotkov\n> <aekorotkov@gmail.com> wrote:\n> > I've drafted a patch implementing ALTER ROLE ... SET ... TO ... USER SET\n> syntax.\n> >\n> > These options are working only for USERSET GUC variables, but require\n> > less privileges to set. I think there is no problem to implement\n> >\n> > Also it seems that this approach doesn't conflict with future\n> > privileges for USERSET GUCs [1]. I expect that USERSET GUCs should be\n> > available unless explicitly REVOKEd. That mean we should be able to\n> > check those privileges during ALTER ROLE.\n> >\n> > Opinions on the patch draft?\n> >\n> > Links\n> > 1.\n> https://mail.google.com/mail/u/0/?ik=a20b091faa&view=om&permmsgid=msg-f%3A1749871710745577015\n>\n> Uh, sorry for the wrong link. 
I meant\n> https://www.postgresql.org/message-id/2271988.1668807706@sss.pgh.pa.us\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>\n\nHey Alexander,Looks like your latest patch addresses the original issue I posted! So now I can create a placeholder with the USERSET modifier without a superuser, while non-USERSET placeholders still require superuser:```sqlcreate role foo noinherit;set role to foo;alter role foo set prefix.bar to true user set;ALTER ROLEalter role foo set prefix.baz to true;ERROR: permission denied to set parameter \"prefix.baz\"set role to postgres;alter role foo set prefix.baz to true;ALTER ROLE```Also USERSET gucs are marked(`(u)`) on `pg_db_role_setting`:```sqlselect * from pg_db_role_setting ; setdatabase | setrole | setconfig-------------+---------+-------------------------------------- 0 | 16384 | {prefix.bar(u)=true,prefix.baz=true}```Which I guess avoids the need for adding columns to `pg_catalog` and makes the \"fix\" simpler.So from my side this all looks good!Best regards,SteveOn Sun, 20 Nov 2022 at 12:50, Alexander Korotkov <aekorotkov@gmail.com> wrote:.On Sun, Nov 20, 2022 at 8:48 PM Alexander Korotkov\n<aekorotkov@gmail.com> wrote:\n> I've drafted a patch implementing ALTER ROLE ... SET ... TO ... USER SET syntax.\n>\n> These options are working only for USERSET GUC variables, but require\n> less privileges to set. I think there is no problem to implement\n>\n> Also it seems that this approach doesn't conflict with future\n> privileges for USERSET GUCs [1]. I expect that USERSET GUCs should be\n> available unless explicitly REVOKEd. That mean we should be able to\n> check those privileges during ALTER ROLE.\n>\n> Opinions on the patch draft?\n>\n> Links\n> 1. https://mail.google.com/mail/u/0/?ik=a20b091faa&view=om&permmsgid=msg-f%3A1749871710745577015\n\nUh, sorry for the wrong link. I meant\nhttps://www.postgresql.org/message-id/2271988.1668807706@sss.pgh.pa.us\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Tue, 22 Nov 2022 17:53:24 -0500",
"msg_from": "Steve Chavez <steve@supabase.io>",
"msg_from_op": true,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Wed, Nov 23, 2022 at 1:53 AM Steve Chavez <steve@supabase.io> wrote:\n> So from my side this all looks good!\n\nThank you for your feedback.\n\nThe next revision of the patch is attached. It contains code\nimprovements, comments and documentation. I'm going to also write\nsode tests. pg_db_role_setting doesn't seem to be well-covered with\ntests. I will probably need to write a new module into\nsrc/tests/modules to check now placeholders interacts with dynamically\ndefined GUCs.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 1 Dec 2022 06:14:37 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Thu, Dec 1, 2022 at 6:14 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Wed, Nov 23, 2022 at 1:53 AM Steve Chavez <steve@supabase.io> wrote:\n> > So from my side this all looks good!\n>\n> Thank you for your feedback.\n>\n> The next revision of the patch is attached. It contains code\n> improvements, comments and documentation. I'm going to also write\n> sode tests. pg_db_role_setting doesn't seem to be well-covered with\n> tests. I will probably need to write a new module into\n> src/tests/modules to check now placeholders interacts with dynamically\n> defined GUCs.\n\nAnother revision of patch is attached. It's fixed now that USER SET\nvalues can't be used for PGC_SUSET parameters. Tests are added. That\nrequire new module test_pg_db_role_setting to check dynamically\ndefined GUCs.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 5 Dec 2022 06:32:47 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "> On Thu, Dec 1, 2022 at 6:14 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Wed, Nov 23, 2022 at 1:53 AM Steve Chavez <steve@supabase.io> wrote:\n> > > So from my side this all looks good!\n> >\n> > Thank you for your feedback.\n> >\n> > The next revision of the patch is attached. It contains code\n> > improvements, comments and documentation. I'm going to also write\n> > sode tests. pg_db_role_setting doesn't seem to be well-covered with\n> > tests. I will probably need to write a new module into\n> > src/tests/modules to check now placeholders interacts with dynamically\n> > defined GUCs.\n>\n> Another revision of patch is attached. It's fixed now that USER SET\n> values can't be used for PGC_SUSET parameters. Tests are added. That\n> require new module test_pg_db_role_setting to check dynamically\n> defined GUCs.\n\nI've looked through the last version of a patch. The tests in v3\nfailed due to naming mismatches. I fixed this in v4 (PFA).\nThe other thing that may seem unexpected: is whether the value should\napply to the ordinary user only, encoded in the parameter name. The\npro of this is that it doesn't break catalog compatibility by a\nseparate field for GUC permissions a concept that doesn't exist today\n(and maybe not needed at all). Also, it makes the patch more\nminimalistic in the code. This is also fully compatible with the\nprevious parameters naming due to parentheses being an unsupported\nsymbol for the parameter name.\n\nI've also tried to revise the comments and docs a little bit to\nreflect the changes.\nThe CI-enabled build of patch v4 for reference is at\nhttps://github.com/pashkinelfe/postgres/tree/placeholders-in-alter-role-v4\n\nOverall the patch looks useful and good enough to be committed.\n\nKind regards,\nPavel Borisov,\nSupabase",
"msg_date": "Mon, 5 Dec 2022 15:18:07 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "After posting the patch I've found my own typo in docs. So corrected\nit in v5 (PFA).\n\nRegards,\nPavel.",
"msg_date": "Mon, 5 Dec 2022 15:26:29 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 2:27 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> After posting the patch I've found my own typo in docs. So corrected\n> it in v5 (PFA).\n\nThe new revision of the patch is attached.\n\nI've removed the mention of \"(s)\" suffix from the \"Server\nConfiguration\" docs section. I think it might be confusing since this\nsuffix isn't a part of the variable name. It is only used for storage.\nInstead, I've added the description of this suffix to the catalog\nstructure description and psql documentation.\n\nAlso, I've added psql tab completion for the USER SET flag, and made\nsome enhancements to comments, tests, and commit message.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 5 Dec 2022 16:51:01 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "Hi, Alexander!\n\nOn Mon, 5 Dec 2022 at 17:51, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Mon, Dec 5, 2022 at 2:27 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > After posting the patch I've found my own typo in docs. So corrected\n> > it in v5 (PFA).\n>\n> The new revision of the patch is attached.\n>\n> I've removed the mention of \"(s)\" suffix from the \"Server\n> Configuration\" docs section. I think it might be confusing since this\n> suffix isn't a part of the variable name. It is only used for storage.\n> Instead, I've added the description of this suffix to the catalog\n> structure description and psql documentation.\n>\n> Also, I've added psql tab completion for the USER SET flag, and made\n> some enhancements to comments, tests, and commit message.\n\nThe changes in expected test results are somehow lost in v6, I've\ncorrected them in v7.\nOtherwise, I've looked through the updated patch and it is good.\n\nRegards,\nPavel.",
"msg_date": "Mon, 5 Dec 2022 18:42:35 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "I couldn't find any discussion of the idea of adding \"(s)\" to the\nvariable name in order to mark the variable userset in the catalog, and\nI have to admit I find it a bit strange. Are we really agreed that\nthat's the way to proceed?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Having your biases confirmed independently is how scientific progress is\nmade, and hence made our great society what it is today\" (Mary Gardiner)\n\n\n",
"msg_date": "Mon, 5 Dec 2022 18:11:20 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I couldn't find any discussion of the idea of adding \"(s)\" to the\n> variable name in order to mark the variable userset in the catalog, and\n> I have to admit I find it a bit strange. Are we really agreed that\n> that's the way to proceed?\n\nI hadn't been paying close attention to this thread, sorry.\n\nI agree that that seems like a very regrettable choice,\nespecially if you anticipate having to bump catversion anyway.\nBetter to add a bool column to the catalog.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Dec 2022 12:18:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > I couldn't find any discussion of the idea of adding \"(s)\" to the\n> > variable name in order to mark the variable userset in the catalog, and\n> > I have to admit I find it a bit strange. Are we really agreed that\n> > that's the way to proceed?\n>\n> I hadn't been paying close attention to this thread, sorry.\n>\n> I agree that that seems like a very regrettable choice,\n> especially if you anticipate having to bump catversion anyway.\n\nI totally understand that this change requires a catversion bump.\nI've reflected this in the commit message.\n\n> Better to add a bool column to the catalog.\n\nWhat about adding a boolean array to the pg_db_role_setting? So,\npg_db_role_setting would have the following columns.\n * setdatabase oid\n * setrole oid\n * setconfig text[]\n * setuser bool[]\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 5 Dec 2022 22:32:39 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 10:32 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Mon, Dec 5, 2022 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > I couldn't find any discussion of the idea of adding \"(s)\" to the\n> > > variable name in order to mark the variable userset in the catalog, and\n> > > I have to admit I find it a bit strange. Are we really agreed that\n> > > that's the way to proceed?\n> >\n> > I hadn't been paying close attention to this thread, sorry.\n> >\n> > I agree that that seems like a very regrettable choice,\n> > especially if you anticipate having to bump catversion anyway.\n>\n> I totally understand that this change requires a catversion bump.\n> I've reflected this in the commit message.\n>\n> > Better to add a bool column to the catalog.\n>\n> What about adding a boolean array to the pg_db_role_setting? So,\n> pg_db_role_setting would have the following columns.\n> * setdatabase oid\n> * setrole oid\n> * setconfig text[]\n> * setuser bool[]\n\nThe revised patch implements this way for storage USER SET flag. I\nthink it really became more structured and less cumbersome.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Tue, 6 Dec 2022 18:00:54 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "Hi, Alexander!\n\nOn Tue, 6 Dec 2022 at 19:01, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Mon, Dec 5, 2022 at 10:32 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Mon, Dec 5, 2022 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > > I couldn't find any discussion of the idea of adding \"(s)\" to the\n> > > > variable name in order to mark the variable userset in the catalog, and\n> > > > I have to admit I find it a bit strange. Are we really agreed that\n> > > > that's the way to proceed?\n> > >\n> > > I hadn't been paying close attention to this thread, sorry.\n> > >\n> > > I agree that that seems like a very regrettable choice,\n> > > especially if you anticipate having to bump catversion anyway.\n> >\n> > I totally understand that this change requires a catversion bump.\n> > I've reflected this in the commit message.\n> >\n> > > Better to add a bool column to the catalog.\n> >\n> > What about adding a boolean array to the pg_db_role_setting? So,\n> > pg_db_role_setting would have the following columns.\n> > * setdatabase oid\n> > * setrole oid\n> > * setconfig text[]\n> > * setuser bool[]\n>\n> The revised patch implements this way for storage USER SET flag.\n> think it really became more structured and less cumbersome.\n\nI agree that the patch became more structured and the complications\nfor string parameter suffixing have gone away. I've looked it through\nand don't see problems with it. The only two-lines fix regarding\nvariable initializing may be relevant (see v9). Tests pass and CI is\nalso happy with it. I'd like to set it ready for committer if no\nobjections.\n\nRegards,\nPavel Borisov,\nSupabase.",
"msg_date": "Wed, 7 Dec 2022 02:26:55 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 1:28 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> On Tue, 6 Dec 2022 at 19:01, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >\n> > On Mon, Dec 5, 2022 at 10:32 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > > On Mon, Dec 5, 2022 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > > > I couldn't find any discussion of the idea of adding \"(s)\" to the\n> > > > > variable name in order to mark the variable userset in the catalog, and\n> > > > > I have to admit I find it a bit strange. Are we really agreed that\n> > > > > that's the way to proceed?\n> > > >\n> > > > I hadn't been paying close attention to this thread, sorry.\n> > > >\n> > > > I agree that that seems like a very regrettable choice,\n> > > > especially if you anticipate having to bump catversion anyway.\n> > >\n> > > I totally understand that this change requires a catversion bump.\n> > > I've reflected this in the commit message.\n> > >\n> > > > Better to add a bool column to the catalog.\n> > >\n> > > What about adding a boolean array to the pg_db_role_setting? So,\n> > > pg_db_role_setting would have the following columns.\n> > > * setdatabase oid\n> > > * setrole oid\n> > > * setconfig text[]\n> > > * setuser bool[]\n> >\n> > The revised patch implements this way for storage USER SET flag.\n> > think it really became more structured and less cumbersome.\n>\n> I agree that the patch became more structured and the complications\n> for string parameter suffixing have gone away. I've looked it through\n> and don't see problems with it. The only two-lines fix regarding\n> variable initializing may be relevant (see v9). Tests pass and CI is\n> also happy with it. I'd like to set it ready for committer if no\n> objections.\n\nThank you, Pavel.\nI've made few minor improvements in the docs and comments.\nI'm going to push this if no objections.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 7 Dec 2022 16:36:02 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 4:36 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Wed, Dec 7, 2022 at 1:28 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > On Tue, 6 Dec 2022 at 19:01, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > >\n> > > On Mon, Dec 5, 2022 at 10:32 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > > > On Mon, Dec 5, 2022 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > > > > I couldn't find any discussion of the idea of adding \"(s)\" to the\n> > > > > > variable name in order to mark the variable userset in the catalog, and\n> > > > > > I have to admit I find it a bit strange. Are we really agreed that\n> > > > > > that's the way to proceed?\n> > > > >\n> > > > > I hadn't been paying close attention to this thread, sorry.\n> > > > >\n> > > > > I agree that that seems like a very regrettable choice,\n> > > > > especially if you anticipate having to bump catversion anyway.\n> > > >\n> > > > I totally understand that this change requires a catversion bump.\n> > > > I've reflected this in the commit message.\n> > > >\n> > > > > Better to add a bool column to the catalog.\n> > > >\n> > > > What about adding a boolean array to the pg_db_role_setting? So,\n> > > > pg_db_role_setting would have the following columns.\n> > > > * setdatabase oid\n> > > > * setrole oid\n> > > > * setconfig text[]\n> > > > * setuser bool[]\n> > >\n> > > The revised patch implements this way for storage USER SET flag.\n> > > think it really became more structured and less cumbersome.\n> >\n> > I agree that the patch became more structured and the complications\n> > for string parameter suffixing have gone away. I've looked it through\n> > and don't see problems with it. The only two-lines fix regarding\n> > variable initializing may be relevant (see v9). Tests pass and CI is\n> > also happy with it. 
I'd like to set it ready for committer if no\n> > objections.\n>\n> Thank you, Pavel.\n> I've made few minor improvements in the docs and comments.\n> I'm going to push this if no objections.\n\nPushed, thanks to everyone!\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 9 Dec 2022 13:23:56 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "Hi, Alexander!\n> Pushed, thanks to everyone!\n\nThank you!\nI've found a minor thing that makes the new test fail on sifaka and\nlongfin build farm animals. If role names in regression don't start\nwith \"regress_\" this invokes a warning. I've consulted in other\nmodules regression tests e.g. in test_rls_hooks and changed the role\nnaming accordingly. In essence, a fix is just a batch replace in test\nSQL and expected results.\n\nRegards,\nPavel.",
"msg_date": "Fri, 9 Dec 2022 15:52:13 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "Hi, Alexander!\n> Hi, Alexander!\n> > Pushed, thanks to everyone!\n>\n> Thank you!\n> I've found a minor thing that makes the new test fail on sifaka and\n> longfin build farm animals. If role names in regression don't start\n> with \"regress_\" this invokes a warning. I've consulted in other\n> modules regression tests e.g. in test_rls_hooks and changed the role\n> naming accordingly. In essence, a fix is just a batch replace in test\n> SQL and expected results.\n\nI see you already pushed the fix for this in beecbe8e5001. So no\nworries, it is not needed anymore.\n\nRegards,\nPavel.\n\n\n",
"msg_date": "Fri, 9 Dec 2022 15:56:05 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Fri, Dec 9, 2022 at 2:57 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > > Pushed, thanks to everyone!\n> >\n> > Thank you!\n> > I've found a minor thing that makes the new test fail on sifaka and\n> > longfin build farm animals. If role names in regression don't start\n> > with \"regress_\" this invokes a warning. I've consulted in other\n> > modules regression tests e.g. in test_rls_hooks and changed the role\n> > naming accordingly. In essence, a fix is just a batch replace in test\n> > SQL and expected results.\n>\n> I see you already pushed the fix for this in beecbe8e5001. So no\n> worries, it it not needed anymore.\n\nOK. Thank you for keeping eyes on buildfarm.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 9 Dec 2022 15:30:14 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Fri, Dec 09, 2022 at 01:23:56PM +0300, Alexander Korotkov wrote:\n> Pushed, thanks to everyone!\n\nFYI: this causes meson test running (\"installcheck\") fail when run\ntwice. I guess that's expected to work, per:\n\nb62303794efd97f2afb55f1e1b82fffae2cf8a2d\nf31111bbe81db0e84fb486c6423a234c47091b30\n6928484bda454f9ab2456d385b2d317f18b6bf1a\n072710dff3eef4540f1c64d07890eb128535e212\nb22b770683806db0a1c0a52a4601a3b6755891e0\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 27 Dec 2022 00:54:56 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> FYI: this causes meson test running (\"installcheck\") fail when run\n> twice. I guess that's expected to work, per:\n\nWe do indeed expect that to work ... but I don't see any problem\nwith repeat \"make installcheck\" on HEAD. Can you provide more\ndetail about what you're seeing?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Dec 2022 01:58:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Tue, Dec 27, 2022 at 01:58:14AM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > FYI: this causes meson test running (\"installcheck\") fail when run\n> > twice. I guess that's expected to work, per:\n> \n> We do indeed expect that to work ... but I don't see any problem\n> with repeat \"make installcheck\" on HEAD. Can you provide more\n> detail about what you're seeing?\n\nThis fails when run more than once:\ntime meson test --setup running --print test_pg_db_role_setting-running/regress\n\n@@ -1,12 +1,13 @@\n CREATE EXTENSION test_pg_db_role_setting;\n CREATE USER regress_super_user SUPERUSER;\n+ERROR: role \"regress_super_user\" already exists\n CREATE USER regress_regular_user;\n+ERROR: role \"regress_regular_user\" already exists\n...\n\nIt didn't fail for you because it says:\n\n./src/test/modules/test_pg_db_role_setting/Makefile\n+# disable installcheck for now\n+NO_INSTALLCHECK = 1\n\nIt also says:\n+# and also for now force NO_LOCALE and UTF8\n+ENCODING = UTF8\n+NO_LOCALE = 1\n\nwhich was evidently copied from the \"oat\" tests, which have said that\nsince March (5b29a9f77, 7c51b7f7c).\n\nIt fails the same way with \"make\" if you change it to not disable\ninstallcheck:\n\nEXTRA_REGRESS_OPTS=\"--bindir=`pwd`/tmp_install/usr/local/pgsql/bin\" PGHOST=/tmp make installcheck -C src/test/modules/test_pg_db_role_setting\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 27 Dec 2022 19:06:55 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> This fails when run more than once:\n> time meson test --setup running --print test_pg_db_role_setting-running/regress\n\nAh.\n\n> It didn't fail for you because it says:\n> ./src/test/modules/test_pg_db_role_setting/Makefile\n> +# disable installcheck for now\n> +NO_INSTALLCHECK = 1\n\nSo ... exactly why is the meson infrastructure failing to honor that?\nThis test looks sufficiently careless about its side-effects that\nI completely agree with the decision to not run it in pre-existing\ninstallations. Failing to drop a created superuser is just one of\nits risk factors --- it also leaves around pg_db_role_setting entries.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Dec 2022 23:29:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Tue, Dec 27, 2022 at 11:29:40PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > This fails when run more than once:\n> > time meson test --setup running --print test_pg_db_role_setting-running/regress\n> \n> Ah.\n> \n> > It didn't fail for you because it says:\n> > ./src/test/modules/test_pg_db_role_setting/Makefile\n> > +# disable installcheck for now\n> > +NO_INSTALLCHECK = 1\n> \n> So ... exactly why is the meson infrastructure failing to honor that?\n> This test looks sufficiently careless about its side-effects that\n> I completely agree with the decision to not run it in pre-existing\n> installations. Failing to drop a created superuser is just one of\n> its risk factors --- it also leaves around pg_db_role_setting entries.\n\nMeson doesn't try to parse the Makefiles (like the MSVC scripts) but\n(since 3f0e786ccb) has its own implementation, which involves setting\n'runningcheck': false.\n\n096dd80f3c seems to have copied the NO_INSTALLCHECK from oat's makefile,\nbut didn't copy \"runningcheck\" from oat's meson.build (but did copy its\nregress_args).\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 28 Dec 2022 11:28:07 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "Hi, Justin!\n\nOn Wed, 28 Dec 2022 at 21:28, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Dec 27, 2022 at 11:29:40PM -0500, Tom Lane wrote:\n> > Justin Pryzby <pryzby@telsasoft.com> writes:\n> > > This fails when run more than once:\n> > > time meson test --setup running --print test_pg_db_role_setting-running/regress\n> >\n> > Ah.\n> >\n> > > It didn't fail for you because it says:\n> > > ./src/test/modules/test_pg_db_role_setting/Makefile\n> > > +# disable installcheck for now\n> > > +NO_INSTALLCHECK = 1\n> >\n> > So ... exactly why is the meson infrastructure failing to honor that?\n> > This test looks sufficiently careless about its side-effects that\n> > I completely agree with the decision to not run it in pre-existing\n> > installations. Failing to drop a created superuser is just one of\n> > its risk factors --- it also leaves around pg_db_role_setting entries.\n>\n> Meson doesn't try to parse the Makefiles (like the MSVC scripts) but\n> (since 3f0e786ccb) has its own implementation, which involves setting\n> 'runningcheck': false.\n>\n> 096dd80f3c seems to have copied the NO_INSTALLCHECK from oat's makefile,\n> but didn't copy \"runningcheck\" from oat's meson.build (but did copy its\n> regress_args).\n\nI completely agree with your analysis. Fixes by 3f0e786ccbf5 to oat\nand the other modules tests came just a couple of days before\ncommitting the main pg_db_role_setting commit 096dd80f3c so they were\nforgotten to be included into it.\n\nI support committing the same fix to pg_db_role_setting test and added\nthis minor fix as a patch to this thread.\n\nKind regards, and happy New year!\nPavel Borisov,\nSupabase.",
"msg_date": "Mon, 2 Jan 2023 12:53:30 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "Justin, Tom, Pavel, thank you for catching this.\n\nOn Mon, Jan 2, 2023 at 11:54 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> I completely agree with your analysis. Fixes by 3f0e786ccbf5 to oat\n> and the other modules tests came just a couple of days before\n> committing the main pg_db_role_setting commit 096dd80f3c so they were\n> forgotten to be included into it.\n>\n> I support committing the same fix to pg_db_role_setting test and added\n> this minor fix as a patch to this thread.\n\nI'm going to push this if no objections.\n\n> Kind regards, and happy New year!\n\nThanks, and happy New Year to you!\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 2 Jan 2023 18:14:48 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Mon, Jan 02, 2023 at 06:14:48PM +0300, Alexander Korotkov wrote:\n> I'm going to push this if no objections.\n\nI also suggest that meson.build should not copy regress_args.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 2 Jan 2023 09:42:40 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Mon, Jan 2, 2023 at 6:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Mon, Jan 02, 2023 at 06:14:48PM +0300, Alexander Korotkov wrote:\n> > I'm going to push this if no objections.\n>\n> I also suggest that meson.build should not copy regress_args.\n\nGood point, thanks.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Tue, 3 Jan 2023 09:29:00 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Tue, 3 Jan 2023 at 09:29, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Mon, Jan 2, 2023 at 6:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Mon, Jan 02, 2023 at 06:14:48PM +0300, Alexander Korotkov wrote:\n> > > I'm going to push this if no objections.\n> >\n> > I also suggest that meson.build should not copy regress_args.\n>\n> Good point, thanks.\nNice, thanks!\nIsn't there the same reason to remove regress_args from meson.build in\noat's test and possibly from other modules with runningcheck=false?\n\nRegards,\nPavel Borisov\n\n\n",
"msg_date": "Tue, 3 Jan 2023 11:50:23 +0300",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 11:51 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> On Tue, 3 Jan 2023 at 09:29, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >\n> > On Mon, Jan 2, 2023 at 6:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > On Mon, Jan 02, 2023 at 06:14:48PM +0300, Alexander Korotkov wrote:\n> > > > I'm going to push this if no objections.\n> > >\n> > > I also suggest that meson.build should not copy regress_args.\n> >\n> > Good point, thanks.\n> Nice, thanks!\n> Isn't there the same reason to remove regress_args from meson.build in\n> oat's test and possibly from other modules with runningcheck=false?\n\nThis makes sense to me too. I don't see anything specific in oat's\nregression test that requires setting regress_args.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 3 Jan 2023 13:48:45 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "Hi, Alexander!\n\nOn Tue, 3 Jan 2023 at 13:48, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Tue, Jan 3, 2023 at 11:51 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > On Tue, 3 Jan 2023 at 09:29, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > >\n> > > On Mon, Jan 2, 2023 at 6:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > On Mon, Jan 02, 2023 at 06:14:48PM +0300, Alexander Korotkov wrote:\n> > > > > I'm going to push this if no objections.\n> > > >\n> > > > I also suggest that meson.build should not copy regress_args.\n> > >\n> > > Good point, thanks.\n> > Nice, thanks!\n> > Isn't there the same reason to remove regress_args from meson.build in\n> > oat's test and possibly from other modules with runningcheck=false?\n>\n> This makes sense to me too. I don't see anything specific in oat's\n> regression test that requires setting regress_args.\nYes, it seems so.\nRegress args in oat's Makefile are added as a response to buildfarm\nissues by 7c51b7f7cc08. They seem unneeded to be copied into\nmeson.build with runningcheck=false. I may be mistaken but it seems to me\nthat removing regress_args from meson.build with runningcheck=false is\njust to make it neat, not for functionality. So I consider fixing it\nin pg_db_role_setting, oat, or both of them optional.\n\nRegards,\nPavel Borisov.\n\n\n",
"msg_date": "Tue, 3 Jan 2023 14:20:38 +0300",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Tue, Jan 03, 2023 at 02:20:38PM +0300, Pavel Borisov wrote:\n> Hi, Alexander!\n> \n> On Tue, 3 Jan 2023 at 13:48, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >\n> > On Tue, Jan 3, 2023 at 11:51 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > > On Tue, 3 Jan 2023 at 09:29, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > > >\n> > > > On Mon, Jan 2, 2023 at 6:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > > On Mon, Jan 02, 2023 at 06:14:48PM +0300, Alexander Korotkov wrote:\n> > > > > > I'm going to push this if no objections.\n> > > > >\n> > > > > I also suggest that meson.build should not copy regress_args.\n> > > >\n> > > > Good point, thanks.\n> > > Nice, thanks!\n> > > Isn't there the same reason to remove regress_args from meson.build in\n> > > oat's test and possibly from other modules with runningcheck=false?\n> >\n> > This makes sense to me too. I don't see anything specific in oat's\n> > regression test that requires setting regress_args.\n> Yes, it seems so.\n> Regress args in oat's Makefile are added as a response to a buildfarm\n> issues by 7c51b7f7cc08. They seem unneeded to be copied into\n> meson.build with runningcheck=false. I may mistake but it seems to me\n> that removing regress_args from meson.build with runningcheck=false is\n> just to make it neat, not for functionality. 
So I consider fixing it\n> in pg_db_role_setting, oat, or both of them optional.\n\nRight - my suggestion is to \"uncopy\" them from pg_db_role_setting, where\nthey serve no purpose, since they shouldn't have been copied originally.\n\nOn Tue, Jan 03, 2023 at 09:29:00AM +0300, Alexander Korotkov wrote:\n> On Mon, Jan 2, 2023 at 6:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Mon, Jan 02, 2023 at 06:14:48PM +0300, Alexander Korotkov wrote:\n> > > I'm going to push this if no objections.\n> >\n> > I also suggest that meson.build should not copy regress_args.\n> \n> Good point, thanks.\n\nI should've mentioned that the same things should be removed from\nMakefile, too...\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 3 Jan 2023 08:28:56 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Tue, 3 Jan 2023 at 17:28, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Jan 03, 2023 at 02:20:38PM +0300, Pavel Borisov wrote:\n> > Hi, Alexander!\n> >\n> > On Tue, 3 Jan 2023 at 13:48, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > >\n> > > On Tue, Jan 3, 2023 at 11:51 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > > > On Tue, 3 Jan 2023 at 09:29, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Jan 2, 2023 at 6:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > > > On Mon, Jan 02, 2023 at 06:14:48PM +0300, Alexander Korotkov wrote:\n> > > > > > > I'm going to push this if no objections.\n> > > > > >\n> > > > > > I also suggest that meson.build should not copy regress_args.\n> > > > >\n> > > > > Good point, thanks.\n> > > > Nice, thanks!\n> > > > Isn't there the same reason to remove regress_args from meson.build in\n> > > > oat's test and possibly from other modules with runningcheck=false?\n> > >\n> > > This makes sense to me too. I don't see anything specific in oat's\n> > > regression test that requires setting regress_args.\n> > Yes, it seems so.\n> > Regress args in oat's Makefile are added as a response to a buildfarm\n> > issues by 7c51b7f7cc08. They seem unneeded to be copied into\n> > meson.build with runningcheck=false. I may mistake but it seems to me\n> > that removing regress_args from meson.build with runningcheck=false is\n> > just to make it neat, not for functionality. 
So I consider fixing it\n> > in pg_db_role_setting, oat, or both of them optional.\n>\n> Right - my suggestion is to \"uncopy\" them from pg_db_role_setting, where\n> they serve no purpose, since they shouldn't have been copied originally.\n>\n> On Tue, Jan 03, 2023 at 09:29:00AM +0300, Alexander Korotkov wrote:\n> > On Mon, Jan 2, 2023 at 6:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > On Mon, Jan 02, 2023 at 06:14:48PM +0300, Alexander Korotkov wrote:\n> > > > I'm going to push this if no objections.\n> > >\n> > > I also suggest that meson.build should not copy regress_args.\n> >\n> > Good point, thanks.\n>\n> I should've mentioned that the same things should be removed from\n> Makefile, too...\n>\n> --\nThanks, Justin!\nAttached is a new patch accordingly.\n\nRegards,\nPavel Borisov!",
"msg_date": "Tue, 3 Jan 2023 17:40:21 +0300",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 5:28 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Tue, Jan 03, 2023 at 09:29:00AM +0300, Alexander Korotkov wrote:\n> > On Mon, Jan 2, 2023 at 6:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > I also suggest that meson.build should not copy regress_args.\n> >\n> > Good point, thanks.\n>\n> I should've mentioned that the same things should be removed from\n> Makefile, too...\n\nThis makes sense too. See the attached patchset.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Tue, 3 Jan 2023 17:41:06 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 5:41 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> On Tue, 3 Jan 2023 at 17:28, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I should've mentioned that the same things should be removed from\n> > Makefile, too...\n> >\n> Thanks, Justin!\n> Attached is a new patch accordingly.\n\nThank you. I've pushed my version, which is split into two commits.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 5 Jan 2023 13:14:28 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow placeholders in ALTER ROLE w/o superuser"
}
] |
[
{
"msg_contents": "Dear PostgreSQL Hacker Community,\n\nI am facing a tricky bug which makes the Query Planner crashes when using COUNT(*) function.\nWithout any upgrade suddenly a table of a database instance could not be queried this way:\n\nSELECT COUNT(*) FROM items;\n\n-- ERROR: variable not found in subplan target list\n-- SQL state: XX000\n\nMessage and behaviour seem related to the Query Planner:\n\n\nEXPLAIN SELECT COUNT(*) FROM item;\n\n-- ERROR: variable not found in subplan target list\n\n-- SQL state: XX000\n\nLooks like a column name could not be found (see https://github.com/postgres/postgres/blob/ce4f46fdc814eb1b704d81640f6d8f03625d0f53/src/backend/optimizer/plan/setrefs.c#L2967-L2972) in some specific context that is somehow hard to reproduce.\n\nInteresting facts:\n\n\nSELECT COUNT(id) FROM items; -- 213\n\n\n\nSELECT COUNT(*) FROM items WHERE id > 0; -- 213\n\nWork as expected.\n\nI can see that other people are recently facing a similar problem (https://www.postgresql.org/message-id/flat/4c347490-d734-5fdd-d613-1327601b4e7e%40mit.edu).\nIf it is the same bug then it is not related to the PGroonga extension as I don't use it all.\n\nAnyway, the bug is difficult to reproduce on my application.\nAt the time of writing, I could just isolate it on a specific database but I could not draw a MCVE from it.\nI am looking for help to make it reproducible and feed your knowledge database.\n\nMy first guess was to open a post of SO (see for details https://stackoverflow.com/questions/72498741/how-can-i-reproduce-a-database-context-to-debug-a-tricky-postgresql-error-vari), but digging deeper in the investigation it seems it will require people with strong insights on how PostgreSQL actually works under the hood.\nTherefore, I chose this specific mailing list.\n\nThe bug is tricky to reproduce, I could not succeed to replicate elsewhere (dump/restore does not preserve it).\nAnyway it makes my database unusable and looks like a potential bug for your product and 
applications relying on it.\n\nFaulty setup is about:\n\n\nSELECT version();\n\n-- PostgreSQL 13.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit\n\n\n\nSELECT extname, extversion FROM pg_extension;\n\n-- \"plpgsql\" \"1.0\"\n\n-- \"postgis\" \"3.1.1\"\n\nBy now, the only workarounds I have found are:\n\n\n * Dump database and recreate a new instance (problem seems to vanish but there is no guarantee it is solved or it will not happened later on);\n * Add dummy filter on all queries (more a trick than a solution).\n\nI am writing to this mailing list to raise you attention on it.\nI'll be happy to help you investigate it deeper.\n\nBest regards,\n\nLandercy Jean",
"msg_date": "Mon, 6 Jun 2022 09:34:24 +0000",
"msg_from": "Jean Landercy - BEEODIVERSITY <jean.landercy@beeodiversity.com>",
"msg_from_op": true,
"msg_subject": "Sudden database error with COUNT(*) making Query Planner crashes:\n variable not found in subplan target list"
},
{
"msg_contents": "On Mon, Jun 06, 2022 at 09:34:24AM +0000, Jean Landercy - BEEODIVERSITY wrote:\n> Faulty setup is about:\n> \n> SELECT version();\n> \n> -- PostgreSQL 13.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit\n\nPlease check if the problem occurs in v13.7\n\nhttps://www.postgresql.org/message-id/2197859.1644623850@sss.pgh.pa.us\nhttps://www.postgresql.org/message-id/flat/2121219.1644607692%40sss.pgh.pa.us#190c43702a91dbd0509ba545dbbab58d\n\n\n",
"msg_date": "Mon, 6 Jun 2022 09:20:44 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Sudden database error with COUNT(*) making Query Planner\n crashes: variable not found in subplan target list"
},
{
"msg_contents": "Dear Justin,\n\nThank you for your quick reply.\nUnfortunately, the server having this issue is an Azure Flexible Server.\nUpgrades are managed by Azure, I will have to wait until they release the version 13.7.\n\nIs there a procedure to replicate the database and preserve the bug?\nMy attempts with pg_dump/psql failed (the bug vanishes).\nIf so then I can clone the faulty database and upgrade it on a newer version.\n\nBest regards, \n\nLandercy Jean\n\n\n\n",
"msg_date": "Mon, 6 Jun 2022 16:50:55 +0000",
"msg_from": "Jean Landercy - BEEODIVERSITY <jean.landercy@beeodiversity.com>",
"msg_from_op": true,
"msg_subject": "RE: Sudden database error with COUNT(*) making Query Planner crashes:\n variable not found in subplan target list"
},
{
"msg_contents": "On Mon, Jun 06, 2022 at 04:50:55PM +0000, Jean Landercy - BEEODIVERSITY wrote:\n> Dear Justin,\n> \n> Thank you for your quick reply.\n> Unfortunately, the server having this issue is an Azure Flexible Server.\n> Upgrades are managed by Azure, I will have to wait until they release the version 13.7.\n\nI don't know what to suggest other than to open a support ticket with the\nvendor.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 6 Jun 2022 13:59:17 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Sudden database error with COUNT(*) making Query Planner\n crashes: variable not found in subplan target list"
},
{
"msg_contents": "On Mon, 6 Jun 2022 at 21:34, Jean Landercy - BEEODIVERSITY\n<jean.landercy@beeodiversity.com> wrote:\n> SELECT COUNT(*) FROM items;\n> -- ERROR: variable not found in subplan target list\n> -- SQL state: XX000\n\nCan you share some more details about what \"items\" is. psql's \"\\d\nitems\" output would be useful. From what you've reported we can't\ntell if this is a table or a view.\n\n> The bug is tricky to reproduce, I could not succeed to replicate elsewhere (dump/restore does not preserve it).\n\nCan you share the output of:\n\nselect attname,atttypid::regtype,attnum,atthasdef,atthasmissing,attgenerated,attisdropped\nfrom pg_attribute where attrelid = 'items'::regclass order by attnum;\n\nThis will let us see if there's something strange going on with\ndropped or has missing columns. There may be some sequence of ALTER\nTABLE ADD COLUMN ... DEFAULT / DROP COLUMN that is causing this. The\noutput of that might help us see if that could be a factor.\n\nDavid\n\n\n",
"msg_date": "Tue, 7 Jun 2022 09:27:45 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sudden database error with COUNT(*) making Query Planner crashes:\n variable not found in subplan target list"
},
{
"msg_contents": "Dear David,\r\n\r\nThank you for taking time on this issue.\r\n\r\nHere is the detail of the table (I have anonymized it on SO, this is its real name):\r\n\r\n\\d logistic_site\r\n Table « public.logistic_site »\r\n Colonne | Type | Collationnement | NULL-able | Par défaut\r\n\r\n-------------+--------------------------+-----------------+-----------+-------------------------------------------\r\n id | bigint | | not null | nextval('logistic_site_id_seq'::regclass)\r\n key | character varying(32) | | not null |\r\n name | character varying(128) | | |\r\n created | timestamp with time zone | | not null |\r\n updated | timestamp with time zone | | not null |\r\n archived | timestamp with time zone | | |\r\n geom | geometry(Polygon,4326) | | |\r\n location | geometry(Point,4326) | | |\r\n notes | text | | |\r\n country_id | bigint | | |\r\n customer_id | bigint | | |\r\n\r\nIndex :\r\n \"logistic_site_pkey\" PRIMARY KEY, btree (id)\r\n \"logistic_site_country_id_9a696481\" btree (country_id)\r\n \"logistic_site_customer_id_a2c8a74a\" btree (customer_id)\r\n \"logistic_site_geom_105a08da_id\" gist (geom)\r\n \"logistic_site_key_2e791173_like\" btree (key varchar_pattern_ops)\r\n \"logistic_site_key_key\" UNIQUE CONSTRAINT, btree (key)\r\n \"logistic_site_location_54ae0166_id\" gist (location)\r\nContraintes de clés étrangères :\r\n \"logistic_site_country_id_9a696481_fk_logistic_country_id\" FOREIGN KEY (country_id) REFERENCES logistic_country(id) DEFERRABLE INITIALLY DEFERRED\r\n \"logistic_site_customer_id_a2c8a74a_fk_logistic_customer_id\" FOREIGN KEY (customer_id) REFERENCES logistic_customer(id) DEFERRABLE INITIALLY DEFERRED\r\nRéférencé par :\r\n TABLE \"logistic_hive\" CONSTRAINT \"logistic_hive_site_id_50c29dd8_fk_logistic_site_id\" FOREIGN KEY (site_id) REFERENCES logistic_site(id) DEFERRABLE INITIALLY DEFERRED\r\n TABLE \"logistic_packorder\" CONSTRAINT \"logistic_packorder_site_id_16e1a41a_fk_logistic_site_id\" FOREIGN KEY (site_id) 
REFERENCES logistic_site(id) DEFERRABLE INITIALLY DEFERRED\r\n TABLE \"logistic_projectsite\" CONSTRAINT \"logistic_projectsite_site_id_522bf74b_fk_logistic_site_id\" FOREIGN KEY (site_id) REFERENCES logistic_site(id) DEFERRABLE INITIALLY DEFERRED\r\n TABLE \"scientific_identification\" CONSTRAINT \"scientific_identification_site_id_d9e79149_fk_logistic_site_id\" FOREIGN KEY (site_id) REFERENCES logistic_site(id) DEFERRABLE INITIALLY DEFERRED\r\n TABLE \"scientific_inventory\" CONSTRAINT \"scientific_inventory_site_id_72521353_fk_logistic_site_id\" FOREIGN KEY (site_id) REFERENCES logistic_site(id) DEFERRABLE INITIALLY DEFERRED\r\n TABLE \"scientific_result\" CONSTRAINT \"scientific_result_site_id_af6c815d_fk_logistic_site_id\" FOREIGN KEY (site_id) REFERENCES logistic_site(id) DEFERRABLE INITIALLY DEFERRED\r\n TABLE \"scientific_selection\" CONSTRAINT \"scientific_selection_site_id_88d69cab_fk_logistic_site_id\" FOREIGN KEY (site_id) REFERENCES logistic_site(id) DEFERRABLE INITIALLY DEFERRED\r\n\r\nAnd the output of the related query:\r\n\r\nSELECT\r\n attname, atttypid::regtype, attnum,atthasdef, atthasmissing, attgenerated, attisdropped\r\nFROM\r\n pg_attribute \r\nWHERE\r\n attrelid = 'logistic_site'::regclass\r\nORDER BY\r\n attnum;\r\n\r\n attname | atttypid | attnum | atthasdef | atthasmissing | attgenerated | attisdropped\r\n-------------+--------------------------+--------+-----------+---------------+--------------+--------------\r\n tableoid | oid | -6 | f | f | | f\r\n cmax | cid | -5 | f | f | | f\r\n xmax | xid | -4 | f | f | | f\r\n cmin | cid | -3 | f | f | | f\r\n xmin | xid | -2 | f | f | | f\r\n ctid | tid | -1 | f | f | | f\r\n id | bigint | 1 | t | f | | f\r\n key | character varying | 2 | f | f | | f\r\n name | character varying | 3 | f | f | | f\r\n created | timestamp with time zone | 4 | f | f | | f\r\n updated | timestamp with time zone | 5 | f | f | | f\r\n archived | timestamp with time zone | 6 | f | f | | f\r\n geom | geometry | 7 | f | 
f | | f\r\n location | geometry | 8 | f | f | | f\r\n notes | text | 9 | f | f | | f\r\n country_id | bigint | 10 | f | f | | f\r\n customer_id | bigint | 11 | f | f | | f\r\n(17 lignes)\r\n\r\nAdditional information:\r\nWhen trying to read the SQL related query for this table in PgAdmin4 I also have the error message popping up and the I get no SQL. So maybe the problem resides in a deeper function the Query Planner and SQL generator functions rely on. \r\n\r\nDon't hesitate to ask for more information.\r\n\r\nBest regards,\r\n\r\nJean\r\n",
"msg_date": "Tue, 7 Jun 2022 07:58:23 +0000",
"msg_from": "Jean Landercy - BEEODIVERSITY <jean.landercy@beeodiversity.com>",
"msg_from_op": true,
"msg_subject": "RE: Sudden database error with COUNT(*) making Query Planner crashes:\n variable not found in subplan target list"
},
{
"msg_contents": "On Tue, 7 Jun 2022 at 19:58, Jean Landercy - BEEODIVERSITY\n<jean.landercy@beeodiversity.com> wrote:\n> Here is the detail of the table (I have anonymized it on SO, this is its real name):\n\n> \"logistic_site_location_54ae0166_id\" gist (location)\n\nI imagine this is due to the planner choosing an index-only scan on\nthe above index. A similar problem was reported in [1].\n\nA fix was committed in [2], which appears in 13.7.\n\nYou could turn off enable_indexonlyscan until 13.7 is available to you.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAHUie24ddN+pDNw7fkhNrjrwAX=fXXfGZZEHhRuofV_N_ftaSg@mail.gmail.com\n[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=0778b24ced8a873b432641001d046d1dde602466\n\n\n",
"msg_date": "Wed, 8 Jun 2022 07:46:50 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sudden database error with COUNT(*) making Query Planner crashes:\n variable not found in subplan target list"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Tue, 7 Jun 2022 at 19:58, Jean Landercy - BEEODIVERSITY\n> <jean.landercy@beeodiversity.com> wrote:\n>> Here is the detail of the table (I have anonymized it on SO, this is its real name):\n>> \"logistic_site_location_54ae0166_id\" gist (location)\n> I imagine this is due to the planner choosing an index-only scan on\n> the above index. A similar problem was reported in [1].\n\nThe other gist index could also be the problem. It seems odd though\nthat the planner would favor either index for this purpose over the btree\nindexes on scalar columns, which you'd think would be a lot smaller.\nI wonder if there is some quirk in gist cost estimation that makes it\nimproperly claim to be cheaper than btree scans.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Jun 2022 15:55:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Sudden database error with COUNT(*) making Query Planner crashes:\n variable not found in subplan target list"
},
{
"msg_contents": "On Wed, 8 Jun 2022 at 07:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Tue, 7 Jun 2022 at 19:58, Jean Landercy - BEEODIVERSITY\n> > <jean.landercy@beeodiversity.com> wrote:\n> >> Here is the detail of the table (I have anonymized it on SO, this is its real name):\n> >> \"logistic_site_location_54ae0166_id\" gist (location)\n> > I imagine this is due to the planner choosing an index-only scan on\n> > the above index. A similar problem was reported in [1].\n>\n> The other gist index could also be the problem. It seems odd though\n> that the planner would favor either index for this purpose over the btree\n> indexes on scalar columns, which you'd think would be a lot smaller.\n> I wonder if there is some quirk in gist cost estimation that makes it\n> improperly claim to be cheaper than btree scans.\n\nI installed PostGIS 3.1.1 and mocked this up with the attached.\n\nLooking at the plans, I see:\n\n# explain select count(*) from logistic_site;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------\n Aggregate (cost=20.18..20.19 rows=1 width=8)\n -> Bitmap Heap Scan on logistic_site (cost=5.92..19.32 rows=340 width=0)\n -> Bitmap Index Scan on logistic_site_location_54ae0166_id\n(cost=0.00..5.84 rows=340 width=0)\n(3 rows)\n\n# drop index logistic_site_location_54ae0166_id;\n# explain select count(*) from logistic_site;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Aggregate (cost=9.92..9.93 rows=1 width=8)\n -> Bitmap Heap Scan on logistic_site (cost=5.26..9.39 rows=213 width=0)\n -> Bitmap Index Scan on logistic_site_geom_105a08da_id\n(cost=0.00..5.20 rows=213 width=0)\n(3 rows)\n\n# drop index logistic_site_geom_105a08da_id;\n# explain select count(*) from logistic_site;\n QUERY 
PLAN\n------------------------------------------------------------------------------------------------------\n Aggregate (cost=13.93..13.94 rows=1 width=8)\n -> Bitmap Heap Scan on logistic_site (cost=9.26..13.39 rows=213 width=0)\n -> Bitmap Index Scan on logistic_site_key_2e791173_like\n(cost=0.00..9.21 rows=213 width=0)\n(3 rows)\n\nSo it does appear that the location index is being chosen, at least\nwith the data that I inserted. Those gist indexes are costing quite a\nbit cheaper than the cheapest btree index.\n\nDavid",
"msg_date": "Wed, 8 Jun 2022 08:31:58 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sudden database error with COUNT(*) making Query Planner crashes:\n variable not found in subplan target list"
},
{
"msg_contents": "On Wed, 8 Jun 2022 at 08:31, David Rowley <dgrowleyml@gmail.com> wrote:\n> So it does appear that the location index is being chosen, at least\n> with the data that I inserted. Those gist indexes are costing quite a\n> bit cheaper than the cheapest btree index.\n\nThis seems just to be because the gist indexes are smaller, which is\nlikely due to me having inserted NULL values into them.\n\npostgres=# select pg_relation_size('logistic_site_key_key');\n pg_relation_size\n------------------\n 16384\n(1 row)\n\n\npostgres=# select pg_relation_size('logistic_site_location_54ae0166_id');\n pg_relation_size\n------------------\n 8192\n(1 row)\n\nDavid\n\n\n",
"msg_date": "Wed, 8 Jun 2022 08:49:05 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sudden database error with COUNT(*) making Query Planner crashes:\n variable not found in subplan target list"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Wed, 8 Jun 2022 at 07:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I wonder if there is some quirk in gist cost estimation that makes it\n>> improperly claim to be cheaper than btree scans.\n\n> I installed PostGIS 3.1.1 and mocked this up with the attached.\n\n> Looking at the plans, I see:\n\n> # explain select count(*) from logistic_site;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------\n> Aggregate (cost=20.18..20.19 rows=1 width=8)\n> -> Bitmap Heap Scan on logistic_site (cost=5.92..19.32 rows=340 width=0)\n> -> Bitmap Index Scan on logistic_site_location_54ae0166_id\n> (cost=0.00..5.84 rows=340 width=0)\n> (3 rows)\n\n> # drop index logistic_site_location_54ae0166_id;\n> # explain select count(*) from logistic_site;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------\n> Aggregate (cost=9.92..9.93 rows=1 width=8)\n> -> Bitmap Heap Scan on logistic_site (cost=5.26..9.39 rows=213 width=0)\n> -> Bitmap Index Scan on logistic_site_geom_105a08da_id\n> (cost=0.00..5.20 rows=213 width=0)\n> (3 rows)\n\nThat ... is pretty quirky already. How did it prefer a scan with cost\n19.32 over one with cost 9.39? Seems like we've got a bug here somewhere.\nThe change in estimated rowcount is rather broken, too.\n\n> So it does appear that the location index is being chosen, at least\n> with the data that I inserted. Those gist indexes are costing quite a\n> bit cheaper than the cheapest btree index.\n\nIt looks like the data you inserted for the geometry columns was uniformly\nNULL, which perhaps would result in a very small gist index. So maybe\nfor this test data the choice isn't so odd. Seems unlikely that that'd\nbe true of the OP's production data, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Jun 2022 16:57:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Sudden database error with COUNT(*) making Query Planner crashes:\n variable not found in subplan target list"
},
{
"msg_contents": "I wrote:\n> That ... is pretty quirky already. How did it prefer a scan with cost\n> 19.32 over one with cost 9.39? Seems like we've got a bug here somewhere.\n> The change in estimated rowcount is rather broken, too.\n\nAh, false alarm. I can reproduce your results if I stick an ANALYZE\nbetween the first and second EXPLAIN. So probably your change in\nestimated rowcount and hence cost can be explained by an auto-analyze\ncoming along at just the right time.\n\nAlso, if I fill the geom and location columns with non-null data,\nthe planner stops preferring those indexes.\n\nSo now I'm guessing that the OP's data *was* mostly null, and the\nplanner preferred the gist indexes because they were smallest,\nand then tripped over the nonreturnable-column bug.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Jun 2022 17:24:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Sudden database error with COUNT(*) making Query Planner crashes:\n variable not found in subplan target list"
}
] |
[
{
"msg_contents": "Hi, hackers\nI just wrote a test code for the `pg_buffercache` extension which\ndoesn't not have test code.\nI wrote the sql query to ensure that the buffer cache results are the\nsame when `make installcheck` is performed.\n\n---\nregards\nLee Dong Wook.",
"msg_date": "Mon, 6 Jun 2022 22:30:25 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_buffercache: add sql test"
},
{
"msg_contents": "> On 6 Jun 2022, at 15:30, Dong Wook Lee <sh95119@gmail.com> wrote:\n\n> I just wrote a test code for the `pg_buffercache` extension which\n> doesn't not have test code.\n\nPlease add this patch to the next commitfest to make sure it's not lost before\nthen.\n\n\thttps://commitfest.postgresql.org/38/\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 6 Jun 2022 18:46:41 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_buffercache: add sql test"
},
{
"msg_contents": "Greetings,\n\n* Daniel Gustafsson (daniel@yesql.se) wrote:\n> > On 6 Jun 2022, at 15:30, Dong Wook Lee <sh95119@gmail.com> wrote:\n> \n> > I just wrote a test code for the `pg_buffercache` extension which\n> > doesn't not have test code.\n> \n> Please add this patch to the next commitfest to make sure it's not lost before\n> then.\n> \n> \thttps://commitfest.postgresql.org/38/\n\nSeems to be there now, at least:\n\nhttps://commitfest.postgresql.org/38/3674/\n\nHowever, I don't think we should have a 'target version' set for this\n(and in particular it shouldn't be 15). I'd suggest removing that.\n\nThanks,\n\nStephen\n\n\n",
"msg_date": "Mon, 6 Jun 2022 13:04:15 -0400",
"msg_from": "Stephen Frost <stephen@crunchydata.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_buffercache: add sql test"
},
{
"msg_contents": "I removed it on your advice.\nThanks.\n\n2022년 6월 7일 (화) 오전 2:04, Stephen Frost <stephen@crunchydata.com>님이 작성:\n>\n> Greetings,\n>\n> * Daniel Gustafsson (daniel@yesql.se) wrote:\n> > > On 6 Jun 2022, at 15:30, Dong Wook Lee <sh95119@gmail.com> wrote:\n> >\n> > > I just wrote a test code for the `pg_buffercache` extension which\n> > > doesn't not have test code.\n> >\n> > Please add this patch to the next commitfest to make sure it's not lost before\n> > then.\n> >\n> > https://commitfest.postgresql.org/38/\n>\n> Seems to be there now, at least:\n>\n> https://commitfest.postgresql.org/38/3674/\n>\n> However, I don't think we should have a 'target version' set for this\n> (and in particular it shouldn't be 15). I'd suggest removing that.\n>\n> Thanks,\n>\n> Stephen\n\n\n",
"msg_date": "Tue, 7 Jun 2022 10:47:17 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_buffercache: add sql test"
},
{
"msg_contents": "Dong Wook Lee <sh95119@gmail.com> writes:\n> I just wrote a test code for the `pg_buffercache` extension which\n> doesn't not have test code.\n\nPushed with minor adjustments. Some notes:\n\n* A .gitignore file is needed so that \"git status\" won't whine after\nrunning the test. This tends to be pretty much boilerplate; I copied\nit from another contrib directory.\n\n* Pay attention to \"git diff --check\" formatting warnings. In this\ncase it bleated about an extra blank line at the end of the .sql file.\n\n* I didn't care for the direct use of pg_show_all_settings(). The\nofficial API there is the pg_settings view, and there's no need for\nthis test to get friendly with the view's internals.\n\nThanks for the patch!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Jul 2022 15:39:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_buffercache: add sql test"
},
{
"msg_contents": "On Sun, Jul 31, 2022 at 3:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> * A .gitignore file is needed so that \"git status\" won't whine after\n> running the test. This tends to be pretty much boilerplate; I copied\n> it from another contrib directory.\n\nIs there any reason we don't add a .gitignore in the contrib/\ndirectory to ignore all */log/, */results/ and */tmp_check/ by\ndefault rather having at least /log/, /results/ and /tmp_check/ in\nalmost all subdirectories .gitignore? Sure any underlying\n\"(log|results|tmp_check)\" top-directory will then be ignored even if\nit's not supposed to be needed, but I don't think it would matter in\npractice. And if it does matter you could still force some file to be\nincluded or even override the parent gitignore.\n\n\n",
"msg_date": "Mon, 1 Aug 2022 09:47:21 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_buffercache: add sql test"
},
{
"msg_contents": "I've been missing what I have to add to the .gitignore file when I\nwrite the test.\nI will refer to it when I write the test code from now on.\n\nThank you.\n\n2022년 7월 31일 (일) 오전 4:39, Tom Lane <tgl@sss.pgh.pa.us>님이 작성:\n>\n> Dong Wook Lee <sh95119@gmail.com> writes:\n> > I just wrote a test code for the `pg_buffercache` extension which\n> > doesn't not have test code.\n>\n> Pushed with minor adjustments. Some notes:\n>\n> * A .gitignore file is needed so that \"git status\" won't whine after\n> running the test. This tends to be pretty much boilerplate; I copied\n> it from another contrib directory.\n>\n> * Pay attention to \"git diff --check\" formatting warnings. In this\n> case it bleated about an extra blank line at the end of the .sql file.\n>\n> * I didn't care for the direct use of pg_show_all_settings(). The\n> official API there is the pg_settings view, and there's no need for\n> this test to get friendly with the view's internals.\n>\n> Thanks for the patch!\n>\n> regards, tom lane\n\n\n",
"msg_date": "Tue, 2 Aug 2022 09:50:05 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_buffercache: add sql test"
}
] |
[
{
"msg_contents": "The logical replication tablesync ignores the publication 'publish'\noperations during the initial data copy.\n\nThis is current/known PG behaviour (e.g. as recently mentioned [1])\nbut it was not documented anywhere.\n\nThis patch just documents the existing behaviour and gives some examples.\n\n------\n[1] https://www.postgresql.org/message-id/CAA4eK1L_98LF7Db4yFY1PhKKRzoT83xtN41jTS5X%2B8OeGrAkLw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 7 Jun 2022 14:10:51 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "tablesync copy ignores publication actions"
},
{
    "msg_contents": "On Tue, Jun 7, 2022, at 1:10 AM, Peter Smith wrote:\n> The logical replication tablesync ignores the publication 'publish'\n> operations during the initial data copy.\n> \n> This is current/known PG behaviour (e.g. as recently mentioned [1])\n> but it was not documented anywhere.\ninitial data synchronization != replication. publish parameter is a replication\nproperty; it is not a initial data synchronization property. Maybe we should\nmake it clear like you are suggesting.\n\n> This patch just documents the existing behaviour and gives some examples.\nWhy did you add this information to that specific paragraph? IMO it belongs to\na separate paragraph; I would add it as the first paragraph in that subsection.\n\nI suggest the following paragraph:\n\n<para>\nThe initial data synchronization does not take into account the \n<literal>publish</literal> parameter to copy the existing data.\n</para>\n\nThere is no point to link the Initial Snapshot subsection. That subsection is\nexplaining the initial copy steps and you want to inform about the effect of a\npublication parameter on the initial copy. Although both are talking about the\nsame topic (initial copy), that link to Initial Snapshot subsection won't add\nadditional information about the publish parameter. You could expand the\nsuggested sentence to make it clear what publish parameter is or even add a\nlink to the CREATE PUBLICATION synopsis (that explains about publish\nparameter).\n\nYou add an empty paragraph. Remove it.\n\nI'm not sure it deserves an example. It is an easy-to-understand concept and a\ngood description is better than ~ 80 new lines.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Tue, 07 Jun 2022 10:38:04 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: tablesync copy ignores publication actions"
},
{
    "msg_contents": "On Tue, Jun 7, 2022 at 7:08 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Tue, Jun 7, 2022, at 1:10 AM, Peter Smith wrote:\n>\n> The logical replication tablesync ignores the publication 'publish'\n> operations during the initial data copy.\n>\n> This is current/known PG behaviour (e.g. as recently mentioned [1])\n> but it was not documented anywhere.\n>\n> initial data synchronization != replication. publish parameter is a replication\n> property; it is not a initial data synchronization property. Maybe we should\n> make it clear like you are suggesting.\n>\n\n+1 to document it. We respect some other properties of publication\nlike the publish_via_partition_root parameter, column lists, and row\nfilters. So it is better to explain about 'publish' parameter which we\nignore during the initial sync.\n\n> This patch just documents the existing behaviour and gives some examples.\n>\n> Why did you add this information to that specific paragraph? IMO it belongs to\n> a separate paragraph; I would add it as the first paragraph in that subsection.\n>\n> I suggest the following paragraph:\n>\n> <para>\n> The initial data synchronization does not take into account the\n> <literal>publish</literal> parameter to copy the existing data.\n> </para>\n>\n> There is no point to link the Initial Snapshot subsection. That subsection is\n> explaining the initial copy steps and you want to inform about the effect of a\n> publication parameter on the initial copy. Although both are talking about the\n> same topic (initial copy), that link to Initial Snapshot subsection won't add\n> additional information about the publish parameter.\n>\n\nHere, we are explaining the behavior of row filters during initial\nsync so adding a link to the Initial Snapshot section makes sense to\nme.\n\n> You could expand the\n> suggested sentence to make it clear what publish parameter is or even add a\n> link to the CREATE PUBLICATION synopsis (that explains about publish\n> parameter).\n>\n\n+1. I suggest that we should add some text about the behavior of\ninitial sync in CREATE PUBLICATION doc (along with the 'publish'\nparameter) or otherwise, we can explain it where we are talking about\npublications [1].\n\n> You add an empty paragraph. Remove it.\n>\n> I'm not sure it deserves an example. It is an easy-to-understand concept and a\n> good description is better than ~ 80 new lines.\n>\n\nI don't think it is very clear that \"initial data synchronization !=\nreplication\" as mentioned by you nor does our docs does a good job in\nexplaining it otherwise the confusion wouldn't have arisen in the\nemail link shared by Peter. Personally, I think such things can be\nbetter explained by example and in that regards the example shared by\nPeter does half the job because it doesn't explain the replication\npart. I don't think \"Initial Snapshot\" is the right place for these\nexamples considering we want to show the replication based on the\npublish actions. We can extend it to show one example with row filters\nas well. How about showing these examples in the Subscription section\n[2]?\n\n[1]: https://www.postgresql.org/docs/devel/logical-replication-publication.html\n[2]: https://www.postgresql.org/docs/devel/logical-replication-subscription.html\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 8 Jun 2022 09:40:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: tablesync copy ignores publication actions"
},
{
"msg_contents": "On Wed, Jun 8, 2022 12:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Tue, Jun 7, 2022 at 7:08 PM Euler Taveira <euler@eulerto.com> wrote:\r\n> >\r\n> > On Tue, Jun 7, 2022, at 1:10 AM, Peter Smith wrote:\r\n> >\r\n> > The logical replication tablesync ignores the publication 'publish'\r\n> > operations during the initial data copy.\r\n> >\r\n> > This is current/known PG behaviour (e.g. as recently mentioned [1])\r\n> > but it was not documented anywhere.\r\n> >\r\n> > initial data synchronization != replication. publish parameter is a replication\r\n> > property; it is not a initial data synchronization property. Maybe we should\r\n> > make it clear like you are suggesting.\r\n> >\r\n> \r\n> +1 to document it. We respect some other properties of publication\r\n> like the publish_via_partition_root parameter, column lists, and row\r\n> filters. So it is better to explain about 'publish' parameter which we\r\n> ignore during the initial sync.\r\n> \r\n\r\nI also agree to add it to the document.\r\n\r\n> > You could expand the\r\n> > suggested sentence to make it clear what publish parameter is or even add\r\n> a\r\n> > link to the CREATE PUBLICATION synopsis (that explains about publish\r\n> > parameter).\r\n> >\r\n> \r\n> +1. I suggest that we should add some text about the behavior of\r\n> initial sync in CREATE PUBLICATION doc (along with the 'publish'\r\n> parameter) or otherwise, we can explain it where we are talking about\r\n> publications [1].\r\n> \r\n\r\nI noticed that CREATE SUBSCRIPTION doc mentions that row filter will affect\r\ninitial sync along with \"copy_data\" parameter.[1] Maybe we can add some text for\r\n\"publish\" parameter there.\r\n\r\n[1] https://www.postgresql.org/docs/devel/sql-createsubscription.html\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Tue, 14 Jun 2022 03:34:09 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: tablesync copy ignores publication actions"
},
{
"msg_contents": "PSA v2 of the patch, based on all feedback received.\n\n~~~\n\nMain differences from v1:\n\n* Rewording and more explanatory text.\n\n* The examples were moved to the \"Subscription\" [1] page and also\nextended to show some normal replication and row filter examples, from\n[Amit].\n\n* Added some text to CREATE PUBLICATION 'publish' param [2], from [Euler][Amit].\n\n* Added some text to CREATE SUBSCRIPTION Notes [3], from [Shi-san].\n\n* Added some text to the \"Publication page\" [4] to say the 'publish'\nis only for DML operations.\n\n* I changed the note in \"Row Filter - Initial Data Synchronization\"\n[5] to be a warning because I felt users could be surprised to see\ndata exposed by the initial copy, which a DML operation would not\nexpose.\n\n------\n[1] https://www.postgresql.org/docs/devel/logical-replication-subscription.html\n[2] https://www.postgresql.org/docs/devel/sql-createpublication.html\n[3] https://www.postgresql.org/docs/devel/sql-createsubscription.html\n[4] https://www.postgresql.org/docs/devel/logical-replication-publication.html\n[5] https://www.postgresql.org/docs/devel/logical-replication-row-filter.html#LOGICAL-REPLICATION-ROW-FILTER-INITIAL-DATA-SYNC\n\n[Euler] https://www.postgresql.org/message-id/bd49c14d-7a01-4ae3-b424-8c49630fec57%40www.fastmail.com\n[Amit] https://www.postgresql.org/message-id/CAA4eK1Lb5QpWCQU8qkELnX6t8z7JeVtGantmKptxkkpxnYnpHA%40mail.gmail.com\n[Shi-san] https://www.postgresql.org/message-id/OSZPR01MB631026B8428422EAC1BFB8A4FDAA9%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 14 Jun 2022 17:35:31 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tablesync copy ignores publication actions"
},
{
"msg_contents": "On Tue, Jun 14, 2022 3:36 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> PSA v2 of the patch, based on all feedback received.\r\n> \r\n> ~~~\r\n> \r\n> Main differences from v1:\r\n> \r\n> * Rewording and more explanatory text.\r\n> \r\n> * The examples were moved to the \"Subscription\" [1] page and also\r\n> extended to show some normal replication and row filter examples, from\r\n> [Amit].\r\n> \r\n> * Added some text to CREATE PUBLICATION 'publish' param [2], from\r\n> [Euler][Amit].\r\n> \r\n> * Added some text to CREATE SUBSCRIPTION Notes [3], from [Shi-san].\r\n> \r\n> * Added some text to the \"Publication page\" [4] to say the 'publish'\r\n> is only for DML operations.\r\n> \r\n> * I changed the note in \"Row Filter - Initial Data Synchronization\"\r\n> [5] to be a warning because I felt users could be surprised to see\r\n> data exposed by the initial copy, which a DML operation would not\r\n> expose.\r\n> \r\n\r\nThanks for updating the patch. Two comments:\r\n\r\n1.\r\n+ it means the copied table <literal>t3</literal> contains all rows even when\r\n+ they do not patch the row filter of publication <literal>pub3b</literal>.\r\n\r\nTypo. I think \"they do not patch the row filter\" should be \"they do not match\r\nthe row filter\", right?\r\n\r\n2.\r\n@@ -500,7 +704,6 @@\r\n </para>\r\n </listitem>\r\n </itemizedlist></para>\r\n-\r\n </sect2>\r\n \r\n <sect2 id=\"logical-replication-row-filter-examples\">\r\n\r\nIt seems we should remove this change.\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Wed, 15 Jun 2022 07:05:19 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: tablesync copy ignores publication actions"
},
{
"msg_contents": "On Wed, Jun 15, 2022 at 5:05 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n...\n> Thanks for updating the patch. Two comments:\n>\n> 1.\n> + it means the copied table <literal>t3</literal> contains all rows even when\n> + they do not patch the row filter of publication <literal>pub3b</literal>.\n>\n> Typo. I think \"they do not patch the row filter\" should be \"they do not match\n> the row filter\", right?\n>\n> 2.\n> @@ -500,7 +704,6 @@\n> </para>\n> </listitem>\n> </itemizedlist></para>\n> -\n> </sect2>\n>\n> <sect2 id=\"logical-replication-row-filter-examples\">\n>\n> It seems we should remove this change.\n>\n\nThank you for your review comments. Those reported mistakes are fixed\nin the attached patch v3.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 16 Jun 2022 10:37:09 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tablesync copy ignores publication actions"
},
{
    "msg_contents": "On Thu, Jun 16, 2022 at 6:07 AM Peter Smith <smithpb2250@gmail.com> wrote:\n\n>\n> Thank you for your review comments. Those reported mistakes are fixed\n> in the attached patch v3.\n>\n\nThis patch looks mostly good to me except for a few minor comments\nwhich are mentioned below. It is not very clear in which branch(es) we\nshould commit this patch? As per my understanding, this is a\npre-existing behavior but we want to document it because (a) It was\nnot already documented, and (b) we followed it for row filters in\nPG-15 it seems that should be explained. So, we have the following\noptions (a) commit it only for PG-15, (b) commit for PG-15 and\nbackpatch the relevant sections, or (c) commit it when branch opens\nfor PG-16. What do you or others think?\n\nFew comments:\n==============\n1.\n>\n- particular event types. By default, all operation types are replicated.\n- (Row filters have no effect for <command>TRUNCATE</command>. See\n- <xref linkend=\"logical-replication-row-filter\"/>).\n+ particular event types. By default, all operation types are replicated.\n+ These are DML operation limitations only; they do not affect the initial\n+ data synchronization copy.\n>\n\nUsing limitations in the above sentence can be misleading. Can we\nchange it to something like: \"These publication specifications apply\nonly for DML operations; they do ... \".\n\n2.\n+ operations. The publication <literal>pub3b</literal> has a row filter.\n\nIn the Examples section, you have used row filter whereas that section\nis later in the docs. So, it is better if you give reference to that\nsection in the above sentence (see Section ...).\n\n3.\n+ <para>\n+ This parameter only affects DML operations. In particular, the\n+ subscription initial data synchronization does not take\nthis parameter\n+ into account when copying existing table data.\n+ </para>\n\nIn the second sentence: \"... subscription initial data synchronization\n...\" doesn't sound appropriate. Can we change it to something like:\n\"In particular, the initial data synchronization (see Section ..) in\nlogical replication does not take this parameter into account when\ncopying existing table data.\"?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 22 Jun 2022 09:48:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: tablesync copy ignores publication actions"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 2:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jun 16, 2022 at 6:07 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> >\n> > Thank you for your review comments. Those reported mistakes are fixed\n> > in the attached patch v3.\n> >\n>\n> This patch looks mostly good to me except for a few minor comments\n> which are mentioned below. It is not very clear in which branch(es) we\n> should commit this patch? As per my understanding, this is a\n> pre-existing behavior but we want to document it because (a) It was\n> not already documented, and (b) we followed it for row filters in\n> PG-15 it seems that should be explained. So, we have the following\n> options (a) commit it only for PG-15, (b) commit for PG-15 and\n> backpatch the relevant sections, or (c) commit it when branch opens\n> for PG-16. What do you or others think?\n\nEven though this is a very old docs omission, AFAIK nobody ever raised\nit as a problem before. It only became more important because of the\nPG15 row-filters. So I think option (a) is ok.\n\n>\n> Few comments:\n> ==============\n> 1.\n> >\n> - particular event types. By default, all operation types are replicated.\n> - (Row filters have no effect for <command>TRUNCATE</command>. See\n> - <xref linkend=\"logical-replication-row-filter\"/>).\n> + particular event types. By default, all operation types are replicated.\n> + These are DML operation limitations only; they do not affect the initial\n> + data synchronization copy.\n> >\n>\n> Using limitations in the above sentence can be misleading. Can we\n> change it to something like: \"These publication specifications apply\n> only for DML operations; they do ... \".\n>\n\nOK - modified as suggested.\n\n> 2.\n> + operations. The publication <literal>pub3b</literal> has a row filter.\n>\n> In the Examples section, you have used row filter whereas that section\n> is later in the docs. 
So, it is better if you give reference to that\n> section in the above sentence (see Section ...).\n>\n\nOK - added xref as suggested.\n\n> 3.\n> + <para>\n> + This parameter only affects DML operations. In particular, the\n> + subscription initial data synchronization does not take\n> this parameter\n> + into account when copying existing table data.\n> + </para>\n>\n> In the second sentence: \"... subscription initial data synchronization\n> ...\" doesn't sound appropriate. Can we change it to something like:\n> \"In particular, the initial data synchronization (see Section ..) in\n> logical replication does not take this parameter into account when\n> copying existing table data.\"?\n>\n\nOK - modified and added xref as suggested.\n\n~~\n\nPSA patch v4 to address all the above review comments.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 22 Jun 2022 18:49:17 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tablesync copy ignores publication actions"
},
{
"msg_contents": "On Wed, Jun 22, 2022 4:49 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> On Wed, Jun 22, 2022 at 2:18 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Thu, Jun 16, 2022 at 6:07 AM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> > >\r\n> > > Thank you for your review comments. Those reported mistakes are fixed\r\n> > > in the attached patch v3.\r\n> > >\r\n> >\r\n> > This patch looks mostly good to me except for a few minor comments\r\n> > which are mentioned below. It is not very clear in which branch(es) we\r\n> > should commit this patch? As per my understanding, this is a\r\n> > pre-existing behavior but we want to document it because (a) It was\r\n> > not already documented, and (b) we followed it for row filters in\r\n> > PG-15 it seems that should be explained. So, we have the following\r\n> > options (a) commit it only for PG-15, (b) commit for PG-15 and\r\n> > backpatch the relevant sections, or (c) commit it when branch opens\r\n> > for PG-16. What do you or others think?\r\n> \r\n> Even though this is a very old docs omission, AFAIK nobody ever raised\r\n> it as a problem before. It only became more important because of the\r\n> PG15 row-filters. So I think option (a) is ok.\r\n> \r\n\r\nI also think option (a) is ok.\r\n\r\n> \r\n> PSA patch v4 to address all the above review comments.\r\n> \r\n\r\nThanks for updating the patch. It looks good to me.\r\n\r\nBesides, I tested the examples in the patch, and there's no problem.\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Thu, 23 Jun 2022 03:13:08 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: tablesync copy ignores publication actions"
},
{
"msg_contents": "On Thu, Jun 23, 2022 at 8:43 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Wed, Jun 22, 2022 4:49 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > >\n> > > This patch looks mostly good to me except for a few minor comments\n> > > which are mentioned below. It is not very clear in which branch(es) we\n> > > should commit this patch? As per my understanding, this is a\n> > > pre-existing behavior but we want to document it because (a) It was\n> > > not already documented, and (b) we followed it for row filters in\n> > > PG-15 it seems that should be explained. So, we have the following\n> > > options (a) commit it only for PG-15, (b) commit for PG-15 and\n> > > backpatch the relevant sections, or (c) commit it when branch opens\n> > > for PG-16. What do you or others think?\n> >\n> > Even though this is a very old docs omission, AFAIK nobody ever raised\n> > it as a problem before. It only became more important because of the\n> > PG15 row-filters. So I think option (a) is ok.\n> >\n>\n> I also think option (a) is ok.\n>\n> >\n> > PSA patch v4 to address all the above review comments.\n> >\n>\n> Thanks for updating the patch. It looks good to me.\n>\n\nThe patch looks good to me as well. I will push this patch in HEAD (as\nper option (a)) tomorrow unless I see any more suggestions/comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 23 Jun 2022 11:42:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: tablesync copy ignores publication actions"
},
{
"msg_contents": "On Thu, Jun 23, 2022 at 2:13 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> The patch looks good to me as well. I will push this patch in HEAD (as\n> per option (a)) tomorrow unless I see any more suggestions/comments.\n\nThe example seems to demonstrate the point quite well but one thing\nthat I notice is that it is quite long. I don't really see an obvious\nway of making it shorter without making it less clear, so perhaps\nthat's fine.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Jun 2022 16:38:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: tablesync copy ignores publication actions"
},
{
"msg_contents": "On Fri, Jun 24, 2022 at 2:09 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jun 23, 2022 at 2:13 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > The patch looks good to me as well. I will push this patch in HEAD (as\n> > per option (a)) tomorrow unless I see any more suggestions/comments.\n>\n> The example seems to demonstrate the point quite well but one thing\n> that I notice is that it is quite long. I don't really see an obvious\n> way of making it shorter without making it less clear, so perhaps\n> that's fine.\n>\n\nThanks for looking into it. Pushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 24 Jun 2022 14:35:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: tablesync copy ignores publication actions"
}
] |
[
{
"msg_contents": "I've been doing some preliminary prep work to see how an inverted index\nusing roaring bitmaps (https://roaringbitmap.org/) would perform. I'm\npresenting some early work using SQL code with the roaring bitmap Postgres\nextension (https://github.com/ChenHuajun/pg_roaringbitmap) to simulate a\nhypothetical index using this approach.\n\nI'd like to solicit feedback from the community to see if this is something\nworth pursuing or if there are potential issues that I'm not aware of (or\nif this approach has been considered in the past and discarded for whatever\nreason). For context, my experience with Postgres is primarily as a user\nand I'm not at all familiar with the code base, so please be gentle :).\n\nThat said, here's a quick and dirty demo:\n\nI have a table \"cities\"\n\n Table \"public.cities\"\n Column | Type | Collation | Nullable | Default\n---------+------+-----------+----------+---------\n city | text | | |\n country | text | | |\nIndexes:\n \"cities_country_idx\" btree (country)\n\nselect count(*) from cities;\n count\n--------\n 139739\n(1 row)\n\nAnd just some sample rows:\n\nselect * from cities order by random() limit 10;\n city | country\n------------------------+------------------------------\n Alcalá de la Selva | Spain\n Bekirhan | Turkey\n Ceggia | Italy\n Châtillon-en-Vendelais | France\n Hohenfelde | Germany\n Boedo | Argentina\n Saint-Vith | Belgium\n Gaggenau | Germany\n Lake Ozark | United States\n Igunga | Tanzania, United Republic of\n(10 rows)\n\nSince a bitmap requires you to convert your inputs into integers, I created\na function as a hack to convert our TIDs to integers. It's ugly as hell,\nbut it serves. 
2048 is 2^11, which according to the GIN index source code\nis a safe assumption for the highest possible MaxHeapTuplesPerPage.\n\ncreate function ctid_to_int(ctid tid) returns integer as $$\nselect (ctid::text::point)[0] * 2048 + (ctid::text::point)[1];\n$$\nlanguage sql returns null on null input;\n\nAnd the reverse:\ncreate function int_to_ctid(i integer) returns tid as $$\nselect point(i/2048, i%2048)::text::tid;\n$$\nlanguage sql returns null on null input;\n\nIn addition, I created a table \"cities_rb\" to roughly represent an \"index\"\non the country column:\n\ncreate table cities_rb as (select country,\nroaringbitmap(array_agg(ctid_to_int(ctid))::text) idx from cities group by\ncountry);\n\n Table \"public.cities_rb\"\n Column | Type | Collation | Nullable | Default\n---------+---------------+-----------+----------+---------\n country | text | | |\n idx | roaringbitmap | | |\n\n\nNow for the fun stuff - to simulate the \"index\" I will be running some\nqueries against the cities_rb table using bitmap aggregations and comparing\nthem to functionally the same queries using the BTree index on cities.\n\nexplain analyze select ctid from cities where country = 'Japan';\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on cities (cost=18.77..971.83 rows=1351 width=6) (actual\ntime=0.041..0.187 rows=1322 loops=1)\n Recheck Cond: (country = 'Japan'::text)\n Heap Blocks: exact=65\n -> Bitmap Index Scan on cities_country_idx (cost=0.00..18.43 rows=1351\nwidth=0) (actual time=0.031..0.031 rows=1322 loops=1)\n Index Cond: (country = 'Japan'::text)\n Planning Time: 0.055 ms\n Execution Time: 0.233 ms\n(7 rows)\n\nexplain analyze select rb_to_array(idx) from cities_rb where country =\n'Japan';\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------\n Seq Scan on cities_rb (cost=0.00..14.88 
rows=1 width=32) (actual\ntime=0.050..0.067 rows=1 loops=1)\n Filter: (country = 'Japan'::text)\n Rows Removed by Filter: 229\n Planning Time: 0.033 ms\n Execution Time: 0.076 ms\n(5 rows)\n\nexplain analyze select count(*) from cities where country = 'Japan';\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=35.31..35.32 rows=1 width=8) (actual time=0.151..0.151\nrows=1 loops=1)\n -> Index Only Scan using cities_country_idx on cities\n (cost=0.29..31.94 rows=1351 width=0) (actual time=0.026..0.103 rows=1322\nloops=1)\n Index Cond: (country = 'Japan'::text)\n Heap Fetches: 0\n Planning Time: 0.056 ms\n Execution Time: 0.180 ms\n(6 rows)\n\nexplain analyze select rb_cardinality(idx) from cities_rb where country =\n'Japan';\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------\n Seq Scan on cities_rb (cost=0.00..14.88 rows=1 width=8) (actual\ntime=0.037..0.053 rows=1 loops=1)\n Filter: (country = 'Japan'::text)\n Rows Removed by Filter: 229\n Planning Time: 0.037 ms\n Execution Time: 0.063 ms\n(5 rows)\n\n\nexplain analyze select country, count(*) from cities group by country;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=2990.09..2992.22 rows=214 width=17) (actual\ntime=34.054..34.076 rows=230 loops=1)\n Group Key: country\n Batches: 1 Memory Usage: 77kB\n -> Seq Scan on cities (cost=0.00..2291.39 rows=139739 width=9) (actual\ntime=0.005..8.552 rows=139739 loops=1)\n Planning Time: 0.051 ms\n Execution Time: 34.103 ms\n(6 rows)\n\nexplain analyze select country, rb_cardinality(idx) from cities_rb;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------\n Seq Scan on cities_rb (cost=0.00..14.88 rows=230 
width=19) (actual\ntime=0.008..0.184 rows=230 loops=1)\n Planning Time: 0.030 ms\n Execution Time: 0.200 ms\n(3 rows)\n\nThe simulated index in this case is outrageously fast, up to ~150x on the\nGROUP BY.\n\n\nCheers,\nChinmay",
"msg_date": "Mon, 6 Jun 2022 22:41:52 -0700",
"msg_from": "Chinmay Kanchi <cgkanchi@gmail.com>",
"msg_from_op": true,
"msg_subject": "An inverted index using roaring bitmaps"
},
{
"msg_contents": "On Mon, Jun 6, 2022 at 10:42 PM Chinmay Kanchi <cgkanchi@gmail.com> wrote:\n> The simulated index in this case is outrageously fast, up to ~150x on the GROUP BY.\n\nCouldn't you make a similar argument in favor of adding a B-Tree index\non \"country\"? This probably won't be effective in practice, but the\nreasons for this have little to do with how a B-Tree index represents\nTIDs. A GIN index can compress TIDs much more effectively, but the\nsame issues apply there.\n\nThe main reason why it won't work with a B-Tree is that indexes in\nPostgres are not transactionally consistent structures, in general.\nWhereas your cities_rb table is transactionally consistent (or perhaps\njust simulates a transactionally consistent index). Maybe it could\nwork in cases where an index-only scan could be used, which is roughly\ncomparable to having a transactionally consistent index. But that\ndepends on having the visibility map set most or all heap pages\nall-visible.\n\nGIN indexes don't support index-only scans, and I don't see that\nchanging. So it's possible that just adding TID compression to B-Tree\nindexes would significantly speedup this kind of query, just by making\nlow cardinality indexes much smaller. Though that's a hard project,\nfor many subtle reasons. This really amounts to building a bitmap\nindex, of the kind that are typically used for data warehousing, which\nis something that has been discussed plenty of times on this list. GIN\nindexes were really built for things like full-text search, not for\ndata warehousing.\n\nB-Tree deduplication makes B-Tree indexes a lot smaller, but it tends\nto \"saturate\" at about 3.5x smaller (relative to the same index with\ndeduplication disabled) once there are about 10 or so distinct keys\nper row (the exception is indexes that happen to have huge keys, which\naren't very interesting). 
There are many B-Tree indexes (with typical\nsized keys) that are similar in size to an \"equivalent\" GIN index --\nthe ability to compress TIDs isn't very valuable when you don't have\nthat many TIDs per key anyway. It's different when you have many TIDs\nper key, of course. GIN indexes alone don't \"saturate\" at the same\npoint -- there is often a big size difference between low cardinality\nand ultra low cardinality data. There are bound to be cases where not\nhaving that level of space efficiency matters, especially with B-Tree\nindex-only scans that scan a significant fraction of the entire index,\nor even the entire index.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 7 Jun 2022 11:53:28 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: An inverted index using roaring bitmaps"
},
{
"msg_contents": "I personally don't think this is a great replacement for a BTree index -\nfor one thing, it isn't really possible to use this approach beyond\nequality comparisons (for scalars) or \"contains\"-type operations for arrays\n(or tsvectors, jsonb, etc). I see this more as \"competing\" with GIN, though\nI think GIN solves a different use-case. The primary thought here is that\nwe could build lightning fast inverted indexes for the cases where these\nreally help.\n\nI played with using roaring bitmaps in production to build rollup tables,\nfor instance - where a single bitmap per key could satisfy count() queries\nand count(*) ... GROUP BY with multiple WHERE conditions way faster than\neven an index-only scan could, and without the overhead of multi-column\nindexes. In our particular case, there were about 2 dozen columns with\naround 30-40 million rows, and we were able to run these queries in\nsingle-digit milliseconds. We ultimately abandoned that project because of\nthe difficulty of keeping the bitmaps in sync with changing data, which\nwould no longer be an issue, if this was built as an index.\n\nI think your point about data warehouse-style bitmap indexes hits the nail\non the head here. This would be pretty much just that, a very efficient way\nto accelerate such queries.\n\nCheers,\nChinmay\n\n\nOn Tue, Jun 7, 2022 at 11:53 AM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Mon, Jun 6, 2022 at 10:42 PM Chinmay Kanchi <cgkanchi@gmail.com> wrote:\n> > The simulated index in this case is outrageously fast, up to ~150x on\n> the GROUP BY.\n>\n> Couldn't you make a similar argument in favor of adding a B-Tree index\n> on \"country\"? This probably won't be effective in practice, but the\n> reasons for this have little to do with how a B-Tree index represents\n> TIDs. 
A GIN index can compress TIDs much more effectively, but the\n> same issues apply there.\n>\n> The main reason why it won't work with a B-Tree is that indexes in\n> Postgres are not transactionally consistent structures, in general.\n> Whereas your cities_rb table is transactionally consistent (or perhaps\n> just simulates a transactionally consistent index). Maybe it could\n> work in cases where an index-only scan could be used, which is roughly\n> comparable to having a transactionally consistent index. But that\n> depends on having the visibility map set most or all heap pages\n> all-visible.\n>\n> GIN indexes don't support index-only scans, and I don't see that\n> changing. So it's possible that just adding TID compression to B-Tree\n> indexes would significantly speedup this kind of query, just by making\n> low cardinality indexes much smaller. Though that's a hard project,\n> for many subtle reasons. This really amounts to building a bitmap\n> index, of the kind that are typically used for data warehousing, which\n> is something that has been discussed plenty of times on this list. GIN\n> indexes were really built for things like full-text search, not for\n> data warehousing.\n>\n> B-Tree deduplication makes B-Tree indexes a lot smaller, but it tends\n> to \"saturate\" at about 3.5x smaller (relative to the same index with\n> deduplication disabled) once there are about 10 or so distinct keys\n> per row (the exception is indexes that happen to have huge keys, which\n> aren't very interesting). There are many B-Tree indexes (with typical\n> sized keys) that are similar in size to an \"equivalent\" GIN index --\n> the ability to compress TIDs isn't very valuable when you don't have\n> that many TIDs per key anyway. It's different when you have many TIDs\n> per key, of course. GIN indexes alone don't \"saturate\" at the same\n> point -- there is often a big size difference between low cardinality\n> and ultra low cardinality data. 
There are bound to be cases where not\n> having that level of space efficiency matters, especially with B-Tree\n> index-only scans that scan a significant fraction of the entire index,\n> or even the entire index.\n>\n> --\n> Peter Geoghegan\n>",
"msg_date": "Tue, 7 Jun 2022 17:00:45 -0700",
"msg_from": "Chinmay Kanchi <cgkanchi@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An inverted index using roaring bitmaps"
},
{
"msg_contents": "On Tue, Jun 7, 2022 at 5:00 PM Chinmay Kanchi <cgkanchi@gmail.com> wrote:\n> I personally don't think this is a great replacement for a BTree index - for one thing, it isn't really possible to use this approach beyond equality comparisons (for scalars) or \"contains\"-type operations for arrays (or tsvectors, jsonb, etc).\n\nWhy not? A bitmap is just a way of representing TIDs, that is often\nvery space efficient once compression is applied. In principle a\nbitmap index can do anything that a B-Tree index can do, at least for\nSELECTs.\n\nBitmap indexes are considered totally distinct to B-Tree indexes in\nsome DB systems because the concurrency control characteristics (the\nuse of heavyweight locks to protect the logical contents of the\ndatabase) are very different . I think that this is because the index\nstructure itself is so dense that the only practical approach that's\ncompatible with 2PL style concurrency control (or versions to MVCC\nbased on 2PL) is to lock a large number of TIDs at the same time. This\ncan lead to deadlocks with even light concurrent modifications --\nwhich would never happen with an equivalent B-Tree index. But the data\nstructure is nevertheless more similar than different.\n\nI probably wouldn't want to have a technique like roaring bitmap\ncompression of TIDs get applied by default within Postgres B-Trees,\nbut the reasons for that are pretty subtle. I might still advocate\n*optional* TID list compression in Postgres B-Trees, which might even\nbe something we'd end up calling a bitmap index, that would only be\nrecommended for use in data warehousing scenarios. Extreme TID list\ncompression isn't free -- it really isn't desirable when there are\nmany concurrent modifications to relatively few index pages, as is\ncommon in OLTP applications. 
That's one important reason why bitmap\nindexes are generally only used in data warehousing environments,\nwhere the downside doesn't really matter, but the upside pays for\nitself (usually a fact table will have several bitmap indexes that are\nusually combined, not just one).\n\n> We ultimately abandoned that project because of the difficulty of keeping the bitmaps in sync with changing data, which would no longer be an issue, if this was built as an index.\n\nI think that it would in fact still be an issue if this was built as\nan index. There is a reason why the concurrency characteristics of\nbitmap indexes make them unsuitable for OLTP apps, which seems\nrelated. That wouldn't mean that it wouldn't still be worth it, but it\nwould definitely be a real downside with some workloads. B-Tree\ndeduplication is designed to have very little overhead with mixed\nreads and writes, so it's a performance all-rounder that can still be\nbeaten by specialized techniques that come with their own downsides.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 7 Jun 2022 18:13:13 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: An inverted index using roaring bitmaps"
}
] |
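The container scheme behind roaring bitmaps, which the thread above leans on, can be sketched briefly. This is an illustrative Python toy, not Postgres code and not the full Roaring format (which adds run containers and many optimizations): 32-bit TIDs are partitioned by their high 16 bits, and each chunk is stored either as a sorted array of 16-bit values when sparse or as a fixed 8 KB bitset when dense; 4096 entries is the crossover at which the two representations cost the same.

```python
# Toy sketch of roaring-style compression of TIDs (illustration only).
ARRAY_LIMIT = 4096  # above this, a 2^16-bit bitset is smaller than an array

def compress(tids):
    """Partition 32-bit ints by high 16 bits; pick a container per chunk."""
    chunks = {}
    for tid in sorted(set(tids)):
        chunks.setdefault(tid >> 16, []).append(tid & 0xFFFF)
    containers = {}
    for hi, lows in chunks.items():
        if len(lows) <= ARRAY_LIMIT:
            containers[hi] = ("array", lows)  # 2 bytes per entry
        else:
            bits = bytearray(8192)            # fixed 2^16-bit bitset
            for lo in lows:
                bits[lo >> 3] |= 1 << (lo & 7)
            containers[hi] = ("bitmap", bits)
    return containers

def contains(containers, tid):
    """Membership test against the compressed representation."""
    kind, data = containers.get(tid >> 16, ("array", []))
    lo = tid & 0xFFFF
    if kind == "array":
        return lo in data
    return bool(data[lo >> 3] & (1 << (lo & 7)))

def size_bytes(containers):
    return sum(2 * len(d) if kind == "array" else len(d)
               for kind, d in containers.values())
```

A dense chunk of 65,536 consecutive TIDs costs 8 KB here versus 128 KB as raw 16-bit entries, the kind of density payoff the thread associates with data-warehousing workloads rather than OLTP.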
[
{
"msg_contents": "If I want to read a file that I'm not sure of the existence but I want\nto read the whole file if exists, I would call\npg_read_binary_file('path', 0, -1, true) but unfortunately this\ndoesn't work.\n\nDoes it make sense to change the function so as to accept the\nparameter specification above? Or the arguments could be ('path',\nnull, null, true) but (0,-1) is simpler considering the\ncharacteristics of the function.\n\n(We could also rearrange the the parameter order as \"filename,\nmissing_ok, offset, length\" but that is simply confusing..)\n\nIf it is, pg_read_file() is worth receive the same modification and\nI'll post the version containing doc part.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 07 Jun 2022 16:05:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Inconvenience of pg_read_binary_file()"
},
{
"msg_contents": "On Tue, Jun 07, 2022 at 04:05:20PM +0900, Kyotaro Horiguchi wrote:\n> If I want to read a file that I'm not sure of the existence but I want\n> to read the whole file if exists, I would call\n> pg_read_binary_file('path', 0, -1, true) but unfortunately this\n> doesn't work.\n\nYeah, the \"normal\" cases that I have seen in the past just used an\nextra call to pg_stat_file() to retrieve the size of the file before\nreading it, but arguably it does not help if the file gets extended\nbetween the stat() call and the read() call (we don't need to care\nabout this case with pg_rewind that has been the reason why the\nmissing_ok argument was introduced first, for one, as file extensions\ndon't matter as we'd replay from the LSN point where the rewind\nbegan, adding the new blocks at replay).\n\nThere is also an argument for supporting negative values rather than\njust -1. For example -2 could mean to read all the file except the\nlast byte. Or you could have an extra flavor, as of\npg_read_file(text, bool) to read the whole by default. Supporting\njust -1 as special value for the amount of data to read would be\nconfusing IMO. So I would tend to choose for a set of arguments based\non (text, bool).\n--\nMichael",
"msg_date": "Tue, 7 Jun 2022 16:33:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Inconvenience of pg_read_binary_file()"
},
{
"msg_contents": "At Tue, 7 Jun 2022 16:33:53 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Jun 07, 2022 at 04:05:20PM +0900, Kyotaro Horiguchi wrote:\n> > If I want to read a file that I'm not sure of the existence but I want\n> > to read the whole file if exists, I would call\n> > pg_read_binary_file('path', 0, -1, true) but unfortunately this\n> > doesn't work.\n> \n> Yeah, the \"normal\" cases that I have seen in the past just used an\n> extra call to pg_stat_file() to retrieve the size of the file before\n> reading it, but arguably it does not help if the file gets extended\n> between the stat() call and the read() call (we don't need to care\n> about this case with pg_rewind that has been the reason why the\n> missing_ok argument was introduced first, for one, as file extensions\n> don't matter as we'd replay from the LSN point where the rewind\n> began, adding the new blocks at replay).\n\nSure.\n\n> There is also an argument for supporting negative values rather than\n> just -1. For example -2 could mean to read all the file except the\n> last byte. Or you could have an extra flavor, as of\n> pg_read_file(text, bool) to read the whole by default. Supporting\n> just -1 as special value for the amount of data to read would be\n> confusing IMO. 
So I would tend to choose for a set of arguments based\n> on (text, bool).\n\nI'm not sure about the negative length smaller than -1, since I don't\nfind an appropriate offset that represents (last byte + 1).\n\npg_read_file(text, bool) makes sense to me, but it doesn't seem like\nto be able to share C function with other variations.\npg_read_binary_file() need to accept some out-of-range value for\noffset or length to signal that offset and length are not specified.\n\nIn the attached, pg_read(_binary)_file_all() is modified so that they\nhave a different body from pg_read(_binary)_file().\n\n(function comments needs to be edited and docs are needed)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 07 Jun 2022 17:29:31 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconvenience of pg_read_binary_file()"
},
{
"msg_contents": "At Tue, 07 Jun 2022 17:29:31 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> pg_read_file(text, bool) makes sense to me, but it doesn't seem like\n> to be able to share C function with other variations.\n> pg_read_binary_file() need to accept some out-of-range value for\n> offset or length to signal that offset and length are not specified.\n\nIn this version all the polypmorphic variations share the same body\nfunction. I tempted to add tail-reading feature but it would be\nanother feature.\n\n> (function comments needs to be edited and docs are needed)\n\n- Simplified the implementation (by complexifying argument handling..).\n- REVOKEd EXECUTE from the new functions.\n- Edited the signature of the two functions.\n\n> - pg_read_file ( filename text [, offset bigint, length bigint [, missing_ok boolean ]] ) → text\n> + pg_read_file ( filename text [, offset bigint, length bigint ] [, missing_ok boolean ] ) → text\n\nAnd registered this to the next CF.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 30 Jun 2022 10:59:52 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconvenience of pg_read_binary_file()"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> - Simplified the implementation (by complexifying argument handling..).\n> - REVOKEd EXECUTE from the new functions.\n> - Edited the signature of the two functions.\n\n>> - pg_read_file ( filename text [, offset bigint, length bigint [, missing_ok boolean ]] ) \u001b$B\"*\u001b(B text\n>> + pg_read_file ( filename text [, offset bigint, length bigint ] [, missing_ok boolean ] ) \u001b$B\"*\u001b(B text\n\nI'm okay with allowing this variant of the functions. Since there's\nno implicit cast between bigint and bool, plus the fact that you\ncan't give just offset without length, there shouldn't be much risk\nof confusion as to which variant to invoke.\n\nI don't really like the implementation style though. That mess of\nPG_NARGS tests is illegible code already and this makes it worse.\nI think it'd be way cleaner to have all the PG_GETARG calls in the\nbottom SQL-callable functions (which are already one-per-signature)\nand then pass them on to a common function that has an ordinary C\ncall signature, along the lines of\n\nstatic Datum\npg_read_file_common(text *filename_t,\n int64 seek_offset, int64 bytes_to_read,\n bool read_to_eof, bool missing_ok)\n{\n if (read_to_eof)\n bytes_to_read = -1; // or just Assert that it's -1 ?\n else if (bytes_to_read < 0)\n ereport(...);\n ...\n}\n\nDatum\npg_read_file_off_len(PG_FUNCTION_ARGS)\n{\n text *filename_t = PG_GETARG_TEXT_PP(0);\n int64 seek_offset = PG_GETARG_INT64(1);\n int64 bytes_to_read = PG_GETARG_INT64(2);\n\n return pg_read_file_common(filename_t, seek_offset, bytes_to_read,\n false, false);\n}\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Jul 2022 16:22:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Inconvenience of pg_read_binary_file()"
},
{
"msg_contents": "Thanks for taking a look!\n\nAt Thu, 28 Jul 2022 16:22:17 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > - Simplified the implementation (by complexifying argument handling..).\n> > - REVOKEd EXECUTE from the new functions.\n> > - Edited the signature of the two functions.\n> \n> >> - pg_read_file ( filename text [, offset bigint, length bigint [, missing_ok boolean ]] ) → text\n> >> + pg_read_file ( filename text [, offset bigint, length bigint ] [, missing_ok boolean ] ) → text\n> \n> I'm okay with allowing this variant of the functions. Since there's\n> no implicit cast between bigint and bool, plus the fact that you\n> can't give just offset without length, there shouldn't be much risk\n> of confusion as to which variant to invoke.\n\nGrad to hear that.\n\n> I don't really like the implementation style though. That mess of\n> PG_NARGS tests is illegible code already and this makes it worse.\n\nAh..., I have to admit that I faintly felt that feeling while on it...\n\n> I think it'd be way cleaner to have all the PG_GETARG calls in the\n> bottom SQL-callable functions (which are already one-per-signature)\n> and then pass them on to a common function that has an ordinary C\n> call signature, along the lines of\n> \n> static Datum\n> pg_read_file_common(text *filename_t,\n> int64 seek_offset, int64 bytes_to_read,\n> bool read_to_eof, bool missing_ok)\n> {\n> if (read_to_eof)\n> bytes_to_read = -1; // or just Assert that it's -1 ?\n\nI prefer assertion since that parameter cannot be passed by users.\n\n> else if (bytes_to_read < 0)\n> ereport(...);\n> ...\n> }\n\nThis function cannot return NULL directly. Without the ability to\nreturn NULL, it is pointless for the function to return Datum. 
In the\nattached the function returns text*.\n\n> Datum\n> pg_read_file_off_len(PG_FUNCTION_ARGS)\n> {\n> text *filename_t = PG_GETARG_TEXT_PP(0);\n> int64 seek_offset = PG_GETARG_INT64(1);\n> int64 bytes_to_read = PG_GETARG_INT64(2);\n> \n> return pg_read_file_common(filename_t, seek_offset, bytes_to_read,\n> false, false);\n\nAs the result this function need to return NULL or TEXT_P according to\nthe returned value from pg_read_file_common.\n\n+\tif (!ret)\n+\t\tPG_RETURN_NULL();\n+\n+\tPG_RETURN_TEXT_P(ret);\n> }\n\n# I'm tempted to call read_text_file() directly from each SQL functions..\n\nPlease find the attached. I added some regression tests for both\npg_read_file() and pg_read_binary_file().\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 29 Jul 2022 16:21:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconvenience of pg_read_binary_file()"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> Please find the attached. I added some regression tests for both\n> pg_read_file() and pg_read_binary_file().\n\nYeah, I definitely find this way cleaner even if it's a bit more verbose.\n\nI think that the PG_RETURN_NULL code paths are not reachable in the\nwrappers that don't have missing_ok. I concur with your decision\nto write them all the same, though.\n\nPushed after some fooling with the docs and test cases. (Notably,\nI do not think we can assume that pg_hba.conf exists in $PGDATA; some\ninstallations keep it elsewhere. I used postgresql.auto.conf instead.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Jul 2022 15:44:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Inconvenience of pg_read_binary_file()"
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 03:44:25PM -0400, Tom Lane wrote:\n> Pushed after some fooling with the docs and test cases. (Notably,\n> I do not think we can assume that pg_hba.conf exists in $PGDATA; some\n> installations keep it elsewhere. I used postgresql.auto.conf instead.)\n\nAre you sure that this last part is a good idea? We don't force the\ncreation of postgresql.auto.conf when starting a server, so this\nimpacts the portability of the tests with installcheck if one decides\nto remove it from the data folder, and it sounds plausible to me that\nsome distributions do exactly that..\n\nI guess that you could rely on config_file or hba_file instead.\n--\nMichael",
"msg_date": "Sat, 30 Jul 2022 11:37:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Inconvenience of pg_read_binary_file()"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Jul 29, 2022 at 03:44:25PM -0400, Tom Lane wrote:\n>> Pushed after some fooling with the docs and test cases. (Notably,\n>> I do not think we can assume that pg_hba.conf exists in $PGDATA; some\n>> installations keep it elsewhere. I used postgresql.auto.conf instead.)\n\n> Are you sure that this last part is a good idea? We don't force the\n> creation of postgresql.auto.conf when starting a server, so this\n> impacts the portability of the tests with installcheck if one decides\n> to remove it from the data folder, and it sounds plausible to me that\n> some distributions do exactly that..\n\nHm. I considered reading PG_VERSION instead, or postmaster.pid.\nPG_VERSION would be a very boring test case, but it should certainly\nbe present in $PGDATA.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Jul 2022 23:35:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Inconvenience of pg_read_binary_file()"
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 11:35:36PM -0400, Tom Lane wrote:\n> Hm. I considered reading PG_VERSION instead, or postmaster.pid.\n> PG_VERSION would be a very boring test case, but it should certainly\n> be present in $PGDATA.\n\nPG_VERSION would be simpler. Looking at postmaster.pid would require\na lookup at external_pid_file, and as it is not set by default you\nwould need to add some extra logic in the tests where\nexternal_pid_file = NULL <=> PGDATA/postmaster.pid.\n--\nMichael",
"msg_date": "Sat, 30 Jul 2022 14:51:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Inconvenience of pg_read_binary_file()"
},
{
"msg_contents": "On 2022-Jul-30, Michael Paquier wrote:\n\n> PG_VERSION would be simpler. Looking at postmaster.pid would require\n> a lookup at external_pid_file, and as it is not set by default you\n> would need to add some extra logic in the tests where\n> external_pid_file = NULL <=> PGDATA/postmaster.pid.\n\nHmm, no? as I recall external_pid_file is an *additional* PID file; it\ndoesn't supplant postmaster.pid.\n\npostmaster.opts is also an option.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"E pur si muove\" (Galileo Galilei)\n\n\n",
"msg_date": "Sat, 30 Jul 2022 13:47:05 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Inconvenience of pg_read_binary_file()"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Jul-30, Michael Paquier wrote:\n>> PG_VERSION would be simpler. Looking at postmaster.pid would require\n>> a lookup at external_pid_file, and as it is not set by default you\n>> would need to add some extra logic in the tests where\n>> external_pid_file = NULL <=> PGDATA/postmaster.pid.\n\n> Hmm, no? as I recall external_pid_file is an *additional* PID file; it\n> doesn't supplant postmaster.pid.\n\nRight. postmaster.pid absolutely should be there if the postmaster\nis up (and if it ain't, you're going to have lots of other difficulty\nin running the regression tests...). It doesn't feel quite as static\nas PG_VERSION, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Jul 2022 10:24:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Inconvenience of pg_read_binary_file()"
},
{
"msg_contents": "At Sat, 30 Jul 2022 10:24:39 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2022-Jul-30, Michael Paquier wrote:\n> >> PG_VERSION would be simpler. Looking at postmaster.pid would require\n> >> a lookup at external_pid_file, and as it is not set by default you\n> >> would need to add some extra logic in the tests where\n> >> external_pid_file = NULL <=> PGDATA/postmaster.pid.\n> \n> > Hmm, no? as I recall external_pid_file is an *additional* PID file; it\n> > doesn't supplant postmaster.pid.\n> \n> Right. postmaster.pid absolutely should be there if the postmaster\n> is up (and if it ain't, you're going to have lots of other difficulty\n> in running the regression tests...). It doesn't feel quite as static\n> as PG_VERSION, though.\n\nThanks for committing it. Also the revised test (being suggested by\nMichael) looks good.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 01 Aug 2022 17:41:48 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconvenience of pg_read_binary_file()"
}
] |
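The shape the thread above converged on can be mirrored outside C: thin per-signature entry points over one common worker, a wholesale-read variant that skips offset/length entirely rather than overloading -1, and missing_ok mapping a missing file to NULL. The following Python sketch is an analogy to that committed C structure, not Postgres code; the function names and the negative-offset-from-EOF behavior are my assumptions for illustration.

```python
import os

def _read_file_common(path, seek_offset, bytes_to_read, read_to_eof, missing_ok):
    """Common worker, mirroring the shape Tom Lane suggested for the C code."""
    if read_to_eof:
        assert bytes_to_read == -1          # internal invariant, not user input
    elif bytes_to_read < 0:
        raise ValueError("requested length cannot be negative")
    try:
        f = open(path, "rb")
    except FileNotFoundError:
        if missing_ok:
            return None                     # stands in for SQL NULL
        raise
    with f:
        if seek_offset < 0:                 # negative offset counts from EOF
            f.seek(seek_offset, os.SEEK_END)
        else:
            f.seek(seek_offset)
        return f.read() if read_to_eof else f.read(bytes_to_read)

# Thin per-signature entry points, one per SQL variant.
def read_file(path):
    return _read_file_common(path, 0, -1, True, False)

def read_file_all_missing(path, missing_ok):
    return _read_file_common(path, 0, -1, True, missing_ok)

def read_file_off_len(path, offset, length, missing_ok=False):
    return _read_file_common(path, offset, length, False, missing_ok)
```

Centralizing validation in one worker is what removes the PG_NARGS branching the thread objected to: each entry point only unpacks its own arguments and delegates.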
[
{
"msg_contents": "Hi\n\npgbench tests fails, probably due using czech locale\n\nAll tests successful.\nFiles=2, Tests=633, 7 wallclock secs ( 0.14 usr 0.02 sys + 1.91 cusr\n 1.05 csys = 3.12 CPU)\nResult: PASS\nmake[2]: Opouští se adresář\n„/home/pavel/src/postgresql.master/src/bin/pgbench“\nmake -C psql check\nmake[2]: Vstupuje se do adresáře\n„/home/pavel/src/postgresql.master/src/bin/psql“\necho \"+++ tap check in src/bin/psql +++\" && rm -rf\n'/home/pavel/src/postgresql.master/src/bin/psql'/tmp_check &&\n/usr/bin/mkdir -p\n'/home/pavel/src/postgresql.master/src/bin/psql'/tmp_check && cd . &&\nTESTDIR='/home/pavel/src/postgresql.master/src/bin/psql'\nPATH=\"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/bin:/home/pavel/src/postgresql.master/src/bin/psql:$PATH\"\nLD_LIBRARY_PATH=\"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/lib\"\n PGPORT='65432'\nPG_REGRESS='/home/pavel/src/postgresql.master/src/bin/psql/../../../src/test/regress/pg_regress'\n/usr/bin/prove -I ../../../src/test/perl/ -I . t/*.pl\n+++ tap check in src/bin/psql +++\nt/001_basic.pl ........... 15/?\n# Failed test '\\timing with successful query: matches'\n# at t/001_basic.pl line 83.\n# '1\n# Time: 0,717 ms'\n# doesn't match '(?^m:^1$\n# ^Time: \\d+\\.\\d\\d\\d ms)'\n\n# Failed test '\\timing with query error: timing output appears'\n# at t/001_basic.pl line 95.\n# 'Time: 0,293 ms'\n# doesn't match '(?^m:^Time: \\d+\\.\\d\\d\\d ms)'\n# Looks like you failed 2 tests of 58.\nt/001_basic.pl ........... Dubious, test returned 2 (wstat 512, 0x200)\nFailed 2/58 subtests\nt/010_tab_completion.pl .. ok\nt/020_cancel.pl .......... 
ok\n\nTest Summary Report\n-------------------\nt/001_basic.pl (Wstat: 512 (exited 2) Tests: 58 Failed: 2)\n Failed tests: 28, 30\n Non-zero exit status: 2\nFiles=3, Tests=146, 6 wallclock secs ( 0.07 usr 0.01 sys + 3.15 cusr\n 1.14 csys = 4.37 CPU)\nResult: FAIL\nmake[2]: *** [Makefile:87: check] Chyba 1\nmake[2]: Opouští se adresář „/home/pavel/src/postgresql.master/src/bin/psql“\nmake[1]: *** [Makefile:43: check-psql-recurse] Chyba 2\nmake[1]: Opouští se adresář „/home/pavel/src/postgresql.master/src/bin“\nmake: *** [GNUmakefile:71: check-world-src/bin-recurse] Chyba 2RegardsPavel",
"msg_date": "Tue, 7 Jun 2022 10:52:45 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "broken regress tests on fedora 36"
},
{
"msg_contents": "On Tue, Jun 07, 2022 at 10:52:45AM +0200, Pavel Stehule wrote:\n> # Failed test '\\timing with query error: timing output appears'\n> # at t/001_basic.pl line 95.\n> # 'Time: 0,293 ms'\n> # doesn't match '(?^m:^Time: \\d+\\.\\d\\d\\d ms)'\n> # Looks like you failed 2 tests of 58.\n\nFun. The difference is in the separator: dot vs comma. This should\nfail with French the same way. Perhaps it would fail differently in\nother languages? There is no need to be that precise with the regex\nIMO, so I would just cut the regex with the number, checking only the\nunit at the end.\n--\nMichael",
"msg_date": "Tue, 7 Jun 2022 21:56:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: broken regress tests on fedora 36"
},
{
"msg_contents": "\nOn 2022-06-07 Tu 08:56, Michael Paquier wrote:\n> On Tue, Jun 07, 2022 at 10:52:45AM +0200, Pavel Stehule wrote:\n>> # Failed test '\\timing with query error: timing output appears'\n>> # at t/001_basic.pl line 95.\n>> # 'Time: 0,293 ms'\n>> # doesn't match '(?^m:^Time: \\d+\\.\\d\\d\\d ms)'\n>> # Looks like you failed 2 tests of 58.\n> Fun. The difference is in the separator: dot vs comma. This should\n> fail with French the same way. Perhaps it would fail differently in\n> other languages? There is no need to be that precise with the regex\n> IMO, so I would just cut the regex with the number, checking only the\n> unit at the end.\n\n\nor just replace '\\.' with '[.,]'\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 7 Jun 2022 10:54:07 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: broken regress tests on fedora 36"
},
{
"msg_contents": "On Tue, Jun 07, 2022 at 10:54:07AM -0400, Andrew Dunstan wrote:\n> On 2022-06-07 Tu 08:56, Michael Paquier wrote:\n>> On Tue, Jun 07, 2022 at 10:52:45AM +0200, Pavel Stehule wrote:\n>>> # Failed test '\\timing with query error: timing output appears'\n>>> # at t/001_basic.pl line 95.\n>>> # 'Time: 0,293 ms'\n>>> # doesn't match '(?^m:^Time: \\d+\\.\\d\\d\\d ms)'\n>>> # Looks like you failed 2 tests of 58.\n>> Fun. The difference is in the separator: dot vs comma. This should\n>> fail with French the same way. Perhaps it would fail differently in\n>> other languages? There is no need to be that precise with the regex\n>> IMO, so I would just cut the regex with the number, checking only the\n>> unit at the end.\n> \n> or just replace '\\.' with '[.,]'\n\nI was wondering about other separators actually:\nhttps://en.wikipedia.org/wiki/Decimal_separator#Usage_worldwide\n\nThese two should be enough, though. So changing only that sounds fine\nby me.\n--\nMichael",
"msg_date": "Wed, 8 Jun 2022 08:59:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: broken regress tests on fedora 36"
},
{
"msg_contents": "On 07.06.22 14:56, Michael Paquier wrote:\n> On Tue, Jun 07, 2022 at 10:52:45AM +0200, Pavel Stehule wrote:\n>> # Failed test '\\timing with query error: timing output appears'\n>> # at t/001_basic.pl line 95.\n>> # 'Time: 0,293 ms'\n>> # doesn't match '(?^m:^Time: \\d+\\.\\d\\d\\d ms)'\n>> # Looks like you failed 2 tests of 58.\n> \n> Fun. The difference is in the separator: dot vs comma. This should\n> fail with French the same way. Perhaps it would fail differently in\n> other languages? There is no need to be that precise with the regex\n> IMO, so I would just cut the regex with the number, checking only the\n> unit at the end.\n\nShouldn't we reset the locale setting (LC_NUMERIC?) to a known value? \nWe clearly already do that for other categories, or it wouldn't say \"Time:\".\n\n\n",
"msg_date": "Thu, 9 Jun 2022 15:41:08 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: broken regress tests on fedora 36"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Shouldn't we reset the locale setting (LC_NUMERIC?) to a known value? \n> We clearly already do that for other categories, or it wouldn't say \"Time:\".\n\npg_regress.c and Utils.pm force LC_MESSAGES to C, explaining\n\n\t * Set translation-related settings to English; otherwise psql will\n\t * produce translated messages and produce diffs.\n\nWhile that seems clearly necessary, I'm inclined to think that we\nshould not mess with the user's LC_XXX environment more than we\nabsolutely must. pg_regress only resets the rest of that if you\nsay --no-locale, an option the TAP infrastructure lacks.\n\nIn short, I think the committed fix is better than this proposal.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jun 2022 11:25:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: broken regress tests on fedora 36"
}
] |
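The committed fix took Andrew's suggestion of widening the separator class in the TAP pattern. A quick standalone illustration (Python here rather than the test's Perl, and covering only the dot and comma separators discussed above):

```python
import re

# The original TAP pattern hard-coded a dot; under a Czech or French
# LC_NUMERIC, psql prints "Time: 0,293 ms", so accept either separator.
TIMING_RE = re.compile(r"^Time: \d+[.,]\d\d\d ms$", re.MULTILINE)

for line in ("Time: 0.717 ms", "Time: 0,293 ms"):
    assert TIMING_RE.search(line), line
assert not TIMING_RE.search("Time: 0;293 ms")  # other separators still fail
```

Locales with separators beyond dot and comma would still need a wider class, which is part of why forcing LC_NUMERIC was also floated before the narrower fix was preferred.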
[
{
"msg_contents": "",
"msg_date": "Tue, 7 Jun 2022 18:24:43 +0900",
"msg_from": "sh95119 <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add TAP test for auth_delay extension"
},
{
"msg_contents": "Hi Hackers,\nI just wrote a test for `auth_delay` extension.\nIt's a test which confirms whether there is a delay for a second when\nyou enter the wrong password.\nI sent an email using mutt, but I have a problem and sent it again.\n\n---\nRegards,\nDong Wook Lee.\n\n\n",
"msg_date": "Tue, 7 Jun 2022 18:32:27 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add TAP test for auth_delay extension"
},
{
"msg_contents": "2022년 6월 7일 (화) 오후 6:32, Dong Wook Lee <sh95119@gmail.com>님이 작성:\n>\n> Hi Hackers,\n> I just wrote a test for `auth_delay` extension.\n> It's a test which confirms whether there is a delay for a second when\n> you enter the wrong password.\n> I sent an email using mutt, but I have a problem and sent it again.\n>\n> ---\n> Regards,\n> Dong Wook Lee.\n\nHi,\n\nI have written a test for the auth_delay extension before,\nbut if it is okay, can you review it?\n\n---\nRegards,\nDongWook Lee.\n\n\n",
"msg_date": "Sat, 18 Jun 2022 11:06:02 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add TAP test for auth_delay extension"
},
{
"msg_contents": "On Sat, Jun 18, 2022 at 11:06:02AM +0900, Dong Wook Lee wrote:\n> I have written a test for the auth_delay extension before,\n> but if it is okay, can you review it?\n\n+# check enter wrong password\n+my $t0 = [gettimeofday];\n+test_login($node, 'user_role', \"wrongpass\", 2);\n+my $elapsed = tv_interval($t0, [gettimeofday]);\n+ok($elapsed >= $delay_milliseconds / 1000, \"auth_delay $elapsed seconds\");\n+\n+# check enter correct password\n+my $t0 = [gettimeofday];\n+test_login($node, 'user_role', \"pass\", 0);\n+my $elapsed = tv_interval($t0, [gettimeofday]);\n+ok($elapsed < $delay_milliseconds / 1000, \"auth_delay $elapsed seconds\");\n\nOn a slow machine, I suspect that the second test is going to be\nunstable as it would fail if the login attempt (that succeeds) takes\nmore than $delay_milliseconds. You could increase more\ndelay_milliseconds to leverage that, but it would make the first test\nslower for nothing on faster machines in the case where the\nauthentication attempt has failed. I guess that you could leverage\nthat by using a large value for delay_milliseconds in the second test,\nbecause we are never going to wait. For the first test, you could on\nthe contrary use a much lower value, still on slow machines it may not\ntest what the code path of auth_delay you are willing to test.\n\nAs a whole, I am not sure that this is really worth spending cycles on\nwhen running check-world or similar, and the code of the extension is\ntrivial.\n--\nMichael",
"msg_date": "Sat, 18 Jun 2022 12:07:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add TAP test for auth_delay extension"
},
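Michael's point about slow machines generalizes: of the two timing checks, only the lower bound on the failure path is guaranteed by the sleep itself, while the upper bound on the success path holds only if the configured delay dwarfs any plausible login time. A minimal Python sketch of that asymmetry follows; `fake_login` stands in for the TAP test's login attempt and is not the extension's code.

```python
import time

def fake_login(password, delay_s):
    """Stand-in for an authentication attempt under auth_delay."""
    ok = (password == "pass")
    if not ok:
        time.sleep(delay_s)   # auth_delay only sleeps on failure
    return ok

def elapsed(fn):
    t0 = time.monotonic()
    fn()
    return time.monotonic() - t0

FAIL_DELAY = 0.05     # small: the lower-bound check cannot be broken by a slow box
SUCCESS_DELAY = 60.0  # huge: a successful login will never take this long

t_fail = elapsed(lambda: fake_login("wrongpass", FAIL_DELAY))
assert t_fail >= FAIL_DELAY    # robust: the sleep guarantees the bound

t_ok = elapsed(lambda: fake_login("pass", SUCCESS_DELAY))
assert t_ok < SUCCESS_DELAY    # robust only because the bound is generous
```

This is exactly the split Michael suggests: a low delay where only `elapsed >= delay` is asserted, and a large delay where `elapsed < delay` is asserted.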
{
"msg_contents": "On 22/06/18 12:07오후, Michael Paquier wrote:\n> On Sat, Jun 18, 2022 at 11:06:02AM +0900, Dong Wook Lee wrote:\n> > I have written a test for the auth_delay extension before,\n> > but if it is okay, can you review it?\n> \n> +# check enter wrong password\n> +my $t0 = [gettimeofday];\n> +test_login($node, 'user_role', \"wrongpass\", 2);\n> +my $elapsed = tv_interval($t0, [gettimeofday]);\n> +ok($elapsed >= $delay_milliseconds / 1000, \"auth_delay $elapsed seconds\");\n> +\n> +# check enter correct password\n> +my $t0 = [gettimeofday];\n> +test_login($node, 'user_role', \"pass\", 0);\n> +my $elapsed = tv_interval($t0, [gettimeofday]);\n> +ok($elapsed < $delay_milliseconds / 1000, \"auth_delay $elapsed seconds\");\n> \n> On a slow machine, I suspect that the second test is going to be\n> unstable as it would fail if the login attempt (that succeeds) takes\n> more than $delay_milliseconds. You could increase more\n> delay_milliseconds to leverage that, but it would make the first test\n> slower for nothing on faster machines in the case where the\n> authentication attempt has failed. I guess that you could leverage\n> that by using a large value for delay_milliseconds in the second test,\n> because we are never going to wait. For the first test, you could on\n> the contrary use a much lower value, still on slow machines it may not\n> test what the code path of auth_delay you are willing to test.\n> \n\nThank you for your valuable advice I didn't think about the slow system.\nTherefore, in the case of the second test, the time was extended a little.\n\n> As a whole, I am not sure that this is really worth spending cycles on\n> when running check-world or similar, and the code of the extension is\n> trivial.\n\nEven though it is trivial, I think it would be better if there was a test.\n\n> --\n> Michael",
"msg_date": "Mon, 20 Jun 2022 21:43:37 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add TAP test for auth_delay extension"
},
{
"msg_contents": "Dong Wook Lee <sh95119@gmail.com> writes:\n> On 22/06/18 12:07오후, Michael Paquier wrote:\n>> As a whole, I am not sure that this is really worth spending cycles on\n>> when running check-world or similar, and the code of the extension is\n>> trivial.\n\n> Even though it is trivial, I think it would be better if there was a test.\n\nI looked at this and concur with Michael's evaluation. A new TAP module\nis quite an expensive thing, since it incurs (at least) an initdb run.\nIn this case, the need to delay a long time to ensure that the test\ndoesn't fail on slow systems makes that even worse. I don't think\nI want to incur these costs every time I run check-world in order to\ntest a pg_usleep() call, which is what this module boils down to.\n\nIf we had some sort of \"attic\" of tests that aren't run by either\ncheck-world or most buildfarm members, perhaps this would be worth\nputting there. But we don't.\n\nOne idea could be to install the test but leave the TAP_TESTS line in\nthe Makefile commented out. Then, somebody who was actively working on\nthe module could enable the test easily enough (without even modifying\nthat file: just do \"make check TAP_TESTS=1\"), but otherwise we don't\npay for it. However, I'm not sure how well that plan will translate\nto the upcoming meson build system.\n\nIf we don't do it like that, I'd vote for rejecting the patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Jul 2022 17:13:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add TAP test for auth_delay extension"
},
{
"msg_contents": "On Sat, Jul 30, 2022 at 05:13:12PM -0400, Tom Lane wrote:\n> If we don't do it like that, I'd vote for rejecting the patch.\n\nYep. Done this way.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 14:35:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add TAP test for auth_delay extension"
}
] |
[
{
"msg_contents": "The present implementation of JSON_TABLE sets the collation of the \noutput columns to the default collation if the specified data type is \ncollatable. Why don't we use the collation of the type directly? This \nwould make domains with attached collations work correctly.\n\nSee attached patch for how to change this. I hacked up a regression \ntest case to demonstrate this.",
"msg_date": "Tue, 7 Jun 2022 15:19:01 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "JSON_TABLE output collations"
},
{
"msg_contents": "\nOn 2022-06-07 Tu 09:19, Peter Eisentraut wrote:\n>\n> The present implementation of JSON_TABLE sets the collation of the\n> output columns to the default collation if the specified data type is\n> collatable. Why don't we use the collation of the type directly? \n> This would make domains with attached collations work correctly.\n>\n> See attached patch for how to change this. I hacked up a regression\n> test case to demonstrate this.\n\n\n\nLooks reasonable.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 7 Jun 2022 10:51:35 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: JSON_TABLE output collations"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nWhile exploring some code in logical replication worker\nimplementation, I noticed that we're accessing an invalid memory while\ntraversing LogicalRepCtx->workers[i].\nFor the above structure, we're allocating\nmax_logical_replication_workers times LogicalRepWorker amount of\nmemory in ApplyLauncherShmemSize. But, in the for loop, we're\naccessing the max_logical_replication_workers + 1 location which is\nresulting in random crashes.\n\nPlease find the patch that fixes the issue. I'm not sure whether we\nshould add a regression test for the same.\n\n-- \nThanks & Regards,\nKuntal Ghosh",
"msg_date": "Tue, 7 Jun 2022 22:36:23 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Invalid memory access in pg_stat_get_subscription"
},
{
"msg_contents": "Kuntal Ghosh <kuntalghosh.2007@gmail.com> writes:\n> While exploring some code in logical replication worker\n> implementation, I noticed that we're accessing an invalid memory while\n> traversing LogicalRepCtx->workers[i].\n> For the above structure, we're allocating\n> max_logical_replication_workers times LogicalRepWorker amount of\n> memory in ApplyLauncherShmemSize. But, in the for loop, we're\n> accessing the max_logical_replication_workers + 1 location which is\n> resulting in random crashes.\n\nI concur that that's a bug, but eyeing the code, it seems like an\nactual crash would be improbable. Have you seen one? Can you\nreproduce it?\n\n> Please find the patch that fixes the issue. I'm not sure whether we\n> should add a regression test for the same.\n\nHow would you make a stable regression test for that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Jun 2022 15:14:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Invalid memory access in pg_stat_get_subscription"
},
{
"msg_contents": "Hello Tom,\n\nOn Wed, Jun 8, 2022 at 12:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Kuntal Ghosh <kuntalghosh.2007@gmail.com> writes:\n> > While exploring some code in logical replication worker\n> > implementation, I noticed that we're accessing an invalid memory while\n> > traversing LogicalRepCtx->workers[i].\n> > For the above structure, we're allocating\n> > max_logical_replication_workers times LogicalRepWorker amount of\n> > memory in ApplyLauncherShmemSize. But, in the for loop, we're\n> > accessing the max_logical_replication_workers + 1 location which is\n> > resulting in random crashes.\n>\n> I concur that that's a bug, but eyeing the code, it seems like an\n> actual crash would be improbable. Have you seen one? Can you\n> reproduce it?\nThank you for looking into it. Unfortunately, I'm not able to\nreproduce the crash, but I've seen one crash while executing the\nfunction. The crash occurred at the following line:\n> if (!worker.proc || !IsBackendPid(worker.proc->pid))\n(gdb) p worker.proc\n$6 = (PGPROC *) 0x2bf0b9\nThe PGPROC structure was pointing to an invalid memory location.\n\n-- \nThanks & Regards,\nKuntal Ghosh\n\n\n",
"msg_date": "Wed, 8 Jun 2022 18:28:26 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Invalid memory access in pg_stat_get_subscription"
}
] |
[
{
"msg_contents": "Hi everyone:\n\nI'd like to propose a change to PostgreSQL to allow the creation of a foreign\nkey constraint referencing a superset of uniquely constrained columns.\n\nAs it currently stands:\n\n CREATE TABLE foo (a integer PRIMARY KEY, b integer);\n CREATE TABLE bar (x integer, y integer,\n FOREIGN KEY (x, y) REFERENCES foo(a, b));\n > ERROR: there is no unique constraint matching given keys for referenced\n table \"foo\"\n\nDespite the fact that in \"foo\", the combination of columns (a, b) is guaranteed\nto be unique by virtue of being a superset of the primary key (a).\n\nThis capability has been requested before, at least once in 2004 [1] and again\nin 2021 [2].\n\nTo illustrate when it'd be useful to define such a foreign key constraint,\nconsider a database that will store graphs (consisting of nodes and edges) where\ngraphs are discrete and intergraph edges are prohibited:\n\n CREATE TABLE graph (\n id INTEGER PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY\n );\n\n CREATE TABLE node (\n id INTEGER PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,\n graph_id INTEGER NOT NULL REFERENCES graph(id)\n );\n\n CREATE TABLE edge (\n graph_id INTEGER,\n source_id INTEGER,\n FOREIGN KEY (source_id, graph_id) REFERENCES node(id, graph_id),\n\n target_id INTEGER,\n FOREIGN KEY (target_id, graph_id) REFERENCES node(id, graph_id),\n\n PRIMARY KEY (graph_id, source_id, target_id)\n );\n\nThis schema is unsupported by PostgreSQL absent this constraint:\n\n ALTER TABLE node ADD UNIQUE (id, graph_id);\n\nHowever, this constraint, as well as its underlying unique index, is superfluous\nas node(id) itself is unique. Its addition serves no semantic purpose but incurs\ncost of additional on disk storage and update time. 
Note the prohibition of\nintergraph edges isn't enforceable on \"edge\" without the composite foreign keys\n(or triggers).\n\nAn alternative approach is to redefine node's PRIMARY KEY as (id, graph_id).\nHowever, this would force every table referring to \"node\" to duplicate both\ncolumns into their schema, even when a singular \"node_id\" would suffice. This is\nundesirable if there are many tables referring to \"node\" that have no such\nintergraph restrictions and few that do.\n\nWhile it can be argued that this schema contains some degree of denormalization,\nit isn't uncommon and a recent patch was merged to support exactly this kind of\ndesign [3].\n\nIn that case, the SET NULL and SET DEFAULT referential actions gained support\nfor an explicit column list to accommodate this type of design.\n\nA problem evinced by Tom Lane in the 2004 discussion on this was that, were this\npermitted, the index supporting a foreign key constraint could be ambiguous:\n\n > I think one reason for this is that otherwise it's not clear which\n > unique constraint the FK constraint depends on. Consider\n >\n > create table a (f1 int unique, f2 int unique);\n >\n > create table b (f1 int, f2 int,\n > foreign key (f1,f2) references a(f1,f2));\n >\n > How would you decide which constraint to make the FK depend on?\n > It'd be purely arbitrary.\n\nI propose that this problem be addressed, with an extension to the SQL standard,\nas follows in the definition of a foreign key constraint:\n\n1. Change the restriction on the referenced columns in a foreign key constraint\n to:\n\n The referenced columns must be a *superset* (the same, or a strict superset)\n of the columns of a non-deferrable unique or primary key index on the\n referenced table.\n\n2. 
The FOREIGN KEY constraint syntax gains a [ USING INDEX index_name ] clause\n optionally following the referenced column list.\n\n The index specified by this clause is used to support the foreign key\n constraint, and it must be a non-deferrable unique or primary key index on\n the referenced table compatible with the referenced columns.\n\n Here, compatible means that the columns of the index are a subset (the same,\n or a strict subset) of the referenced columns.\n\n3. If the referenced columns are the same as the columns of such a unique (or\n primary key) index on the referenced table, then the behavior in PostgreSQL\n doesn't change.\n\n4. If the referenced columns are a strict superset of the columns of such an\n index on the referenced table, then:\n 1. If the primary key of the referenced table is a strict subset of the\n referenced columns, then its index is used to support the foreign key if\n no USING INDEX clause is present.\n 2. Otherwise, the USING INDEX clause is required.\n\nI believe that this scheme is unambiguous and should stably round trip a dump\nand restore. In my previous example, the foreign key constraints could then be\ndefined as:\n\n FOREIGN KEY (source_id, graph_id) REFERENCES node(id, graph_id),\n FOREIGN KEY (target_id, graph_id) REFERENCES node(id, graph_id),\n\nOr alternatively:\n\n FOREIGN KEY (source_id, graph_id) REFERENCES node(id, graph_id)\n USING INDEX node_pkey,\n FOREIGN KEY (target_id, graph_id) REFERENCES node(id, graph_id)\n USING INDEX node_pkey,\n\nAlso, the addition of a USING INDEX clause may be useful in its own right. It's\nalready possible to create multiple unique indexes on a table, with the same set\nof columns, but differing storage parameters or index tablespaces. In this\nsituation, when multiple indexes can support a foreign key constraint, which\nindex is chosen appears to be determined in OID order. 
This clause would allow a\nuser to unambiguously specify which index to use.\n\nI've attached a first draft patch that implements what I've described and would\nlove some feedback. For all referenced columns that aren't covered by the chosen\nindex, it chooses the opclass to use by finding the default opclass for the\ncolumn's type for the same access method as the chosen index. (Would it be\nuseful to allow the specification of opclasses in the referenced column list of\na foreign key constraint, similar to the column list in CREATE INDEX?)\n\nAlso, in pg_get_constraintdef_worker(), it adds a USING INDEX clause only when a\nforeign key constraint is supported by an index that:\n\n1. Isn't a primary key, and\n2. Has a different number of key columns than the number of constrained columns\n\nI *think* that this should ensure that a USING INDEX clause is added only when\nrequired.\n\n[1] https://www.postgresql.org/message-id/flat/CAF%2B2_SGhbc6yGUoNFjDOgjq1VpKpE5WZfOo0M%2BUwcPH%3DmddNMg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/1092734724.2627.4.camel%40dicaprio.akademie1.de\n[3] https://www.postgresql.org/message-id/flat/CACqFVBZQyMYJV%3DnjbSMxf%2BrbDHpx%3DW%3DB7AEaMKn8dWn9OZJY7w%40mail.gmail.com",
"msg_date": "Tue, 7 Jun 2022 14:59:24 -0400",
"msg_from": "Kaiting Chen <ktchen14@gmail.com>",
"msg_from_op": true,
"msg_subject": "Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "On 07.06.22 20:59, Kaiting Chen wrote:\n> 2. The FOREIGN KEY constraint syntax gains a [ USING INDEX index_name ] clause\n> optionally following the referenced column list.\n> \n> The index specified by this clause is used to support the foreign key\n> constraint, and it must be a non-deferrable unique or primary key index on\n> the referenced table compatible with the referenced columns.\n> \n> Here, compatible means that the columns of the index are a subset (the same,\n> or a strict subset) of the referenced columns.\n\nI think this should be referring to constraint name, not an index name.\n\n\n",
"msg_date": "Fri, 10 Jun 2022 05:08:31 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "On Fri, 10 Jun 2022 at 15:08, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 07.06.22 20:59, Kaiting Chen wrote:\n> > 2. The FOREIGN KEY constraint syntax gains a [ USING INDEX index_name ] clause\n> > optionally following the referenced column list.\n> >\n> > The index specified by this clause is used to support the foreign key\n> > constraint, and it must be a non-deferrable unique or primary key index on\n> > the referenced table compatible with the referenced columns.\n> >\n> > Here, compatible means that the columns of the index are a subset (the same,\n> > or a strict subset) of the referenced columns.\n>\n> I think this should be referring to constraint name, not an index name.\n\nCan you explain why you think that?\n\nMy thoughts are that it should be an index name. I'm basing that on\nthe fact that transformFkeyCheckAttrs() look for valid unique indexes\nrather than constraints. The referenced table does not need any\nprimary key or unique constraints to be referenced by a foreign key.\nIt just needs a unique index matching the referencing columns.\n\nIt would seem very strange to me if we required a unique or primary\nkey constraint to exist only when this new syntax is being used. Maybe\nI'm missing something though?\n\nDavid\n\n\n",
"msg_date": "Fri, 10 Jun 2022 15:47:34 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "On 10.06.22 05:47, David Rowley wrote:\n>> I think this should be referring to constraint name, not an index name.\n> Can you explain why you think that?\n\nIf you wanted to specify this feature in the SQL standard (I'm not \nproposing that, but it seems plausible), then you need to deal in terms \nof constraints, not indexes. Maybe referring to an index directly could \nbe a backup option if desired, but I don't see why that would be \nnecessary, since you can easily create a real constraint on top of an index.\n\n\n",
"msg_date": "Fri, 10 Jun 2022 06:14:11 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "On Fri, 10 Jun 2022 at 16:14, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 10.06.22 05:47, David Rowley wrote:\n> >> I think this should be referring to constraint name, not an index name.\n> > Can you explain why you think that?\n>\n> If you wanted to specify this feature in the SQL standard (I'm not\n> proposing that, but it seems plausible), then you need to deal in terms\n> of constraints, not indexes. Maybe referring to an index directly could\n> be a backup option if desired, but I don't see why that would be\n> necessary, since you can easily create a real constraint on top of an index.\n\nThat's a good point, but, if we invented syntax for specifying a\nconstraint name, would that not increase the likelihood that we'd end\nup with something that conflicts with some future extension to the SQL\nstandard?\n\nWe already have USING INDEX as an extension to ADD CONSTRAINT.\n\nDavid\n\n\n",
"msg_date": "Fri, 10 Jun 2022 17:11:41 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "On Fri, Jun 10, 2022 at 12:14 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 10.06.22 05:47, David Rowley wrote:\n> >> I think this should be referring to constraint name, not an index name.\n> > Can you explain why you think that?\n>\n> If you wanted to specify this feature in the SQL standard (I'm not\n> proposing that, but it seems plausible), then you need to deal in terms\n> of constraints, not indexes. Maybe referring to an index directly could\n> be a backup option if desired, but I don't see why that would be\n> necessary, since you can easily create a real constraint on top of an index.\n\nI think that there's a subtle difference between specifying a\nconstraint or an index in that the ALTER TABLE ADD CONSTRAINT USING\nINDEX command prohibits the creation of a constraint using an index\nwhere the key columns are associated with non default opclasses. As\nfar as I can tell, a foreign key constraint *can* pick an index with\nnon default opclasses. So specifying a constraint name in the FOREIGN\nKEY syntax would result in a strange situation where the foreign key\nconstraint could implicitly pick a supporting index with non default\nopclasses to use, but there'd be no way to explicitly specify that\nindex.\n\n\n",
"msg_date": "Fri, 10 Jun 2022 09:28:56 -0400",
"msg_from": "Kaiting Chen <ktchen14@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "Kaiting Chen <ktchen14@gmail.com> writes:\n> I'd like to propose a change to PostgreSQL to allow the creation of a foreign\n> key constraint referencing a superset of uniquely constrained columns.\n\nTBH, I think this is a fundamentally bad idea and should be rejected\noutright. It fuzzes the semantics of the FK relationship, and I'm\nnot convinced that there are legitimate use-cases. Your example\nschema could easily be dismissed as bad design that should be done\nsome other way.\n\nFor one example of where the semantics get fuzzy, it's not\nvery clear how the extra-baggage columns ought to participate in\nCASCADE updates. Currently, if we have\n CREATE TABLE foo (a integer PRIMARY KEY, b integer);\nthen an update that changes only foo.b doesn't need to update\nreferencing tables, and I think we even have optimizations that\nassume that if no unique-key columns are touched then RI checks\nneed not be made. But if you did\n CREATE TABLE bar (x integer, y integer,\n FOREIGN KEY (x, y) REFERENCES foo(a, b) ON UPDATE CASCADE);\nthen perhaps you expect bar.y to be updated ... or maybe you don't?\n\nAnother example is that I think the idea is only well-defined when\nthe subset column(s) are a primary key, or at least all marked NOT NULL.\nOtherwise they're not as unique as you're claiming. But then the FK\nconstraint really has to be dependent on a PK constraint not just an\nindex definition, since indexes in themselves don't enforce not-nullness.\nThat gets back to Peter's complaint that referring to an index isn't\ngood enough.\n\nAnyway, seeing that the patch touches neither ri_triggers.c nor any\nregression tests, I find it hard to believe that such semantic\nquestions have been thought through.\n\nIt's also unclear to me how this ought to interact with the\ninformation_schema views concerning foreign keys. 
We generally\nfeel that we don't want to present any non-SQL-compatible data\nin information_schema, for fear that it will confuse applications\nthat expect to see SQL-spec behavior there. So do we leave such\nFKs out of the views altogether, or show only the columns involving\nthe associated unique constraint? Neither answer seems pleasant.\n\nFWIW, the patch is currently failing to apply per the cfbot.\nI think you don't need to manually update backend/nodes/ anymore,\nbut the gram.y changes look to have been sideswiped by some\nother recent commit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Jul 2022 15:11:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "Kaiting Chen:\n> I'd like to propose a change to PostgreSQL to allow the creation of a foreign\n> key constraint referencing a superset of uniquely constrained columns.\n\n+1\n\nTom Lane:\n> TBH, I think this is a fundamentally bad idea and should be rejected\n> outright. It fuzzes the semantics of the FK relationship, and I'm\n> not convinced that there are legitimate use-cases. Your example\n> schema could easily be dismissed as bad design that should be done\n> some other way.\n\nI had to add quite a few unique constraints on a superset of already \nuniquely constrained columns in the past, just to be able to support FKs \nto those columns. I think those cases most often come up when dealing \nwith slightly denormalized schemas, e.g. for efficiency.\n\nOne other use-case I had recently, was along the followling lines, in \nabstract terms:\n\nCREATE TABLE classes (class INT PRIMARY KEY, ...);\n\nCREATE TABLE instances (\n instance INT PRIMARY KEY,\n class INT REFERENCES classes,\n ...\n);\n\nThink about classes and instances as in OOP. So the table classes \ncontains some definitions for different types of object and the table \ninstances realizes them into concrete objects.\n\nNow, assume you have some property of a class than is best modeled as a \ntable like this:\n\nCREATE TABLE classes_prop (\n property INT PRIMARY KEY,\n class INT REFERNECES classes,\n ...\n);\n\nNow, assume you need to store data for each of those classes_prop rows \nfor each instance. 
You'd do the following:\n\nCREATE TABLE instances_prop (\n instance INT REFERENCES instances,\n property INT REFERENCES classes_prop,\n ...\n);\n\nHowever, this does not ensure that the instance and the property you're \nreferencing in instances_prop are actually from the same class, so you \nadd a class column:\n\nCREATE TABLE instances_prop (\n instance INT,\n class INT,\n property INT,\n FOREIGN KEY (instance, class) REFERENCES instances,\n FOREIGN KEY (property, class) REFERENCES classes_prop,\n ...\n);\n\nBut this won't work, without creating some UNIQUE constraints on those \nsupersets of the PK column first.\n\n> For one example of where the semantics get fuzzy, it's not\n> very clear how the extra-baggage columns ought to participate in\n> CASCADE updates. Currently, if we have\n> CREATE TABLE foo (a integer PRIMARY KEY, b integer);\n> then an update that changes only foo.b doesn't need to update\n> referencing tables, and I think we even have optimizations that\n> assume that if no unique-key columns are touched then RI checks\n> need not be made. But if you did\n> CREATE TABLE bar (x integer, y integer,\n> FOREIGN KEY (x, y) REFERENCES foo(a, b) ON UPDATE CASCADE);\n> then perhaps you expect bar.y to be updated ... or maybe you don't?\n\nIn all use-cases I had so far, I would expect bar.y to be updated, too.\n\nI think it would not even be possible to NOT update bar.y, because the \nFK would then not match anymore. foo.a is the PK, so the value in bar.x \nalready forces bar.y to be the same as foo.b at all times.\n\nbar.y is a little bit like a generated value in that sense, it should \nalways match foo.b. I think it would be great, if we could actually go a \nstep further, too: On an update to bar.x to a new value, if foo.a=bar.x \nexists, I would like to set bar.y automatically to the new foo.b. 
\nOtherwise those kind of updates always have to either query foo before, \nor add a trigger to do the same.\n\nIn the classes/instances example above, when updating \ninstances_prop.property to a new value, instances_prop.class would be \nupdated automatically to match classes_prop.class. This would fail, when \nthe class is different than the class required by the FK to instances, \nthough, providing exactly the safe-guard that this constraint was \nsupposed to provide, without incurring additional overhead in update \nstatements.\n\nIn the foo/bar example above, which is just a bit of denormalization, \nthis automatic update would also be helpful - because rejecting the \nupdate on the grounds that the columns don't match doesn't make sense here.\n\n> Another example is that I think the idea is only well-defined when\n> the subset column(s) are a primary key, or at least all marked NOT NULL.\n> Otherwise they're not as unique as you're claiming.\n\nI fail to see why. My understanding is that rows with NULL values in the \nreferenced table can't participate in FK matches anyway, because both \nMATCH SIMPLE and MATCH FULL wouldn't require a match when any/all of the \ncolumns in the referencing table are NULL. MATCH PARTIAL is not \nimplemented, so I can't tell whether the semantics would be different there.\n\nI'm not sure whether a FK on a superset of unique columns would be \nuseful with MATCH SIMPLE. Maybe it could be forced to be MATCH FULL, if \nMATCH SIMPLE is indeed not well-defined.\n\n> It's also unclear to me how this ought to interact with the\n> information_schema views concerning foreign keys. We generally\n> feel that we don't want to present any non-SQL-compatible data\n> in information_schema, for fear that it will confuse applications\n> that expect to see SQL-spec behavior there. So do we leave such\n> FKs out of the views altogether, or show only the columns involving\n> the associated unique constraint? 
Neither answer seems pleasant.\n\nInstead of tweaking FKs, maybe it would be possible to define a UNIQUE \nconstraint re-using an existing index that guarantees uniqueness on a \nsubset of columns already? This would allow to create those FK \nrelationships by creating another unique constraint - without the \noverhead of creating yet another index.\n\nSo roughly something like this:\n\nALTER TABLE foo ADD UNIQUE (a, b) USING INDEX foo_pk;\n\nThis should give a consistent output for information_schema views?\n\nBest\n\nWolfgang\n\n\n",
"msg_date": "Fri, 2 Sep 2022 11:42:25 +0200",
"msg_from": "Wolfgang Walther <walther@technowledgy.de>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 5:42 AM Wolfgang Walther <walther@technowledgy.de> wrote:\n>\n> Kaiting Chen:\n> > I'd like to propose a change to PostgreSQL to allow the creation of a foreign\n> > key constraint referencing a superset of uniquely constrained columns.\n>\n> +1\n>\n> Tom Lane:\n> > TBH, I think this is a fundamentally bad idea and should be rejected\n> > outright. It fuzzes the semantics of the FK relationship, and I'm\n> > not convinced that there are legitimate use-cases. Your example\n> > schema could easily be dismissed as bad design that should be done\n> > some other way.\n>\n> I had to add quite a few unique constraints on a superset of already\n> uniquely constrained columns in the past, just to be able to support FKs\n> to those columns. I think those cases most often come up when dealing\n> with slightly denormalized schemas, e.g. for efficiency.\n>\n> One other use-case I had recently, was along the followling lines, in\n> abstract terms:\n>\n> CREATE TABLE classes (class INT PRIMARY KEY, ...);\n>\n> CREATE TABLE instances (\n> instance INT PRIMARY KEY,\n> class INT REFERENCES classes,\n> ...\n> );\n>\n> Think about classes and instances as in OOP. So the table classes\n> contains some definitions for different types of object and the table\n> instances realizes them into concrete objects.\n>\n> Now, assume you have some property of a class than is best modeled as a\n> table like this:\n>\n> CREATE TABLE classes_prop (\n> property INT PRIMARY KEY,\n> class INT REFERNECES classes,\n> ...\n> );\n>\n> Now, assume you need to store data for each of those classes_prop rows\n> for each instance. 
You'd do the following:\n>\n> CREATE TABLE instances_prop (\n> instance INT REFERENCES instances,\n> property INT REFERENCES classes_prop,\n> ...\n> );\n>\n> However, this does not ensure that the instance and the property you're\n> referencing in instances_prop are actually from the same class, so you\n> add a class column:\n>\n> CREATE TABLE instances_prop (\n> instance INT,\n> class INT,\n> property INT,\n> FOREIGN KEY (instance, class) REFERENCES instances,\n> FOREIGN KEY (property, class) REFERENCES classes_prop,\n> ...\n> );\n>\n> But this won't work, without creating some UNIQUE constraints on those\n> supersets of the PK column first.\n\nIf I'm following properly this sounds like an overengineered EAV\nschema, and neither of those things inspires me to think \"this is a\nuse case I want to support\".\n\nThat being said, I know that sometimes examples that have been\nabstracted enough to share aren't always the best, so perhaps there's\nsomething underlying this that's a more valuable example.\n\n> > For one example of where the semantics get fuzzy, it's not\n> > very clear how the extra-baggage columns ought to participate in\n> > CASCADE updates. Currently, if we have\n> > CREATE TABLE foo (a integer PRIMARY KEY, b integer);\n> > then an update that changes only foo.b doesn't need to update\n> > referencing tables, and I think we even have optimizations that\n> > assume that if no unique-key columns are touched then RI checks\n> > need not be made. But if you did\n> > CREATE TABLE bar (x integer, y integer,\n> > FOREIGN KEY (x, y) REFERENCES foo(a, b) ON UPDATE CASCADE);\n> > then perhaps you expect bar.y to be updated ... or maybe you don't?\n>\n> In all use-cases I had so far, I would expect bar.y to be updated, too.\n>\n> I think it would not even be possible to NOT update bar.y, because the\n> FK would then not match anymore. 
foo.a is the PK, so the value in bar.x\n> already forces bar.y to be the same as foo.b at all times.\n>\n> bar.y is a little bit like a generated value in that sense, it should\n> always match foo.b. I think it would be great, if we could actually go a\n> step further, too: On an update to bar.x to a new value, if foo.a=bar.x\n> exists, I would like to set bar.y automatically to the new foo.b.\n> Otherwise those kind of updates always have to either query foo before,\n> or add a trigger to do the same.\n\nIsn't this actually contradictory to the behavior you currently have\nwith a multi-column foreign key? In the example above then an update\nto bar.x is going to update the rows in foo that match bar.x = foo.a\nand bar.y = foo.b *using the old values of bar.x and bar.y* to be the\nnew values. You seem to be suggesting that instead it should look for\nother rows that already match the *new value* of only one of the\ncolumns in the constraint. If I'm understanding the example correctly,\nthat seems like a *very* bad idea.\n\nJames Coleman\n\n\n",
"msg_date": "Sat, 24 Sep 2022 21:34:15 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "James Coleman:\n> If I'm following properly this sounds like an overengineered EAV\n> schema, and neither of those things inspires me to think \"this is a\n> use case I want to support\".\n> \n> That being said, I know that sometimes examples that have been\n> abstracted enough to share aren't always the best, so perhaps there's\n> something underlying this that's a more valuable example.\n\nMost my use-cases are slightly denormalized and I was looking for an \nexample that didn't require those kind of FKS only because of the \ndenormalization. So that's why it might have been a bit artifical or \nabstracted too much.\n\nTake another example: I deal with multi-tenancy systems, for which I \nwant to use RLS to separate the data between tenants:\n\nCREATE TABLE tenants (tenant INT PRIMARY KEY);\n\nEach tenant can create multiple users and groups:\n\nCREATE TABLE users (\n \"user\" INT PRIMARY KEY,\n tenant INT NOT NULL REFERENCES tenants\n);\n\nCREATE TABLLE groups (\n \"group\" INT PRIMARY KEY,\n tenant INT NOT NULL REFERENCES tenants\n);\n\nUsers can be members of groups. The simple approach would be:\n\nCREATE TABLE members (\n PRIMARY KEY (\"user\", \"group\"),\n \"user\" INT REFERENCES users,\n \"group\" INT REFERENCES groups\n);\n\nBut this falls short in two aspects:\n- To make RLS policies simpler to write and quicker to execute, I want \nto add \"tenant\" columns to **all** other tables. A slightly denormalized \nschema for efficiency.\n- The schema above does not ensure that users can only be members in \ngroups of the same tenant. 
Our business model requires us to separate \ntenants cleanly, but as written above, cross-tenant memberships would be \nallowed.\n\nIn comes the \"tenant\" column which solves both of these:\n\nCREATE TABLE members (\n PRIMARY KEY (\"user\", \"group\"),\n tenant INT REFERENCES tenants,\n \"user\" INT,\n \"group\" INT,\n FOREIGN KEY (\"user\", tenant) REFERENCES users (\"user\", tenant),\n FOREIGN KEY (\"group\", tenant) REFERENCES groups (\"group\", tenant)\n);\n\nThis is not possible to do right now, without adding more UNIQUE \nconstraints to the users and groups tables - on a superset of already \nunique columns.\n\n>> bar.y is a little bit like a generated value in that sense, it should\n>> always match foo.b. I think it would be great, if we could actually go a\n>> step further, too: On an update to bar.x to a new value, if foo.a=bar.x\n>> exists, I would like to set bar.y automatically to the new foo.b.\n>> Otherwise those kind of updates always have to either query foo before,\n>> or add a trigger to do the same.\n> \n> Isn't this actually contradictory to the behavior you currently have\n> with a multi-column foreign key? In the example above then an update\n> to bar.x is going to update the rows in foo that match bar.x = foo.a\n> and bar.y = foo.b *using the old values of bar.x and bar.y* to be the\n> new values.\n\nNo, I think there was a misunderstanding. An update to bar should not \nupdate rows in foo. An update to bar.x should update bar.y implicitly, \nto match the new value of foo.b.\n\n> You seem to be suggesting that instead it should look for\n> other rows that already match the *new value* of only one of the\n> columns in the constraint.\n\nYes. I think basically what I'm suggesting is that for an FK to a \nsuperset of unique columns, all the FK-logic should still be done on the \nalready unique set of columns only - and then the additional columns \nshould be mirrored into the referencing table. 
The referencing table can \nthen put additional constraints on this column. In the members example \nabove, this additional constraint is the fact that the tenant column \ncan't be filled with two different values for the users and groups FKs. \nBut this could also be a CHECK constraint to allow FKs only to a subset \nof rows in the target table:\n\nCREATE TYPE foo_type AS ENUM ('A', 'B', 'C');\n\nCREATE TABLE foo (\n f INT PRIMARY KEY,\n type foo_type\n);\n\nCREATE TABLE bar (\n b INT PRIMARY KEY,\n f INT,\n ftype foo_type CHECK (ftype <> 'C'),\n FOREIGN KEY (f, ftype) REFERENCES foo (f, type);\n);\n\nIn this example, the additional ftype column is just used to enforce \nthat bar can only reference rows with type A or B, but not C. Assume:\n\nINSERT INTO foo VALUES (1, 'A'), (2, 'B'), (3, 'C');\n\nIn this case, it would be nice to be able to do the following, i.e. \nderive the value for bar.ftype automatically:\n\nINSERT INTO bar (b, f) VALUES (10, 1); -- bar.ftype is then 'A'\nUPDATE bar SET f = 2 WHERE b = 10; -- bar.ftype is then 'B'\n\nAnd it would throw errors in the following cases, because the \nautomatically derived value fails the CHECK constraint:\n\nINSERT INTO bar (b, f) VALUES (20, 3);\nUPDATE bar SET f = 3 WHERE b = 10;\n\nNote: This \"automatically derived columns\" extension would be a separate \nfeature. Really nice to have, but the above mentioned FKs to supersets \nof unique columns would be very valuable without it already.\n\nBest\n\nWolfgang\n\n\n",
"msg_date": "Sun, 25 Sep 2022 10:49:15 +0200",
"msg_from": "Wolfgang Walther <walther@technowledgy.de>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "On Sun, Sep 25, 2022 at 4:49 AM Wolfgang Walther\n<walther@technowledgy.de> wrote:\n>\n> James Coleman:\n> > If I'm following properly this sounds like an overengineered EAV\n> > schema, and neither of those things inspires me to think \"this is a\n> > use case I want to support\".\n> >\n> > That being said, I know that sometimes examples that have been\n> > abstracted enough to share aren't always the best, so perhaps there's\n> > something underlying this that's a more valuable example.\n>\n> Most my use-cases are slightly denormalized and I was looking for an\n> example that didn't require those kind of FKS only because of the\n> denormalization. So that's why it might have been a bit artifical or\n> abstracted too much.\n>\n> Take another example: I deal with multi-tenancy systems, for which I\n> want to use RLS to separate the data between tenants:\n>\n> CREATE TABLE tenants (tenant INT PRIMARY KEY);\n>\n> Each tenant can create multiple users and groups:\n>\n> CREATE TABLE users (\n> \"user\" INT PRIMARY KEY,\n> tenant INT NOT NULL REFERENCES tenants\n> );\n>\n> CREATE TABLLE groups (\n> \"group\" INT PRIMARY KEY,\n> tenant INT NOT NULL REFERENCES tenants\n> );\n>\n> Users can be members of groups. The simple approach would be:\n>\n> CREATE TABLE members (\n> PRIMARY KEY (\"user\", \"group\"),\n> \"user\" INT REFERENCES users,\n> \"group\" INT REFERENCES groups\n> );\n>\n> But this falls short in two aspects:\n> - To make RLS policies simpler to write and quicker to execute, I want\n> to add \"tenant\" columns to **all** other tables. A slightly denormalized\n> schema for efficiency.\n> - The schema above does not ensure that users can only be members in\n> groups of the same tenant. 
Our business model requires to separate\n> tenants cleanly, but as written above, cross-tenant memberships would be\n> allowed.\n>\n> In comes the \"tenant\" column which solves both of this:\n>\n> CREATE TABLE members (\n> PRIMARY KEY (\"user\", \"group\"),\n> tenant INT REFERENCES tenants,\n> \"user\" INT,\n> \"group\" INT,\n> FOREIGN KEY (\"user\", tenant) REFERENCES users (\"user\", tenant),\n> FOREIGN KEY (\"group\", tenant) REFERENCES groups (\"group\", tenant)\n> );\n>\n> This is not possible to do right now, without adding more UNIQUE\n> constraints to the users and groups tables - on a superset of already\n> unique columns.\n\nThanks, that's a more interesting use case IMO (and doesn't smell in\nthe way the other did).\n\n> >> bar.y is a little bit like a generated value in that sense, it should\n> >> always match foo.b. I think it would be great, if we could actually go a\n> >> step further, too: On an update to bar.x to a new value, if foo.a=bar.x\n> >> exists, I would like to set bar.y automatically to the new foo.b.\n> >> Otherwise those kind of updates always have to either query foo before,\n> >> or add a trigger to do the same.\n> >\n> > Isn't this actually contradictory to the behavior you currently have\n> > with a multi-column foreign key? In the example above then an update\n> > to bar.x is going to update the rows in foo that match bar.x = foo.a\n> > and bar.y = foo.b *using the old values of bar.x and bar.y* to be the\n> > new values.\n>\n> No, I think there was a misunderstanding. An update to bar should not\n> update rows in foo. An update to bar.x should update bar.y implicitly,\n> to match the new value of foo.b.\n>\n> > You seem to be suggesting that instead it should look for\n> > other rows that already match the *new value* of only one of the\n> > columns in the constraint.\n>\n> Yes. 
I think basically what I'm suggesting is, that for an FK to a\n> superset of unique columns, all the FK-logic should still be done on the\n> already unique set of columns only - and then the additional columns\n> should be mirrored into the referencing table. The referencing table can\n> then put additional constraints on this column. In the members example\n> above, this additional constraint is the fact that the tenant column\n> can't be filled with two different values for the users and groups FKs.\n\nIf we have a declared constraint on x,y where x is unique based on an\nindex including on x I do not think we should have that fk constraint\nwork differently than a constraint on x,y where there is a unique\nindex on x,y. That would seem to be incredibly confusing behavior\n(even if it would be useful for some specific use case).\n\n> But this could also be a CHECK constraint to allow FKs only to a subset\n> of rows in the target table:\n\nAre you suggesting a check constraint that queries another table?\nBecause check constraints are not supposed to do that (I assume this\nis technically possible to declare via a function, just like it is\ntechnically possible to do in a functional index, but like in the\nindex case it's a bad idea since it's not actually guaranteed).\n\n> CREATE TYPE foo_type AS ENUM ('A', 'B', 'C');\n>\n> CREATE TABLE foo (\n> f INT PRIMARY KEY,\n> type foo_type\n> );\n>\n> CREATE TABLE bar (\n> b INT PRIMARY KEY,\n> f INT,\n> ftype foo_type CHECK (ftype <> 'C'),\n> FOREIGN KEY (f, ftype) REFERENCES foo (f, type);\n> );\n>\n> In this example, the additional ftype column is just used to enforce\n> that bar can only reference rows with type A or B, but not C. 
Assume:\n>\n> INSERT INTO foo VALUES (1, 'A'), (2, 'B'), (3, 'C');\n>\n> In this case, it would be nice to be able to do the following, i.e.\n> derive the value for bar.ftype automatically:\n>\n> INSERT INTO bar (b, f) VALUES (10, 1); -- bar.ftype is then 'A'\n> UPDATE bar SET f = 2 WHERE b = 10; -- bar.ftype is then 'B'\n>\n> And it would throw errors in the following cases, because the\n> automatically derived value fails the CHECK constraint:\n>\n> INSERT INTO bar (b, f) VALUES (20, 3);\n> UPDATE bar SET f = 3 WHERE b = 10;\n>\n> Note: This \"automatically derived columns\" extension would be a separate\n> feature. Really nice to have, but the above mentioned FKs to supersets\n> of unique columns would be very valuable without it already.\n\nThis \"derive the value automatically\" is not what foreign key\nconstraints do right now at all, right? And in fact it's contradictory\nto existing behavior, no?\n\nThis part just seems like a very bad idea. Unless I'm misunderstanding\nI think we should reject this part of the proposals on this thread as\nsomething we would not even consider implementing.\n\nJames Coleman\n\n\n",
"msg_date": "Sun, 25 Sep 2022 15:28:32 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "Hello Kaiting,\n\nThe use case you're looking to handle seems interesting to me.\n\nOn Wed, Jul 27, 2022 at 3:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Kaiting Chen <ktchen14@gmail.com> writes:\n> > I'd like to propose a change to PostgreSQL to allow the creation of a foreign\n> > key constraint referencing a superset of uniquely constrained columns.\n>\n> TBH, I think this is a fundamentally bad idea and should be rejected\n> outright. It fuzzes the semantics of the FK relationship, and I'm\n> not convinced that there are legitimate use-cases. Your example\n> schema could easily be dismissed as bad design that should be done\n> some other way.\n>\n> For one example of where the semantics get fuzzy, it's not\n> very clear how the extra-baggage columns ought to participate in\n> CASCADE updates. Currently, if we have\n> CREATE TABLE foo (a integer PRIMARY KEY, b integer);\n> then an update that changes only foo.b doesn't need to update\n> referencing tables, and I think we even have optimizations that\n> assume that if no unique-key columns are touched then RI checks\n> need not be made. But if you did\n> CREATE TABLE bar (x integer, y integer,\n> FOREIGN KEY (x, y) REFERENCES foo(a, b) ON UPDATE CASCADE);\n> then perhaps you expect bar.y to be updated ... or maybe you don't?\n>\n> Another example is that I think the idea is only well-defined when\n> the subset column(s) are a primary key, or at least all marked NOT NULL.\n> Otherwise they're not as unique as you're claiming. 
But then the FK\n> constraint really has to be dependent on a PK constraint not just an\n> index definition, since indexes in themselves don't enforce not-nullness.\n> That gets back to Peter's complaint that referring to an index isn't\n> good enough.\n>\n> Anyway, seeing that the patch touches neither ri_triggers.c nor any\n> regression tests, I find it hard to believe that such semantic\n> questions have been thought through.\n>\n> It's also unclear to me how this ought to interact with the\n> information_schema views concerning foreign keys. We generally\n> feel that we don't want to present any non-SQL-compatible data\n> in information_schema, for fear that it will confuse applications\n> that expect to see SQL-spec behavior there. So do we leave such\n> FKs out of the views altogether, or show only the columns involving\n> the associated unique constraint? Neither answer seems pleasant.\n>\n> FWIW, the patch is currently failing to apply per the cfbot.\n> I think you don't need to manually update backend/nodes/ anymore,\n> but the gram.y changes look to have been sideswiped by some\n> other recent commit.\n\nAs I was reading through the email chain I had this thought: could you\nget the same benefit (or 90% of it anyway) by instead allowing the\ncreation of a uniqueness constraint that contains more columns than\nthe index backing it? So long as the index backing it still guaranteed\nthe uniqueness on a subset of columns that would seem to be safe.\n\nTom notes the additional columns being nullable is something to think\nabout. But if we go the route of allowing unique constraints with\nbacking indexes having a subset of columns from the constraint I don't\nthink the nullability is an issue since it's already the case that a\nunique constraint can be declared on columns that are nullable. 
Indeed\nit's also the case that we already support a foreign key constraint\nbacked by a unique constraint including nullable columns.\n\nBecause such an approach would, I believe, avoid changing the foreign\nkey code (or semantics) at all, I believe that would address Tom's\nconcerns about information_schema and fuzziness of semantics.\n\nAfter writing down that idea I noticed Wolfgang Walther had commented\nsimilarly, but it appears that that idea got lost (or at least not\nresponded to).\n\nI'd be happy to sign up to review an updated patch if you're\ninterested in continuing this effort. If so, could you register the\npatch in the CF app (if not there already)?\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Sun, 25 Sep 2022 22:00:27 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "James Coleman:\n> If we have a declared constraint on x,y where x is unique based on an\n> index including on x I do not think we should have that fk constraint\n> work differently than a constraint on x,y where there is a unique\n> index on x,y. That would seem to be incredibly confusing behavior\n> (even if it would be useful for some specific use case).\n\nI don't think it's behaving differently from how it does now. See below. \nBut I can see how that could be confusing. Maybe it's just about \ndescribing the feature in a better way than I did so far. Or maybe it \nneeds a different syntax.\n\nAnyway, I don't think it's just a specific use case. In every use case I \nhad for $subject so far, the immediate next step was to write some \ntriggers to fetch those derived values from the referenced table.\n\nUltimately it's a question of efficiency: We can achieve the same thing \nin two ways today:\n- We can either **not** add the additional column (members.tenant, \nbar.ftype in my examples) to the referencing table at all, and add \nconstraint triggers that do all those checks instead. This adds \ncomplexity to write the triggers and more complicated RLS policies etc, \nand also is potentially slower when executing those more complicated \nqueries.\n- Or we can add the additional column, but also add an additional unique \nindex on the referenced table, and then make it part of the FK. This \nremoves some of the constraint triggers and makes RLS policies simpler \nand likely faster to execute queries. It comes at a cost of additional \ncost of storage, though - and this is something that $subject tries to \naddress.\n\nStill, even when $subject is allowed, in practice we need some of the \ntriggers to fetch those dependent values. 
Considering that the current \nFK triggers already do the same kind of queries at the same times, it'd \nbe more efficient to have those FK queries fetch those dependent values.\n\n>> But this could also be a CHECK constraint to allow FKs only to a subset\n>> of rows in the target table:\n> \n> Are you suggesting a check constraint that queries another table?\n\nNo. I was talking about the CHECK constraint in my example in the next \nparagraph of that mail. The CHECK constraint on bar.ftype is a regular \nCHECK constraint, but because of how ftype is updated automatically, it \neffectively behaves like some kind of additional constraint on the FK \nitself.\n\n> This \"derive the value automatically\" is not what foreign key\n> constraints do right now at all, right? And in fact it's contradictory\n> to existing behavior, no?\n\nI don't think it's contradicting. Maybe a better way to put my idea is this:\n\nFor a foreign key to a superset of unique columns, the already-unique \ncolumns should behave according to the specified ON UPDATE clause. \nHowever, the extra columns should always behave as if they were ON UPDATE \nCASCADE. And additionally, they should behave similarly to something like \nON INSERT CASCADE. Although that INSERT is about the referencing table, \nnot the referenced table, so the analogy isn't 100%.\n\nI guess this would also be a more direct answer to Tom's earlier \nquestion about what to expect in the ON UPDATE scenario.\n\nBest\n\nWolfgang\n\n\n",
"msg_date": "Mon, 26 Sep 2022 08:28:14 +0200",
"msg_from": "Wolfgang Walther <walther@technowledgy.de>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "On Mon, Sep 26, 2022 at 2:28 AM Wolfgang Walther\n<walther@technowledgy.de> wrote:\n>\n> James Coleman:\n> > If we have a declared constraint on x,y where x is unique based on an\n> > index including on x I do not think we should have that fk constraint\n> > work differently than a constraint on x,y where there is a unique\n> > index on x,y. That would seem to be incredibly confusing behavior\n> > (even if it would be useful for some specific use case).\n>\n> I don't think it's behaving differently from how it does now. See below.\n> But I can see how that could be confusing. Maybe it's just about\n> describing the feature in a better way than I did so far. Or maybe it\n> needs a different syntax.\n>\n> Anyway, I don't think it's just a specific use case. In every use case I\n> had for $subject so far, the immediate next step was to write some\n> triggers to fetch those derived values from the referenced table.\n>\n> Ultimately it's a question of efficiency: We can achieve the same thing\n> in two ways today:\n> - We can either **not** add the additional column (members.tenant,\n> bar.ftype in my examples) to the referencing table at all, and add\n> constraint triggers that do all those checks instead. This adds\n> complexity to write the triggers and more complicated RLS policies etc,\n> and also is potentially slower when executing those more complicated\n> queries.\n> - Or we can add the additional column, but also add an additional unique\n> index on the referenced table, and then make it part of the FK. This\n> removes some of the constraint triggers and makes RLS policies simpler\n> and likely faster to execute queries. It comes at a cost of additional\n> cost of storage, though - and this is something that $subject tries to\n> address.\n>\n> Still, even when $subject is allowed, in practice we need some of the\n> triggers to fetch those dependent values. 
Considering that the current\n> FK triggers already do the same kind of queries at the same times, it'd\n> be more efficient to have those FK queries fetch those dependent values.\n>\n> >> But this could also be a CHECK constraint to allow FKs only to a subset\n> >> of rows in the target table:\n> >\n> > Are you suggesting a check constraint that queries another table?\n>\n> No. I was talking about the CHECK constraint in my example in the next\n> paragraph of that mail. The CHECK constraint on bar.ftype is a regular\n> CHECK constraint, but because of how ftype is updated automatically, it\n> effectively behaves like some kind of additional constraint on the FK\n> itself.\n\nAh, OK.\n\n> > This \"derive the value automatically\" is not what foreign key\n> > constraints do right now at all, right? And if fact it's contradictory\n> > to existing behavior, no?\n>\n> I don't think it's contradicting. Maybe a better way to put my idea is this:\n>\n> For a foreign key to a superset of unique columns, the already-unique\n> columns should behave according to the specified ON UPDATE clause.\n> However, the extra columns should always behave as they were ON UPDATE\n> CASCADE. And additionally, they should behave similar to something like\n> ON INSERT CASCADE. Although that INSERT is about the referencing table,\n> not the referenced table, so the analogy isn't 100%.\n>\n> I guess this would also be a more direct answer to Tom's earlier\n> question about what to expect in the ON UPDATE scenario.\n\nSo the broader point I'm trying to make is that, as I understand it,\nindexes backing foreign key constraints is an implementation detail.\nThe SQL standard details the behavior of foreign key constraints\nregardless of implementation details like a backing index. 
That means\nthat the behavior of two column foreign key constraints is defined in\na single way whether or not there's a backing index at all or whether\nsuch a backing index, if present, contains one or two columns.\n\nI understand that for the use case you're describing this isn't the\nabsolute most efficient way to implement the desired data semantics.\nBut it would be incredibly confusing (and, I think, a violation of the\nSQL standard) to have one foreign key constraint work in a different\nway from another such constraint when both are indistinguishable at\nthe constraint level (the backing index isn't an attribute of the\nconstraint; it's merely an implementation detail).\n\nJames Coleman\n\n\n",
"msg_date": "Mon, 26 Sep 2022 08:37:28 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "James Coleman:\n> So the broader point I'm trying to make is that, as I understand it,\n> indexes backing foreign key constraints is an implementation detail.\n> The SQL standard details the behavior of foreign key constraints\n> regardless of implementation details like a backing index. That means\n> that the behavior of two column foreign key constraints is defined in\n> a single way whether or not there's a backing index at all or whether\n> such a backing index, if present, contains one or two columns.\n> \n> I understand that for the use case you're describing this isn't the\n> absolute most efficient way to implement the desired data semantics.\n> But it would be incredibly confusing (and, I think, a violation of the\n> SQL standard) to have one foreign key constraint work in a different\n> way from another such constraint when both are indistinguishable at\n> the constraint level (the backing index isn't an attribute of the\n> constraint; it's merely an implementation detail).\n\nAh, thanks, I understand better now.\n\nThe two would only be indistinguishable at the constraint level, if \n$subject was implemented by allowing to create unique constraints on a \nsuperset of unique columns, backed by a different index (the suggestion \nwe both made independently). But if it was possible to reference a \nsuperset of unique columns, where there was only a unique constraint put \non a subset of the referenced columns (the idea originally introduced in \nthis thread), then there would be a difference, right?\n\nThat's if it was only the backing index that is not part of the SQL \nstandard, and not also the fact that a foreign key should reference a \nprimary key or unique constraint?\n\nAnyway, I can see very well how that would be quite confusing overall. 
\nIt would probably be wiser to allow something roughly like this (if at \nall, of course):\n\nCREATE TABLE bar (\n b INT PRIMARY KEY,\n f INT,\n ftype foo_type GENERATED ALWAYS AS REFERENCE TO foo.type,\n FOREIGN KEY (f, ftype) REFERENCES foo (f, type)\n);\n\nIt likely wouldn't work exactly like that, but given a foreign key to \nfoo, the GENERATED clause could be used to fetch the value through the \nsame triggers that form that FK for efficiency. My main point for now \nis: With a much more explicit syntax anywhere near that, this would \ncertainly be an entirely different feature than $subject **and** it \nwould be possible to implement on top of $subject. If at all.\n\nSo no need for me to distract this thread from $subject anymore. I think \nthe idea of allowing to create unique constraints on a superset of the \ncolumns of an already existing unique index is a good one, so let's \ndiscuss this further.\n\nBest\n\nWolfgang\n\n\n",
"msg_date": "Mon, 26 Sep 2022 15:59:41 +0200",
"msg_from": "Wolfgang Walther <walther@technowledgy.de>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "James Coleman:\n> As I was reading through the email chain I had this thought: could you\n> get the same benefit (or 90% of it anyway) by instead allowing the\n> creation of a uniqueness constraint that contains more columns than\n> the index backing it? So long as the index backing it still guaranteed\n> the uniqueness on a subset of columns that would seem to be safe.\n> \n> Tom notes the additional columns being nullable is something to think\n> about. But if we go the route of allowing unique constraints with\n> backing indexes having a subset of columns from the constraint I don't\n> think the nullability is an issue since it's already the case that a\n> unique constraint can be declared on columns that are nullable. Indeed\n> it's also the case that we already support a foreign key constraint\n> backed by a unique constraint including nullable columns.\n> \n> Because such an approach would, I believe, avoid changing the foreign\n> key code (or semantics) at all, I believe that would address Tom's\n> concerns about information_schema and fuzziness of semantics.\n\n\nCould we create this additional unique constraint implicitly, when using \nFOREIGN KEY ... REFERENCES on a superset of unique columns? This would \nmake it easier to use, but still give proper information_schema output.\n\nBest\n\nWolfgang\n\n\n",
"msg_date": "Mon, 26 Sep 2022 16:04:49 +0200",
"msg_from": "Wolfgang Walther <walther@technowledgy.de>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "On Mon, Sep 26, 2022 at 9:59 AM Wolfgang Walther\n<walther@technowledgy.de> wrote:\n>\n> James Coleman:\n> > So the broader point I'm trying to make is that, as I understand it,\n> > indexes backing foreign key constraints is an implementation detail.\n> > The SQL standard details the behavior of foreign key constraints\n> > regardless of implementation details like a backing index. That means\n> > that the behavior of two column foreign key constraints is defined in\n> > a single way whether or not there's a backing index at all or whether\n> > such a backing index, if present, contains one or two columns.\n> >\n> > I understand that for the use case you're describing this isn't the\n> > absolute most efficient way to implement the desired data semantics.\n> > But it would be incredibly confusing (and, I think, a violation of the\n> > SQL standard) to have one foreign key constraint work in a different\n> > way from another such constraint when both are indistinguishable at\n> > the constraint level (the backing index isn't an attribute of the\n> > constraint; it's merely an implementation detail).\n>\n> Ah, thanks, I understand better now.\n>\n> The two would only be indistinguishable at the constraint level, if\n> $subject was implemented by allowing to create unique constraints on a\n> superset of unique columns, backed by a different index (the suggestion\n> we both made independently). 
But if it was possible to reference a\n> superset of unique columns, where there was only a unique constraint put\n> on a subset of the referenced columns (the idea originally introduced in\n> this thread), then there would be a difference, right?\n>\n> That's if it was only the backing index that is not part of the SQL\n> standard, and not also the fact that a foreign key should reference a\n> primary key or unique constraint?\n\nI think that's not true: the SQL standard doesn't have the option of\n\"this foreign key is backed by this unique constraint\", does it? So in\neither case I believe we would be at minimum implementing an extension\nto the standard (and as I argued already I think it would actually be\ncontradictory to the standard).\n\n> Anyway, I can see very well how that would be quite confusing overall.\n> It would probably be wiser to allow something roughly like this (if at\n> all, of course):\n>\n> CREATE TABLE bar (\n> b INT PRIMARY KEY,\n> f INT,\n> ftype foo_type GENERATED ALWAYS AS REFERENCE TO foo.type,\n> FOREIGN KEY (f, ftype) REFERENCES foo (f, type)\n> );\n>\n> It likely wouldn't work exactly like that, but given a foreign key to\n> foo, the GENERATED clause could be used to fetch the value through the\n> same triggers that form that FK for efficiency. My main point for now\n> is: With a much more explicit syntax anything near that, this would\n> certainly be an entirely different feature than $subject **and** it\n> would be possible to implement on top of $subject. If at all.\n\nYeah, I think that would make more sense if one were proposing an\naddition to the SQL standard (or an explicit extension to it that\nPostgres would support indepently of the standard).\n\n> So no need for me to distract this thread from $subject anymore. 
I think\n> the idea of allowing to create unique constraints on a superset of the\n> columns of an already existing unique index is a good one, so let's\n> discuss this further.\n\nSounds good to me!\n\nJames Coleman\n\n\n",
"msg_date": "Mon, 26 Sep 2022 13:08:36 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "On Mon, Sep 26, 2022 at 10:04 AM Wolfgang Walther\n<walther@technowledgy.de> wrote:\n>\n> James Coleman:\n> > As I was reading through the email chain I had this thought: could you\n> > get the same benefit (or 90% of it anyway) by instead allowing the\n> > creation of a uniqueness constraint that contains more columns than\n> > the index backing it? So long as the index backing it still guaranteed\n> > the uniqueness on a subset of columns that would seem to be safe.\n> >\n> > Tom notes the additional columns being nullable is something to think\n> > about. But if we go the route of allowing unique constraints with\n> > backing indexes having a subset of columns from the constraint I don't\n> > think the nullability is an issue since it's already the case that a\n> > unique constraint can be declared on columns that are nullable. Indeed\n> > it's also the case that we already support a foreign key constraint\n> > backed by a unique constraint including nullable columns.\n> >\n> > Because such an approach would, I believe, avoid changing the foreign\n> > key code (or semantics) at all, I believe that would address Tom's\n> > concerns about information_schema and fuzziness of semantics.\n>\n>\n> Could we create this additional unique constraint implicitly, when using\n> FOREIGN KEY ... REFERENCES on a superset of unique columns? This would\n> make it easier to use, but still give proper information_schema output.\n\nPossibly. It'd be my preference to discuss that as a second patch\n(could be in the same series); my intuition is that it'd be easier to\nget agreement on the first part first, but of course I could be wrong\n(if some committer, for example, thought the feature only made sense\nas an implicit creation of such a constraint to back the use case\nKaiting opened with).\n\nJames Coleman\n\n\n",
"msg_date": "Mon, 26 Sep 2022 13:11:02 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "On Tue, 27 Sept 2022 at 06:08, James Coleman <jtc331@gmail.com> wrote:\n>\n> On Mon, Sep 26, 2022 at 9:59 AM Wolfgang Walther\n> <walther@technowledgy.de> wrote:\n> > So no need for me to distract this thread from $subject anymore. I think\n> > the idea of allowing to create unique constraints on a superset of the\n> > columns of an already existing unique index is a good one, so let's\n> > discuss this further.\n>\n> Sounds good to me!\n\nI don't see any immediate problems with allowing UNIQUE constraints to\nbe supported using a unique index which contains only a subset of\ncolumns that are mentioned in the constraint. There would be a few\nthings to think about. e.g INSERT ON CONFLICT might need some\nattention as a unique constraint can be specified for use as the\narbiter.\n\nPerhaps the patch could be broken down as follows:\n\n0001:\n\n* Extend ALTER TABLE ADD CONSTRAINT UNIQUE syntax to allow a column\nlist when specifying USING INDEX.\n* Add checks to ensure the index in USING INDEX contains only columns\nmentioned in the column list.\n* Do any required work for INSERT ON CONFLICT. I've not looked at the\ncode but maybe some adjustments are required for where it gets the\nlist of columns.\n* Address any other places that assume the supporting index contains\nall columns of the unique constraint.\n\n0002:\n\n* Adjust transformFkeyCheckAttrs() to have it look at UNIQUE\nconstraints as well as unique indexes\n* Ensure information_schema.referential_constraints view still works correctly.\n\nI think that would address all of Tom's concerns he mentioned in [1].\nI wasn't quite sure I understood the NOT NULL concern there since\ngoing by RI_FKey_pk_upd_check_required(), we don't enforce FKs when\nthe referenced table has a NULL in the FK's columns.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/3057718.1658949103@sss.pgh.pa.us\n\n\n",
"msg_date": "Tue, 27 Sep 2022 21:18:23 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "> For one example of where the semantics get fuzzy, it's not\n> very clear how the extra-baggage columns ought to participate in\n> CASCADE updates. Currently, if we have\n> CREATE TABLE foo (a integer PRIMARY KEY, b integer);\n> then an update that changes only foo.b doesn't need to update\n> referencing tables, and I think we even have optimizations that\n> assume that if no unique-key columns are touched then RI checks\n> need not be made. But if you did\n> CREATE TABLE bar (x integer, y integer,\n> FOREIGN KEY (x, y) REFERENCES foo(a, b) ON UPDATE\nCASCADE);\n> then perhaps you expect bar.y to be updated ... or maybe you don't?\n\nI'd expect bar.y to be updated. In my mind, the FOREIGN KEY constraint\nshould\nbehave the same, regardless of whether the underlying unique index on the\nreferenced side is an equivalent set to, or a strict subset of, the\nreferenced\ncolumns.\n\n> Another example is that I think the idea is only well-defined when\n> the subset column(s) are a primary key, or at least all marked NOT NULL.\n> Otherwise they're not as unique as you're claiming. But then the FK\n> constraint really has to be dependent on a PK constraint not just an\n> index definition, since indexes in themselves don't enforce not-nullness.\n> That gets back to Peter's complaint that referring to an index isn't\n> good enough.\n\nI think that uniqueness should be guaranteed enough even if the subset\ncolumns\nare nullable:\n\n CREATE TABLE foo (a integer UNIQUE, b integer);\n\n CREATE TABLE bar (\n x integer,\n y integer,\n FOREIGN KEY (x, y) REFERENCES foo(a, b)\n );\n\nThe unique index underlying foo.a guarantees that (foo.a, foo.b) is unique\nif\nfoo.a isn't NULL. That is, there can be multiple rows (NULL, 1) in foo.\nHowever,\nsuch a row can't be the target of the foreign key constraint anyway. 
So, I'm\nfairly certain that, where it matters, a unique index on a nullable subset\nof\nthe referenced columns guarantees a distinct referenced row.\n\n> It's also unclear to me how this ought to interact with the\n> information_schema views concerning foreign keys. We generally\n> feel that we don't want to present any non-SQL-compatible data\n> in information_schema, for fear that it will confuse applications\n> that expect to see SQL-spec behavior there. So do we leave such\n> FKs out of the views altogether, or show only the columns involving\n> the associated unique constraint? Neither answer seems pleasant.\n\nHere's the information_schema output for this example:\n\n CREATE TABLE foo (a integer, b integer);\n\n CREATE UNIQUE INDEX ON foo (a, b);\n\n CREATE TABLE bar (\n x integer,\n y integer,\n FOREIGN KEY (x, y) REFERENCES foo(a, b)\n );\n\n # SELECT * FROM information_schema.referential_constraints\n WHERE constraint_name = 'bar_x_y_fkey';\n\n -[ RECORD 1 ]-------------+----------------------------------------------\n constraint_catalog | kaitingc\n constraint_schema | public\n constraint_name | bar_x_y_fkey\n unique_constraint_catalog |\n unique_constraint_schema |\n unique_constraint_name |\n match_option | NONE\n update_rule | NO ACTION\n delete_rule | NO ACTION\n\n # SELECT * FROM information_schema.key_column_usage\n WHERE constraint_name = 'bar_x_y_fkey';\n\n -[ RECORD 173\n]---------------+----------------------------------------------\n constraint_catalog | kaitingc\n constraint_schema | public\n constraint_name | bar_x_y_fkey\n table_catalog | kaitingc\n table_schema | public\n table_name | bar\n column_name | x\n ordinal_position | 1\n position_in_unique_constraint | 1\n -[ RECORD 174\n]---------------+----------------------------------------------\n constraint_catalog | kaitingc\n constraint_schema | public\n constraint_name | bar_x_y_fkey\n table_catalog | kaitingc\n table_schema | public\n table_name | bar\n column_name | y\n 
ordinal_position              | 2\n position_in_unique_constraint | 2\n\nIt appears that currently in PostgreSQL, the unique_constraint_catalog,\nschema,\nand name are NULL in referential_constraints when a unique index (without an\nassociated unique constraint) underlies the referenced columns. The\nbehaviour\nI'm proposing would have the same behavior vis-a-vis\nreferential_constraints.\n\nAs for key_column_usage, I propose that position_in_unique_constraint be\nNULL if\nthe referenced column isn't indexed.",
"msg_date": "Tue, 27 Sep 2022 17:35:15 -0400",
"msg_from": "Kaiting Chen <ktchen14@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "> As I was reading through the email chain I had this thought: could you\n> get the same benefit (or 90% of it anyway) by instead allowing the\n> creation of a uniqueness constraint that contains more columns than\n> the index backing it? So long as the index backing it still guaranteed\n> the uniqueness on a subset of columns that would seem to be safe.\n\n> After writing down that idea I noticed Wolfgang Walther had commented\n> similarly, but it appears that that idea got lost (or at least not\n> responded to).\n\nIs it necessary to have the unique constraint at all? This currently works\nin\nPostgreSQL:\n\n    CREATE TABLE foo (a integer, b integer);\n\n    CREATE UNIQUE INDEX ON foo (a, b);\n\n    CREATE TABLE bar (\n      x integer,\n      y integer,\n      FOREIGN KEY (x, y) REFERENCES foo(a, b)\n    );\n\nWhere no unique constraint exists on foo (a, b). Forcing the creation of a\nunique constraint in this case seems more confusing to me, as a user, than\nallowing it without the definition of the unique constraint, given the\nexisting\nbehavior.\n\n> I'd be happy to sign up to review an updated patch if you're\n> interested in continuing this effort. If so, could you register the\n> patch in the CF app (if not there already)?\n\nThe patch should already be registered! Though it's still in a state that\nneeds\na lot of work.",
"msg_date": "Tue, 27 Sep 2022 17:58:31 -0400",
"msg_from": "Kaiting Chen <ktchen14@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "> Could we create this additional unique constraint implicitly, when using\n> FOREIGN KEY ... REFERENCES on a superset of unique columns? This would\n> make it easier to use, but still give proper information_schema output.\n\nCurrently, a foreign key declared where the referenced columns have only a\nunique index and not a unique constraint already populates the constraint\nrelated columns of information_schema.referential_constraints with NULL. It\ndoesn't seem like this change would require a major deviation from the\nexisting\nbehavior in information_schema:\n\n CREATE TABLE foo (a integer, b integer);\n\n CREATE UNIQUE INDEX ON foo (a, b);\n\n CREATE TABLE bar (\n x integer,\n y integer,\n FOREIGN KEY (x, y) REFERENCES foo(a, b)\n );\n\n # SELECT * FROM information_schema.referential_constraints\n WHERE constraint_name = 'bar_x_y_fkey';\n\n -[ RECORD 1 ]-------------+----------------------------------------------\n constraint_catalog | kaitingc\n constraint_schema | public\n constraint_name | bar_x_y_fkey\n unique_constraint_catalog |\n unique_constraint_schema |\n unique_constraint_name |\n match_option | NONE\n update_rule | NO ACTION\n delete_rule | NO ACTION\n\nThe only change would be to information_schema.key_column_usage:\n\n # SELECT * FROM information_schema.key_column_usage\n WHERE constraint_name = 'bar_x_y_fkey';\n\n -[ RECORD 173\n]---------------+----------------------------------------------\n constraint_catalog | kaitingc\n constraint_schema | public\n constraint_name | bar_x_y_fkey\n table_catalog | kaitingc\n table_schema | public\n table_name | bar\n column_name | x\n ordinal_position | 1\n position_in_unique_constraint | 1\n -[ RECORD 174\n]---------------+----------------------------------------------\n constraint_catalog | kaitingc\n constraint_schema | public\n constraint_name | bar_x_y_fkey\n table_catalog | kaitingc\n table_schema | public\n table_name | bar\n column_name | y\n ordinal_position | 2\n 
position_in_unique_constraint | 2\n\nWhere position_in_unique_constraint would have to be NULL for the referenced\ncolumns that don't appear in the unique index. That column is already\nnullable:\n\n    For a foreign-key constraint, ordinal position of the referenced column\nwithin\n    its unique constraint (count starts at 1); otherwise null\n\nSo it seems like this would be a minor documentation change at most. Also,\nshould that documentation be updated to mention that it's actually the\n\"ordinal\nposition of the referenced column within its unique index\" (since it's a\nlittle\nconfusing that in referential_constraints, unique_constraint_name is NULL)?",
"msg_date": "Tue, 27 Sep 2022 18:09:33 -0400",
"msg_from": "Kaiting Chen <ktchen14@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "Kaiting Chen <ktchen14@gmail.com> writes:\n>> Another example is that I think the idea is only well-defined when\n>> the subset column(s) are a primary key, or at least all marked NOT NULL.\n>> Otherwise they're not as unique as you're claiming. But then the FK\n>> constraint really has to be dependent on a PK constraint not just an\n>> index definition, since indexes in themselves don't enforce not-nullness.\n>> That gets back to Peter's complaint that referring to an index isn't\n>> good enough.\n\n> The unique index underlying foo.a guarantees that (foo.a, foo.b) is unique\n> if foo.a isn't NULL. That is, there can be multiple rows (NULL, 1) in foo.\n> However, such a row can't be the target of the foreign key constraint\n> anyway.\n\nYou're ignoring the possibility of a MATCH PARTIAL FK constraint.\nAdmittedly, we don't implement those today, and there hasn't been\na lot of interest in doing so. But they're in the SQL spec so we\nshould fix that someday.\n\nI also wonder how this all interacts with the UNIQUE NULLS NOT\nDISTINCT feature that we just got done implementing for v15.\nI don't know if the spec says that an FK depending on such a\nconstraint should treat nulls as ordinary unique values --- but\nit sure seems like that'd be a plausible user expectation.\n\n\nThe bottom line is there's zero chance you'll ever convince me that this\nis a good idea. I think the semantics are at best questionable, I think\nit will break important optimizations, and I think the chances of\nfinding ourselves in conflict with some future SQL spec extension are\ntoo high. (Even if you can make the case that this isn't violating the\nspec *today*, which I rather doubt so far as the information_schema is\nconcerned. 
The fact that we've got legacy behaviors that are outside\nthe spec there isn't a great argument for adding more.)\n\nNow, if you can persuade the SQL committee that this behavior should be\nstandardized, then two of those concerns would go away (since I don't\nthink you'll get squishy semantics past them). But I think submitting\na patch now is way premature and mostly going to waste people's time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Sep 2022 18:25:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": ">>> Another example is that I think the idea is only well-defined when\n>>> the subset column(s) are a primary key, or at least all marked NOT NULL.\n>>> Otherwise they're not as unique as you're claiming. But then the FK\n>>> constraint really has to be dependent on a PK constraint not just an\n>>> index definition, since indexes in themselves don't enforce\nnot-nullness.\n>>> That gets back to Peter's complaint that referring to an index isn't\n>>> good enough.\n\n>> The unique index underlying foo.a guarantees that (foo.a, foo.b) is\nunique\n>> if foo.a isn't NULL. That is, there can be multiple rows (NULL, 1) in\nfoo.\n>> However, such a row can't be the target of the foreign key constraint\n>> anyway.\n\n> You're ignoring the possibility of a MATCH PARTIAL FK constraint.\n> Admittedly, we don't implement those today, and there hasn't been\n> a lot of interest in doing so. But they're in the SQL spec so we\n> should fix that someday.\n\n> I also wonder how this all interacts with the UNIQUE NULLS NOT\n> DISTINCT feature that we just got done implementing for v15.\n> I don't know if the spec says that an FK depending on such a\n> constraint should treat nulls as ordinary unique values --- but\n> it sure seems like that'd be a plausible user expectation.\n\nI don't think that the UNIQUE NULLS DISTINCT/NOT DISTINCT patch will have\nany\nimpact on this proposal. Currently (and admittedly I haven't thought at all\nabout MATCH PARTIAL), a NULL in a referencing row precludes a reference at\nall:\n\n  * If the foreign key constraint is declared MATCH SIMPLE, then no\nreferenced\n    row exists for the referencing row.\n  * If the foreign key constraint is declared MATCH FULL, then the\nreferencing\n    row must not have a NULL in any of its referencing columns.\n\nUNIQUE NULLS NOT DISTINCT is the current behavior, and this proposal\nshouldn't\nhave a problem with the current behavior. 
In the case of UNIQUE NULLS\nDISTINCT,\nthen NULLs behave, from a uniqueness perspective, as a singleton value and\nthus\nshouldn't cause any additional semantic difficulties in regards to this\nproposal.\n\nI don't have access to a copy of the SQL specification and it doesn't look\nlike\nanyone implements MATCH PARTIAL. Based on what I can gather from the\ninternet,\nit appears that MATCH PARTIAL allows at most one referencing column to be\nNULL,\nand guarantees that at least one row in the referenced table matches the\nremaining columns; implicitly, multiple matches are allowed. If these are\nthe\nsemantics of MATCH PARTIAL, then it seems to me that uniqueness of the\nreferenced rows aren't very important.\n\nWhat other semantics and edge cases regarding this proposal should I\nconsider?",
"msg_date": "Tue, 27 Sep 2022 18:39:40 -0400",
"msg_from": "Kaiting Chen <ktchen14@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "> So the broader point I'm trying to make is that, as I understand it,\n> indexes backing foreign key constraints is an implementation detail.\n> The SQL standard details the behavior of foreign key constraints\n> regardless of implementation details like a backing index. That means\n> that the behavior of two column foreign key constraints is defined in\n> a single way whether or not there's a backing index at all or whether\n> such a backing index, if present, contains one or two columns.\n\n> I understand that for the use case you're describing this isn't the\n> absolute most efficient way to implement the desired data semantics.\n> But it would be incredibly confusing (and, I think, a violation of the\n> SQL standard) to have one foreign key constraint work in a different\n> way from another such constraint when both are indistinguishable at\n> the constraint level (the backing index isn't an attribute of the\n> constraint; it's merely an implementation detail).\n\nIt appears to me that the unique index backing a foreign key constraint\n*isn't*\nan implementation detail in PostgreSQL; rather, the *unique constraint* is\nthe\nimplementation detail. The reason I say this is because it's possible to\ncreate\na foreign key constraint where the uniqueness of referenced columns are\nguaranteed by only a unique index and where no such unique constraint\nexists.\n\nSpecifically, this line in the documentation:\n\n The referenced columns must be the columns of a non-deferrable unique or\n primary key constraint in the referenced table.\n\nIsn't true. In practice, the referenced columns must be the columns of a\nvalid,\nnondeferrable, nonfunctional, nonpartial, unique index. 
Whether or not a\nunique\nconstraint exists is immaterial to whether or not postgres will let you\ndefine\nsuch a foreign key constraint.",
"msg_date": "Tue, 27 Sep 2022 18:50:42 -0400",
"msg_from": "Kaiting Chen <ktchen14@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "> For one example of where the semantics get fuzzy, it's not\n> very clear how the extra-baggage columns ought to participate in\n> CASCADE updates. Currently, if we have\n> CREATE TABLE foo (a integer PRIMARY KEY, b integer);\n> then an update that changes only foo.b doesn't need to update\n> referencing tables, and I think we even have optimizations that\n> assume that if no unique-key columns are touched then RI checks\n> need not be made.\n\nRegarding optimizations that skip RI checks on the PK side of the\nrelationship,\nI believe the relevant code is here (in trigger.c):\n\n if (TRIGGER_FIRED_BY_UPDATE(event) || TRIGGER_FIRED_BY_DELETE(event)) {\n ...\n\n switch (RI_FKey_trigger_type(trigger->tgfoid)) {\n ...\n\n case RI_TRIGGER_PK:\n ...\n\n /* Update or delete on trigger's PK table */\n if (!RI_FKey_pk_upd_check_required(trigger, rel, oldslot,\nnewslot))\n {\n /* skip queuing this event */\n continue;\n }\n\n ...\n\nAnd the checks done in RI_FKey_pk_upd_check_required() are:\n\n const RI_ConstraintInfo *riinfo;\n\n riinfo = ri_FetchConstraintInfo(trigger, pk_rel, true);\n\n /*\n * If any old key value is NULL, the row could not have been referenced\nby\n * an FK row, so no check is needed.\n */\n if (ri_NullCheck(RelationGetDescr(pk_rel), oldslot, riinfo, true) !=\nRI_KEYS_NONE_NULL)\n return false;\n\n /* If all old and new key values are equal, no check is needed */\n if (newslot && ri_KeysEqual(pk_rel, oldslot, newslot, riinfo, true))\n return false;\n\n /* Else we need to fire the trigger. 
*/\n return true;\n\nThe columns inspected by both ri_NullCheck() and ri_KeysEqual() are based\non the\nriinfo->pk_attnums:\n\n if (rel_is_pk)\n attnums = riinfo->pk_attnums;\n else\n attnums = riinfo->fk_attnums;\n\nThe check in ri_NullCheck() is, expectedly, a straightforward NULL check:\n\n for (int i = 0; i < riinfo->nkeys; i++)\n {\n if (slot_attisnull(slot, attnums[i]))\n nonenull = false;\n else\n allnull = false;\n }\n\nThe check in ri_KeysEqual() is a bytewise comparison:\n\n /* XXX: could be worthwhile to fetch all necessary attrs at once */\n for (int i = 0; i < riinfo->nkeys; i++)\n {\n Datum oldvalue;\n Datum newvalue;\n bool isnull;\n\n /*\n * Get one attribute's oldvalue. If it is NULL - they're not\nequal.\n */\n oldvalue = slot_getattr(oldslot, attnums[i], &isnull);\n if (isnull)\n return false;\n\n /*\n * Get one attribute's newvalue. If it is NULL - they're not\nequal.\n */\n newvalue = slot_getattr(newslot, attnums[i], &isnull);\n if (isnull)\n return false;\n\n if (rel_is_pk)\n {\n /*\n * If we are looking at the PK table, then do a bytewise\n * comparison. We must propagate PK changes if the\nvalue is\n * changed to one that \"looks\" different but would\ncompare as\n * equal using the equality operator. This only makes a\n * difference for ON UPDATE CASCADE, but for consistency\nwe treat\n * all changes to the PK the same.\n */\n Form_pg_attribute att =\nTupleDescAttr(oldslot->tts_tupleDescriptor, attnums[i] - 1);\n\n if (!datum_image_eq(oldvalue, newvalue, att->attbyval,\natt->attlen))\n return false;\n }\n else\n {\n /*\n * For the FK table, compare with the appropriate\nequality\n * operator. Changes that compare equal will still\nsatisfy the\n * constraint after the update.\n */\n if (!ri_AttributesEqual(riinfo->ff_eq_oprs[i],\nRIAttType(rel, attnums[i]),\n oldvalue,\nnewvalue))\n return false;\n }\n }\n\nIt seems like neither optimization actually requires the presence of the\nunique\nindex. 
And, as my proposed patch would leave both riinfo->nkeys and\nriinfo->pk_attnums exactly the same as before, I don't believe that it\nshould\nhave any impact on these optimizations.",
"msg_date": "Wed, 28 Sep 2022 11:14:00 -0400",
"msg_from": "Kaiting Chen <ktchen14@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
},
{
"msg_contents": "On 28.09.22 00:39, Kaiting Chen wrote:\n> What other semantics and edge cases regarding this proposal should I \n> consider?\n\nI'm not as pessimistic as others that it couldn't be made to work. But \nit's the job of this proposal to figure this out. Implementing it is \nprobably not that hard in the end, but working out the specification in \nall details is the actual job.\n\n\n\n",
"msg_date": "Thu, 6 Oct 2022 14:39:51 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow foreign keys to reference a superset of unique columns"
}
]
[
{
"msg_contents": "Hi hackers,\n\nI saw a problem in logical replication, when the target table on subscriber is a\npartitioned table, it only checks whether the Replica Identity of partitioned\ntable is consistent with the publisher, and doesn't check Replica Identity of\nthe partition.\n\nFor example:\n-- publisher --\ncreate table tbl (a int not null, b int);\ncreate unique INDEX ON tbl (a);\nalter table tbl replica identity using INDEX tbl_a_idx;\ncreate publication pub for table tbl;\n\n-- subscriber --\n-- table tbl (parent table) has RI index, while table child has no RI index.\ncreate table tbl (a int not null, b int) partition by range (a);\ncreate table child partition of tbl default;\ncreate unique INDEX ON tbl (a);\nalter table tbl replica identity using INDEX tbl_a_idx;\ncreate subscription sub connection 'port=5432 dbname=postgres' publication pub;\n\n-- publisher --\ninsert into tbl values (11,11);\nupdate tbl set a=a+1;\n\nIt caused an assertion failure on subscriber:\nTRAP: FailedAssertion(\"OidIsValid(idxoid) || (remoterel->replident == REPLICA_IDENTITY_FULL)\", File: \"worker.c\", Line: 2088, PID: 1616523)\n\nThe backtrace is attached.\n\nWe got the assertion failure because idxoid is invalid, as table child has no\nReplica Identity or Primary Key. We have a check in check_relation_updatable(),\nbut what it checked is table tbl (the parent table) and it passed the check.\n\nI think one approach to fix it is to check the target partition in this case,\ninstead of the partitioned table.\n\nWhen trying to fix it, I saw some other problems about updating partition map\ncache.\n\na) In logicalrep_partmap_invalidate_cb(), the type of the entry in\nLogicalRepPartMap should be LogicalRepPartMapEntry, instead of\nLogicalRepRelMapEntry.\n\nb) In logicalrep_partition_open(), it didn't check if the entry is valid.\n\nc) When the publisher send new relation mapping, only relation map cache will be\nupdated, and partition map cache wouldn't. 
I think it also should be updated\nbecause it has remote relation information, too.\n\nAttach two patches which tried to fix them.\n0001 patch: fix the above three problems about partition map cache.\n0002 patch: fix the assertion failure, check the Replica Identity of the\npartition if the target table is a partitioned table.\n\nThanks to Hou Zhijie for helping me in the first patch.\n\nI will add a test for it later if no one doesn't like this fix.\n\nRegards,\nShi yu",
"msg_date": "Wed, 8 Jun 2022 08:46:46 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Wed, Jun 8, 2022 at 2:17 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> I saw a problem in logical replication, when the target table on subscriber is a\n> partitioned table, it only checks whether the Replica Identity of partitioned\n> table is consistent with the publisher, and doesn't check Replica Identity of\n> the partition.\n>\n...\n>\n> It caused an assertion failure on subscriber:\n> TRAP: FailedAssertion(\"OidIsValid(idxoid) || (remoterel->replident == REPLICA_IDENTITY_FULL)\", File: \"worker.c\", Line: 2088, PID: 1616523)\n>\n> The backtrace is attached.\n>\n> We got the assertion failure because idxoid is invalid, as table child has no\n> Replica Identity or Primary Key. We have a check in check_relation_updatable(),\n> but what it checked is table tbl (the parent table) and it passed the check.\n>\n\nI can reproduce the problem. This seems to be the problem since commit\nf1ac27bf (Add logical replication support to replicate into\npartitioned tables), so adding Amit L. and Peter E.\n\n> I think one approach to fix it is to check the target partition in this case,\n> instead of the partitioned table.\n>\n\nThis approach sounds reasonable to me. One minor point:\n+/*\n+ * Check that replica identity matches.\n+ *\n+ * We allow for stricter replica identity (fewer columns) on subscriber as\n+ * that will not stop us from finding unique tuple. IE, if publisher has\n+ * identity (id,timestamp) and subscriber just (id) this will not be a\n+ * problem, but in the opposite scenario it will.\n+ *\n+ * Don't throw any error here just mark the relation entry as not updatable,\n+ * as replica identity is only for updates and deletes but inserts can be\n+ * replicated even without it.\n+ */\n+static void\n+logicalrep_check_updatable(LogicalRepRelMapEntry *entry)\n\nCan we name this function as logicalrep_rel_mark_updatable as we are\ndoing that? 
If so, change the comments as well.\n\n> When trying to fix it, I saw some other problems about updating partition map\n> cache.\n>\n> a) In logicalrep_partmap_invalidate_cb(), the type of the entry in\n> LogicalRepPartMap should be LogicalRepPartMapEntry, instead of\n> LogicalRepRelMapEntry.\n>\n> b) In logicalrep_partition_open(), it didn't check if the entry is valid.\n>\n> c) When the publisher send new relation mapping, only relation map cache will be\n> updated, and partition map cache wouldn't. I think it also should be updated\n> because it has remote relation information, too.\n>\n\nIs there any test case that can show the problem due to your above observations?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Jun 2022 16:32:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "Hi Amit,\n\nOn Thu, Jun 9, 2022 at 8:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Wed, Jun 8, 2022 at 2:17 PM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> > I saw a problem in logical replication, when the target table on subscriber is a\n> > partitioned table, it only checks whether the Replica Identity of partitioned\n> > table is consistent with the publisher, and doesn't check Replica Identity of\n> > the partition.\n> ...\n> >\n> > It caused an assertion failure on subscriber:\n> > TRAP: FailedAssertion(\"OidIsValid(idxoid) || (remoterel->replident == REPLICA_IDENTITY_FULL)\", File: \"worker.c\", Line: 2088, PID: 1616523)\n> >\n> > The backtrace is attached.\n> >\n> > We got the assertion failure because idxoid is invalid, as table child has no\n> > Replica Identity or Primary Key. We have a check in check_relation_updatable(),\n> > but what it checked is table tbl (the parent table) and it passed the check.\n> >\n>\n> I can reproduce the problem. This seems to be the problem since commit\n> f1ac27bf (Add logical replication support to replicate into\n> partitioned tables), so adding Amit L. and Peter E.\n\nThanks, I can see the problem.\n\nI have looked at other mentioned problems with the code too and agree\nthey look like bugs.\n\nBoth patches look to be on the right track to fix the issues, but will\nlook more closely to see if I've anything to add.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 9 Jun 2022 22:34:29 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Thu, June 9, 2022 7:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> > I think one approach to fix it is to check the target partition in this case,\r\n> > instead of the partitioned table.\r\n> >\r\n> \r\n> This approach sounds reasonable to me. One minor point:\r\n> +/*\r\n> + * Check that replica identity matches.\r\n> + *\r\n> + * We allow for stricter replica identity (fewer columns) on subscriber as\r\n> + * that will not stop us from finding unique tuple. IE, if publisher has\r\n> + * identity (id,timestamp) and subscriber just (id) this will not be a\r\n> + * problem, but in the opposite scenario it will.\r\n> + *\r\n> + * Don't throw any error here just mark the relation entry as not updatable,\r\n> + * as replica identity is only for updates and deletes but inserts can be\r\n> + * replicated even without it.\r\n> + */\r\n> +static void\r\n> +logicalrep_check_updatable(LogicalRepRelMapEntry *entry)\r\n> \r\n> Can we name this function as logicalrep_rel_mark_updatable as we are\r\n> doing that? If so, change the comments as well.\r\n> \r\n\r\nOK. Modified.\r\n\r\n> > When trying to fix it, I saw some other problems about updating partition\r\n> map\r\n> > cache.\r\n> >\r\n> > a) In logicalrep_partmap_invalidate_cb(), the type of the entry in\r\n> > LogicalRepPartMap should be LogicalRepPartMapEntry, instead of\r\n> > LogicalRepRelMapEntry.\r\n> >\r\n> > b) In logicalrep_partition_open(), it didn't check if the entry is valid.\r\n> >\r\n> > c) When the publisher send new relation mapping, only relation map cache\r\n> will be\r\n> > updated, and partition map cache wouldn't. 
I think it also should be updated\r\n> > because it has remote relation information, too.\r\n> >\r\n> \r\n> Is there any test case that can show the problem due to your above\r\n> observations?\r\n> \r\n\r\nPlease see the following case.\r\n\r\n-- publisher\r\ncreate table tbl (a int primary key, b int);\r\ncreate publication pub for table tbl;\r\n\r\n-- subscriber\r\ncreate table tbl (a int primary key, b int, c int) partition by range (a);\r\ncreate table child partition of tbl default;\r\n\r\n-- publisher, make cache\r\ninsert into tbl values (1,1);\r\nupdate tbl set a=a+1;\r\nalter table tbl add column c int;\r\nupdate tbl set c=1 where a=2;\r\n\r\n-- subscriber\r\npostgres=# select * from tbl;\r\n a | b | c\r\n---+---+---\r\n 2 | 1 |\r\n(1 row)\r\n\r\nThe value of column c updated failed on subscriber.\r\nAnd after applying the first patch, it would work fine.\r\n\r\nI have added this case to the first patch. Also add a test case for the second\r\npatch.\r\n\r\nAttach the new patches.\r\n\r\nRegards,\r\nShi yu",
"msg_date": "Fri, 10 Jun 2022 08:45:04 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "Hello,\n\nOn Wed, Jun 8, 2022 at 5:47 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n> Hi hackers,\n>\n> I saw a problem in logical replication, when the target table on subscriber is a\n> partitioned table, it only checks whether the Replica Identity of partitioned\n> table is consistent with the publisher, and doesn't check Replica Identity of\n> the partition.\n>\n> For example:\n> -- publisher --\n> create table tbl (a int not null, b int);\n> create unique INDEX ON tbl (a);\n> alter table tbl replica identity using INDEX tbl_a_idx;\n> create publication pub for table tbl;\n>\n> -- subscriber --\n> -- table tbl (parent table) has RI index, while table child has no RI index.\n> create table tbl (a int not null, b int) partition by range (a);\n> create table child partition of tbl default;\n> create unique INDEX ON tbl (a);\n> alter table tbl replica identity using INDEX tbl_a_idx;\n> create subscription sub connection 'port=5432 dbname=postgres' publication pub;\n>\n> -- publisher --\n> insert into tbl values (11,11);\n> update tbl set a=a+1;\n>\n> It caused an assertion failure on subscriber:\n> TRAP: FailedAssertion(\"OidIsValid(idxoid) || (remoterel->replident == REPLICA_IDENTITY_FULL)\", File: \"worker.c\", Line: 2088, PID: 1616523)\n>\n> The backtrace is attached.\n>\n> We got the assertion failure because idxoid is invalid, as table child has no\n> Replica Identity or Primary Key. We have a check in check_relation_updatable(),\n> but what it checked is table tbl (the parent table) and it passed the check.\n>\n> I think one approach to fix it is to check the target partition in this case,\n> instead of the partitioned table.\n\nThat makes sense. A normal user update of a partitioned table will\nonly perform CheckCmdReplicaIdentity() for leaf partitions and the\nlogical replication updates should do the same. 
I may have been\nconfused at the time to think that ALTER TABLE REPLICA IDENTITY makes\nsure that the replica identities of all relations in a partition tree\nare forced to be the same at all times, though it seems that the patch\nto do so [1] didn't actually get in. I guess adding a test case would\nhave helped.\n\n> When trying to fix it, I saw some other problems about updating partition map\n> cache.\n>\n> a) In logicalrep_partmap_invalidate_cb(), the type of the entry in\n> LogicalRepPartMap should be LogicalRepPartMapEntry, instead of\n> LogicalRepRelMapEntry.\n\nIndeed.\n\n> b) In logicalrep_partition_open(), it didn't check if the entry is valid.\n\nYeah, that's bad. Actually, it seems that localrelvalid stuff for\nLogicalRepRelMapEntry came in 3d65b0593c5, but it's likely we missed\nin that commit that LogicalRepPartMapEntrys contain copies of, not\npointers to, the relevant parent's entry. This patch fixes that\noversight.\n\n> c) When the publisher send new relation mapping, only relation map cache will be\n> updated, and partition map cache wouldn't. I think it also should be updated\n> because it has remote relation information, too.\n\nYes, again a result of forgetting that the partition map entries have\ncopies of relation map entries.\n\n+logicalrep_partmap_invalidate\n\nI wonder why not call this logicalrep_partmap_update() to go with\nlogicalrep_relmap_update()? It seems confusing to have\nlogicalrep_partmap_invalidate() right next to\nlogicalrep_partmap_invalidate_cb().\n\n+/*\n+ * Invalidate the existing entry in the partition map.\n\nI think logicalrep_partmap_invalidate() may update *multiple* entries,\nbecause the hash table scan may find multiple PartMapEntrys containing\na copy of the RelMapEntry with given remoteid, that is, for multiple\npartitions of a given local parent table mapped to that remote\nrelation. 
So, please fix the comment as:\n\nInvalidate/Update the entries in the partition map that refer to 'remoterel'\n\nLikewise:\n\n+ /* Invalidate the corresponding partition map as well */\n\nMaybe, this should say:\n\nAlso update all entries in the partition map that refer to 'remoterel'.\n\nIn 0002:\n\n+logicalrep_check_updatable\n\n+1 to Amit's suggestion to use \"mark\" instead of \"check\".\n\n@@ -1735,6 +1735,13 @@ apply_handle_insert_internal(ApplyExecutionData *edata,\n static void\n check_relation_updatable(LogicalRepRelMapEntry *rel)\n {\n+ /*\n+ * If it is a partitioned table, we don't check it, we will check its\n+ * partition later.\n+ */\n+ if (rel->localrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n+ return;\n\nWhy do this? I mean why if logicalrep_check_updatable() doesn't care\nif the relation is partitioned or not -- it does all the work\nregardless.\n\nI suggest we don't add this check in check_relation_updatable().\n\n+ /* Check if we can do the update or delete. */\n\nMaybe mention \"leaf partition\", as:\n\nCheck if we can do the update or delete on the leaf partition.\n\nBTW, the following hunk in patch 0002 should really be a part of 0001.\n\n@@ -584,7 +594,6 @@ logicalrep_partition_open(LogicalRepRelMapEntry *root,\n Oid partOid = RelationGetRelid(partrel);\n AttrMap *attrmap = root->attrmap;\n bool found;\n- int i;\n MemoryContext oldctx;\n\n if (LogicalRepPartMap == NULL)\n\n> Thanks to Hou Zhijie for helping me in the first patch.\n\nThank you both for the fixes.\n\n> I will add a test for it later\n\nThat would be very welcome.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/flat/201902041630.gpadougzab7v%40alvherre.pgsql\n\n\n",
"msg_date": "Fri, 10 Jun 2022 17:55:44 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Fri, Jun 10, 2022 at 2:26 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> +logicalrep_partmap_invalidate\n>\n> I wonder why not call this logicalrep_partmap_update() to go with\n> logicalrep_relmap_update()? It seems confusing to have\n> logicalrep_partmap_invalidate() right next to\n> logicalrep_partmap_invalidate_cb().\n>\n\nI am thinking about why we need to update the relmap in this new\nfunction logicalrep_partmap_invalidate()? I think it may be better to\ndo it in logicalrep_partition_open() when actually required,\notherwise, we end up doing a lot of work that may not be of use unless\nthe corresponding partition is accessed. Also, it seems awkward to me\nthat we do the same thing in this new function\nlogicalrep_partmap_invalidate() and then also in\nlogicalrep_partition_open() under different conditions.\n\nOne more point about the 0001, it doesn't seem to have a test that\nvalidates logicalrep_partmap_invalidate_cb() functionality. I think\nfor that we need to Alter the local table (table on the subscriber\nside). Can we try to write a test for it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 11 Jun 2022 07:06:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Saturday, June 11, 2022 9:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Fri, Jun 10, 2022 at 2:26 PM Amit Langote <amitlangote09@gmail.com>\r\n> wrote:\r\n> >\r\n> > +logicalrep_partmap_invalidate\r\n> >\r\n> > I wonder why not call this logicalrep_partmap_update() to go with\r\n> > logicalrep_relmap_update()? It seems confusing to have\r\n> > logicalrep_partmap_invalidate() right next to\r\n> > logicalrep_partmap_invalidate_cb().\r\n> >\r\n> \r\n> I am thinking about why we need to update the relmap in this new function\r\n> logicalrep_partmap_invalidate()? I think it may be better to do it in\r\n> logicalrep_partition_open() when actually required, otherwise, we end up doing\r\n> a lot of work that may not be of use unless the corresponding partition is\r\n> accessed. Also, it seems awkward to me that we do the same thing in this new\r\n> function\r\n> logicalrep_partmap_invalidate() and then also in\r\n> logicalrep_partition_open() under different conditions.\r\n> \r\n> One more point about the 0001, it doesn't seem to have a test that validates\r\n> logicalrep_partmap_invalidate_cb() functionality. I think for that we need to Alter\r\n> the local table (table on the subscriber side). Can we try to write a test for it?\r\n\r\n\r\nThanks for Amit. L and Amit. K for your comments ! I agree with this point.\r\nHere is the version patch set which try to address all these comments.\r\n\r\nIn addition, when reviewing the code, I found some other related\r\nproblems in the code.\r\n\r\n1)\r\n\t\tentry->attrmap = make_attrmap(map->maplen);\r\n\t\tfor (attno = 0; attno < entry->attrmap->maplen; attno++)\r\n\t\t{\r\n\t\t\tAttrNumber\troot_attno = map->attnums[attno];\r\n\r\n\t\t\tentry->attrmap->attnums[attno] = attrmap->attnums[root_attno - 1];\r\n\t\t}\r\n\r\nIn this case, It’s possible that 'attno' points to a dropped column in which\r\ncase the root_attno would be '0'. 
I think in this case we should just set the\r\nentry->attrmap->attnums[attno] to '-1' instead of accessing the\r\nattrmap->attnums[]. I included this change in 0001 because the testcase which\r\ncan reproduce these problems are related(we need to ALTER the partition on\r\nsubscriber to reproduce it).\r\n\r\n2)\r\n\tif (entry->attrmap)\r\n\t\tpfree(entry->attrmap);\r\n\r\nI think we should use free_attrmap instead of pfree here to avoid memory leak.\r\nAnd we also need to check the attrmap in logicalrep_rel_open() and\r\nlogicalrep_partition_open() and free it if needed. I am not sure shall we put this\r\nin the 0001 patch, so attach a separate patch for this. We can merge later this if needed.\r\n\r\nBest regards,\r\nHou zj",
"msg_date": "Sat, 11 Jun 2022 09:06:16 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Sat, Jun 11, 2022 at 2:36 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Saturday, June 11, 2022 9:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jun 10, 2022 at 2:26 PM Amit Langote <amitlangote09@gmail.com>\n> > wrote:\n> > >\n> > > +logicalrep_partmap_invalidate\n> > >\n> > > I wonder why not call this logicalrep_partmap_update() to go with\n> > > logicalrep_relmap_update()? It seems confusing to have\n> > > logicalrep_partmap_invalidate() right next to\n> > > logicalrep_partmap_invalidate_cb().\n> > >\n> >\n> > I am thinking about why we need to update the relmap in this new function\n> > logicalrep_partmap_invalidate()? I think it may be better to do it in\n> > logicalrep_partition_open() when actually required, otherwise, we end up doing\n> > a lot of work that may not be of use unless the corresponding partition is\n> > accessed. Also, it seems awkward to me that we do the same thing in this new\n> > function\n> > logicalrep_partmap_invalidate() and then also in\n> > logicalrep_partition_open() under different conditions.\n> >\n> > One more point about the 0001, it doesn't seem to have a test that validates\n> > logicalrep_partmap_invalidate_cb() functionality. I think for that we need to Alter\n> > the local table (table on the subscriber side). Can we try to write a test for it?\n>\n>\n> Thanks for Amit. L and Amit. K for your comments ! I agree with this point.\n> Here is the version patch set which try to address all these comments.\n>\n> In addition, when reviewing the code, I found some other related\n> problems in the code.\n>\n> 1)\n> entry->attrmap = make_attrmap(map->maplen);\n> for (attno = 0; attno < entry->attrmap->maplen; attno++)\n> {\n> AttrNumber root_attno = map->attnums[attno];\n>\n> entry->attrmap->attnums[attno] = attrmap->attnums[root_attno - 1];\n> }\n>\n> In this case, It’s possible that 'attno' points to a dropped column in which\n> case the root_attno would be '0'. 
I think in this case we should just set the\n> entry->attrmap->attnums[attno] to '-1' instead of accessing the\n> attrmap->attnums[]. I included this change in 0001 because the testcase which\n> can reproduce these problems are related(we need to ALTER the partition on\n> subscriber to reproduce it).\n>\n\nHmm, this appears to be a different issue. Can we separate out the\nbug-fix code for the subscriber-side issue caused by the DDL on the\nsubscriber?\n\nFew other comments:\n+ * Note that we don't update the remoterel information in the entry here,\n+ * we will update the information in logicalrep_partition_open to save\n+ * unnecessary work.\n+ */\n+void\n+logicalrep_partmap_invalidate(LogicalRepRelation *remoterel)\n\n/to save/to avoid\n\nAlso, I agree with Amit L. that it is confusing to have\nlogicalrep_partmap_invalidate() right next to\nlogicalrep_partmap_invalidate_cb() and both have somewhat different\nkinds of logic. So, we can either name it as\nlogicalrep_partmap_reset_relmap() or logicalrep_partmap_update()\nunless you have any other better suggestions? Accordingly, change the\ncomment atop this function.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 13 Jun 2022 11:22:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Monday, June 13, 2022 1:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Sat, Jun 11, 2022 at 2:36 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Saturday, June 11, 2022 9:36 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Fri, Jun 10, 2022 at 2:26 PM Amit Langote\r\n> > > <amitlangote09@gmail.com>\r\n> > > wrote:\r\n> > > >\r\n> > > > +logicalrep_partmap_invalidate\r\n> > > >\r\n> > > > I wonder why not call this logicalrep_partmap_update() to go with\r\n> > > > logicalrep_relmap_update()? It seems confusing to have\r\n> > > > logicalrep_partmap_invalidate() right next to\r\n> > > > logicalrep_partmap_invalidate_cb().\r\n> > > >\r\n> > >\r\n> > > I am thinking about why we need to update the relmap in this new\r\n> > > function logicalrep_partmap_invalidate()? I think it may be better\r\n> > > to do it in\r\n> > > logicalrep_partition_open() when actually required, otherwise, we\r\n> > > end up doing a lot of work that may not be of use unless the\r\n> > > corresponding partition is accessed. Also, it seems awkward to me\r\n> > > that we do the same thing in this new function\r\n> > > logicalrep_partmap_invalidate() and then also in\r\n> > > logicalrep_partition_open() under different conditions.\r\n> > >\r\n> > > One more point about the 0001, it doesn't seem to have a test that\r\n> > > validates\r\n> > > logicalrep_partmap_invalidate_cb() functionality. I think for that\r\n> > > we need to Alter the local table (table on the subscriber side). Can we try to\r\n> write a test for it?\r\n> >\r\n> >\r\n> > Thanks for Amit. L and Amit. K for your comments ! 
I agree with this point.\r\n> > Here is the version patch set which try to address all these comments.\r\n> >\r\n> > In addition, when reviewing the code, I found some other related\r\n> > problems in the code.\r\n> >\r\n> > 1)\r\n> > entry->attrmap = make_attrmap(map->maplen);\r\n> > for (attno = 0; attno < entry->attrmap->maplen; attno++)\r\n> > {\r\n> > AttrNumber root_attno =\r\n> map->attnums[attno];\r\n> >\r\n> > entry->attrmap->attnums[attno] =\r\n> attrmap->attnums[root_attno - 1];\r\n> > }\r\n> >\r\n> > In this case, It’s possible that 'attno' points to a dropped column in\r\n> > which case the root_attno would be '0'. I think in this case we should\r\n> > just set the\r\n> > entry->attrmap->attnums[attno] to '-1' instead of accessing the\r\n> > attrmap->attnums[]. I included this change in 0001 because the\r\n> > attrmap->testcase which\r\n> > can reproduce these problems are related(we need to ALTER the\r\n> > partition on subscriber to reproduce it).\r\n> >\r\n> \r\n> Hmm, this appears to be a different issue. Can we separate out the bug-fix\r\n> code for the subscriber-side issue caused by the DDL on the subscriber?\r\n> \r\n> Few other comments:\r\n> + * Note that we don't update the remoterel information in the entry\r\n> +here,\r\n> + * we will update the information in logicalrep_partition_open to save\r\n> + * unnecessary work.\r\n> + */\r\n> +void\r\n> +logicalrep_partmap_invalidate(LogicalRepRelation *remoterel)\r\n> \r\n> /to save/to avoid\r\n> \r\n> Also, I agree with Amit L. that it is confusing to have\r\n> logicalrep_partmap_invalidate() right next to\r\n> logicalrep_partmap_invalidate_cb() and both have somewhat different kinds of\r\n> logic. So, we can either name it as\r\n> logicalrep_partmap_reset_relmap() or logicalrep_partmap_update() unless you\r\n> have any other better suggestions? 
Accordingly, change the comment atop\r\n> this function.\r\n\r\nThanks for the comments.\r\n\r\nI have separated out the bug-fix for the subscriber-side.\r\nAnd fix the typo and function name.\r\nAttach the new version patch set.\r\n\r\nBest regards,\r\nHou zj",
"msg_date": "Mon, 13 Jun 2022 07:33:18 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Sat, Jun 11, 2022 at 10:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, Jun 10, 2022 at 2:26 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > +logicalrep_partmap_invalidate\n> >\n> > I wonder why not call this logicalrep_partmap_update() to go with\n> > logicalrep_relmap_update()? It seems confusing to have\n> > logicalrep_partmap_invalidate() right next to\n> > logicalrep_partmap_invalidate_cb().\n> >\n>\n> I am thinking about why we need to update the relmap in this new\n> function logicalrep_partmap_invalidate()? I think it may be better to\n> do it in logicalrep_partition_open() when actually required,\n> otherwise, we end up doing a lot of work that may not be of use unless\n> the corresponding partition is accessed. Also, it seems awkward to me\n> that we do the same thing in this new function\n> logicalrep_partmap_invalidate() and then also in\n> logicalrep_partition_open() under different conditions.\n\nBoth logicalrep_rel_open() and logicalrel_partition_open() only ever\ntouch the local Relation, never the LogicalRepRelation. Updating the\nlatter is the responsibility of logicalrep_relmap_update(), which is\nthere to support handling of the RELATION message by\napply_handle_relation(). Given that we make a separate copy of the\nparent's LogicalRepRelMapEntry for each partition to put into the\ncorresponding LogicalRepPartMapEntry, those copies must be updated as\nwell when a RELATION message targeting the parent's entry arrives. So\nit seems fine that the patch is making it the\nlogicalrep_relmap_update()'s responsibility to update the partition\ncopies using the new logicalrep_partition_invalidate/update()\nsubroutine.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 13 Jun 2022 17:50:41 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Mon, Jun 13, 2022 at 2:20 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Sat, Jun 11, 2022 at 10:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Jun 10, 2022 at 2:26 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > >\n> > > +logicalrep_partmap_invalidate\n> > >\n> > > I wonder why not call this logicalrep_partmap_update() to go with\n> > > logicalrep_relmap_update()? It seems confusing to have\n> > > logicalrep_partmap_invalidate() right next to\n> > > logicalrep_partmap_invalidate_cb().\n> > >\n> >\n> > I am thinking about why we need to update the relmap in this new\n> > function logicalrep_partmap_invalidate()? I think it may be better to\n> > do it in logicalrep_partition_open() when actually required,\n> > otherwise, we end up doing a lot of work that may not be of use unless\n> > the corresponding partition is accessed. Also, it seems awkward to me\n> > that we do the same thing in this new function\n> > logicalrep_partmap_invalidate() and then also in\n> > logicalrep_partition_open() under different conditions.\n>\n> Both logicalrep_rel_open() and logicalrel_partition_open() only ever\n> touch the local Relation, never the LogicalRepRelation.\n>\n\nWe do make the copy of remote rel in logicalrel_partition_open() when\nthe entry is not found. I feel the same should happen when remote\nrelation is reset/invalidated by the RELATION message.\n\n> Updating the\n> latter is the responsibility of logicalrep_relmap_update(), which is\n> there to support handling of the RELATION message by\n> apply_handle_relation(). Given that we make a separate copy of the\n> parent's LogicalRepRelMapEntry for each partition to put into the\n> corresponding LogicalRepPartMapEntry, those copies must be updated as\n> well when a RELATION message targeting the parent's entry arrives. 
So\n> it seems fine that the patch is making it the\n> logicalrep_relmap_update()'s responsibility to update the partition\n> copies using the new logicalrep_partition_invalidate/update()\n> subroutine.\n>\n\nI think we can do that way as well but do you see any benefit in it?\nThe way I am suggesting will avoid the effort of updating the remote\nrel copy till we try to access that particular partition.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 13 Jun 2022 14:44:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Mon, Jun 13, 2022 at 1:03 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Monday, June 13, 2022 1:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I have separated out the bug-fix for the subscriber-side.\n> And fix the typo and function name.\n> Attach the new version patch set.\n>\n\nThe first patch looks good to me. I have slightly modified one of the\ncomments and the commit message. I think we need to backpatch this\nthrough 13 where we introduced support to replicate into partitioned\ntables (commit f1ac27bf). If you guys are fine, I'll push this once\nthe work for PG14.4 is done.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 13 Jun 2022 17:56:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Mon, Jun 13, 2022 at 9:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Mon, Jun 13, 2022 at 1:03 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> > On Monday, June 13, 2022 1:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I have separated out the bug-fix for the subscriber-side.\n> > And fix the typo and function name.\n> > Attach the new version patch set.\n> >\n>\n> The first patch looks good to me. I have slightly modified one of the\n> comments and the commit message. I think we need to backpatch this\n> through 13 where we introduced support to replicate into partitioned\n> tables (commit f1ac27bf). If you guys are fine, I'll push this once\n> the work for PG14.4 is done.\n\nBoth the code changes and test cases look good to me. Just a couple\nof minor nitpicks with test changes:\n\n+ CREATE UNIQUE INDEX tab5_a_idx ON tab5 (a);\n+ ALTER TABLE tab5 REPLICA IDENTITY USING INDEX tab5_a_idx;\n+ ALTER TABLE tab5_1 REPLICA IDENTITY USING INDEX tab5_1_a_idx;});\n\nNot sure if we follow it elsewhere, but should we maybe avoid using\nthe internally generated index name as in the partition's case above?\n\n+# Test the case that target table on subscriber is a partitioned table and\n+# check that the changes are replicated correctly after changing the schema of\n+# table on subcriber.\n\nThe first sentence makes it sound like the tests that follow are the\nfirst ones in the file where the target table is partitioned, which is\nnot true, so I think we should drop that part. Also how about being\nmore specific about the test intent, say:\n\nTest that replication continues to work correctly after altering the\npartition of a partitioned target table.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 Jun 2022 15:17:33 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Mon, Jun 13, 2022 at 6:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Mon, Jun 13, 2022 at 2:20 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Sat, Jun 11, 2022 at 10:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Fri, Jun 10, 2022 at 2:26 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > >\n> > > > +logicalrep_partmap_invalidate\n> > > >\n> > > > I wonder why not call this logicalrep_partmap_update() to go with\n> > > > logicalrep_relmap_update()? It seems confusing to have\n> > > > logicalrep_partmap_invalidate() right next to\n> > > > logicalrep_partmap_invalidate_cb().\n> > > >\n> > >\n> > > I am thinking about why we need to update the relmap in this new\n> > > function logicalrep_partmap_invalidate()? I think it may be better to\n> > > do it in logicalrep_partition_open() when actually required,\n> > > otherwise, we end up doing a lot of work that may not be of use unless\n> > > the corresponding partition is accessed. Also, it seems awkward to me\n> > > that we do the same thing in this new function\n> > > logicalrep_partmap_invalidate() and then also in\n> > > logicalrep_partition_open() under different conditions.\n> >\n> > Both logicalrep_rel_open() and logicalrel_partition_open() only ever\n> > touch the local Relation, never the LogicalRepRelation.\n>\n> We do make the copy of remote rel in logicalrel_partition_open() when\n> the entry is not found. I feel the same should happen when remote\n> relation is reset/invalidated by the RELATION message.\n\nHmm, the problem is that a RELATION message will only invalidate the\nLogicalRepRelation portion of the target parent's entry, while any\ncopies that have been made for partitions that were opened till that\npoint will continue to have the old LogicalRepRelation information.\nAs things stand, logicalrep_partition_open() won't know that the\nparent entry's LogicalRepRelation may have been modified due to a\nRELATION message. 
It will reconstruct the entry only if the partition\nitself was modified locally, that is, if\nlogicalrep_partman_invalidate_cb() was called on the partition.\n\n> > Updating the\n> > latter is the responsibility of logicalrep_relmap_update(), which is\n> > there to support handling of the RELATION message by\n> > apply_handle_relation(). Given that we make a separate copy of the\n> > parent's LogicalRepRelMapEntry for each partition to put into the\n> > corresponding LogicalRepPartMapEntry, those copies must be updated as\n> > well when a RELATION message targeting the parent's entry arrives. So\n> > it seems fine that the patch is making it the\n> > logicalrep_relmap_update()'s responsibility to update the partition\n> > copies using the new logicalrep_partition_invalidate/update()\n> > subroutine.\n>\n> I think we can do that way as well but do you see any benefit in it?\n> The way I am suggesting will avoid the effort of updating the remote\n> rel copy till we try to access that particular partition.\n\nI don't see any benefit as such to doing it the way the patch does,\nit's just that that seems to be the only way to go given the way\nthings are.\n\nThis would have been unnecessary, for example, if the relation map\nentry had contained a LogicalRepRelation pointer instead of the\nstruct. The partition entries would point to the same entry as the\nparent's if that were the case and there would be no need to modify\nthe partitions' copies explicitly.\n\nAm I missing something?\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 Jun 2022 15:31:25 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Tue, Jun 14, 2022 at 3:31 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Jun 13, 2022 at 6:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I think we can do that way as well but do you see any benefit in it?\n> > The way I am suggesting will avoid the effort of updating the remote\n> > rel copy till we try to access that particular partition.\n>\n> I don't see any benefit as such to doing it the way the patch does,\n> it's just that that seems to be the only way to go given the way\n> things are.\n\nOh, I see that v4-0002 has this:\n\n+/*\n+ * Reset the entries in the partition map that refer to remoterel\n+ *\n+ * Called when new relation mapping is sent by the publisher to update our\n+ * expected view of incoming data from said publisher.\n+ *\n+ * Note that we don't update the remoterel information in the entry here,\n+ * we will update the information in logicalrep_partition_open to avoid\n+ * unnecessary work.\n+ */\n+void\n+logicalrep_partmap_reset_relmap(LogicalRepRelation *remoterel)\n+{\n+ HASH_SEQ_STATUS status;\n+ LogicalRepPartMapEntry *part_entry;\n+ LogicalRepRelMapEntry *entry;\n+\n+ if (LogicalRepPartMap == NULL)\n+ return;\n+\n+ hash_seq_init(&status, LogicalRepPartMap);\n+ while ((part_entry = (LogicalRepPartMapEntry *)\nhash_seq_search(&status)) != NULL)\n+ {\n+ entry = &part_entry->relmapentry;\n+\n+ if (entry->remoterel.remoteid != remoterel->remoteid)\n+ continue;\n+\n+ logicalrep_relmap_free_entry(entry);\n+\n+ memset(entry, 0, sizeof(LogicalRepRelMapEntry));\n+ }\n+}\n\nThe previous versions would also call logicalrep_relmap_update() on\nthe entry after the memset, which is no longer done, so that is indeed\nsaving useless work. 
I also see that both logicalrep_relmap_update()\nand the above function basically invalidate the whole\nLogicalRepRelMapEntry before setting the new remote relation info so\nthat the next logicaprep_rel_open() or logicalrep_partition_open()\nhave to refill the other members too.\n\nThough, I thought maybe you were saying that we shouldn't need this\nfunction for resetting partitions in the first place, which I guess\nyou weren't.\n\nv4-0002 looks good btw, except the bitpick about test comment similar\nto my earlier comment regarding v5-0001:\n\n+# Change the column order of table on publisher\n\nI think it might be better to say something specific to describe the\ntest intent, like:\n\nTest that replication into partitioned target table continues to works\ncorrectly when the published table is altered\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 Jun 2022 16:32:05 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Tue, Jun 14, 2022 2:18 PM Amit Langote <amitlangote09@gmail.com> wrote:\r\n> \r\n> On Mon, Jun 13, 2022 at 9:26 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > On Mon, Jun 13, 2022 at 1:03 PM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > > On Monday, June 13, 2022 1:53 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > I have separated out the bug-fix for the subscriber-side.\r\n> > > And fix the typo and function name.\r\n> > > Attach the new version patch set.\r\n> > >\r\n> >\r\n> > The first patch looks good to me. I have slightly modified one of the\r\n> > comments and the commit message. I think we need to backpatch this\r\n> > through 13 where we introduced support to replicate into partitioned\r\n> > tables (commit f1ac27bf). If you guys are fine, I'll push this once\r\n> > the work for PG14.4 is done.\r\n> \r\n> Both the code changes and test cases look good to me. Just a couple\r\n> of minor nitpicks with test changes:\r\n\r\nThanks for your comments.\r\n\r\n> \r\n> + CREATE UNIQUE INDEX tab5_a_idx ON tab5 (a);\r\n> + ALTER TABLE tab5 REPLICA IDENTITY USING INDEX tab5_a_idx;\r\n> + ALTER TABLE tab5_1 REPLICA IDENTITY USING INDEX tab5_1_a_idx;});\r\n> \r\n> Not sure if we follow it elsewhere, but should we maybe avoid using\r\n> the internally generated index name as in the partition's case above?\r\n> \r\n\r\nI saw some existing tests also use internally generated index name (e.g.\r\nreplica_identity.sql, ddl.sql and 031_column_list.pl), so maybe it's better to\r\nfix them all in a separate patch. 
I didn't change this.\r\n\r\n> +# Test the case that target table on subscriber is a partitioned table and\r\n> +# check that the changes are replicated correctly after changing the schema\r\n> of\r\n> +# table on subcriber.\r\n> \r\n> The first sentence makes it sound like the tests that follow are the\r\n> first ones in the file where the target table is partitioned, which is\r\n> not true, so I think we should drop that part. Also how about being\r\n> more specific about the test intent, say:\r\n> \r\n> Test that replication continues to work correctly after altering the\r\n> partition of a partitioned target table.\r\n> \r\n\r\nOK, modified.\r\n\r\nAttached the new version of patch set, and the patches for pg14 and pg13.\r\n\r\nRegards,\r\nShi yu",
"msg_date": "Tue, 14 Jun 2022 09:01:51 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Tue, Jun 14, 2022 at 1:02 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Tue, Jun 14, 2022 at 3:31 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Mon, Jun 13, 2022 at 6:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > I think we can do that way as well but do you see any benefit in it?\n> > > The way I am suggesting will avoid the effort of updating the remote\n> > > rel copy till we try to access that particular partition.\n> >\n> > I don't see any benefit as such to doing it the way the patch does,\n> > it's just that that seems to be the only way to go given the way\n> > things are.\n>\n> Oh, I see that v4-0002 has this:\n>\n> +/*\n> + * Reset the entries in the partition map that refer to remoterel\n> + *\n> + * Called when new relation mapping is sent by the publisher to update our\n> + * expected view of incoming data from said publisher.\n> + *\n> + * Note that we don't update the remoterel information in the entry here,\n> + * we will update the information in logicalrep_partition_open to avoid\n> + * unnecessary work.\n> + */\n> +void\n> +logicalrep_partmap_reset_relmap(LogicalRepRelation *remoterel)\n> +{\n> + HASH_SEQ_STATUS status;\n> + LogicalRepPartMapEntry *part_entry;\n> + LogicalRepRelMapEntry *entry;\n> +\n> + if (LogicalRepPartMap == NULL)\n> + return;\n> +\n> + hash_seq_init(&status, LogicalRepPartMap);\n> + while ((part_entry = (LogicalRepPartMapEntry *)\n> hash_seq_search(&status)) != NULL)\n> + {\n> + entry = &part_entry->relmapentry;\n> +\n> + if (entry->remoterel.remoteid != remoterel->remoteid)\n> + continue;\n> +\n> + logicalrep_relmap_free_entry(entry);\n> +\n> + memset(entry, 0, sizeof(LogicalRepRelMapEntry));\n> + }\n> +}\n>\n> The previous versions would also call logicalrep_relmap_update() on\n> the entry after the memset, which is no longer done, so that is indeed\n> saving useless work. 
I also see that both logicalrep_relmap_update()\n> and the above function basically invalidate the whole\n> LogicalRepRelMapEntry before setting the new remote relation info so\n> that the next logicaprep_rel_open() or logicalrep_partition_open()\n> have to refill the other members too.\n>\n> Though, I thought maybe you were saying that we shouldn't need this\n> function for resetting partitions in the first place, which I guess\n> you weren't.\n>\n\nRight.\n\n> v4-0002 looks good btw, except the bitpick about test comment similar\n> to my earlier comment regarding v5-0001:\n>\n> +# Change the column order of table on publisher\n>\n> I think it might be better to say something specific to describe the\n> test intent, like:\n>\n> Test that replication into partitioned target table continues to works\n> correctly when the published table is altered\n>\n\nOkay changed this and slightly modify the comments and commit message.\nI am just attaching the HEAD patches for the first two issues.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Tue, 14 Jun 2022 18:26:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Tue, Jun 14, 2022 at 9:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Tue, Jun 14, 2022 at 1:02 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > +# Change the column order of table on publisher\n> > I think it might be better to say something specific to describe the\n> > test intent, like:\n> >\n> > Test that replication into partitioned target table continues to works\n> > correctly when the published table is altered\n>\n> Okay changed this and slightly modify the comments and commit message.\n> I am just attaching the HEAD patches for the first two issues.\n\nLGTM, thanks.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 Jun 2022 11:41:40 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Tue, Jun 14, 2022 8:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> > v4-0002 looks good btw, except the bitpick about test comment similar\r\n> > to my earlier comment regarding v5-0001:\r\n> >\r\n> > +# Change the column order of table on publisher\r\n> >\r\n> > I think it might be better to say something specific to describe the\r\n> > test intent, like:\r\n> >\r\n> > Test that replication into partitioned target table continues to works\r\n> > correctly when the published table is altered\r\n> >\r\n> \r\n> Okay changed this and slightly modify the comments and commit message.\r\n> I am just attaching the HEAD patches for the first two issues.\r\n> \r\n\r\nThanks for updating the patch.\r\n\r\nAttached the new patch set which ran pgindent, and the patches for pg14 and\r\npg13. (Only the first two patches of the patch set.)\r\n\r\nRegards,\r\nShi yu",
"msg_date": "Wed, 15 Jun 2022 03:22:19 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Wed, Jun 15, 2022 at 8:52 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Tue, Jun 14, 2022 8:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > v4-0002 looks good btw, except the bitpick about test comment similar\n> > > to my earlier comment regarding v5-0001:\n> > >\n> > > +# Change the column order of table on publisher\n> > >\n> > > I think it might be better to say something specific to describe the\n> > > test intent, like:\n> > >\n> > > Test that replication into partitioned target table continues to works\n> > > correctly when the published table is altered\n> > >\n> >\n> > Okay changed this and slightly modify the comments and commit message.\n> > I am just attaching the HEAD patches for the first two issues.\n> >\n>\n> Thanks for updating the patch.\n>\n> Attached the new patch set which ran pgindent, and the patches for pg14 and\n> pg13. (Only the first two patches of the patch set.)\n>\n\nI have pushed the first bug-fix patch today.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 15 Jun 2022 17:59:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Wed, Jun 15, 2022 8:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> I have pushed the first bug-fix patch today.\r\n> \r\n\r\nThanks.\r\n\r\nAttached the remaining patches which are rebased.\r\n\r\nRegards,\r\nShi yu",
"msg_date": "Thu, 16 Jun 2022 05:07:19 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "Hi,\n\nOn Thu, Jun 16, 2022 at 2:07 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n> On Wed, Jun 15, 2022 8:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I have pushed the first bug-fix patch today.\n>\n> Attached the remaining patches which are rebased.\n\nThanks.\n\nComments on v9-0001:\n\n+ * Don't throw any error here just mark the relation entry as not updatable,\n+ * as replica identity is only for updates and deletes but inserts can be\n+ * replicated even without it.\n\nI know you're simply copying the old comment, but I think we can\nrewrite it to be slightly more useful:\n\nWe just mark the relation entry as not updatable here if the local\nreplica identity is found to be insufficient and leave it to\ncheck_relation_updatable() to throw the actual error if needed.\n\n+ /* Check that replica identity matches. */\n+ logicalrep_rel_mark_updatable(entry);\n\nMaybe the comment (there are 2 instances) should say:\n\nSet if the table's replica identity is enough to apply update/delete.\n\nFinally,\n\n+# Alter REPLICA IDENTITY on subscriber.\n+# No REPLICA IDENTITY in the partitioned table on subscriber, but what we check\n+# is the partition, so it works fine.\n\nFor consistency with other recently added comments, I'd suggest the\nfollowing wording:\n\nTest that replication works correctly as long as the leaf partition\nhas the necessary REPLICA IDENTITY, even though the actual target\npartitioned table does not.\n\nOn v9-0002:\n\n+ /* cleanup the invalid attrmap */\n\nIt seems that \"invalid\" here really means no-longer-useful, so we\nshould use that phrase as a nearby comment does:\n\nRelease the no-longer-useful attrmap, if any.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Jun 2022 15:13:07 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Thu, Jun 16, 2022 at 11:43 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, Jun 16, 2022 at 2:07 PM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> > On Wed, Jun 15, 2022 8:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > I have pushed the first bug-fix patch today.\n> >\n> > Attached the remaining patches which are rebased.\n>\n> Thanks.\n>\n> Comments on v9-0001:\n>\n> + * Don't throw any error here just mark the relation entry as not updatable,\n> + * as replica identity is only for updates and deletes but inserts can be\n> + * replicated even without it.\n>\n> I know you're simply copying the old comment, but I think we can\n> rewrite it to be slightly more useful:\n>\n> We just mark the relation entry as not updatable here if the local\n> replica identity is found to be insufficient and leave it to\n> check_relation_updatable() to throw the actual error if needed.\n>\n\nI am fine with improving this comment but it would be better if in\nsome way we keep the following part of the comment: \"as replica\nidentity is only for updates and deletes but inserts can be replicated\neven without it.\" as that makes it more clear why it is okay to just\nmark the entry as not updatable. One idea could be: \"We just mark the\nrelation entry as not updatable here if the local replica identity is\nfound to be insufficient and leave it to check_relation_updatable() to\nthrow the actual error if needed. This is because replica identity is\nonly for updates and deletes but inserts can be replicated even\nwithout it.\". Feel free to suggest if you have any better ideas?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Jun 2022 12:15:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Thu, Jun 16, 2022 at 3:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Thu, Jun 16, 2022 at 11:43 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > + * Don't throw any error here just mark the relation entry as not updatable,\n> > + * as replica identity is only for updates and deletes but inserts can be\n> > + * replicated even without it.\n> >\n> > I know you're simply copying the old comment, but I think we can\n> > rewrite it to be slightly more useful:\n> >\n> > We just mark the relation entry as not updatable here if the local\n> > replica identity is found to be insufficient and leave it to\n> > check_relation_updatable() to throw the actual error if needed.\n>\n> I am fine with improving this comment but it would be better if in\n> some way we keep the following part of the comment: \"as replica\n> identity is only for updates and deletes but inserts can be replicated\n> even without it.\" as that makes it more clear why it is okay to just\n> mark the entry as not updatable. One idea could be: \"We just mark the\n> relation entry as not updatable here if the local replica identity is\n> found to be insufficient and leave it to check_relation_updatable() to\n> throw the actual error if needed. This is because replica identity is\n> only for updates and deletes but inserts can be replicated even\n> without it.\". Feel free to suggest if you have any better ideas?\n\nI thought mentioning check_relation_updatable() would make it clear\nthat only updates (and deletes) care about a valid local replica\nidentity, because only apply_handle_{update|delete}() call that\nfunction. Anyway, how about this:\n\nWe just mark the relation entry as not updatable here if the local\nreplica identity is found to be insufficient for applying\nupdates/deletes (inserts don't care!) and leave it to\ncheck_relation_updatable() to throw the actual error if needed.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Jun 2022 16:00:23 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Thu, Jun 16, 2022 at 12:30 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, Jun 16, 2022 at 3:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Thu, Jun 16, 2022 at 11:43 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > + * Don't throw any error here just mark the relation entry as not updatable,\n> > > + * as replica identity is only for updates and deletes but inserts can be\n> > > + * replicated even without it.\n> > >\n> > > I know you're simply copying the old comment, but I think we can\n> > > rewrite it to be slightly more useful:\n> > >\n> > > We just mark the relation entry as not updatable here if the local\n> > > replica identity is found to be insufficient and leave it to\n> > > check_relation_updatable() to throw the actual error if needed.\n> >\n> > I am fine with improving this comment but it would be better if in\n> > some way we keep the following part of the comment: \"as replica\n> > identity is only for updates and deletes but inserts can be replicated\n> > even without it.\" as that makes it more clear why it is okay to just\n> > mark the entry as not updatable. One idea could be: \"We just mark the\n> > relation entry as not updatable here if the local replica identity is\n> > found to be insufficient and leave it to check_relation_updatable() to\n> > throw the actual error if needed. This is because replica identity is\n> > only for updates and deletes but inserts can be replicated even\n> > without it.\". Feel free to suggest if you have any better ideas?\n>\n> I thought mentioning check_relation_updatable() would make it clear\n> that only updates (and deletes) care about a valid local replica\n> identity, because only apply_handle_{update|delete}() call that\n> function. Anyway, how about this:\n>\n> We just mark the relation entry as not updatable here if the local\n> replica identity is found to be insufficient for applying\n> updates/deletes (inserts don't care!) 
and leave it to\n> check_relation_updatable() to throw the actual error if needed.\n>\n\nThis sounds better to me than the previous text.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Jun 2022 14:00:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Fri, Jun 10, 2022 at 2:26 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> @@ -1735,6 +1735,13 @@ apply_handle_insert_internal(ApplyExecutionData *edata,\n> static void\n> check_relation_updatable(LogicalRepRelMapEntry *rel)\n> {\n> + /*\n> + * If it is a partitioned table, we don't check it, we will check its\n> + * partition later.\n> + */\n> + if (rel->localrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n> + return;\n>\n> Why do this? I mean why if logicalrep_check_updatable() doesn't care\n> if the relation is partitioned or not -- it does all the work\n> regardless.\n>\n> I suggest we don't add this check in check_relation_updatable().\n>\n\nI think based on this suggestion patch has moved this check to\nlogicalrep_rel_mark_updatable(). For a partitioned table, it won't\neven validate whether it can mark updatable as false which seems odd\nto me even though there might not be any bug due to that. Was your\nsuggestion actually intended to move it to\nlogicalrep_rel_mark_updatable? If so, why do you think that is a\nbetter place?\n\nI think it is important to have this check to avoid giving error via\ncheck_relation_updatable() when partitioned tables don't have RI but\nnot clear which is the right place. I think check_relation_updatable()\nis better place than logicalrep_rel_mark_updatable() but may be there\nis a reason why that is not a good idea.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Jun 2022 15:12:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Thu, Jun 16, 2022 at 6:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, Jun 10, 2022 at 2:26 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > @@ -1735,6 +1735,13 @@ apply_handle_insert_internal(ApplyExecutionData *edata,\n> > static void\n> > check_relation_updatable(LogicalRepRelMapEntry *rel)\n> > {\n> > + /*\n> > + * If it is a partitioned table, we don't check it, we will check its\n> > + * partition later.\n> > + */\n> > + if (rel->localrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n> > + return;\n> >\n> > Why do this? I mean why if logicalrep_check_updatable() doesn't care\n> > if the relation is partitioned or not -- it does all the work\n> > regardless.\n> >\n> > I suggest we don't add this check in check_relation_updatable().\n>\n> I think based on this suggestion patch has moved this check to\n> logicalrep_rel_mark_updatable(). For a partitioned table, it won't\n> even validate whether it can mark updatable as false which seems odd\n> to me even though there might not be any bug due to that. Was your\n> suggestion actually intended to move it to\n> logicalrep_rel_mark_updatable?\n\nNo, I didn't intend to suggest that we move this check to\nlogicalrep_rel_mark_updatable(); didn't notice that that's what the\nlatest patch did.\n\nWhat I said is that we shouldn't ignore the updatable flag for a\npartitioned table in check_relation_updatable(), because\nlogicalrep_rel_mark_updatable() would have set the updatable flag\ncorrectly even for partitioned tables. IOW, we should not\nspecial-case partitioned tables anywhere.\n\nI guess the point of adding the check is to allow the case where a\nleaf partition's replica identity can be used to apply an update\noriginally targeting its ancestor that doesn't itself have one.\n\nI wonder if it wouldn't be better to move the\ncheck_relation_updatable() call to\napply_handle_{update|delete}_internal()? We know for sure that we\nonly ever get there for leaf tables. 
If we do that, we won't need the\nrelkind check.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Jun 2022 20:54:41 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Thu, Jun 16, 2022 at 5:24 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, Jun 16, 2022 at 6:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Jun 10, 2022 at 2:26 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > @@ -1735,6 +1735,13 @@ apply_handle_insert_internal(ApplyExecutionData *edata,\n> > > static void\n> > > check_relation_updatable(LogicalRepRelMapEntry *rel)\n> > > {\n> > > + /*\n> > > + * If it is a partitioned table, we don't check it, we will check its\n> > > + * partition later.\n> > > + */\n> > > + if (rel->localrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n> > > + return;\n> > >\n> > > Why do this? I mean why if logicalrep_check_updatable() doesn't care\n> > > if the relation is partitioned or not -- it does all the work\n> > > regardless.\n> > >\n> > > I suggest we don't add this check in check_relation_updatable().\n> >\n> > I think based on this suggestion patch has moved this check to\n> > logicalrep_rel_mark_updatable(). For a partitioned table, it won't\n> > even validate whether it can mark updatable as false which seems odd\n> > to me even though there might not be any bug due to that. Was your\n> > suggestion actually intended to move it to\n> > logicalrep_rel_mark_updatable?\n>\n> No, I didn't intend to suggest that we move this check to\n> logicalrep_rel_mark_updatable(); didn't notice that that's what the\n> latest patch did.\n>\n> What I said is that we shouldn't ignore the updatable flag for a\n> partitioned table in check_relation_updatable(), because\n> logicalrep_rel_mark_updatable() would have set the updatable flag\n> correctly even for partitioned tables. 
IOW, we should not\n> special-case partitioned tables anywhere.\n>\n> I guess the point of adding the check is to allow the case where a\n> leaf partition's replica identity can be used to apply an update\n> originally targeting its ancestor that doesn't itself have one.\n>\n> I wonder if it wouldn't be better to move the\n> check_relation_updatable() call to\n> apply_handle_{update|delete}_internal()? We know for sure that we\n> only ever get there for leaf tables. If we do that, we won't need the\n> relkind check.\n>\n\nI think this won't work for updates via apply_handle_tuple_routing()\nunless we call it from some other place(s) as well. It will do\nFindReplTupleInLocalRel() before doing update/delete for CMD_UPDATE\ncase and will lead to assertion failure.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Jun 2022 17:58:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Thu, Jun 16, 2022 at 9:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Thu, Jun 16, 2022 at 5:24 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Thu, Jun 16, 2022 at 6:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Fri, Jun 10, 2022 at 2:26 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > > @@ -1735,6 +1735,13 @@ apply_handle_insert_internal(ApplyExecutionData *edata,\n> > > > static void\n> > > > check_relation_updatable(LogicalRepRelMapEntry *rel)\n> > > > {\n> > > > + /*\n> > > > + * If it is a partitioned table, we don't check it, we will check its\n> > > > + * partition later.\n> > > > + */\n> > > > + if (rel->localrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n> > > > + return;\n> > > >\n> > > > Why do this? I mean why if logicalrep_check_updatable() doesn't care\n> > > > if the relation is partitioned or not -- it does all the work\n> > > > regardless.\n> > > >\n> > > > I suggest we don't add this check in check_relation_updatable().\n> > >\n> > > I think based on this suggestion patch has moved this check to\n> > > logicalrep_rel_mark_updatable(). For a partitioned table, it won't\n> > > even validate whether it can mark updatable as false which seems odd\n> > > to me even though there might not be any bug due to that. Was your\n> > > suggestion actually intended to move it to\n> > > logicalrep_rel_mark_updatable?\n> >\n> > No, I didn't intend to suggest that we move this check to\n> > logicalrep_rel_mark_updatable(); didn't notice that that's what the\n> > latest patch did.\n> >\n> > What I said is that we shouldn't ignore the updatable flag for a\n> > partitioned table in check_relation_updatable(), because\n> > logicalrep_rel_mark_updatable() would have set the updatable flag\n> > correctly even for partitioned tables. 
IOW, we should not\n> > special-case partitioned tables anywhere.\n> >\n> > I guess the point of adding the check is to allow the case where a\n> > leaf partition's replica identity can be used to apply an update\n> > originally targeting its ancestor that doesn't itself have one.\n> >\n> > I wonder if it wouldn't be better to move the\n> > check_relation_updatable() call to\n> > apply_handle_{update|delete}_internal()? We know for sure that we\n> > only ever get there for leaf tables. If we do that, we won't need the\n> > relkind check.\n>\n> I think this won't work for updates via apply_handle_tuple_routing()\n> unless we call it from some other place(s) as well. It will do\n> FindReplTupleInLocalRel() before doing update/delete for CMD_UPDATE\n> case and will lead to assertion failure.\n\nYou're right. I guess it's fine then to check the relkind in\ncheck_relation_updatable() the way the original patch did, even though\nit would've been nice if it didn't need to.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Jun 2022 22:17:01 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Thu, Jun 16, 2022 2:13 PM Amit Langote <amitlangote09@gmail.com> wrote:\r\n> \r\n> Hi,\r\n> \r\n> On Thu, Jun 16, 2022 at 2:07 PM shiy.fnst@fujitsu.com\r\n> <shiy.fnst@fujitsu.com> wrote:\r\n> > On Wed, Jun 15, 2022 8:30 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > I have pushed the first bug-fix patch today.\r\n> >\r\n> > Attached the remaining patches which are rebased.\r\n> \r\n> Thanks.\r\n> \r\n> Comments on v9-0001:\r\n\r\nThanks for your comments.\r\n\r\n> \r\n> + * Don't throw any error here just mark the relation entry as not updatable,\r\n> + * as replica identity is only for updates and deletes but inserts can be\r\n> + * replicated even without it.\r\n> \r\n> I know you're simply copying the old comment, but I think we can\r\n> rewrite it to be slightly more useful:\r\n> \r\n> We just mark the relation entry as not updatable here if the local\r\n> replica identity is found to be insufficient and leave it to\r\n> check_relation_updatable() to throw the actual error if needed.\r\n> \r\n\r\nModified as you suggested in another mail [1].\r\n\r\n> + /* Check that replica identity matches. 
*/\r\n> + logicalrep_rel_mark_updatable(entry);\r\n> \r\n> Maybe the comment (there are 2 instances) should say:\r\n> \r\n> Set if the table's replica identity is enough to apply update/delete.\r\n> \r\n\r\nModified as suggested.\r\n\r\n> Finally,\r\n> \r\n> +# Alter REPLICA IDENTITY on subscriber.\r\n> +# No REPLICA IDENTITY in the partitioned table on subscriber, but what we\r\n> check\r\n> +# is the partition, so it works fine.\r\n> \r\n> For consistency with other recently added comments, I'd suggest the\r\n> following wording:\r\n> \r\n> Test that replication works correctly as long as the leaf partition\r\n> has the necessary REPLICA IDENTITY, even though the actual target\r\n> partitioned table does not.\r\n> \r\n\r\nModified as suggested.\r\n\r\n> On v9-0002:\r\n> \r\n> + /* cleanup the invalid attrmap */\r\n> \r\n> It seems that \"invalid\" here really means no-longer-useful, so we\r\n> should use that phrase as a nearby comment does:\r\n> \r\n> Release the no-longer-useful attrmap, if any.\r\n> \r\n\r\nModified as suggested.\r\n\r\nAttached the new version of patch set. I also moved the partitioned table check\r\nin logicalrep_rel_mark_updatable() to check_relation_updatable() as discussed\r\n[2].\r\n\r\n[1] https://www.postgresql.org/message-id/CA%2BHiwqG3Xi%3DwH4rBHm61ku-j0gm%2B-rc5VmDHxf%3DTeFkUsHtooA%40mail.gmail.com\r\n[2] https://www.postgresql.org/message-id/CA%2BHiwqHfN789ekiYVE%2B0xsLswMosMrWBwv4cPvYgWREWejw7HA%40mail.gmail.com\r\n\r\nRegards,\r\nShi yu",
"msg_date": "Fri, 17 Jun 2022 03:05:33 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Fri Jun 17, 2022 11:06 AM shiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com> wrote:\r\n> \r\n> Attached the new version of patch set. I also moved the partitioned table\r\n> check\r\n> in logicalrep_rel_mark_updatable() to check_relation_updatable() as\r\n> discussed\r\n> [2].\r\n> \r\n\r\nAttached back-branch patches of the first patch.\r\n\r\nRegards,\r\nShi yu",
"msg_date": "Fri, 17 Jun 2022 05:52:05 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Fri, Jun 17, 2022 at 11:22 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Fri Jun 17, 2022 11:06 AM shiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com> wrote:\n> >\n> > Attached the new version of patch set. I also moved the partitioned table\n> > check\n> > in logicalrep_rel_mark_updatable() to check_relation_updatable() as\n> > discussed\n> > [2].\n> >\n>\n> Attached back-branch patches of the first patch.\n>\n\nOne minor comment:\n+ /*\n+ * If it is a partitioned table, we don't check it, we will check its\n+ * partition later.\n+ */\n\nCan we change the above comment to: \"For partitioned tables, we only\nneed to care if the target partition is updatable (aka has PK or RI\ndefined for it).\"?\n\nApart from this looks good to me. I'll push this tomorrow unless there\nare any more suggestions/comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 20 Jun 2022 11:03:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Mon, Jun 20, 2022 1:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Fri, Jun 17, 2022 at 11:22 AM shiy.fnst@fujitsu.com\r\n> <shiy.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Fri Jun 17, 2022 11:06 AM shiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com>\r\n> wrote:\r\n> > >\r\n> > > Attached the new version of patch set. I also moved the partitioned table\r\n> > > check\r\n> > > in logicalrep_rel_mark_updatable() to check_relation_updatable() as\r\n> > > discussed\r\n> > > [2].\r\n> > >\r\n> >\r\n> > Attached back-branch patches of the first patch.\r\n> >\r\n> \r\n> One minor comment:\r\n> + /*\r\n> + * If it is a partitioned table, we don't check it, we will check its\r\n> + * partition later.\r\n> + */\r\n> \r\n> Can we change the above comment to: \"For partitioned tables, we only\r\n> need to care if the target partition is updatable (aka has PK or RI\r\n> defined for it).\"?\r\n> \r\n\r\nThanks for your comment. Modified in the attached patches. \r\n\r\nRegards,\r\nShi yu",
"msg_date": "Mon, 20 Jun 2022 06:46:47 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Mon, Jun 20, 2022 at 3:46 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n> On Mon, Jun 20, 2022 1:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > One minor comment:\n> > + /*\n> > + * If it is a partitioned table, we don't check it, we will check its\n> > + * partition later.\n> > + */\n> >\n> > Can we change the above comment to: \"For partitioned tables, we only\n> > need to care if the target partition is updatable (aka has PK or RI\n> > defined for it).\"?\n> >\n> Thanks for your comment. Modified in the attached patches.\n\nHow about: ...target \"leaf\" partition is updatable\n\nRegarding the commit message's top line, which is this:\n\n Fix partition table's RI checking on the subscriber.\n\nI think it should spell out REPLICA IDENTITY explicitly to avoid the\ncommit being confused to have to do with \"Referential Integrity\nchecking\".\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 21 Jun 2022 11:19:09 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Tue, Jun 21, 2022 at 7:49 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Mon, Jun 20, 2022 at 3:46 PM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> > On Mon, Jun 20, 2022 1:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > One minor comment:\n> > > + /*\n> > > + * If it is a partitioned table, we don't check it, we will check its\n> > > + * partition later.\n> > > + */\n> > >\n> > > Can we change the above comment to: \"For partitioned tables, we only\n> > > need to care if the target partition is updatable (aka has PK or RI\n> > > defined for it).\"?\n> > >\n> > Thanks for your comment. Modified in the attached patches.\n>\n> How about: ...target \"leaf\" partition is updatable\n>\n\nI am not very sure if this is an improvement over the current.\n\n> Regarding the commit message's top line, which is this:\n>\n> Fix partition table's RI checking on the subscriber.\n>\n> I think it should spell out REPLICA IDENTITY explicitly to avoid the\n> commit being confused to have to do with \"Referential Integrity\n> checking\".\n>\n\nThis makes sense. I'll take care of this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 21 Jun 2022 08:02:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Tue, Jun 21, 2022 at 8:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 21, 2022 at 7:49 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> >\n> > I think it should spell out REPLICA IDENTITY explicitly to avoid the\n> > commit being confused to have to do with \"Referential Integrity\n> > checking\".\n> >\n>\n> This makes sense. I'll take care of this.\n>\n\nAfter pushing this patch, buildfarm member prion has failed.\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=prion&br=HEAD\n\nIt seems to me that the problem could be due to the reason that the\nentry returned by logicalrep_partition_open() may not have the correct\nvalue for localrel when we found the entry and localrelvalid is also\ntrue. The point is that before this commit we never use localrel value\nfrom the rel entry returned by logicalrep_partition_open. I think we\nneed to always update the localrel value in\nlogicalrep_partition_open().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 21 Jun 2022 10:59:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Tuesday, June 21, 2022 1:29 PM Amit Kapila <amit.kapila16@gmail.com>:\r\n> \r\n> On Tue, Jun 21, 2022 at 8:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Tue, Jun 21, 2022 at 7:49 AM Amit Langote <amitlangote09@gmail.com>\r\n> wrote:\r\n> > >\r\n> > >\r\n> > > I think it should spell out REPLICA IDENTITY explicitly to avoid the\r\n> > > commit being confused to have to do with \"Referential Integrity\r\n> > > checking\".\r\n> > >\r\n> >\r\n> > This makes sense. I'll take care of this.\r\n> >\r\n> \r\n> After pushing this patch, buildfarm member prion has failed.\r\n> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=prion&br=HE\r\n> AD\r\n> \r\n> It seems to me that the problem could be due to the reason that the\r\n> entry returned by logicalrep_partition_open() may not have the correct\r\n> value for localrel when we found the entry and localrelvalid is also\r\n> true. The point is that before this commit we never use localrel value\r\n> from the rel entry returned by logicalrep_partition_open. I think we\r\n> need to always update the localrel value in\r\n> logicalrep_partition_open().\r\n\r\nAgreed.\r\n\r\nAnd I have confirmed that the failure is due to the segmentation violation when\r\naccess the cached relation. 
I reproduced this by using -DRELCACHE_FORCE_RELEASE\r\n-DCATCACHE_FORCE_RELEASE option which was hinted by Tom.\r\n\r\nStack:\r\n#0 check_relation_updatable (rel=0x1cf4548) at worker.c:1745\r\n#1 0x0000000000909cbb in apply_handle_tuple_routing (edata=0x1cbf4e8, remoteslot=0x1cbf908, newtup=0x0, operation=CMD_DELETE) at worker.c:2181\r\n#2 0x00000000009097a5 in apply_handle_delete (s=0x7ffcef7fd730) at worker.c:2005\r\n#3 0x000000000090a794 in apply_dispatch (s=0x7ffcef7fd730) at worker.c:2503\r\n#4 0x000000000090ad43 in LogicalRepApplyLoop (last_received=22299920) at worker.c:2775\r\n#5 0x000000000090c2ab in start_apply (origin_startpos=0) at worker.c:3549\r\n#6 0x000000000090ca8d in ApplyWorkerMain (main_arg=0) at worker.c:3805\r\n#7 0x00000000008c4c64 in StartBackgroundWorker () at bgworker.c:858\r\n#8 0x00000000008ceaeb in do_start_bgworker (rw=0x1c3c6b0) at postmaster.c:5815\r\n#9 0x00000000008cee97 in maybe_start_bgworkers () at postmaster.c:6039\r\n#10 0x00000000008cdf4e in sigusr1_handler (postgres_signal_arg=10) at postmaster.c:5204\r\n#11 <signal handler called>\r\n#12 0x00007fd8fbe0d4ab in select () from /lib64/libc.so.6\r\n#13 0x00000000008c9cfb in ServerLoop () at postmaster.c:1770\r\n#14 0x00000000008c96e4 in PostmasterMain (argc=4, argv=0x1c110a0) at postmaster.c:1478\r\n#15 0x00000000007c665b in main (argc=4, argv=0x1c110a0) at main.c:202\r\n(gdb) p rel->localrel->rd_rel\r\n$5 = (Form_pg_class) 0x7f7f7f7f7f7f7f7f\r\n\r\nWe didn't hit this problem because we only access that relation when we plan to\r\nreport an error[1] and then the worker will restart and cache will be built, so\r\neverything seems OK.\r\n\r\nThe problem seems already existed and we hit this because we started to access\r\nthe cached relation in more places.\r\n\r\nI think we should try to update the relation every time as the relation is\r\nopened and closed by caller and here is the patch to do that.\r\n\r\n[1]\r\n\t/*\r\n\t * We are in error mode so it's fine this is somewhat 
slow. It's better to\r\n\t * give user correct error.\r\n\t */\r\n\tif (OidIsValid(GetRelationIdentityOrPK(rel->localrel)))\r\n\r\nBest regards,\r\nHou zj",
"msg_date": "Tue, 21 Jun 2022 06:35:41 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Tue, Jun 21, 2022 at 3:35 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> On Tuesday, June 21, 2022 1:29 PM Amit Kapila <amit.kapila16@gmail.com>:\n> > After pushing this patch, buildfarm member prion has failed.\n> > https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=prion&br=HE\n> > AD\n> >\n> > It seems to me that the problem could be due to the reason that the\n> > entry returned by logicalrep_partition_open() may not have the correct\n> > value for localrel when we found the entry and localrelvalid is also\n> > true. The point is that before this commit we never use localrel value\n> > from the rel entry returned by logicalrep_partition_open. I think we\n> > need to always update the localrel value in\n> > logicalrep_partition_open().\n>\n> Agreed.\n>\n> And I have confirmed that the failure is due to the segmentation violation when\n> access the cached relation. I reproduced this by using -DRELCACHE_FORCE_RELEASE\n> -DCATCACHE_FORCE_RELEASE option which was hinted by Tom.\n>\n> Stack:\n> #0 check_relation_updatable (rel=0x1cf4548) at worker.c:1745\n> #1 0x0000000000909cbb in apply_handle_tuple_routing (edata=0x1cbf4e8, remoteslot=0x1cbf908, newtup=0x0, operation=CMD_DELETE) at worker.c:2181\n> #2 0x00000000009097a5 in apply_handle_delete (s=0x7ffcef7fd730) at worker.c:2005\n> #3 0x000000000090a794 in apply_dispatch (s=0x7ffcef7fd730) at worker.c:2503\n> #4 0x000000000090ad43 in LogicalRepApplyLoop (last_received=22299920) at worker.c:2775\n> #5 0x000000000090c2ab in start_apply (origin_startpos=0) at worker.c:3549\n> #6 0x000000000090ca8d in ApplyWorkerMain (main_arg=0) at worker.c:3805\n> #7 0x00000000008c4c64 in StartBackgroundWorker () at bgworker.c:858\n> #8 0x00000000008ceaeb in do_start_bgworker (rw=0x1c3c6b0) at postmaster.c:5815\n> #9 0x00000000008cee97 in maybe_start_bgworkers () at postmaster.c:6039\n> #10 0x00000000008cdf4e in sigusr1_handler (postgres_signal_arg=10) at postmaster.c:5204\n> #11 <signal 
handler called>\n> #12 0x00007fd8fbe0d4ab in select () from /lib64/libc.so.6\n> #13 0x00000000008c9cfb in ServerLoop () at postmaster.c:1770\n> #14 0x00000000008c96e4 in PostmasterMain (argc=4, argv=0x1c110a0) at postmaster.c:1478\n> #15 0x00000000007c665b in main (argc=4, argv=0x1c110a0) at main.c:202\n> (gdb) p rel->localrel->rd_rel\n> $5 = (Form_pg_class) 0x7f7f7f7f7f7f7f7f\n>\n> We didn't hit this problem because we only access that relation when we plan to\n> report an error[1] and then the worker will restart and cache will be built, so\n> everything seems OK.\n>\n> The problem seems already existed and we hit this because we started to access\n> the cached relation in more places.\n>\n> I think we should try to update the relation every time as the relation is\n> opened and closed by caller and here is the patch to do that.\n\nThanks for the patch.\n\nI agree it's an old bug. A partition map entry's localrel may point\nto a stale Relation pointer, because once the caller had closed the\nrelation, the relcache subsystem is free to \"clear\" it, like in the\ncase of a RELCACHE_FORCE_RELEASE build.\n\nFixing it the way patch does seems fine, though it feels like\nlocalrelvalid will lose some of its meaning for the partition map\nentries -- we will now overwrite localrel even if localrelvalid is\ntrue.\n\n+ /*\n+ * Relation is opened and closed by caller, so we need to always update the\n+ * partrel in case the cached relation was closed.\n+ */\n+ entry->localrel = partrel;\n+\n+ if (entry->localrelvalid)\n return entry;\n\nMaybe we should add a comment here about why it's okay to overwrite\nlocalrel even if localrelvalid is true. 
How about the following hunk:\n\n@@ -596,8 +596,20 @@ logicalrep_partition_open(LogicalRepRelMapEntry *root,\n\n entry = &part_entry->relmapentry;\n\n+ /*\n+ * We must always overwrite entry->localrel with the latest partition\n+ * Relation pointer, because the Relation pointed to by the old value may\n+ * have been cleared after the caller would have closed the partition\n+ * relation after the last use of this entry. Note that localrelvalid is\n+ * only updated by the relcache invalidation callback, so it may still be\n+ * true irrespective of whether the Relation pointed to by localrel has\n+ * been cleared or not.\n+ */\n if (found && entry->localrelvalid)\n+ {\n+ entry->localrel = partrel;\n return entry;\n+ }\n\nAttached a patch containing the above to consider as an alternative.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 21 Jun 2022 16:20:35 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Tuesday, June 21, 2022 3:21 PM Amit Langote <amitlangote09@gmail.com> wrote:\r\n> \r\n> On Tue, Jun 21, 2022 at 3:35 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > On Tuesday, June 21, 2022 1:29 PM Amit Kapila <amit.kapila16@gmail.com>:\r\n> > > After pushing this patch, buildfarm member prion has failed.\r\n> > >\r\n> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=prion&br=HE\r\n> > > AD\r\n> > >\r\n> > > It seems to me that the problem could be due to the reason that the\r\n> > > entry returned by logicalrep_partition_open() may not have the correct\r\n> > > value for localrel when we found the entry and localrelvalid is also\r\n> > > true. The point is that before this commit we never use localrel value\r\n> > > from the rel entry returned by logicalrep_partition_open. I think we\r\n> > > need to always update the localrel value in\r\n> > > logicalrep_partition_open().\r\n> >\r\n> > Agreed.\r\n> >\r\n> > And I have confirmed that the failure is due to the segmentation violation\r\n> when\r\n> > access the cached relation. 
I reproduced this by using\r\n> -DRELCACHE_FORCE_RELEASE\r\n> > -DCATCACHE_FORCE_RELEASE option which was hinted by Tom.\r\n> >\r\n> > Stack:\r\n> > #0 check_relation_updatable (rel=0x1cf4548) at worker.c:1745\r\n> > #1 0x0000000000909cbb in apply_handle_tuple_routing (edata=0x1cbf4e8,\r\n> remoteslot=0x1cbf908, newtup=0x0, operation=CMD_DELETE) at\r\n> worker.c:2181\r\n> > #2 0x00000000009097a5 in apply_handle_delete (s=0x7ffcef7fd730) at\r\n> worker.c:2005\r\n> > #3 0x000000000090a794 in apply_dispatch (s=0x7ffcef7fd730) at\r\n> worker.c:2503\r\n> > #4 0x000000000090ad43 in LogicalRepApplyLoop\r\n> (last_received=22299920) at worker.c:2775\r\n> > #5 0x000000000090c2ab in start_apply (origin_startpos=0) at worker.c:3549\r\n> > #6 0x000000000090ca8d in ApplyWorkerMain (main_arg=0) at\r\n> worker.c:3805\r\n> > #7 0x00000000008c4c64 in StartBackgroundWorker () at bgworker.c:858\r\n> > #8 0x00000000008ceaeb in do_start_bgworker (rw=0x1c3c6b0) at\r\n> postmaster.c:5815\r\n> > #9 0x00000000008cee97 in maybe_start_bgworkers () at postmaster.c:6039\r\n> > #10 0x00000000008cdf4e in sigusr1_handler (postgres_signal_arg=10) at\r\n> postmaster.c:5204\r\n> > #11 <signal handler called>\r\n> > #12 0x00007fd8fbe0d4ab in select () from /lib64/libc.so.6\r\n> > #13 0x00000000008c9cfb in ServerLoop () at postmaster.c:1770\r\n> > #14 0x00000000008c96e4 in PostmasterMain (argc=4, argv=0x1c110a0) at\r\n> postmaster.c:1478\r\n> > #15 0x00000000007c665b in main (argc=4, argv=0x1c110a0) at main.c:202\r\n> > (gdb) p rel->localrel->rd_rel\r\n> > $5 = (Form_pg_class) 0x7f7f7f7f7f7f7f7f\r\n> >\r\n> > We didn't hit this problem because we only access that relation when we plan\r\n> to\r\n> > report an error[1] and then the worker will restart and cache will be built, so\r\n> > everything seems OK.\r\n> >\r\n> > The problem seems already existed and we hit this because we started to\r\n> access\r\n> > the cached relation in more places.\r\n> >\r\n> > I think we should try to update the relation 
every time as the relation is\r\n> > opened and closed by caller and here is the patch to do that.\r\n> Thanks for the patch.\r\n> \r\n> I agree it's an old bug. A partition map entry's localrel may point\r\n> to a stale Relation pointer, because once the caller had closed the\r\n> relation, the relcache subsystem is free to \"clear\" it, like in the\r\n> case of a RELCACHE_FORCE_RELEASE build.\r\n\r\nHi,\r\n\r\nThanks for replying.\r\n\r\n> Fixing it the way patch does seems fine, though it feels like\r\n> localrelvalid will lose some of its meaning for the partition map\r\n> entries -- we will now overwrite localrel even if localrelvalid is\r\n> true.\r\n\r\nTo me, it seems localrelvalid doesn't have the meaning that the cached relation\r\npointer is valid. In logicalrep_rel_open(), we also reopen and update the\r\nrelation even if the localrelvalid is true.\r\n\r\n> + /*\r\n> + * Relation is opened and closed by caller, so we need to always update the\r\n> + * partrel in case the cached relation was closed.\r\n> + */\r\n> + entry->localrel = partrel;\r\n> +\r\n> + if (entry->localrelvalid)\r\n> return entry;\r\n> \r\n> Maybe we should add a comment here about why it's okay to overwrite\r\n> localrel even if localrelvalid is true. How about the following hunk:\r\n> \r\n> @@ -596,8 +596,20 @@ logicalrep_partition_open(LogicalRepRelMapEntry\r\n> *root,\r\n> \r\n> entry = &part_entry->relmapentry;\r\n> \r\n> + /*\r\n> + * We must always overwrite entry->localrel with the latest partition\r\n> + * Relation pointer, because the Relation pointed to by the old value may\r\n> + * have been cleared after the caller would have closed the partition\r\n> + * relation after the last use of this entry. 
Note that localrelvalid is\r\n> + * only updated by the relcache invalidation callback, so it may still be\r\n> + * true irrespective of whether the Relation pointed to by localrel has\r\n> + * been cleared or not.\r\n> + */\r\n> if (found && entry->localrelvalid)\r\n> + {\r\n> + entry->localrel = partrel;\r\n> return entry;\r\n> + }\r\n> \r\n> Attached a patch containing the above to consider as an alternative.\r\n\r\nThis looks fine to me as well.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Tue, 21 Jun 2022 08:07:55 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Tue, Jun 21, 2022 at 12:50 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Tue, Jun 21, 2022 at 3:35 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n>\n> Attached a patch containing the above to consider as an alternative.\n>\n\nThanks, the patch looks good to me. I'll push this after doing some testing.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 21 Jun 2022 14:19:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Tuesday, June 21, 2022 4:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Tue, Jun 21, 2022 at 12:50 PM Amit Langote <amitlangote09@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, Jun 21, 2022 at 3:35 PM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attached a patch containing the above to consider as an alternative.\r\n> >\r\n> \r\n> Thanks, the patch looks good to me. I'll push this after doing some testing.\r\n\r\nThe patch looks good to me as well.\r\n\r\nI also verified that the patch can be applied cleanly on back-branches and I\r\nconfirmed that the bug exists on back branches before this patch and is fixed\r\nafter applying this patch. The regression tests also passed with and without\r\nRELCACHE_FORCE_RELEASE option in my machine.\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Tue, 21 Jun 2022 09:10:57 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Tue, Jun 21, 2022 at 5:08 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> On Tuesday, June 21, 2022 3:21 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Thanks for the patch.\n> >\n> > I agree it's an old bug. A partition map entry's localrel may point\n> > to a stale Relation pointer, because once the caller had closed the\n> > relation, the relcache subsystem is free to \"clear\" it, like in the\n> > case of a RELCACHE_FORCE_RELEASE build.\n>\n> Hi,\n>\n> Thanks for replying.\n>\n> > Fixing it the way patch does seems fine, though it feels like\n> > localrelvalid will lose some of its meaning for the partition map\n> > entries -- we will now overwrite localrel even if localrelvalid is\n> > true.\n>\n> To me, it seems localrelvalid doesn't have the meaning that the cached relation\n> pointer is valid. In logicalrep_rel_open(), we also reopen and update the\n> relation even if the localrelvalid is true.\n\nAh, right. I guess only the localrelvalid=false case is really\ninteresting then. Only in that case, we need to (re-)build other\nfields that are computed using localrel. In the localrelvalid=true\ncase, we don't need to worry about other fields, but still need to\nmake sure that localrel points to an up to date relcache entry of the\nrelation.\n\n> > + /*\n> > + * Relation is opened and closed by caller, so we need to always update the\n> > + * partrel in case the cached relation was closed.\n> > + */\n> > + entry->localrel = partrel;\n> > +\n> > + if (entry->localrelvalid)\n> > return entry;\n> >\n> > Maybe we should add a comment here about why it's okay to overwrite\n> > localrel even if localrelvalid is true. 
How about the following hunk:\n> >\n> > @@ -596,8 +596,20 @@ logicalrep_partition_open(LogicalRepRelMapEntry\n> > *root,\n> >\n> > entry = &part_entry->relmapentry;\n> >\n> > + /*\n> > + * We must always overwrite entry->localrel with the latest partition\n> > + * Relation pointer, because the Relation pointed to by the old value may\n> > + * have been cleared after the caller would have closed the partition\n> > + * relation after the last use of this entry. Note that localrelvalid is\n> > + * only updated by the relcache invalidation callback, so it may still be\n> > + * true irrespective of whether the Relation pointed to by localrel has\n> > + * been cleared or not.\n> > + */\n> > if (found && entry->localrelvalid)\n> > + {\n> > + entry->localrel = partrel;\n> > return entry;\n> > + }\n> >\n> > Attached a patch containing the above to consider as an alternative.\n>\n> This looks fine to me as well.\n\nThank you.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 21 Jun 2022 18:22:56 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Tuesday, June 21, 2022 4:49 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> \r\n> On Tue, Jun 21, 2022 at 12:50 PM Amit Langote <amitlangote09@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, Jun 21, 2022 at 3:35 PM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attached a patch containing the above to consider as an alternative.\r\n> >\r\n> \r\n> Thanks, the patch looks good to me. I'll push this after doing some testing.\r\n\r\nSince the patch has been committed. Attach the last patch to fix the memory leak.\r\n\r\nThe bug exists on PG10 ~ PG15(HEAD).\r\n\r\nFor HEAD,PG14,PG13, to fix the memory leak, I think we should use\r\nfree_attrmap instead of pfree and release the no-longer-useful attrmap\r\nWhen rebuilding the map info.\r\n\r\nFor PG12,PG11,PG10, we only need to add the code to release the\r\nno-longer-useful attrmap when rebuilding the map info. We can still use\r\npfree() because the attrmap in back-branch is a single array like:\r\n\r\nentry->attrmap = palloc(desc->natts * sizeof(AttrNumber));\r\n\r\nBest regards,\r\nHou zj",
"msg_date": "Wed, 22 Jun 2022 03:02:36 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "Hi,\n\nOn Wed, Jun 22, 2022 at 12:02 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> Since the patch has been committed. Attach the last patch to fix the memory leak.\n>\n> The bug exists on PG10 ~ PG15(HEAD).\n>\n> For HEAD,PG14,PG13, to fix the memory leak, I think we should use\n> free_attrmap instead of pfree and release the no-longer-useful attrmap\n> When rebuilding the map info.\n>\n> For PG12,PG11,PG10, we only need to add the code to release the\n> no-longer-useful attrmap when rebuilding the map info. We can still use\n> pfree() because the attrmap in back-branch is a single array like:\n>\n> entry->attrmap = palloc(desc->natts * sizeof(AttrNumber));\n\nLGTM, thank you.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 Jun 2022 13:38:59 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 10:09 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Wed, Jun 22, 2022 at 12:02 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> > Since the patch has been committed. Attach the last patch to fix the memory leak.\n> >\n> > The bug exists on PG10 ~ PG15(HEAD).\n> >\n> > For HEAD,PG14,PG13, to fix the memory leak, I think we should use\n> > free_attrmap instead of pfree and release the no-longer-useful attrmap\n> > When rebuilding the map info.\n> >\n> > For PG12,PG11,PG10, we only need to add the code to release the\n> > no-longer-useful attrmap when rebuilding the map info. We can still use\n> > pfree() because the attrmap in back-branch is a single array like:\n> >\n> > entry->attrmap = palloc(desc->natts * sizeof(AttrNumber));\n>\n> LGTM, thank you.\n>\n\nLGTM as well. One thing I am not completely sure about is whether to\nmake this change in PG10 for which the final release is in Nov?\nAFAICS, the leak can only occur after the relcache invalidation on the\nsubscriber which may or may not be a very frequent case. What do you\nguys think?\n\nPersonally, I feel it is good to fix it in all branches including PG10.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 22 Jun 2022 16:35:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 8:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Wed, Jun 22, 2022 at 10:09 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Wed, Jun 22, 2022 at 12:02 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > > Since the patch has been committed. Attach the last patch to fix the memory leak.\n> > >\n> > > The bug exists on PG10 ~ PG15(HEAD).\n> > >\n> > > For HEAD,PG14,PG13, to fix the memory leak, I think we should use\n> > > free_attrmap instead of pfree and release the no-longer-useful attrmap\n> > > When rebuilding the map info.\n> > >\n> > > For PG12,PG11,PG10, we only need to add the code to release the\n> > > no-longer-useful attrmap when rebuilding the map info. We can still use\n> > > pfree() because the attrmap in back-branch is a single array like:\n> > >\n> > > entry->attrmap = palloc(desc->natts * sizeof(AttrNumber));\n> >\n> > LGTM, thank you.\n>\n> LGTM as well. One thing I am not completely sure about is whether to\n> make this change in PG10 for which the final release is in Nov?\n> AFAICS, the leak can only occur after the relcache invalidation on the\n> subscriber which may or may not be a very frequent case. What do you\n> guys think?\n\nAgree that the leak does not seem very significant, though...\n\n> Personally, I feel it is good to fix it in all branches including PG10.\n\n...yes, why not.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 Jun 2022 20:32:31 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Wednesday, June 22, 2022 7:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Wed, Jun 22, 2022 at 10:09 AM Amit Langote <amitlangote09@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Wed, Jun 22, 2022 at 12:02 PM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > > Since the patch has been committed. Attach the last patch to fix the\r\n> memory leak.\r\n> > >\r\n> > > The bug exists on PG10 ~ PG15(HEAD).\r\n> > >\r\n> > > For HEAD,PG14,PG13, to fix the memory leak, I think we should use\r\n> > > free_attrmap instead of pfree and release the no-longer-useful\r\n> > > attrmap When rebuilding the map info.\r\n> > >\r\n> > > For PG12,PG11,PG10, we only need to add the code to release the\r\n> > > no-longer-useful attrmap when rebuilding the map info. We can still\r\n> > > use\r\n> > > pfree() because the attrmap in back-branch is a single array like:\r\n> > >\r\n> > > entry->attrmap = palloc(desc->natts * sizeof(AttrNumber));\r\n> >\r\n> > LGTM, thank you.\r\n> >\r\n> \r\n> LGTM as well. One thing I am not completely sure about is whether to make this\r\n> change in PG10 for which the final release is in Nov?\r\n> AFAICS, the leak can only occur after the relcache invalidation on the subscriber\r\n> which may or may not be a very frequent case. What do you guys think?\r\n> \r\n> Personally, I feel it is good to fix it in all branches including PG10.\r\n+1\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Wed, 22 Jun 2022 11:35:07 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Replica Identity check of partition table on subscriber"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 5:05 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, June 22, 2022 7:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jun 22, 2022 at 10:09 AM Amit Langote <amitlangote09@gmail.com>\n> > wrote:\n> > >\n> > > On Wed, Jun 22, 2022 at 12:02 PM houzj.fnst@fujitsu.com\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > > Since the patch has been committed. Attach the last patch to fix the\n> > memory leak.\n> > > >\n> > > > The bug exists on PG10 ~ PG15(HEAD).\n> > > >\n> > > > For HEAD,PG14,PG13, to fix the memory leak, I think we should use\n> > > > free_attrmap instead of pfree and release the no-longer-useful\n> > > > attrmap When rebuilding the map info.\n> > > >\n> > > > For PG12,PG11,PG10, we only need to add the code to release the\n> > > > no-longer-useful attrmap when rebuilding the map info. We can still\n> > > > use\n> > > > pfree() because the attrmap in back-branch is a single array like:\n> > > >\n> > > > entry->attrmap = palloc(desc->natts * sizeof(AttrNumber));\n> > >\n> > > LGTM, thank you.\n> > >\n> >\n> > LGTM as well. One thing I am not completely sure about is whether to make this\n> > change in PG10 for which the final release is in Nov?\n> > AFAICS, the leak can only occur after the relcache invalidation on the subscriber\n> > which may or may not be a very frequent case. What do you guys think?\n> >\n> > Personally, I feel it is good to fix it in all branches including PG10.\n> +1\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 23 Jun 2022 17:39:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replica Identity check of partition table on subscriber"
}
] |
[
{
"msg_contents": "I believe functions in Postgres follow a late binding approach and hence\nnested function dependencies are resolved using search_path at run time.\nThis way a user can override nested functions in its schema and change the\nbehaviour of wrapper functions. However, a more serious issue is when\nfunctional Indexes (with nested function calls) are created on a table and\nthen the data inserted in Indexes could be entirely dependent on which user\nis inserting the data (by overriding nested function).\n\nI performed a couple of test cases where data inserted is dependent on the\nuser overriding nested functions. I understand this is not the best\npractice to scatter functions/indexes/tables in different different schemas\nand use such kind schema setup but I still expect Postgres to save us from\nsuch data inconsistencies issues by using early binding for functional\nIndexes. In fact Postgres does that linking for a single function Index\n(where no nested function are there) and qualifies the function used in the\nIndex with its schema name and also it works in cases where all\nfunctions, table, Indexes are present in the same schema.\n\nHowever still there are cases where functional Indexes are created on\nextension functions (For Ex - cube extension) which are present in\ndifferent schemas and then those cube functions are defined as invoker\nsecurity type with nested functions calls without any schema qualification.\n\nIssue that would arise with late binding for functional Indexes is that\nwhen we are migrating such tables/indexes/data from one database to another\n(using pg_dump/pg_restore or any other method) data can be changed\ndepending on which user we are using for import.\n(These tests i performed using invoker functions, i think definer functions\nproduce correct behavior). One way would be to define search_path for such\nnested functions.\n\n\n 1. =========Case1======\n 2.\n 3. ##Table and functions are in different schemas.\n 4.\n 5. 
Session1::\n 6. User:Postgres\n 7.\n 8. create user idxusr1 with password '*****';\n 9. grant idxusr1 to postgres;\n 10. create schema idxusr1 AUTHORIZATION idxusr1;\n 11.\n 12. create user idxusr2 with password '*****';\n 13. grant idxusr2 to postgres;\n 14. create schema idxusr2 AUTHORIZATION idxusr2;\n 15.\n 16. Session2::\n 17. User:idxusr1\n 18.\n 19. set search_path to idxusr1,public;\n 20.\n 21. CREATE FUNCTION sumcall(int, int) RETURNS int LANGUAGE SQL\nIMMUTABLE STRICT PARALLEL SAFE AS 'SELECT ($1+$2)';\n 22.\n 23. CREATE FUNCTION wrapsum(int, int) RETURNS int LANGUAGE SQL\nIMMUTABLE STRICT PARALLEL SAFE AS 'SELECT sumcall($1,$2)';\n 24.\n 25. ##create table in another schema\n 26.\n 27. create table public.test(n1 int);\n 28. create unique index idxtst on public.test(idxusr1.wrapsum(n1,1));\n 29.\n 30. grant insert on table public.test to idxusr2;\n 31.\n 32. postgres=> insert into test values(1);\n 33. INSERT 0 1\n 34. postgres=> insert into test values(1);\n 35. ERROR: duplicate key value violates unique constraint \"idxtst\"\n 36. DETAIL: Key (wrapsum(n1, 1))=(2) already exists.\n 37.\n 38. Session3::\n 39. User:idxusr2\n 40.\n 41. set search_path to idxusr2,public;\n 42.\n 43. CREATE FUNCTION sumcall(int, int) RETURNS int LANGUAGE SQL\nIMMUTABLE STRICT PARALLEL SAFE AS 'SELECT ($1 - $2)';\n 44.\n 45. postgres=> insert into test values(1);\n 46. INSERT 0 1\n 47. postgres=> insert into test values(1);\n 48. ERROR: duplicate key value violates unique constraint \"idxtst\"\n 49. DETAIL: Key (idxusr1.wrapsum(n1, 1))=(0) already exists.\n 50.\n 51. ======Case2==========\n 52.\n 53. ##Functions are in different schemas.\n 54.\n 55. Session1::\n 56. User:Postgres\n 57.\n 58. create user idxusr1 with password '*****';\n 59. grant idxusr1 to postgres;\n 60. create schema idxusr1 AUTHORIZATION idxusr1;\n 61.\n 62. create user idxusr2 with password '*****';\n 63. grant idxusr2 to postgres;\n 64. create schema idxusr2 AUTHORIZATION idxusr2;\n 65.\n 66. 
Session2::\n 67. User:idxusr1\n 68.\n 69. set search_path to idxusr1,public;\n 70.\n 71. ##create internal function in own schema and wrapper function\nin another schema.\n 72.\n 73. CREATE FUNCTION sumcall(int, int) RETURNS int LANGUAGE SQL\nIMMUTABLE STRICT PARALLEL SAFE AS 'SELECT ($1+$2)';\n 74.\n 75. CREATE FUNCTION public.wrapsum(int, int) RETURNS int LANGUAGE\nSQL IMMUTABLE STRICT PARALLEL SAFE AS 'SELECT sumcall($1,$2)';\n 76.\n 77. create table test(n1 int);\n 78. create unique index idxtst on test(public.wrapsum(n1,1));\n 79.\n 80. grant usage on schema idxusr1 to idxusr2;\n 81. grant insert on table test to idxusr2;\n 82. postgres=> insert into test values(1);\n 83. INSERT 0 1\n 84. postgres=> insert into test values(1);\n 85. ERROR: duplicate key value violates unique constraint \"idxtst\"\n 86. DETAIL: Key (wrapsum(n1, 1))=(2) already exists.\n 87.\n 88. Session3::\n 89. User:idxusr2\n 90.\n 91. set search_path to idxusr2,public;\n 92.\n 93. CREATE FUNCTION sumcall(int, int) RETURNS int LANGUAGE SQL\nIMMUTABLE STRICT PARALLEL SAFE AS 'SELECT ($1 - $2)';\n 94.\n 95. postgres=> insert into idxusr1.test values(1);\n 96. INSERT 0 1\n 97. postgres=> insert into idxusr1.test values(1);\n 98. ERROR: duplicate key value violates unique constraint \"idxtst\"\n 99. DETAIL: Key (wrapsum(n1, 1))=(0) already exists.\n 100. postgres=>",
"msg_date": "Wed, 8 Jun 2022 17:27:32 +0530",
"msg_from": "Virender Singla <virender.cse@gmail.com>",
"msg_from_op": true,
"msg_subject": "invoker function security issues"
},
{
"msg_contents": "On Wed, Jun 8, 2022 at 7:29 AM Virender Singla <virender.cse@gmail.com>\nwrote:\n\n> but I still expect Postgres to save us from such data inconsistencies\n> issues by using early binding for functional Indexes.\n>\n\nWell, if the functions you are writing are \"black boxes\" to PostgreSQL this\nexpectation seems unreasonable. As of v14 at least you have the option to\nwrite a SQL standard function definition which is whose parsed expression\nand dependencies are saved instead of a black box of text.\n\n\n> However still there are cases where functional Indexes are created on\n> extension functions (For Ex - cube extension) which are present in\n> different schemas and then those cube functions are defined as invoker\n> security type with nested functions calls without any schema qualification.\n>\n\nRight, because if you try doing that in a security definer context the lack\nof a schema qualification will provoke a function not found error due to\nthe sanitized search_path (IIRC, if it doesn't then the two cases should\nbehave identically...)\n\nOne way would be to define search_path for such nested functions.\n>\n\nWhich the user is well advised to do, or, better, just schema-qualify all\nobject references. This can get a bit annoying for operators, in which\ncase an explicit, localized, search_path becomes more appealing.\n\nThe tools are available for one to protect themself. I suspect the\nhistorical baggage we carry prevents the server from being more aggressive\nin enforcing the use of these tools since not all cases where they are not\nused are problematic and there is lots of legacy code working just fine.\nThe security lockdown for this dynamic has already happened by basically\nadmitting that the idea of a \"public\" schema at the front of the default\nsearch_path was a poor decision. 
And while I see that there is possibly\nroom for improvement here if desired, it is, for me, acceptable for the\nproject to put the responsibility of not executing problematic code in the\nhands of the DBA.\n\nI'm curious how \"EXECUTE\" command and dynamic SQL fit into your POV here\n(specifically in function bodies). Right now, with \"black box inside of\nblack box\" mechanics it isn't really an issue but if you want to not keep\nfunction bodies as black boxes now dynamic SQL becomes the top-most late\nbinding point.\n\nDavid J.",
"msg_date": "Wed, 8 Jun 2022 08:54:34 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: invoker function security issues"
}
] |
[
{
"msg_contents": "Since Postgres 9.5 it is possible to use the => operators for filling in \nnamed parameters in a function call, like perform my_func(named_param => \n'some value'). The old form, with the := operator is still allowed for \nbackward reference.\n\nBut with open cursor, still only the old := operator is allowed. \nWouldn't it be more appropriate to at least allow the => operator there \nas well, like open my_cursor(named_param => 'some value')?\n\n\n\n",
"msg_date": "Wed, 8 Jun 2022 14:29:23 +0200",
"msg_from": "Martin Butter <martin.butter@splendiddata.com>",
"msg_from_op": true,
"msg_subject": "=> operator for named parameters in open cursor"
},
{
"msg_contents": "st 8. 6. 2022 v 14:29 odesílatel Martin Butter <\nmartin.butter@splendiddata.com> napsal:\n\n> Since Postgres 9.5 it is possible to use the => operators for filling in\n> named parameters in a function call, like perform my_func(named_param =>\n> 'some value'). The old form, with the := operator is still allowed for\n> backward reference.\n>\n> But with open cursor, still only the old := operator is allowed.\n> Wouldn't it be more appropriate to at least allow the => operator there\n> as well, like open my_cursor(named_param => 'some value')?\n>\n\n+1\n\nPavel\n\nst 8. 6. 2022 v 14:29 odesílatel Martin Butter <martin.butter@splendiddata.com> napsal:Since Postgres 9.5 it is possible to use the => operators for filling in \nnamed parameters in a function call, like perform my_func(named_param => \n'some value'). The old form, with the := operator is still allowed for \nbackward reference.\n\nBut with open cursor, still only the old := operator is allowed. \nWouldn't it be more appropriate to at least allow the => operator there \nas well, like open my_cursor(named_param => 'some value')?+1Pavel",
"msg_date": "Wed, 8 Jun 2022 15:17:50 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: => operator for named parameters in open cursor"
}
] |
[
{
"msg_contents": "We currently can check for missing heap/index files by comparing\npg_class with the database directory files. However, I am not clear if\nthis is safe during concurrent DDL. I assume we create the file before\nthe update to pg_class is visible, but do we always delete the file\nafter the update to pg_class is visible? I assume any external checking\ntool would need to lock the relation to prevent concurrent DDL.\n\nAlso, how would it check if the number of extents is correct? Seems we\nwould need this value to be in pg_class, and have the same update\nprotections outlined above. Seems that would require heavier locking.\n\nIs this something anyone has even needed or had requested?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 8 Jun 2022 08:45:59 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Checking for missing heap/index files"
},
{
"msg_contents": "\n\n> On Jun 8, 2022, at 5:45 AM, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> Is this something anyone has even needed or had requested?\n\nI might have put this in amcheck's verify_heapam() had there been an interface for it. I vaguely recall wanting something like this, yes.\n\nAs it stands, verify_heapam() may trigger mdread()'s \"could not open file\" or \"could not read block\" error, in the course of verifying the table. There isn't an option in amcheck to just verify that the underlying files exist. If struct f_smgr had a function for validating that all segment files exist, I may have added an option to amcheck (and the pg_amcheck frontend tool) to quickly look for missing files.\n\nLooking at smgr/md.c, it seems mdnblocks() is close to what we want, but it skips already opened segments \"to avoid redundant seeks\". Perhaps we'd want to add a function to f_smgr, say \"smgr_allexist\", to check for all segment files? I'm not sure how heavy-handed the corresponding mdallexist() function should be. Should it close all open segments, then reopen and check the size of all of them by calling mdnblocks()? That seems safer than merely asking the filesystem if the file exists without verifying that it can be opened.\n\nIf we made these changes, and added corresponding quick check options to amcheck and pg_amcheck, would that meet your current needs? The downside to using amcheck for this sort of thing is that we did not (and likely will not) back port it. I have had several occasions to want this functionality recently, but the customers were on pre-v14 servers, so these tools were not available anyway.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 9 Jun 2022 09:46:51 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 09:46:51AM -0700, Mark Dilger wrote:\n>\n>\n> > On Jun 8, 2022, at 5:45 AM, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > Is this something anyone has even needed or had requested?\n>\n> I might have put this in amcheck's verify_heapam() had there been an\n> interface for it. I vaguely recall wanting something like this, yes.\n>\n> As it stands, verify_heapam() may trigger mdread()'s \"could not open\n> file\" or \"could not read block\" error, in the course of verifying\n> the table. There isn't an option in amcheck to just verify that\n> the underlying files exist. If struct f_smgr had a function for\n> validating that all segment files exist, I may have added an option to\n> amcheck (and the pg_amcheck frontend tool) to quickly look for missing\n> files.\n\nWell, how do we know what files should exist? We know the first segment\nshould probably exist, but what about +1GB segments?\n\n> Looking at smgr/md.c, it seems mdnblocks() is close to what we want,\n> but it skips already opened segments \"to avoid redundant seeks\".\n> Perhaps we'd want to add a function to f_smgr, say \"smgr_allexist\",\n> to check for all segment files? I'm not sure how heavy-handed the\n> corresponding mdallexist() function should be. Should it close\n> all open segments, then reopen and check the size of all of them\n> by calling mdnblocks()? That seems safer than merely asking the\n> filesystem if the file exists without verifying that it can be opened.\n\nYes.\n\n> If we made these changes, and added corresponding quick check options\n> to amcheck and pg_amcheck, would that meet your current needs? The\n> downside to using amcheck for this sort of thing is that we did not\n> (and likely will not) back port it. 
I have had several occasions to\n> want this functionality recently, but the customers were on pre-v14\n> servers, so these tools were not available anyway.\n\nI don't have a need for it --- I was just wondering why we have\nsomething that checks the relation contents, but not the file existence?\n\nIt seems like pg_amcheck would be a nice place to add this checking\nability.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 9 Jun 2022 14:46:02 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 11:46 AM Bruce Momjian <bruce@momjian.us> wrote:\n> I don't have a need for it --- I was just wondering why we have\n> something that checks the relation contents, but not the file existence?\n\nWe do this for B-tree indexes within amcheck. They must always have\nstorage, if only to store the index metapage. (Actually unlogged\nindexes that run on a standby don't, but that's accounted for\ndirectly.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 9 Jun 2022 11:47:42 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> We currently can check for missing heap/index files by comparing\n> pg_class with the database directory files. However, I am not clear if\n> this is safe during concurrent DDL. I assume we create the file before\n> the update to pg_class is visible, but do we always delete the file\n> after the update to pg_class is visible? I assume any external checking\n> tool would need to lock the relation to prevent concurrent DDL.\n\nIt'd sure be nice if an external tool (such as one trying to back up the\ndatabase..) could get this full list *without* having to run around and\nlock everything. This is because of some fun discoveries that have been\nmade around readdir() not always being entirely honest. Take a look at:\n\nhttps://github.com/pgbackrest/pgbackrest/issues/1754\n\nand\n\nhttps://gitlab.alpinelinux.org/alpine/aports/-/issues/10960\n\nTL;DR: if you're removing files from a directory that you've got an\nactive readdir() running through, you might not actually get all of the\n*existing* files. Given that PG is happy to remove files from PGDATA\nwhile a backup is running, in theory this could lead to a backup utility\nlike pgbackrest or pg_basebackup not actually backing up all the files.\n\nNow, pgbackrest runs the readdir() very quickly to build a manifest of\nall of the files to backup, minimizing the window for this to possibly\nhappen, but pg_basebackup keeps a readdir() open during the entire\nbackup, making this more possible.\n\n> Also, how would it check if the number of extents is correct? Seems we\n> would need this value to be in pg_class, and have the same update\n> protections outlined above. Seems that would require heavier locking.\n\nWould be nice to have but also would be expensive to maintain..\n\nThanks,\n\nStephen",
"msg_date": "Thu, 9 Jun 2022 15:28:37 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "On Wed, Jun 8, 2022 at 8:46 AM Bruce Momjian <bruce@momjian.us> wrote:\n> We currently can check for missing heap/index files by comparing\n> pg_class with the database directory files. However, I am not clear if\n> this is safe during concurrent DDL. I assume we create the file before\n> the update to pg_class is visible, but do we always delete the file\n> after the update to pg_class is visible? I assume any external checking\n> tool would need to lock the relation to prevent concurrent DDL.\n\nIf you see an entry in pg_class, then there should definitely be a\nfile present on disk. The reverse is not true: just because you don't\nsee an entry in pg_class for a file that's on disk doesn't mean it's\nsafe to remove that file.\n\n> Also, how would it check if the number of extents is correct? Seems we\n> would need this value to be in pg_class, and have the same update\n> protections outlined above. Seems that would require heavier locking.\n\nYeah, and it's not just the number of extents but the length of the\nlast one. If the last extent is supposed to be 700MB and it gets\ntruncated to 200MB, it would be nice if we could notice that.\n\nOne idea might be for each heap table to have a metapage and store the\nlength - or an upper bound on the length - in the metapage. That'd\nprobably be cheaper than updating pg_class, but might still be\nexpensive in some scenarios, and it's a fairly large amount of\nengineering.\n\n> Is this something anyone has even needed or had requested?\n\nDefinitely. And also the reverse: figuring out which files on disk are\nold garbage that can be safely nuked.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 13 Jun 2022 16:06:12 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "On Mon, Jun 13, 2022 at 04:06:12PM -0400, Robert Haas wrote:\n> One idea might be for each heap table to have a metapage and store the\n> length - or an upper bound on the length - in the metapage. That'd\n> probably be cheaper than updating pg_class, but might still be\n> expensive in some scenarios, and it's a fairly large amount of\n> engineering.\n\nI agree --- it would be nice, but might be hard to justify the\nengineering and overhead involved. I guess I was just checking that I\nwasn't missing something obvious.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Mon, 13 Jun 2022 19:15:29 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "On Mon, Jun 13, 2022 at 4:15 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I agree --- it would be nice, but might be hard to justify the\n> engineering and overhead involved. I guess I was just checking that I\n> wasn't missing something obvious.\n\nI suspect that the cost of being sloppy about this kind of thing\noutweighs any benefit -- it's a false economy.\n\nI believe we ought to eventually have crash-safe relation extension\nand file allocation. Right now we're held back by concerns about\nleaking a large number of empty pages (at least until the next\nVACUUM). If leaking space was simply not possible in the first place,\nwe could afford to be more aggressive in code like\nRelationAddExtraBlocks() -- it currently has a conservative cap of 512\npages per extension right now. This would require work in the FSM of\nthe kind I've been working on, on and off.\n\nEach relation extension is bound to be more expensive when the process\nis made crash safe, obviously -- but only if no other factor changes.\nWith larger batch sizes per relation extension, it could be very\ndifferent. Once you factor in lock contention, then having fewer\nindividual relation extensions for a fixed number of pages may make\nall the difference.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 13 Jun 2022 16:59:06 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "On 2022-Jun-09, Stephen Frost wrote:\n\n> TL;DR: if you're removing files from a directory that you've got an\n> active readdir() running through, you might not actually get all of the\n> *existing* files. Given that PG is happy to remove files from PGDATA\n> while a backup is running, in theory this could lead to a backup utility\n> like pgbackrest or pg_basebackup not actually backing up all the files.\n> \n> Now, pgbackrest runs the readdir() very quickly to build a manifest of\n> all of the files to backup, minimizing the window for this to possibly\n> happen, but pg_basebackup keeps a readdir() open during the entire\n> backup, making this more possible.\n\nHmm, this sounds pretty bad, and I agree that a workaround should be put\nin place. But where is pg_basebackup looping around readdir()? I\ncouldn't find it. There's a call to readdir() in FindStreamingStart(),\nbut that doesn't seem to match what you describe.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 17 Jun 2022 17:56:33 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "Greetings,\n\nOn Fri, Jun 17, 2022 at 14:32 Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Jun-09, Stephen Frost wrote:\n>\n> > TL;DR: if you're removing files from a directory that you've got an\n> > active readdir() running through, you might not actually get all of the\n> > *existing* files. Given that PG is happy to remove files from PGDATA\n> > while a backup is running, in theory this could lead to a backup utility\n> > like pgbackrest or pg_basebackup not actually backing up all the files.\n> >\n> > Now, pgbackrest runs the readdir() very quickly to build a manifest of\n> > all of the files to backup, minimizing the window for this to possibly\n> > happen, but pg_basebackup keeps a readdir() open during the entire\n> > backup, making this more possible.\n>\n> Hmm, this sounds pretty bad, and I agree that a workaround should be put\n> in place. But where is pg_basebackup looping around readdir()? I\n> couldn't find it. There's a call to readdir() in FindStreamingStart(),\n> but that doesn't seem to match what you describe.\n\n\nIt’s the server side that does it in basebackup.c when it’s building the\ntarball for the data dir and each table space and sending it to the client.\nIt’s not done by src/bin/pg_basebackup. Sorry for not being clear.\nTechnically this would be beyond just pg_basebackup but would impact,\npotentially, anything using BASE_BACKUP from the replication protocol (in\naddition to other backup tools which operate against the data directory\nwith readdir, of course).\n\nThanks,\n\nStephen\n\n>\n",
"msg_date": "Fri, 17 Jun 2022 18:30:46 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "On Fri, Jun 17, 2022 at 6:31 PM Stephen Frost <sfrost@snowman.net> wrote:\n>> Hmm, this sounds pretty bad, and I agree that a workaround should be put\n>> in place. But where is pg_basebackup looping around readdir()? I\n>> couldn't find it. There's a call to readdir() in FindStreamingStart(),\n>> but that doesn't seem to match what you describe.\n>\n> It’s the server side that does it in basebackup.c when it’s building the tarball for the data dir and each table space and sending it to the client. It’s not done by src/bin/pg_basebackup. Sorry for not being clear. Technically this would be beyond just pg_basebackup but would impact, potentially, anything using BASE_BACKUP from the replication protocol (in addition to other backup tools which operate against the data directory with readdir, of course).\n\nSpecifically, sendDir() can either recurse back into sendDir(), or it\ncan call sendFile(). So in theory if your directory contains\nsome/stupid/long/path/to/a/file, you could have 6 directory scans open\nall at the same time, and then a file being read concurrently with\nthat. That provides a lot of time for things to change concurrently,\nespecially at the outer levels. Before the scan of the outermost\ndirectory moves to the next file, it will have to completely finish\nreading and sending every file in the directory tree that was rooted\nat the directory entry we last read.\n\nI think this could be changed pretty easily. We could change sendDir()\nto read all of the directory entries into an in-memory buffer and then\nclose the directory and iterate over the buffer. I see two potential\ndisadvantages of that approach. Number one, we could encounter a\ndirectory with a vast number of relations. There are probably\nsubdirectories of PostgreSQL data directories that contain tens of\nmillions of files, possibly hundreds of millions of files. So,\nremembering all that data in memory would potentially take gigabytes\nof memory. 
I'm not sure that's a huge problem, because if you have\nhundreds of millions of tables in a single database, you should\nprobably have enough memory in the system that a few GB of RAM to take\na backup is no big deal. However, people don't always have the memory\nthat they should have, and many users do in fact run systems at a\nlevel of load that pushes the boundaries of their hardware.\nNonetheless, we could choose to take the position that caching the\nlist of filenames is worth it to avoid this risk.\n\nThe other consideration here is that this is not a complete remedy. It\nmakes the race condition narrower, I suppose, but it does not remove\nit. Ideally, we would like to do better than \"our new code will\ncorrupt your backup less often.\" However, I don't quite see how to get\nthere. We either need the OS to deliver us a reliable list of what\nfiles exist - and I don't see how to make it do that if readdir\ndoesn't - or we need a way to know what files are supposed to exist\nwithout reference to the OS - which would require some way of reading\nthe list of relfilenodes from a database to which we're not connected.\nSo maybe corrupting your backup less often is the best we can do. I do\nwonder how often this actually happens though, and on which\nfilesystems. The provided links seem to suggest that this is mostly a\nproblem with network filesystems, especially CIFS, and maybe also NFS.\n\nI'd be really interested in knowing whether this happens on a\nmainstream, non-networked filesystem. It's not an irrelevant concern\neven if it happens only on networked filesystems, but a lot more\npeople will be at risk if it also happens on ext4 or xfs. It does seem\na little bit surprising if no filesystem has a way of preventing this.\nI mean, does open() also randomly but with low probability fail to\nfind a file that exists, due to a concurrent directory modification on\nsome directory in the pathname? 
I assume that would be unacceptable,\nand if the file system has a way of preventing that from happening, then\nit has some way of ensuring a stable read of a directory, at least\nover a short period.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Oct 2022 12:44:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I'd be really interested in knowing whether this happens on a\n> mainstream, non-networked filesystem. It's not an irrelevant concern\n> even if it happens only on networked filesystems, but a lot more\n> people will be at risk if it also happens on ext4 or xfs. It does seem\n> a little bit surprising if no filesystem has a way of preventing this.\n> I mean, does open() also randomly but with low probability fail to\n> find a file that exists, due to a concurrent directory modification on\n> some directory in the pathname? I assume that would be unacceptable,\n> and the file system has a way of preventing that from happening, then\n> it has some way of ensuring a stable read of a directory, at least\n> over a short period.\n\nThe POSIX spec for readdir(3) has a little bit of info:\n\n The type DIR, which is defined in the <dirent.h> header, represents a\n directory stream, which is an ordered sequence of all the directory\n entries in a particular directory. Directory entries represent files;\n files may be removed from a directory or added to a directory\n asynchronously to the operation of readdir().\n\n If a file is removed from or added to the directory after the most\n recent call to opendir() or rewinddir(), whether a subsequent call to\n readdir() returns an entry for that file is unspecified.\n\nThere is no text suggesting that it's okay to miss, or to double-return,\nan entry that is present throughout the scan. So I'd interpret the case\nyou're worried about as \"forbidden by POSIX\". Of course, it's known that\nNFS fails to provide POSIX semantics in all cases --- but I don't know\nif this is one of them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Oct 2022 12:59:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 12:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> There is no text suggesting that it's okay to miss, or to double-return,\n> an entry that is present throughout the scan. So I'd interpret the case\n> you're worried about as \"forbidden by POSIX\". Of course, it's known that\n> NFS fails to provide POSIX semantics in all cases --- but I don't know\n> if this is one of them.\n\nYeah, me neither. One problem I see is that, even if the behavior is\nforbidden by POSIX, if it happens in practice on systems people\nactually use, then it's an issue. We even have documentation saying\nthat it's OK to use NFS, and a lot of people do -- which IMHO is\nunfortunate, but it's also not clear what the realistic alternatives\nare. It's pretty hard to tell people in 2022 that they are only\nallowed to use PostgreSQL with local storage.\n\nBut to put my cards on the table, it's not so much that I am worried\nabout this problem myself as that I want to know whether we're going\nto do anything about it as a project, and if so, what, because it\nintersects a patch that I'm working on. So if we want to readdir() in\none fell swoop and cache the results, I'm going to go write a patch\nfor that. If we don't, then I'd like to know whether (a) we think that\nwould be theoretically acceptable but not justified by the evidence\npresently available or (b) would be unacceptable due to (b1) the\npotential for increased memory usage or (b2) some other reason.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Oct 2022 13:27:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, Oct 18, 2022 at 12:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > There is no text suggesting that it's okay to miss, or to double-return,\n> > an entry that is present throughout the scan. So I'd interpret the case\n> > you're worried about as \"forbidden by POSIX\". Of course, it's known that\n> > NFS fails to provide POSIX semantics in all cases --- but I don't know\n> > if this is one of them.\n> \n> Yeah, me neither. One problem I see is that, even if the behavior is\n> forbidden by POSIX, if it happens in practice on systems people\n> actually use, then it's an issue. We even have documentation saying\n> that it's OK to use NFS, and a lot of people do -- which IMHO is\n> unfortunate, but it's also not clear what the realistic alternatives\n> are. It's pretty hard to tell people in 2022 that they are only\n> allowed to use PostgreSQL with local storage.\n> \n> But to put my cards on the table, it's not so much that I am worried\n> about this problem myself as that I want to know whether we're going\n> to do anything about it as a project, and if so, what, because it\n> intersects a patch that I'm working on. So if we want to readdir() in\n> one fell swoop and cache the results, I'm going to go write a patch\n> for that. If we don't, then I'd like to know whether (a) we think that\n> would be theoretically acceptable but not justified by the evidence\n> presently available or (b) would be unacceptable due to (b1) the\n> potential for increased memory usage or (b2) some other reason.\n\nWhile I don't think it's really something that should be happening, it's\ndefinitely something that's been seen with some networked filesystems,\nas reported. 
I also strongly suspect that on local filesystems there's\nsomething that prevents this from happening but as mentioned that\ndoesn't cover all PG use cases.\n\nIn pgbackrest, we moved to doing a scan and cache'ing all of the results\nin memory to reduce the risk when reading from the PG data dir. We also\nreworked our expire code (which removes an older backup from the backup\nrepository) to also do a complete scan before removing files.\n\nI don't see it as likely to be acceptable, but arranging to not add or\nremove files while the scan is happening would presumably eliminate the\nrisk entirely. We've not seen this issue recur in the expire command\nsince the change to first completely scan the directory and then go and\nremove the files from it. Perhaps just not removing files during the\nscan would be sufficient which might be more reasonable to do.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 18 Oct 2022 14:37:36 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 2:37 PM Stephen Frost <sfrost@snowman.net> wrote:\n> While I don't think it's really something that should be happening, it's\n> definitely something that's been seen with some networked filesystems,\n> as reported.\n\nDo you have clear and convincing evidence of this happening on\nanything other than CIFS?\n\n> I don't see it as likely to be acceptable, but arranging to not add or\n> remove files while the scan is happening would presumably eliminate the\n> risk entirely. We've not seen this issue recur in the expire command\n> since the change to first completely scan the directory and then go and\n> remove the files from it. Perhaps just not removing files during the\n> scan would be sufficient which might be more reasonable to do.\n\nI don't think that's a complete non-starter, but I do think it would\nbe somewhat expensive in some workloads. I hate to make everyone pay\nthat much for insurance against a shouldn't-happen case. We could make\nit optional, but then we're asking users to decide whether or not they\nneed insurance. Since we don't even know which filesystems are\npotentially affected, how is anyone else supposed to know? Worse\nstill, if you have a corruption event, you're still not going to know\nfor sure whether this would have fixed it, so you still don't know\nwhether you should turn on the feature for next time. And if you do\nturn it on and don't get corruption again, you don't know whether you\nwould have had a problem if you hadn't used the feature. It all just\nseems like a lot of guesswork that will end up being frustrating to\nboth users and developers.\n\nJust deciding to cache to the results of readdir() in memory is much\ncheaper insurance. I think I'd probably be willing to foist that\noverhead onto everyone, all the time. 
As I mentioned before, it could\nstill hose someone who is right on the brink of a memory disaster, but\nthat's a much narrower blast radius than putting locking around all\noperations that create or remove a file in the same directory as a\nrelation file. But it's also not a complete fix, which sucks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Oct 2022 15:14:41 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Oct 18, 2022 at 2:37 PM Stephen Frost <sfrost@snowman.net> wrote:\n>> I don't see it as likely to be acceptable, but arranging to not add or\n>> remove files while the scan is happening would presumably eliminate the\n>> risk entirely. We've not seen this issue recur in the expire command\n>> since the change to first completely scan the directory and then go and\n>> remove the files from it. Perhaps just not removing files during the\n>> scan would be sufficient which might be more reasonable to do.\n\n> Just deciding to cache to the results of readdir() in memory is much\n> cheaper insurance. I think I'd probably be willing to foist that\n> overhead onto everyone, all the time. As I mentioned before, it could\n> still hose someone who is right on the brink of a memory disaster, but\n> that's a much narrower blast radius than putting locking around all\n> operations that create or remove a file in the same directory as a\n> relation file. But it's also not a complete fix, which sucks.\n\nYeah, that. I'm not sure if we need to do anything about this, but\nif we do, I don't think that's it. Agreed that the memory-consumption\nobjection is pretty weak; the one that seems compelling is that by\nitself, this does nothing to fix the problem beyond narrowing the\nwindow some.\n\nIsn't it already the case (or could be made so) that relation file\nremoval happens only in the checkpointer? I wonder if we could\nget to a situation where we can interlock file removal just by\ncommanding the checkpointer to not do it for awhile. Then combining\nthat with caching readdir results (to narrow the window in which we\nhave to stop the checkpointer) might yield a solution that has some\ncredibility. 
This scheme doesn't attempt to prevent file creation\nconcurrently with a readdir, but you'd have to make some really\nadverse assumptions to believe that file creation would cause a\npre-existing entry to get missed (as opposed to getting scanned\ntwice). So it might be an acceptable answer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Oct 2022 15:59:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 3:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Isn't it already the case (or could be made so) that relation file\n> removal happens only in the checkpointer? I wonder if we could\n> get to a situation where we can interlock file removal just by\n> commanding the checkpointer to not do it for awhile. Then combining\n> that with caching readdir results (to narrow the window in which we\n> have to stop the checkpointer) might yield a solution that has some\n> credibility. This scheme doesn't attempt to prevent file creation\n> concurrently with a readdir, but you'd have to make some really\n> adverse assumptions to believe that file creation would cause a\n> pre-existing entry to get missed (as opposed to getting scanned\n> twice). So it might be an acceptable answer.\n\nI believe that individual backends directly remove all relation forks\nother than the main fork and all segments other than the first one.\nThe discussion on various other threads has been in the direction of\ntrying to standardize on moving that last case out of the checkpointer\n- i.e. getting rid of what Thomas dubbed \"tombstone\" files - which is\npretty much the exact opposite of this proposal. But even apart from\nthat, I don't think this would be that easy to implement. If you\nremoved a large relation, you'd have to tell the checkpointer to\nremove many files instead of just 1. That sounds kinda painful: it\nwould be more IPC, and it would delay file removal just so that we can\ntell the checkpointer to delay it some more.\n\nAnd I don't think we really need to do any of that. We could invent a\nnew kind of lock tag for <dboid/tsoid> combination. Take a share lock\nto create or remove files. Take an exclusive lock to scan the\ndirectory. I think that accomplishes the same thing as your proposal,\nbut more directly, and with less overhead. 
It's still substantially\nmore than NO overhead, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Oct 2022 17:34:05 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Oct 18, 2022 at 3:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Isn't it already the case (or could be made so) that relation file\n>> removal happens only in the checkpointer?\n\n> I believe that individual backends directly remove all relation forks\n> other than the main fork and all segments other than the first one.\n\nYeah, obviously some changes would need to be made to that, but ISTM\nwe could just treat all the forks as we now treat the first one.\n\n> The discussion on various other threads has been in the direction of\n> trying to standardize on moving that last case out of the checkpointer\n> - i.e. getting rid of what Thomas dubbed \"tombstone\" files - which is\n> pretty much the exact opposite of this proposal.\n\nYeah, we'd have to give up on that. If that goes anywhere then\nit kills this idea.\n\n> But even apart from\n> that, I don't think this would be that easy to implement. If you\n> removed a large relation, you'd have to tell the checkpointer to\n> remove many files instead of just 1.\n\nThe backends just implement this by deleting files until they don't\nfind the next one in sequence. I fail to see how it'd be any\nharder for the checkpointer to do that.\n\n> And I don't think we really need to do any of that. We could invent a\n> new kind of lock tag for <dboid/tsoid> combination. Take a share lock\n> to create or remove files. Take an exclusive lock to scan the\n> directory. I think that accomplishes the same thing as your proposal,\n> but more directly, and with less overhead. It's still substantially\n> more than NO overhead, though.\n\nMy concern about that is that it implies touching a whole lot of\nplaces, and if you miss even one then you've lost whatever guarantee\nyou thought you were getting. 
More, there's no easy way to find\nall the relevant places (some will be in extensions, no doubt).\nSo I have approximately zero faith that it could be made reliable.\nFunneling things through the checkpointer would make that a lot\nmore centralized. I concede that cowboy unlink() calls could still\nbe a problem ... but I doubt there's any solution that's totally\nfree of that hazard.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Oct 2022 17:44:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Checking for missing heap/index files"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 5:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> My concern about that is that it implies touching a whole lot of\n> places, and if you miss even one then you've lost whatever guarantee\n> you thought you were getting. More, there's no easy way to find\n> all the relevant places (some will be in extensions, no doubt).\n\n*scratches head*\n\nI must be missing something. It seems to me that it requires touching\nexactly the same set of places as your idea. And it also seems to me\nlike it's not that many places. Am I being really dumb here?\n\nIt's still somewhat unclear to me whether we should be doing anything\nat all here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 Oct 2022 10:20:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Checking for missing heap/index files"
}
] |
[
{
"msg_contents": "Hello,\n\nExperimenting with pipeline mode, with libpq 14.2, sometimes we\nreceive the notice \"message type 0x33 arrived from server while idle\".\nTested with Postgres server 12 and 14.\n\nThis notice is generated by libpq upon receiving messages after using\nPQsendQuery(). The libpq trace shows:\n\n F 101 Parse \"\" \"INSERT INTO pq_pipeline_demo(itemno,\nint8filler) VALUES (1, 4611686018427387904) RETURNING id\" 0\n F 12 Bind \"\" \"\" 0 0 0\n F 6 Describe P \"\"\n F 9 Execute \"\" 0\n F 6 Close P \"\"\n F 4 Flush\n B 4 ParseComplete\n B 4 BindComplete\n B 27 RowDescription 1 \"id\" 561056 1 23 4 -1 0\n B 11 DataRow 1 1 '3'\n B 15 CommandComplete \"INSERT 0 1\"\n B 4 CloseComplete\n F 4 Sync\n B 5 ReadyForQuery I\n\nin the state the server messages are received, CloseComplete is unexpected.\n\nFor comparison, PQsendQueryParams() produces the trace:\n\n F 93 Parse \"\" \"INSERT INTO pq_pipeline_demo(itemno,\nint8filler) VALUES ($1, $2) RETURNING id\" 2 21 20\n F 36 Bind \"\" \"\" 2 1 1 2 2 '\\x00\\x01' 8\n'@\\x00\\x00\\x00\\x00\\x00\\x00\\x00' 1 0\n F 6 Describe P \"\"\n F 9 Execute \"\" 0\n F 4 Flush\n B 4 ParseComplete\n B 4 BindComplete\n B 27 RowDescription 1 \"id\" 561056 1 23 4 -1 0\n B 11 DataRow 1 1 '4'\n B 15 CommandComplete \"INSERT 0 1\"\n F 4 Sync\n B 5 ReadyForQuery I\n\nwhere no Close is sent.\n\nIs this a problem with PQexecQuery which should not send the Close, or\nwith receiving in IDLE mode which should expect a CloseComplete?\n\nShould we avoid using PQexecQuery in pipeline mode altogether?\n\nA playground to reproduce the issue is available at\nhttps://github.com/psycopg/psycopg/issues/314\n\nCheers\n\n-- Daniele\n\n\n",
"msg_date": "Wed, 8 Jun 2022 15:59:41 +0200",
"msg_from": "Daniele Varrazzo <daniele.varrazzo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Using PQexecQuery in pipeline mode produces unexpected Close messages"
},
{
"msg_contents": "On 2022-Jun-08, Daniele Varrazzo wrote:\n\n> Is this a problem with PQexecQuery which should not send the Close, or\n> with receiving in IDLE mode which should expect a CloseComplete?\n\nInteresting.\n\nWhat that Close message is doing is closing the unnamed portal, which\nis otherwise closed implicitly when the next one is opened. That's how\nsingle-query mode works: if you run a single portal, it'll be kept open.\n\nI believe that the right fix is to not send that Close message in\nPQsendQuery.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Para tener más hay que desear menos\"\n\n\n",
"msg_date": "Wed, 8 Jun 2022 17:08:47 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "(Moved to -hackers)\n\nAt Wed, 8 Jun 2022 17:08:47 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> What that Close message is doing is closing the unnamed portal, which\n> is otherwise closed implicitly when the next one is opened. That's how\n> single-query mode works: if you run a single portal, it'll be kept open.\n> \n> I believe that the right fix is to not send that Close message in\n> PQsendQuery.\n\nAgreed. At least Close message in that context is useless and\nPQsendQueryGuts doesn't send it. And removes the Close message surely\nfixes the issue.\n\nThe doc [1] says:\n\n[1] https://www.postgresql.org/docs/14/protocol-flow.html\n\n> The simple Query message is approximately equivalent to the series\n> Parse, Bind, portal Describe, Execute, Close, Sync, using the\n> unnamed prepared statement and portal objects and no parameters. One\n\nThe current implement of PQsendQueryInternal looks like the result of\na misunderstanding of the doc. In the regression tests, that path is\nexcercised only for an error case, where no CloseComplete comes.\n\nThe attached adds a test for the normal-path of pipelined\nPQsendQuery() to simple_pipeline test then modifies that function not\nto send Close message. Without the fix, the test fails by \"unexpected\nnotice\" even if the trace matches the \"expected\" content.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 10 Jun 2022 15:25:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "At Fri, 10 Jun 2022 15:25:44 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> The current implement of PQsendQueryInternal looks like the result of\n> a misunderstanding of the doc. In the regression tests, that path is\n> excercised only for an error case, where no CloseComplete comes.\n> \n> The attached adds a test for the normal-path of pipelined\n> PQsendQuery() to simple_pipeline test then modifies that function not\n> to send Close message. Without the fix, the test fails by \"unexpected\n> notice\" even if the trace matches the \"expected\" content.\n\nAnd, as a matter of course, this fix should be back-patched to 14.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 10 Jun 2022 15:33:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "On Fri, Jun 10, 2022, at 8:25 AM, Kyotaro Horiguchi wrote:\n> \n> The current implement of PQsendQueryInternal looks like the result of\n> a misunderstanding of the doc. In the regression tests, that path is\n> excercised only for an error case, where no CloseComplete comes.\n> \n> The attached adds a test for the normal-path of pipelined\n> PQsendQuery() to simple_pipeline test then modifies that function not\n> to send Close message. Without the fix, the test fails by \"unexpected\n> notice\" even if the trace matches the \"expected\" content.\n\nHah, the patch I wrote is almost identical to yours, down to the notice processor counting the number of notices received. The only difference is that I put my test in pipeline_abort.\n\nSadly, it looks like I won't be able to get this patched pushed for 14.4.\n\n\nOn Fri, Jun 10, 2022, at 8:25 AM, Kyotaro Horiguchi wrote:The current implement of PQsendQueryInternal looks like the result ofa misunderstanding of the doc. In the regression tests, that path isexcercised only for an error case, where no CloseComplete comes.The attached adds a test for the normal-path of pipelinedPQsendQuery() to simple_pipeline test then modifies that function notto send Close message. Without the fix, the test fails by \"unexpectednotice\" even if the trace matches the \"expected\" content.Hah, the patch I wrote is almost identical to yours, down to the notice processor counting the number of notices received. The only difference is that I put my test in pipeline_abort.Sadly, it looks like I won't be able to get this patched pushed for 14.4.",
"msg_date": "Mon, 13 Jun 2022 13:09:48 +0200",
"msg_from": "=?UTF-8?Q?=C3=81lvaro_Herrera?= <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "=?UTF-8?Q?=C3=81lvaro_Herrera?= <alvherre@alvh.no-ip.org> writes:\n> Sadly, it looks like I won't be able to get this patched pushed for 14.4.\n\nI think that's a good thing actually; this isn't urgent enough to\nrisk a last-minute commit. Please wait till the release freeze\nlifts.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Jun 2022 10:54:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "On 2022-Jun-10, Kyotaro Horiguchi wrote:\n\n> (Moved to -hackers)\n> \n> At Wed, 8 Jun 2022 17:08:47 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> > What that Close message is doing is closing the unnamed portal, which\n> > is otherwise closed implicitly when the next one is opened. That's how\n> > single-query mode works: if you run a single portal, it'll be kept open.\n> > \n> > I believe that the right fix is to not send that Close message in\n> > PQsendQuery.\n> \n> Agreed. At least Close message in that context is useless and\n> PQsendQueryGuts doesn't send it. And removes the Close message surely\n> fixes the issue.\n\nSo, git archaeology led me to this thread\nhttps://postgr.es/m/202106072107.d4i55hdscxqj@alvherre.pgsql\nwhich is why we added that message in the first place.\n\nI was about to push the attached patch (a merged version of Kyotaro's\nand mine), but now I'm wondering if that's the right approach.\n\nAlternatives:\n\n- Have the client not complain if it gets CloseComplete in idle state.\n (After all, it's a pretty useless message, since we already do nothing\n with it if we get it in BUSY state.)\n\n- Have the server not send CloseComplete at all in the cases where\n the client is not expecting it. Not sure how this would be\n implemented.\n\n- Other ideas?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"That sort of implies that there are Emacs keystrokes which aren't obscure.\nI've been using it daily for 2 years now and have yet to discover any key\nsequence which makes any sense.\" (Paul Thomas)",
"msg_date": "Wed, 15 Jun 2022 20:26:33 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> So, git archaeology led me to this thread\n> https://postgr.es/m/202106072107.d4i55hdscxqj@alvherre.pgsql\n> which is why we added that message in the first place.\n\nUm. Good thing you looked. I doubt we want to revert that change now.\n\n> Alternatives:\n> - Have the client not complain if it gets CloseComplete in idle state.\n> (After all, it's a pretty useless message, since we already do nothing\n> with it if we get it in BUSY state.)\n\nISTM the actual problem here is that we're reverting to IDLE state too\nsoon. I didn't try to trace down exactly where that's happening, but\nI notice that in the non-pipeline case we don't go to IDLE till we've\nseen 'Z' (Sync). Something in the pipeline logic must be jumping the\ngun on that state transition.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Jun 2022 14:56:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "At Wed, 15 Jun 2022 14:56:42 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > So, git archaeology led me to this thread\n> > https://postgr.es/m/202106072107.d4i55hdscxqj@alvherre.pgsql\n> > which is why we added that message in the first place.\n> \n> Um. Good thing you looked. I doubt we want to revert that change now.\n> \n> > Alternatives:\n> > - Have the client not complain if it gets CloseComplete in idle state.\n> > (After all, it's a pretty useless message, since we already do nothing\n> > with it if we get it in BUSY state.)\n> \n> ISTM the actual problem here is that we're reverting to IDLE state too\n> soon. I didn't try to trace down exactly where that's happening, but\n\nYes. I once visited that fact but also I thought that in the\ncomparison with non-pipelined PQsendQuery, the three messages look\nextra. Thus I concluded (at the time) that removing Close is enough\nhere.\n\n> I notice that in the non-pipeline case we don't go to IDLE till we've\n> seen 'Z' (Sync). Something in the pipeline logic must be jumping the\n> gun on that state transition.\n\nPQgetResult() resets the state to IDLE when not in pipeline mode.\n\nfe-exec.c:2171\n\n>\t\t\tif (conn->pipelineStatus != PQ_PIPELINE_OFF)\n>\t\t\t{\n>\t\t\t\t/*\n>\t\t\t\t * We're about to send the results of the current query. Set\n>\t\t\t\t * us idle now, and ...\n>\t\t\t\t */\n>\t\t\t\tconn->asyncStatus = PGASYNC_IDLE;\n\nAnd actually that code let the connection state enter to IDLE before\nCloseComplete. In the test case I posted, the following happens.\n\n PQsendQuery(conn, \"SELECT 1;\");\n PQsendFlushRequest(conn);\n PQgetResult(conn); // state enters IDLE, reads down to <CommandComplete>\n PQgetResult(conn); // reads <CloseComplete comes>\n PQpipelineSync(conn); // sync too late\n\nPipeline feature seems intending to allow PQgetResult called before\nPQpipelineSync. 
And also seems allowing to call QPpipelineSync() after\nPQgetResult().\n\nI haven't come up with a valid *fix* of this flow..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 16 Jun 2022 10:34:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "At Thu, 16 Jun 2022 10:34:22 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 15 Jun 2022 14:56:42 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > So, git archaeology led me to this thread\n> > > https://postgr.es/m/202106072107.d4i55hdscxqj@alvherre.pgsql\n> > > which is why we added that message in the first place.\n> > \n> > Um. Good thing you looked. I doubt we want to revert that change now.\n> > \n> > > Alternatives:\n> > > - Have the client not complain if it gets CloseComplete in idle state.\n> > > (After all, it's a pretty useless message, since we already do nothing\n> > > with it if we get it in BUSY state.)\n> > \n> > ISTM the actual problem here is that we're reverting to IDLE state too\n> > soon. I didn't try to trace down exactly where that's happening, but\n> \n> Yes. I once visited that fact but also I thought that in the\n> comparison with non-pipelined PQsendQuery, the three messages look\n> extra. Thus I concluded (at the time) that removing Close is enough\n> here.\n> \n> > I notice that in the non-pipeline case we don't go to IDLE till we've\n> > seen 'Z' (Sync). Something in the pipeline logic must be jumping the\n> > gun on that state transition.\n> \n- PQgetResult() resets the state to IDLE when not in pipeline mode.\n\nD... the \"not\" should not be there.\n\n+ PQgetResult() resets the state to IDLE while in pipeline mode.\n\n> fe-exec.c:2171\n> \n> >\t\t\tif (conn->pipelineStatus != PQ_PIPELINE_OFF)\n> >\t\t\t{\n> >\t\t\t\t/*\n> >\t\t\t\t * We're about to send the results of the current query. Set\n> >\t\t\t\t * us idle now, and ...\n> >\t\t\t\t */\n> >\t\t\t\tconn->asyncStatus = PGASYNC_IDLE;\n> \n> And actually that code let the connection state enter to IDLE before\n> CloseComplete. 
In the test case I posted, the following happens.\n> \n> PQsendQuery(conn, \"SELECT 1;\");\n> PQsendFlushRequest(conn);\n> PQgetResult(conn); // state enters IDLE, reads down to <CommandComplete>\n> PQgetResult(conn); // reads <CloseComplete comes>\n> PQpipelineSync(conn); // sync too late\n> \n> Pipeline feature seems intending to allow PQgetResult called before\n> PQpipelineSync. And also seems allowing to call QPpipelineSync() after\n> PQgetResult().\n> \n> I haven't come up with a valid *fix* of this flow..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 16 Jun 2022 10:41:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "At Thu, 16 Jun 2022 10:34:22 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> PQgetResult() resets the state to IDLE while in pipeline mode.\n> \n> fe-exec.c:2171\n> \n> >\t\t\tif (conn->pipelineStatus != PQ_PIPELINE_OFF)\n> >\t\t\t{\n> >\t\t\t\t/*\n> >\t\t\t\t * We're about to send the results of the current query. Set\n> >\t\t\t\t * us idle now, and ...\n> >\t\t\t\t */\n> >\t\t\t\tconn->asyncStatus = PGASYNC_IDLE;\n> \n> And actually that code let the connection state enter to IDLE before\n> CloseComplete. In the test case I posted, the following happens.\n> \n> PQsendQuery(conn, \"SELECT 1;\");\n> PQsendFlushRequest(conn);\n> PQgetResult(conn); // state enters IDLE, reads down to <CommandComplete>\n> PQgetResult(conn); // reads <CloseComplete comes>\n> PQpipelineSync(conn); // sync too late\n> \n> Pipeline feature seems intending to allow PQgetResult called before\n> PQpipelineSync. And also seems allowing to call QPpipelineSync() after\n> PQgetResult().\n> \n> I haven't come up with a valid *fix* of this flow..\n\nThe attached is a crude patch to separate the state for PIPELINE-IDLE\nfrom PGASYNC_IDLE. I haven't found a better way..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 16 Jun 2022 12:07:47 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "On 2022-Jun-16, Kyotaro Horiguchi wrote:\n\n> The attached is a crude patch to separate the state for PIPELINE-IDLE\n> from PGASYNC_IDLE. I haven't found a better way..\n\nAh, yeah, this might be a way to fix this.\n\nSomething very similar to a PIPELINE_IDLE mode was present in Craig's\ninitial patch for pipeline mode. However, I fought very hard to remove\nit, because it seemed to me that failing to handle it correctly\neverywhere would lead to more bugs than not having it. (Indeed, there\nwere some.)\n\nHowever, I see now that your patch would not only fix this bug, but also\nlet us remove the ugly \"notionally not-idle\" bit in fe-protocol3.c,\nwhich makes me ecstatic. So let's push forward with this. However,\nthis means we'll have to go over all places that use asyncStatus to\nensure that they all handle the new value correctly.\n\nI did found one bug in your patch: in the switch for asyncStatus in\nPQsendQueryStart, you introduce a new error message. With the current\ntests, that never fires, which is telling us that our coverage is not\ncomplete. But with the right sequence of libpq calls, which the\nattached adds (note that it's for REL_14_STABLE), that can be hit, and\nit's easy to see that throwing an error there is a mistake. The right\naction to take there is to let the action through.\n\nOthers to think about:\n\nPQisBusy (I think no changes are needed),\nPQfn (I think it should accept a call in PGASYNC_PIPELINE_IDLE mode;\nfully untested in pipeline mode),\nPQexitPipelineMode (I think it needs to return error; needs test case),\nPQsendFlushRequest (I think it should let through; ditto).\n\nI also attach a patch to make the test suite use Test::Differences, if\navailable. It makes the diffs of the traces much easier to read, when\nthey fail. (I wish for a simple way to set the context size, but that\nwould need a shim routine that I'm currently too lazy to write.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/",
"msg_date": "Fri, 17 Jun 2022 20:31:50 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "At Fri, 17 Jun 2022 20:31:50 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2022-Jun-16, Kyotaro Horiguchi wrote:\n> \n> > The attached is a crude patch to separate the state for PIPELINE-IDLE\n> > from PGASYNC_IDLE. I haven't found a better way..\n> \n> Ah, yeah, this might be a way to fix this.\n> \n> Something very similar to a PIPELINE_IDLE mode was present in Craig's\n> initial patch for pipeline mode. However, I fought very hard to remove\n> it, because it seemed to me that failing to handle it correctly\n> everywhere would lead to more bugs than not having it. (Indeed, there\n> were some.)\n> \n> However, I see now that your patch would not only fix this bug, but also\n> let us remove the ugly \"notionally not-idle\" bit in fe-protocol3.c,\n> which makes me ecstatic. So let's push forward with this. However,\n\nYey.\n\n> this means we'll have to go over all places that use asyncStatus to\n> ensure that they all handle the new value correctly.\n\nSure.\n\n> I did found one bug in your patch: in the switch for asyncStatus in\n> PQsendQueryStart, you introduce a new error message. With the current\n> tests, that never fires, which is telling us that our coverage is not\n> complete. But with the right sequence of libpq calls, which the\n> attached adds (note that it's for REL_14_STABLE), that can be hit, and\n\n# (ah, I wondered why it failed to apply..)\n\n> it's easy to see that throwing an error there is a mistake. The right\n> action to take there is to let the action through.\n\nYeah.. Actulallly I really did it carelessly.. Thanks!\n\n> Others to think about:\n> \n> PQisBusy (I think no changes are needed),\n\nAgreed.\n\n> PQfn (I think it should accept a call in PGASYNC_PIPELINE_IDLE mode;\n> fully untested in pipeline mode),\n\nDoes a PQ_PIPELINE_OFF path need that? Rather I thought that we need\nAssert(!conn->asyncStatus != PGASYNC_PIPELINE_IDLE) there. 
But anyway\nwe might need a test for this path.\n\n> PQexitPipelineMode (I think it needs to return error; needs test case),\n\nAgreed about test case. Currently the function doesn't handle\nPGASYNC_IDLE so I thought that PGASYNC_PIPELINE_IDLE also don't need a\ncare. If the function treats PGASYNC_PIPELINE_IDLE state as error,\nthe regression test fails (but I haven't examine it furtuer.)\n\n> PQsendFlushRequest (I think it should let through; ditto).\n\nDoes that mean exit without pushing 'H' message?\n\n> I also attach a patch to make the test suite use Test::Differences, if\n> available. It makes the diffs of the traces much easier to read, when\n> they fail. (I wish for a simple way to set the context size, but that\n> would need a shim routine that I'm currently too lazy to write.)\n\nYeah, it was annoying that the script prints expected and result trace\nseparately. It looks pretty good with the patch. I don't think\nthere's much use of context size here.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 21 Jun 2022 11:42:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "At Tue, 21 Jun 2022 11:42:59 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 17 Jun 2022 20:31:50 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> > Others to think about:\n> > \n> > PQisBusy (I think no changes are needed),\n> \n> Agreed.\n> \n> > PQfn (I think it should accept a call in PGASYNC_PIPELINE_IDLE mode;\n> > fully untested in pipeline mode),\n> \n> Does a PQ_PIPELINE_OFF path need that? Rather I thought that we need\n> Assert(!conn->asyncStatus != PGASYNC_PIPELINE_IDLE) there. But anyway\n> we might need a test for this path.\n\nIn the attached, PQfn() is used while in pipeline mode and\nPGASYNC_PIPELINE_IDLE. Both error out and effectivelly no-op.\n\n> > PQexitPipelineMode (I think it needs to return error; needs test case),\n> \n> Agreed about test case. Currently the function doesn't handle\n> PGASYNC_IDLE so I thought that PGASYNC_PIPELINE_IDLE also don't need a\n> care. If the function treats PGASYNC_PIPELINE_IDLE state as error,\n> the regression test fails (but I haven't examine it furtuer.)\n\nPQexitPipelineMode() is called while PGASYNC_PIPELINE_IDLE.\n\n> > PQsendFlushRequest (I think it should let through; ditto).\n> \n> Does that mean exit without pushing 'H' message?\n\nI didn't do anything on this in the sttached.\n\nBy the way, I noticed that \"libpq_pipeline uniqviol\" intermittently\nfails for uncertain reasons.\n\n> result 574/575: pipeline aborted\n> ...........................................................\n> done writing\n> \n> libpq_pipeline:1531: got unexpected NULL\n\nThe \"...........done writing\" is printed too late in the error cases.\n\nThis causes the TAP test fail, but I haven't find what's happnening at\nthe time.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 21 Jun 2022 14:56:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "At Tue, 21 Jun 2022 14:56:40 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> By the way, I noticed that \"libpq_pipeline uniqviol\" intermittently\n> fails for uncertain reasons.\n> \n> > result 574/575: pipeline aborted\n> > ...........................................................\n> > done writing\n> > \n> > libpq_pipeline:1531: got unexpected NULL\n> \n> The \"...........done writing\" is printed too late in the error cases.\n> \n> This causes the TAP test fail, but I haven't find what's happnening at\n> the time.\n\nJust to make sure, I see this with the master branch\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 21 Jun 2022 14:59:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "At Tue, 21 Jun 2022 14:59:07 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Tue, 21 Jun 2022 14:56:40 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > By the way, I noticed that \"libpq_pipeline uniqviol\" intermittently\n> > fails for uncertain reasons.\n> > \n> > > result 574/575: pipeline aborted\n> > > ...........................................................\n> > > done writing\n> > > \n> > > libpq_pipeline:1531: got unexpected NULL\n> > \n> > The \"...........done writing\" is printed too late in the error cases.\n> > \n> > This causes the TAP test fail, but I haven't find what's happnening at\n> > the time.\n> \n> Just to make sure, I see this with the master branch\n\nNo. It *is* caused by the fix. Sorry for the mistake. The test module\nlinked to the wrong binary..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 21 Jun 2022 15:05:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "At Tue, 21 Jun 2022 14:56:40 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> By the way, I noticed that \"libpq_pipeline uniqviol\" intermittently\n> fails for uncertain reasons.\n> \n> > result 574/575: pipeline aborted\n> > ...........................................................\n> > done writing\n> > \n> > libpq_pipeline:1531: got unexpected NULL\n\nPQsendQueryPrepared() is called after the conection's state has moved\nto PGASYNC_IDLE so PQgetResult returns NULL. But actually there are\nresults. So, if pqPipelineProcessorQueue() doesn't move the async\nstate to PGASYNC_IDLE when queue is emtpy, uniqviol can run till the\nend. But that change breaks almost all of other test items.\n\nFinally, I found that the change in pqPipelineProcessorQueue() as\nattached fixes the uniqviol failure and doesn't break other tests.\nHowever, I don't understand what I did by the change for now... X(\nIt seems to me something's wrong in the PQ_PIPELINE_ABORTED mode..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 21 Jun 2022 17:46:54 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "So I wrote some more test scenarios for this, and as I wrote in some\nother thread, I realized that there are more problems than just some\nNOTICE trouble. For instance, if you send a query, then read the result\nbut not the terminating NULL then send another query, everything gets\nconfused; the next thing you'll read is the result for the second query,\nwithout having read the NULL that terminates the results of the first\nquery. Any application that expects the usual flow of results will be\nconfused. Kyotaro's patch to add PIPELINE_IDLE fixes this bit too, as\nfar as I can tell.\n\nHowever, another problem case, not fixed by PIPELINE_IDLE, occurs if you\nexit pipeline mode after PQsendQuery() and then immediately use\nPQexec(). The CloseComplete will be received at the wrong time, and a\nnotice is emitted nevertheless.\n\nI spent a lot of time trying to understand how to fix this last bit, and\nthe solution I came up with is that PQsendQuery() must add a second\nentry to the command queue after the PGQUERY_EXTENDED one, to match the\nCloseComplete message being expected; with that entry in the queue,\nPQgetResult will really only get to IDLE mode after the Close has been\nseen, which is what we want. I named it PGQUERY_CLOSE.\n\nSadly, some hacks are needed to make this work fully:\n\n1. the client is never expecting that PQgetResult() would return\n anything for the CloseComplete, so something needs to consume the\n CloseComplete silently (including the queue entry for it) when it is\n received; I chose to do this directly in pqParseInput3. I tried to\n make PQgetResult itself do it, but it became a pile of hacks until I\n was no longer sure what was going on. Putting it in fe-protocol3.c\n ends up a lot cleaner. However, we still need PQgetResult to invoke\n parseInput() at the point where Close is expected.\n\n2. 
if an error occurs while executing the query, the CloseComplete will\n of course never arrive, so we need to erase it from the queue\n silently if we're returning an error.\n\nI toyed with the idea of having parseInput() produce a PGresult that\nencodes the Close message, and have PQgetResult consume and discard\nthat, then read some further message to have something to return. But\nit seemed inefficient and equally ugly and I'm not sure that flow\ncontrol is any simpler.\n\nI think another possibility would be to make PQexitPipelineMode\nresponsible for /something/, but I'm not sure what that would be.\nChecking the queue and seeing if the next message is CloseComplete, then\neating that message before exiting pipeline mode? That seems way too\nmagical. I didn't attempt this.\n\nI attach a patch series that implements the proposed fix (again for\nREL_14_STABLE) in steps, to make it easy to read. I intend to squash\nthe whole lot into a single commit before pushing. But while writing\nthis email it occurred to me that I need to add at least one more test,\nto receive a WARNING while waiting for CloseComplete. AFAICT it should\nwork, but better make sure.\n\nI produced pipeline_idle.trace file by running the test in the fully\nfixed tree, then used it to verify that all tests fail in different ways\nwhen run without the fixes. The first fix with PIPELINE_IDLE fixes some\nof these failures, and the PGQUERY_CLOSE one fixes the remaining one.\nReading the trace file, it looks correct to me.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Doing what he did amounts to sticking his fingers under the hood of the\nimplementation; if he gets his fingers burnt, it's his problem.\" (Tom Lane)",
"msg_date": "Wed, 29 Jun 2022 14:09:17 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "Thanks for the further testing scenario.\n\nAt Wed, 29 Jun 2022 14:09:17 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> So I wrote some more test scenarios for this, and as I wrote in some\n> other thread, I realized that there are more problems than just some\n> NOTICE trouble. For instance, if you send a query, then read the result\n> but not the terminating NULL then send another query, everything gets\n> confused; the next thing you'll read is the result for the second query,\n> without having read the NULL that terminates the results of the first\n> query. Any application that expects the usual flow of results will be\n> confused. Kyotaro's patch to add PIPELINE_IDLE fixes this bit too, as\n> far as I can tell.\n> \n> However, another problem case, not fixed by PIPELINE_IDLE, occurs if you\n> exit pipeline mode after PQsendQuery() and then immediately use\n> PQexec(). The CloseComplete will be received at the wrong time, and a\n> notice is emitted nevertheless.\n\nMmm. My patch moves the point of failure of the scenario a bit but\nstill a little short. However, in my understanding, it seems that it is the\ntask of the PQpipelineSync()-PQgetResult() pair to consume the\nCloseComplete. If I inserted PQpipelineSync() just after PQsendQuery()\nand called PQgetResult() for PGRES_PIPELINE_SYNC before\nPQexitPipelineMode(), the out-of-sync CloseComplete is not seen in the\nscenario. But if it is right, I'd like to complain about the\nobscure-but-stiff protocol of pipeline mode..\n\n> I spent a lot of time trying to understand how to fix this last bit, and\n> the solution I came up with is that PQsendQuery() must add a second\n> entry to the command queue after the PGQUERY_EXTENDED one, to match the\n> CloseComplete message being expected; with that entry in the queue,\n> PQgetResult will really only get to IDLE mode after the Close has been\n> seen, which is what we want. 
I named it PGQUERY_CLOSE.\n> \n> Sadly, some hacks are needed to make this work fully:\n> \n> 1. the client is never expecting that PQgetResult() would return\n> anything for the CloseComplete, so something needs to consume the\n> CloseComplete silently (including the queue entry for it) when it is\n> received; I chose to do this directly in pqParseInput3. I tried to\n> make PQgetResult itself do it, but it became a pile of hacks until I\n> was no longer sure what was going on. Putting it in fe-protocol3.c\n> ends up a lot cleaner. However, we still need PQgetResult to invoke\n> parseInput() at the point where Close is expected.\n> \n> 2. if an error occurs while executing the query, the CloseComplete will\n> of course never arrive, so we need to erase it from the queue\n> silently if we're returning an error.\n> \n> I toyed with the idea of having parseInput() produce a PGresult that\n> encodes the Close message, and have PQgetResult consume and discard\n> that, then read some further message to have something to return. But\n> it seemed inefficient and equally ugly and I'm not sure that flow\n> control is any simpler.\n> \n> I think another possibility would be to make PQexitPipelineMode\n> responsible for /something/, but I'm not sure what that would be.\n> Checking the queue and seeing if the next message is CloseComplete, then\n> eating that message before exiting pipeline mode? That seems way too\n> magical. I didn't attempt this.\n> \n> I attach a patch series that implements the proposed fix (again for\n> REL_14_STABLE) in steps, to make it easy to read. I intend to squash\n> the whole lot into a single commit before pushing. But while writing\n> this email it occurred to me that I need to add at least one more test,\n> to receive a WARNING while waiting for CloseComplete. 
AFAICT it should\n> work, but better make sure.\n> \n> I produced pipeline_idle.trace file by running the test in the fully\n\nBy the way, the perl script doesn't produce the trace file, since the list in\nthe $cmptrace line doesn't contain pipeline_idle..\n\n> fixed tree, then used it to verify that all tests fail in different ways\n> when run without the fixes. The first fix with PIPELINE_IDLE fixes some\n> of these failures, and the PGQUERY_CLOSE one fixes the remaining one.\n> Reading the trace file, it looks correct to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 04 Jul 2022 17:27:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "On 2022-Jul-04, Kyotaro Horiguchi wrote:\n\n> At Wed, 29 Jun 2022 14:09:17 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n\n> > However, another problem case, not fixed by PIPELINE_IDLE, occurs if you\n> > exit pipeline mode after PQsendQuery() and then immediately use\n> > PQexec(). The CloseComplete will be received at the wrong time, and a\n> > notice is emitted nevertheless.\n> \n> Mmm. My patch moves the point of failure of the scenario a bit but\n> still a little short. However, in my understanding, it seems that it is the\n> task of the PQpipelineSync()-PQgetResult() pair to consume the\n> CloseComplete. If I inserted PQpipelineSync() just after PQsendQuery()\n> and called PQgetResult() for PGRES_PIPELINE_SYNC before\n> PQexitPipelineMode(), the out-of-sync CloseComplete is not seen in the\n> scenario. But if it is right, I'd like to complain about the\n> obscure-but-stiff protocol of pipeline mode..\n\nYeah, if you introduce PQpipelineSync then I think it'll work okay, but\nmy point here was to make it work without requiring that; that's why I\nwrote the test to use PQsendFlushRequest instead.\n\nBTW I have a patch for the problem with uniqviol also (not fixed by v7). I'll\nsend an updated patch in a little while.\n\n> > I produced pipeline_idle.trace file by running the test in the fully\n> \n> By the way, the perl script doesn't produce the trace file, since the list in\n> the $cmptrace line doesn't contain pipeline_idle..\n\nOuch, of course, thanks for noticing.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 4 Jul 2022 10:49:33 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "On 2022-Jul-04, Alvaro Herrera wrote:\n\n> BTW I have a patch for the problem with uniqviol also (not fixed by v7). I'll\n> send an updated patch in a little while.\n\nHere it is. I ran \"libpq_pipeline uniqviol\" in a tight loop a few\nthousand times and didn't get any error. Before these fixes, it would\nfail in half a dozen iterations.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Mon, 4 Jul 2022 11:32:38 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "I have pushed this to all three branches. Thanks!\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"It takes less than 2 seconds to get to 78% complete; that's a good sign.\nA few seconds later it's at 90%, but it seems to have stuck there. Did\nsomebody make percentages logarithmic while I wasn't looking?\"\n http://smylers.hates-software.com/2005/09/08/1995c749.html\n\n\n",
"msg_date": "Tue, 5 Jul 2022 14:39:32 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
},
{
"msg_contents": "At Mon, 4 Jul 2022 10:49:33 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> > Mmm. My patch moves the point of failure of the scenario a bit but\n> > still a little short. However, in my understanding, it seems that it is the\n> > task of the PQpipelineSync()-PQgetResult() pair to consume the\n> > CloseComplete. If I inserted PQpipelineSync() just after PQsendQuery()\n> > and called PQgetResult() for PGRES_PIPELINE_SYNC before\n> > PQexitPipelineMode(), the out-of-sync CloseComplete is not seen in the\n> > scenario. But if it is right, I'd like to complain about the\n> > obscure-but-stiff protocol of pipeline mode..\n> \n> Yeah, if you introduce PQpipelineSync then I think it'll work okay, but\n> my point here was to make it work without requiring that; that's why I\n> wrote the test to use PQsendFlushRequest instead.\n\nA bit too late, but it is good to make state-transition simpler.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 06 Jul 2022 10:05:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using PQexecQuery in pipeline mode produces unexpected Close\n messages"
}
] |
[
{
"msg_contents": "Hello Devs,\n\nI am investigating backporting the fixes for CVE-2022-1552 to 9.6 and\n9.4 as part of Debian LTS and Extended LTS. I am aware that these\nreleases are no longer supported upstream, but I have made an attempt at\nadapting commits ef792f7856dea2576dcd9cab92b2b05fe955696b and\nf26d5702857a9c027f84850af48b0eea0f3aa15c from the REL_10_STABLE branch.\nI would appreciate a review of the attached patches and any comments on\nthings that may have been missed and/or adapted improperly.\n\nThe first thing I did was to adapt the full patches, with functional\nchanges and regression tests. Since amcheck was new to version 10, I\ndropped that part of the patch. Additionally, since partitioned tables\nwere new in 10 I dropped those parts of the tests. The absence of block\nrange indices in 9.4 means I also dropped that part of the change and\nassociated test as well.\n\nOnce everything built successfully, I built again with only the\nregression tests to confirm that the vulnerability was present and\ntriggered by the regression test [*].\n\nWhen building with only the adapted regression tests, the 9.6 build\nfailed with this in the test output:\n\n+ ERROR: sro_ifun(10) called by pbuilder\n+ CONTEXT: PL/pgSQL function sro_ifun(integer) line 4 at ASSERT\n\nThis seems to indicate that the vulnerability was encountered and that\nthe function was called as the invoking user rather than the owning\nuser. Naturally, there were further differences in the test output\nowing to the index creation failure.\n\nFor 9.4, the error looked like this:\n\n+ ERROR: called by pbuilder\n\nAs a result of ASSERT not being present in 9.4 I had to resort to an IF\nstatement with a RAISE. However, it appears to be the identical\nproblem.\n\nThere are 4 patches attached to this mail, one for each of the two\ncommits referenced above as adapted for 9.6 and 9.4. Please advise on\nwhether adjustments are needed or whether I can proceed with publishing\nupdated 9.6 and 9.4 packages for Debian with said patches.\n\nRegards,\n\n-Roberto\n\n[*] Side note: my approach revealed that the adapted regression tests\ntrigger the vulnerability in both 9.6 and 9.4. However, the SUSE\nsecurity information page for CVE-2022-1552 [0] lists 9.6 as \"not\naffected\". Presumably this is based on the language in the upstream\nadvisory \"Versions Affected: 10 - 14.\"\n\n[0] https://www.suse.com/security/cve/CVE-2022-1552.html\n\n-- \nRoberto C. Sánchez",
"msg_date": "Wed, 8 Jun 2022 12:04:04 -0400",
"msg_from": "Roberto =?iso-8859-1?Q?C=2E_S=E1nchez?= <roberto@debian.org>",
"msg_from_op": true,
"msg_subject": "Request for assistance to backport CVE-2022-1552 fixes to 9.6 and 9.4"
},
{
"msg_contents": "Roberto =?iso-8859-1?Q?C=2E_S=E1nchez?= <roberto@debian.org> writes:\n> I am investigating backporting the fixes for CVE-2022-1552 to 9.6 and\n> 9.4 as part of Debian LTS and Extended LTS. I am aware that these\n> releases are no longer supported upstream, but I have made an attempt at\n> adapting commits ef792f7856dea2576dcd9cab92b2b05fe955696b and\n> f26d5702857a9c027f84850af48b0eea0f3aa15c from the REL_10_STABLE branch.\n> I would appreciate a review of the attached patches and any comments on\n> things that may have been missed and/or adapted improperly.\n\nFWIW, I would not recommend being in a huge hurry to back-port those\nchanges, pending the outcome of this discussion:\n\nhttps://www.postgresql.org/message-id/flat/f8a4105f076544c180a87ef0c4822352%40stmuk.bayern.de\n\nWe're going to have to tweak that code somehow, and it's not yet\nentirely clear how.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Jun 2022 16:15:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Request for assistance to backport CVE-2022-1552 fixes to 9.6 and\n 9.4"
},
{
"msg_contents": "On Wed, Jun 08, 2022 at 04:15:47PM -0400, Tom Lane wrote:\n> Roberto =?iso-8859-1?Q?C=2E_S=E1nchez?= <roberto@debian.org> writes:\n> > I am investigating backporting the fixes for CVE-2022-1552 to 9.6 and\n> > 9.4 as part of Debian LTS and Extended LTS. I am aware that these\n> > releases are no longer supported upstream, but I have made an attempt at\n> > adapting commits ef792f7856dea2576dcd9cab92b2b05fe955696b and\n> > f26d5702857a9c027f84850af48b0eea0f3aa15c from the REL_10_STABLE branch.\n> > I would appreciate a review of the attached patches and any comments on\n> > things that may have been missed and/or adapted improperly.\n> \n> FWIW, I would not recommend being in a huge hurry to back-port those\n> changes, pending the outcome of this discussion:\n> \n> https://www.postgresql.org/message-id/flat/f8a4105f076544c180a87ef0c4822352%40stmuk.bayern.de\n> \nThanks for the pointer.\n\n> We're going to have to tweak that code somehow, and it's not yet\n> entirely clear how.\n> \nI will monitor the discussion to see what comes of it.\n\nRegards,\n\n-Roberto\n-- \nRoberto C. Sánchez\n\n\n",
"msg_date": "Wed, 8 Jun 2022 17:31:09 -0400",
"msg_from": "Roberto =?iso-8859-1?Q?C=2E_S=E1nchez?= <roberto@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Request for assistance to backport CVE-2022-1552 fixes to 9.6\n and 9.4"
},
{
"msg_contents": "On Wed, Jun 08, 2022 at 05:31:22PM -0400, Roberto C. Sánchez wrote:\n> On Wed, Jun 08, 2022 at 04:15:47PM -0400, Tom Lane wrote:\n> > Roberto =?iso-8859-1?Q?C=2E_S=E1nchez?= <roberto@debian.org> writes:\n> > > I am investigating backporting the fixes for CVE-2022-1552 to 9.6 and\n> > > 9.4 as part of Debian LTS and Extended LTS. I am aware that these\n> > > releases are no longer supported upstream, but I have made an attempt at\n> > > adapting commits ef792f7856dea2576dcd9cab92b2b05fe955696b and\n> > > f26d5702857a9c027f84850af48b0eea0f3aa15c from the REL_10_STABLE branch.\n> > > I would appreciate a review of the attached patches and any comments on\n> > > things that may have been missed and/or adapted improperly.\n> > \n> > FWIW, I would not recommend being in a huge hurry to back-port those\n> > changes, pending the outcome of this discussion:\n> > \n> > https://www.postgresql.org/message-id/flat/f8a4105f076544c180a87ef0c4822352%40stmuk.bayern.de\n> > \n> Thanks for the pointer.\n> \n> > We're going to have to tweak that code somehow, and it's not yet\n> > entirely clear how.\n> > \n> I will monitor the discussion to see what comes of it.\n> \nBased on the discussion in the other thread, I have made an attempt to\nbackport commit 88b39e61486a8925a3861d50c306a51eaa1af8d6 to 9.6 and 9.4.\nThe only significant change that I had to make was to add the full\nfunction signatures for the REVOKE/GRANT in the citext test.\n\nOne question that I had about the change as committed is whether a\nREVOKE is needed on s.citext_ne, like so:\n\nREVOKE ALL ON FUNCTION s.citext_ne FROM PUBLIC;\n\nOr (for pre-10):\n\nREVOKE ALL ON FUNCTION s.citext_ne(s.citext, s.citext) FROM PUBLIC;\n\nI ask because the comment immediately preceding the sequence of REVOKEs\nincludes the comment \"Revoke all conceivably-relevant ACLs within the\nextension.\" I am not especially knowledgeable about deep internals, but\nthat function seems like it would belong in the same group with the\nothers.\n\nIn any event, would someone be willing to review the attached patches\nfor correctness? I would like to shortly publish updates to 9.6 and 9.4\nin Debian and a review would be most appreciated.\n\nRegards,\n\n-Roberto\n\n-- \nRoberto C. Sánchez",
"msg_date": "Mon, 4 Jul 2022 18:06:51 -0400",
"msg_from": "Roberto =?iso-8859-1?Q?C=2E_S=E1nchez?= <roberto@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Request for assistance to backport CVE-2022-1552 fixes to 9.6\n and 9.4"
},
{
"msg_contents": "Hello pgsql-hackers,\n\nIs there anyone willing to review the patches that I prepared? I'd have\nsubstantially more confidence in the patches with a review from an\nupstream developer who is familiar with the code.\n\nRegards,\n\n-Roberto\n\nOn Mon, Jul 04, 2022 at 06:06:58PM -0400, Roberto C. Sánchez wrote:\n> On Wed, Jun 08, 2022 at 05:31:22PM -0400, Roberto C. Sánchez wrote:\n> > On Wed, Jun 08, 2022 at 04:15:47PM -0400, Tom Lane wrote:\n> > > Roberto =?iso-8859-1?Q?C=2E_S=E1nchez?= <roberto@debian.org> writes:\n> > > > I am investigating backporting the fixes for CVE-2022-1552 to 9.6 and\n> > > > 9.4 as part of Debian LTS and Extended LTS. I am aware that these\n> > > > releases are no longer supported upstream, but I have made an attempt at\n> > > > adapting commits ef792f7856dea2576dcd9cab92b2b05fe955696b and\n> > > > f26d5702857a9c027f84850af48b0eea0f3aa15c from the REL_10_STABLE branch.\n> > > > I would appreciate a review of the attached patches and any comments on\n> > > > things that may have been missed and/or adapted improperly.\n> > > \n> > > FWIW, I would not recommend being in a huge hurry to back-port those\n> > > changes, pending the outcome of this discussion:\n> > > \n> > > https://www.postgresql.org/message-id/flat/f8a4105f076544c180a87ef0c4822352%40stmuk.bayern.de\n> > > \n> > Thanks for the pointer.\n> > \n> > > We're going to have to tweak that code somehow, and it's not yet\n> > > entirely clear how.\n> > > \n> > I will monitor the discussion to see what comes of it.\n> > \n> Based on the discussion in the other thread, I have made an attempt to\n> backport commit 88b39e61486a8925a3861d50c306a51eaa1af8d6 to 9.6 and 9.4.\n> The only significant change that I had to make was to add the full\n> function signatures for the REVOKE/GRANT in the citext test.\n> \n> One question that I had about the change as committed is whether a\n> REVOKE is needed on s.citext_ne, like so:\n> \n> REVOKE ALL ON FUNCTION s.citext_ne FROM PUBLIC;\n> \n> Or (for pre-10):\n> \n> REVOKE ALL ON FUNCTION s.citext_ne(s.citext, s.citext) FROM PUBLIC;\n> \n> I ask because the comment immediately preceding the sequence of REVOKEs\n> includes the comment \"Revoke all conceivably-relevant ACLs within the\n> extension.\" I am not especially knowledgeable about deep internals, but\n> that function seems like it would belong in the same group with the\n> others.\n> \n> In any event, would someone be willing to review the attached patches\n> for correctness? I would like to shortly publish updates to 9.6 and 9.4\n> in Debian and a review would be most appreciated.\n> \n> Regards,\n> \n> -Roberto\n> \n> -- \n> Roberto C. Sánchez\n\n> From ef792f7856dea2576dcd9cab92b2b05fe955696b Mon Sep 17 00:00:00 2001\n> From: Noah Misch <noah@leadboat.com>\n> Date: Mon, 9 May 2022 08:35:08 -0700\n> Subject: [PATCH] Make relation-enumerating operations be security-restricted\n> operations.\n> \n> When a feature enumerates relations and runs functions associated with\n> all found relations, the feature's user shall not need to trust every\n> user having permission to create objects. BRIN-specific functionality\n> in autovacuum neglected to account for this, as did pg_amcheck and\n> CLUSTER. An attacker having permission to create non-temp objects in at\n> least one schema could execute arbitrary SQL functions under the\n> identity of the bootstrap superuser. CREATE INDEX (not a\n> relation-enumerating operation) and REINDEX protected themselves too\n> late. 
This change extends to the non-enumerating amcheck interface.\n> Back-patch to v10 (all supported versions).\n> \n> Sergey Shinderuk, reviewed (in earlier versions) by Alexander Lakhin.\n> Reported by Alexander Lakhin.\n> \n> Security: CVE-2022-1552\n> ---\n> src/backend/access/brin/brin.c | 30 ++++++++++++++++-\n> src/backend/catalog/index.c | 41 +++++++++++++++++------\n> src/backend/commands/cluster.c | 35 ++++++++++++++++----\n> src/backend/commands/indexcmds.c | 53 +++++++++++++++++++++++++++++--\n> src/backend/utils/init/miscinit.c | 24 ++++++++------\n> src/test/regress/expected/privileges.out | 42 ++++++++++++++++++++++++\n> src/test/regress/sql/privileges.sql | 36 +++++++++++++++++++++\n> 7 files changed, 231 insertions(+), 30 deletions(-)\n> \n> --- a/src/backend/access/brin/brin.c\n> +++ b/src/backend/access/brin/brin.c\n> @@ -28,6 +28,7 @@\n> #include \"pgstat.h\"\n> #include \"storage/bufmgr.h\"\n> #include \"storage/freespace.h\"\n> +#include \"utils/guc.h\"\n> #include \"utils/index_selfuncs.h\"\n> #include \"utils/memutils.h\"\n> #include \"utils/rel.h\"\n> @@ -786,6 +787,9 @@\n> \tOid\t\t\theapoid;\n> \tRelation\tindexRel;\n> \tRelation\theapRel;\n> +\tOid\t\t\tsave_userid;\n> +\tint\t\t\tsave_sec_context;\n> +\tint\t\t\tsave_nestlevel;\n> \tdouble\t\tnumSummarized = 0;\n> \n> \tif (RecoveryInProgress())\n> @@ -799,10 +803,28 @@\n> \t * passed indexoid isn't an index then IndexGetRelation() will fail.\n> \t * Rather than emitting a not-very-helpful error message, postpone\n> \t * complaining, expecting that the is-it-an-index test below will fail.\n> +\t *\n> +\t * unlike brin_summarize_range(), autovacuum never calls this. hence, we\n> +\t * don't switch userid.\n> \t */\n> \theapoid = IndexGetRelation(indexoid, true);\n> \tif (OidIsValid(heapoid))\n> +\t{\n> \t\theapRel = heap_open(heapoid, ShareUpdateExclusiveLock);\n> +\n> +\t\t/*\n> +\t\t * Autovacuum calls us. 
For its benefit, switch to the table owner's\n> +\t\t * userid, so that any index functions are run as that user. Also\n> +\t\t * lock down security-restricted operations and arrange to make GUC\n> +\t\t * variable changes local to this command. This is harmless, albeit\n> +\t\t * unnecessary, when called from SQL, because we fail shortly if the\n> +\t\t * user does not own the index.\n> +\t\t */\n> +\t\tGetUserIdAndSecContext(&save_userid, &save_sec_context);\n> +\t\tSetUserIdAndSecContext(heapRel->rd_rel->relowner,\n> +\t\t\t\t\t\t\t save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> +\t\tsave_nestlevel = NewGUCNestLevel();\n> +\t}\n> \telse\n> \t\theapRel = NULL;\n> \n> @@ -817,7 +839,7 @@\n> \t\t\t\t\t\tRelationGetRelationName(indexRel))));\n> \n> \t/* User must own the index (comparable to privileges needed for VACUUM) */\n> -\tif (!pg_class_ownercheck(indexoid, GetUserId()))\n> +\tif (heapRel != NULL && !pg_class_ownercheck(indexoid, save_userid))\n> \t\taclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS,\n> \t\t\t\t\t RelationGetRelationName(indexRel));\n> \n> @@ -835,6 +857,12 @@\n> \t/* OK, do it */\n> \tbrinsummarize(indexRel, heapRel, &numSummarized, NULL);\n> \n> +\t/* Roll back any GUC changes executed by index functions */\n> +\tAtEOXact_GUC(false, save_nestlevel);\n> +\n> +\t/* Restore userid and security context */\n> +\tSetUserIdAndSecContext(save_userid, save_sec_context);\n> +\n> \trelation_close(indexRel, ShareUpdateExclusiveLock);\n> \trelation_close(heapRel, ShareUpdateExclusiveLock);\n> \n> --- a/src/backend/catalog/index.c\n> +++ b/src/backend/catalog/index.c\n> @@ -2908,7 +2908,17 @@\n> \n> \t/* Open and lock the parent heap relation */\n> \theapRelation = heap_open(heapId, ShareUpdateExclusiveLock);\n> -\t/* And the target index relation */\n> +\n> +\t/*\n> +\t * Switch to the table owner's userid, so that any index functions are run\n> +\t * as that user. 
Also lock down security-restricted operations and\n> +\t * arrange to make GUC variable changes local to this command.\n> +\t */\n> +\tGetUserIdAndSecContext(&save_userid, &save_sec_context);\n> +\tSetUserIdAndSecContext(heapRelation->rd_rel->relowner,\n> +\t\t\t\t\t\t save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> +\tsave_nestlevel = NewGUCNestLevel();\n> +\n> \tindexRelation = index_open(indexId, RowExclusiveLock);\n> \n> \t/*\n> @@ -2922,16 +2932,6 @@\n> \tindexInfo->ii_Concurrent = true;\n> \n> \t/*\n> -\t * Switch to the table owner's userid, so that any index functions are run\n> -\t * as that user. Also lock down security-restricted operations and\n> -\t * arrange to make GUC variable changes local to this command.\n> -\t */\n> -\tGetUserIdAndSecContext(&save_userid, &save_sec_context);\n> -\tSetUserIdAndSecContext(heapRelation->rd_rel->relowner,\n> -\t\t\t\t\t\t save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> -\tsave_nestlevel = NewGUCNestLevel();\n> -\n> -\t/*\n> \t * Scan the index and gather up all the TIDs into a tuplesort object.\n> \t */\n> \tivinfo.index = indexRelation;\n> @@ -3395,6 +3395,9 @@\n> \tRelation\tiRel,\n> \t\t\t\theapRelation;\n> \tOid\t\t\theapId;\n> +\tOid\t\t\tsave_userid;\n> +\tint\t\t\tsave_sec_context;\n> +\tint\t\t\tsave_nestlevel;\n> \tIndexInfo *indexInfo;\n> \tvolatile bool skipped_constraint = false;\n> \tPGRUsage\tru0;\n> @@ -3409,6 +3412,16 @@\n> \theapRelation = heap_open(heapId, ShareLock);\n> \n> \t/*\n> +\t * Switch to the table owner's userid, so that any index functions are run\n> +\t * as that user. 
Also lock down security-restricted operations and\n> +\t * arrange to make GUC variable changes local to this command.\n> +\t */\n> +\tGetUserIdAndSecContext(&save_userid, &save_sec_context);\n> +\tSetUserIdAndSecContext(heapRelation->rd_rel->relowner,\n> +\t\t\t\t\t\t save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> +\tsave_nestlevel = NewGUCNestLevel();\n> +\n> +\t/*\n> \t * Open the target index relation and get an exclusive lock on it, to\n> \t * ensure that no one else is touching this particular index.\n> \t */\n> @@ -3550,6 +3563,12 @@\n> \t\t\t\t errdetail_internal(\"%s\",\n> \t\t\t\t\t\t pg_rusage_show(&ru0))));\n> \n> +\t/* Roll back any GUC changes executed by index functions */\n> +\tAtEOXact_GUC(false, save_nestlevel);\n> +\n> +\t/* Restore userid and security context */\n> +\tSetUserIdAndSecContext(save_userid, save_sec_context);\n> +\n> \t/* Close rels, but keep locks */\n> \tindex_close(iRel, NoLock);\n> \theap_close(heapRelation, NoLock);\n> --- a/src/backend/commands/cluster.c\n> +++ b/src/backend/commands/cluster.c\n> @@ -44,6 +44,7 @@\n> #include \"storage/smgr.h\"\n> #include \"utils/acl.h\"\n> #include \"utils/fmgroids.h\"\n> +#include \"utils/guc.h\"\n> #include \"utils/inval.h\"\n> #include \"utils/lsyscache.h\"\n> #include \"utils/memutils.h\"\n> @@ -260,6 +261,9 @@\n> cluster_rel(Oid tableOid, Oid indexOid, bool recheck, bool verbose)\n> {\n> \tRelation\tOldHeap;\n> +\tOid\t\t\tsave_userid;\n> +\tint\t\t\tsave_sec_context;\n> +\tint\t\t\tsave_nestlevel;\n> \n> \t/* Check for user-requested abort. */\n> \tCHECK_FOR_INTERRUPTS();\n> @@ -277,6 +281,16 @@\n> \t\treturn;\n> \n> \t/*\n> +\t * Switch to the table owner's userid, so that any index functions are run\n> +\t * as that user. 
Also lock down security-restricted operations and\n> +\t * arrange to make GUC variable changes local to this command.\n> +\t */\n> +\tGetUserIdAndSecContext(&save_userid, &save_sec_context);\n> +\tSetUserIdAndSecContext(OldHeap->rd_rel->relowner,\n> +\t\t\t\t\t\t save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> +\tsave_nestlevel = NewGUCNestLevel();\n> +\n> +\t/*\n> \t * Since we may open a new transaction for each relation, we have to check\n> \t * that the relation still is what we think it is.\n> \t *\n> @@ -290,10 +304,10 @@\n> \t\tForm_pg_index indexForm;\n> \n> \t\t/* Check that the user still owns the relation */\n> -\t\tif (!pg_class_ownercheck(tableOid, GetUserId()))\n> +\t\tif (!pg_class_ownercheck(tableOid, save_userid))\n> \t\t{\n> \t\t\trelation_close(OldHeap, AccessExclusiveLock);\n> -\t\t\treturn;\n> +\t\t\tgoto out;\n> \t\t}\n> \n> \t\t/*\n> @@ -307,7 +321,7 @@\n> \t\tif (RELATION_IS_OTHER_TEMP(OldHeap))\n> \t\t{\n> \t\t\trelation_close(OldHeap, AccessExclusiveLock);\n> -\t\t\treturn;\n> +\t\t\tgoto out;\n> \t\t}\n> \n> \t\tif (OidIsValid(indexOid))\n> @@ -318,7 +332,7 @@\n> \t\t\tif (!SearchSysCacheExists1(RELOID, ObjectIdGetDatum(indexOid)))\n> \t\t\t{\n> \t\t\t\trelation_close(OldHeap, AccessExclusiveLock);\n> -\t\t\t\treturn;\n> +\t\t\t\tgoto out;\n> \t\t\t}\n> \n> \t\t\t/*\n> @@ -328,14 +342,14 @@\n> \t\t\tif (!HeapTupleIsValid(tuple))\t\t/* probably can't happen */\n> \t\t\t{\n> \t\t\t\trelation_close(OldHeap, AccessExclusiveLock);\n> -\t\t\t\treturn;\n> +\t\t\t\tgoto out;\n> \t\t\t}\n> \t\t\tindexForm = (Form_pg_index) GETSTRUCT(tuple);\n> \t\t\tif (!indexForm->indisclustered)\n> \t\t\t{\n> \t\t\t\tReleaseSysCache(tuple);\n> \t\t\t\trelation_close(OldHeap, AccessExclusiveLock);\n> -\t\t\t\treturn;\n> +\t\t\t\tgoto out;\n> \t\t\t}\n> \t\t\tReleaseSysCache(tuple);\n> \t\t}\n> @@ -389,7 +403,7 @@\n> \t\t!RelationIsPopulated(OldHeap))\n> \t{\n> \t\trelation_close(OldHeap, AccessExclusiveLock);\n> -\t\treturn;\n> +\t\tgoto out;\n> \t}\n> \n> 
\t/*\n> @@ -404,6 +418,13 @@\n> \trebuild_relation(OldHeap, indexOid, verbose);\n> \n> \t/* NB: rebuild_relation does heap_close() on OldHeap */\n> +\n> +out:\n> +\t/* Roll back any GUC changes executed by index functions */\n> +\tAtEOXact_GUC(false, save_nestlevel);\n> +\n> +\t/* Restore userid and security context */\n> +\tSetUserIdAndSecContext(save_userid, save_sec_context);\n> }\n> \n> /*\n> --- a/src/backend/commands/indexcmds.c\n> +++ b/src/backend/commands/indexcmds.c\n> @@ -49,6 +49,7 @@\n> #include \"utils/acl.h\"\n> #include \"utils/builtins.h\"\n> #include \"utils/fmgroids.h\"\n> +#include \"utils/guc.h\"\n> #include \"utils/inval.h\"\n> #include \"utils/lsyscache.h\"\n> #include \"utils/memutils.h\"\n> @@ -339,8 +340,13 @@\n> \tLOCKTAG\t\theaplocktag;\n> \tLOCKMODE\tlockmode;\n> \tSnapshot\tsnapshot;\n> +\tOid\t\t\troot_save_userid;\n> +\tint\t\t\troot_save_sec_context;\n> +\tint\t\t\troot_save_nestlevel;\n> \tint\t\t\ti;\n> \n> +\troot_save_nestlevel = NewGUCNestLevel();\n> +\n> \t/*\n> \t * Force non-concurrent build on temporary relations, even if CONCURRENTLY\n> \t * was requested. Other backends can't access a temporary relation, so\n> @@ -381,6 +387,15 @@\n> \tlockmode = concurrent ? ShareUpdateExclusiveLock : ShareLock;\n> \trel = heap_open(relationId, lockmode);\n> \n> +\t/*\n> +\t * Switch to the table owner's userid, so that any index functions are run\n> +\t * as that user. Also lock down security-restricted operations. 
We\n> +\t * already arranged to make GUC variable changes local to this command.\n> +\t */\n> +\tGetUserIdAndSecContext(&root_save_userid, &root_save_sec_context);\n> +\tSetUserIdAndSecContext(rel->rd_rel->relowner,\n> +\t\t\t\t\t\t root_save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> +\n> \trelationId = RelationGetRelid(rel);\n> \tnamespaceId = RelationGetNamespace(rel);\n> \n> @@ -422,7 +437,7 @@\n> \t{\n> \t\tAclResult\taclresult;\n> \n> -\t\taclresult = pg_namespace_aclcheck(namespaceId, GetUserId(),\n> +\t\taclresult = pg_namespace_aclcheck(namespaceId, root_save_userid,\n> \t\t\t\t\t\t\t\t\t\t ACL_CREATE);\n> \t\tif (aclresult != ACLCHECK_OK)\n> \t\t\taclcheck_error(aclresult, ACL_KIND_NAMESPACE,\n> @@ -449,7 +464,7 @@\n> \t{\n> \t\tAclResult\taclresult;\n> \n> -\t\taclresult = pg_tablespace_aclcheck(tablespaceId, GetUserId(),\n> +\t\taclresult = pg_tablespace_aclcheck(tablespaceId, root_save_userid,\n> \t\t\t\t\t\t\t\t\t\t ACL_CREATE);\n> \t\tif (aclresult != ACLCHECK_OK)\n> \t\t\taclcheck_error(aclresult, ACL_KIND_TABLESPACE,\n> @@ -679,15 +694,33 @@\n> \n> \tif (!OidIsValid(indexRelationId))\n> \t{\n> +\t\t/* Roll back any GUC changes executed by index functions. */\n> +\t\tAtEOXact_GUC(false, root_save_nestlevel);\n> +\n> +\t\t/* Restore userid and security context */\n> +\t\tSetUserIdAndSecContext(root_save_userid, root_save_sec_context);\n> +\n> \t\theap_close(rel, NoLock);\n> \t\treturn address;\n> \t}\n> \n> +\t/*\n> +\t * Roll back any GUC changes executed by index functions, and keep\n> +\t * subsequent changes local to this command. It's barely possible that\n> +\t * some index function changed a behavior-affecting GUC, e.g. 
xmloption,\n> +\t * that affects subsequent steps.\n> +\t */\n> +\tAtEOXact_GUC(false, root_save_nestlevel);\n> +\troot_save_nestlevel = NewGUCNestLevel();\n> +\n> \t/* Add any requested comment */\n> \tif (stmt->idxcomment != NULL)\n> \t\tCreateComments(indexRelationId, RelationRelationId, 0,\n> \t\t\t\t\t stmt->idxcomment);\n> \n> +\tAtEOXact_GUC(false, root_save_nestlevel);\n> +\tSetUserIdAndSecContext(root_save_userid, root_save_sec_context);\n> +\n> \tif (!concurrent)\n> \t{\n> \t\t/* Close the heap and we're done, in the non-concurrent case */\n> @@ -766,6 +799,16 @@\n> \t/* Open and lock the parent heap relation */\n> \trel = heap_openrv(stmt->relation, ShareUpdateExclusiveLock);\n> \n> +\t/*\n> +\t * Switch to the table owner's userid, so that any index functions are run\n> +\t * as that user. Also lock down security-restricted operations and\n> +\t * arrange to make GUC variable changes local to this command.\n> +\t */\n> +\tGetUserIdAndSecContext(&root_save_userid, &root_save_sec_context);\n> +\tSetUserIdAndSecContext(rel->rd_rel->relowner,\n> +\t\t\t\t\t\t root_save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> +\troot_save_nestlevel = NewGUCNestLevel();\n> +\n> \t/* And the target index relation */\n> \tindexRelation = index_open(indexRelationId, RowExclusiveLock);\n> \n> @@ -781,6 +824,12 @@\n> \t/* Now build the index */\n> \tindex_build(rel, indexRelation, indexInfo, stmt->primary, false);\n> \n> +\t/* Roll back any GUC changes executed by index functions */\n> +\tAtEOXact_GUC(false, root_save_nestlevel);\n> +\n> +\t/* Restore userid and security context */\n> +\tSetUserIdAndSecContext(root_save_userid, root_save_sec_context);\n> +\n> \t/* Close both the relations, but keep the locks */\n> \theap_close(rel, NoLock);\n> \tindex_close(indexRelation, NoLock);\n> --- a/src/backend/utils/init/miscinit.c\n> +++ b/src/backend/utils/init/miscinit.c\n> @@ -365,15 +365,21 @@\n> * with guc.c's internal state, so SET ROLE has to be disallowed.\n> *\n> * 
SECURITY_RESTRICTED_OPERATION indicates that we are inside an operation\n> - * that does not wish to trust called user-defined functions at all. This\n> - * bit prevents not only SET ROLE, but various other changes of session state\n> - * that normally is unprotected but might possibly be used to subvert the\n> - * calling session later. An example is replacing an existing prepared\n> - * statement with new code, which will then be executed with the outer\n> - * session's permissions when the prepared statement is next used. Since\n> - * these restrictions are fairly draconian, we apply them only in contexts\n> - * where the called functions are really supposed to be side-effect-free\n> - * anyway, such as VACUUM/ANALYZE/REINDEX.\n> + * that does not wish to trust called user-defined functions at all. The\n> + * policy is to use this before operations, e.g. autovacuum and REINDEX, that\n> + * enumerate relations of a database or schema and run functions associated\n> + * with each found relation. The relation owner is the new user ID. Set this\n> + * as soon as possible after locking the relation. Restore the old user ID as\n> + * late as possible before closing the relation; restoring it shortly after\n> + * close is also tolerable. If a command has both relation-enumerating and\n> + * non-enumerating modes, e.g. ANALYZE, both modes set this bit. This bit\n> + * prevents not only SET ROLE, but various other changes of session state that\n> + * normally is unprotected but might possibly be used to subvert the calling\n> + * session later. An example is replacing an existing prepared statement with\n> + * new code, which will then be executed with the outer session's permissions\n> + * when the prepared statement is next used. 
These restrictions are fairly\n> + * draconian, but the functions called in relation-enumerating operations are\n> + * really supposed to be side-effect-free anyway.\n> *\n> * SECURITY_NOFORCE_RLS indicates that we are inside an operation which should\n> * ignore the FORCE ROW LEVEL SECURITY per-table indication. This is used to\n> --- a/src/test/regress/expected/privileges.out\n> +++ b/src/test/regress/expected/privileges.out\n> @@ -1244,6 +1244,48 @@\n> -- security-restricted operations\n> \\c -\n> CREATE ROLE regress_sro_user;\n> +-- Check that index expressions and predicates are run as the table's owner\n> +-- A dummy index function checking current_user\n> +CREATE FUNCTION sro_ifun(int) RETURNS int AS $$\n> +BEGIN\n> +\t-- Below we set the table's owner to regress_sro_user\n> +\tASSERT current_user = 'regress_sro_user',\n> +\t\tformat('sro_ifun(%s) called by %s', $1, current_user);\n> +\tRETURN $1;\n> +END;\n> +$$ LANGUAGE plpgsql IMMUTABLE;\n> +-- Create a table owned by regress_sro_user\n> +CREATE TABLE sro_tab (a int);\n> +ALTER TABLE sro_tab OWNER TO regress_sro_user;\n> +INSERT INTO sro_tab VALUES (1), (2), (3);\n> +-- Create an expression index with a predicate\n> +CREATE INDEX sro_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)))\n> +\tWHERE sro_ifun(a + 10) > sro_ifun(10);\n> +DROP INDEX sro_idx;\n> +-- Do the same concurrently\n> +CREATE INDEX CONCURRENTLY sro_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)))\n> +\tWHERE sro_ifun(a + 10) > sro_ifun(10);\n> +-- REINDEX\n> +REINDEX TABLE sro_tab;\n> +REINDEX INDEX sro_idx;\n> +REINDEX TABLE CONCURRENTLY sro_tab; -- v12+ feature\n> +ERROR: syntax error at or near \"CONCURRENTLY\"\n> +LINE 1: REINDEX TABLE CONCURRENTLY sro_tab;\n> + ^\n> +DROP INDEX sro_idx;\n> +-- CLUSTER\n> +CREATE INDEX sro_cluster_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)));\n> +CLUSTER sro_tab USING sro_cluster_idx;\n> +DROP INDEX sro_cluster_idx;\n> +-- BRIN index\n> +CREATE INDEX sro_brin ON sro_tab USING brin ((sro_ifun(a) + 
sro_ifun(0)));\n> +SELECT brin_summarize_new_values('sro_brin');\n> + brin_summarize_new_values \n> +---------------------------\n> + 0\n> +(1 row)\n> +\n> +DROP TABLE sro_tab;\n> SET SESSION AUTHORIZATION regress_sro_user;\n> CREATE FUNCTION unwanted_grant() RETURNS void LANGUAGE sql AS\n> \t'GRANT regress_group2 TO regress_sro_user';\n> --- a/src/test/regress/sql/privileges.sql\n> +++ b/src/test/regress/sql/privileges.sql\n> @@ -762,6 +762,42 @@\n> \\c -\n> CREATE ROLE regress_sro_user;\n> \n> +-- Check that index expressions and predicates are run as the table's owner\n> +\n> +-- A dummy index function checking current_user\n> +CREATE FUNCTION sro_ifun(int) RETURNS int AS $$\n> +BEGIN\n> +\t-- Below we set the table's owner to regress_sro_user\n> +\tASSERT current_user = 'regress_sro_user',\n> +\t\tformat('sro_ifun(%s) called by %s', $1, current_user);\n> +\tRETURN $1;\n> +END;\n> +$$ LANGUAGE plpgsql IMMUTABLE;\n> +-- Create a table owned by regress_sro_user\n> +CREATE TABLE sro_tab (a int);\n> +ALTER TABLE sro_tab OWNER TO regress_sro_user;\n> +INSERT INTO sro_tab VALUES (1), (2), (3);\n> +-- Create an expression index with a predicate\n> +CREATE INDEX sro_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)))\n> +\tWHERE sro_ifun(a + 10) > sro_ifun(10);\n> +DROP INDEX sro_idx;\n> +-- Do the same concurrently\n> +CREATE INDEX CONCURRENTLY sro_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)))\n> +\tWHERE sro_ifun(a + 10) > sro_ifun(10);\n> +-- REINDEX\n> +REINDEX TABLE sro_tab;\n> +REINDEX INDEX sro_idx;\n> +REINDEX TABLE CONCURRENTLY sro_tab; -- v12+ feature\n> +DROP INDEX sro_idx;\n> +-- CLUSTER\n> +CREATE INDEX sro_cluster_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)));\n> +CLUSTER sro_tab USING sro_cluster_idx;\n> +DROP INDEX sro_cluster_idx;\n> +-- BRIN index\n> +CREATE INDEX sro_brin ON sro_tab USING brin ((sro_ifun(a) + sro_ifun(0)));\n> +SELECT brin_summarize_new_values('sro_brin');\n> +DROP TABLE sro_tab;\n> +\n> SET SESSION AUTHORIZATION regress_sro_user;\n> CREATE 
FUNCTION unwanted_grant() RETURNS void LANGUAGE sql AS\n> \t'GRANT regress_group2 TO regress_sro_user';\n\n> From f26d5702857a9c027f84850af48b0eea0f3aa15c Mon Sep 17 00:00:00 2001\n> From: Noah Misch <noah@leadboat.com>\n> Date: Mon, 9 May 2022 08:35:08 -0700\n> Subject: [PATCH] In REFRESH MATERIALIZED VIEW, set user ID before running user\n> code.\n> \n> It intended to, but did not, achieve this. Adopt the new standard of\n> setting user ID just after locking the relation. Back-patch to v10 (all\n> supported versions).\n> \n> Reviewed by Simon Riggs. Reported by Alvaro Herrera.\n> \n> Security: CVE-2022-1552\n> ---\n> src/backend/commands/matview.c | 30 +++++++++++-------------------\n> src/test/regress/expected/privileges.out | 15 +++++++++++++++\n> src/test/regress/sql/privileges.sql | 16 ++++++++++++++++\n> 3 files changed, 42 insertions(+), 19 deletions(-)\n> \n> --- a/src/backend/commands/matview.c\n> +++ b/src/backend/commands/matview.c\n> @@ -164,6 +164,17 @@\n> \t\t\t\t\t\t\t\t\t\t lockmode, false, false,\n> \t\t\t\t\t\t\t\t\t\t RangeVarCallbackOwnsTable, NULL);\n> \tmatviewRel = heap_open(matviewOid, NoLock);\n> +\trelowner = matviewRel->rd_rel->relowner;\n> +\n> +\t/*\n> +\t * Switch to the owner's userid, so that any functions are run as that\n> +\t * user. Also lock down security-restricted operations and arrange to\n> +\t * make GUC variable changes local to this command.\n> +\t */\n> +\tGetUserIdAndSecContext(&save_userid, &save_sec_context);\n> +\tSetUserIdAndSecContext(relowner,\n> +\t\t\t\t\t\t save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> +\tsave_nestlevel = NewGUCNestLevel();\n> \n> \t/* Make sure it is a materialized view. */\n> \tif (matviewRel->rd_rel->relkind != RELKIND_MATVIEW)\n> @@ -269,19 +280,6 @@\n> \t */\n> \tSetMatViewPopulatedState(matviewRel, !stmt->skipData);\n> \n> -\trelowner = matviewRel->rd_rel->relowner;\n> -\n> -\t/*\n> -\t * Switch to the owner's userid, so that any functions are run as that\n> -\t * user. 
Also arrange to make GUC variable changes local to this command.\n> -\t * Don't lock it down too tight to create a temporary table just yet. We\n> -\t * will switch modes when we are about to execute user code.\n> -\t */\n> -\tGetUserIdAndSecContext(&save_userid, &save_sec_context);\n> -\tSetUserIdAndSecContext(relowner,\n> -\t\t\t\t\t\t save_sec_context | SECURITY_LOCAL_USERID_CHANGE);\n> -\tsave_nestlevel = NewGUCNestLevel();\n> -\n> \t/* Concurrent refresh builds new data in temp tablespace, and does diff. */\n> \tif (concurrent)\n> \t{\n> @@ -304,12 +302,6 @@\n> \tLockRelationOid(OIDNewHeap, AccessExclusiveLock);\n> \tdest = CreateTransientRelDestReceiver(OIDNewHeap);\n> \n> -\t/*\n> -\t * Now lock down security-restricted operations.\n> -\t */\n> -\tSetUserIdAndSecContext(relowner,\n> -\t\t\t\t\t\t save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> -\n> \t/* Generate the data, if wanted. */\n> \tif (!stmt->skipData)\n> \t\trefresh_matview_datafill(dest, dataQuery, queryString);\n> --- a/src/test/regress/expected/privileges.out\n> +++ b/src/test/regress/expected/privileges.out\n> @@ -1323,6 +1323,21 @@\n> SQL statement \"SELECT unwanted_grant()\"\n> PL/pgSQL function sro_trojan() line 1 at PERFORM\n> SQL function \"mv_action\" statement 1\n> +-- REFRESH MATERIALIZED VIEW CONCURRENTLY use of eval_const_expressions()\n> +SET SESSION AUTHORIZATION regress_sro_user;\n> +CREATE FUNCTION unwanted_grant_nofail(int) RETURNS int\n> +\tIMMUTABLE LANGUAGE plpgsql AS $$\n> +BEGIN\n> +\tPERFORM unwanted_grant();\n> +\tRAISE WARNING 'owned';\n> +\tRETURN 1;\n> +EXCEPTION WHEN OTHERS THEN\n> +\tRETURN 2;\n> +END$$;\n> +CREATE MATERIALIZED VIEW sro_index_mv AS SELECT 1 AS c;\n> +CREATE UNIQUE INDEX ON sro_index_mv (c) WHERE unwanted_grant_nofail(1) > 0;\n> +\\c -\n> +REFRESH MATERIALIZED VIEW sro_index_mv;\n> DROP OWNED BY regress_sro_user;\n> DROP ROLE regress_sro_user;\n> -- Admin options\n> --- a/src/test/regress/sql/privileges.sql\n> +++ 
b/src/test/regress/sql/privileges.sql\n> @@ -824,6 +824,22 @@\n> REFRESH MATERIALIZED VIEW sro_mv;\n> BEGIN; SET CONSTRAINTS ALL IMMEDIATE; REFRESH MATERIALIZED VIEW sro_mv; COMMIT;\n> \n> +-- REFRESH MATERIALIZED VIEW CONCURRENTLY use of eval_const_expressions()\n> +SET SESSION AUTHORIZATION regress_sro_user;\n> +CREATE FUNCTION unwanted_grant_nofail(int) RETURNS int\n> +\tIMMUTABLE LANGUAGE plpgsql AS $$\n> +BEGIN\n> +\tPERFORM unwanted_grant();\n> +\tRAISE WARNING 'owned';\n> +\tRETURN 1;\n> +EXCEPTION WHEN OTHERS THEN\n> +\tRETURN 2;\n> +END$$;\n> +CREATE MATERIALIZED VIEW sro_index_mv AS SELECT 1 AS c;\n> +CREATE UNIQUE INDEX ON sro_index_mv (c) WHERE unwanted_grant_nofail(1) > 0;\n> +\\c -\n> +REFRESH MATERIALIZED VIEW sro_index_mv;\n> +\n> DROP OWNED BY regress_sro_user;\n> DROP ROLE regress_sro_user;\n> \n\n> From 88b39e61486a8925a3861d50c306a51eaa1af8d6 Mon Sep 17 00:00:00 2001\n> From: Noah Misch <noah@leadboat.com>\n> Date: Sat, 25 Jun 2022 09:07:41 -0700\n> Subject: [PATCH] CREATE INDEX: use the original userid for more ACL checks.\n> \n> Commit a117cebd638dd02e5c2e791c25e43745f233111b used the original userid\n> for ACL checks located directly in DefineIndex(), but it still adopted\n> the table owner userid for more ACL checks than intended. 
That broke\n> dump/reload of indexes that refer to an operator class, collation, or\n> exclusion operator in a schema other than \"public\" or \"pg_catalog\".\n> Back-patch to v10 (all supported versions), like the earlier commit.\n> \n> Nathan Bossart and Noah Misch\n> \n> Discussion: https://postgr.es/m/f8a4105f076544c180a87ef0c4822352@stmuk.bayern.de\n> ---\n> contrib/citext/Makefile | 2 \n> contrib/citext/expected/create_index_acl.out | 81 +++++++++++++++++++++++\n> contrib/citext/sql/create_index_acl.sql | 82 ++++++++++++++++++++++++\n> src/backend/commands/indexcmds.c | 92 +++++++++++++++++++++++----\n> 4 files changed, 244 insertions(+), 13 deletions(-)\n> create mode 100644 contrib/citext/expected/create_index_acl.out\n> create mode 100644 contrib/citext/sql/create_index_acl.sql\n> \n> --- a/contrib/citext/Makefile\n> +++ b/contrib/citext/Makefile\n> @@ -7,7 +7,7 @@\n> \tcitext--1.0--1.1.sql citext--unpackaged--1.0.sql\n> PGFILEDESC = \"citext - case-insensitive character string data type\"\n> \n> -REGRESS = citext\n> +REGRESS = create_index_acl citext\n> \n> ifdef USE_PGXS\n> PG_CONFIG = pg_config\n> --- /dev/null\n> +++ b/contrib/citext/expected/create_index_acl.out\n> @@ -0,0 +1,81 @@\n> +-- Each DefineIndex() ACL check uses either the original userid or the table\n> +-- owner userid; see its header comment. Here, confirm that DefineIndex()\n> +-- uses its original userid where necessary. The test works by creating\n> +-- indexes that refer to as many sorts of objects as possible, with the table\n> +-- owner having as few applicable privileges as possible. 
(The privileges.sql\n> +-- regress_sro_user tests look for the opposite defect; they confirm that\n> +-- DefineIndex() uses the table owner userid where necessary.)\n> +-- Don't override tablespaces; this version lacks allow_in_place_tablespaces.\n> +BEGIN;\n> +CREATE ROLE regress_minimal;\n> +CREATE SCHEMA s;\n> +CREATE EXTENSION citext SCHEMA s;\n> +-- Revoke all conceivably-relevant ACLs within the extension. The system\n> +-- doesn't check all these ACLs, but this will provide some coverage if that\n> +-- ever changes.\n> +REVOKE ALL ON TYPE s.citext FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_lt(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_le(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_eq(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_ge(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_gt(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_cmp(s.citext, s.citext) FROM PUBLIC;\n> +-- Functions sufficient for making an index column that has the side effect of\n> +-- changing search_path at expression planning time.\n> +CREATE FUNCTION public.setter() RETURNS bool VOLATILE\n> + LANGUAGE SQL AS $$SET search_path = s; SELECT true$$;\n> +CREATE FUNCTION s.const() RETURNS bool IMMUTABLE\n> + LANGUAGE SQL AS $$SELECT public.setter()$$;\n> +CREATE FUNCTION s.index_this_expr(s.citext, bool) RETURNS s.citext IMMUTABLE\n> + LANGUAGE SQL AS $$SELECT $1$$;\n> +REVOKE ALL ON FUNCTION public.setter() FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.const() FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.index_this_expr(s.citext, bool) FROM PUBLIC;\n> +-- Even for an empty table, expression planning calls s.const & public.setter.\n> +GRANT EXECUTE ON FUNCTION public.setter() TO regress_minimal;\n> +GRANT EXECUTE ON FUNCTION s.const() TO regress_minimal;\n> +-- Function for index predicate.\n> +CREATE FUNCTION s.index_row_if(s.citext) RETURNS bool IMMUTABLE\n> + LANGUAGE SQL AS 
$$SELECT $1 IS NOT NULL$$;\n> +REVOKE ALL ON FUNCTION s.index_row_if(s.citext) FROM PUBLIC;\n> +-- Even for an empty table, CREATE INDEX checks ii_Predicate permissions.\n> +GRANT EXECUTE ON FUNCTION s.index_row_if(s.citext) TO regress_minimal;\n> +-- Non-extension, non-function objects.\n> +CREATE COLLATION s.coll (LOCALE=\"C\");\n> +CREATE TABLE s.x (y s.citext);\n> +ALTER TABLE s.x OWNER TO regress_minimal;\n> +-- Empty-table DefineIndex()\n> +CREATE UNIQUE INDEX u0rows ON s.x USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE s.coll s.citext_ops)\n> + WHERE s.index_row_if(y);\n> +ALTER TABLE s.x ADD CONSTRAINT e0rows EXCLUDE USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE s.coll WITH s.=)\n> + WHERE (s.index_row_if(y));\n> +-- Make the table nonempty.\n> +INSERT INTO s.x VALUES ('foo'), ('bar');\n> +-- If the INSERT runs the planner on index expressions, a search_path change\n> +-- survives. As of 2022-06, the INSERT reuses a cached plan. It does so even\n> +-- under debug_discard_caches, since each index is new-in-transaction. If\n> +-- future work changes a cache lifecycle, this RESET may become necessary.\n> +RESET search_path;\n> +-- For a nonempty table, owner needs permissions throughout ii_Expressions.\n> +GRANT EXECUTE ON FUNCTION s.index_this_expr(s.citext, bool) TO regress_minimal;\n> +CREATE UNIQUE INDEX u2rows ON s.x USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE s.coll s.citext_ops)\n> + WHERE s.index_row_if(y);\n> +ALTER TABLE s.x ADD CONSTRAINT e2rows EXCLUDE USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE s.coll WITH s.=)\n> + WHERE (s.index_row_if(y));\n> +-- Shall not find s.coll via search_path, despite the s.const->public.setter\n> +-- call having set search_path=s during expression planning. 
Suppress the\n> +-- message itself, which depends on the database encoding.\n> +\\set VERBOSITY terse\n> +DO $$\n> +BEGIN\n> +ALTER TABLE s.x ADD CONSTRAINT underqualified EXCLUDE USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE coll WITH s.=)\n> + WHERE (s.index_row_if(y));\n> +EXCEPTION WHEN OTHERS THEN RAISE EXCEPTION '%', sqlstate; END$$;\n> +ERROR: 42704\n> +\\set VERBOSITY default\n> +ROLLBACK;\n> --- /dev/null\n> +++ b/contrib/citext/sql/create_index_acl.sql\n> @@ -0,0 +1,82 @@\n> +-- Each DefineIndex() ACL check uses either the original userid or the table\n> +-- owner userid; see its header comment. Here, confirm that DefineIndex()\n> +-- uses its original userid where necessary. The test works by creating\n> +-- indexes that refer to as many sorts of objects as possible, with the table\n> +-- owner having as few applicable privileges as possible. (The privileges.sql\n> +-- regress_sro_user tests look for the opposite defect; they confirm that\n> +-- DefineIndex() uses the table owner userid where necessary.)\n> +\n> +-- Don't override tablespaces; this version lacks allow_in_place_tablespaces.\n> +\n> +BEGIN;\n> +CREATE ROLE regress_minimal;\n> +CREATE SCHEMA s;\n> +CREATE EXTENSION citext SCHEMA s;\n> +-- Revoke all conceivably-relevant ACLs within the extension. 
The system\n> +-- doesn't check all these ACLs, but this will provide some coverage if that\n> +-- ever changes.\n> +REVOKE ALL ON TYPE s.citext FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_lt(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_le(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_eq(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_ge(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_gt(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_cmp(s.citext, s.citext) FROM PUBLIC;\n> +-- Functions sufficient for making an index column that has the side effect of\n> +-- changing search_path at expression planning time.\n> +CREATE FUNCTION public.setter() RETURNS bool VOLATILE\n> + LANGUAGE SQL AS $$SET search_path = s; SELECT true$$;\n> +CREATE FUNCTION s.const() RETURNS bool IMMUTABLE\n> + LANGUAGE SQL AS $$SELECT public.setter()$$;\n> +CREATE FUNCTION s.index_this_expr(s.citext, bool) RETURNS s.citext IMMUTABLE\n> + LANGUAGE SQL AS $$SELECT $1$$;\n> +REVOKE ALL ON FUNCTION public.setter() FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.const() FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.index_this_expr(s.citext, bool) FROM PUBLIC;\n> +-- Even for an empty table, expression planning calls s.const & public.setter.\n> +GRANT EXECUTE ON FUNCTION public.setter() TO regress_minimal;\n> +GRANT EXECUTE ON FUNCTION s.const() TO regress_minimal;\n> +-- Function for index predicate.\n> +CREATE FUNCTION s.index_row_if(s.citext) RETURNS bool IMMUTABLE\n> + LANGUAGE SQL AS $$SELECT $1 IS NOT NULL$$;\n> +REVOKE ALL ON FUNCTION s.index_row_if(s.citext) FROM PUBLIC;\n> +-- Even for an empty table, CREATE INDEX checks ii_Predicate permissions.\n> +GRANT EXECUTE ON FUNCTION s.index_row_if(s.citext) TO regress_minimal;\n> +-- Non-extension, non-function objects.\n> +CREATE COLLATION s.coll (LOCALE=\"C\");\n> +CREATE TABLE s.x (y s.citext);\n> +ALTER TABLE s.x OWNER TO regress_minimal;\n> 
+-- Empty-table DefineIndex()\n> +CREATE UNIQUE INDEX u0rows ON s.x USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE s.coll s.citext_ops)\n> + WHERE s.index_row_if(y);\n> +ALTER TABLE s.x ADD CONSTRAINT e0rows EXCLUDE USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE s.coll WITH s.=)\n> + WHERE (s.index_row_if(y));\n> +-- Make the table nonempty.\n> +INSERT INTO s.x VALUES ('foo'), ('bar');\n> +-- If the INSERT runs the planner on index expressions, a search_path change\n> +-- survives. As of 2022-06, the INSERT reuses a cached plan. It does so even\n> +-- under debug_discard_caches, since each index is new-in-transaction. If\n> +-- future work changes a cache lifecycle, this RESET may become necessary.\n> +RESET search_path;\n> +-- For a nonempty table, owner needs permissions throughout ii_Expressions.\n> +GRANT EXECUTE ON FUNCTION s.index_this_expr(s.citext, bool) TO regress_minimal;\n> +CREATE UNIQUE INDEX u2rows ON s.x USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE s.coll s.citext_ops)\n> + WHERE s.index_row_if(y);\n> +ALTER TABLE s.x ADD CONSTRAINT e2rows EXCLUDE USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE s.coll WITH s.=)\n> + WHERE (s.index_row_if(y));\n> +-- Shall not find s.coll via search_path, despite the s.const->public.setter\n> +-- call having set search_path=s during expression planning. 
Suppress the\n> +-- message itself, which depends on the database encoding.\n> +\\set VERBOSITY terse\n> +DO $$\n> +BEGIN\n> +ALTER TABLE s.x ADD CONSTRAINT underqualified EXCLUDE USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE coll WITH s.=)\n> + WHERE (s.index_row_if(y));\n> +EXCEPTION WHEN OTHERS THEN RAISE EXCEPTION '%', sqlstate; END$$;\n> +\\set VERBOSITY default\n> +ROLLBACK;\n> --- a/src/backend/commands/indexcmds.c\n> +++ b/src/backend/commands/indexcmds.c\n> @@ -70,7 +70,10 @@\n> \t\t\t\t Oid relId,\n> \t\t\t\t char *accessMethodName, Oid accessMethodId,\n> \t\t\t\t bool amcanorder,\n> -\t\t\t\t bool isconstraint);\n> +\t\t\t\t bool isconstraint,\n> +\t\t\t\t Oid ddl_userid,\n> +\t\t\t\t int ddl_sec_context,\n> +\t\t\t\t int *ddl_save_nestlevel);\n> static Oid GetIndexOpClass(List *opclass, Oid attrType,\n> \t\t\t\tchar *accessMethodName, Oid accessMethodId);\n> static char *ChooseIndexName(const char *tabname, Oid namespaceId,\n> @@ -176,8 +179,7 @@\n> \t * Compute the operator classes, collations, and exclusion operators for\n> \t * the new index, so we can test whether it's compatible with the existing\n> \t * one. Note that ComputeIndexAttrs might fail here, but that's OK:\n> -\t * DefineIndex would have called this function with the same arguments\n> -\t * later on, and it would have failed then anyway.\n> +\t * DefineIndex would have failed later.\n> \t */\n> \tindexInfo = makeNode(IndexInfo);\n> \tindexInfo->ii_Expressions = NIL;\n> @@ -195,7 +197,7 @@\n> \t\t\t\t\t coloptions, attributeList,\n> \t\t\t\t\t exclusionOpNames, relationId,\n> \t\t\t\t\t accessMethodName, accessMethodId,\n> -\t\t\t\t\t amcanorder, isconstraint);\n> +\t\t\t\t\t amcanorder, isconstraint, InvalidOid, 0, NULL);\n> \n> \n> \t/* Get the soon-obsolete pg_index tuple. 
*/\n> @@ -288,6 +290,19 @@\n> * DefineIndex\n> *\t\tCreates a new index.\n> *\n> + * This function manages the current userid according to the needs of pg_dump.\n> + * Recreating old-database catalog entries in new-database is fine, regardless\n> + * of which users would have permission to recreate those entries now. That's\n> + * just preservation of state. Running opaque expressions, like calling a\n> + * function named in a catalog entry or evaluating a pg_node_tree in a catalog\n> + * entry, as anyone other than the object owner, is not fine. To adhere to\n> + * those principles and to remain fail-safe, use the table owner userid for\n> + * most ACL checks. Use the original userid for ACL checks reached without\n> + * traversing opaque expressions. (pg_dump can predict such ACL checks from\n> + * catalogs.) Overall, this is a mess. Future DDL development should\n> + * consider offering one DDL command for catalog setup and a separate DDL\n> + * command for steps that run opaque expressions.\n> + *\n> * 'relationId': the OID of the heap relation on which the index is to be\n> *\t\tcreated\n> * 'stmt': IndexStmt describing the properties of the new index.\n> @@ -598,7 +613,8 @@\n> \t\t\t\t\t coloptions, stmt->indexParams,\n> \t\t\t\t\t stmt->excludeOpNames, relationId,\n> \t\t\t\t\t accessMethodName, accessMethodId,\n> -\t\t\t\t\t amcanorder, stmt->isconstraint);\n> +\t\t\t\t\t amcanorder, stmt->isconstraint, root_save_userid,\n> +\t\t\t\t\t root_save_sec_context, &root_save_nestlevel);\n> \n> \t/*\n> \t * Extra checks when creating a PRIMARY KEY index.\n> @@ -706,9 +722,8 @@\n> \n> \t/*\n> \t * Roll back any GUC changes executed by index functions, and keep\n> -\t * subsequent changes local to this command. It's barely possible that\n> -\t * some index function changed a behavior-affecting GUC, e.g. xmloption,\n> -\t * that affects subsequent steps.\n> +\t * subsequent changes local to this command. 
This is essential if some\n> +\t * index function changed a behavior-affecting GUC, e.g. search_path.\n> \t */\n> \tAtEOXact_GUC(false, root_save_nestlevel);\n> \troot_save_nestlevel = NewGUCNestLevel();\n> @@ -1063,6 +1078,10 @@\n> /*\n> * Compute per-index-column information, including indexed column numbers\n> * or index expressions, opclasses, and indoptions.\n> + *\n> + * If the caller switched to the table owner, ddl_userid is the role for ACL\n> + * checks reached without traversing opaque expressions. Otherwise, it's\n> + * InvalidOid, and other ddl_* arguments are undefined.\n> */\n> static void\n> ComputeIndexAttrs(IndexInfo *indexInfo,\n> @@ -1076,11 +1095,16 @@\n> \t\t\t\t char *accessMethodName,\n> \t\t\t\t Oid accessMethodId,\n> \t\t\t\t bool amcanorder,\n> -\t\t\t\t bool isconstraint)\n> +\t\t\t\t bool isconstraint,\n> +\t\t\t\t Oid ddl_userid,\n> +\t\t\t\t int ddl_sec_context,\n> +\t\t\t\t int *ddl_save_nestlevel)\n> {\n> \tListCell *nextExclOp;\n> \tListCell *lc;\n> \tint\t\t\tattn;\n> +\tOid\t\t\tsave_userid;\n> +\tint\t\t\tsave_sec_context;\n> \n> \t/* Allocate space for exclusion operator info, if needed */\n> \tif (exclusionOpNames)\n> @@ -1096,6 +1120,9 @@\n> \telse\n> \t\tnextExclOp = NULL;\n> \n> +\tif (OidIsValid(ddl_userid))\n> +\t\tGetUserIdAndSecContext(&save_userid, &save_sec_context);\n> +\n> \t/*\n> \t * process attributeList\n> \t */\n> @@ -1190,10 +1217,24 @@\n> \t\ttypeOidP[attn] = atttype;\n> \n> \t\t/*\n> -\t\t * Apply collation override if any\n> +\t\t * Apply collation override if any. 
Use of ddl_userid is necessary\n> +\t\t * due to ACL checks therein, and it's safe because collations don't\n> +\t\t * contain opaque expressions (or non-opaque expressions).\n> \t\t */\n> \t\tif (attribute->collation)\n> +\t\t{\n> +\t\t\tif (OidIsValid(ddl_userid))\n> +\t\t\t{\n> +\t\t\t\tAtEOXact_GUC(false, *ddl_save_nestlevel);\n> +\t\t\t\tSetUserIdAndSecContext(ddl_userid, ddl_sec_context);\n> +\t\t\t}\n> \t\t\tattcollation = get_collation_oid(attribute->collation, false);\n> +\t\t\tif (OidIsValid(ddl_userid))\n> +\t\t\t{\n> +\t\t\t\tSetUserIdAndSecContext(save_userid, save_sec_context);\n> +\t\t\t\t*ddl_save_nestlevel = NewGUCNestLevel();\n> +\t\t\t}\n> +\t\t}\n> \n> \t\t/*\n> \t\t * Check we have a collation iff it's a collatable type. The only\n> @@ -1221,12 +1262,25 @@\n> \t\tcollationOidP[attn] = attcollation;\n> \n> \t\t/*\n> -\t\t * Identify the opclass to use.\n> +\t\t * Identify the opclass to use. Use of ddl_userid is necessary due to\n> +\t\t * ACL checks therein. This is safe despite opclasses containing\n> +\t\t * opaque expressions (specifically, functions), because only\n> +\t\t * superusers can define opclasses.\n> \t\t */\n> +\t\tif (OidIsValid(ddl_userid))\n> +\t\t{\n> +\t\t\tAtEOXact_GUC(false, *ddl_save_nestlevel);\n> +\t\t\tSetUserIdAndSecContext(ddl_userid, ddl_sec_context);\n> +\t\t}\n> \t\tclassOidP[attn] = GetIndexOpClass(attribute->opclass,\n> \t\t\t\t\t\t\t\t\t\t atttype,\n> \t\t\t\t\t\t\t\t\t\t accessMethodName,\n> \t\t\t\t\t\t\t\t\t\t accessMethodId);\n> +\t\tif (OidIsValid(ddl_userid))\n> +\t\t{\n> +\t\t\tSetUserIdAndSecContext(save_userid, save_sec_context);\n> +\t\t\t*ddl_save_nestlevel = NewGUCNestLevel();\n> +\t\t}\n> \n> \t\t/*\n> \t\t * Identify the exclusion operator, if any.\n> @@ -1240,9 +1294,23 @@\n> \n> \t\t\t/*\n> \t\t\t * Find the operator --- it must accept the column datatype\n> -\t\t\t * without runtime coercion (but binary compatibility is OK)\n> +\t\t\t * without runtime coercion (but binary compatibility is 
OK).\n> +\t\t\t * Operators contain opaque expressions (specifically, functions).\n> +\t\t\t * compatible_oper_opid() boils down to oper() and\n> +\t\t\t * IsBinaryCoercible(). PostgreSQL would have security problems\n> +\t\t\t * elsewhere if oper() started calling opaque expressions.\n> \t\t\t */\n> +\t\t\tif (OidIsValid(ddl_userid))\n> +\t\t\t{\n> +\t\t\t\tAtEOXact_GUC(false, *ddl_save_nestlevel);\n> +\t\t\t\tSetUserIdAndSecContext(ddl_userid, ddl_sec_context);\n> +\t\t\t}\n> \t\t\topid = compatible_oper_opid(opname, atttype, atttype, false);\n> +\t\t\tif (OidIsValid(ddl_userid))\n> +\t\t\t{\n> +\t\t\t\tSetUserIdAndSecContext(save_userid, save_sec_context);\n> +\t\t\t\t*ddl_save_nestlevel = NewGUCNestLevel();\n> +\t\t\t}\n> \n> \t\t\t/*\n> \t\t\t * Only allow commutative operators to be used in exclusion\n\n> From ef792f7856dea2576dcd9cab92b2b05fe955696b Mon Sep 17 00:00:00 2001\n> From: Noah Misch <noah@leadboat.com>\n> Date: Mon, 9 May 2022 08:35:08 -0700\n> Subject: [PATCH] Make relation-enumerating operations be security-restricted\n> operations.\n> \n> When a feature enumerates relations and runs functions associated with\n> all found relations, the feature's user shall not need to trust every\n> user having permission to create objects. BRIN-specific functionality\n> in autovacuum neglected to account for this, as did pg_amcheck and\n> CLUSTER. An attacker having permission to create non-temp objects in at\n> least one schema could execute arbitrary SQL functions under the\n> identity of the bootstrap superuser. CREATE INDEX (not a\n> relation-enumerating operation) and REINDEX protected themselves too\n> late. 
This change extends to the non-enumerating amcheck interface.\n> Back-patch to v10 (all supported versions).\n> \n> Sergey Shinderuk, reviewed (in earlier versions) by Alexander Lakhin.\n> Reported by Alexander Lakhin.\n> \n> Security: CVE-2022-1552\n> ---\n> src/backend/catalog/index.c | 41 +++++++++++++++++++--------\n> src/backend/commands/cluster.c | 35 ++++++++++++++++++-----\n> src/backend/commands/indexcmds.c | 47 +++++++++++++++++++++++++++++--\n> src/backend/utils/init/miscinit.c | 24 +++++++++------\n> src/test/regress/expected/privileges.out | 35 +++++++++++++++++++++++\n> src/test/regress/sql/privileges.sql | 34 ++++++++++++++++++++++\n> 6 files changed, 187 insertions(+), 29 deletions(-)\n> \n> --- a/src/backend/catalog/index.c\n> +++ b/src/backend/catalog/index.c\n> @@ -2743,7 +2743,17 @@\n> \n> \t/* Open and lock the parent heap relation */\n> \theapRelation = heap_open(heapId, ShareUpdateExclusiveLock);\n> -\t/* And the target index relation */\n> +\n> +\t/*\n> +\t * Switch to the table owner's userid, so that any index functions are run\n> +\t * as that user. Also lock down security-restricted operations and\n> +\t * arrange to make GUC variable changes local to this command.\n> +\t */\n> +\tGetUserIdAndSecContext(&save_userid, &save_sec_context);\n> +\tSetUserIdAndSecContext(heapRelation->rd_rel->relowner,\n> +\t\t\t\t\t\t save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> +\tsave_nestlevel = NewGUCNestLevel();\n> +\n> \tindexRelation = index_open(indexId, RowExclusiveLock);\n> \n> \t/*\n> @@ -2757,16 +2767,6 @@\n> \tindexInfo->ii_Concurrent = true;\n> \n> \t/*\n> -\t * Switch to the table owner's userid, so that any index functions are run\n> -\t * as that user. 
Also lock down security-restricted operations and\n> -\t * arrange to make GUC variable changes local to this command.\n> -\t */\n> -\tGetUserIdAndSecContext(&save_userid, &save_sec_context);\n> -\tSetUserIdAndSecContext(heapRelation->rd_rel->relowner,\n> -\t\t\t\t\t\t save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> -\tsave_nestlevel = NewGUCNestLevel();\n> -\n> -\t/*\n> \t * Scan the index and gather up all the TIDs into a tuplesort object.\n> \t */\n> \tivinfo.index = indexRelation;\n> @@ -3178,6 +3178,9 @@\n> \tRelation\tiRel,\n> \t\t\t\theapRelation;\n> \tOid\t\t\theapId;\n> +\tOid\t\t\tsave_userid;\n> +\tint\t\t\tsave_sec_context;\n> +\tint\t\t\tsave_nestlevel;\n> \tIndexInfo *indexInfo;\n> \tvolatile bool skipped_constraint = false;\n> \n> @@ -3189,6 +3192,16 @@\n> \theapRelation = heap_open(heapId, ShareLock);\n> \n> \t/*\n> +\t * Switch to the table owner's userid, so that any index functions are run\n> +\t * as that user. Also lock down security-restricted operations and\n> +\t * arrange to make GUC variable changes local to this command.\n> +\t */\n> +\tGetUserIdAndSecContext(&save_userid, &save_sec_context);\n> +\tSetUserIdAndSecContext(heapRelation->rd_rel->relowner,\n> +\t\t\t\t\t\t save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> +\tsave_nestlevel = NewGUCNestLevel();\n> +\n> +\t/*\n> \t * Open the target index relation and get an exclusive lock on it, to\n> \t * ensure that no one else is touching this particular index.\n> \t */\n> @@ -3324,6 +3337,12 @@\n> \t\theap_close(pg_index, RowExclusiveLock);\n> \t}\n> \n> +\t/* Roll back any GUC changes executed by index functions */\n> +\tAtEOXact_GUC(false, save_nestlevel);\n> +\n> +\t/* Restore userid and security context */\n> +\tSetUserIdAndSecContext(save_userid, save_sec_context);\n> +\n> \t/* Close rels, but keep locks */\n> \tindex_close(iRel, NoLock);\n> \theap_close(heapRelation, NoLock);\n> --- a/src/backend/commands/cluster.c\n> +++ b/src/backend/commands/cluster.c\n> @@ -41,6 +41,7 
@@\n> #include \"storage/smgr.h\"\n> #include \"utils/acl.h\"\n> #include \"utils/fmgroids.h\"\n> +#include \"utils/guc.h\"\n> #include \"utils/inval.h\"\n> #include \"utils/lsyscache.h\"\n> #include \"utils/memutils.h\"\n> @@ -259,6 +260,9 @@\n> cluster_rel(Oid tableOid, Oid indexOid, bool recheck, bool verbose)\n> {\n> \tRelation\tOldHeap;\n> +\tOid\t\t\tsave_userid;\n> +\tint\t\t\tsave_sec_context;\n> +\tint\t\t\tsave_nestlevel;\n> \n> \t/* Check for user-requested abort. */\n> \tCHECK_FOR_INTERRUPTS();\n> @@ -276,6 +280,16 @@\n> \t\treturn;\n> \n> \t/*\n> +\t * Switch to the table owner's userid, so that any index functions are run\n> +\t * as that user. Also lock down security-restricted operations and\n> +\t * arrange to make GUC variable changes local to this command.\n> +\t */\n> +\tGetUserIdAndSecContext(&save_userid, &save_sec_context);\n> +\tSetUserIdAndSecContext(OldHeap->rd_rel->relowner,\n> +\t\t\t\t\t\t save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> +\tsave_nestlevel = NewGUCNestLevel();\n> +\n> +\t/*\n> \t * Since we may open a new transaction for each relation, we have to check\n> \t * that the relation still is what we think it is.\n> \t *\n> @@ -289,10 +303,10 @@\n> \t\tForm_pg_index indexForm;\n> \n> \t\t/* Check that the user still owns the relation */\n> -\t\tif (!pg_class_ownercheck(tableOid, GetUserId()))\n> +\t\tif (!pg_class_ownercheck(tableOid, save_userid))\n> \t\t{\n> \t\t\trelation_close(OldHeap, AccessExclusiveLock);\n> -\t\t\treturn;\n> +\t\t\tgoto out;\n> \t\t}\n> \n> \t\t/*\n> @@ -306,7 +320,7 @@\n> \t\tif (RELATION_IS_OTHER_TEMP(OldHeap))\n> \t\t{\n> \t\t\trelation_close(OldHeap, AccessExclusiveLock);\n> -\t\t\treturn;\n> +\t\t\tgoto out;\n> \t\t}\n> \n> \t\tif (OidIsValid(indexOid))\n> @@ -317,7 +331,7 @@\n> \t\t\tif (!SearchSysCacheExists1(RELOID, ObjectIdGetDatum(indexOid)))\n> \t\t\t{\n> \t\t\t\trelation_close(OldHeap, AccessExclusiveLock);\n> -\t\t\t\treturn;\n> +\t\t\t\tgoto out;\n> \t\t\t}\n> \n> \t\t\t/*\n> @@ 
-327,14 +341,14 @@\n> \t\t\tif (!HeapTupleIsValid(tuple))\t\t/* probably can't happen */\n> \t\t\t{\n> \t\t\t\trelation_close(OldHeap, AccessExclusiveLock);\n> -\t\t\t\treturn;\n> +\t\t\t\tgoto out;\n> \t\t\t}\n> \t\t\tindexForm = (Form_pg_index) GETSTRUCT(tuple);\n> \t\t\tif (!indexForm->indisclustered)\n> \t\t\t{\n> \t\t\t\tReleaseSysCache(tuple);\n> \t\t\t\trelation_close(OldHeap, AccessExclusiveLock);\n> -\t\t\t\treturn;\n> +\t\t\t\tgoto out;\n> \t\t\t}\n> \t\t\tReleaseSysCache(tuple);\n> \t\t}\n> @@ -388,7 +402,7 @@\n> \t\t!RelationIsPopulated(OldHeap))\n> \t{\n> \t\trelation_close(OldHeap, AccessExclusiveLock);\n> -\t\treturn;\n> +\t\tgoto out;\n> \t}\n> \n> \t/*\n> @@ -403,6 +417,13 @@\n> \trebuild_relation(OldHeap, indexOid, verbose);\n> \n> \t/* NB: rebuild_relation does heap_close() on OldHeap */\n> +\n> +out:\n> +\t/* Roll back any GUC changes executed by index functions */\n> +\tAtEOXact_GUC(false, save_nestlevel);\n> +\n> +\t/* Restore userid and security context */\n> +\tSetUserIdAndSecContext(save_userid, save_sec_context);\n> }\n> \n> /*\n> --- a/src/backend/commands/indexcmds.c\n> +++ b/src/backend/commands/indexcmds.c\n> @@ -44,6 +44,7 @@\n> #include \"utils/acl.h\"\n> #include \"utils/builtins.h\"\n> #include \"utils/fmgroids.h\"\n> +#include \"utils/guc.h\"\n> #include \"utils/inval.h\"\n> #include \"utils/lsyscache.h\"\n> #include \"utils/memutils.h\"\n> @@ -329,8 +330,13 @@\n> \tLOCKTAG\t\theaplocktag;\n> \tLOCKMODE\tlockmode;\n> \tSnapshot\tsnapshot;\n> +\tOid\t\t\troot_save_userid;\n> +\tint\t\t\troot_save_sec_context;\n> +\tint\t\t\troot_save_nestlevel;\n> \tint\t\t\ti;\n> \n> +\troot_save_nestlevel = NewGUCNestLevel();\n> +\n> \t/*\n> \t * Force non-concurrent build on temporary relations, even if CONCURRENTLY\n> \t * was requested. Other backends can't access a temporary relation, so\n> @@ -371,6 +377,15 @@\n> \tlockmode = concurrent ? 
ShareUpdateExclusiveLock : ShareLock;\n> \trel = heap_open(relationId, lockmode);\n> \n> +\t/*\n> +\t * Switch to the table owner's userid, so that any index functions are run\n> +\t * as that user. Also lock down security-restricted operations. We\n> +\t * already arranged to make GUC variable changes local to this command.\n> +\t */\n> +\tGetUserIdAndSecContext(&root_save_userid, &root_save_sec_context);\n> +\tSetUserIdAndSecContext(rel->rd_rel->relowner,\n> +\t\t\t\t\t\t root_save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> +\n> \trelationId = RelationGetRelid(rel);\n> \tnamespaceId = RelationGetNamespace(rel);\n> \n> @@ -412,7 +427,7 @@\n> \t{\n> \t\tAclResult\taclresult;\n> \n> -\t\taclresult = pg_namespace_aclcheck(namespaceId, GetUserId(),\n> +\t\taclresult = pg_namespace_aclcheck(namespaceId, root_save_userid,\n> \t\t\t\t\t\t\t\t\t\t ACL_CREATE);\n> \t\tif (aclresult != ACLCHECK_OK)\n> \t\t\taclcheck_error(aclresult, ACL_KIND_NAMESPACE,\n> @@ -439,7 +454,7 @@\n> \t{\n> \t\tAclResult\taclresult;\n> \n> -\t\taclresult = pg_tablespace_aclcheck(tablespaceId, GetUserId(),\n> +\t\taclresult = pg_tablespace_aclcheck(tablespaceId, root_save_userid,\n> \t\t\t\t\t\t\t\t\t\t ACL_CREATE);\n> \t\tif (aclresult != ACLCHECK_OK)\n> \t\t\taclcheck_error(aclresult, ACL_KIND_TABLESPACE,\n> @@ -622,11 +637,23 @@\n> \t\t\t\t\t skip_build || concurrent,\n> \t\t\t\t\t concurrent, !check_rights);\n> \n> +\t/*\n> +\t * Roll back any GUC changes executed by index functions, and keep\n> +\t * subsequent changes local to this command. It's barely possible that\n> +\t * some index function changed a behavior-affecting GUC, e.g. 
xmloption,\n> +\t * that affects subsequent steps.\n> +\t */\n> +\tAtEOXact_GUC(false, root_save_nestlevel);\n> +\troot_save_nestlevel = NewGUCNestLevel();\n> +\n> \t/* Add any requested comment */\n> \tif (stmt->idxcomment != NULL)\n> \t\tCreateComments(indexRelationId, RelationRelationId, 0,\n> \t\t\t\t\t stmt->idxcomment);\n> \n> +\tAtEOXact_GUC(false, root_save_nestlevel);\n> +\tSetUserIdAndSecContext(root_save_userid, root_save_sec_context);\n> +\n> \tif (!concurrent)\n> \t{\n> \t\t/* Close the heap and we're done, in the non-concurrent case */\n> @@ -705,6 +732,16 @@\n> \t/* Open and lock the parent heap relation */\n> \trel = heap_openrv(stmt->relation, ShareUpdateExclusiveLock);\n> \n> +\t/*\n> +\t * Switch to the table owner's userid, so that any index functions are run\n> +\t * as that user. Also lock down security-restricted operations and\n> +\t * arrange to make GUC variable changes local to this command.\n> +\t */\n> +\tGetUserIdAndSecContext(&root_save_userid, &root_save_sec_context);\n> +\tSetUserIdAndSecContext(rel->rd_rel->relowner,\n> +\t\t\t\t\t\t root_save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> +\troot_save_nestlevel = NewGUCNestLevel();\n> +\n> \t/* And the target index relation */\n> \tindexRelation = index_open(indexRelationId, RowExclusiveLock);\n> \n> @@ -720,6 +757,12 @@\n> \t/* Now build the index */\n> \tindex_build(rel, indexRelation, indexInfo, stmt->primary, false);\n> \n> +\t/* Roll back any GUC changes executed by index functions */\n> +\tAtEOXact_GUC(false, root_save_nestlevel);\n> +\n> +\t/* Restore userid and security context */\n> +\tSetUserIdAndSecContext(root_save_userid, root_save_sec_context);\n> +\n> \t/* Close both the relations, but keep the locks */\n> \theap_close(rel, NoLock);\n> \tindex_close(indexRelation, NoLock);\n> --- a/src/backend/utils/init/miscinit.c\n> +++ b/src/backend/utils/init/miscinit.c\n> @@ -235,15 +235,21 @@\n> * with guc.c's internal state, so SET ROLE has to be disallowed.\n> *\n> * 
SECURITY_RESTRICTED_OPERATION indicates that we are inside an operation\n> - * that does not wish to trust called user-defined functions at all. This\n> - * bit prevents not only SET ROLE, but various other changes of session state\n> - * that normally is unprotected but might possibly be used to subvert the\n> - * calling session later. An example is replacing an existing prepared\n> - * statement with new code, which will then be executed with the outer\n> - * session's permissions when the prepared statement is next used. Since\n> - * these restrictions are fairly draconian, we apply them only in contexts\n> - * where the called functions are really supposed to be side-effect-free\n> - * anyway, such as VACUUM/ANALYZE/REINDEX.\n> + * that does not wish to trust called user-defined functions at all. The\n> + * policy is to use this before operations, e.g. autovacuum and REINDEX, that\n> + * enumerate relations of a database or schema and run functions associated\n> + * with each found relation. The relation owner is the new user ID. Set this\n> + * as soon as possible after locking the relation. Restore the old user ID as\n> + * late as possible before closing the relation; restoring it shortly after\n> + * close is also tolerable. If a command has both relation-enumerating and\n> + * non-enumerating modes, e.g. ANALYZE, both modes set this bit. This bit\n> + * prevents not only SET ROLE, but various other changes of session state that\n> + * normally is unprotected but might possibly be used to subvert the calling\n> + * session later. An example is replacing an existing prepared statement with\n> + * new code, which will then be executed with the outer session's permissions\n> + * when the prepared statement is next used. 
These restrictions are fairly\n> + * draconian, but the functions called in relation-enumerating operations are\n> + * really supposed to be side-effect-free anyway.\n> *\n> * Unlike GetUserId, GetUserIdAndSecContext does *not* Assert that the current\n> * value of CurrentUserId is valid; nor does SetUserIdAndSecContext require\n> --- a/src/test/regress/expected/privileges.out\n> +++ b/src/test/regress/expected/privileges.out\n> @@ -1194,6 +1194,41 @@\n> -- security-restricted operations\n> \\c -\n> CREATE ROLE regress_sro_user;\n> +-- Check that index expressions and predicates are run as the table's owner\n> +-- A dummy index function checking current_user\n> +CREATE FUNCTION sro_ifun(int) RETURNS int AS $$\n> +BEGIN\n> +\t-- Below we set the table's owner to regress_sro_user\n> +\tIF current_user <> 'regress_sro_user' THEN\n> +\t\tRAISE 'called by %', current_user;\n> +\tEND IF;\n> +\tRETURN $1;\n> +END;\n> +$$ LANGUAGE plpgsql IMMUTABLE;\n> +-- Create a table owned by regress_sro_user\n> +CREATE TABLE sro_tab (a int);\n> +ALTER TABLE sro_tab OWNER TO regress_sro_user;\n> +INSERT INTO sro_tab VALUES (1), (2), (3);\n> +-- Create an expression index with a predicate\n> +CREATE INDEX sro_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)))\n> +\tWHERE sro_ifun(a + 10) > sro_ifun(10);\n> +DROP INDEX sro_idx;\n> +-- Do the same concurrently\n> +CREATE INDEX CONCURRENTLY sro_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)))\n> +\tWHERE sro_ifun(a + 10) > sro_ifun(10);\n> +-- REINDEX\n> +REINDEX TABLE sro_tab;\n> +REINDEX INDEX sro_idx;\n> +REINDEX TABLE CONCURRENTLY sro_tab; -- v12+ feature\n> +ERROR: syntax error at or near \"CONCURRENTLY\"\n> +LINE 1: REINDEX TABLE CONCURRENTLY sro_tab;\n> + ^\n> +DROP INDEX sro_idx;\n> +-- CLUSTER\n> +CREATE INDEX sro_cluster_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)));\n> +CLUSTER sro_tab USING sro_cluster_idx;\n> +DROP INDEX sro_cluster_idx;\n> +DROP TABLE sro_tab;\n> SET SESSION AUTHORIZATION regress_sro_user;\n> CREATE FUNCTION 
unwanted_grant() RETURNS void LANGUAGE sql AS\n> \t'GRANT regressgroup2 TO regress_sro_user';\n> --- a/src/test/regress/sql/privileges.sql\n> +++ b/src/test/regress/sql/privileges.sql\n> @@ -724,6 +724,40 @@\n> \\c -\n> CREATE ROLE regress_sro_user;\n> \n> +-- Check that index expressions and predicates are run as the table's owner\n> +\n> +-- A dummy index function checking current_user\n> +CREATE FUNCTION sro_ifun(int) RETURNS int AS $$\n> +BEGIN\n> +\t-- Below we set the table's owner to regress_sro_user\n> +\tIF current_user <> 'regress_sro_user' THEN\n> +\t\tRAISE 'called by %', current_user;\n> +\tEND IF;\n> +\tRETURN $1;\n> +END;\n> +$$ LANGUAGE plpgsql IMMUTABLE;\n> +-- Create a table owned by regress_sro_user\n> +CREATE TABLE sro_tab (a int);\n> +ALTER TABLE sro_tab OWNER TO regress_sro_user;\n> +INSERT INTO sro_tab VALUES (1), (2), (3);\n> +-- Create an expression index with a predicate\n> +CREATE INDEX sro_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)))\n> +\tWHERE sro_ifun(a + 10) > sro_ifun(10);\n> +DROP INDEX sro_idx;\n> +-- Do the same concurrently\n> +CREATE INDEX CONCURRENTLY sro_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)))\n> +\tWHERE sro_ifun(a + 10) > sro_ifun(10);\n> +-- REINDEX\n> +REINDEX TABLE sro_tab;\n> +REINDEX INDEX sro_idx;\n> +REINDEX TABLE CONCURRENTLY sro_tab; -- v12+ feature\n> +DROP INDEX sro_idx;\n> +-- CLUSTER\n> +CREATE INDEX sro_cluster_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)));\n> +CLUSTER sro_tab USING sro_cluster_idx;\n> +DROP INDEX sro_cluster_idx;\n> +DROP TABLE sro_tab;\n> +\n> SET SESSION AUTHORIZATION regress_sro_user;\n> CREATE FUNCTION unwanted_grant() RETURNS void LANGUAGE sql AS\n> \t'GRANT regressgroup2 TO regress_sro_user';\n\n> From f26d5702857a9c027f84850af48b0eea0f3aa15c Mon Sep 17 00:00:00 2001\n> From: Noah Misch <noah@leadboat.com>\n> Date: Mon, 9 May 2022 08:35:08 -0700\n> Subject: [PATCH] In REFRESH MATERIALIZED VIEW, set user ID before running user\n> code.\n> \n> It intended to, but did not, achieve 
this. Adopt the new standard of\n> setting user ID just after locking the relation. Back-patch to v10 (all\n> supported versions).\n> \n> Reviewed by Simon Riggs. Reported by Alvaro Herrera.\n> \n> Security: CVE-2022-1552\n> ---\n> src/backend/commands/matview.c | 30 +++++++++++-------------------\n> src/test/regress/expected/privileges.out | 15 +++++++++++++++\n> src/test/regress/sql/privileges.sql | 16 ++++++++++++++++\n> 3 files changed, 42 insertions(+), 19 deletions(-)\n> \n> --- a/src/backend/commands/matview.c\n> +++ b/src/backend/commands/matview.c\n> @@ -161,6 +161,17 @@\n> \t\t\t\t\t\t\t\t\t\t lockmode, false, false,\n> \t\t\t\t\t\t\t\t\t\t RangeVarCallbackOwnsTable, NULL);\n> \tmatviewRel = heap_open(matviewOid, NoLock);\n> +\trelowner = matviewRel->rd_rel->relowner;\n> +\n> +\t/*\n> +\t * Switch to the owner's userid, so that any functions are run as that\n> +\t * user. Also lock down security-restricted operations and arrange to\n> +\t * make GUC variable changes local to this command.\n> +\t */\n> +\tGetUserIdAndSecContext(&save_userid, &save_sec_context);\n> +\tSetUserIdAndSecContext(relowner,\n> +\t\t\t\t\t\t save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> +\tsave_nestlevel = NewGUCNestLevel();\n> \n> \t/* Make sure it is a materialized view. */\n> \tif (matviewRel->rd_rel->relkind != RELKIND_MATVIEW)\n> @@ -233,19 +244,6 @@\n> \t */\n> \tSetMatViewPopulatedState(matviewRel, !stmt->skipData);\n> \n> -\trelowner = matviewRel->rd_rel->relowner;\n> -\n> -\t/*\n> -\t * Switch to the owner's userid, so that any functions are run as that\n> -\t * user. Also arrange to make GUC variable changes local to this command.\n> -\t * Don't lock it down too tight to create a temporary table just yet. 
We\n> -\t * will switch modes when we are about to execute user code.\n> -\t */\n> -\tGetUserIdAndSecContext(&save_userid, &save_sec_context);\n> -\tSetUserIdAndSecContext(relowner,\n> -\t\t\t\t\t\t save_sec_context | SECURITY_LOCAL_USERID_CHANGE);\n> -\tsave_nestlevel = NewGUCNestLevel();\n> -\n> \t/* Concurrent refresh builds new data in temp tablespace, and does diff. */\n> \tif (concurrent)\n> \t\ttableSpace = GetDefaultTablespace(RELPERSISTENCE_TEMP);\n> @@ -262,12 +260,6 @@\n> \tLockRelationOid(OIDNewHeap, AccessExclusiveLock);\n> \tdest = CreateTransientRelDestReceiver(OIDNewHeap);\n> \n> -\t/*\n> -\t * Now lock down security-restricted operations.\n> -\t */\n> -\tSetUserIdAndSecContext(relowner,\n> -\t\t\t\t\t\t save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> -\n> \t/* Generate the data, if wanted. */\n> \tif (!stmt->skipData)\n> \t\trefresh_matview_datafill(dest, dataQuery, queryString);\n> --- a/src/test/regress/expected/privileges.out\n> +++ b/src/test/regress/expected/privileges.out\n> @@ -1266,6 +1266,21 @@\n> SQL statement \"SELECT unwanted_grant()\"\n> PL/pgSQL function sro_trojan() line 1 at PERFORM\n> SQL function \"mv_action\" statement 1\n> +-- REFRESH MATERIALIZED VIEW CONCURRENTLY use of eval_const_expressions()\n> +SET SESSION AUTHORIZATION regress_sro_user;\n> +CREATE FUNCTION unwanted_grant_nofail(int) RETURNS int\n> +\tIMMUTABLE LANGUAGE plpgsql AS $$\n> +BEGIN\n> +\tPERFORM unwanted_grant();\n> +\tRAISE WARNING 'owned';\n> +\tRETURN 1;\n> +EXCEPTION WHEN OTHERS THEN\n> +\tRETURN 2;\n> +END$$;\n> +CREATE MATERIALIZED VIEW sro_index_mv AS SELECT 1 AS c;\n> +CREATE UNIQUE INDEX ON sro_index_mv (c) WHERE unwanted_grant_nofail(1) > 0;\n> +\\c -\n> +REFRESH MATERIALIZED VIEW sro_index_mv;\n> DROP OWNED BY regress_sro_user;\n> DROP ROLE regress_sro_user;\n> -- Admin options\n> --- a/src/test/regress/sql/privileges.sql\n> +++ b/src/test/regress/sql/privileges.sql\n> @@ -784,6 +784,22 @@\n> REFRESH MATERIALIZED VIEW sro_mv;\n> BEGIN; SET 
CONSTRAINTS ALL IMMEDIATE; REFRESH MATERIALIZED VIEW sro_mv; COMMIT;\n> \n> +-- REFRESH MATERIALIZED VIEW CONCURRENTLY use of eval_const_expressions()\n> +SET SESSION AUTHORIZATION regress_sro_user;\n> +CREATE FUNCTION unwanted_grant_nofail(int) RETURNS int\n> +\tIMMUTABLE LANGUAGE plpgsql AS $$\n> +BEGIN\n> +\tPERFORM unwanted_grant();\n> +\tRAISE WARNING 'owned';\n> +\tRETURN 1;\n> +EXCEPTION WHEN OTHERS THEN\n> +\tRETURN 2;\n> +END$$;\n> +CREATE MATERIALIZED VIEW sro_index_mv AS SELECT 1 AS c;\n> +CREATE UNIQUE INDEX ON sro_index_mv (c) WHERE unwanted_grant_nofail(1) > 0;\n> +\\c -\n> +REFRESH MATERIALIZED VIEW sro_index_mv;\n> +\n> DROP OWNED BY regress_sro_user;\n> DROP ROLE regress_sro_user;\n> \n\n> From 88b39e61486a8925a3861d50c306a51eaa1af8d6 Mon Sep 17 00:00:00 2001\n> From: Noah Misch <noah@leadboat.com>\n> Date: Sat, 25 Jun 2022 09:07:41 -0700\n> Subject: [PATCH] CREATE INDEX: use the original userid for more ACL checks.\n> \n> Commit a117cebd638dd02e5c2e791c25e43745f233111b used the original userid\n> for ACL checks located directly in DefineIndex(), but it still adopted\n> the table owner userid for more ACL checks than intended. 
That broke\n> dump/reload of indexes that refer to an operator class, collation, or\n> exclusion operator in a schema other than \"public\" or \"pg_catalog\".\n> Back-patch to v10 (all supported versions), like the earlier commit.\n> \n> Nathan Bossart and Noah Misch\n> \n> Discussion: https://postgr.es/m/f8a4105f076544c180a87ef0c4822352@stmuk.bayern.de\n> ---\n> contrib/citext/Makefile | 2 \n> contrib/citext/expected/create_index_acl.out | 81 +++++++++++++++++++++++\n> contrib/citext/sql/create_index_acl.sql | 82 ++++++++++++++++++++++++\n> src/backend/commands/indexcmds.c | 92 +++++++++++++++++++++++----\n> 4 files changed, 244 insertions(+), 13 deletions(-)\n> create mode 100644 contrib/citext/expected/create_index_acl.out\n> create mode 100644 contrib/citext/sql/create_index_acl.sql\n> \n> --- a/contrib/citext/Makefile\n> +++ b/contrib/citext/Makefile\n> @@ -7,7 +7,7 @@\n> citext--1.1--1.0.sql citext--unpackaged--1.0.sql\n> PGFILEDESC = \"citext - case-insensitive character string data type\"\n> \n> -REGRESS = citext\n> +REGRESS = create_index_acl citext\n> \n> ifdef USE_PGXS\n> PG_CONFIG = pg_config\n> --- /dev/null\n> +++ b/contrib/citext/expected/create_index_acl.out\n> @@ -0,0 +1,81 @@\n> +-- Each DefineIndex() ACL check uses either the original userid or the table\n> +-- owner userid; see its header comment. Here, confirm that DefineIndex()\n> +-- uses its original userid where necessary. The test works by creating\n> +-- indexes that refer to as many sorts of objects as possible, with the table\n> +-- owner having as few applicable privileges as possible. 
(The privileges.sql\n> +-- regress_sro_user tests look for the opposite defect; they confirm that\n> +-- DefineIndex() uses the table owner userid where necessary.)\n> +-- Don't override tablespaces; this version lacks allow_in_place_tablespaces.\n> +BEGIN;\n> +CREATE ROLE regress_minimal;\n> +CREATE SCHEMA s;\n> +CREATE EXTENSION citext SCHEMA s;\n> +-- Revoke all conceivably-relevant ACLs within the extension. The system\n> +-- doesn't check all these ACLs, but this will provide some coverage if that\n> +-- ever changes.\n> +REVOKE ALL ON TYPE s.citext FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_lt(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_le(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_eq(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_ge(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_gt(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_cmp(s.citext, s.citext) FROM PUBLIC;\n> +-- Functions sufficient for making an index column that has the side effect of\n> +-- changing search_path at expression planning time.\n> +CREATE FUNCTION public.setter() RETURNS bool VOLATILE\n> + LANGUAGE SQL AS $$SET search_path = s; SELECT true$$;\n> +CREATE FUNCTION s.const() RETURNS bool IMMUTABLE\n> + LANGUAGE SQL AS $$SELECT public.setter()$$;\n> +CREATE FUNCTION s.index_this_expr(s.citext, bool) RETURNS s.citext IMMUTABLE\n> + LANGUAGE SQL AS $$SELECT $1$$;\n> +REVOKE ALL ON FUNCTION public.setter() FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.const() FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.index_this_expr(s.citext, bool) FROM PUBLIC;\n> +-- Even for an empty table, expression planning calls s.const & public.setter.\n> +GRANT EXECUTE ON FUNCTION public.setter() TO regress_minimal;\n> +GRANT EXECUTE ON FUNCTION s.const() TO regress_minimal;\n> +-- Function for index predicate.\n> +CREATE FUNCTION s.index_row_if(s.citext) RETURNS bool IMMUTABLE\n> + LANGUAGE SQL AS 
$$SELECT $1 IS NOT NULL$$;\n> +REVOKE ALL ON FUNCTION s.index_row_if(s.citext) FROM PUBLIC;\n> +-- Even for an empty table, CREATE INDEX checks ii_Predicate permissions.\n> +GRANT EXECUTE ON FUNCTION s.index_row_if(s.citext) TO regress_minimal;\n> +-- Non-extension, non-function objects.\n> +CREATE COLLATION s.coll (LOCALE=\"C\");\n> +CREATE TABLE s.x (y s.citext);\n> +ALTER TABLE s.x OWNER TO regress_minimal;\n> +-- Empty-table DefineIndex()\n> +CREATE UNIQUE INDEX u0rows ON s.x USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE s.coll s.citext_ops)\n> + WHERE s.index_row_if(y);\n> +ALTER TABLE s.x ADD CONSTRAINT e0rows EXCLUDE USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE s.coll WITH s.=)\n> + WHERE (s.index_row_if(y));\n> +-- Make the table nonempty.\n> +INSERT INTO s.x VALUES ('foo'), ('bar');\n> +-- If the INSERT runs the planner on index expressions, a search_path change\n> +-- survives. As of 2022-06, the INSERT reuses a cached plan. It does so even\n> +-- under debug_discard_caches, since each index is new-in-transaction. If\n> +-- future work changes a cache lifecycle, this RESET may become necessary.\n> +RESET search_path;\n> +-- For a nonempty table, owner needs permissions throughout ii_Expressions.\n> +GRANT EXECUTE ON FUNCTION s.index_this_expr(s.citext, bool) TO regress_minimal;\n> +CREATE UNIQUE INDEX u2rows ON s.x USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE s.coll s.citext_ops)\n> + WHERE s.index_row_if(y);\n> +ALTER TABLE s.x ADD CONSTRAINT e2rows EXCLUDE USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE s.coll WITH s.=)\n> + WHERE (s.index_row_if(y));\n> +-- Shall not find s.coll via search_path, despite the s.const->public.setter\n> +-- call having set search_path=s during expression planning. 
Suppress the\n> +-- message itself, which depends on the database encoding.\n> +\\set VERBOSITY terse\n> +DO $$\n> +BEGIN\n> +ALTER TABLE s.x ADD CONSTRAINT underqualified EXCLUDE USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE coll WITH s.=)\n> + WHERE (s.index_row_if(y));\n> +EXCEPTION WHEN OTHERS THEN RAISE EXCEPTION '%', sqlstate; END$$;\n> +ERROR: 42704\n> +\\set VERBOSITY default\n> +ROLLBACK;\n> --- /dev/null\n> +++ b/contrib/citext/sql/create_index_acl.sql\n> @@ -0,0 +1,82 @@\n> +-- Each DefineIndex() ACL check uses either the original userid or the table\n> +-- owner userid; see its header comment. Here, confirm that DefineIndex()\n> +-- uses its original userid where necessary. The test works by creating\n> +-- indexes that refer to as many sorts of objects as possible, with the table\n> +-- owner having as few applicable privileges as possible. (The privileges.sql\n> +-- regress_sro_user tests look for the opposite defect; they confirm that\n> +-- DefineIndex() uses the table owner userid where necessary.)\n> +\n> +-- Don't override tablespaces; this version lacks allow_in_place_tablespaces.\n> +\n> +BEGIN;\n> +CREATE ROLE regress_minimal;\n> +CREATE SCHEMA s;\n> +CREATE EXTENSION citext SCHEMA s;\n> +-- Revoke all conceivably-relevant ACLs within the extension. 
The system\n> +-- doesn't check all these ACLs, but this will provide some coverage if that\n> +-- ever changes.\n> +REVOKE ALL ON TYPE s.citext FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_lt(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_le(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_eq(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_ge(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_gt(s.citext, s.citext) FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.citext_cmp(s.citext, s.citext) FROM PUBLIC;\n> +-- Functions sufficient for making an index column that has the side effect of\n> +-- changing search_path at expression planning time.\n> +CREATE FUNCTION public.setter() RETURNS bool VOLATILE\n> + LANGUAGE SQL AS $$SET search_path = s; SELECT true$$;\n> +CREATE FUNCTION s.const() RETURNS bool IMMUTABLE\n> + LANGUAGE SQL AS $$SELECT public.setter()$$;\n> +CREATE FUNCTION s.index_this_expr(s.citext, bool) RETURNS s.citext IMMUTABLE\n> + LANGUAGE SQL AS $$SELECT $1$$;\n> +REVOKE ALL ON FUNCTION public.setter() FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.const() FROM PUBLIC;\n> +REVOKE ALL ON FUNCTION s.index_this_expr(s.citext, bool) FROM PUBLIC;\n> +-- Even for an empty table, expression planning calls s.const & public.setter.\n> +GRANT EXECUTE ON FUNCTION public.setter() TO regress_minimal;\n> +GRANT EXECUTE ON FUNCTION s.const() TO regress_minimal;\n> +-- Function for index predicate.\n> +CREATE FUNCTION s.index_row_if(s.citext) RETURNS bool IMMUTABLE\n> + LANGUAGE SQL AS $$SELECT $1 IS NOT NULL$$;\n> +REVOKE ALL ON FUNCTION s.index_row_if(s.citext) FROM PUBLIC;\n> +-- Even for an empty table, CREATE INDEX checks ii_Predicate permissions.\n> +GRANT EXECUTE ON FUNCTION s.index_row_if(s.citext) TO regress_minimal;\n> +-- Non-extension, non-function objects.\n> +CREATE COLLATION s.coll (LOCALE=\"C\");\n> +CREATE TABLE s.x (y s.citext);\n> +ALTER TABLE s.x OWNER TO regress_minimal;\n> 
+-- Empty-table DefineIndex()\n> +CREATE UNIQUE INDEX u0rows ON s.x USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE s.coll s.citext_ops)\n> + WHERE s.index_row_if(y);\n> +ALTER TABLE s.x ADD CONSTRAINT e0rows EXCLUDE USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE s.coll WITH s.=)\n> + WHERE (s.index_row_if(y));\n> +-- Make the table nonempty.\n> +INSERT INTO s.x VALUES ('foo'), ('bar');\n> +-- If the INSERT runs the planner on index expressions, a search_path change\n> +-- survives. As of 2022-06, the INSERT reuses a cached plan. It does so even\n> +-- under debug_discard_caches, since each index is new-in-transaction. If\n> +-- future work changes a cache lifecycle, this RESET may become necessary.\n> +RESET search_path;\n> +-- For a nonempty table, owner needs permissions throughout ii_Expressions.\n> +GRANT EXECUTE ON FUNCTION s.index_this_expr(s.citext, bool) TO regress_minimal;\n> +CREATE UNIQUE INDEX u2rows ON s.x USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE s.coll s.citext_ops)\n> + WHERE s.index_row_if(y);\n> +ALTER TABLE s.x ADD CONSTRAINT e2rows EXCLUDE USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE s.coll WITH s.=)\n> + WHERE (s.index_row_if(y));\n> +-- Shall not find s.coll via search_path, despite the s.const->public.setter\n> +-- call having set search_path=s during expression planning. 
Suppress the\n> +-- message itself, which depends on the database encoding.\n> +\\set VERBOSITY terse\n> +DO $$\n> +BEGIN\n> +ALTER TABLE s.x ADD CONSTRAINT underqualified EXCLUDE USING btree\n> + ((s.index_this_expr(y, s.const())) COLLATE coll WITH s.=)\n> + WHERE (s.index_row_if(y));\n> +EXCEPTION WHEN OTHERS THEN RAISE EXCEPTION '%', sqlstate; END$$;\n> +\\set VERBOSITY default\n> +ROLLBACK;\n> --- a/src/backend/commands/indexcmds.c\n> +++ b/src/backend/commands/indexcmds.c\n> @@ -65,7 +65,10 @@\n> \t\t\t\t Oid relId,\n> \t\t\t\t char *accessMethodName, Oid accessMethodId,\n> \t\t\t\t bool amcanorder,\n> -\t\t\t\t bool isconstraint);\n> +\t\t\t\t bool isconstraint,\n> +\t\t\t\t Oid ddl_userid,\n> +\t\t\t\t int ddl_sec_context,\n> +\t\t\t\t int *ddl_save_nestlevel);\n> static Oid GetIndexOpClass(List *opclass, Oid attrType,\n> \t\t\t\tchar *accessMethodName, Oid accessMethodId);\n> static char *ChooseIndexName(const char *tabname, Oid namespaceId,\n> @@ -168,8 +171,7 @@\n> \t * Compute the operator classes, collations, and exclusion operators for\n> \t * the new index, so we can test whether it's compatible with the existing\n> \t * one. Note that ComputeIndexAttrs might fail here, but that's OK:\n> -\t * DefineIndex would have called this function with the same arguments\n> -\t * later on, and it would have failed then anyway.\n> +\t * DefineIndex would have failed later.\n> \t */\n> \tindexInfo = makeNode(IndexInfo);\n> \tindexInfo->ii_Expressions = NIL;\n> @@ -187,7 +189,7 @@\n> \t\t\t\t\t coloptions, attributeList,\n> \t\t\t\t\t exclusionOpNames, relationId,\n> \t\t\t\t\t accessMethodName, accessMethodId,\n> -\t\t\t\t\t amcanorder, isconstraint);\n> +\t\t\t\t\t amcanorder, isconstraint, InvalidOid, 0, NULL);\n> \n> \n> \t/* Get the soon-obsolete pg_index tuple. 
*/\n> @@ -280,6 +282,19 @@\n> * DefineIndex\n> *\t\tCreates a new index.\n> *\n> + * This function manages the current userid according to the needs of pg_dump.\n> + * Recreating old-database catalog entries in new-database is fine, regardless\n> + * of which users would have permission to recreate those entries now. That's\n> + * just preservation of state. Running opaque expressions, like calling a\n> + * function named in a catalog entry or evaluating a pg_node_tree in a catalog\n> + * entry, as anyone other than the object owner, is not fine. To adhere to\n> + * those principles and to remain fail-safe, use the table owner userid for\n> + * most ACL checks. Use the original userid for ACL checks reached without\n> + * traversing opaque expressions. (pg_dump can predict such ACL checks from\n> + * catalogs.) Overall, this is a mess. Future DDL development should\n> + * consider offering one DDL command for catalog setup and a separate DDL\n> + * command for steps that run opaque expressions.\n> + *\n> * 'relationId': the OID of the heap relation on which the index is to be\n> *\t\tcreated\n> * 'stmt': IndexStmt describing the properties of the new index.\n> @@ -581,7 +596,8 @@\n> \t\t\t\t\t coloptions, stmt->indexParams,\n> \t\t\t\t\t stmt->excludeOpNames, relationId,\n> \t\t\t\t\t accessMethodName, accessMethodId,\n> -\t\t\t\t\t amcanorder, stmt->isconstraint);\n> +\t\t\t\t\t amcanorder, stmt->isconstraint, root_save_userid,\n> +\t\t\t\t\t root_save_sec_context, &root_save_nestlevel);\n> \n> \t/*\n> \t * Extra checks when creating a PRIMARY KEY index.\n> @@ -639,9 +655,8 @@\n> \n> \t/*\n> \t * Roll back any GUC changes executed by index functions, and keep\n> -\t * subsequent changes local to this command. It's barely possible that\n> -\t * some index function changed a behavior-affecting GUC, e.g. xmloption,\n> -\t * that affects subsequent steps.\n> +\t * subsequent changes local to this command. 
This is essential if some\n> +\t * index function changed a behavior-affecting GUC, e.g. search_path.\n> \t */\n> \tAtEOXact_GUC(false, root_save_nestlevel);\n> \troot_save_nestlevel = NewGUCNestLevel();\n> @@ -996,6 +1011,10 @@\n> /*\n> * Compute per-index-column information, including indexed column numbers\n> * or index expressions, opclasses, and indoptions.\n> + *\n> + * If the caller switched to the table owner, ddl_userid is the role for ACL\n> + * checks reached without traversing opaque expressions. Otherwise, it's\n> + * InvalidOid, and other ddl_* arguments are undefined.\n> */\n> static void\n> ComputeIndexAttrs(IndexInfo *indexInfo,\n> @@ -1009,11 +1028,16 @@\n> \t\t\t\t char *accessMethodName,\n> \t\t\t\t Oid accessMethodId,\n> \t\t\t\t bool amcanorder,\n> -\t\t\t\t bool isconstraint)\n> +\t\t\t\t bool isconstraint,\n> +\t\t\t\t Oid ddl_userid,\n> +\t\t\t\t int ddl_sec_context,\n> +\t\t\t\t int *ddl_save_nestlevel)\n> {\n> \tListCell *nextExclOp;\n> \tListCell *lc;\n> \tint\t\t\tattn;\n> +\tOid\t\t\tsave_userid;\n> +\tint\t\t\tsave_sec_context;\n> \n> \t/* Allocate space for exclusion operator info, if needed */\n> \tif (exclusionOpNames)\n> @@ -1029,6 +1053,9 @@\n> \telse\n> \t\tnextExclOp = NULL;\n> \n> +\tif (OidIsValid(ddl_userid))\n> +\t\tGetUserIdAndSecContext(&save_userid, &save_sec_context);\n> +\n> \t/*\n> \t * process attributeList\n> \t */\n> @@ -1123,10 +1150,24 @@\n> \t\ttypeOidP[attn] = atttype;\n> \n> \t\t/*\n> -\t\t * Apply collation override if any\n> +\t\t * Apply collation override if any. 
Use of ddl_userid is necessary\n> +\t\t * due to ACL checks therein, and it's safe because collations don't\n> +\t\t * contain opaque expressions (or non-opaque expressions).\n> \t\t */\n> \t\tif (attribute->collation)\n> +\t\t{\n> +\t\t\tif (OidIsValid(ddl_userid))\n> +\t\t\t{\n> +\t\t\t\tAtEOXact_GUC(false, *ddl_save_nestlevel);\n> +\t\t\t\tSetUserIdAndSecContext(ddl_userid, ddl_sec_context);\n> +\t\t\t}\n> \t\t\tattcollation = get_collation_oid(attribute->collation, false);\n> +\t\t\tif (OidIsValid(ddl_userid))\n> +\t\t\t{\n> +\t\t\t\tSetUserIdAndSecContext(save_userid, save_sec_context);\n> +\t\t\t\t*ddl_save_nestlevel = NewGUCNestLevel();\n> +\t\t\t}\n> +\t\t}\n> \n> \t\t/*\n> \t\t * Check we have a collation iff it's a collatable type. The only\n> @@ -1154,12 +1195,25 @@\n> \t\tcollationOidP[attn] = attcollation;\n> \n> \t\t/*\n> -\t\t * Identify the opclass to use.\n> +\t\t * Identify the opclass to use. Use of ddl_userid is necessary due to\n> +\t\t * ACL checks therein. This is safe despite opclasses containing\n> +\t\t * opaque expressions (specifically, functions), because only\n> +\t\t * superusers can define opclasses.\n> \t\t */\n> +\t\tif (OidIsValid(ddl_userid))\n> +\t\t{\n> +\t\t\tAtEOXact_GUC(false, *ddl_save_nestlevel);\n> +\t\t\tSetUserIdAndSecContext(ddl_userid, ddl_sec_context);\n> +\t\t}\n> \t\tclassOidP[attn] = GetIndexOpClass(attribute->opclass,\n> \t\t\t\t\t\t\t\t\t\t atttype,\n> \t\t\t\t\t\t\t\t\t\t accessMethodName,\n> \t\t\t\t\t\t\t\t\t\t accessMethodId);\n> +\t\tif (OidIsValid(ddl_userid))\n> +\t\t{\n> +\t\t\tSetUserIdAndSecContext(save_userid, save_sec_context);\n> +\t\t\t*ddl_save_nestlevel = NewGUCNestLevel();\n> +\t\t}\n> \n> \t\t/*\n> \t\t * Identify the exclusion operator, if any.\n> @@ -1173,9 +1227,23 @@\n> \n> \t\t\t/*\n> \t\t\t * Find the operator --- it must accept the column datatype\n> -\t\t\t * without runtime coercion (but binary compatibility is OK)\n> +\t\t\t * without runtime coercion (but binary compatibility is 
OK).\n> +\t\t\t * Operators contain opaque expressions (specifically, functions).\n> +\t\t\t * compatible_oper_opid() boils down to oper() and\n> +\t\t\t * IsBinaryCoercible(). PostgreSQL would have security problems\n> +\t\t\t * elsewhere if oper() started calling opaque expressions.\n> \t\t\t */\n> +\t\t\tif (OidIsValid(ddl_userid))\n> +\t\t\t{\n> +\t\t\t\tAtEOXact_GUC(false, *ddl_save_nestlevel);\n> +\t\t\t\tSetUserIdAndSecContext(ddl_userid, ddl_sec_context);\n> +\t\t\t}\n> \t\t\topid = compatible_oper_opid(opname, atttype, atttype, false);\n> +\t\t\tif (OidIsValid(ddl_userid))\n> +\t\t\t{\n> +\t\t\t\tSetUserIdAndSecContext(save_userid, save_sec_context);\n> +\t\t\t\t*ddl_save_nestlevel = NewGUCNestLevel();\n> +\t\t\t}\n> \n> \t\t\t/*\n> \t\t\t * Only allow commutative operators to be used in exclusion\n\n\n-- \nRoberto C. S�nchez\nhttp://people.connexer.com/~roberto\nhttp://www.connexer.com\n\n\n",
"msg_date": "Wed, 27 Jul 2022 11:52:47 -0400",
"msg_from": "Roberto C. Sánchez <roberto@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Request for assistance to backport CVE-2022-1552 fixes to 9.6\n and 9.4"
}
] |
[
{
"msg_contents": "We have these two definitions in the source code:\n\n#define BTMaxItemSize(page) \\\n MAXALIGN_DOWN((PageGetPageSize(page) - \\\n MAXALIGN(SizeOfPageHeaderData + \\\n 3*sizeof(ItemIdData) + \\\n 3*sizeof(ItemPointerData)) - \\\n MAXALIGN(sizeof(BTPageOpaqueData))) / 3)\n#define BTMaxItemSizeNoHeapTid(page) \\\n MAXALIGN_DOWN((PageGetPageSize(page) - \\\n MAXALIGN(SizeOfPageHeaderData + 3*sizeof(ItemIdData)) - \\\n MAXALIGN(sizeof(BTPageOpaqueData))) / 3)\n\nIn my tests, PageGetPageSize(page) = 8192, SizeOfPageHeaderData = 24,\nsizeof(ItemIdData) = 4, sizeof(ItemPointerData) = 6, and\nsizeof(BTPageOpaqueData) = 16. Assuming MAXIMUM_ALIGNOF == 8, I\nbelieve that makes BTMaxItemSize come out to 2704 and\nBTMaxItemSizeNoHeapTid come out to 2712. I have no quibble with the\nformula for BTMaxItemSizeNoHeapTid. It's just saying that if you\nsubtract out the page header data, the special space, and enough space\nfor 3 line pointers, you have a certain amount of space left (8152\nbytes) and so a single item shouldn't use more than a third of that\n(2717 bytes) but since items use up space in increments of MAXALIGN,\nyou have to round down to the next such multiple (2712 bytes).\n\nBut what's up with BTMaxItemSize? Here, the idea as I understand it is\nthat we might need to add a TID to the item, so it has to be small\nenough to squeeze one in while still fitting under the limit. And at\nfirst glance everything seems to be OK, because BTMaxItemSize comes\nout to be 8 bytes less than BTMaxItemSizeNoHeapTid and that's enough\nspace to fit a heap TID for sure. However, it seems to me that the\nformula calculates this as if those additional 6 bytes were being\nseparately added to the page header or the line pointer array, whereas\nin reality they will be part of the tuple itself. 
I think that we\nshould be subtracting sizeof(ItemPointerData) at the very end, rather\nthan subtracting 3*sizeof(ItemPointerData) from the space available to\nbe divided by three.\n\nTo see why, suppose that sizeof(BTPageOpaqueData) were 24 rather than\n16. Then we'd have:\n\nBTMaxItemSize = MAXALIGN_DOWN((8192 - MAXALIGN(24 + 3 * 4 + 3 * 6) -\nMAXALIGN(24)) / 3) = MAXALIGN_DOWN((8192 - MAXALIGN(54) - 24) / 3) =\nMAXALIGN_DOWN(2704) = 2704\nBTMaxItemSizeNoHeapTid = MAXALIGN_DOWN((8192 - MAXALIGN(24 + 3 * 4) -\nMAXALIGN(24)) / 3 = MAXALIGN_DOWN((8192 - MAXALIGN(36) - 24) / 3) =\nMAXALIGN_DOWN(2709) = 2704\n\nThat's a problem, because if in that scenario you allow three 2704\nbyte items that don't need a heap TID and later you find you need to\nadd a heap TID to one of those items, the result will be bigger than\n2704 bytes, and then you can't fit three of them into a page.\n\nApplying the attached patch and running 'make check' suffices to\ndemonstrate the problem for me:\n\ndiff -U3 /Users/rhaas/pgsql/src/test/regress/expected/vacuum.out\n/Users/rhaas/pgsql/src/test/regress/results/vacuum.out\n--- /Users/rhaas/pgsql/src/test/regress/expected/vacuum.out\n2022-06-06 14:46:17.000000000 -0400\n+++ /Users/rhaas/pgsql/src/test/regress/results/vacuum.out\n2022-06-08 17:20:58.000000000 -0400\n@@ -137,7 +137,9 @@\n repeat('1234567890',269));\n -- index cleanup option is ignored if VACUUM FULL\n VACUUM (INDEX_CLEANUP TRUE, FULL TRUE) no_index_cleanup;\n+ERROR: cannot insert oversized tuple of size 2712 on internal page\nof index \"no_index_cleanup_idx\"\n VACUUM (FULL TRUE) no_index_cleanup;\n+ERROR: cannot insert oversized tuple of size 2712 on internal page\nof index \"no_index_cleanup_idx\"\n -- Toast inherits the value from its parent table.\n ALTER TABLE no_index_cleanup SET (vacuum_index_cleanup = false);\n DELETE FROM no_index_cleanup WHERE i < 15;\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 8 Jun 2022 17:23:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "BTMaxItemSize seems to be subtly incorrect"
},
{
"msg_contents": "On Wed, Jun 8, 2022 at 2:23 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> In my tests, PageGetPageSize(page) = 8192, SizeOfPageHeaderData = 24,\n> sizeof(ItemIdData) = 4, sizeof(ItemPointerData) = 6, and\n> sizeof(BTPageOpaqueData) = 16. Assuming MAXIMUM_ALIGNOF == 8, I\n> believe that makes BTMaxItemSize come out to 2704 and\n> BTMaxItemSizeNoHeapTid come out to 2712.\n\nI agree that these numbers are what you get on all mainstream\nplatforms. I know these specifics from memory alone, actually.\n\n> To see why, suppose that sizeof(BTPageOpaqueData) were 24 rather than\n> 16. Then we'd have:\n>\n> BTMaxItemSize = MAXALIGN_DOWN((8192 - MAXALIGN(24 + 3 * 4 + 3 * 6) -\n> MAXALIGN(24)) / 3) = MAXALIGN_DOWN((8192 - MAXALIGN(54) - 24) / 3) =\n> MAXALIGN_DOWN(2704) = 2704\n> BTMaxItemSizeNoHeapTid = MAXALIGN_DOWN((8192 - MAXALIGN(24 + 3 * 4) -\n> MAXALIGN(24)) / 3 = MAXALIGN_DOWN((8192 - MAXALIGN(36) - 24) / 3) =\n> MAXALIGN_DOWN(2709) = 2704\n>\n> That's a problem, because if in that scenario you allow three 2704\n> byte items that don't need a heap TID and later you find you need to\n> add a heap TID to one of those items, the result will be bigger than\n> 2704 bytes, and then you can't fit three of them into a page.\n\nSeems you must be right. I'm guessing that the field \"cabbage\" was\noriginally a nonce value, as part of a draft patch you're working on?\n\nI actually tested this in a fairly brute force fashion back when I was\nworking on the Postgres 12 nbtree stuff. Essentially I found a way to\nbuild the tallest possible B-Tree, consisting of only 3 items (plus a\nhigh key) on each leaf page, each of which was the largest possible\nsize, up to the byte. If memory serves, it is just about impossible to\nget beyond 7 levels. It took as long as 30 minutes or more to run the\ntest.\n\nI think that we should fix this on HEAD, on general principle. 
There\nis no reason to believe that this is a live bug, so a backpatch seems\nunnecessary.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 8 Jun 2022 14:55:27 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BTMaxItemSize seems to be subtly incorrect"
},
{
"msg_contents": "On Wed, Jun 8, 2022 at 5:55 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > That's a problem, because if in that scenario you allow three 2704\n> > byte items that don't need a heap TID and later you find you need to\n> > add a heap TID to one of those items, the result will be bigger than\n> > 2704 bytes, and then you can't fit three of them into a page.\n>\n> Seems you must be right. I'm guessing that the field \"cabbage\" was\n> originally a nonce value, as part of a draft patch you're working on?\n\nI wasn't originally setting out to modify BTPageOpaqueData at all,\njust borrow some special space. See the \"storing an explicit nonce\"\ndiscussion and patch set. But when this regression failure turned up I\nsaid to myself, hmm, I think this is an unrelated bug.\n\n> I think that we should fix this on HEAD, on general principle. There\n> is no reason to believe that this is a live bug, so a backpatch seems\n> unnecessary.\n\nYeah, I agree with not back-patching the fix, unless it turns out that\nthere is some platform where the same issue occurs without any\ncabbage. I assume that if it happened on any common system someone\nwould have complained about it by now, so probably it doesn't. I\nsuppose we could try to enumerate plausibly different values of the\nquantities involved and see if any of the combinations look like they\nlead to a bad result. I'm not really sure how many things could\nplausibly be different, though, apart from MAXIMUM_ALIGNOF.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 8 Jun 2022 19:18:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BTMaxItemSize seems to be subtly incorrect"
},
{
"msg_contents": "On Wed, Jun 8, 2022 at 4:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I wasn't originally setting out to modify BTPageOpaqueData at all,\n> just borrow some special space. See the \"storing an explicit nonce\"\n> discussion and patch set. But when this regression failure turned up I\n> said to myself, hmm, I think this is an unrelated bug.\n\nFWIW I don't see much difference between borrowing special space and\nadding something to BTPageOpaqueData. If anything I'd prefer the\nlatter.\n\n> > I think that we should fix this on HEAD, on general principle. There\n> > is no reason to believe that this is a live bug, so a backpatch seems\n> > unnecessary.\n>\n> Yeah, I agree with not back-patching the fix, unless it turns out that\n> there is some platform where the same issue occurs without any\n> cabbage. I assume that if it happened on any common system someone\n> would have complained about it by now, so probably it doesn't.\n\nI don't think it's possible on 32-bit platforms, which is the only\nfurther variation I can think of. But let's assume that the same\nproblem was in fact possible \"without cabbage\", just for the sake of\nargument. The worst problem that could result would be a failure to\nsplit a page that had maximally large keys, without TOAST compression\n-- which is what you demonstrated. There wouldn't be any downstream\nconsequences.\n\nHere's why: BTMaxItemSizeNoHeapTid() is actually what BTMaxItemSize()\nlooked like prior to Postgres 12. So the limit on internal pages never\nchanged, even in Postgres 12. There was no separate leaf page limit\nprior to 12. Only the rules on the leaf level ever really changed.\n\nNote also that amcheck has tests for this stuff. Though that probably\ndoesn't matter at all.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 8 Jun 2022 16:43:48 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BTMaxItemSize seems to be subtly incorrect"
},
{
"msg_contents": "On Wed, Jun 8, 2022 at 7:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> FWIW I don't see much difference between borrowing special space and\n> adding something to BTPageOpaqueData. If anything I'd prefer the\n> latter.\n\nI think this discussion will take us too far afield from the topic of\nthis thread, so I'll just say here that wouldn't solve the problem I\nwas trying to tackle.\n\n> Here's why: BTMaxItemSizeNoHeapTid() is actually what BTMaxItemSize()\n> looked like prior to Postgres 12. So the limit on internal pages never\n> changed, even in Postgres 12. There was no separate leaf page limit\n> prior to 12. Only the rules on the leaf level ever really changed.\n\nYeah, I noticed that, too.\n\nAre you going to code up a patch?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 9 Jun 2022 09:40:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BTMaxItemSize seems to be subtly incorrect"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 6:40 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Are you going to code up a patch?\n\nI can, but feel free to fix it yourself if you prefer. Your analysis\nseems sound.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 9 Jun 2022 10:39:17 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BTMaxItemSize seems to be subtly incorrect"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 1:39 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Jun 9, 2022 at 6:40 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Are you going to code up a patch?\n>\n> I can, but feel free to fix it yourself if you prefer. Your analysis\n> seems sound.\n\nI think I'd feel more comfortable if you wrote the patch, if that's possible.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 9 Jun 2022 14:20:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BTMaxItemSize seems to be subtly incorrect"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Jun 8, 2022 at 5:55 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > That's a problem, because if in that scenario you allow three 2704\n> > > byte items that don't need a heap TID and later you find you need to\n> > > add a heap TID to one of those items, the result will be bigger than\n> > > 2704 bytes, and then you can't fit three of them into a page.\n> >\n> > Seems you must be right. I'm guessing that the field \"cabbage\" was\n> > originally a nonce value, as part of a draft patch you're working on?\n> \n> I wasn't originally setting out to modify BTPageOpaqueData at all,\n> just borrow some special space. See the \"storing an explicit nonce\"\n> discussion and patch set. But when this regression failure turned up I\n> said to myself, hmm, I think this is an unrelated bug.\n\nI had seen something along these lines when also playing with trying to\nuse special space. I hadn't had a chance to run down exactly where it\nwas coming from, so thanks for working on this.\n\n> > I think that we should fix this on HEAD, on general principle. There\n> > is no reason to believe that this is a live bug, so a backpatch seems\n> > unnecessary.\n> \n> Yeah, I agree with not back-patching the fix, unless it turns out that\n> there is some platform where the same issue occurs without any\n> cabbage. I assume that if it happened on any common system someone\n> would have complained about it by now, so probably it doesn't. I\n> suppose we could try to enumerate plausibly different values of the\n> quantities involved and see if any of the combinations look like they\n> lead to a bad result. I'm not really sure how many things could\n> plausibly be different, though, apart from MAXIMUM_ALIGNOF.\n\nAgreed that it doesn't seem like we'd need to backpatch this.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 9 Jun 2022 14:47:31 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: BTMaxItemSize seems to be subtly incorrect"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 11:20 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think I'd feel more comfortable if you wrote the patch, if that's possible.\n\nOkay, pushed a fix just now.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 4 Aug 2022 20:55:33 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BTMaxItemSize seems to be subtly incorrect"
},
{
"msg_contents": "On Fri, Aug 5, 2022 at 3:56 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Jun 9, 2022 at 11:20 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I think I'd feel more comfortable if you wrote the patch, if that's possible.\n>\n> Okay, pushed a fix just now.\n\nFYI florican and lapwing showed:\n\n2022-08-05 01:04:29.903 EDT [34485:5] FATAL: deduplication failed to\nadd heap tid to pending posting list\n2022-08-05 01:04:29.903 EDT [34485:6] CONTEXT: WAL redo at 0/49708D8\nfor Btree/DEDUP: nintervals 4; blkref #0: rel 1663/16384/2674, blk 1\n\n\n",
"msg_date": "Fri, 5 Aug 2022 17:25:08 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BTMaxItemSize seems to be subtly incorrect"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 10:25 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> FYI florican and lapwing showed:\n>\n> 2022-08-05 01:04:29.903 EDT [34485:5] FATAL: deduplication failed to\n> add heap tid to pending posting list\n> 2022-08-05 01:04:29.903 EDT [34485:6] CONTEXT: WAL redo at 0/49708D8\n> for Btree/DEDUP: nintervals 4; blkref #0: rel 1663/16384/2674, blk 1\n\nThis very likely has something to do with the way nbtdedup.c uses\nBTMaxItemSize(), which apparently won't work on these 32-bit systems\nnow.\n\nI'll fix this tomorrow morning.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 4 Aug 2022 22:40:44 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BTMaxItemSize seems to be subtly incorrect"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 10:40 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> This very likely has something to do with the way nbtdedup.c uses\n> BTMaxItemSize(), which apparently won't work on these 32-bit systems\n> now.\n\nUpdate: I discovered that I can get the regression tests to fail (even\non mainstream 64-bit platforms) by MAXALIGN()'ing the expression that\nwe assign to state->maxpostingsize at the top of _bt_dedup_pass().\nThis is surprising; it contradicts existing comments that explain that\nthe existing max is 1/6 of a page by choice, to get better space\nutilization than the more natural cap of 1/3 of a page. It now looks\nlike that might have actually been strictly necessary, all along.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 5 Aug 2022 10:13:41 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BTMaxItemSize seems to be subtly incorrect"
},
{
"msg_contents": "On Fri, Aug 5, 2022 at 10:13 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Update: I discovered that I can get the regression tests to fail (even\n> on mainstream 64-bit platforms) by MAXALIGN()'ing the expression that\n> we assign to state->maxpostingsize at the top of _bt_dedup_pass().\n\nLooks like this was nothing more than a silly oversight with how the\nmacro was defined. As written, it would evaluate to the wrong thing at\nthe same point in nbtdedup.c, just because it was used in an\nexpression.\n\nPushed a fix for that just now.\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 5 Aug 2022 13:10:47 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BTMaxItemSize seems to be subtly incorrect"
}
] |
[
{
"msg_contents": "Recently I had someone complaining about a pg_restore failure and I\nbelieve we semi-regularly get complaints that are similar -- though\nI'm having trouble searching for them because the keywords \"dump\nrestore failure\" are pretty generic.\n\nThe root cause here -- and I believe for a lot of users -- are\nfunctions that are declared IMMUTABLE but are not for reasons that\naren't obvious to the user. Indeed poking at this more carefully I\nthink it's downright challenging to write an IMMUTABLE function at\nall. I suspect *most* users, perhaps even nearly all users, who write\nfunctions intending them to be immutable are actually not really as\nsuccessful as they believe.\n\nThe biggest culprit is of course search_path. Afaics it's nigh\nimpossible to write any non-trivial immutable function without just\nsetting the search_path GUC on the function. And there's nothing\nPostgres that requires that. I don't even see anything in the docs\nrecommending it.\n\nMany users probably always run with the same search_path so in\npractice they're probably mostly safe. But one day they could insert\ndata with a different search path with a different function definition\nin their path and corrupt their index which would be.... poor... Or as\nin my user they could discover the problem only in the middle of an\nupgrade which is a terrible time to discover it.\n\nI would suggest we should probably at the very least warn users if\nthey create an immutable function that doesn't have search_path set\nfor the function. I would actually prefer to make it an error but that\nbrings in compatibility issues. Perhaps it could be optional.\n\nBut putting a GUC on a function imposes a pretty heavy performance\ncost. I'm not sure how bad it is compared to running plpgsql code let\nalone other languages but IIUC it invalidates some catalog caches\nwhich for something happening repeatedly in, e.g. 
a data load would be\npretty bad.\n\nIt would be nice to have a way to avoid the performance cost and I see\ntwo options.\n\nThinking of plpgsql here, we already run the raw parser on all sql\nwhen the function is defined. We could somehow check whether the\nraw_parser found any non-schema-qualified references. This looks like\nit would be awkward but doable. That would allow users to write\nnon-search_path-dependent code and if postgres doesn't warn they would\nknow they've done it properly. It would still put quite a burden on\nusers, especially when it comes to operators...\n\nOr alternatively we could offer lexical scoping so that all objects\nare looked up at parse time and the fully qualified reference is\nstored instead of the non-qualified reference. That would be more\nsimilar to how views and other object references are handled.\n\nI suppose there's a third option that we could provide something which\ninstead of *setting* the guc when a function is entered just verifies\nthat guc is set as expected. That way the function would simply throw\nan error if search_path is \"incorrect\" and not have to invalidate any\ncaches. That would at least avoid index corruption but not guarantee\ndump/reload would work.\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 8 Jun 2022 17:42:48 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Tightening behaviour for non-immutable behaviour in immutable\n functions"
},
{
"msg_contents": "\n\n> On Jun 8, 2022, at 2:42 PM, Greg Stark <stark@mit.edu> wrote:\n> \n> Thinking of plpgsql here, we already run the raw parser on all sql\n> when the function is defined. We could somehow check whether the\n> raw_parser found any non-schema-qualified references. This looks like\n> it would be awkward but doable. That would allow users to write\n> non-search_path-dependent code and if postgres doesn't warn they would\n> know they've done it properly. It would still put quite a burden on\n> users, especially when it comes to operators...\n> \n> Or alternatively we could offer lexical scoping so that all objects\n> are looked up at parse time and the fully qualified reference is\n> stored instead of the non-qualified reference. That would be more\n> similar to how views and other object references are handled.\n\nI like the general idea, but I'm confused why you are limiting the analysis to search path resolution. The following is clearly wrong, but not for that reason:\n\ncreate function public.identity () returns double precision as $$\n select random()::integer;\n$$\nlanguage sql\nimmutable\nparallel safe\n-- set search_path to 'pg_catalog'\n;\n\nUncommenting that last bit wouldn't make it much better.\n\nIsn't the more general approach to look for non-immutable (or non-stable) operations, with object resolution just one type of non-immutable operation? Perhaps raise an error when you can prove the given function's provolatile marking is wrong, and a warning when you cannot prove the marking is correct? That would tend to give warnings for polymorphic functions that use functions or operators over the polymorphic types, or which use dynamic sql, but maybe that's ok. Those functions probably deserve closer scrutiny anyway.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 8 Jun 2022 16:39:07 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Tightening behaviour for non-immutable behaviour in immutable\n functions"
},
{
"msg_contents": "On Wed, 8 Jun 2022 at 19:39, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n>\n> I like the general idea, but I'm confused why you are limiting the analysis to search path resolution. The following is clearly wrong, but not for that reason:\n>\n> create function public.identity () returns double precision as $$\n> select random()::integer;\n\nWell.... I did originally think it would be necessary to consider\ncases like this. (or even just cases where you call a user function\nthat is not immutable because of search_path dependencies).\n\nBut there are two problems:\n\nFirstly, that would be a lot harder to implement. We don't actually do\nany object lookups in plpgsql when defining plpgsql functions. So this\nwould be a much bigger change.\n\nBut secondly, there are a lot of cases where non-immutable functions\n*are* immutable if they're used carefully. to_char() is obviously the\ncommon example, but it's perfectly safe if you set the timezone or\nother locale settings or if your format string doesn't actually depend\non any settings.\n\nSimilarly, a user function that is non-immutable only due to a\ndependency on search_path *would* be safe to call from within an\nimmutable function if that function does set search_path. The\nsearch_path would be inherited alleviating the problem.\n\nEven something like random() could be safely used in an immutable\nfunction as long as it doesn't actually change the output -- say if it\njust logs diagnostic messages as a result?\n\nGenerally I think the idea is that the user *is* responsible for\nwriting immutable functions carefully to hide any non-deterministic\nbehaviour from the code they're calling. But that does raise the\nquestion of why to focus on search_path.\n\nI guess I'm just saying my goal isn't to *prove* the code is correct.\nThe user is still responsible for asserting it's correct. I just want\nto detect cases where I can prove (or at least show it's likely that)\nit's *not* correct.\n\n-- \ngreg\n\n\n",
"msg_date": "Thu, 9 Jun 2022 15:39:07 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Tightening behaviour for non-immutable behaviour in immutable\n functions"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 12:39 PM Greg Stark <stark@mit.edu> wrote:\n> Generally I think the idea is that the user *is* responsible for\n> writing immutable functions carefully to hide any non-deterministic\n> behaviour from the code they're calling. But that does raise the\n> question of why to focus on search_path.\n>\n> I guess I'm just saying my goal isn't to *prove* the code is correct.\n> The user is still responsible for asserting it's correct. I just want\n> to detect cases where I can prove (or at least show it's likely that)\n> it's *not* correct.\n\nRight. It's virtually impossible to prove that, for many reasons, so\nthe final responsibility must lie with the user-defined code.\n\nPresumably there is still significant value in detecting cases where\nsome user-defined code provably does the wrong thing. Especially by\ntargeting mistakes that experience has shown are relatively common.\nThat's what the search_path case seems like to me.\n\nIf somebody else wants to write another patch that adds on that,\ngreat. If not, then having this much still seems useful.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 9 Jun 2022 13:11:31 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Tightening behaviour for non-immutable behaviour in immutable\n functions"
},
{
"msg_contents": "On Thu, 9 Jun 2022 at 16:12, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> Presumably there is still significant value in detecting cases where\n> some user-defined code provably does the wrong thing. Especially by\n> targeting mistakes that experience has shown are relatively common.\n> That's what the search_path case seems like to me.\n\nBy \"relatively common\" I think we're talking \"nigh universal\". Afaics\nthere are no warnings in the docs about worrying about search_path on\nIMMUTABLE functions. There is for SECURITY DEFINER but I have to admit\nI wasn't aware myself of all the gotchas described there.\n\nFor that matter.... the gotchas are a bit .... convoluted....\n\nIf you leave out pg_catalog from search_path that's fine but if you\nleave out pg_temp that's a security disaster. If you put pg_catalog in\nit better be first or else it might be ok or might be a security issue\nbut when you put pg_temp in it better be last or else it's\n*definitely* a disaster. $user is in search_path by default and that's\nfine for SECURITY DEFINER functions but it's a disaster for IMMUTABLE\nfunctions...\n\nI kind of feel like perhaps all the implicit stuff is unnecessary\nbaroque frills. We should just put pg_temp and pg_catalog into the\ndefault postgresql.conf search_path and assume users will keep them\nthere. And I'm not sure why we put pg_temp *first* -- I mean it sort\nof seems superficially sensible but it doesn't seem like there's any\nreal reason to name your temporary tables the same as your permanent\nones so why not just always add it last?\n\n\nI've attached a very WIP patch that implements the checks I'm leaning\ntowards making (as elogs currently). They cause a ton of regression\nfailures so probably we need to think about how to reduce the pain for\nusers upgrading...\n\nPerhaps we should automatically fix up the current search patch and\nattach it to functions where necessary for users instead of just\nwhingeing at them....",
"msg_date": "Mon, 13 Jun 2022 16:50:54 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Tightening behaviour for non-immutable behaviour in immutable\n functions"
},
{
"msg_contents": "On Mon, Jun 13, 2022 at 1:51 PM Greg Stark <stark@mit.edu> wrote:\n> By \"relatively common\" I think we're talking \"nigh universal\". Afaics\n> there are no warnings in the docs about worrying about search_path on\n> IMMUTABLE functions. There is for SECURITY DEFINER but I have to admit\n> I wasn't aware myself of all the gotchas described there.\n\nI didn't realize that it was that bad. Even if it's only 10% as bad as\nyou say, it would still be very valuable to do something about it\n(ideally with an approach that is non-invasive).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 13 Jun 2022 18:41:17 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Tightening behaviour for non-immutable behaviour in immutable\n functions"
},
{
"msg_contents": "On Mon, Jun 13, 2022 at 06:41:17PM -0700, Peter Geoghegan wrote:\n> On Mon, Jun 13, 2022 at 1:51 PM Greg Stark <stark@mit.edu> wrote:\n>> By \"relatively common\" I think we're talking \"nigh universal\". Afaics\n>> there are no warnings in the docs about worrying about search_path on\n>> IMMUTABLE functions. There is for SECURITY DEFINER but I have to admit\n>> I wasn't aware myself of all the gotchas described there.\n> \n> I didn't realize that it was that bad. Even if it's only 10% as bad as\n> you say, it would still be very valuable to do something about it\n> (ideally with an approach that is non-invasive).\n\nHaving checks implemented so as users cannot bite themselves back is a\ngood idea in the long term, but I have also seen cases where abusing\nof immutable functions was useful:\n- Enforce the push down of function expressions to remote server.\n- Regression tests. Just a few weeks ago I have played with an\nadvisory lock within an index expression.\n\nPerhaps I never should have done what the first point was doing\nanyway, but having a way to disable any of that, be it just a\ndeveloper option for the purpose of some regression tests, would be\nnice. Skimming quickly through the patch, any of the checks related\nto search_path would not apply to the fancy cases I saw, though.\n--\nMichael",
"msg_date": "Tue, 14 Jun 2022 11:32:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tightening behaviour for non-immutable behaviour in immutable\n functions"
},
{
"msg_contents": "On Mon, 13 Jun 2022 at 16:50, Greg Stark <stark@mit.edu> wrote:\n>\n> For that matter.... the gotchas are a bit .... convoluted....\n>\n> Perhaps we should automatically fix up the current search patch and\n> attach it to functions where necessary for users instead of just\n> whingeing at them....\n\nSo I reviewed my own patch and.... it was completely broken... I fixed\nit to actually check the right variables.\n\nI also implemented the other idea above of actually fixing up\nsearch_path in proconfig for the user by default. I think this is\ngoing to be more practical.\n\nThe problem with expecting the user to provide search_path is that\nthey probably aren't today so the warnings would be firing for\neverything...\n\nProviding a fixed up search_path for users would give them a smoother\nupgrade process where we can give a warning only if the search_path is\nchanged substantively which is much less likely.\n\nI'm still quite concerned about the performance impact of having\nsearch_path on so many functions. It causes a cache flush which could\nbe pretty painful on a frequently called function such as one in an\nindex expression during a data load....\n\nThe other issue is that having proconfig set does prevent these\nfunctions from being inlined which can be seen in the regression test\nas seen below. I'm not sure how big a problem this is for users.\nInlining is important for many use cases I think. Maybe we can go\nahead and inline things even if they have a proconfig if it matches\nthe proconfig on the caller? Or maybe even check if we get the same\nobjects from both search_paths?\n\n\nOf course this patch is still very WIP. Only one or the other function\nmakes sense to keep. And I'm not opposed to having a GUC to\nenable/disable the enforcement or warnings. And the code itself needs\nto be cleaned up with parts of it moving to guc.c and/or namespace.c.\n\n\nExample of regression tests noticing that immutable functions with\nproconfig set become non-inlineable:\n\ndiff -U3 /home/stark/src/postgresql/src/test/regress/expected/rangefuncs.out\n/home/stark/src/postgresql/src/test/regress/results/rangefuncs.out\n--- /home/stark/src/postgresql/src/test/regress/expected/rangefuncs.out\n2022-01-17 12:01:54.958628564 -0500\n+++ /home/stark/src/postgresql/src/test/regress/results/rangefuncs.out\n2022-06-16 02:16:47.589703966 -0400\n@@ -1924,14 +1924,14 @@\n select * from array_to_set(array['one', 'two']) as t(f1 point,f2 text);\n ERROR: return type mismatch in function declared to return record\n DETAIL: Final statement returns integer instead of point at column 1.\n-CONTEXT: SQL function \"array_to_set\" during inlining\n+CONTEXT: SQL function \"array_to_set\" during startup\n explain (verbose, costs off)\n select * from array_to_set(array['one', 'two']) as t(f1\nnumeric(4,2),f2 text);\n- QUERY PLAN\n---------------------------------------------------------------\n- Function Scan on pg_catalog.generate_subscripts i\n- Output: i.i, ('{one,two}'::text[])[i.i]\n- Function Call: generate_subscripts('{one,two}'::text[], 1)\n+ QUERY PLAN\n+----------------------------------------------------\n+ Function Scan on public.array_to_set t\n+ Output: f1, f2\n+ Function Call: array_to_set('{one,two}'::text[])\n (3 rows)\n\n create temp table rngfunc(f1 int8, f2 int8);\n@@ -2064,11 +2064,12 @@\n\n explain (verbose, costs off)\n select * from testrngfunc();\n- QUERY PLAN\n---------------------------------------------------------\n- Result\n- Output: 7.136178::numeric(35,6), 7.14::numeric(35,2)\n-(2 rows)\n+ QUERY PLAN\n+-------------------------------------\n+ Function Scan on public.testrngfunc\n+ Output: f1, f2\n+ Function Call: testrngfunc()\n+(3 rows)\n\n select * from testrngfunc();\n f1 | f2\n\n\n--\ngreg",
"msg_date": "Thu, 16 Jun 2022 12:04:12 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Tightening behaviour for non-immutable behaviour in immutable\n functions"
},
{
"msg_contents": "On Thu, 16 Jun 2022 at 12:04, Greg Stark <stark@mit.edu> wrote:\n>\n> Providing a fixed up search_path for users would give them a smoother\n> upgrade process where we can give a warning only if the search_path is\n> changed substantively which is much less likely.\n>\n> I'm still quite concerned about the performance impact of having\n> search_path on so many functions. It causes a cache flush which could\n> be pretty painful on a frequently called function such as one in an\n> index expression during a data load....\n\nSo it seems I missed a small change in Postgres SQL function world,\nnamely the SQL standard syntax and prosqlbody column from e717a9a18.\n\nThis feature is huge. It's awesome! It basically provides the lexical\nscoping feature I was hoping to implement. Any sql immutable standard\nsyntax sql function can be safely used in indexes or elsewhere\nregardless of your search_path as all the names are already resolved.\n\nI'm now thinking we should just provide a LEXICAL option on Postgres\nstyle functions to implement the same name path and store sqlbody for\nthem as well. They would have to be bound by the same restrictions\n(notably no polymorphic parameters) but otherwise I think it should be\nstraightforward.\n\nFunctions defined this way would always be safe for pg_dump regardless\nof the search_path used to define them and would also protect users\nfrom accidentally corrupting indexes when users have different\nsearch_paths.\n\nThis doesn't really address plpgsql functions of course, I doubt we\ncan do the same thing.\n\n\n--\ngreg\n\n\n",
"msg_date": "Wed, 22 Jun 2022 18:35:18 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Tightening behaviour for non-immutable behaviour in immutable\n functions"
},
{
"msg_contents": "Hi,\n\nOn 2022-06-16 12:04:12 -0400, Greg Stark wrote:\n> Of course this patch is still very WIP. Only one or the other function\n> makes sense to keep. And I'm not opposed to having a GUC to\n> enable/disable the enforcement or warnings. And the code itself needs\n> to be cleaned up with parts of it moving to guc.c and/or namespace.c.\n\nThis currently obviously doesn't pass tests - are you planning to work on this\nfurther? As is I'm not really clear what the CF entry is for. Given the\ncurrent state it doesn't look like it's actually looking for review?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 1 Oct 2022 18:05:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Tightening behaviour for non-immutable behaviour in immutable\n functions"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nRecently when looking at the \"System Catalogs\" Tables of Contents [1],\nI was wondering why are those headings \"Overview\" and \"System Views\"\nat the same section level as the catalogs/views within them.\n\n~~~\n\ne.g.1. Current:\n\nChapter 53. \"System Catalogs\"\n======\n53.1. Overview\n53.2. pg_aggregate\n53.3. pg_am\n53.4. pg_amop\n53.5. pg_amproc\n...\n53.66. System Views\n53.67. pg_available_extensions\n53.68. pg_available_extension_versions\n53.69. pg_backend_memory_contexts\n53.70. pg_config\n...\n======\n\ne.g.2 What I thought it should look like:\n\nChapter 53. \"System Catalogs and Views\" <-- chapter name change\n======\n53.1. System Catalogs <-- heading name change\n53.1.1. pg_aggregate\n53.1.2. pg_am\n53.1.3. pg_amop\n53.1.4. pg_amproc\n...\n53.2. System Views\n53.2.1. pg_available_extensions\n53.2.2. pg_available_extension_versions\n53.2.3. pg_backend_memory_contexts\n53.2.4. pg_config\n...\n======\n\n~~~\n\nOTOH it looks like this table of contents page has been this way\nforever (20+ years?). It is hard to believe nobody else suggested\nmodifying it in all that time, so perhaps there is some reason for it\nbeing like it is?\n\nThoughts?\n\n------\n[1] https://www.postgresql.org/docs/15/catalogs.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 9 Jun 2022 09:29:25 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": ""
},
{
"msg_contents": "On 09.06.22 01:29, Peter Smith wrote:\n> OTOH it looks like this table of contents page has been this way\n> forever (20+ years?). It is hard to believe nobody else suggested\n> modifying it in all that time, so perhaps there is some reason for it\n> being like it is?\n\nInitially, that chapter did not document any system views.\n\n\n",
"msg_date": "Thu, 9 Jun 2022 15:33:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re:"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Initially, that chapter did not document any system views.\n\nMaybe we could make the system views a separate chapter?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jun 2022 09:50:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": ""
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 11:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > Initially, that chapter did not document any system views.\n>\n> Maybe we could make the system views a separate chapter?\n\n+1\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 16 Jun 2022 10:59:52 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re:"
},
{
"msg_contents": "On Thu, Jun 16, 2022 at 10:59 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, Jun 9, 2022 at 11:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > > Initially, that chapter did not document any system views.\n> >\n> > Maybe we could make the system views a separate chapter?\n>\n> +1\n\nThere has not been any activity on this thread for a while, so I am\njust wondering what I should do next about it:\n\nAre there any other opinions about this?\n\nIf there is no interest whatsoever in splitting the existing \"System\nCatalogs\" into 2 chapters (\"System Catalogs\" and \"System Views\") then\nI will abandon the idea.\n\nBut if others also feel it might be better to split them, I can put\npatching this on my TODO list and share it sometime later.\n\nTIA.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 7 Jul 2022 09:29:13 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re:"
},
{
"msg_contents": "On Thu, Jul 7, 2022 at 09:29:13AM +1000, Peter Smith wrote:\n> On Thu, Jun 16, 2022 at 10:59 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Thu, Jun 9, 2022 at 11:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > > > Initially, that chapter did not document any system views.\n> > >\n> > > Maybe we could make the system views a separate chapter?\n> >\n> > +1\n> \n> There has not been any activity on this thread for a while, so I am\n> just wondering what I should do next about it:\n> \n> Are there any other opinions about this?\n> \n> If there is no interest whatsoever in splitting the existing \"System\n> Catalogs\" into 2 chapters (\"System Catalogs\" and \"System Views\") then\n> I will abandon the idea.\n> \n> But if others also feel it might be better to split them, I can put\n> patching this on my TODO list and share it sometime later.\n\nLooking at the docs:\n\n\thttps://www.postgresql.org/docs/devel/catalogs.html\n\thttps://www.postgresql.org/docs/devel/views-overview.html\n\nit is clear this needs to be fixed, and I would be glad to do it soon. \nI don't need a submitted patch.\n\nMy only question is whether we apply this to head, head & PG 15, or all\nbranches? I think the URLs will change with this adjustment so we might\nwant to do only head & PG 15.\n\nThere are two reasons this didn't get addressed earlier. First, I have\nbeen focusing on some larger community issues the past few months, and I\nnow see people are complaining some of these issues are being ignored\n--- I need to refocus on those smaller issues. Second, the original\nemail thread had no email subject, which tends to cause it to get\nignored and to sometimes be threaded with other unrelated emails that\nalso have no subject line.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 7 Jul 2022 12:17:40 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "System catalog documentation chapter"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 2:17 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, Jul 7, 2022 at 09:29:13AM +1000, Peter Smith wrote:\n> > On Thu, Jun 16, 2022 at 10:59 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Thu, Jun 9, 2022 at 11:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > >\n> > > > Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > > > > Initially, that chapter did not document any system views.\n> > > >\n> > > > Maybe we could make the system views a separate chapter?\n> > >\n> > > +1\n> >\n> > There has not been any activity on this thread for a while, so I am\n> > just wondering what I should do next about it:\n> >\n> > Are there any other opinions about this?\n> >\n> > If there is no interest whatsoever in splitting the existing \"System\n> > Catalogs\" into 2 chapters (\"System Catalogs\" and \"System Views\") then\n> > I will abandon the idea.\n> >\n> > But if others also feel it might be better to split them, I can put\n> > patching this on my TODO list and share it sometime later.\n>\n> Looking at the docs:\n\nThanks for looking at this.\n\n>\n> https://www.postgresql.org/docs/devel/catalogs.html\n> https://www.postgresql.org/docs/devel/views-overview.html\n>\n> it is clear this needs to be fixed, and I would be glad to do it soon.\n> I don't need a submitted patch.\n\nSure. I will step back now and let you fix it.\n\n>\n> My only question is whether we apply this to head, head & PG 15, or all\n> branches? I think the URLs will change with this adjustment so we might\n> want to do only head & PG 15.\n\nAFAIK the chapter has been structured like this for many years and\nnobody patched it sooner, so perhaps that is an indication the older\nbranches don't really need changing?\n\n>\n> There are two reasons this didn't get addressed earlier. First, I have\n> been focusing on some larger community issues the past few months, and I\n> now see people are complaining some of these issues are being ignored\n> --- I need to refocus on those smaller issues. Second, the original\n> email thread had no email subject, which tends to cause it to get\n> ignored and to sometimes be threaded with other unrelated emails that\n> also have no subject line.\n>\n\nI'm not complaining - the initial dodgy subject was entirely my fault.\nI immediately re-posted the email to include a proper subject, but\nthen the responses came back on the (no subject) thread anyway so that\nbecame the dominant one. Next time I'll try to take more care.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 8 Jul 2022 09:21:13 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 09:21:13AM +1000, Peter Smith wrote:\n> > My only question is whether we apply this to head, head & PG 15, or all\n> > branches? I think the URLs will change with this adjustment so we might\n> > want to do only head & PG 15.\n> \n> AFAIK the chapter has been structured like this for many years and\n> nobody patched it sooner, so perhaps that is an indication the older\n> branches don't really need changing?\n\nAgreed. I don't want to break links into the documentation in final\nreleased versions, so head and PG15 seem wise.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 8 Jul 2022 11:45:33 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Agreed. I don't want to break links into the documentation in final\n> released versions, so head and PG15 seem wise.\n\nI would not expect this to change the doc URLs for any individual\ncatalogs or views --- if it does, I won't be happy.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 08 Jul 2022 11:49:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 11:49:47AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Agreed. I don't want to break links into the documentation in final\n> > released versions, so head and PG15 seem wise.\n> \n> I would not expect this to change the doc URLs for any individual\n> catalogs or views --- if it does, I won't be happy.\n\nGood point --- I thought the chapter was in the URL, but I now see it is\njust the section heading:\n\n\thttps://www.postgresql.org/docs/devel/view-pg-available-extensions.html\n\nso I guess we can backpatch this with no issues.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 8 Jul 2022 12:07:45 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 12:07:45PM -0400, Bruce Momjian wrote:\n> On Fri, Jul 8, 2022 at 11:49:47AM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > Agreed. I don't want to break links into the documentation in final\n> > > released versions, so head and PG15 seem wise.\n> > \n> > I would not expect this to change the doc URLs for any individual\n> > catalogs or views --- if it does, I won't be happy.\n> \n> Good point --- I thought the chapter was in the URL, but I now see it is\n> just the section heading:\n> \n> \thttps://www.postgresql.org/docs/devel/view-pg-available-extensions.html\n> \n> so I guess we can backpatch this with no issues.\n\nAttached is a patch to accomplish this. Its output can be seen here:\n\n\thttps://momjian.us/tmp/pgsql/internals.html\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Fri, 8 Jul 2022 15:32:56 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On Sat, Jul 9, 2022 at 5:32 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n...\n\n> Attached is a patch to accomplish this. Its output can be seen here:\n>\n> https://momjian.us/tmp/pgsql/internals.html\n>\n\nThat output looks good to me.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 11 Jul 2022 08:55:08 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On 08.07.22 18:07, Bruce Momjian wrote:\n> so I guess we can backpatch this with no issues.\n\nIt inserts a new chapter, which would renumber all other chapters. \nThat's a pretty big change to backpatch. I'm against that.\n\n\n",
"msg_date": "Tue, 12 Jul 2022 20:56:01 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On 08.07.22 21:32, Bruce Momjian wrote:\n> On Fri, Jul 8, 2022 at 12:07:45PM -0400, Bruce Momjian wrote:\n>> On Fri, Jul 8, 2022 at 11:49:47AM -0400, Tom Lane wrote:\n>>> Bruce Momjian <bruce@momjian.us> writes:\n>>>> Agreed. I don't want to break links into the documentation in final\n>>>> released versions, so head and PG15 seem wise.\n>>>\n>>> I would not expect this to change the doc URLs for any individual\n>>> catalogs or views --- if it does, I won't be happy.\n>>\n>> Good point --- I thought the chapter was in the URL, but I now see it is\n>> just the section heading:\n>>\n>> \thttps://www.postgresql.org/docs/devel/view-pg-available-extensions.html\n>>\n>> so I guess we can backpatch this with no issues.\n> \n> Attached is a patch to accomplish this. Its output can be seen here:\n> \n> \thttps://momjian.us/tmp/pgsql/internals.html\n\nviews.sgml is a pretty generic name for a chapter that just contains \nsystem views.\n\n\n\n",
"msg_date": "Tue, 12 Jul 2022 20:56:36 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 08:56:01PM +0200, Peter Eisentraut wrote:\n> On 08.07.22 18:07, Bruce Momjian wrote:\n> > so I guess we can backpatch this with no issues.\n> \n> It inserts a new chapter, which would renumber all other chapters. That's a\n> pretty big change to backpatch. I'm against that.\n\nOkay, I can see the renumbering as being confusing so I will do PG 15\nand head only.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 12 Jul 2022 17:16:41 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 08:56:36PM +0200, Peter Eisentraut wrote:\n> On 08.07.22 21:32, Bruce Momjian wrote:\n> > On Fri, Jul 8, 2022 at 12:07:45PM -0400, Bruce Momjian wrote:\n> > > On Fri, Jul 8, 2022 at 11:49:47AM -0400, Tom Lane wrote:\n> > > > Bruce Momjian <bruce@momjian.us> writes:\n> > > > > Agreed. I don't want to break links into the documentation in final\n> > > > > released versions, so head and PG15 seem wise.\n> > > > \n> > > > I would not expect this to change the doc URLs for any individual\n> > > > catalogs or views --- if it does, I won't be happy.\n> > > \n> > > Good point --- I thought the chapter was in the URL, but I now see it is\n> > > just the section heading:\n> > > \n> > > \thttps://www.postgresql.org/docs/devel/view-pg-available-extensions.html\n> > > \n> > > so I guess we can backpatch this with no issues.\n> > \n> > Attached is a patch to accomplish this. Its output can be seen here:\n> > \n> > \thttps://momjian.us/tmp/pgsql/internals.html\n> \n> views.sgml is a pretty generic name for a chapter that just contains system\n> views.\n\nYes, I struggled with that. What made me choose \"views\" is that the\ncurrent name was catalogs.sgml, not syscatalogs.sgml. If is acceptable\nto use catalogs.sgml and sysviews.sgml?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 12 Jul 2022 17:17:51 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Tue, Jul 12, 2022 at 08:56:36PM +0200, Peter Eisentraut wrote:\n>> views.sgml is a pretty generic name for a chapter that just contains system\n>> views.\n\n> Yes, I struggled with that. What made me choose \"views\" is that the\n> current name was catalogs.sgml, not syscatalogs.sgml. If is acceptable\n> to use catalogs.sgml and sysviews.sgml?\n\n\"catalogs\" isn't too confusable with user-defined objects, so I think\nthat name is fine --- and anyway it has decades of history so changing\nit seems unwise.\n\nWe seem to have been trending towards less-abbreviated .sgml file names\nover time, so personally I'd go for system-views.sgml.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Jul 2022 17:24:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 05:16:41PM -0400, Bruce Momjian wrote:\n> On Tue, Jul 12, 2022 at 08:56:01PM +0200, Peter Eisentraut wrote:\n> > On 08.07.22 18:07, Bruce Momjian wrote:\n> > > so I guess we can backpatch this with no issues.\n> > \n> > It inserts a new chapter, which would renumber all other chapters. That's a\n> > pretty big change to backpatch. I'm against that.\n> \n> Okay, I can see the renumbering as being confusing so I will do PG 15\n> and head only.\n\nPatch applied to PG 15 and master.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 14 Jul 2022 16:07:30 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "\nOn 14.07.22 22:07, Bruce Momjian wrote:\n> On Tue, Jul 12, 2022 at 05:16:41PM -0400, Bruce Momjian wrote:\n>> On Tue, Jul 12, 2022 at 08:56:01PM +0200, Peter Eisentraut wrote:\n>>> On 08.07.22 18:07, Bruce Momjian wrote:\n>>>> so I guess we can backpatch this with no issues.\n>>>\n>>> It inserts a new chapter, which would renumber all other chapters. That's a\n>>> pretty big change to backpatch. I'm against that.\n>>\n>> Okay, I can see the renumbering as being confusing so I will do PG 15\n>> and head only.\n> \n> Patch applied to PG 15 and master.\n\nNow that I see the result, I don't think this is really the right \nimprovement yet.\n\nThe new System Views chapter lists views that are effectively \nquasi-system catalogs, such as pg_shadow or pg_replication_origin_status \n-- the fact that these are views and not tables is secondary. On the \nother hand, it lists views that are more on the level of information \nschema views, that is, they are explicitly user-facing wrappers around \ninformation available elsewhere, such as pg_sequences, pg_views.\n\nI think most of them are in the second category. So having this chapter \nin the \"Internals\" part seems wrong. But then moving it, say, closer to \nwhere the information schema is documented wouldn't be right either, \nunless we move the views in the first category elsewhere.\n\nAlso, consider that we document the pg_stats_ views in yet another \nplace, and it's not really clear why something like \npg_replication_slots, which might often be used together with stats \nviews, is documented so far away from them.\n\nMaybe this whole notion that \"system views\" is one thing is not suitable.\n\n\n",
"msg_date": "Sat, 16 Jul 2022 10:53:17 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On Sat, Jul 16, 2022 at 10:53:17AM +0200, Peter Eisentraut wrote:\n> Now that I see the result, I don't think this is really the right\n> improvement yet.\n> \n> The new System Views chapter lists views that are effectively quasi-system\n> catalogs, such as pg_shadow or pg_replication_origin_status -- the fact that\n> these are views and not tables is secondary. On the other hand, it lists\n> views that are more on the level of information schema views, that is, they\n> are explicitly user-facing wrappers around information available elsewhere,\n> such as pg_sequences, pg_views.\n> \n> I think most of them are in the second category. So having this chapter in\n> the \"Internals\" part seems wrong. But then moving it, say, closer to where\n> the information schema is documented wouldn't be right either, unless we\n> move the views in the first category elsewhere.\n> \n> Also, consider that we document the pg_stats_ views in yet another place,\n> and it's not really clear why something like pg_replication_slots, which\n> might often be used together with stats views, is documented so far away\n> from them.\n> \n> Maybe this whole notion that \"system views\" is one thing is not suitable.\n\nAre you thinking we should just call the chapter \"System Catalogs and\nViews\" and just place them alphabetically in a single chapter?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Mon, 18 Jul 2022 20:33:42 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Sat, Jul 16, 2022 at 10:53:17AM +0200, Peter Eisentraut wrote:\n>> Maybe this whole notion that \"system views\" is one thing is not suitable.\n\n> Are you thinking we should just call the chapter \"System Catalogs and\n> Views\" and just place them alphabetically in a single chapter?\n\nI didn't think that was Peter's argument at all. He's complaining\nthat \"system views\" isn't a monolithic category, which is a reasonable\npoint, especially since we have a bunch of built-in views that appear\nin other chapters. But to then also confuse them with catalogs isn't\nimproving the situation.\n\nThe views that are actually reinterpretations of catalog contents should\nprobably be documented near the catalogs. But a lot of stuff in that\nchapter is no such thing. For example, it's really unclear why\npg_backend_memory_contexts is documented here and not somewhere near\nthe stats views. We also have stuff like pg_available_extensions,\npg_file_settings, and pg_timezone_names, which are reporting ground truth\nof some sort that didn't come from the catalogs. I'm not sure if those\nbelong near the catalogs or not.\n\nThe larger point, perhaps, is that this whole area is underneath\n\"Part VII: Internals\", and that being the case what you would expect\nto find here is stuff that we don't intend people to interact with\nin day-to-day usage. Most of the \"system views\" are specifically\nintended for day-to-day use, maybe only by DBAs, but nonetheless they\nare user-facing in a way that the catalogs aren't.\n\nMaybe we should move them all to Part IV, in a chapter or chapters\nadjacent to the Information Schema chapter. Or maybe try to separate\n\"user\" views from \"DBA\" views, and put user views in Part IV while\nDBA views go into a new chapter in Part III, near the stats views.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Jul 2022 21:22:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 1:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Jul 16, 2022 at 10:53:17AM +0200, Peter Eisentraut wrote:\n> >> Maybe this whole notion that \"system views\" is one thing is not suitable.\n>\n> > Are you thinking we should just call the chapter \"System Catalogs and\n> > Views\" and just place them alphabetically in a single chapter?\n>\n> I didn't think that was Peter's argument at all. He's complaining\n> that \"system views\" isn't a monolithic category, which is a reasonable\n> point, especially since we have a bunch of built-in views that appear\n> in other chapters. But to then also confuse them with catalogs isn't\n> improving the situation.\n>\n\nMy original post was prompted when I was scrolling in the\ntable-of-contents for chapter 53 \"System Catalogs\". unable to find a\nCatalog because I did not realise it was really a View. It was only\nwhen I couldn't find it alphabetically that I noticed there was\n*another* appended list of Views, but then the \"System Views\" heading\nseemed strangely buried at the same heading level as everything\nelse... and although there was an \"Overview\" section for Catalogs\nthere was no \"Overview\" section for the Views...\n\nMaybe I was only seeing the tip of the iceberg. I'm not sure anymore\nwhat the best solution is. I do prefer the recent changes over how it\nused to be, but perhaps they also introduce a whole new set of\nproblems.\n\n---\n\n(It used to look like this)\n\nChapter 53. \"System Catalogs\"\n======\n53.1. Overview\n53.2. pg_aggregate\n53.3. pg_am\n53.4. pg_amop\n53.5. pg_amproc\n...\n53.66. System Views <--- 2nd heading just hiding here....\n53.67. pg_available_extensions\n53.68. pg_available_extension_versions\n53.69. pg_backend_memory_contexts\n53.70. pg_config\n...\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Tue, 19 Jul 2022 17:31:04 +1200",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 09:22:24PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Jul 16, 2022 at 10:53:17AM +0200, Peter Eisentraut wrote:\n> >> Maybe this whole notion that \"system views\" is one thing is not suitable.\n> \n> > Are you thinking we should just call the chapter \"System Catalogs and\n> > Views\" and just place them alphabetically in a single chapter?\n> \n> I didn't think that was Peter's argument at all. He's complaining\n> that \"system views\" isn't a monolithic category, which is a reasonable\n> point, especially since we have a bunch of built-in views that appear\n> in other chapters. But to then also confuse them with catalogs isn't\n> improving the situation.\n\nI think I see now --- system _tables_ are really not for user consumption\nbut system views often are. I am thinking the best approach is to move\nmost of the system views out of the system views section and into the\nsections where they make sense.\n\n> The views that are actually reinterpretations of catalog contents should\n> probably be documented near the catalogs. But a lot of stuff in that\n> chapter is no such thing. For example, it's really unclear why\n\nRight.\n\n> pg_backend_memory_contexts is documented here and not somewhere near\n> the stats views. We also have stuff like pg_available_extensions,\n\nRight.\n\n> pg_file_settings, and pg_timezone_names, which are reporting ground truth\n> of some sort that didn't come from the catalogs. I'm not sure if those\n> belong near the catalogs or not.\n\nI am thinking some of those need to be in the Server Configuration\nchapter.\n\n> The larger point, perhaps, is that this whole area is underneath\n> \"Part VII: Internals\", and that being the case what you would expect\n> to find here is stuff that we don't intend people to interact with\n> in day-to-day usage. 
Most of the \"system views\" are specifically\n> intended for day-to-day use, maybe only by DBAs, but nonetheless they\n> are user-facing in a way that the catalogs aren't.\n> \n> Maybe we should move them all to Part IV, in a chapter or chapters\n> adjacent to the Information Schema chapter. Or maybe try to separate\n> \"user\" views from \"DBA\" views, and put user views in Part IV while\n> DBA views go into a new chapter in Part III, near the stats views.\n\nI am going to look at moving system views that make sense into the\nchapters where their contents are mentioned. I don't think having a\ncentral list of views is really helping us because we expect the views\nto be used in ways the system catalogs would not be.\n\nI will develop a proposed patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 19 Jul 2022 13:41:44 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 01:41:44PM -0400, Bruce Momjian wrote:\n> I am going to look at moving system views that make sense into the\n> chapters where their contents are mentioned. I don't think having a\n> central list of views is really helping us because we expect the views\n> to be used in ways the system catalogs would not be.\n\nI have grouped the views by topic. What I would like to do next is to\nmove these view sections to the end of relevant documentation chapters.\nIs that going to be an improvement?\n\n---------------------------------------------------------------------------\n\npg_available_extensions\npg_available_extension_versions\n\npg_backend_memory_contexts\n\npg_config\n\npg_cursors\n\npg_file_settings\npg_hba_file_rules\npg_ident_file_mappings\npg_settings\n\npg_locks\n\npg_policies\n\npg_prepared_statements\n\npg_prepared_xacts\n\npg_publication_tables\npg_replication_origin_status\npg_replication_slots\n\npg_group\npg_roles\npg_shadow\npg_user\npg_user_mappings\n\npg_shmem_allocations\n\npg_stats\npg_stats_ext\npg_stats_ext_exprs\n\npg_timezone_abbrevs\npg_timezone_names\n\npg_indexes\npg_matviews\npg_rules\npg_seclabels\npg_sequences\npg_tables\npg_views\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 20 Jul 2022 16:07:58 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On Wed, 20 Jul 2022 at 16:08, Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Jul 19, 2022 at 01:41:44PM -0400, Bruce Momjian wrote:\n> > I am going to look at moving system views that make sense into the\n> > chapters where their contents are mentioned. I don't think having a\n> > central list of views is really helping us because we expect the views\n> > to be used in ways the system catalogs would not be.\n>\n> I have grouped the views by topic. What I would like to do next is to\n> move these view sections to the end of relevant documentation chapters.\n> Is that going to be an improvement?\n\n\nWill there be a comprehensive list somewhere? Even if it just lists the\nviews, gives maybe a one-sentence description, and links to the relevant\nchapter, I would find that helpful sometimes.\n\nI ask because I occasionally find myself wanting a comprehensive list of\nfunctions, and as far as I can tell it doesn't exist. I'm hoping to avoid\nthat situation for views.",
"msg_date": "Wed, 20 Jul 2022 16:23:21 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 04:23:21PM -0400, Isaac Morland wrote:\n> On Wed, 20 Jul 2022 at 16:08, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Tue, Jul 19, 2022 at 01:41:44PM -0400, Bruce Momjian wrote:\n> > I am going to look at moving system views that make sense into the\n> > chapters where their contents are mentioned. I don't think having a\n> > central list of views is really helping us because we expect the views\n> > to be used in ways the system catalogs would not be.\n> \n> I have grouped the views by topic. What I would like to do next is to\n> move these view sections to the end of relevant documentation chapters.\n> Is that going to be an improvement?\n> \n> \n> Will there be a comprehensive list somewhere? Even if it just lists the views,\n> gives maybe a one-sentence description, and links to the relevant chapter, I\n> would find that helpful sometimes.\n\nI was not planning on that since we don't do that in any other cases I\ncan think of.\n\n> I ask because I occasionally find myself wanting a comprehensive list of\n> functions, and as far as I can tell it doesn't exist. I'm hoping to avoid that\n> situation for views.\n\nWell, then we just leave them where they are and link to them from other\nparts of the documentation, which I assume/hope we already do.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 20 Jul 2022 16:32:46 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 04:32:46PM -0400, Bruce Momjian wrote:\n> On Wed, Jul 20, 2022 at 04:23:21PM -0400, Isaac Morland wrote:\n> > Will there be a comprehensive list somewhere? Even if it just lists the views,\n> > gives maybe a one-sentence description, and links to the relevant chapter, I\n> > would find that helpful sometimes.\n> \n> I was not planning on that since we don't do that in any other cases I\n> can think of.\n> \n> > I ask because I occasionally find myself wanting a comprehensive list of\n> > functions, and as far as I can tell it doesn't exist. I'm hoping to avoid that\n> > situation for views.\n> \n> Well, then we just leave them where the are and link to them from other\n> parts of the documentation, which I assume/hope we already do.\n\nPeople have mentioned the view documentation doesn't belong in the\nInternals part. Maybe we can just move it to the Server\nAdministration part and leave it together.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 20 Jul 2022 21:19:17 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 09:19:17PM -0400, Bruce Momjian wrote:\n> On Wed, Jul 20, 2022 at 04:32:46PM -0400, Bruce Momjian wrote:\n> > On Wed, Jul 20, 2022 at 04:23:21PM -0400, Isaac Morland wrote:\n> > > Will there be a comprehensive list somewhere? Even if it just lists the views,\n> > > gives maybe a one-sentence description, and links to the relevant chapter, I\n> > > would find that helpful sometimes.\n> > \n> > I was not planning on that since we don't do that in any other cases I\n> > can think of.\n> > \n> > > I ask because I occasionally find myself wanting a comprehensive list of\n> > > functions, and as far as I can tell it doesn't exist. I'm hoping to avoid that\n> > > situation for views.\n> > \n> > Well, then we just leave them where the are and link to them from other\n> > parts of the documentation, which I assume/hope we already do.\n> \n> People have mentioned the view documentation doesn't belong in the\n> Internals part. Maybe we can just move it to the Server\n> Administration part and leave it together.\n\nThinking some more about this, I wonder if we should distinguish system\nviews that are needed for a task vs those used for reporting. For\nexample, pg_stat_activity is a dynamic view and is needed for\nmonitoring. pg_prepared_statements just reports the prepared\nstatements.\n\nCould it be that over time, we have moved the \"needed for a task\" views\ninto the relevant sections, and the reporting views have just stayed as\na group, and that is okay --- maybe they just need to be moved to Server\nAdministration?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 21 Jul 2022 09:48:18 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: System catalog documentation chapter"
}
] |
[
{
"msg_contents": "Hi hackers,\n\n(Resending because my previous post was missing the subject - sorry for noise)\n\nRecently when looking at the \"System Catalogs\" Tables of Contents [1],\nI was wondering why are those headings \"Overview\" and \"System Views\"\nat the same section level as the catalogs/views within them.\n\n~~~\n\ne.g.1. Current:\n\nChapter 53. \"System Catalogs\"\n======\n53.1. Overview\n53.2. pg_aggregate\n53.3. pg_am\n53.4. pg_amop\n53.5. pg_amproc\n...\n53.66. System Views\n53.67. pg_available_extensions\n53.68. pg_available_extension_versions\n53.69. pg_backend_memory_contexts\n53.70. pg_config\n...\n======\n\ne.g.2 What I thought it should look like:\n\nChapter 53. \"System Catalogs and Views\" <-- chapter name change\n======\n53.1. System Catalogs <-- heading name change\n53.1.1. pg_aggregate\n53.1.2. pg_am\n53.1.3. pg_amop\n53.1.4. pg_amproc\n...\n53.2. System Views\n53.2.1. pg_available_extensions\n53.2.2. pg_available_extension_versions\n53.2.3. pg_backend_memory_contexts\n53.2.4. pg_config\n...\n======\n\n~~~\n\nOTOH it looks like this table of contents page has been this way\nforever (20+ years?). It is hard to believe nobody else suggested\nmodifying it in all that time, so perhaps there is some reason for it\nbeing like it is?\n\nThoughts?\n\n------\n[1] https://www.postgresql.org/docs/15/catalogs.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 9 Jun 2022 09:42:40 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "PGDOCS - \"System Catalogs\" table-of-contents page structure"
},
{
"msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> e.g.2 What I thought it should look like:\n\n> Chapter 53. \"System Catalogs and Views\" <-- chapter name change\n> ======\n> 53.1. System Catalogs <-- heading name change\n> 53.1.1. pg_aggregate\n> 53.1.2. pg_am\n> 53.1.3. pg_amop\n> 53.1.4. pg_amproc\n\nThen the catalog descriptions would not be on separate pages.\n\n> OTOH it looks like this table of contents page has been this way\n> forever (20+ years?). It is hard to believe nobody else suggested\n> modifying it in all that time, so perhaps there is some reason for it\n> being like it is?\n\nPerhaps that it's just fine as-is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Jun 2022 20:00:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGDOCS - \"System Catalogs\" table-of-contents page structure"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 10:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > e.g.2 What I thought it should look like:\n>\n> > Chapter 53. \"System Catalogs and Views\" <-- chapter name change\n> > ======\n> > 53.1. System Catalogs <-- heading name change\n> > 53.1.1. pg_aggregate\n> > 53.1.2. pg_am\n> > 53.1.3. pg_amop\n> > 53.1.4. pg_amproc\n>\n> Then the catalog descriptions would not be on separate pages.\n>\n\nOh, right. Thanks for the explanation.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Thu, 9 Jun 2022 10:29:39 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PGDOCS - \"System Catalogs\" table-of-contents page structure"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nDue to the issue with potential data corruption when running `CREATE \r\nINDEX CONCURRENTLY` / `REINDEX CONCURRENTLY` on PostgreSQL 14[1], the \r\nrelease team has decided to make an out-of-cycle release available for \r\nPostgreSQL 14 on June 16, 2022.\r\n\r\nThis release will be numbered 14.4. This release is *only* for \r\nPostgreSQL 14; we are not releasing any other versions as part of this \r\nrelease.\r\n\r\nIf you are planning to include any other bug fixes, please do so no \r\nlater than Saturday, June 11 11:59pm AoE[2]. Because this is an \r\nout-of-cycle release, we ask that you use extra discretion when \r\ncommitting fixes to the REL_14_STABLE branch as we would like to avoid \r\nadditional regressions.\r\n\r\nPlease let us know if you have any questions.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/flat/17485-396609c6925b982d%40postgresql.org\r\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth",
"msg_date": "Wed, 8 Jun 2022 21:15:12 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "June 16, 2022 out-of-cycle release for PostgreSQL 14"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently CREATE_REPLICATION_SLOT/pg_create_logical_replication_slot waits\nunboundedly if there are any in-progress write transactions [1]. The wait\nis for a reason actually i.e. for building an initial snapshot, but waiting\nunboundedly isn't good for usability of the command/function and when\nstuck, the callers will not have any information as to why.\n\nHow about we provide a timeout for the command/function instead of letting\nthem wait unboundedly? The behavior will be something like this - if the\nlogical replication slot isn't created within this timeout, the\ncommand/function will fail.\n\nWe could've asked callers to set statement_timeout before calling\nCREATE_REPLICATION_SLOT/pg_create_logical_replication_slot but that impacts\nthe queries running in all other sessions and it may not be always possible\nto set this parameter just for the session that runs command\nCREATE_REPLICATION_SLOT.\n\nThoughts?\n\n[1]\n(gdb) bt\n#0 0x00007fc21509a45a in epoll_wait (epfd=9, events=0x561874204e88,\nmaxevents=1, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30\n#1 0x000056187350e9cc in WaitEventSetWaitBlock (set=0x561874204e28,\ncur_timeout=-1, occurred_events=0x7fff72b3a4a0, nevents=1) at latch.c:1467\n#2 0x000056187350e847 in WaitEventSetWait (set=0x561874204e28, timeout=-1,\noccurred_events=0x7fff72b3a4a0, nevents=1, wait_event_info=50331653) at\nlatch.c:1413\n#3 0x000056187350db64 in WaitLatch (latch=0x7fc21292f324, wakeEvents=33,\ntimeout=0, wait_event_info=50331653) at latch.c:475\n#4 0x000056187353b5b2 in ProcSleep (locallock=0x56187422aa58,\nlockMethodTable=0x561873a61a20 <default_lockmethod>) at proc.c:1337\n#5 0x0000561873527e49 in WaitOnLock (locallock=0x56187422aa58,\nowner=0x5618742888b0) at lock.c:1859\n#6 0x0000561873526730 in LockAcquireExtended (locktag=0x7fff72b3a8a0,\nlockmode=5, sessionLock=false, dontWait=false, reportMemoryError=true,\nlocallockp=0x0) at lock.c:1101\n#7 0x0000561873525b9d in LockAcquire 
(locktag=0x7fff72b3a8a0, lockmode=5,\nsessionLock=false, dontWait=false) at lock.c:752\n#8 0x0000561873524099 in XactLockTableWait (xid=734, rel=0x0, ctid=0x0,\noper=XLTW_None) at lmgr.c:702\n#9 0x00005618734a69c4 in SnapBuildWaitSnapshot (running=0x561874315a18,\ncutoff=735) at snapbuild.c:1416\n#10 0x00005618734a67a2 in SnapBuildFindSnapshot (builder=0x561874311a80,\nlsn=21941704, running=0x561874315a18) at snapbuild.c:1328\n#11 0x00005618734a62c4 in SnapBuildProcessRunningXacts\n(builder=0x561874311a80, lsn=21941704, running=0x561874315a18) at\nsnapbuild.c:1117\n#12 0x000056187348cab0 in standby_decode (ctx=0x5618742fb9e0,\nbuf=0x7fff72b3aa00) at decode.c:346\n#13 0x000056187348c34e in LogicalDecodingProcessRecord (ctx=0x5618742fb9e0,\nrecord=0x5618742fbda0) at decode.c:119\n#14 0x000056187349124e in DecodingContextFindStartpoint\n(ctx=0x5618742fb9e0) at logical.c:613\n#15 0x00005618734c2ab3 in create_logical_replication_slot\n(name=0x56187420d848 \"slot1\", plugin=0x56187420d8f8 \"test_decoding\",\ntemporary=false, two_phase=false, restart_lsn=0, find_startpoint=true) at\nslotfuncs.c:158\n#16 0x00005618734c2bb8 in pg_create_logical_replication_slot\n(fcinfo=0x5618742efdd0) at slotfuncs.c:187\n#17 0x00005618732def6b in ExecMakeTableFunctionResult\n(setexpr=0x5618742dc318, econtext=0x5618742dc1d0,\nargContext=0x5618742efcb0, expectedDesc=0x5618742ec098, randomAccess=false)\nat execSRF.c:234\n#18 0x00005618732fbc27 in FunctionNext (node=0x5618742dbfb8) at\nnodeFunctionscan.c:95\n#19 0x00005618732e0987 in ExecScanFetch (node=0x5618742dbfb8,\naccessMtd=0x5618732fbb72 <FunctionNext>, recheckMtd=0x5618732fbf6e\n<FunctionRecheck>) at execScan.c:133\n#20 0x00005618732e0a00 in ExecScan (node=0x5618742dbfb8,\naccessMtd=0x5618732fbb72 <FunctionNext>, recheckMtd=0x5618732fbf6e\n<FunctionRecheck>) at execScan.c:182\n#21 0x00005618732fbfc4 in ExecFunctionScan (pstate=0x5618742dbfb8) at\nnodeFunctionscan.c:270\n#22 0x00005618732dc693 in ExecProcNodeFirst (node=0x5618742dbfb8) 
at\nexecProcnode.c:463\n#23 0x00005618732cfe80 in ExecProcNode (node=0x5618742dbfb8) at\n../../../src/include/executor/executor.h:259\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Thu, 9 Jun 2022 10:25:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "A proposal to provide a timeout option for\n CREATE_REPLICATION_SLOT/pg_create_logical_replication_slot"
},
{
"msg_contents": "At Thu, 9 Jun 2022 10:25:06 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> Hi,\n> \n> Currently CREATE_REPLICATION_SLOT/pg_create_logical_replication_slot waits\n> unboundedly if there are any in-progress write transactions [1]. The wait\n> is for a reason actually i.e. for building an initial snapshot, but waiting\n> unboundedly isn't good for usability of the command/function and when\n> stuck, the callers will not have any information as to why.\n> \n> How about we provide a timeout for the command/function instead of letting\n> them wait unboundedly? The behavior will be something like this - if the\n> logical replication slot isn't created within this timeout, the\n> command/function will fail.\n> \n> We could've asked callers to set statement_timeout before calling\n> CREATE_REPLICATION_SLOT/pg_create_logical_replication_slot but that impacts\n> the queries running in all other sessions and it may not be always possible\n> to set this parameter just for the session that runs command\n> CREATE_REPLICATION_SLOT.\n>\n> Thoughts?\n\nHow can the other sessions get affected by setting statement_timeout a\nsession? And \"SET LOCAL\" narrows the effect down to within a\ntransaction. I think that is sufficient. On the other hand,\nCREATE_REPLICATION_SLOT doesn't honor statement_timeout, but honors\nlock_timeout. (It's a bit strange but I would hardly go so far as to\nsay we should \"fix\" it..) If a program issues CREATE_REPLICATION_SLOT,\nit's hard to believe that the same program cannot issue SET (for\nlock_timeout) command as well.\n\nWhen CREATE_REPLICATION_SLOT is called from a CREATE SUBSCRIPTION\ncommand, the latter command itself honors statement_timeout and\ndisconnects the peer walsender. 
Thus, client_connection_check_interval\nset on publisher side kills the walsender shortly after the\ndisconnection.\n\nIn short, I don't see much point in the timeout of the function/command.\n\nAs a general discussion on the timeout of functions/commands by a\nparameter, I can only come up with pg_terminate_backend() for now, but\nits timeout parameter not only determines timeout seconds but also\nspecifies whether the function waits for the process termination. That\nfunctionality cannot be achieved by statement timeout. In that sense\nit is a bit apart from pg_logical_replication_slot().\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 09 Jun 2022 16:31:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A proposal to provide a timeout option for\n CREATE_REPLICATION_SLOT/pg_create_logical_replication_slot"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 1:01 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> > Currently CREATE_REPLICATION_SLOT/pg_create_logical_replication_slot waits\n> > unboundedly if there are any in-progress write transactions [1]. [....]\n> >\n> > How about we provide a timeout for the command/function instead of letting\n> > them wait unboundedly?\n>\n> How can the other sessions get affected by setting statement_timeout a\n> session? And \"SET LOCAL\" narrows the effect down to within a\n> transaction. I think that is sufficient.\n\nSET LOCAL needs to be run within an explicit txn whereas CREATE\nSUBSCRIPTION can't.\n\n> On the other hand,\n> CREATE_REPLICATION_SLOT doesn't honor statement_timeout, but honors\n> lock_timeout. (It's a bit strange but I would hardly go so far as to\n> say we should \"fix\" it..) If a program issues CREATE_REPLICATION_SLOT,\n> it's hard to believe that the same program cannot issue SET (for\n> lock_timeout) command as well.\n\nYes it can issue lock_timeout.\n\n> When CREATE_REPLICATION_SLOT is called from a CREATE SUBSCRIPTION\n> command, the latter command itself honors statement_timeout and\n> disconnects the peer walsender. Thus, client_connection_check_interval\n> set on publisher side kills the walsender shortly after the\n> disconnection.\n\nRight.\n\n> In short, I don't see much point in the timeout of the function/command.\n\nI played with it a bit today. There are a couple of ways to get around\nthe CREATE SUBSCRIPTION blocking issue - set statement_timeout [1] or\ntransaction_timeout [2] on the subscriber at the session level before\ncreating the subscription, or set lock_timeout [3] on the publisher.\n\nSince we have a bunch of timeouts already (transaction_timeout being\nthe latest addition), I don't think we need another one here. 
So I\nwithdraw my initial idea on this thread to have a separate timeout to\ncreate a logical replication slot.\n\n[1]\npostgres=# SET statement_timeout = '10s';\nSET\npostgres=# CREATE SUBSCRIPTION mysub CONNECTION 'dbname=postgres\nport=5432' PUBLICATION mypub;\n\nERROR: canceling statement due to statement timeout\n\n[2]\npostgres=# SET transaction_timeout = '10s';\nSET\npostgres=# CREATE SUBSCRIPTION mysub CONNECTION 'dbname=postgres\nport=5432' PUBLICATION mypub;\nFATAL: terminating connection due to transaction timeout\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Succeeded.\n\n[3]\npostgres=# CREATE SUBSCRIPTION mysub CONNECTION 'dbname=postgres\nport=5432' PUBLICATION mypub;\n\nERROR: could not create replication slot \"mysub\": ERROR: canceling\nstatement due to lock timeout\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 16 Feb 2024 18:23:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A proposal to provide a timeout option for\n CREATE_REPLICATION_SLOT/pg_create_logical_replication_slot"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently postgres doesn't allow dropping a replication slot that's active\n[1]. This can make certain operations more time-consuming or stuck in\nproduction environments. These operations are - disable async/sync standbys\nand disable logical replication that require the postgres running on\nstandby or the subscriber to go down. If stopping postgres server takes\ntime, the VM or container will have to be killed forcefully which can take\na considerable amount of time as there are many layers in between.\n\nHow about we provide a function to force-drop a replication slot? All other\nthings such as stopping postgres and gracefully unprovisioning VM etc. can\nbe taken care of in the background. This force-drop function will also have\nto ensure that the walsender that's active for the replication slot is\nterminated gracefully without letting postmaster restart the other backends\n(right now if a wal sender is exited/terminated, the postmaster restarts\nall other backends too). The main advantage of the force-drop function is\nthat the disable operations can be quicker and there is no down time/crash\non the primary/source server.\n\nThoughts?\n\n[1] ERROR: replication slot \"foo\" is active for PID 2598155\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Thu, 9 Jun 2022 11:07:15 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "A proposal to force-drop replication slots to make disabling\n async/sync standbys or logical replication faster in production environments"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> How about we provide a function to force-drop a replication slot?\n\nIsn't this akin to filing off the safety interlock on the loaded revolver\nyou keep in your hip pocket? IMO the entire point of replication slots\nis to not make it easy to lose data.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jun 2022 01:54:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A proposal to force-drop replication slots to make disabling\n async/sync standbys or logical replication faster in production environments"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 11:07 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Currently postgres doesn't allow dropping a replication slot that's active [1]. This can make certain operations more time-consuming or stuck in production environments. These operations are - disable async/sync standbys and disable logical replication that require the postgres running on standby or the subscriber to go down. If stopping postgres server takes time, the VM or container will have to be killed forcefully which can take a considerable amount of time as there are many layers in between.\n>\n\nWhy do you want to drop the slot when the server is going down? Is it\nsome temporary replication slot, otherwise, how will you resume\nreplication after restarting the server?\n\n--\nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Jun 2022 12:11:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A proposal to force-drop replication slots to make disabling\n async/sync standbys or logical replication faster in production environments"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 11:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > How about we provide a function to force-drop a replication slot?\n>\n> Isn't this akin to filing off the safety interlock on the loaded revolver\n> you keep in your hip pocket? IMO the entire point of replication slots\n> is to not make it easy to lose data.\n\nAgree. How about making the function superuser-only?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 10 Jun 2022 15:32:56 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A proposal to force-drop replication slots to make disabling\n async/sync standbys or logical replication faster in production environments"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 12:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jun 9, 2022 at 11:07 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Currently postgres doesn't allow dropping a replication slot that's active [1]. This can make certain operations more time-consuming or stuck in production environments. These operations are - disable async/sync standbys and disable logical replication that require the postgres running on standby or the subscriber to go down. If stopping postgres server takes time, the VM or container will have to be killed forcefully which can take a considerable amount of time as there are many layers in between.\n> >\n>\n> Why do you want to drop the slot when the server is going down? Is it\n> some temporary replication slot, otherwise, how will you resume\n> replication after restarting the server?\n\nThe setup is this - primary, bunch of sync standbys, bunch of read\nreplicas (async standbys), bunch of logical replication subscribers -\nnow, the user wants to remove any of them for whatever reasons,\ntypical flow is to first stop the server, if stopping the server takes\ntime (for instance the standbys or subscribers lag behind the primary\nby too much), kill the VM/host server to make the corresponding\nreplication slots inactive on the primary and then drop the\nreplication slots. The proposed force-drop function helps speed up\nthese operations in production environments and it will also be\npossible to provide an SLA for these disable operations.\n\nI hope the user case is clear.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 10 Jun 2022 15:33:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A proposal to force-drop replication slots to make disabling\n async/sync standbys or logical replication faster in production environments"
},
{
"msg_contents": "Hi,\n\nWhy couldn't you terminate the active_pid associated with the slot you \nwant to drop if it's active prior to dropping?\n\n\nOn 6/10/22 3:03 AM, Bharath Rupireddy wrote:\n>\n> On Thu, Jun 9, 2022 at 12:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> On Thu, Jun 9, 2022 at 11:07 AM Bharath Rupireddy\n>> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>> Currently postgres doesn't allow dropping a replication slot that's active [1]. This can make certain operations more time-consuming or stuck in production environments. These operations are - disable async/sync standbys and disable logical replication that require the postgres running on standby or the subscriber to go down. If stopping postgres server takes time, the VM or container will have to be killed forcefully which can take a considerable amount of time as there are many layers in between.\n>>>\n>> Why do you want to drop the slot when the server is going down? Is it\n>> some temporary replication slot, otherwise, how will you resume\n>> replication after restarting the server?\n> The setup is this - primary, bunch of sync standbys, bunch of read\n> replicas (async standbys), bunch of logical replication subscribers -\n> now, the user wants to remove any of them for whatever reasons,\n> typical flow is to first stop the server, if stopping the server takes\n> time (for instance the standbys or subscribers lag behind the primary\n> by too much), kill the VM/host server to make the corresponding\n> replication slots inactive on the primary and then drop the\n> replication slots. 
The proposed force-drop function helps speed up\n> these operations in production environments and it will also be\n> possible to provide an SLA for these disable operations.\n>\n> I hope the user case is clear.\n>\n> Regards,\n> Bharath Rupireddy.\n>\n>\n\n\n",
"msg_date": "Fri, 10 Jun 2022 08:12:16 -0700",
"msg_from": "\"Hsu, John\" <hsuchen@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: A proposal to force-drop replication slots to make disabling\n async/sync\n standbys or logical replication faster in production environments"
},
{
"msg_contents": "On Fri, Jun 10, 2022 at 8:42 PM Hsu, John <hsuchen@amazon.com> wrote:\n>\n> Hi,\n>\n> Why couldn't you terminate the active_pid associated with the slot you\n> want to drop if it's active prior to dropping?\n\nIn that case, the slot becomes active immediately after killing the\nold walsender because the standby/subscriber opens another connection\nwith the primary using the same replication slot. The replication slot\nwill be inactive for a moment during pg_terminate_backend and becomes\nactive again by the time we call pg_drop_replication_slot and we hit\nthe same ERROR: replication slot \"foo\" is active for PID XXXXX.\n\nThe idea proposed here is to have a force-drop function that\nterminates the walsender gracefully and drops the replication slot\neven though there's somebody using it and all of this is done with an\nexclusive lock on the slot so that nobody can acquire it while we are\ndropping it.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Tue, 14 Jun 2022 13:20:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A proposal to force-drop replication slots to make disabling\n async/sync standbys or logical replication faster in production environments"
}
] |
[
{
"msg_contents": "Hi,\n\nReposting this on its own thread.\n\nhttps://www.postgresql.org/message-id/flat/CAKFQuwby1aMsJDMeibaBaohgoaZhivAo4WcqHC1%3D9-GDZ3TSng%40mail.gmail.com\n\nPresently, the open item seems to be whether my novelty regarding the\nreworked example is too much.\n\nDavid J.",
"msg_date": "Thu, 9 Jun 2022 08:36:09 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "doc: Clarify Savepoint Behavior"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 8:36 AM David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> Hi,\n>\n> Reposting this on its own thread.\n>\n>\n> https://www.postgresql.org/message-id/flat/CAKFQuwby1aMsJDMeibaBaohgoaZhivAo4WcqHC1%3D9-GDZ3TSng%40mail.gmail.com\n>\n> Presently, the open item seems to be whether my novelty regarding the\n> reworked example is too much.\n>\n>\nCommentary:\n\n Per documentation comment the savepoint command lacks an example\n where the savepoint name is reused. The suggested example didn't\n conform to the others on the page, nor did the suggested location\n in compatibility seem desirable, but the omission rang true. Add\n another example to the examples section demonstrating this case.\n Additionally, document under the description for savepoint_name\n that we allow for the name to be repeated - and note what that\n means in terms of release and rollback. It seems desirable to\n place this comment in description rather than notes for savepoint.\n For the other two commands the behavior in the presence of\n duplicate savepoint names best fits as notes. In fact release\n already had one. This commit copies the same verbiage over to\n rollback.\n\nDavid J.",
"msg_date": "Thu, 9 Jun 2022 08:40:44 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: Clarify Savepoint Behavior"
},
{
"msg_contents": "On Thu, 9 Jun 2022 at 16:41, David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Thu, Jun 9, 2022 at 8:36 AM David G. Johnston <david.g.johnston@gmail.com> wrote:\n>>\n>> Hi,\n>>\n>> Reposting this on its own thread.\n>>\n>> https://www.postgresql.org/message-id/flat/CAKFQuwby1aMsJDMeibaBaohgoaZhivAo4WcqHC1%3D9-GDZ3TSng%40mail.gmail.com\n>>\n>> Presently, the open item seems to be whether my novelty regarding the reworked example is too much.\n>>\n>\n> Commentary:\n>\n> Per documentation comment the savepoint command lacks an example\n> where the savepoint name is reused. The suggested example didn't\n> conform to the others on the page, nor did the suggested location\n> in compatibility seem desirable, but the omission rang true. Add\n> another example to the examples section demonstrating this case.\n> Additionally, document under the description for savepoint_name\n> that we allow for the name to be repeated - and note what that\n> means in terms of release and rollback. It seems desirable to\n> place this comment in description rather than notes for savepoint.\n> For the other two commands the behavior in the presence of\n> duplicate savepoint names best fits as notes. In fact release\n> already had one. This commit copies the same verbiage over to\n> rollback.\n\nGood idea.\n\n\"The name to give to the new savepoint. The name may already exist,\n+ in which case a rollback or release to the same name will use the\n+ one that was most recently defined.\"\n\ninstead I propose:\n\n\"The name to give to the new savepoint. 
If the name duplicates a\n previously defined savepoint name then only the latest savepoint with that name\n can be referenced in a later ROLLBACK TO SAVEPOINT.\"\n\n+ <para>\n+ If multiple savepoints have the same name, only the one that was most\n+ recently defined is released.\n+ </para>\n\ninstead I propose\n\n\"Searches backwards through previously defined savepoints until the\n a savepoint name matches the request. If the savepoint name duplicated earlier\n defined savepoints then those earlier savepoints can only be released if\n multiple ROLLBACK TO SAVEPOINT commands are issued with the same\n name, as shown in the following example.\"\n\nAlso, I would just call the savepoint \"s\" in the example, to declutter it.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 23 Jun 2022 13:34:54 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify Savepoint Behavior"
},
{
"msg_contents": "Thank you for the review.\n\nOn Thu, Jun 23, 2022 at 5:35 AM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> On Thu, 9 Jun 2022 at 16:41, David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n>\n> \"The name to give to the new savepoint. The name may already exist,\n> + in which case a rollback or release to the same name will use the\n> + one that was most recently defined.\"\n>\n> instead I propose:\n>\n> \"The name to give to the new savepoint. If the name duplicates a\n> previously defined savepoint name then only the latest savepoint with\n> that name\n> can be referenced in a later ROLLBACK TO SAVEPOINT.\"\n>\n\nSo leave the \"release\" behavior implied from the rollback behavior?\n\nOn the whole I'm slightly in favor of your proposed wording (mostly due to\nthe better fitting of the ROLLBACK command, though at the omission of\nRELEASE...) but are you seeing anything beyond personal style as to why you\nfeel one is better than the other? Is there some existing wording in the\ndocs that I should be conforming to here?\n\n\n> + <para>\n> + If multiple savepoints have the same name, only the one that was most\n> + recently defined is released.\n> + </para>\n>\n> instead I propose\n>\n> \"Searches backwards through previously defined savepoints until the\n> a savepoint name matches the request. If the savepoint name duplicated\n> earlier\n> defined savepoints then those earlier savepoints can only be released if\n> multiple ROLLBACK TO SAVEPOINT commands are issued with the same\n> name, as shown in the following example.\"\n>\n>\nUpon reflection, adding this after the comments about cursors seems like a\npoor location, I will probably move it up one paragraph.\n\nI dislike this proposal for its added wordiness that doesn't bring in new\nmaterial. 
The whole idea of \"searching backwards until the name is found\"\nis already covered in the description:\n\n\"ROLLBACK TO SAVEPOINT implicitly destroys all savepoints that were\nestablished after the named savepoint.\"\n\nUsing the phrase \"can only be released if\" here in the rollback to\nsavepoint command page just seems to be asking for confusion between this\nand the release savepoint command.\n\nAlso, I would just call the savepoint \"s\" in the example, to declutter it.\n>\n>\nIf I do use a name that differs from the other two examples on that page\nI'll probably go with \"sp\" for added detectability - but deviating from the\nestablished convention doesn't seem warranted here.\n\nIn all, I'm still content with the patch as-is; though I or the committer\nshould consider moving up the one paragraph in rollback to savepoint.\nOtherwise I'll probably post an updated patch sometime this coming week and\ngive another look at the savepoint name description and make that paragraph\nmove.\n\nDavid J.",
"msg_date": "Sun, 26 Jun 2022 09:14:56 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: Clarify Savepoint Behavior"
},
{
"msg_contents": "On Sun, Jun 26, 2022 at 09:14:56AM -0700, David G. Johnston wrote:\n> So leave the \"release\" behavior implied from the rollback behavior?\n> \n> On the whole I'm slightly in favor of your proposed wording (mostly due to the\n> better fitting of the ROLLBACK command, though at the omission of RELEASE...)\n> but are you seeing anything beyond personal style as to why you feel one is\n> better than the other? Is there some existing wording in the docs that I\n> should be conforming to here?\n\nI have developed the attached patch based on the discussion here. I\ntried to simplify the language and example to clarify the intent.\n\nI was confused why the first part of the patch added a mention of\nreleasing savepoints to the ROLLBACK TO manual page --- I have removed\nthat and improved the text in RELEASE SAVEPOINT.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Sat, 9 Jul 2022 12:59:23 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify Savepoint Behavior"
},
{
"msg_contents": "On Sat, Jul 9, 2022 at 12:59:23PM -0400, Bruce Momjian wrote:\n> On Sun, Jun 26, 2022 at 09:14:56AM -0700, David G. Johnston wrote:\n> > So leave the \"release\" behavior implied from the rollback behavior?\n> > \n> > On the whole I'm slightly in favor of your proposed wording (mostly due to the\n> > better fitting of the ROLLBACK command, though at the omission of RELEASE...)\n> > but are you seeing anything beyond personal style as to why you feel one is\n> > better than the other? Is there some existing wording in the docs that I\n> > should be conforming to here?\n> \n> I have developed the attached patch based on the discussion here. I\n> tried to simplify the language and example to clarify the intent.\n> \n> I was confused why the first part of the patch added a mention of\n> releasing savepoints to the ROLLBACK TO manual page --- I have removed\n> that and improved the text in RELEASE SAVEPOINT.\n\nPatch applied to all supported versions.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 14 Jul 2022 15:44:48 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify Savepoint Behavior"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 12:44 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Sat, Jul 9, 2022 at 12:59:23PM -0400, Bruce Momjian wrote:\n> > On Sun, Jun 26, 2022 at 09:14:56AM -0700, David G. Johnston wrote:\n> > > So leave the \"release\" behavior implied from the rollback behavior?\n> > >\n> > > On the whole I'm slightly in favor of your proposed wording (mostly\n> due to the\n> > > better fitting of the ROLLBACK command, though at the omission of\n> RELEASE...)\n> > > but are you seeing anything beyond personal style as to why you feel\n> one is\n> > > better than the other? Is there some existing wording in the docs\n> that I\n> > > should be conforming to here?\n> >\n> > I have developed the attached patch based on the discussion here. I\n> > tried to simplify the language and example to clarify the intent.\n> >\n> > I was confused why the first part of the patch added a mention of\n> > releasing savepoints to the ROLLBACK TO manual page --- I have removed\n> > that and improved the text in RELEASE SAVEPOINT.\n>\n> Patch applied to all supported versions.\n>\n>\nBruce,\n\nThanks for committing this and the other patches. Should I go into the\ncommitfest and mark the entries for these as committed or does protocol\ndictate I remind you and you do that?\n\nDavid J.",
"msg_date": "Tue, 26 Jul 2022 08:07:58 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: Clarify Savepoint Behavior"
}
] |
[
{
"msg_contents": "Hi,\n\nReposting this on its own thread.\n\nhttps://www.postgresql.org/message-id/flat/CAKFQuwby1aMsJDMeibaBaohgoaZhivAo4WcqHC1%3D9-GDZ3TSng%40mail.gmail.com\n\n As one cannot place excluded in a FROM clause (subquery) in the\n ON CONFLICT clause referring to it as a table, with plural rows\n nonetheless, leads the reader to infer more about what the\n behavior here is than is correct. We already just say use the\n table's name for the existing row so just match that pattern\n of using the name excluded for the proposed row.\n\n The alias description doesn't have the same issue regarding the\n use of the word table and rows, as the use there is more conceptual,\n but the wording about \"otherwise taken as\" is wrong: rather two\n labels of excluded end up in scope and you get an ambiguous name error.\n\n The error messages still consider excluded to be a table reference\n and this patch does not try to change that. That implementation\n detail need not force the user-facing documentation for the feature\n to use the term table when it doesn't really apply.\n\nDavid J.",
"msg_date": "Thu, 9 Jun 2022 08:39:43 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "doc: Clarify what \"excluded\" represents for INSERT ON CONFLICT"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 11:40 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> As one cannot place excluded in a FROM clause (subquery) in the\n> ON CONFLICT clause referring to it as a table, ...\n\nWell, it would be nice if you had included a test case rather than\nleaving it to the reviewer or committer to construct one. In general,\ndropping subtle patches with minimal commentary isn't really very\nhelpful.\n\nBut I decided to dig in and see what I could figure out. I constructed\nthis test case first, which does work:\n\nrhaas=# create table foo (a int primary key, b text);\nCREATE TABLE\nrhaas=# insert into foo values (1, 'blarg');\nINSERT 0 1\nrhaas=# insert into foo values (1, 'frob') on conflict (a) do update\nset b = (select excluded.b || 'nitz');\nINSERT 0 1\nrhaas=# select * from foo;\n a | b\n---+----------\n 1 | frobnitz\n(1 row)\n\nInitially I thought that was the case you were talking about, but\nafter staring at your email for another 20 minutes, I figured out that\nyou're probably talking about something more like this, which doesn't\nwork:\n\nrhaas=# insert into foo values (1, 'frob') on conflict (a) do update\nset b = (select b || 'nitz' from excluded);\nERROR: relation \"excluded\" does not exist\nLINE 1: ...ct (a) do update set b = (select b || 'nitz' from excluded);\n\nI do find that a bit of a curious error message, because that relation\nclearly DOES exist in the range table. I know that because, if I use a\nwrong column name, I get a complaint about the column not existing,\nnot the relation not existing:\n\nrhaas=# insert into foo values (1, 'frob') on conflict (a) do update\nset b = (select excluded.bbbbbbbbb || 'nitz');\nERROR: column excluded.bbbbbbbbb does not exist\nLINE 1: ...'frob') on conflict (a) do update set b = (select excluded.b...\n\nThat said, I am not convinced that changing the documentation in this\nway is a good idea. 
It is clear that, at the level of the code,\n\"excluded\" behaves like a pseudo-table, and the fact that it isn't\nequivalent to a real table in all ways, or that it can't be referenced\nat every point in the query equally, doesn't change that. I don't\nthink that the language you're proposing is horrible or anything --\nthe distinction between a special table and a special name that\nbehaves somewhat like a single-row table is subtle at best -- but I\nthink that the threshold to commit a patch like this is that the\nchange has to be a clear improvement, and I don't think it is.\n\nI think it might be fruitful to consider whether some of the error\nmessages here could be improved or even whether some of the\nnon-working cases could be made to work, but I'm just not really\nseeing the value of tinkering with documentation which is, in my view,\nnot wrong.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 Jun 2022 16:43:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify what \"excluded\" represents for INSERT ON CONFLICT"
},
{
"msg_contents": "On Thu, Jun 30, 2022 at 1:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> rhaas=# insert into foo values (1, 'frob') on conflict (a) do update\n> set b = (select b || 'nitz' from excluded);\n> ERROR: relation \"excluded\" does not exist\n> LINE 1: ...ct (a) do update set b = (select b || 'nitz' from excluded);\n>\n> I do find that a bit of a curious error message, because that relation\n> clearly DOES exist in the range table.\n\nLet's say that we supported this syntax. I see some problems with that\n(you may have thought of these already). Thinking of \"excluded\" as a\nseparate table creates some very thorny questions, such as:\n\nHow would the user be expected to handle the case where there was more\nthan a single \"row\" in \"excluded\"? How could the implementation know\nwhat the contents of the \"excluded table\" were ahead of time in the\nmulti-row-insert case? We'd have to know about *all* of the conflicts\nfor all rows proposed for insertion to do this, which seems impossible\nin the general case -- because some of those conflicts won't have\nhappened yet, and cannot be predicted.\n\nThe \"excluded\" pseudo-table is conceptually similar to an from_item\nalias used within an UPDATE .... FROM ... where the target table is\nalso the from_item table (i.e. there is a self-join). There is only\none table involved.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 30 Jun 2022 14:05:20 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify what \"excluded\" represents for INSERT ON CONFLICT"
},
{
"msg_contents": "On Thu, Jun 30, 2022 at 1:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Jun 9, 2022 at 11:40 AM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> > As one cannot place excluded in a FROM clause (subquery) in the\n> > ON CONFLICT clause referring to it as a table, ...\n>\n> Well, it would be nice if you had included a test case rather than\n> leaving it to the reviewer or committer to construct one. In general,\n> dropping subtle patches with minimal commentary isn't really very\n> helpful.\n>\n\nFair point.\n\n>\n> But I decided to dig in and see what I could figure out. I constructed\n> this test case first, which does work:\n>\n> rhaas=# create table foo (a int primary key, b text);\n> CREATE TABLE\n> rhaas=# insert into foo values (1, 'blarg');\n> INSERT 0 1\n> rhaas=# insert into foo values (1, 'frob') on conflict (a) do update\n> set b = (select excluded.b || 'nitz');\n> INSERT 0 1\n> rhaas=# select * from foo;\n> a | b\n> ---+----------\n> 1 | frobnitz\n> (1 row)\n>\n> Initially I thought that was the case you were talking about, but\n> after staring at your email for another 20 minutes, I figured out that\n> you're probably talking about something more like this, which doesn't\n> work:\n>\n> rhaas=# insert into foo values (1, 'frob') on conflict (a) do update\n> set b = (select b || 'nitz' from excluded);\n> ERROR: relation \"excluded\" does not exist\n> LINE 1: ...ct (a) do update set b = (select b || 'nitz' from excluded);\n>\n\nRight, the word \"excluded\" appearing immediately after the word FROM is\nwhat I meant by:\n\n\"As one cannot place excluded in a FROM clause (subquery) in the\n ON CONFLICT clause\"\n\nIt is clear that, at the level of the code,\n> \"excluded\" behaves like a pseudo-table,\n\n\nAnd people in the code are capable of understanding this without difficulty\nno matter how we write it. 
They are not the target audience.\n\n\n> but I\n> think that the threshold to commit a patch like this is that the\n> change has to be a clear improvement, and I don't think it is.\n>\n\nI'm hoping for \"more clear and accurate without making things worse\"...\n\nThe fact that it does not and cannot use FROM and that it never refers to\nmore than a single row (which is what motivated the change in the first\nplace) for me make using the word table here more trouble than it is worth.\n\n\n>\n> I think it might be fruitful to consider whether some of the error\n> messages here could be improved\n\n\nPossibly...\n\n\n> or even whether some of the\n> non-working cases could be made to work,\n\n\nThat would, IMO, make things worse. \"excluded\" isn't a table in that\nsense, anymore than \"NEW\" and \"OLD\" in the context of triggers.\n\nbut I'm just not really\n> seeing the value of tinkering with documentation which is, in my view,\n> not wrong.\n>\n>\nCurrent:\n\"The SET and WHERE clauses in ON CONFLICT DO UPDATE have access to the\nexisting row using the table's name (or an alias), and to [rows] proposed\nfor insertion using the special excluded table.\"\n\nThe word table in that sentence is wrong and not a useful way to think of\nthe thing which we've named excluded. It is a single value of a composite\ntype having the structure of the named table.\n\nI'll agree that most people will mentally paper over the difference and go\nmerrily on their way. At least one person recently did not do that, which\nprompted an email to the community, which prompted a response and this\nsuggestion to avoid that in the future while, IMO, not making understanding\nof the concept any less clear.\n\nDavid J.",
"msg_date": "Thu, 30 Jun 2022 14:06:58 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: Clarify what \"excluded\" represents for INSERT ON CONFLICT"
},
{
"msg_contents": "On Thu, Jun 30, 2022 at 2:07 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> Current:\n> \"The SET and WHERE clauses in ON CONFLICT DO UPDATE have access to the\n> existing row using the table's name (or an alias), and to [rows] proposed\n> for insertion using the special excluded table.\"\n>\n> The word table in that sentence is wrong and not a useful way to think of the thing which we've named excluded. It is a single value of a composite type having the structure of the named table.\n\nI think that your reasoning is correct, but I don't agree with your\nconclusion. The term \"special excluded table\" is a fudge, but that\nisn't necessarily a bad thing. Sure, we could add something about the\nUPDATE being similar to an UPDATE with a self-join, as I said\nupthread. But I think that that would make the concept harder to\ngrasp.\n\n> I'll agree that most people will mentally paper over the difference and go merrily on their way. At least one person recently did not do that, which prompted an email to the community\n\nCan you provide a reference for this? Didn't see anything like that in\nthe reference you gave upthread.\n\nI have a hard time imagining a user that reads the INSERT docs and\nimagines that \"excluded\" is a relation that is visible to the query in\nways that are not limited to expression evaluation for the UPDATE's\nWHERE/SET. The way that it works (and doesn't work) follows naturally\nfrom what a user would want to do in order to upsert. MySQL's INSERT\n... ON DUPLICATE KEY UPDATE feature has a magical UPSERT-only\nexpression instead of \"excluded\".\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 30 Jun 2022 14:30:38 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify what \"excluded\" represents for INSERT ON CONFLICT"
},
{
"msg_contents": "On Thu, Jun 30, 2022 at 2:31 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Thu, Jun 30, 2022 at 2:07 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> > Current:\n> > \"The SET and WHERE clauses in ON CONFLICT DO UPDATE have access to the\n> > existing row using the table's name (or an alias), and to [rows] proposed\n> > for insertion using the special excluded table.\"\n> >\n> > The word table in that sentence is wrong and not a useful way to think\n> of the thing which we've named excluded. It is a single value of a\n> composite type having the structure of the named table.\n>\n> I think that your reasoning is correct, but I don't agree with your\n> conclusion. The term \"special excluded table\" is a fudge, but that\n> isn't necessarily a bad thing. Sure, we could add something about the\n> UPDATE being similar to an UPDATE with a self-join, as I said\n> upthread. But I think that that would make the concept harder to\n> grasp.\n>\n\nI don't think incorporating self-joining to be helpful; the status quo is\nbetter than that. I believe people mostly think of \"composite variable\"\nfrom the current description even if we don't use those words - or such a\nconcept can be explained by analogy with NEW and OLD (I think of it like a\ntrigger, only that SQL doesn't have variables so we cannot use that term,\nhence just using \"name\").\n\n\n>\n> > I'll agree that most people will mentally paper over the difference and\n> go merrily on their way. At least one person recently did not do that,\n> which prompted an email to the community\n>\n> Can you provide a reference for this? 
Didn't see anything like that in\n> the reference you gave upthread.\n>\n\nOK, the discussion I am recalling happened on Discord hence the lack of a\nlink.\n\nOn roughly 3/8 the following conversation occurred (I've trimmed out some\nnon-relevant comments):\n\n>>>OP\nHello, I have a simple question.\nMy table has a column 'transaction_date'\nI have an insert statement with an ON CONFLICT statement\nI update using the 'excluded' values, but I only want to update if the\ntransaction date is the same or newer.\nDo I just use: \"WHERE EXCLUDED.transaction_date >= transaction_date\"?\nso the full query is something like: INSERT INTO table VALUES (pk, yadda1,\nyadda2) ON CONFLICT (pk) DO UPDATE SET (yadda1 = EXCLUDED.yadda1, yadda2 =\nEXCLUDED.yadda2) WHERE EXCLUDED.transaction_date >= transaction_date;\n\n>>>Other Person\nI mean, the ... like 3 examples imply what it contains, and it vaguely says\n\"and to rows proposed for insertion using the special excluded table.\"\nbut...\nStill, based on the BNF, that should work as you stated it.\n\n>>>OP\nwould perhaps it try to overwrite more than one row because many rows would\nmeet the criteria?\nIt seems like it limits it to the conflict row but..\n\n>>>Other Person\nWell, you're only conflicting on the PK, which is guaranteed to be unique.\n\n>>>OP\nAh, so then it is limited to that row if it is specified within the ON\nCONFLICT action if I am reading correct.\n[...]\nIf it matters to you, the only thing I got wrong apparently (in my limited\nnon-sufficient testing) is that to access the current value within the\ntable row you must use the table name. So: WHERE EXCLUDED.transaction_date\n>= tableName.transaction_date\n\n>>>ME\n\"have access [...] to rows proposed for insertion using the special\nexcluded table.\". 
You have an update situation where two tables (the\ntarget and \"excluded\") are in scope with the exact same column names (by\ndefinition) so any column references in the value expressions need to be\nprefixed with which of the two tables you want to examine. As with a\nnormal UPDATE, the left side of the SET clause entry must reference the\ntarget table and so its column cannot, and must not, be table qualified.\nWhile it speaks of \"rows\" this is basically a per-row thing. As each row\nis tested and finds a conflict the update is executed.\n\n>>>Other Person\nMentioning something as critical as that offhand is a mistake IMO. It\nshould have its own section.\nIt's also not mentioned in the BNF, though it shows up in the examples. You\nhave to basically infer everything.\n\n>>>ME\nThe exact wording of the conflict_action description in head is: \"The SET\nand WHERE clauses in ON CONFLICT DO UPDATE have access to the existing row\nusing the table's name (or an alias), and to rows proposed for insertion\nusing the special excluded table.\" I haven't read anything here that gives\nme a hint as to how that ended up misinterpreted so that I could possibly\nformulate an alternative wording. And I cannot think of a more appropriate\nplace to locate that sentence either. The examples do cover this and the\nspecifics here are not something that we try to represent in BNF.\nI'd probably change \"and to rows proposed for insertion\" to \"and to the\ncorresponding row proposed for insertion\".\n\n>>>OP\nThis does not change the original conclusion we arrived at correct? 
If I am\nreading what you are saying right, since it only discovered the conflict\nafter examining the row, then by the same token it will only affect the\nsame row where the conflict was detected.\n\n\n> I have a hard time imagining a user that reads the INSERT docs and\n> imagines that \"excluded\" is a relation that is visible to the query in\n> ways that are not limited to expression evaluation for the UPDATE's\n> WHERE/SET.\n\n\nYes, and based on a single encounter I agree this doesn't seem like a\nbroadly encountered issue. My takeaway from that eventually led to this\nproposal. The \"Other Person\" who is complaining about the docs is one of\nthe mentors on the Discord server and works for one of the corporate\ncontributors to the community. (I suppose Discord is considered public so\nmaybe this redaction is unnecessary...)\n\nDavid J.",
"msg_date": "Thu, 30 Jun 2022 15:07:11 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: Clarify what \"excluded\" represents for INSERT ON CONFLICT"
},
{
"msg_contents": "On Thu, Jun 30, 2022 at 3:07 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> Yes, and based on a single encounter I agree this doesn't seem like a broadly encountered issue. My takeaway from that eventually led to this proposal. The \"Other Person\" who is complaining about the docs is one of the mentors on the Discord server and works for one of the corporate contributors to the community. (I suppose Discord is considered public so maybe this redaction is unnecessary...)\n\nMy impression from reading this transcript is that the user was\nconfused as to why they needed to qualify the target table name in the\nON CONFLICT DO UPDATE's WHERE clause -- they didn't have to qualify it\nin the targetlist that appears in \"SET ... \", so why the need to do it\nin the WHERE clause? This isn't something that upsert statements need\nto do all that often, just because adding additional conditions to the\nWHERE clause isn't usually necessary. That much makes sense to me -- I\n*can* imagine how that could cause confusion.\n\nIf that interpretation is correct, then it's not clear what it should\nmean for how the INSERT documentation describes EXCLUDED. EXCLUDED is\ninvolved here, since EXCLUDED is the thing that creates the ambiguity,\nbut that seems almost incidental to me. This user would probably not\nhave been confused if they didn't need to use a WHERE clause (very\nmuch the common case), even when expression evaluation involving\nEXCLUDED in the SET was still used (also common).\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 30 Jun 2022 15:40:23 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify what \"excluded\" represents for INSERT ON CONFLICT"
},
{
"msg_contents": "On Thu, Jun 30, 2022 at 5:05 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Jun 30, 2022 at 1:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > rhaas=# insert into foo values (1, 'frob') on conflict (a) do update\n> > set b = (select b || 'nitz' from excluded);\n> > ERROR: relation \"excluded\" does not exist\n> > LINE 1: ...ct (a) do update set b = (select b || 'nitz' from excluded);\n> >\n> > I do find that a bit of a curious error message, because that relation\n> > clearly DOES exist in the range table.\n>\n> Let's say that we supported this syntax. I see some problems with that\n> (you may have thought of these already). Thinking of \"excluded\" as a\n> separate table creates some very thorny questions, such as:\n>\n> How would the user be expected to handle the case where there was more\n> than a single \"row\" in \"excluded\"? How could the implementation know\n> what the contents of the \"excluded table\" were ahead of time in the\n> multi-row-insert case? We'd have to know about *all* of the conflicts\n> for all rows proposed for insertion to do this, which seems impossible\n> in the general case -- because some of those conflicts won't have\n> happened yet, and cannot be predicted.\n\nI was assuming it would just behave like a 1-row table i.e. these\nwould do the same thing:\n\ninsert into foo values (1, 'frob') on conflict (a) do update set b =\n(select excluded.b || 'nitz');\ninsert into foo values (1, 'frob') on conflict (a) do update set b =\n(select b || 'nitz' from excluded);\n\nI'm actually kinda unsure why that doesn't already work. 
There may\nwell be a very good reason, but my naive thought would be that if\nexcluded doesn't have a range table entry, the first one would fail\nbecause excluded can't be looked up in the range table, and if it does\nhave a range-table entry, then the second one would work because the\nfrom-clause reference would find it just like the qualified column\nreference did.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 1 Jul 2022 08:30:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify what \"excluded\" represents for INSERT ON CONFLICT"
},
{
"msg_contents": "On Thu, Jun 30, 2022 at 6:40 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> My impression from reading this transcript is that the user was\n> confused as to why they needed to qualify the target table name in the\n> ON CONFLICT DO UPDATE's WHERE clause -- they didn't have to qualify it\n> in the targetlist that appears in \"SET ... \", so why the need to do it\n> in the WHERE clause? This isn't something that upsert statements need\n> to do all that often, just because adding additional conditions to the\n> WHERE clause isn't usually necessary. That much makes sense to me -- I\n> *can* imagine how that could cause confusion.\n\n+1.\n\nI think that the issue here is simply that because both the updated\ntable and the \"excluded\" pseudo-table are visible here, and have the\nsame columns, an unqualified name is ambiguous. I don't really think\nthat it's worth documenting. The error message you get if you fail to\ndo it is actually pretty good:\n\nrhaas=# insert into foo values (1, 'frob') on conflict (a) do update\nset b = (select b || 'nitz');\nERROR: column reference \"b\" is ambiguous\nLINE 1: ...'frob') on conflict (a) do update set b = (select b || 'nitz...\n ^\n\nNow you could read that and not understand that the ambiguity is\nbetween the target table and the \"excluded\" pseudo-table, for sure.\nBut, would you think to check the documentation at that point? I'm not\nsure that's what people would really do. And if they did, I think that\nDavid's proposed patch would be unlikely to make them less confused.\nWhat would probably help more is adding something like this to the\nerror message:\n\nHINT: column \"b\" could refer to any of these relations: \"foo\", \"excluded\"\n\nThat could also help people who encounter this error in other\nsituations. 
I'm not 100% sure this is a good idea, but I feel like it\nwould have a much better chance of helping someone in this situation\nthan the proposed doc patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 1 Jul 2022 09:00:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify what \"excluded\" represents for INSERT ON CONFLICT"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think that the issue here is simply that because both the updated\n> table and the \"excluded\" pseudo-table are visible here, and have the\n> same columns, an unqualified name is ambiguous. I don't really think\n> that it's worth documenting. The error message you get if you fail to\n> do it is actually pretty good:\n\n> ERROR: column reference \"b\" is ambiguous\n\n> Now you could read that and not understand that the ambiguity is\n> between the target table and the \"excluded\" pseudo-table, for sure.\n\nAgreed. It doesn't help that there's no explicit use of \"excluded\"\nanywhere, as there is in more usual ambiguous-column cases.\n\n> What would probably help more is adding something like this to the\n> error message:\n> HINT: column \"b\" could refer to any of these relations: \"foo\", \"excluded\"\n\n+1, that seems like it could be handy across the board.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 01 Jul 2022 09:40:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify what \"excluded\" represents for INSERT ON CONFLICT"
},
{
"msg_contents": "On Fri, Jul 1, 2022 at 6:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > What would probably help more is adding something like this to the\n> > error message:\n> > HINT: column \"b\" could refer to any of these relations: \"foo\", \"excluded\"\n>\n> +1, that seems like it could be handy across the board.\n\nThe user *will* get a similar HINT if they happen to *also* spell the\nwould-be ambiguous column name slightly incorrectly:\n\nERROR: column \"barr\" does not exist\nLINE 1: ...lict (bar) do update set bar = excluded.bar where barr != 5;\n ^\nHINT: Perhaps you meant to reference the column \"foo.bar\" or the\ncolumn \"excluded.bar\".\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 1 Jul 2022 07:37:40 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify what \"excluded\" represents for INSERT ON CONFLICT"
},
{
"msg_contents": "On Fri, Jul 1, 2022 at 6:01 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> What would probably help more is adding something like this to the\n> error message:\n>\n> HINT: column \"b\" could refer to any of these relations: \"foo\", \"excluded\"\n>\n> That could also help people who encounter this error in other\n> situations. I'm not 100% sure this is a good idea, but I feel like it\n> would have a much better chance of helping someone in this situation\n> than the proposed doc patch.\n\nI agree with everything you've said here, though I am 100% sure that\nsomething like your proposed HINT would be a real usability win.\n\nThe \"Perhaps you meant to reference the column\" HINT displayed when\nthe user misspells a column is a surprisingly popular feature. I'm\naware of quite a few people singing its praises on Twitter, for\nexample. That hardly ever happens, even with features that we think of\nas high impact major features. So evidently users really value these\nkinds of quality of life improvements.\n\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 1 Jul 2022 07:57:53 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify what \"excluded\" represents for INSERT ON CONFLICT"
},
{
"msg_contents": "On Fri, Jul 1, 2022 at 7:58 AM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Fri, Jul 1, 2022 at 6:01 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > What would probably help more is adding something like this to the\n> > error message:\n> >\n> > HINT: column \"b\" could refer to any of these relations: \"foo\", \"excluded\"\n> >\n> > That could also help people who encounter this error in other\n> > situations. I'm not 100% sure this is a good idea, but I feel like it\n> > would have a much better chance of helping someone in this situation\n> > than the proposed doc patch.\n>\n> I agree with everything you've said here, though I am 100% sure that\n> something like your proposed HINT would be a real usability win.\n>\n> The \"Perhaps you meant to reference the column\" HINT displayed when\n> the user misspells a column is a surprisingly popular feature. I'm\n> aware of quite a few people singing its praises on Twitter, for\n> example. That hardly ever happens, even with features that we think of\n> as high impact major features. So evidently users really value these\n> kinds of quality of life improvements.\n>\n>\n\n+1 to this better approach to address the specific confusion regarding\nambiguity.\n\nWithout any other changes being made I'm content with the status quo\ncalling excluded a table a reasonable approximation that hasn't been seen\nto be confusing to our readers.\n\nThat said, I still think that the current wording should be tweak with\nrespect to row vs. 
rows (especially if we continue to call it a table):\n\nCurrent:\n\"The SET and WHERE clauses in ON CONFLICT DO UPDATE have access to the\nexisting row using the table's name (or an alias), and to [rows] proposed\nfor insertion using the special excluded table.\"\n\nChange [rows] to:\n\n\"the row\"\n\n\nI'm undecided whether \"FROM excluded\" should be something that works - but\nI also don't think it would actually be used in any case.\n\nDavid J.\n",
"msg_date": "Fri, 1 Jul 2022 08:11:36 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: Clarify what \"excluded\" represents for INSERT ON CONFLICT"
},
{
"msg_contents": "On Fri, Jul 1, 2022 at 08:11:36AM -0700, David G. Johnston wrote:\n> That said, I still think that the current wording should be tweak with respect\n> to row vs. rows (especially if we continue to call it a table):\n> \n> Current:\n> \"The SET and WHERE clauses in ON CONFLICT DO UPDATE have access to the\n> existing row using the table's name (or an alias), and to [rows] proposed\n> for insertion using the special excluded table.\"\n> \n> Change [rows] to:\n> \n> \"the row\"\n> \n> \n> I'm undecided whether \"FROM excluded\" should be something that works - but I\n> also don't think it would actually be used in any case.\n\nI found two places where a singular \"row\" would be better, doc patch\nattached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Fri, 8 Jul 2022 23:18:43 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify what \"excluded\" represents for INSERT ON CONFLICT"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 11:18 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I found two places where a singular \"row\" would be better, doc patch\n> attached.\n\n+1.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Jul 2022 12:25:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify what \"excluded\" represents for INSERT ON CONFLICT"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 11:18:43PM -0400, Bruce Momjian wrote:\n> On Fri, Jul 1, 2022 at 08:11:36AM -0700, David G. Johnston wrote:\n> > That said, I still think that the current wording should be tweak with respect\n> > to row vs. rows (especially if we continue to call it a table):\n> > \n> > Current:\n> > \"The SET and WHERE clauses in ON CONFLICT DO UPDATE have access to the\n> > existing row using the table's name (or an alias), and to [rows] proposed\n> > for insertion using the special excluded table.\"\n> > \n> > Change [rows] to:\n> > \n> > \"the row\"\n> > \n> > \n> > I'm undecided whether \"FROM excluded\" should be something that works - but I\n> > also don't think it would actually be used in any case.\n> \n> I found two places where a singular \"row\" would be better, doc patch\n> attached.\n\nPatch applied to all supported versions.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 14 Jul 2022 15:34:54 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify what \"excluded\" represents for INSERT ON CONFLICT"
}
] |
[
{
"msg_contents": "Hi.\n\nReposting this on its own thread.\n\nhttps://www.postgresql.org/message-id/flat/CAKFQuwby1aMsJDMeibaBaohgoaZhivAo4WcqHC1%3D9-GDZ3TSng%40mail.gmail.com\n\n Per discussion on -general the documentation for the\n ALTER ROUTINE ... DEPENDS ON EXTENSION and DROP EXTENSION doesn't\n clearly indicate that these dependent routines are treated in a\n similar manner to the extension's owned objects when it comes to\n using RESTRICT mode drop: namely their presence doesn't force\n the drop command to abort. Clear that up.\n\nDavid J.",
"msg_date": "Thu, 9 Jun 2022 08:43:22 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "doc: Clarify Routines and Extension Membership"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n\n+ A function that's marked as dependent on an extension is dropped when the\n+ extension is dropped, even if cascade is not specified.\n+ dependency checking in restrict mode <xref linkend=\"sql-dropextension\"/>.\n+ A function can depend upon multiple extensions, and will be dropped when\n+ any one of those extensions is dropped.\n\nThird line here seems like a copy/paste mistake? Also I'd tend\nto mark up the keyword as <literal>CASCADE</literal>.\n\n+ This form marks the procedure as dependent on the extension, or no longer\n+ dependent on that extension if <literal>NO</literal> is specified.\n\nThe/that inconsistency ... choose one. Or actually, the \"an ... the\"\ncombination you used elsewhere doesn't grate on the ear either.\n\n+ For each extension, refuse to drop anything if any objects (other than the\n+ extensions listed) depend on it. However, its own member objects, and routines\n+ that are explicitly dependent on this extension, are skipped.\n+ This is the default.\n\n\"skipped\" seems like a horrible choice of word; it could easily be read as\n\"they don't get dropped\". I am not convinced that mentioning the member\nobjects here is an improvement either. In the first sentence you are\ntreating each extension as a monolithic object; why not in the second?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Jul 2022 20:12:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify Routines and Extension Membership"
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 08:12:09PM -0400, Tom Lane wrote:\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> \n> + A function that's marked as dependent on an extension is dropped when the\n> + extension is dropped, even if cascade is not specified.\n> + dependency checking in restrict mode <xref linkend=\"sql-dropextension\"/>.\n> + A function can depend upon multiple extensions, and will be dropped when\n> + any one of those extensions is dropped.\n> \n> Third line here seems like a copy/paste mistake? Also I'd tend\n> to mark up the keyword as <literal>CASCADE</literal>.\n> \n> + This form marks the procedure as dependent on the extension, or no longer\n> + dependent on that extension if <literal>NO</literal> is specified.\n> \n> The/that inconsistency ... choose one. Or actually, the \"an ... the\"\n> combination you used elsewhere doesn't grate on the ear either.\n> \n> + For each extension, refuse to drop anything if any objects (other than the\n> + extensions listed) depend on it. However, its own member objects, and routines\n> + that are explicitly dependent on this extension, are skipped.\n> + This is the default.\n> \n> \"skipped\" seems like a horrible choice of word; it could easily be read as\n> \"they don't get dropped\". I am not convinced that mentioning the member\n> objects here is an improvement either. In the first sentence you are\n> treating each extension as a monolithic object; why not in the second?\n\nI created a modified patch based on this feedback; patch attached. I\nrewrote the last change.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Fri, 8 Jul 2022 22:55:55 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify Routines and Extension Membership"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 10:55:55PM -0400, Bruce Momjian wrote:\n> > The/that inconsistency ... choose one. Or actually, the \"an ... the\"\n> > combination you used elsewhere doesn't grate on the ear either.\n> > \n> > + For each extension, refuse to drop anything if any objects (other than the\n> > + extensions listed) depend on it. However, its own member objects, and routines\n> > + that are explicitly dependent on this extension, are skipped.\n> > + This is the default.\n> > \n> > \"skipped\" seems like a horrible choice of word; it could easily be read as\n> > \"they don't get dropped\". I am not convinced that mentioning the member\n> > objects here is an improvement either. In the first sentence you are\n> > treating each extension as a monolithic object; why not in the second?\n> \n> I created a modified patch based on this feedback; patch attached. I\n> rewrote the last change.\n\nPatch applied to PG 13 and later, where extension dependency was added. \nThank you for the patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 14 Jul 2022 17:41:51 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify Routines and Extension Membership"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 2:41 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Fri, Jul 8, 2022 at 10:55:55PM -0400, Bruce Momjian wrote:\n> > > The/that inconsistency ... choose one. Or actually, the \"an ... the\"\n> > > combination you used elsewhere doesn't grate on the ear either.\n> > >\n> > > + For each extension, refuse to drop anything if any objects\n> (other than the\n> > > + extensions listed) depend on it. However, its own member\n> objects, and routines\n> > > + that are explicitly dependent on this extension, are skipped.\n> > > + This is the default.\n> > >\n> > > \"skipped\" seems like a horrible choice of word; it could easily be\n> read as\n> > > \"they don't get dropped\". I am not convinced that mentioning the\n> member\n> > > objects here is an improvement either. In the first sentence you are\n> > > treating each extension as a monolithic object; why not in the second?\n> >\n> > I created a modified patch based on this feedback; patch attached. I\n> > rewrote the last change.\n>\n> Patch applied to PG 13 and later, where extension dependency was added.\n> Thank you for the patch.\n>\n\nThank you and apologies for being quiet here and on a few of the\nother threads. I've been on vacation and flagged as ToDo some of the\nnon-simple feedback items that have come this way.\n\nThe change to restrict and description in drop extension needs to be fixed\nup (the other pages look good).\n\n\"This option prevents the specified extensions from being dropped if there\nexists non-extension-member objects that depends on any the extensions.\nThis is the default.\"\n\nAt minimum: \"...that depend on any of the extensions.\"\n\nI did just now confirm that if any of the named extensions failed to be\ndropped the entire command fails. There is no partial success mode.\n\nI'd like to avoid non-extension-member, and one of the main points is that\nthe routine dependency is member-like, not actual membership. 
Hence the\nseparate wording.\n\nI thus propose to replace the drop extension / restrict paragraph and\nreplace it with the following:\n\n\"This option prevents the specified extensions from being dropped if other\nobjects - besides these extensions, their members, and their explicitly\ndependent routines - depend on them. This is the default.\"\n\nAlso, I'm thinking to change, on the same page (description):\n\n\"Dropping an extension causes its component objects,\"\n\nto be:\n\n\"Dropping an extension causes its member objects,\"\n\nI'm not sure why I originally chose component over member...\n\nDavid J.\n",
"msg_date": "Thu, 14 Jul 2022 18:27:17 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: Clarify Routines and Extension Membership"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 06:27:17PM -0700, David G. Johnston wrote:\n> Thank you and apologies for being quiet here and on a few of the other threads.\n> I've been on vacation and flagged as ToDo some of the non-simple feedback items\n> that have come this way.\n\nNo need to worry --- we will incorporate your suggestions whenever you\ncan supply them. I know you waited months for these to be addressed.\n\n> The change to restrict and description in drop extension needs to be fixed up\n> (the other pages look good).\n> \n> \"This option prevents the specified extensions from being dropped if there\n> exists non-extension-member objects that depends on any the extensions. This is\n> the default.\"\n> \n> At minimum: \"...that depend on any of the extensions.\"\n\nAgreed.\n\n> I did just now confirm that if any of the named extensions failed to be dropped\n> the entire command fails. There is no partial success mode.\n> \n> I'd like to avoid non-extension-member, and one of the main points is that the\n> routine dependency is member-like, not actual membership. Hence the separate\n> wording.\n\nOkay.\n\n> I thus propose to replace the drop extension / restrict paragraph and replace\n> it with the following:\n> \n> \"This option prevents the specified extensions from being dropped if other\n> objects - besides these extensions, their members, and their explicitly\n> dependent routines - depend on them. This is the default.\"\n\nGood.\n\n> Also, I'm thinking to change, on the same page (description):\n> \n> \"Dropping an extension causes its component objects,\"\n> \n> to be:\n> \n> \"Dropping an extension causes its member objects,\"\n> \n> I'm not sure why I originally chose component over member...\n\nAll done, in the attached patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Mon, 18 Jul 2022 16:40:15 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify Routines and Extension Membership"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 04:40:15PM -0400, Bruce Momjian wrote:\n> > Also, I'm thinking to change, on the same page (description):\n> > \n> > \"Dropping an extension causes its component objects,\"\n> > \n> > to be:\n> > \n> > \"Dropping an extension causes its member objects,\"\n> > \n> > I'm not sure why I originally chose component over member...\n> \n> All done, in the attached patch.\n\nPatch applied through PG 13, where extension dependency tracking was\nadded.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 12 Aug 2022 09:14:12 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Clarify Routines and Extension Membership"
}
] |
[
{
"msg_contents": "Hi.\n\nReposting this on its own thread.\n\nhttps://www.postgresql.org/message-id/flat/CAKFQuwby1aMsJDMeibaBaohgoaZhivAo4WcqHC1%3D9-GDZ3TSng%40mail.gmail.com\n\n The default database name is just the user name, not the\n operating-system user name.\n\n In passing, the authentication error examples use the phrase\n \"database user name\" in a couple of locations. The word\n database in both cases is both unusual and unnecessary for\n understanding. The reference to user name means the one in/for the\n database unless otherwise specified.\n\n Furthermore, it seems better to tell the reader the likely\n reason why the displayed database name happens to be a user name.\n\nThis change is probably optional but I think it makes sense:\n- The indicated database user name was not found.\n+ The indicated user name was not found.\n\nThe other changes simply try to avoid the issue altogether.\n\nDavid J.",
"msg_date": "Thu, 9 Jun 2022 08:55:07 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "doc: Fix description of how the default user name is chosen"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> In passing, the authentication error examples use the phrase\n> \"database user name\" in a couple of locations. The word\n> database in both cases is both unusual and unnecessary for\n> understanding. The reference to user name means the one in/for the\n> database unless otherwise specified.\n\nI'm not convinced that just saying \"user name\" is an improvement.\nThe thing that we are trying to clarify in much of this section\nis the relationship between your operating-system-assigned user\nname and your database-cluster-assigned user name. So just saying\n\"user name\" adds an undesirable element of ambiguity.\n\nMaybe we could change \"database user name\" to \"Postgres user name\"?\n\n- if you do not specify a database name, it defaults to the database\n- user name, which might or might not be the right thing.\n+ if the database name shown matches the user name you are connecting\n+ as it is not by accident: the default database name is the\n+ user name.\n\nThis does absolutely not seem like an improvement.\n\n Since the database server uses the same default, you will not have\n to specify the port in most cases. The default user name is your\n- operating-system user name, as is the default database name.\n+ operating-system user name. The default database name is the resolved user name.\n\nI agree this phrasing needs some work, but \"resolved\" doesn't seem\nhelpful, since it's not defined here or nearby. Maybe \"The default\ndatabase name is the specified (or defaulted) user name.\" ?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Jul 2022 20:20:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Fix description of how the default user name is chosen"
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 5:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > In passing, the authentication error examples use the phrase\n> > \"database user name\" in a couple of locations. The word\n> > database in both cases is both unusual and unnecessary for\n> > understanding. The reference to user name means the one in/for the\n> > database unless otherwise specified.\n>\n> I'm not convinced that just saying \"user name\" is an improvement.\n> The thing that we are trying to clarify in much of this section\n> is the relationship between your operating-system-assigned user\n> name and your database-cluster-assigned user name. So just saying\n> \"user name\" adds an undesirable element of ambiguity.\n\n\n> Maybe we could change \"database user name\" to \"Postgres user name\"?\n>\n\nI'm fine with just leaving \"database user name\" as no one seems to have the\nsame qualm with it that I do. Besides, I just finished reading:\n\nhttps://www.postgresql.org/docs/current/client-authentication.html\n\nand it seems pointless to leave that written as-is and gripe about the\nspecific change I was recommending.\n\n>\n> - if you do not specify a database name, it defaults to the database\n> - user name, which might or might not be the right thing.\n> + if the database name shown matches the user name you are connecting\n> + as it is not by accident: the default database name is the\n> + user name.\n>\n> This does absolutely not seem like an improvement.\n>\n\nIn that case I don't see the need for any form of commentary beyond:\n\n\"If you do not specify a database name it defaults to the database user\nname.\"\n\n\n> Since the database server uses the same default, you will not have\n> to specify the port in most cases. The default user name is your\n> - operating-system user name, as is the default database name.\n> + operating-system user name. 
The default database name is the resolved\n> user name.\n>\n> I agree this phrasing needs some work, but \"resolved\" doesn't seem\n> helpful, since it's not defined here or nearby. Maybe \"The default\n> database name is the specified (or defaulted) user name.\" ?\n>\n>\n\"The default database name is the specified (or defaulted) database user\nname.\"\n\nI'll accept that \"specified (or defaulted)\" is simply another way to write\nwhat I understand to be the common meaning of \"resolved\" in this situation.\n\nDavid J.\n",
"msg_date": "Tue, 5 Jul 2022 17:43:19 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: Fix description of how the default user name is chosen"
},
{
"msg_contents": "On Wed, 6 Jul 2022 at 02:43, David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Tue, Jul 5, 2022 at 5:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>> > In passing, the authentication error examples use the phrase\n>> > \"database user name\" in a couple of locations. The word\n>> > database in both cases is both unusual and unnecessary for\n>> > understanding. The reference to user name means the one in/for the\n>> > database unless otherwise specified.\n>>\n>> I'm not convinced that just saying \"user name\" is an improvement.\n>> The thing that we are trying to clarify in much of this section\n>> is the relationship between your operating-system-assigned user\n>> name and your database-cluster-assigned user name. So just saying\n>> \"user name\" adds an undesirable element of ambiguity.\n>>\n>>\n>> Maybe we could change \"database user name\" to \"Postgres user name\"?\n>\n>\n> I'm fine with just leaving \"database user name\" as no one seems to have the same qualm with it that I do. Besides, I just finished reading:\n>\n> https://www.postgresql.org/docs/current/client-authentication.html\n>\n> and it seems pointless to leave that written as-is and gripe about the specific change I was recommending.\n\nIf we're going to change this anyway, could we replace 'user name'\nwith 'username' in the connection documentation? 
It irks me to see so\nmuch 'user name' while our connection parameter is 'username', and we\nuse the username of the OS user, not the OS user's (display) name - or\nat least, that's how it behaved under Linux last time I checked.\n\n>>\n>>\n>> - if you do not specify a database name, it defaults to the database\n>> - user name, which might or might not be the right thing.\n>> + if the database name shown matches the user name you are connecting\n>> + as it is not by accident: the default database name is the\n>> + user name.\n>>\n>> This does absolutely not seem like an improvement.\n>\n>\n> In that case I don't see the need for any form of commentary beyond:\n>\n> \"If you do not specify a database name it defaults to the database user name.\"\n\nAgreed to both.\nThe right-ness of the default can either be systematic (\"we should or\nshould not default to connection username instead of some other\ndefault\") or in context of connection establishment (\"this connection\nshould or should not connect to the database named after the user's\nusername\").\nThe right-ness of the systematic default doesn't matter in this\ncontext (that's something to put in the comments of that code or\ndiscuss on -hackers), and the right-ness of the contextual default was\nalready proven to be wrong in this configuration of server and client,\nby the context of failing to connect to that defaulted database.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 6 Jul 2022 14:30:57 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: Fix description of how the default user name is chosen"
},
{
"msg_contents": "On 06.07.22 14:30, Matthias van de Meent wrote:\n> If we're going to change this anyway, could we replace 'user name'\n> with 'username' in the connection documentation? It irks me to see so\n> much 'user name' while our connection parameter is 'username', and we\n> use the username of the OS user, not the OS user's (display) name - or\n> at least, that's how it behaved under Linux last time I checked.\n\nThis might make sense if you are referring specifically to the value of \nthat connection option, and you mark it up accordingly. Otherwise, the \nsubtlety might get lost.\n\n\n\n",
"msg_date": "Wed, 6 Jul 2022 14:37:05 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: Fix description of how the default user name is chosen"
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 08:20:25PM -0400, Tom Lane wrote:\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > In passing, the authentication error examples use the phrase\n> > \"database user name\" in a couple of locations. The word\n> > database in both cases is both unusual and unnecessary for\n> > understanding. The reference to user name means the one in/for the\n> > database unless otherwise specified.\n> \n> I'm not convinced that just saying \"user name\" is an improvement.\n> The thing that we are trying to clarify in much of this section\n> is the relationship between your operating-system-assigned user\n> name and your database-cluster-assigned user name. So just saying\n> \"user name\" adds an undesirable element of ambiguity.\n> \n> Maybe we could change \"database user name\" to \"Postgres user name\"?\n> \n> - if you do not specify a database name, it defaults to the database\n> - user name, which might or might not be the right thing.\n> + if the database name shown matches the user name you are connecting\n> + as it is not by accident: the default database name is the\n> + user name.\n> \n> This does absolutely not seem like an improvement.\n> \n> Since the database server uses the same default, you will not have\n> to specify the port in most cases. The default user name is your\n> - operating-system user name, as is the default database name.\n> + operating-system user name. The default database name is the resolved user name.\n> \n> I agree this phrasing needs some work, but \"resolved\" doesn't seem\n> helpful, since it's not defined here or nearby. Maybe \"The default\n> database name is the specified (or defaulted) user name.\" ?\n\nI am not seeing much improvement in the proposed patch either. I wonder\nif we should be calling this the \"session\" or \"connection\" user name. 
\nWhen the docs say \"if you do not specify a database name, it defaults to\nthe database user name\", there is so much \"database\" in there that the\nmeaning is unclear, and in this context, the user name is a property of\nthe connection or session, not of the database.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  Indecision is a decision.  Inaction is an action.  Mark Batterson\n\n\n\n",
"msg_date": "Fri, 8 Jul 2022 21:54:35 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Fix description of how the default user name is chosen"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Tue, Jul 5, 2022 at 08:20:25PM -0400, Tom Lane wrote:\n>> I agree this phrasing needs some work, but \"resolved\" doesn't seem\n>> helpful, since it's not defined here or nearby. Maybe \"The default\n>> database name is the specified (or defaulted) user name.\" ?\n\n> I am not seeing much improvement in the proposed patch either. I wonder\n> if we should be calling this the \"session\" or \"connection\" user name. \n> When the docs say \"if you do not specify a database name, it defaults to\n> the database user name\", there is so much \"database in there that the\n> meaing is unclear, and in this context, the user name is a property of\n> the connection or session, not of the database.\n\nUmm ... you could make the exact same statement with respect to the\nuser's operating-system login session, so I doubt that \"session\" or\n\"connection\" adds any clarity.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 08 Jul 2022 22:17:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Fix description of how the default user name is chosen"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 10:17:11PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Tue, Jul 5, 2022 at 08:20:25PM -0400, Tom Lane wrote:\n> >> I agree this phrasing needs some work, but \"resolved\" doesn't seem\n> >> helpful, since it's not defined here or nearby. Maybe \"The default\n> >> database name is the specified (or defaulted) user name.\" ?\n> \n> > I am not seeing much improvement in the proposed patch either. I wonder\n> > if we should be calling this the \"session\" or \"connection\" user name. \n> > When the docs say \"if you do not specify a database name, it defaults to\n> > the database user name\", there is so much \"database in there that the\n> > meaing is unclear, and in this context, the user name is a property of\n> > the connection or session, not of the database.\n> \n> Umm ... you could make the exact same statement with respect to the\n> user's operating-system login session, so I doubt that \"session\" or\n> \"connection\" adds any clarity.\n\nWell, one confusion is that there is a database name and a database user\nname. We don't have different operating system names that users can\nconnect to, usually.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 8 Jul 2022 22:42:58 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Fix description of how the default user name is chosen"
},
{
"msg_contents": "On Friday, July 8, 2022, Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Fri, Jul 8, 2022 at 10:17:11PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > On Tue, Jul 5, 2022 at 08:20:25PM -0400, Tom Lane wrote:\n> > >> I agree this phrasing needs some work, but \"resolved\" doesn't seem\n> > >> helpful, since it's not defined here or nearby. Maybe \"The default\n> > >> database name is the specified (or defaulted) user name.\" ?\n> >\n> > > I am not seeing much improvement in the proposed patch either. I\n> wonder\n> > > if we should be calling this the \"session\" or \"connection\" user name.\n> > > When the docs say \"if you do not specify a database name, it defaults\n> to\n> > > the database user name\", there is so much \"database in there that the\n> > > meaing is unclear, and in this context, the user name is a property of\n> > > the connection or session, not of the database.\n> >\n> > Umm ... you could make the exact same statement with respect to the\n> > user's operating-system login session, so I doubt that \"session\" or\n> > \"connection\" adds any clarity.\n>\n> Well, one confusion is that there is a database name and a database user\n> name. 
We don't have different operating system names that users can\n> connect to, usually.\n>\n>\nMaybe invoke the wording from the libpq docs and say:\n\nThe default database name is the same as the user connection parameter.\n\nhttps://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS\n\nThat page doesn’t feel the need to qualify user name and I don’t think it\nhurts comprehension; and the writing “user parameter” there, instead of\n“user name”, since the parameter is simply “user”, not “username”.\n\nDavid J.",
"msg_date": "Sat, 9 Jul 2022 08:06:21 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: Fix description of how the default user name is chosen"
},
{
"msg_contents": "On Sat, Jul 9, 2022 at 08:06:21AM -0700, David G. Johnston wrote:\n> Maybe invoke the wording from the libpq docs and say:\n> \n> The default database name is the same as the user connection parameter.\n> \n> https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS\n> \n> That page doesn’t feel the need to qualify user name and I don’t think it hurts\n> comprehension; and the writing “user parameter” there, instead of “user name”,\n> since the parameter is simply “user”, not “username”.\n\nWell, it could be the login OS name if the user connection parameter is\nunspecified, right?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Sat, 9 Jul 2022 11:15:58 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Fix description of how the default user name is chosen"
},
{
    "msg_contents": "On Sat, Jul 9, 2022, 08:16 Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Sat, Jul  9, 2022 at 08:06:21AM -0700, David G. Johnston wrote:\n> > Maybe invoke the wording from the libpq docs and say:\n> >\n> > The default database name is the same as the user connection parameter.\n> >\n> >\n> https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS\n> >\n> > That page doesn’t feel the need to qualify user name and I don’t think\n> it hurts\n> > comprehension; and the writing “user parameter” there, instead of “user\n> name”,\n> > since the parameter is simply “user”, not “username”.\n>\n> Well, it could be the login OS name if the user connection parameter is\n> unspecified, right?\n>\n>\nNo.  It is always the user parameter.  It just so happens that parameter\nalso has a default.  And so while there is a transitive aspect the\nresolution of the user parameter happens first, using the OS user if\nneeded, then the dbname parameter is resolved using the user parameter if\nneeded to supply the default.\n\nDavid J.",
"msg_date": "Sat, 9 Jul 2022 08:52:46 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: Fix description of how the default user name is chosen"
},
{
"msg_contents": "On 09.07.22 17:52, David G. Johnston wrote:\n> No. It is always the user parameter. It just so happens that parameter \n> also has a default. And so while there is a transitive aspect the \n> resolution of the user parameter happens first, using the OS user if \n> needed, then the dbname parameter is resolved using the user parameter \n> if needed to supply the default.\n\nWill there be an updated patch here? The original patch contained three \nhunks; I'm not sure which one of those was intending to fix a real bug \nand which ones were cosmetic. Is anything in the current documentation \nactually wrong?\n\n\n\n",
"msg_date": "Mon, 31 Oct 2022 14:41:45 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: Fix description of how the default user name is chosen"
},
{
"msg_contents": "This is the only sentence I claim is factually incorrect, with a suggested\nre-wording.\n\ndiff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml\nindex 9494f28063..f375a0fc11 100644\n--- a/doc/src/sgml/ref/psql-ref.sgml\n+++ b/doc/src/sgml/ref/psql-ref.sgml\n@@ -660,7 +660,8 @@ EOF\n determined at compile time.\n Since the database server uses the same default, you will not have\n to specify the port in most cases. The default user name is your\n- operating-system user name, as is the default database name.\n+ operating-system user name. Once the user name is determined it is\n+ used as the default database name.\n Note that you cannot\n just connect to any database under any user name. Your database\n administrator should have informed you about your access rights.\n\nOddly, this section is the only one where I'd want to say \"database user\nname\" but it doesn't do that. For consistency on that point, the following\nchunk can be used instead (the attached diff does this):\n\ndiff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml\nindex 9494f28063..38d12933ca 100644\n--- a/doc/src/sgml/ref/psql-ref.sgml\n+++ b/doc/src/sgml/ref/psql-ref.sgml\n@@ -646,23 +646,23 @@ EOF\n <application>psql</application> is a regular\n <productname>PostgreSQL</productname> client application. In order\n to connect to a database you need to know the name of your target\n- database, the host name and port number of the server, and what user\n- name you want to connect as. <application>psql</application> can be\n- told about those parameters via command line options, namely\n+ database, the host name and port number of the server, and what\n+ database user name you want to connect as.\n<application>psql</application>\n+ can be told about those parameters via command line options, namely\n <option>-d</option>, <option>-h</option>, <option>-p</option>, and\n <option>-U</option> respectively. 
If an argument is found that does\n not belong to any option it will be interpreted as the database name\n- (or the user name, if the database name is already given). Not all\n+ (or the database user name, if the database name is already given).\nNot all\n of these options are required; there are useful defaults. If you omit\nthe host\n name, <application>psql</application> will connect via a Unix-domain\nsocket\n to a server on the local host, or via TCP/IP to\n<literal>localhost</literal> on\n- Windows. The default port number is\n- determined at compile time.\n+ Windows. The default port number is determined at compile time.\n Since the database server uses the same default, you will not have\n- to specify the port in most cases. The default user name is your\n- operating-system user name, as is the default database name.\n+ to specify the port in most cases. The default database user name is\nyour\n+ operating-system user name. Once the database user name is determined\nit is\n+ used as the default database name.\n Note that you cannot\n- just connect to any database under any user name. Your database\n+ just connect to any database under any database user name. Your\ndatabase\n administrator should have informed you about your access rights.\n </para>\n\nAnd removing the unnecessary commentary in client-auth.sgml\n\ndiff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml\nindex 32d5d45863..5c6211809b 100644\n--- a/doc/src/sgml/client-auth.sgml\n+++ b/doc/src/sgml/client-auth.sgml\n@@ -2255,7 +2255,7 @@ FATAL: database \"testdb\" does not exist\n </programlisting>\n The database you are trying to connect to does not exist. Note that\n if you do not specify a database name, it defaults to the database\n- user name, which might or might not be the right thing.\n+ user name.\n </para>\n\n <tip>\n\nDavid J.\n\n\n\n\nOn Mon, Oct 31, 2022 at 6:41 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 09.07.22 17:52, David G. 
Johnston wrote:\n> > No. It is always the user parameter. It just so happens that parameter\n> > also has a default. And so while there is a transitive aspect the\n> > resolution of the user parameter happens first, using the OS user if\n> > needed, then the dbname parameter is resolved using the user parameter\n> > if needed to supply the default.\n>\n> Will there be an updated patch here? The original patch contained three\n> hunks; I'm not sure which one of those was intending to fix a real bug\n> and which ones were cosmetic. Is anything in the current documentation\n> actually wrong?\n>\n>",
"msg_date": "Tue, 1 Nov 2022 14:31:04 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: Fix description of how the default user name is chosen"
},
{
"msg_contents": "On 01.11.22 22:31, David G. Johnston wrote:\n> This is the only sentence I claim is factually incorrect, with a \n> suggested re-wording.\n\ncommitted\n\n\n\n",
"msg_date": "Thu, 24 Nov 2022 09:11:31 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: Fix description of how the default user name is chosen"
}
] |
[
{
"msg_contents": "Hi,\n\nReposting this to its own thread.\n\nhttps://www.postgresql.org/message-id/flat/CAKFQuwby1aMsJDMeibaBaohgoaZhivAo4WcqHC1%3D9-GDZ3TSng%40mail.gmail.com\n\n doc: make unique non-null join selectivity example match the prose\n\n The description of the computation for the unique, non-null,\n join selectivity describes a division by the maximum of two values,\n while the example shows a multiplication by their reciprocal. While\n equivalent the max phrasing is easier to understand; which seems\n more important here than precisely adhering to the formula used\n in the code (for which either variant is still an approximation).\n\n While both num_distinct and num_rows are equal for a unique column\n both the concept and formula use row count (10,000) and the\n field num_distinct has already been set to mean the specific value\n present in the pg_stats table (i.e, -1), so use num_rows here.\n\nDavid J.",
"msg_date": "Thu, 9 Jun 2022 08:57:23 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "doc: Make selectivity example match wording"
},
{
"msg_contents": "On Thu Jun 9, 2022 at 11:57 AM EDT, David G. Johnston wrote:\n> Reposting this to its own thread.\n>\n> https://www.postgresql.org/message-id/flat/CAKFQuwby1aMsJDMeibaBaohgoaZhivAo4WcqHC1%3D9-GDZ3TSng%40mail.gmail.com\n>\n> doc: make unique non-null join selectivity example match the prose\n>\n> The description of the computation for the unique, non-null,\n> join selectivity describes a division by the maximum of two values,\n> while the example shows a multiplication by their reciprocal. While\n> equivalent the max phrasing is easier to understand; which seems\n> more important here than precisely adhering to the formula used\n> in the code (for which either variant is still an approximation).\n>\n> While both num_distinct and num_rows are equal for a unique column\n> both the concept and formula use row count (10,000) and the\n> field num_distinct has already been set to mean the specific value\n> present in the pg_stats table (i.e, -1), so use num_rows here.\n\nPointing out that n_distinct = -1 is helpful but changing \"because\" to\n\"and\" suggests that the missing MCV info is coincidental or a side\neffect. Is there any case in which the stronger \"because\" wouldn't be\nappropriate?\n\nThe second parenthetical (num_rows, not shown, but \"tenk\") took me a\nminute to get since the row counts are only apparent on looking somewhat\nclosely at the other examples in the chapter. num_rows also isn't a\ncolumn in pg_stats which the \"not shown\" could be taken to imply; it's\nsourced from somewhere else and only given as num_rows in this example.\nHow's '(as num_rowsN, 10,000 for both \"tenk\" example tables)'?\n\nBy \"this value does get scaled in the non-unique case\" do you mean it\nrelies on n_distinct as in the uncorrected algorithm listing? 
If so I\nthink it'd help to specify that.\n\nYou didn't take this line on but \"This is, subtract the null\nfraction...\" omits the step of multiplying the complements of the null\nfractions together before dividing.\n\nShould n_distinct and num_rows be <structname>d in the text?\n\n\n",
"msg_date": "Sat, 02 Jul 2022 15:42:28 -0400",
"msg_from": "\"Dian M Fay\" <dian.m.fay@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: Make selectivity example match wording"
},
{
"msg_contents": "On Sat, Jul 2, 2022 at 12:42 PM Dian M Fay <dian.m.fay@gmail.com> wrote:\n\n> On Thu Jun 9, 2022 at 11:57 AM EDT, David G. Johnston wrote:\n> > Reposting this to its own thread.\n> >\n> >\n> https://www.postgresql.org/message-id/flat/CAKFQuwby1aMsJDMeibaBaohgoaZhivAo4WcqHC1%3D9-GDZ3TSng%40mail.gmail.com\n> >\n> > doc: make unique non-null join selectivity example match the prose\n> >\n> > The description of the computation for the unique, non-null,\n> > join selectivity describes a division by the maximum of two values,\n> > while the example shows a multiplication by their reciprocal. While\n> > equivalent the max phrasing is easier to understand; which seems\n> > more important here than precisely adhering to the formula used\n> > in the code (for which either variant is still an approximation).\n>\n> Should n_distinct and num_rows be <structname>d in the text?\n>\n\nThanks for the review. I generally like everything you said but it made me\nrealize that I still didn't really understand the intent behind the\nformula. I spent way too much time working that out for myself, then\nturned what I found useful into this v2 patch.\n\nIt may need some semantic markup still but figured I'd see if the idea\nmakes sense.\n\nI basically rewrote, in a bit different style, the same material into the\ncode comments, then proceeded to rework the proof that was already present\nthere.\n\nI did do this in somewhat of a vacuum. I'm not inclined to learn this all\nstart-to-end though. If the abrupt style change is unwanted so be it. I'm\nnot really sure how much benefit the proof really provides. The comments\nin the docs are probably sufficient for the code as well - just define why\nthe three pieces of the formula exist and are packaged into a single\nmultiplier called selectivity as an API choice. I suspect once someone\ngets to that comment it is fair to assume some prior knowledge.\nAdmittedly, I didn't really come into this that way...\n\nDavid J.",
"msg_date": "Sat, 16 Jul 2022 20:23:59 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: Make selectivity example match wording"
},
{
"msg_contents": "On Sat Jul 16, 2022 at 11:23 PM EDT, David G. Johnston wrote:\n> Thanks for the review. I generally like everything you said but it made me\n> realize that I still didn't really understand the intent behind the\n> formula. I spent way too much time working that out for myself, then\n> turned what I found useful into this v2 patch.\n>\n> It may need some semantic markup still but figured I'd see if the idea\n> makes sense.\n>\n> I basically rewrote, in a bit different style, the same material into the\n> code comments, then proceeded to rework the proof that was already present\n> there.\n>\n> I did do this in somewhat of a vacuum. I'm not inclined to learn this all\n> start-to-end though. If the abrupt style change is unwanted so be it. I'm\n> not really sure how much benefit the proof really provides. The comments\n> in the docs are probably sufficient for the code as well - just define why\n> the three pieces of the formula exist and are packaged into a single\n> multiplier called selectivity as an API choice. I suspect once someone\n> gets to that comment it is fair to assume some prior knowledge.\n> Admittedly, I didn't really come into this that way...\n\nFair enough, I only know what I can glean from the comments in\neqjoinsel_inner and friends myself. I do think even this smaller change\nis valuable because the current example talks about using an algorithm\nbased on the number of distinct values immediately after showing\nn_distinct == -1, so making it clear that this case uses num_rows\ninstead is helpful.\n\n\"This value does get scaled in the non-unique case\" again could be more\nspecific (\"since here all values are unique, otherwise the calculation\nuses num_distinct\" perhaps?). But past that quibble I'm good.\n\n\n",
"msg_date": "Sun, 17 Jul 2022 19:07:15 -0400",
"msg_from": "\"Dian M Fay\" <dian.m.fay@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: Make selectivity example match wording"
},
{
"msg_contents": "On Sun, Jul 17, 2022 at 07:07:15PM -0400, Dian M Fay wrote:\n> On Sat Jul 16, 2022 at 11:23 PM EDT, David G. Johnston wrote:\n> > Thanks for the review. I generally like everything you said but it made me\n> > realize that I still didn't really understand the intent behind the\n> > formula. I spent way too much time working that out for myself, then\n> > turned what I found useful into this v2 patch.\n> >\n> > It may need some semantic markup still but figured I'd see if the idea\n> > makes sense.\n> >\n> > I basically rewrote, in a bit different style, the same material into the\n> > code comments, then proceeded to rework the proof that was already present\n> > there.\n> >\n> > I did do this in somewhat of a vacuum. I'm not inclined to learn this all\n> > start-to-end though. If the abrupt style change is unwanted so be it. I'm\n> > not really sure how much benefit the proof really provides. The comments\n> > in the docs are probably sufficient for the code as well - just define why\n> > the three pieces of the formula exist and are packaged into a single\n> > multiplier called selectivity as an API choice. I suspect once someone\n> > gets to that comment it is fair to assume some prior knowledge.\n> > Admittedly, I didn't really come into this that way...\n> \n> Fair enough, I only know what I can glean from the comments in\n> eqjoinsel_inner and friends myself. I do think even this smaller change\n> is valuable because the current example talks about using an algorithm\n> based on the number of distinct values immediately after showing\n> n_distinct == -1, so making it clear that this case uses num_rows\n> instead is helpful.\n> \n> \"This value does get scaled in the non-unique case\" again could be more\n> specific (\"since here all values are unique, otherwise the calculation\n> uses num_distinct\" perhaps?). 
But past that quibble I'm good.\n\nPatch applied to master.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 31 Oct 2023 11:42:12 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Make selectivity example match wording"
}
] |
[
{
"msg_contents": "Per suggestion over on -docs:\n\nhttps://www.postgresql.org/message-id/BL0PR06MB4978F6C0B69F3F03AEBED0FBB3C29@BL0PR06MB4978.namprd06.prod.outlook.com\n\nDavid J.",
"msg_date": "Thu, 9 Jun 2022 09:11:59 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "doc: Move enum storage commentary to top of section"
},
{
"msg_contents": "On Thu, 9 Jun 2022 at 18:12, David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> Per suggestion over on -docs:\n>\n> https://www.postgresql.org/message-id/BL0PR06MB4978F6C0B69F3F03AEBED0FBB3C29@BL0PR06MB4978.namprd06.prod.outlook.com\n\nI agree with moving the size of an enum into the top, but I don't\nthink that the label length documentation also should be included in\nthe top section. It seems excessively detailed for that section with\nits reference to NAMEDATALEN.\n\n-Matthias\n\n\n",
"msg_date": "Wed, 6 Jul 2022 19:23:52 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: Move enum storage commentary to top of section"
},
{
"msg_contents": "On Wed, Jul 6, 2022 at 10:24 AM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n\n> On Thu, 9 Jun 2022 at 18:12, David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > Per suggestion over on -docs:\n> >\n> >\n> https://www.postgresql.org/message-id/BL0PR06MB4978F6C0B69F3F03AEBED0FBB3C29@BL0PR06MB4978.namprd06.prod.outlook.com\n>\n> I agree with moving the size of an enum into the top, but I don't\n> think that the label length documentation also should be included in\n> the top section. It seems excessively detailed for that section with\n> its reference to NAMEDATALEN.\n>\n> -Matthias\n>\n\nAgreed.\n\nTangentially: It does seem a bit unusual to call the fact that the values\nboth case-sensitive and limited to the length of a system identifier an\nimplementation detail. But if anything the length is more of one than the\ncase-sensitivity. Specifying NAMEDATALEN here seems like it breaks\nencapsulation, it could refer by comparison to an identifier and only those\nthat care can learn how that length might be changed in a custom build of\nPostgreSQL.\n\nDavid J.",
"msg_date": "Wed, 6 Jul 2022 10:34:58 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: Move enum storage commentary to top of section"
},
{
"msg_contents": "On Wed, Jul 6, 2022 at 10:34:58AM -0700, David G. Johnston wrote:\n> Agreed.\n> \n> Tangentially: It does seem a bit unusual to call the fact that the values both\n> case-sensitive and limited to the length of a system identifier an\n> implementation detail. But if anything the length is more of one than the\n> case-sensitivity. Specifying NAMEDATALEN here seems like it breaks\n> encapsulation, it could refer by comparison to an identifier and only those\n> that care can learn how that length might be changed in a custom build of\n> PostgreSQL.\n\nI don't think we can do much to improve what we have already in the\ndocs.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 8 Jul 2022 21:21:31 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Move enum storage commentary to top of section"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 09:21:31PM -0400, Bruce Momjian wrote:\n> On Wed, Jul 6, 2022 at 10:34:58AM -0700, David G. Johnston wrote:\n> > Agreed.\n> > \n> > Tangentially: It does seem a bit unusual to call the fact that the values both\n> > case-sensitive and limited to the length of a system identifier an\n> > implementation detail. But if anything the length is more of one than the\n> > case-sensitivity. Specifying NAMEDATALEN here seems like it breaks\n> > encapsulation, it could refer by comparison to an identifier and only those\n> > that care can learn how that length might be changed in a custom build of\n> > PostgreSQL.\n> \n> I don't think we can do much to improve what we have already in the\n> docs.\n\nI have marked the commit-fest entry as returned with feedback.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 8 Jul 2022 21:21:55 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Move enum storage commentary to top of section"
}
] |
[
{
"msg_contents": "Hey,\r\n\r\nI’m currently working on a parallelization optimization of the Sequential Scan in the codebase, and I need to share information between the workers as they scan a relation. I’ve done a decent amount of testing, and I know that the parallel workers all share the same dsa_area in the plan state. However, by the time I’m actually able to allocate a dsa_pointer via dsa_allocate0(), the separate parallel workers have already been created so I can’t actually share the pointer with them. Since the workers all share the same dsa_area, all I need to do is be able to share the single dsa_pointer with them but so far I’ve been out of luck. Any advice?\r\n\r\nMarcus",
"msg_date": "Thu, 9 Jun 2022 18:36:21 +0000",
"msg_from": "\"Ma, Marcus\" <marcjma@amazon.com>",
"msg_from_op": true,
"msg_subject": "Sharing DSA pointer between parallel workers after they've been\n created"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 2:36 PM Ma, Marcus <marcjma@amazon.com> wrote:\n> I’m currently working on a parallelization optimization of the Sequential Scan in the codebase, and I need to share information between the workers as they scan a relation. I’ve done a decent amount of testing, and I know that the parallel workers all share the same dsa_area in the plan state. However, by the time I’m actually able to allocate a dsa_pointer via dsa_allocate0(), the separate parallel workers have already been created so I can’t actually share the pointer with them. Since the workers all share the same dsa_area, all I need to do is be able to share the single dsa_pointer with them but so far I’ve been out of luck. Any advice?\n\nGenerally, the way you share information with a parallel worker is by\nmaking an entry in a DSM TOC using a well-known value as the key, and\nthen the parallel worker reads that entry. That entry might contain\nthings like a dsa_pointer, in which case you can hang any amount of\nadditional stuff off of that storage. In the case of the executor, the\nwell-known value used as the key the plan_node_id. See\nExecSeqScanInitializeDSM and ExecSeqScanInitializeWorker for an\nexample of how to share data that is known before starting the workers\nadvance. In your case you'd need to adapt that technique. But notice\nthat all we're doing here is making a TOC entry for a\nParallelTableScanDesc. The contents of that struct can be anything.\nFor instance, it could contain a dsa_pointer and an LWLock protecting\nthe pointer and a ConditionVariable to wait for the pointer to change.\n\nAnother approach would be to set up a shm_mq and transmit the\ndsa_pointer through it as a message.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 10 Jun 2022 12:11:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sharing DSA pointer between parallel workers after they've been\n created"
}
] |
[
{
"msg_contents": "Hi.\n\nThe fact that one transaction will wait on another if they are trying to\nclaim the same unique value is presently relegated to a subchapter of the\ndocumentation where the typical reader will not even understand (rightly\nso) the main chapter's title. This has prompted a number of questions\nbeing sent to the mailing list over the years about a topic we do cover in\nsome detail in the documentation, most recently here:\n\nhttps://www.postgresql.org/message-id/CAJQY8UosNct0m0xbD7gkWGs02c0SOZN1DET-Q94jjpV1LrC2SQ@mail.gmail.com\n\nAttached is a proposal for incorporating some high-level detail within the\nMVCC Chapter, where readers are already looking to learn about how\ntransactions interact with each other and are \"isolated\" (or not, in this\ncase) from each other.\n\nI'm neither married to the verbiage nor location but it seems better than\nnothing and a modest improvement for not much effort. It's basically a\nglorified cross-reference. I didn't dislike directing the reader to the\ninternals section enough to try and establish a better location for\nthe main content. It just needs better navigation to it from places the\nreader is expected to peruse.\n\nDavid J.",
"msg_date": "Thu, 9 Jun 2022 16:58:48 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "doc: Bring mention of unique index forced transaction wait behavior\n outside of the internal section"
},
{
"msg_contents": "Hi David,\n\n> It's basically a glorified cross-reference. I didn't dislike directing the reader to the internals section enough to try and establish a better location for the main content.\n\nOne problem I see is that:\n\n+ [..], but as there is no pre-existing data, visibility checks are unnecessary.\n\n... allows a wide variety of interpretations, most of which will be\nwrong. And all in all I find an added paragraph somewhat cryptic.\n\nIf the goal is to add a cross-reference I suggest keeping it short,\nsomething like \"For additional details on various corner cases please\nsee ...\".\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 21 Jun 2022 16:48:49 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: Bring mention of unique index forced transaction wait\n behavior outside of the internal section"
},
{
"msg_contents": "On Tue, Jun 21, 2022 at 6:49 AM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi David,\n>\n> > It's basically a glorified cross-reference. I didn't dislike directing\n> the reader to the internals section enough to try and establish a better\n> location for the main content.\n>\n> One problem I see is that:\n>\n> + [..], but as there is no pre-existing data, visibility checks are\n> unnecessary.\n>\n> ... allows a wide variety of interpretations, most of which will be\n> wrong. And all in all I find an added paragraph somewhat cryptic.\n\n\nYeah, I'd probably have to say \"but since no existing record is being\nmodified, visibility checks are unnecessary\".\n\nIs there a specific mis-interpretation that first came to mind for you that\nI can consider specifically?\n\n>\n> If the goal is to add a cross-reference I suggest keeping it short,\n> something like \"For additional details on various corner cases please\n> see ...\".\n>\n>\nThat does work, and I may end up there, but it feels unsatisfying to be so\nvague/general.\n\nDavid J.",
"msg_date": "Tue, 21 Jun 2022 09:07:42 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: Bring mention of unique index forced transaction wait\n behavior outside of the internal section"
},
{
"msg_contents": "On Tue, Jun 21, 2022 at 09:07:42AM -0700, David G. Johnston wrote:\n> On Tue, Jun 21, 2022 at 6:49 AM Aleksander Alekseev <aleksander@timescale.com>\n> wrote:\n> \n> Hi David,\n> \n> > It's basically a glorified cross-reference. I didn't dislike directing\n> the reader to the internals section enough to try and establish a better\n> location for the main content.\n> \n> One problem I see is that:\n> \n> + [..], but as there is no pre-existing data, visibility checks are\n> unnecessary.\n> \n> ... allows a wide variety of interpretations, most of which will be\n> wrong. And all in all I find an added paragraph somewhat cryptic.\n> \n> Yeah, I'd probably have to say \"but since no existing record is being modified,\n> visibility checks are unnecessary\".\n> \n> Is there a specific mis-interpretation that first came to mind for you that I\n> can consider specifically?\n> \n> \n> If the goal is to add a cross-reference I suggest keeping it short,\n> something like \"For additional details on various corner cases please\n> see ...\".\n> \n> That does work, and I may end up there, but it feels unsatisfying to be so\n> vague/general.\n\nI was not happy with putting this in the Transaction Isolation section.\nI rewrote it and put it in the INSERT secion, right before ON CONFLICT; \npatch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Fri, 8 Jul 2022 21:11:43 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Bring mention of unique index forced transaction wait\n behavior outside of the internal section"
},
{
"msg_contents": "Hi Bruce,\n\n> I was not happy with putting this in the Transaction Isolation section.\n> I rewrote it and put it in the INSERT secion, right before ON CONFLICT;\n> patch attached.\n\nLooks good.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 11 Jul 2022 17:22:41 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: Bring mention of unique index forced transaction wait\n behavior outside of the internal section"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 05:22:41PM +0300, Aleksander Alekseev wrote:\n> Hi Bruce,\n> \n> > I was not happy with putting this in the Transaction Isolation section.\n> > I rewrote it and put it in the INSERT secion, right before ON CONFLICT;\n> > patch attached.\n> \n> Looks good.\n\nApplied to all supported PG versions.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 14 Jul 2022 15:18:11 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: Bring mention of unique index forced transaction wait\n behavior outside of the internal section"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 12:18 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Mon, Jul 11, 2022 at 05:22:41PM +0300, Aleksander Alekseev wrote:\n> > Hi Bruce,\n> >\n> > > I was not happy with putting this in the Transaction Isolation section.\n> > > I rewrote it and put it in the INSERT secion, right before ON CONFLICT;\n> > > patch attached.\n> >\n> > Looks good.\n>\n> Applied to all supported PG versions.\n>\n>\nSorry for the delayed response on this but I'm not a fan. A comment of\nsome form in transaction isolation seems to make sense (even if not my\noriginal thought...that patch got messed up a bit anyhow), and while having\nsomething in INSERT makes sense this doesn't seem precise enough.\n\nComments about locking and modifying rows doesn't make sense (the issue\nisn't relegated to ON CONFLICT, simple inserts will wait if they happen to\nchoose the same key to insert).\n\nI would also phrase it as simply \"Tables with a unique index will...\" and\nnot even mention tables that lack a unique index - those don't really exist\nand inference of their behavior by contrast seems sufficient.\n\nSticking close to what you proposed then:\n\nINSERT into tables with a unique index might block when concurrent sessions\nare inserting conflicting rows (i.e., have identical values for the unique\nindex columns) or when there already exists a conflicting row which is in\nthe process of being deleted. Details are covered in <xref\nlinkend=\"index-unique-checks\"/>.\n\nI can modify my original patch to be shorter and more on-point for\ninclusion in the MVCC chapter if there is interest in having a pointer from\nthere to index-unique-checks as well. I think such a note regarding\nconcurrency on an index naturally fits into one of the main pages for\nlearning about concurrency in PostgreSQL.\n\nDavid J.",
"msg_date": "Fri, 15 Jul 2022 13:42:12 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: Bring mention of unique index forced transaction wait\n behavior outside of the internal section"
}
] |
[
{
"msg_contents": "Hi,\n\nPer discussion here:\n\nhttps://www.postgresql.org/message-id/163636931138.8076.5140809232053731248%40wrigleys.postgresql.org\n\nWe can now easily document the array_length behavior of returning null\ninstead of zero for an empty array/dimension.\n\nI added an example to the json_array_length function to demonstrate that it\ndoes return 0 as one would expect, but contrary to the SQL array behavior.\n\nI did not bother to add examples to the other half dozen or so \"_length\"\nfunctions that all produce 0 as expected. Just the surprising case and the\nadjacent one.\n\nDavid J.",
"msg_date": "Thu, 9 Jun 2022 17:30:27 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "doc: array_length produces null instead of 0"
},
{
"msg_contents": "Hi David,\n\n> Per discussion here:\n>\n> https://www.postgresql.org/message-id/163636931138.8076.5140809232053731248%40wrigleys.postgresql.org\n>\n> We can now easily document the array_length behavior of returning null instead of zero for an empty array/dimension.\n>\n> I added an example to the json_array_length function to demonstrate that it does return 0 as one would expect, but contrary to the SQL array behavior.\n>\n> I did not bother to add examples to the other half dozen or so \"_length\" functions that all produce 0 as expected. Just the surprising case and the adjacent one.\n\nGood catch.\n\n+ <literal>array_length(array[], 1)</literal>\n+ <returnvalue>NULL</returnvalue>\n\nOne tiny nitpick I have is that this example will not work if used\nliterally, as is:\n\n```\n=# select array_length(array[], 1);\nERROR: cannot determine type of empty array\nLINE 1: select array_length(array[], 1);\n```\n\nMaybe it's worth using `array_length(array[] :: int[], 1)` instead.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 21 Jun 2022 16:33:03 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: array_length produces null instead of 0"
},
{
"msg_contents": "On Tue, Jun 21, 2022 at 6:33 AM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi David,\n>\n> > Per discussion here:\n> >\n> >\n> https://www.postgresql.org/message-id/163636931138.8076.5140809232053731248%40wrigleys.postgresql.org\n> >\n> > We can now easily document the array_length behavior of returning null\n> instead of zero for an empty array/dimension.\n> >\n> > I added an example to the json_array_length function to demonstrate that\n> it does return 0 as one would expect, but contrary to the SQL array\n> behavior.\n> >\n> > I did not bother to add examples to the other half dozen or so \"_length\"\n> functions that all produce 0 as expected. Just the surprising case and the\n> adjacent one.\n>\n> Good catch.\n>\n> + <literal>array_length(array[], 1)</literal>\n> + <returnvalue>NULL</returnvalue>\n>\n> One tiny nitpick I have is that this example will not work if used\n> literally, as is:\n>\n> ```\n> =# select array_length(array[], 1);\n> ERROR: cannot determine type of empty array\n> LINE 1: select array_length(array[], 1);\n> ```\n>\n> Maybe it's worth using `array_length(array[] :: int[], 1)` instead.\n>\n>\nI think subconsciously the cast looked ugly to me so I probably skipped\nadding it. I do agree the example should be executable though, and most of\nthe existing examples use integer[] (not the abbreviated form, int) so I'll\nplan to go with that.\n\nThanks for the review!\n\nDavid J.",
"msg_date": "Tue, 21 Jun 2022 09:02:41 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: array_length produces null instead of 0"
},
{
"msg_contents": "On Tue, Jun 21, 2022 at 09:02:41AM -0700, David G. Johnston wrote:\n> On Tue, Jun 21, 2022 at 6:33 AM Aleksander Alekseev <aleksander@timescale.com>\n> Maybe it's worth using `array_length(array[] :: int[], 1)` instead.\n> \n> I think subconsciously the cast looked ugly to me so I probably skipped adding\n> it. I do agree the example should be executable though, and most of the\n> existing examples use integer[] (not the abbreviated form, int) so I'll plan to\n> go with that.\n\nPatch applied through PG 13, with adjustments suggested above. Our doc\nformatting for pre-PG 13 was too different for me to risk backpatching\nfurther back.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 8 Jul 2022 20:24:52 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: array_length produces null instead of 0"
},
{
"msg_contents": "Hi,\n\nWhile compiling the PostgreSQL I have found that memset_s function\nrequires a define \"__STDC_WANT_LIB_EXT1__\"\n\nexplicit_bzero.c: In function ‘explicit_bzero’:\n\nexplicit_bzero.c:23:9: warning: implicit declaration of function ‘memset_s’; did you mean ‘memset’? [-Wimplicit-function-declaration]\n\n (void) memset_s(buf, len, 0, len);\n\n ^~~~~~~~\n\nAttached is the patch to define that in the case of Solaris.\n\n\n-- \nIbrar Ahmed",
"msg_date": "Sat, 9 Jul 2022 06:27:04 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Compilation issue on Solaris."
},
{
"msg_contents": "Ibrar Ahmed <ibrar.ahmad@gmail.com> writes:\n> While compiling the PostgreSQL I have found that *memset_s function\n> requires a define \"*__STDC_WANT_LIB_EXT1__*\" *\n> *explicit_bzero.c:* In function ‘*explicit_bzero*’:\n> *explicit_bzero.c:23:9:* *warning: *implicit declaration of function ‘\n> *memset_s*’; did you mean ‘*memset*’? [*-Wimplicit-function-declaration*]\n\nHmm.\n\n> Attached is the patch to define that in the case of Solaris.\n\nIf you don't have any test you want to make before adding the\n#define, I don't think this is idiomatic use of autoconf.\nPersonally I'd have just added \"-D__STDC_WANT_LIB_EXT1__\" into\nthe CPPFLAGS for Solaris, perhaps in src/template/solaris,\nor maybe just adjust the stanza immediately above this one:\n\nif test \"$PORTNAME\" = \"solaris\"; then\n CPPFLAGS=\"$CPPFLAGS -D_POSIX_PTHREAD_SEMANTICS\"\nfi\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 08 Jul 2022 21:46:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Compilation issue on Solaris."
},
{
"msg_contents": "On Sat, Jul 9, 2022 at 6:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ibrar Ahmed <ibrar.ahmad@gmail.com> writes:\n> > While compiling the PostgreSQL I have found that *memset_s function\n> > requires a define \"*__STDC_WANT_LIB_EXT1__*\" *\n> > *explicit_bzero.c:* In function ‘*explicit_bzero*’:\n> > *explicit_bzero.c:23:9:* *warning: *implicit declaration of function ‘\n> > *memset_s*’; did you mean ‘*memset*’? [*-Wimplicit-function-declaration*]\n>\n> Hmm.\n>\n> > Attached is the patch to define that in the case of Solaris.\n>\n> If you don't have any test you want to make before adding the\n> #define, I don't think this is idiomatic use of autoconf.\n> Personally I'd have just added \"-D__STDC_WANT_LIB_EXT1__\" into\n> the CPPFLAGS for Solaris, perhaps in src/template/solaris,\n> or maybe just adjust the stanza immediately above this one:\n>\n> if test \"$PORTNAME\" = \"solaris\"; then\n> CPPFLAGS=\"$CPPFLAGS -D_POSIX_PTHREAD_SEMANTICS\"\n> fi\n>\n> regards, tom lane\n>\n\nThanks for looking at that, yes you are right, the attached patch do that\nnow\n\n if test \"$PORTNAME\" = \"solaris\"; then\n\n CPPFLAGS=\"$CPPFLAGS -D_POSIX_PTHREAD_SEMANTICS\"\n\n+ CPPFLAGS=\"$CPPFLAGS -D__STDC_WANT_LIB_EXT1__\"\n\n fi\n\n-- \nIbrar Ahmed",
"msg_date": "Sat, 9 Jul 2022 07:01:35 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Compilation issue on Solaris."
},
{
"msg_contents": "On Sat, Jul 9, 2022 at 2:02 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> Thanks for looking at that, yes you are right, the attached patch do that now\n>\n> if test \"$PORTNAME\" = \"solaris\"; then\n>\n> CPPFLAGS=\"$CPPFLAGS -D_POSIX_PTHREAD_SEMANTICS\"\n>\n> + CPPFLAGS=\"$CPPFLAGS -D__STDC_WANT_LIB_EXT1__\"\n>\n> fi\n\nHmm. K.3.3.1 of [1] says you can show or hide all that _s stuff by\ndefining that macro to 0 or 1 before you include <string.h>, but it's\nimplementation-defined whether they are exposed by default, and the\ntemplate file is one way to deal with that\nimplementation-definedness... it's not quite in the autoconf spirit\nthough, it's kinda manual. Another approach would be to define it\nunconditionally at the top of explicit_bzero.c before including \"c.h\",\non all platforms. The man page on my system tells me I should do that\nanyway, even though you don't need to on my system.\n\nWhy is your Solaris system trying to compile that file in the first\nplace? A quick check of the Solaris and Illumos build farm animals\nand some online man pages tells me they have explicit_bzero().\nAhhh... looks like it came a few years ago in some Solaris 11.4\nupdate[2], and Illumos (which forked around 10) probably added it\nindependently (why do Solaris man pages not have a history section to\ntell us these things?!). I guess you must be running an old version.\nOK then.\n\n[1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1548.pdf\n[2] https://blogs.oracle.com/solaris/post/expanding-the-library\n\n\n",
"msg_date": "Sat, 9 Jul 2022 17:27:31 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Compilation issue on Solaris."
},
{
"msg_contents": "On Sat, Jul 9, 2022 at 10:28 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Sat, Jul 9, 2022 at 2:02 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> > Thanks for looking at that, yes you are right, the attached patch do\n> that now\n> >\n> > if test \"$PORTNAME\" = \"solaris\"; then\n> >\n> > CPPFLAGS=\"$CPPFLAGS -D_POSIX_PTHREAD_SEMANTICS\"\n> >\n> > + CPPFLAGS=\"$CPPFLAGS -D__STDC_WANT_LIB_EXT1__\"\n> >\n> > fi\n>\n> Hmm. K.3.3.1 of [1] says you can show or hide all that _s stuff by\n> defining that macro to 0 or 1 before you include <string.h>, but it's\n> implementation-defined whether they are exposed by default, and the\n> template file is one way to deal with that\n> implementation-definedness... it's not quite in the autoconf spirit\n> though, it's kinda manual. Another approach would be to define it\n> unconditionally at the top of explicit_bzero.c before including \"c.h\",\n> on all platforms. The man page on my system tells me I should do that\n> anyway, even though you don't need to on my system.\n>\n> Why is your Solaris system trying to compile that file in the first\n> place? A quick check of the Solaris and Illumos build farm animals\n> and some online man pages tells me they have explicit_bzero().\n> Ahhh... looks like it came a few years ago in some Solaris 11.4\n> update[2], and Illumos (which forked around 10) probably added it\n> independently (why do Solaris man pages not have a history section to\n> tell us these things?!). I guess you must be running an old version.\n> OK then.\n>\n> [1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1548.pdf\n> [2] https://blogs.oracle.com/solaris/post/expanding-the-library\n>\nI am using \"SunOS solaris-vagrant 5.11 11.4.0.15.0 i86pc i386 i86pc\", I gave\nanother thought and Tom is right src/template/solaris is a better place to\nadd that.\n\n\n\n-- \nIbrar Ahmed",
"msg_date": "Sat, 9 Jul 2022 22:47:15 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Compilation issue on Solaris."
},
{
"msg_contents": "On Sun, Jul 10, 2022 at 5:47 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> I am using \"SunOS solaris-vagrant 5.11 11.4.0.15.0 i86pc i386 i86pc\",\n\nHah. So your vagrant image must be from a fairly narrow range of time\nwhen Solaris 11.4 came out with memset_s but didn't yet have\nexplicit_bzero. That arrived in SRU12 in 2019, which came out before\nwe started using the function. Real Solaris systems would have\nabsorbed that via \"pkg update\", explaining why no one ever noticed\nthis problem.\n\n> I gave\n> another thought and Tom is right src/template/solaris is a better place to add that.\n\nSomething bothers me about adding yet more clutter to every compile\nline for the rest of time to solve a problem that exists only for\nunpatched systems, and also that it's not even really a Solaris thing,\nit's a C11 thing. But I'm not going to object. At least it's\nrecorded in the archives that it's an obvious candidate to be removed\nagain in a few years... I was mostly interested in understanding WHY\nwe suddenly need this...\n\n\n",
"msg_date": "Sun, 10 Jul 2022 09:47:56 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Compilation issue on Solaris."
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Something bothers me about adding yet more clutter to every compile\n> line for the rest of time to solve a problem that exists only for\n> unpatched systems, and also that it's not even really a Solaris thing,\n> it's a C11 thing.\n\nI tend to agree with this standpoint: if it's only a warning, and\nit only appears in a small range of not-up-to-date Solaris builds,\nthen a reasonable approach is \"update your system if you don't want\nto see the warning\".\n\nA positive argument for doing nothing is that there's room to worry\nwhether -D__STDC_WANT_LIB_EXT1__ might have any side-effects we\n*don't* want.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 10 Jul 2022 10:27:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Compilation issue on Solaris."
},
{
"msg_contents": "On Sun, Jul 10, 2022 at 9:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Something bothers me about adding yet more clutter to every compile\n> > line for the rest of time to solve a problem that exists only for\n> > unpatched systems, and also that it's not even really a Solaris thing,\n> > it's a C11 thing.\n>\n> I tend to agree with this standpoint: if it's only a warning, and\n> it only appears in a small range of not-up-to-date Solaris builds,\n> then a reasonable approach is \"update your system if you don't want\n> to see the warning\".\n>\n> A positive argument for doing nothing is that there's room to worry\n> whether -D__STDC_WANT_LIB_EXT1__ might have any side-effects we\n> *don't* want.\n\nThis is still listed in the CF as needing review, so I went and marked\nit rejected.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Sep 2022 11:24:25 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Compilation issue on Solaris."
},
{
"msg_contents": "On Tue, Sep 6, 2022 at 9:24 AM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n> On Sun, Jul 10, 2022 at 9:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > Something bothers me about adding yet more clutter to every compile\n> > > line for the rest of time to solve a problem that exists only for\n> > > unpatched systems, and also that it's not even really a Solaris thing,\n> > > it's a C11 thing.\n> >\n> > I tend to agree with this standpoint: if it's only a warning, and\n> > it only appears in a small range of not-up-to-date Solaris builds,\n> > then a reasonable approach is \"update your system if you don't want\n> > to see the warning\".\n> >\n> > A positive argument for doing nothing is that there's room to worry\n> > whether -D__STDC_WANT_LIB_EXT1__ might have any side-effects we\n> > *don't* want.\n>\n> This is still listed in the CF as needing review, so I went and marked\n> it rejected.\n>\n> +1, Thanks\n\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n>\n\n\n-- \nIbrar Ahmed",
"msg_date": "Tue, 6 Sep 2022 11:18:35 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Compilation issue on Solaris."
}
] |
[
{
"msg_contents": "Load balancing connections across multiple read replicas is a pretty\ncommon way of scaling out read queries. There are two main ways of doing\nso, both with their own advantages and disadvantages:\n1. Load balancing at the client level\n2. Load balancing by connecting to an intermediary load balancer\n\nOption 1 has been supported by JDBC (Java) for 8 years and Npgsql (C#)\nmerged support about a year ago. This patch adds the same functionality\nto libpq. The way it's implemented is the same as the implementation of\nJDBC, and contains two levels of load balancing:\n1. The given hosts are randomly shuffled, before resolving them\n   one-by-one.\n2. Once a host's addresses get resolved, those addresses are shuffled,\n   before trying to connect to them one-by-one.",
"msg_date": "Fri, 10 Jun 2022 16:31:26 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Support load balancing in libpq"
},
{
"msg_contents": "Hi Jelte,\n\n> Load balancing connections across multiple read replicas is a pretty\n> common way of scaling out read queries. There are two main ways of doing\n> so, both with their own advantages and disadvantages:\n> 1. Load balancing at the client level\n> 2. Load balancing by connecting to an intermediary load balancer\n>\n> Option 1 has been supported by JDBC (Java) for 8 years and Npgsql (C#)\n> merged support about a year ago. This patch adds the same functionality\n> to libpq. The way it's implemented is the same as the implementation of\n> JDBC, and contains two levels of load balancing:\n> 1. The given hosts are randomly shuffled, before resolving them\n> one-by-one.\n> 2. Once a host its addresses get resolved, those addresses are shuffled,\n> before trying to connect to them one-by-one.\n\nThanks for the patch.\n\nI don't mind the feature but I believe the name is misleading. Unless\nI missed something, the patch merely allows choosing a random host\nfrom the provided list. By load balancing people generally expect\nsomething more elaborate - e.g. sending read-only queries to replicas\nand read/write queries to the primary, or perhaps using weights\nproportional to the server throughput/performance.\n\nRandomization would be a better term for what the patch does.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 21 Jun 2022 16:22:03 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Support load balancing in libpq"
},
{
"msg_contents": "I tried to stay in line with the naming of this same option in JDBC and\nNpgsql, where it's called \"loadBalanceHosts\" and \"Load Balance Hosts\"\nrespectively. So, actually to be more in line, the option for \nlibpq should be called \"load_balance_hosts\" (not \"loadbalance\" like \nin the previous patch). I attached a new patch with the name of the \noption changed to this.\n\nI also don't think the name is misleading. Randomization of hosts will \nautomatically result in balancing the load across multiple hosts. This is \nassuming more than a single connection is made using the connection \nstring, either on the same client node or on different client nodes. I think\nthat is a fair assumption to make. Also note that this patch does not load \nbalance queries, it load balances connections. This is because libpq works\nat the connection level, not query level, due to session level state. \n\nI agree it is indeed fairly simplistic load balancing. But many dedicated load \nbalancers often use simplistic load balancing too. Round-robin, random and\nhash+modulo based load balancing are all very commonly used load balancer\nstrategies. Using this patch you should even be able to implement the \nweighted load balancing that you suggest, by supplying the same host + port \npair multiple times in the list of hosts. \n\nMy preference would be to use load_balance_hosts for the option name.\nHowever, if the name of the option becomes the main point of contention\nI would be fine with changing the option to \"randomize_hosts\". I think \nin the end it comes down to what we want the name of the option to reflect:\n1. load_balance_hosts reflects what you (want to) achieve by enabling it\n2. randomize_hosts reflects how it is achieved\n\n\nJelte",
"msg_date": "Wed, 22 Jun 2022 07:54:19 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Support load balancing in libpq"
},
{
"msg_contents": "On Fri, Jun 10, 2022 at 10:01 PM Jelte Fennema\n<Jelte.Fennema@microsoft.com> wrote:\n>\n> Load balancing connections across multiple read replicas is a pretty\n> common way of scaling out read queries. There are two main ways of doing\n> so, both with their own advantages and disadvantages:\n> 1. Load balancing at the client level\n> 2. Load balancing by connecting to an intermediary load balancer\n>\n> Option 1 has been supported by JDBC (Java) for 8 years and Npgsql (C#)\n> merged support about a year ago. This patch adds the same functionality\n> to libpq. The way it's implemented is the same as the implementation of\n> JDBC, and contains two levels of load balancing:\n> 1. The given hosts are randomly shuffled, before resolving them\n> one-by-one.\n> 2. Once a host its addresses get resolved, those addresses are shuffled,\n> before trying to connect to them one-by-one.\n\nThanks for the patch. +1 for the general idea of redirecting connections.\n\nI'm quoting a previous attempt by Satyanarayana Narlapuram on this\ntopic [1], it also has a patch set.\n\nIMO, rebalancing of the load must be based on parameters (as also\nsuggested by Aleksander Alekseev in this thread) such as the\nread-only/write queries, CPU/IO/Memory utilization of the\nprimary/standby, network distance etc. We may not have to go the extra\nmile to determine all of these parameters dynamically during query\nauthentication time, but we can let users provide a list of standby\nhosts based on \"some\" priority (Satya's thread [1] attempts to do\nthis, in a way, with users specifying the hosts via pg_hba.conf file).\nIf required, randomization in choosing the hosts can be optional.\n\nAlso, IMO, the solution must have a fallback mechanism if the\nstandby/chosen host isn't reachable.\n\nFew thoughts on the patch:\n1) How are we determining if the submitted query is read-only or write?\n2) What happens for explicit transactions? The queries related to the\nsame txn get executed on the same host right? How are we guaranteeing\nthis?\n3) Isn't it good to provide a way to test the patch?\n\n[1] https://www.postgresql.org/message-id/flat/CY1PR21MB00246DE1F9E9C58455A78A37915C0%40CY1PR21MB0024.namprd21.prod.outlook.com\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Tue, 5 Jul 2022 18:12:14 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support load balancing in libpq"
},
{
"msg_contents": "> I'm quoting a previous attempt by Satyanarayana Narlapuram on this\n> topic [1], it also has a patch set.\n\nThanks for sharing that. It's indeed a different approach to solve the\nsame problem. I think my approach is much simpler, since it only \nrequires minimal changes to the libpq client and none to the postgres \nserver or the postgres protocol.\n\nHowever, that linked patch is more flexible due to allowing redirection\nbased on users and databases. With my patch something similar could\nstill be achieved by using different hostname lists for different databases\nor users at the client side.\n\nTo be completely clear on the core difference between the patches IMO:\nIn this patch a DNS server or (a hardcoded hostname/IP list at the client\nside) is used to determine what host to connect to. In the linked\npatch instead the Postgres server starts being the source of truth of what\nto connect to, thus essentially becoming something similar to a DNS server.\n\n> We may not have to go the extra\n> mile to determine all of these parameters dynamically during query\n> authentication time, but we can let users provide a list of standby\n> hosts based on \"some\" priority (Satya's thread [1] attempts to do\n> this, in a way, with users specifying the hosts via pg_hba.conf file).\n> If required, randomization in choosing the hosts can be optional.\n\nI'm not sure if you read my response to Aleksander. I feel like I\naddressed part of this at least. But maybe I was not clear enough, \nor added too much fluff. So, I'll re-iterate the important part:\nBy specifying the same host multiple times in the DNS response or\nthe hostname/IP list you can achieve weighted load balancing.\n\nFew thoughts on the patch:\n> 1) How are we determining if the submitted query is read-only or write?\n\nThis is not part of this patch. libpq and thus this patch works at the connection \nlevel, not at the query level, so determining a read-only query or write only query\nis not possible without large changes.\n\nHowever, libpq already has a target_session_attrs[1] connection option. This can be \nused to open connections specifically to read-only or writable servers. However,\nonce a read-only connection is opened it is then the responsibility of the client \nnot to send write queries over this read-only connection, otherwise they will fail.\n\n> 2) What happens for explicit transactions? The queries related to the\n> same txn get executed on the same host right? How are we guaranteeing\n> this?\n\nWe're load balancing connections, not queries. Once a connection is made\nall queries on that connection will be executed on the same host. \n\n> 3) Isn't it good to provide a way to test the patch?\n\nThe way I tested it myself was by setting up a few databases on my local machine\nlistening on 127.0.0.1, 127.0.0.2, 127.0.0.3 and then putting all those in the connection \nstring. Then looking at the connection attempts on the servers their logs showed that\nthe client was indeed connecting to a random one (by using log_connections=true \nin postgresql.conf).\n\nI would definitely like to have some automated tests for this, but I couldn't find tests\nfor libpq that were connecting to multiple postgres servers. If they exist, any pointers\nare appreciated. If they don't exist, pointers to similar tests are also appreciated.\n\n[1]: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-TARGET-SESSION-ATTRS\n\n",
"msg_date": "Tue, 5 Jul 2022 15:23:04 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
{
"msg_contents": "Dear Jelte,\r\n\r\nI like your idea. But do we have to sort randomly even if target_session_attr is set to 'primary' or 'read-write'?\r\n\r\nI think this parameter can be used when all listed servers have same data,\r\nand we can assume that one of members is a primary and others are secondary.\r\n\r\nIn this case user maybe add a primary host to top of the list,\r\nso sorting may increase time durations for establishing connection.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 15 Jul 2022 04:56:45 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Support load balancing in libpq"
},
{
"msg_contents": "> we can assume that one of members is a primary and others are secondary.\n\nWith plain Postgres this assumption is probably correct. But the main reason\nI'm interested in this patch was because I would like to be able to load\nbalance across the workers in a Citus cluster. These workers are all primaries.\nSimilar usage would likely be possible with BDR (bidirectional replication).\n\n> In this case user maybe add a primary host to top of the list,\n> so sorting may increase time durations for establishing connection.\n\nIf the user takes such care when building their host list, they could simply\nnot add the load_balance_hosts=true option to the connection string.\nIf you know there's only one primary in the list and you're looking for\nthe primary, then there's no reason to use load_balance_hosts=true.",
"msg_date": "Fri, 15 Jul 2022 14:59:55 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Support load balancing in libpq"
},
{
"msg_contents": "Dear Jelte,\n\n> With plain Postgres this assumption is probably correct. But the main reason\n> I'm interested in this patch was because I would like to be able to load\n> balance across the workers in a Citus cluster. These workers are all primaries.\n> Similar usage would likely be possible with BDR (bidirectional replication).\n\nI agree this feature is useful for BDR-like solutions.\n\n> If the user takes such care when building their host list, they could simply \n> not add the load_balance_hosts=true option to the connection string.\n> If you know there's only one primary in the list and you're looking for\n> the primary, then there's no reason to use load_balance_hosts=true.\n\nYou meant that it was the user responsibility to set correctly, right?\nIt seemed reasonable because libpq was just a library for connecting to server\nand should not be smart.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Thu, 28 Jul 2022 02:50:15 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Support load balancing in libpq"
},
{
"msg_contents": "Hi,\n\nthe patch no longer applies cleanly, please rebase (it's trivial).\n\nI don't like the provided commit message very much, I think the\ndiscussion about pgJDBC having had load balancing for a while belongs\nelsewhere.\n\nOn Wed, Jun 22, 2022 at 07:54:19AM +0000, Jelte Fennema wrote:\n> I tried to stay in line with the naming of this same option in JDBC and\n> Npgsql, where it's called \"loadBalanceHosts\" and \"Load Balance Hosts\"\n> respectively. So, actually to be more in line it should be the option for \n> libpq should be called \"load_balance_hosts\" (not \"loadbalance\" like \n> in the previous patch). I attached a new patch with the name of the \n> option changed to this.\n\nMaybe my imagination is not so great, but what else than hosts could we\npossibly load-balance? I don't mind calling it load_balance, but I also\ndon't feel very strongly one way or the other and this is clearly\nbikeshed territory.\n \n> I also don't think the name is misleading. Randomization of hosts will \n> automatically result in balancing the load across multiple hosts. This is \n> assuming more than a single connection is made using the connection \n> string, either on the same client node or on different client nodes. I think\n> I think is a fair assumption to make. Also note that this patch does not load \n> balance queries, it load balances connections. This is because libpq works\n> at the connection level, not query level, due to session level state. \n\nI agree.\n\nAlso, I think the scope is ok for a first implementation (but see\nbelow). You or others could possibly further enhance the algorithm in\nthe future, but it seems to be useful as-is.\n\n> I agree it is indeed fairly simplistic load balancing.\n\nIf I understand correctly, you've added DNS-based load balancing on top\nof just shuffling the provided hostnames. This makes sense if a\nhostname is backed by more than one IP address in the context of load\nbalancing, but it also complicates the patch. So I'm wondering how much\nshorter the patch would be if you leave that out for now?\n\nOn the other hand, I believe pgJDBC keeps track of which hosts are up or\ndown and only load balances among the ones which are up (maybe\nrechecking after a timeout? I don't remember), is this something you're\ndoing, or did you consider it?\n\nSome quick remarks on the patch:\n\n /* OK, scan this addrlist for a working server address */\n- conn->addr_cur = conn->addrlist;\n reset_connection_state_machine = true;\n conn->try_next_host = false;\n\nThe comment might need to be updated.\n\n+ int naddr; /* # of addrs returned by getaddrinfo */\n\nThis is spelt \"number of\" in several other places in the file, and we\nstill have enough space to spell it out here as well without a\nline-wrap.\n\n\nMichael\n\n\n",
"msg_date": "Sat, 10 Sep 2022 23:43:11 +0200",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Support load balancing in libpq"
},
{
"msg_contents": "Attached is an updated patch with the following changes:\n1. rebased (including solved merge conflict)\n2. fixed failing tests in CI\n3. changed the commit message a little bit\n4. addressed the two remarks from Michael\n5. changed the prng_state from a global to a connection level value for thread-safety\n6. use pg_prng_uint64_range\n\n> Maybe my imagination is not so great, but what else than hosts could we\n> possibly load-balance? I don't mind calling it load_balance, but I also\n> don't feel very strongly one way or the other and this is clearly\n> bikeshed territory.\n\nI agree, which is why I called it load_balance in my original patch. But I also\nthink it's useful to match the naming for the already existing implementations \nin the PG ecosystem around this. But like you I don't really feel strongly either\nway. It's a tradeoff between short name and consistency in the ecosystem.\n\n> If I understand correctly, you've added DNS-based load balancing on top\n> of just shuffling the provided hostnames. This makes sense if a\n> hostname is backed by more than one IP address in the context of load\n> balancing, but it also complicates the patch. So I'm wondering how much\n> shorter the patch would be if you leave that out for now?\n\nYes, that's correct and indeed the patch would be simpler without, i.e. all the\naddrinfo changes would become unnecessary. But IMHO the behaviour of \nthe added option would be very unexpected if it didn't load balance across\nmultiple IPs in a DNS record. libpq currently makes no real distinction in \nhandling of provided hosts and handling of their resolved IPs. If load balancing\nwould only apply to the host list that would start making a distinction\nbetween the two.\n\nApart from that the load balancing across IPs is one of the main reasons\nfor my interest in this patch. The reason is that it allows expanding or reducing\nthe number of nodes that are being load balanced across transparently to the\napplication. Which means that there's no need to re-deploy applications with \nnew connection strings when changing the number of hosts.\n\n> On the other hand, I believe pgJDBC keeps track of which hosts are up or\n> down and only load balances among the ones which are up (maybe\n> rechecking after a timeout? I don't remember), is this something you're\n> doing, or did you consider it?\n\nI don't think it's possible to do this in libpq without huge changes to its\narchitecture, since normally a PGconn will only\ncreate a single connection. The reason pgJDBC can do this is because\nit's actually a connection pooler, so it will open more than one connection \nand can thus keep some global state about the different hosts.\n\nJelte",
"msg_date": "Mon, 12 Sep 2022 14:16:56 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
{
"msg_contents": "+1 for overall idea of load balancing via random host selection.\n\nFor the patch itself, I think it is better to use a more precise time\nfunction in libpq_prng_init or call it only once.\nThough it is a special corner case, imagine all the connection attempts at\nfirst second will be seeded with the same\nvalue, i.e. will attempt to connect to the same host. I think, this is not what\nwe want to achieve.\n\nAnd the \"hostroder\" option should be free'd in freePGconn.\n\n> Also, IMO, the solution must have a fallback mechanism if the\n> standby/chosen host isn't reachable.\n\nYeah, I think it should. I'm not insisting on a particular name of options\nhere, but in my view, the overall idea may be next:\n- we have two libpq's options: \"load_balance_hosts\" and \"failover_timeout\";\n- the \"load_balance_hosts\" should be \"sequential\" or \"random\";\n- the \"failover_timeout\" is a time period, within which, if connection to\nthe server is not established, we switch to the next address or host.\n\nWhile writing this text, I start thinking that load balancing is a\ncombination of two parameters above.\n\n> 3) Isn't it good to provide a way to test the patch?\n\nGood idea too. I think, we should add tap test here.\n\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Wed, 14 Sep 2022 17:53:48 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
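Maxim's concern above — that a seed taken from a one-second-resolution clock makes every connection attempt started in the same second shuffle the host list identically — is what the patch's fallback seeding path later addresses by mixing the PID into a shifted timestamp. A minimal sketch of that bit-mixing (the helper name is hypothetical; the real logic lives in `libpq_prng_init`):

```c
#include <assert.h>
#include <stdint.h>

/*
 * A naive seed of plain time(NULL) is identical for every connection
 * started within the same second. The patch's fallback instead shifts
 * the timestamp left, so more distinct seeds fit into a given period,
 * and XORs in the PID so two processes in the same second still get
 * different seeds.
 */
static uint64_t
mixed_seed(uint64_t pid, uint64_t now)
{
	return pid ^ (now << 12) ^ (now >> 20);
}
```

Two connections in the same second but in different processes now diverge, and the same process diverges across seconds, which is the property the thread is after.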
{
"msg_contents": "Hi,\n\nOn Mon, Sep 12, 2022 at 02:16:56PM +0000, Jelte Fennema wrote:\n> Attached is an updated patch with the following changes:\n> 1. rebased (including solved merge conflict)\n> 2. fixed failing tests in CI\n> 3. changed the commit message a little bit\n> 4. addressed the two remarks from Micheal\n> 5. changed the prng_state from a global to a connection level value for thread-safety\n> 6. use pg_prng_uint64_range\n\nThanks!\n \nI tested this some more, and found it somewhat surprising that at least\nwhen looking at it on a microscopic level, some hosts are chosen more\noften than the others for a while.\n\nI basically ran \n\nwhile true; do psql -At \"host=pg1,pg2,pg3 load_balance_hosts=1\" -c\n\"SELECT inet_server_addr()\"; sleep 1; done\n\nand the initial output was:\n\n10.0.3.109\n10.0.3.109\n10.0.3.240\n10.0.3.109\n10.0.3.109\n10.0.3.240\n10.0.3.109\n10.0.3.240\n10.0.3.240\n10.0.3.240\n10.0.3.240\n10.0.3.109\n10.0.3.240\n10.0.3.109\n10.0.3.109\n10.0.3.240\n10.0.3.240\n10.0.3.109\n10.0.3.60\n\nI.e. the second host (pg2/10.0.3.60) was only hit after 19 iterations.\n\nOnce significantly more than a hundred iterations are run, the hosts\nsomewhat even out, but it is maybe surprising to users:\n\n 50 100 250 500 1000 10000\n10.0.3.60 9 24 77 165 328 3317\n10.0.3.109 25 42 88 178 353 3372\n10.0.3.240 16 34 85 157 319 3311\n\nOr maybe my test setup is skewed? 
When I choose a two seconds timeout\nbetween psql calls, I get a more even distribution initially, but it\nthen diverges after 100 iterations:\n\n 50 100 250 500 1000\n10.0.3.60 19 36 98 199 374 \n10.0.3.109 13 33 80 150 285 \n10.0.3.240 18 31 72 151 341 \n\nCould just be bad luck...\n\nI also switch one host to have two IP addresses in /etc/hosts:\n\n10.0.3.109 pg1\n10.0.3.60 pg1\n10.0.3.240 pg3\n\nAnd this resulted in this (one second timeout again):\n\nFirst run:\n\n 50 100 250 500 1000\n10.0.3.60 10 18 56 120 255 \n10.0.3.109 14 30 67 139 278 \n10.0.3.240 26 52 127 241 467 \n\nSecond run:\n\n 50 100 250 500 1000\n10.0.3.60 20 31 77 138 265 \n10.0.3.109 9 20 52 116 245 \n10.0.3.240 21 49 121 246 490 \n\nSo it looks like it load-balances between pg1 and pg3, and not between\nthe three IPs - is this expected?\n\nIf I switch from \"host=pg1,pg3\" to \"host=pg1,pg1,pg3\", each IP adress is\nhit roughly equally.\n\nSo I guess this is how it should work, but in that case I think the\ndocumentation should be more explicit about what is to be expected if a\nhost has multiple IP addresses or hosts are specified multiple times in\nthe connection string.\n\n> > Maybe my imagination is not so great, but what else than hosts could we\n> > possibly load-balance? I don't mind calling it load_balance, but I also\n> > don't feel very strongly one way or the other and this is clearly\n> > bikeshed territory.\n> \n> I agree, which is why I called it load_balance in my original patch.\n> But I also think it's useful to match the naming for the already\n> existing implementations in the PG ecosystem around this. \n> But like you I don't really feel strongly either way. It's a tradeoff\n> between short name and consistency in the ecosystem.\n\nI don't think consistency is an extremely valid concern. 
As a\ncounterpoint, pgJDBC had targetServerType some time before Postgres, and\nthe libpq parameter was then named somewhat differently when it was\nintroduced, namely target_session_attrs.\n\n> > If I understand correctly, you've added DNS-based load balancing on top\n> > of just shuffling the provided hostnames. This makes sense if a\n> > hostname is backed by more than one IP address in the context of load\n> > balancing, but it also complicates the patch. So I'm wondering how much\n> > shorter the patch would be if you leave that out for now?\n> \n> Yes, that's correct and indeed the patch would be simpler without, i.e. all the\n> addrinfo changes would become unnecessary. But IMHO the behaviour of \n> the added option would be very unexpected if it didn't load balance across\n> multiple IPs in a DNS record. libpq currently makes no real distinction in \n> handling of provided hosts and handling of their resolved IPs. If load balancing\n> would only apply to the host list that would start making a distinction\n> between the two.\n\nFair enough, I agree.\n \n> Apart from that the load balancing across IPs is one of the main reasons\n> for my interest in this patch. The reason is that it allows expanding or reducing\n> the number of nodes that are being load balanced across transparently to the\n> application. Which means that there's no need to re-deploy applications with \n> new connection strings when changing the number hosts.\n\nThat's a good point as well.\n \n> > On the other hand, I believe pgJDBC keeps track of which hosts are up or\n> > down and only load balances among the ones which are up (maybe\n> > rechecking after a timeout? I don't remember), is this something you're\n> > doing, or did you consider it?\n> \n> I don't think it's possible to do this in libpq without huge changes to its\n> architecture, since normally a PGconn will only\n> create a single connection. 
The reason pgJDBC can do this is because\n> it's actually a connection pooler, so it will open more than one connection \n> and can thus keep some global state about the different hosts.\n\nOk.\n\n\nMichael\n\n\n",
"msg_date": "Sat, 17 Sep 2022 18:57:39 +0200",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
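For context on the short-run skew reported above: simulating the Durstenfeld/Knuth shuffle used by the patch over many iterations shows per-host counts converging toward one third only as the sample grows, so small-sample unevenness is expected. This standalone sketch substitutes a toy xorshift generator for pg_prng (an assumption made purely for self-containment):

```c
#include <assert.h>
#include <stdint.h>

/* Toy xorshift64 generator standing in for pg_prng. */
static uint64_t toy_state = 88172645463325252ULL;

static uint64_t
toy_next(void)
{
	toy_state ^= toy_state << 13;
	toy_state ^= toy_state >> 7;
	toy_state ^= toy_state << 17;
	return toy_state;
}

/* Durstenfeld/Knuth variant of the Fisher-Yates shuffle, as in the patch. */
static void
shuffle_hosts(int *hosts, int nhosts)
{
	for (int i = nhosts - 1; i > 0; i--)
	{
		int			j = (int) (toy_next() % (uint64_t) (i + 1));
		int			tmp = hosts[j];

		hosts[j] = hosts[i];
		hosts[i] = tmp;
	}
}

/* Count how often each of three hosts ends up first in the shuffled list. */
static void
first_host_counts(int iterations, int counts[3])
{
	for (int n = 0; n < iterations; n++)
	{
		int			hosts[3] = {0, 1, 2};

		shuffle_hosts(hosts, 3);
		counts[hosts[0]]++;
	}
}
```

Over a few dozen draws the counts can easily look as lopsided as the tables above; over tens of thousands they settle near uniform.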
{
"msg_contents": "Hi,\n\nOn Wed, Sep 14, 2022 at 05:53:48PM +0300, Maxim Orlov wrote:\n> > Also, IMO, the solution must have a fallback mechanism if the\n> > standby/chosen host isn't reachable.\n> \n> Yeah, I think it should. I'm not insisting on a particular name of options\n> here, but in my view, the overall idea may be next:\n> - we have two libpq's options: \"load_balance_hosts\" and \"failover_timeout\";\n> - the \"load_balance_hosts\" should be \"sequential\" or \"random\";\n> - the \"failover_timeout\" is a time period, within which, if connection to\n> the server is not established, we switch to the next address or host.\n\nIsn't this exactly what connect_timeout is providing? In my tests, it\nworked exactly as I would expect it, i.e. after connect_timeout seconds,\nlibpq was re-shuffling and going for another host.\n\nIf you specify only one host (or all are down), you get an error.\n\nIn any case, I am not sure what failover has to do with it if we are\ntalking about initiating connections - usually failover is for already\nestablished connections that suddenly go away for one reason or\nanother.\n\nOr maybe I'm just not understanding what you're getting at?\n\n> While writing this text, I start thinking that load balancing is a\n> combination of two parameters above.\n\nSo I guess what you are saying is that if load_balance_hosts is set,\nnot setting connect_timeout would be a hazard, cause it would stall the\nconnection attempt even though other hosts would be available.\n\nThat's right, but I guess it's already a hazard if you put multiple\nhosts in there, and the connection is not immediately failed (because\nthe host doesn't exist or it rejects the connection) but stalled by a\nfirewall for one host, while other hosts later on in the list would be\nhappy to accept connections.\n\nSo maybe this is something to think about, but just changing the\ndefault of connect_timeout to something else when load balancing is on\nwould be very surprising. 
In any case, I don't think this absolutely\nneeds to be addressed by the initial feature, it could be expanded upon\nlater on if needed, the feature is useful on its own, along with\nconnect_timeout.\n\n\nMichael\n\n\n",
"msg_date": "Sat, 17 Sep 2022 20:43:31 +0200",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
{
"msg_contents": "I attached a new patch which does the following:\n1. adds tap tests\n2. adds random_seed parameter to libpq (required for tap tests)\n3. frees conn->loadbalance in freePGConn\n4. add more expansive docs on the feature its behaviour\n\nApart from bike shedding on the name of the option I think it's pretty good now.\n\n> Isn't this exactly what connect_timeout is providing? In my tests, it\n> worked exactly as I would expect it, i.e. after connect_timeout seconds,\n> libpq was re-shuffling and going for another host.\n\nYes, this was the main purpose of multiple hosts previously. This patch\ndoesn't change that, and it indeed continues to work when enabling\nload balancing too. I included this in the tap tests.\n\n> I tested this some more, and found it somewhat surprising that at least\n> when looking at it on a microscopic level, some hosts are chosen more\n> often than the others for a while.\n\nThat does seem surprising, but it looks like it might simply be bad luck.\nDid you compile with OpenSSL support? Otherwise, the strong random\nsource might not be used. \n\n> So it looks like it load-balances between pg1 and pg3, and not between\n> the three IPs - is this expected?\n>\n> If I switch from \"host=pg1,pg3\" to \"host=pg1,pg1,pg3\", each IP adress is\n> hit roughly equally.\n>\n> So I guess this is how it should work, but in that case I think the\n> documentation should be more explicit about what is to be expected if a\n> host has multiple IP addresses or hosts are specified multiple times in\n> the connection string.\n\nYes, this behaviour is expected I tried to make that clearer in the newest\nversion of the docs. \n\n> For the patch itself, I think it is better to use a more precise time\n> function in libpq_prng_init or call it only once.\n> Thought it is a special corner case, imagine all the connection attempts at\n> first second will be seeded with the save\n\nI agree that using microseconds would probably be preferable. 
But that seems\nlike a separate patch, since I took this initialization code from the InitProcessGlobals\nfunction. Also, it shouldn't be a big issue in practice, since usually the strong random \nsource will be used.",
"msg_date": "Mon, 3 Oct 2022 12:15:05 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Support load balancing in libpq"
},
{
"msg_contents": "Attached is a new version with the tests cleaned up a bit (more comments mostly).\n\n@Michael, did you have a chance to look at the last version? Because I feel that the \npatch is pretty much ready for a committer to look at, at this point.",
"msg_date": "Tue, 29 Nov 2022 14:57:08 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Support load balancing in libpq"
},
{
"msg_contents": "Hi,\n\nOn Tue, Nov 29, 2022 at 02:57:08PM +0000, Jelte Fennema wrote:\n> Attached is a new version with the tests cleaned up a bit (more\n> comments mostly).\n> \n> @Michael, did you have a chance to look at the last version? Because I\n> feel that the patch is pretty much ready for a committer to look at,\n> at this point.\n\nI had another look; it still applies and tests pass. I still find the\ndistribution over three hosts a bit skewed (even when OpenSSL is\nenabled, which wasn't the case when I first tested it), but couldn't\nfind a general fault and it worked well enough in my testing.\n\nI wonder whether making the parameter a boolean will paint us into a\ncorner, and whether maybe additional modes could be envisioned in the\nfuture, but I can't think of some right now (you can pretty neatly\nrestrict load-balancing to standbys by setting\ntarget_session_attrs=standby in addition). Maybe a way to change the\nbehaviour if a dns hostname is backed by multiple entries?\n\nSome further (mostly nitpicking) comments on the patch:\n\n> From 6e20bb223012b666161521b5e7249c066467a5f3 Mon Sep 17 00:00:00 2001\n> From: Jelte Fennema <github-tech@jeltef.nl>\n> Date: Mon, 12 Sep 2022 09:44:06 +0200\n> Subject: [PATCH v5] Support load balancing in libpq\n> \n> Load balancing connections across multiple read replicas is a pretty\n> common way of scaling out read queries. There are two main ways of doing\n> so, both with their own advantages and disadvantages:\n> 1. Load balancing at the client level\n> 2. Load balancing by connecting to an intermediary load balancer\n> \n> Both JBDC (Java) and Npgsql (C#) already support client level load\n> balancing (option #1). This patch implements client level load balancing\n> for libpq as well. To stay consistent with the JDBC and Npgsql part of\n> the ecosystem, a similar implementation and name for the option are\n> used. 
\n\nI still think all of the above has no business in the commit message,\nthough maybe the first paragraph can stay as introduction.\n\n> It contains two levels of load balancing:\n> 1. The given hosts are randomly shuffled, before resolving them\n> one-by-one.\n> 2. Once a host its addresses get resolved, those addresses are shuffled,\n> before trying to connect to them one-by-one.\n\nThat's fine.\n\nWhat should be in the commit message is at least a mention of what the\nnew connection parameter is called and possibly what is done to\naccomplish it.\n\nBut the committer will pick this up if needed I guess.\n\n> diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml\n> index f9558dec3b..6ce7a0c9cc 100644\n> --- a/doc/src/sgml/libpq.sgml\n> +++ b/doc/src/sgml/libpq.sgml\n> @@ -1316,6 +1316,54 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry id=\"libpq-load-balance-hosts\" xreflabel=\"load_balance_hosts\">\n> + <term><literal>load_balance_hosts</literal></term>\n> + <listitem>\n> + <para>\n> + Controls whether the client load balances connections across hosts and\n> + adresses. The default value is 0, meaning off, this means that hosts are\n> + tried in order they are provided and addresses are tried in the order\n> + they are received from DNS or a hosts file. If this value is set to 1,\n> + meaning on, the hosts and addresses that they resolve to are tried in\n> + random order. Subsequent queries once connected will still be sent to\n> + the same server. Setting this to 1, is mostly useful when opening\n> + multiple connections at the same time, possibly from different machines.\n> + This way connections can be load balanced across multiple Postgres\n> + servers.\n> + </para>\n> + <para>\n> + When providing multiple hosts, these hosts are resolved in random order.\n> + Then if that host resolves to multiple addresses, these addresses are\n> + connected to in random order. 
Only once all addresses for a single host\n> + have been tried, the addresses for the next random host will be\n> + resolved. This behaviour can lead to non-uniform address selection in\n> + certain cases. Such as when not all hosts resolve to the same number of\n> + addresses, or when multiple hosts resolve to the same address. So if you\n> + want uniform load balancing, this is something to keep in mind. However,\n> + non-uniform load balancing also has usecases, e.g. providing the\n> + hostname of a larger server multiple times in the host string so it gets\n> + more requests.\n> + </para>\n> + <para>\n> + When using this setting it's recommended to also configure a reasonable\n> + value for <xref linkend=\"libpq-connect-connect-timeout\"/>. Because then,\n> + if one of the nodes that are used for load balancing is not responding,\n> + a new node will be tried.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nI think this whole section is generally fine, but needs some\ncopyediting.\n\n> + <varlistentry id=\"libpq-random-seed\" xreflabel=\"random_seed\">\n> + <term><literal>random_seed</literal></term>\n> + <listitem>\n> + <para>\n> + Sets the random seed that is used by <xref linkend=\"libpq-load-balance-hosts\"/>\n> + to randomize the host order. 
This option is mostly useful when running\n> + tests that require a stable random order.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nI wonder whether this needs to be documented if it is mostly a\ndevelopment/testing parameter?\n\n> diff --git a/src/include/libpq/pqcomm.h b/src/include/libpq/pqcomm.h\n> index fcf68df39b..39e93b1392 100644\n> --- a/src/include/libpq/pqcomm.h\n> +++ b/src/include/libpq/pqcomm.h\n> @@ -27,6 +27,12 @@ typedef struct\n> \tsocklen_t\tsalen;\n> } SockAddr;\n> \n> +typedef struct\n> +{\n> +\tint\t\t\tfamily;\n> +\tSockAddr\taddr;\n> +}\t\t\tAddrInfo;\n\nThat last line looks weirdly indented compared to SockAddr; in the\nstruct above.\n\n> /* Configure the UNIX socket location for the well known port. */\n> \n> #define UNIXSOCK_PATH(path, port, sockdir) \\\n> diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c\n> index f88d672c6c..b4d3613713 100644\n> --- a/src/interfaces/libpq/fe-connect.c\n> +++ b/src/interfaces/libpq/fe-connect.c\n> @@ -241,6 +241,14 @@ static const internalPQconninfoOption PQconninfoOptions[] = {\n> \t\t\"Fallback-Application-Name\", \"\", 64,\n> \toffsetof(struct pg_conn, fbappname)},\n> \n> +\t{\"load_balance_hosts\", NULL, NULL, NULL,\n> +\t\t\"Load-Balance\", \"\", 1,\t/* should be just '0' or '1' */\n> +\toffsetof(struct pg_conn, loadbalance)},\n> +\n> +\t{\"random_seed\", NULL, NULL, NULL,\n> +\t\t\"Random-Seed\", \"\", 10,\t/* strlen(INT32_MAX) == 10 */\n> +\toffsetof(struct pg_conn, randomseed)},\n> +\n> \t{\"keepalives\", NULL, NULL, NULL,\n> \t\t\"TCP-Keepalives\", \"\", 1,\t/* should be just '0' or '1' */\n> \toffsetof(struct pg_conn, keepalives)},\n> @@ -379,6 +387,7 @@ static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);\n> static void freePGconn(PGconn *conn);\n> static void closePGconn(PGconn *conn);\n> static void release_conn_addrinfo(PGconn *conn);\n> +static bool store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);\n> static void 
sendTerminateConn(PGconn *conn);\n> static PQconninfoOption *conninfo_init(PQExpBuffer errorMessage);\n> static PQconninfoOption *parse_connection_string(const char *connstr,\n> @@ -424,6 +433,9 @@ static void pgpassfileWarning(PGconn *conn);\n> static void default_threadlock(int acquire);\n> static bool sslVerifyProtocolVersion(const char *version);\n> static bool sslVerifyProtocolRange(const char *min, const char *max);\n> +static int\tloadBalance(PGconn *conn);\n> +static bool parse_int_param(const char *value, int *result, PGconn *conn,\n> +\t\t\t\t\t\t\tconst char *context);\n> \n> \n> /* global variable because fe-auth.c needs to access it */\n> @@ -1007,6 +1019,46 @@ parse_comma_separated_list(char **startptr, bool *more)\n> \treturn p;\n> }\n> \n> +/*\n> + * Initializes the prng_state field of the connection. We want something\n> + * unpredictable, so if possible, use high-quality random bits for the\n> + * seed. Otherwise, fall back to a seed based on timestamp and PID.\n> + */\n> +static bool\n> +libpq_prng_init(PGconn *conn)\n> +{\n> +\tif (unlikely(conn->randomseed))\n> +\t{\n> +\t\tint\t\t\trseed;\n> +\n> +\t\tif (!parse_int_param(conn->randomseed, &rseed, conn, \"random_seed\"))\n> +\t\t{\n> +\t\t\treturn false;\n> +\t\t};\n\nI think it's project policy to drop the braces for single statements in\nif blocks.\n\n> +\t\tpg_prng_seed(&conn->prng_state, rseed);\n> +\t}\n> +\telse if (unlikely(!pg_prng_strong_seed(&conn->prng_state)))\n> +\t{\n> +\t\tuint64\t\trseed;\n> +\t\ttime_t\t\tnow = time(NULL);\n> +\n> +\t\t/*\n> +\t\t * Since PIDs and timestamps tend to change more frequently in their\n> +\t\t * least significant bits, shift the timestamp left to allow a larger\n> +\t\t * total number of seeds in a given time period. 
Since that would\n> +\t\t * leave only 20 bits of the timestamp that cycle every ~1 second,\n> +\t\t * also mix in some higher bits.\n> +\t\t */\n> +\t\trseed = ((uint64) getpid()) ^\n> +\t\t\t((uint64) now << 12) ^\n> +\t\t\t((uint64) now >> 20);\n> +\n> +\t\tpg_prng_seed(&conn->prng_state, rseed);\n> +\t}\n> +\treturn true;\n> +}\n> +\n> +\n> /*\n\nAdditional newline.\n\n> @@ -1164,6 +1217,36 @@ connectOptions2(PGconn *conn)\n> \t\t}\n> \t}\n> \n> +\tif (loadbalancehosts < 0)\n> +\t{\n> +\t\tappendPQExpBufferStr(&conn->errorMessage,\n> +\t\t\t\t\t\t\t libpq_gettext(\"loadbalance parameter must be an integer\\n\"));\n> +\t\treturn false;\n> +\t}\n> +\n> +\tif (loadbalancehosts)\n> +\t{\n> +\t\tif (!libpq_prng_init(conn))\n> +\t\t{\n> +\t\t\treturn false;\n> +\t\t}\n> +\n> +\t\t/*\n> +\t\t * Shuffle connhost with a Durstenfeld/Knuth version of the\n> +\t\t * Fisher-Yates shuffle. Source:\n> +\t\t * https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle#The_modern_algorithm\n> +\t\t */\n> +\t\tfor (i = conn->nconnhost - 1; i > 0; i--)\n> +\t\t{\n> +\t\t\tint\t\t\tj = pg_prng_uint64_range(&conn->prng_state, 0, i);\n> +\t\t\tpg_conn_host temp = conn->connhost[j];\n> +\n> +\t\t\tconn->connhost[j] = conn->connhost[i];\n> +\t\t\tconn->connhost[i] = temp;\n> +\t\t}\n> +\t}\n> +\n> +\n> \t/*\n\nAdditional newline.\n\n> @@ -1726,6 +1809,27 @@ connectFailureMessage(PGconn *conn, int errorno)\n> \t\tlibpq_append_conn_error(conn, \"\\tIs the server running on that host and accepting TCP/IP connections?\");\n> }\n> \n> +/*\n> + * Should we load balance across hosts? 
Returns 1 if yes, 0 if no, and -1 if\n> + * conn->loadbalance is set to a value which is not parseable as an integer.\n> + */\n> +static int\n> +loadBalance(PGconn *conn)\n> +{\n> +\tchar\t *ep;\n> +\tint\t\t\tval;\n> +\n> +\tif (conn->loadbalance == NULL)\n> +\t{\n> +\t\treturn 0;\n> +\t}\n\nAnother case of additional braces.\n\n> +\tval = strtol(conn->loadbalance, &ep, 10);\n> +\tif (*ep)\n> +\t\treturn -1;\n> +\treturn val != 0 ? 1 : 0;\n> +}\n> +\n> +\n> /*\n\nAdditional newline.\n\n> @@ -4041,6 +4154,63 @@ freePGconn(PGconn *conn)\n> \tfree(conn);\n> }\n> \n> +\n> +/*\n\nAdditional newline.\n\n> + * Copies over the AddrInfos from addrlist to the PGconn.\n> + */\n> +static bool\n> +store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist)\n> +{\n> +\tstruct addrinfo *ai = addrlist;\n> +\n> +\tconn->whichaddr = 0;\n> +\n> +\tconn->naddr = 0;\n> +\twhile (ai)\n> +\t{\n> +\t\tai = ai->ai_next;\n> +\t\tconn->naddr++;\n> +\t}\n> +\n> +\tconn->addr = calloc(conn->naddr, sizeof(AddrInfo));\n> +\tif (conn->addr == NULL)\n> +\t{\n> +\t\treturn false;\n> +\t}\n\nAdditional braces.\n\n> @@ -4048,11 +4218,10 @@ freePGconn(PGconn *conn)\n> static void\n> release_conn_addrinfo(PGconn *conn)\n> {\n> -\tif (conn->addrlist)\n> +\tif (conn->addr)\n> \t{\n> -\t\tpg_freeaddrinfo_all(conn->addrlist_family, conn->addrlist);\n> -\t\tconn->addrlist = NULL;\n> -\t\tconn->addr_cur = NULL;\t/* for safety */\n> +\t\tfree(conn->addr);\n> +\t\tconn->addr = NULL;\n> \t}\n> }\n> \n> diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h\n> index 512762f999..76ee988038 100644\n> --- a/src/interfaces/libpq/libpq-int.h\n> +++ b/src/interfaces/libpq/libpq-int.h\n> @@ -82,6 +82,8 @@ typedef struct\n> #endif\n> #endif\t\t\t\t\t\t\t/* USE_OPENSSL */\n> \n> +#include \"common/pg_prng.h\"\n> +\n> /*\n> * POSTGRES backend dependent Constants.\n> */\n> @@ -373,6 +375,8 @@ struct pg_conn\n> \tchar\t *pgpassfile;\t\t/* path to a file containing password(s) */\n> \tchar\t 
*channel_binding;\t/* channel binding mode\n> \t\t\t\t\t\t\t\t\t * (require,prefer,disable) */\n> +\tchar\t *loadbalance;\t/* load balance over hosts */\n> +\tchar\t *randomseed;\t\t/* seed for randomization of load balancing */\n> \tchar\t *keepalives;\t\t/* use TCP keepalives? */\n\nA bit unclear why you put the variables at this point in the list, but\nthe list doesn't seem to be ordered strictly anyway; still, maybe they\nwould fit best at the bottom below target_session_attrs?\n\n> \tchar\t *keepalives_idle;\t/* time between TCP keepalives */\n> \tchar\t *keepalives_interval;\t/* time between TCP keepalive\n> @@ -461,8 +465,10 @@ struct pg_conn\n> \tPGTargetServerType target_server_type;\t/* desired session properties */\n> \tbool\t\ttry_next_addr;\t/* time to advance to next address/host? */\n> \tbool\t\ttry_next_host;\t/* time to advance to next connhost[]? */\n> -\tstruct addrinfo *addrlist;\t/* list of addresses for current connhost */\n> -\tstruct addrinfo *addr_cur;\t/* the one currently being tried */\n> +\tint\t\t\tnaddr;\t\t\t/* number of addrs returned by getaddrinfo */\n> +\tint\t\t\twhichaddr;\t\t/* the addr currently being tried */\n\nAddress(es) is always spelt out in the comments, those two addr(s)\nshould also I think.\n\n> diff --git a/src/interfaces/libpq/t/003_loadbalance.pl b/src/interfaces/libpq/t/003_loadbalance.pl\n> new file mode 100644\n> index 0000000000..07eddbe9cc\n> --- /dev/null\n> +++ b/src/interfaces/libpq/t/003_loadbalance.pl\n> @@ -0,0 +1,167 @@\n> +# Copyright (c) 2022, PostgreSQL Global Development Group\n\nCopyright bump needed.\n\n\nCheers,\n\nMichael\n\n\n",
"msg_date": "Fri, 6 Jan 2023 18:21:23 +0100",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Support load balancing in libpq"
},
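One reason the patch reviewed above replaces getaddrinfo()'s linked list with a flat array (store_conn_addrinfo) is that a shuffle needs indexable entries. A simplified standalone model of that two-pass flattening, using toy struct names rather than libpq's actual types:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-ins for struct addrinfo and libpq's new AddrInfo. */
typedef struct toy_addrinfo
{
	int			ai_family;
	struct toy_addrinfo *ai_next;
} toy_addrinfo;

typedef struct
{
	int			family;
} toy_flat_addr;

/*
 * Mirror the shape of store_conn_addrinfo(): first walk the linked
 * list to count it, then copy each entry into a calloc'd array whose
 * elements can later be swapped by index during the shuffle.
 */
static toy_flat_addr *
flatten_addrlist(toy_addrinfo *list, int *naddr)
{
	toy_addrinfo *ai;
	toy_flat_addr *result;
	int			n = 0;
	int			i = 0;

	for (ai = list; ai != NULL; ai = ai->ai_next)
		n++;

	result = calloc(n, sizeof(toy_flat_addr));
	if (result == NULL)
		return NULL;

	for (ai = list; ai != NULL; ai = ai->ai_next)
		result[i++].family = ai->ai_family;

	*naddr = n;
	return result;
}
```

The copy also lets the patch free the original addrinfo list immediately, which is why release_conn_addrinfo() shrinks to a plain free() in the diff above.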
{
"msg_contents": "Attached an updated patch which should address your feedback and\nI updated the commit message.\n\n> I wonder whether making the parameter a boolean will paint us into a\n> corner\n\nI made it a string option, just like target_session_attrs. I'm pretty sure a \nround-robin load balancing policy could be implemented in the future\ngiven certain constraints, like connections being made within the same\nprocess. I adjusted the docs accordingly.\n\n> > +typedef struct\n> > +{\n> > +\tint\t\t\tfamily;\n> > +\tSockAddr\taddr;\n> > +}\t\t\tAddrInfo;\n>\n> That last line looks weirdly indented compared to SockAddr; in the\n> struct above.\n\nYes I agree, but for some reason pgindent really badly wants it formatted\nthat way. I now undid the changes made by pgindent manually.\n\n> I wonder whether this needs to be documented if it is mostly a\n> development/testing parameter?\n\nI also wasn't sure whether it should be documented or not. I'm fine with\neither, I'll leave it in for now and let a committer decide if it's wanted or not.\n\n> A bit unclear why you put the variables at this point in the list, but\n> the list doesn't seem to be ordered strictly anyway; still, maybe they\n> would fit best at the bottom below target_session_attrs?\n\nGood point, I added them after target_session_attrs now and also moved\ndocs/parsing accordingly. This makes sense conceptually to me as well, since\ntarget_session_attrs and load_balance_hosts have some interesting\nsense contextually too.\n\nP.S. I also attached the same pgindent run patch that I added in\nhttps://www.postgresql.org/message-id/flat/AM5PR83MB0178D3B31CA1B6EC4A8ECC42F7529@AM5PR83MB0178.EURPRD83.prod.outlook.com",
"msg_date": "Mon, 9 Jan 2023 13:00:01 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
{
"msg_contents": "On Wed, Sep 14, 2022 at 7:54 AM Maxim Orlov <orlovmg@gmail.com> wrote:\n> For the patch itself, I think it is better to use a more precise time function in libpq_prng_init or call it only once.\n> Thought it is a special corner case, imagine all the connection attempts at first second will be seeded with the save\n> value, i.e. will attempt to connect to the same host. I think, this is not we want to achieve.\n\nJust a quick single-issue review, but I agree with Maxim that having\none PRNG, seeded once, would be simpler -- with the tangible benefit\nthat it would eliminate weird behavior on simultaneous connections\nwhen the client isn't using OpenSSL. (I'm guessing a simple lock on a\nglobal PRNG would be less overhead than the per-connection\nstrong_random machinery, too, but I have no data to back that up.) The\ntest seed could then be handled globally as well (envvar?) so that you\ndon't have to introduce a debug-only option into the connection\nstring.\n\nOverall, I like the concept.\n\n--Jacob\n\n\n",
"msg_date": "Thu, 12 Jan 2023 15:19:19 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
{
"msg_contents": "> Just a quick single-issue review, but I agree with Maxim that having\n> one PRNG, seeded once, would be simpler\n\nI don't agree that it's simpler. Because now there's a mutex you have\nto manage, and honestly cross-platform threading in C is not simple.\nHowever, I attached two additional patches that implement this\napproach on top of the previous patchset. Just to make sure that\nthis patch is not blocked on this.\n\n> with the tangible benefit that it would eliminate weird behavior on\n> simultaneous connections when the client isn't using OpenSSL.\n\nThis is true, but still I think in practice very few people have a libpq\nthat's compiled without OpenSSL support.\n\n> I'm guessing a simple lock on a\n> global PRNG would be less overhead than the per-connection\n> strong_random machinery, too, but I have no data to back that up.\n\nIt might very well have less overhead, but neither of them should take\nup any significant amount of time during connection establishment.\n\n> The test seed could then be handled globally as well (envvar?) so that you\n> don't have to introduce a debug-only option into the connection string.\n\nWhy is a debug-only envvar any better than a debug-only connection option?\nFor now I kept the connection option approach, since to me they seem pretty\nmuch equivalent.",
"msg_date": "Fri, 13 Jan 2023 18:10:23 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
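A minimal model of the alternative being weighed in the message above — one process-wide generator behind a lock instead of per-connection prng_state (a pthreads sketch of the idea, not code from either patch variant):

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t prng_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t shared_state = 88172645463325252ULL;

/*
 * With one shared generator, every draw from any connection attempt
 * must take the lock; this mutex management is the extra complexity
 * (and potential contention) traded against seeding only once.
 */
static uint64_t
shared_prng_next(void)
{
	uint64_t	v;

	pthread_mutex_lock(&prng_lock);
	/* xorshift64 step, standing in for an actual pg_prng draw. */
	shared_state ^= shared_state << 13;
	shared_state ^= shared_state >> 7;
	shared_state ^= shared_state << 17;
	v = shared_state;
	pthread_mutex_unlock(&prng_lock);
	return v;
}
```

The per-connection design in the patch avoids this lock entirely at the cost of one seeding per PGconn, which is the trade-off Jelte argues is acceptable.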
{
"msg_contents": "On Fri, Jan 13, 2023 at 9:10 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n>\n> > Just a quick single-issue review, but I agree with Maxim that having\n> > one PRNG, seeded once, would be simpler\n>\n> I don't agree that it's simpler. Because now there's a mutex you have\n> to manage, and honestly cross-platform threading in C is not simple.\n> However, I attached two additional patches that implement this\n> approach on top of the previous patchset. Just to make sure that\n> this patch is not blocked on this.\n\nIt hadn't been my intention to block the patch on it, sorry. Just\nregistering a preference.\n\nI also didn't intend to make you refactor the locking code -- my\nassumption was that you could use the existing pglock_thread() to\nhandle it, since it didn't seem like the additional contention would\nhurt too much. Maybe that's not actually performant enough, in which\ncase my suggestion loses an advantage.\n\n> > The test seed could then be handled globally as well (envvar?) so that you\n> > don't have to introduce a debug-only option into the connection string.\n>\n> Why is a debug-only envvar any better than a debug-only connection option?\n> For now I kept the connection option approach, since to me they seem pretty\n> much equivalent.\n\nI guess I worry less about envvar namespace pollution than pollution\nof the connection options. And my thought was that the one-time\ninitialization could be moved to a place that doesn't need to know the\nconnection options at all, to make it easier to reason about the\narchitecture. Say, next to the WSAStartup machinery.\n\nBut as it is now, I agree that the implementation hasn't really lost\nany complexity compared to the original, and I don't feel particularly\nstrongly about it. If it doesn't help to make the change, then it\ndoesn't help.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Fri, 13 Jan 2023 10:44:50 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 10:44 AM Jacob Champion <jchampion@timescale.com> wrote:\n> And my thought was that the one-time\n> initialization could be moved to a place that doesn't need to know the\n> connection options at all, to make it easier to reason about the\n> architecture. Say, next to the WSAStartup machinery.\n\n(And after marinating on this over the weekend, it occurred to me that\nkeeping the per-connection option while making the PRNG global\nintroduces an additional hazard, because two concurrent connections\ncan now fight over the seed value.)\n\n--Jacob\n\n\n",
"msg_date": "Tue, 17 Jan 2023 14:52:13 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
{
"msg_contents": "As far as I can tell this is ready for committer feedback now btw. I'd\nreally like to get this into PG16.\n\n> It hadn't been my intention to block the patch on it, sorry. Just\n> registering a preference.\n\nNo problem. I hadn't looked into the shared PRNG solution closely\nenough to determine if I thought it was better or not. Now that I\nimplemented an initial version I personally don't think it brings\nenough advantages to warrant the added complexity. I definitely\ndon't think it's required for this patch, if needed this change can\nalways be done later without negative user impact afaict. And the\nconnection local PRNG works well enough to bring advantages.\n\n> my\n> assumption was that you could use the existing pglock_thread() to\n> handle it, since it didn't seem like the additional contention would\n> hurt too much.\n\nThat definitely would have been the easier approach and I considered\nit. But the purpose of pglock_thread seemed so different from this lock\nthat it felt weird to combine the two. Another reason I refactored the lock\ncode is that it would probably be necessary for a future round-robin\nload balancing, which would require sharing state between different\nconnections.\n\n> > And my thought was that the one-time\n> > initialization could be moved to a place that doesn't need to know the\n> > connection options at all, to make it easier to reason about the\n> > architecture. Say, next to the WSAStartup machinery.\n\nThat's an interesting thought, but I don't think it would really simplify\nthe initialization code. Mostly it would change its location.\n\n> (And after marinating on this over the weekend, it occurred to me that\n> keeping the per-connection option while making the PRNG global\n> introduces an additional hazard, because two concurrent connections\n> can now fight over the seed value.)\n\nI think since setting the initial seed value is really only meant for testing\nit's not a big deal if it doesn't work with concurrent connections.\n\n\n",
"msg_date": "Wed, 18 Jan 2023 11:24:20 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
{
"msg_contents": "After discussing this patch privately with Andres here's a new version of this\npatch. The major differences are:\n1. Use the pointer value of the connection as a randomness source\n2. Use more precise time as a randomness source\n3. Move addrinfo changes into a separate commit. This is both to make\nthe actual change cleaner, and because another patch of mine (non-blocking\ncancels) benefits from the same change.\n4. Use the same type of Fisher-Yates shuffle as is done in two other\nplaces in the PG source code.\n5. Move tests depending on hosts file to a separate file. This makes\nit clear in the output that tests are skipped, because skip_all shows\na nice message.\n6. Only enable hosts file load balancing when loadbalance is included\nin PG_TEST_EXTRA, since this test listens on a TCP socket and is thus\ndangerous on a multi-user system.\n\nOn Wed, 18 Jan 2023 at 11:24, Jelte Fennema <postgres@jeltef.nl> wrote:\n>\n> As far as I can tell this is ready for committer feedback now btw. I'd\n> really like to get this into PG16.\n>\n> > It hadn't been my intention to block the patch on it, sorry. Just\n> > registering a preference.\n>\n> No problem. I hadn't looked into the shared PRNG solution closely\n> enough to determine if I thought it was better or not. Now that I\n> implemented an initial version I personally don't think it brings\n> enough advantages to warrant the added complexity. I definitely\n> don't think it's required for this patch, if needed this change can\n> always be done later without negative user impact afaict. And the\n> connection local PRNG works well enough to bring advantages.\n>\n> > my\n> > assumption was that you could use the existing pglock_thread() to\n> > handle it, since it didn't seem like the additional contention would\n> > hurt too much.\n>\n> That definitely would have been the easier approach and I considered\n> it. But the purpose of pglock_thread seemed so different from this lock\n> that it felt weird to combine the two. Another reason I refactored the lock\n> code is that it would probably be necessary for a future round-robin\n> load balancing, which would require sharing state between different\n> connections.\n>\n> > > And my thought was that the one-time\n> > > initialization could be moved to a place that doesn't need to know the\n> > > connection options at all, to make it easier to reason about the\n> > > architecture. Say, next to the WSAStartup machinery.\n>\n> That's an interesting thought, but I don't think it would really simplify\n> the initialization code. Mostly it would change its location.\n>\n> > (And after marinating on this over the weekend, it occurred to me that\n> > keeping the per-connection option while making the PRNG global\n> > introduces an additional hazard, because two concurrent connections\n> > can now fight over the seed value.)\n>\n> I think since setting the initial seed value is really only meant for testing\n> it's not a big deal if it doesn't work with concurrent connections.",
"msg_date": "Thu, 26 Jan 2023 17:29:06 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
{
"msg_contents": "This patch seems to need a rebase.\n\nI'll update the status to Waiting on Author for now. After rebasing\nplease update it to either Needs Review or Ready for Committer\ndepending on how simple the rebase was and whether there are open\nquestions to finish it.\n\n\n",
"msg_date": "Wed, 1 Mar 2023 14:12:45 -0500",
"msg_from": "Greg S <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
{
"msg_contents": "done and updated cf entry\n\nOn Wed, 1 Mar 2023 at 20:13, Greg S <stark.cfm@gmail.com> wrote:\n>\n> This patch seems to need a rebase.\n>\n> I'll update the status to Waiting on Author for now. After rebasing\n> please update it to either Needs Review or Ready for Committer\n> depending on how simple the rebase was and whether there are open\n> questions to finish it.",
"msg_date": "Wed, 1 Mar 2023 21:03:31 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 12:03 PM Jelte Fennema <postgres@jeltef.nl> wrote:\n>\n> done and updated cf entry\n>\n\nHi Jelte!\n\nI've looked into the patch. Although many improvements could be\nsuggested, it definitely makes sense as-is too.\nThese improvements might be, for example, sorting hosts according to\nping latency or something like that. Or, perhaps, some other balancing\npolicies. Anyway, randomizing is a good start too.\n\nI want to note that the Fisher-Yates algorithm is implemented in a\ndifficult to understand manner.\n+if (j < i) /* avoid fetching undefined data if j=i */\nThis stuff does not make sense in case of shuffling arrays inplace. It\nis important only for making a new copy of an array and only in\nlanguages that cannot access uninitialized values. I'd suggest just\nremoving this line (in both cases).\n\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Thu, 2 Mar 2023 20:52:49 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
{
"msg_contents": "> I want to note that the Fisher-Yates algorithm is implemented in a\n> difficult to understand manner.\n> +if (j < i) /* avoid fetching undefined data if j=i */\n> This stuff does not make sense in case of shuffling arrays inplace. It\n> is important only for making a new copy of an array and only in\n> languages that cannot access uninitialized values. I'd suggest just\n> removing this line (in both cases).\n\nDone. Also added another patch to remove the same check from another\nplace in the codebase where it is unnecessary.",
"msg_date": "Fri, 3 Mar 2023 15:37:55 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
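To make the algorithmic point above concrete, here is a stand-alone C sketch of the in-place "inside-out" Fisher-Yates shuffle with the `if (j < i)` guard removed. This is illustrative only, not libpq code: `shuffle` is a made-up name, and `prng_range` is a hypothetical stand-in for the `pg_prng_uint64_range` call the patch actually uses.

```c
#include <assert.h>

/* Tiny LCG used only so this sketch is self-contained; a hypothetical
 * stand-in for pg_prng_uint64_range(state, 0, upper). */
static int
prng_range(unsigned int *state, int upper)
{
	*state = *state * 1103515245u + 12345u;
	return (int) ((*state >> 16) % (unsigned int) (upper + 1));
}

/*
 * In-place Fisher-Yates shuffle.  No "if (j < i)" guard is needed:
 * when j == i the swap is a harmless self-assignment, because arr[i]
 * is already initialized when shuffling an existing array in place.
 * The guard only matters for the copy-into-a-new-array variant.
 */
static void
shuffle(int *arr, int n, unsigned int *state)
{
	int			i;

	for (i = 1; i < n; i++)
	{
		int			j = prng_range(state, i);
		int			tmp = arr[j];

		arr[j] = arr[i];
		arr[i] = tmp;
	}
}
```

Whatever the PRNG produces, the result is a permutation of the input, which is the property the guard-free in-place version preserves.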
{
"msg_contents": "Small update. Improved some wording in the docs.\n\nOn Fri, 3 Mar 2023 at 15:37, Jelte Fennema <postgres@jeltef.nl> wrote:\n>\n> > I want to note that the Fisher-Yates algorithm is implemented in a\n> > difficult to understand manner.\n> > +if (j < i) /* avoid fetching undefined data if j=i */\n> > This stuff does not make sense in case of shuffling arrays inplace. It\n> > is important only for making a new copy of an array and only in\n> > languages that cannot access uninitialized values. I'd suggest just\n> > removing this line (in both cases).\n>\n> Done. Also added another patch to remove the same check from another\n> place in the codebase where it is unnecessary.",
"msg_date": "Mon, 6 Mar 2023 15:35:47 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
{
"msg_contents": "The pgindent run in b6dfee28f is causing this patch to need a rebase\nfor the cfbot to apply it.\n\n\n",
"msg_date": "Tue, 14 Mar 2023 14:05:00 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
{
"msg_contents": "Rebased\n\nOn Tue, 14 Mar 2023 at 19:05, Gregory Stark (as CFM)\n<stark.cfm@gmail.com> wrote:\n>\n> The pgindent run in b6dfee28f is causing this patch to need a rebase\n> for the cfbot to apply it.",
"msg_date": "Wed, 15 Mar 2023 09:46:08 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Support load balancing in libpq"
},
{
"msg_contents": "In general I think this feature makes sense (which has been echoed many times\nin the thread), and the implementation strikes a good balance of robustness and\nsimplicity. Reading this I think it's very close to being committable, but I\nhave a few comments on the patch series:\n\n+ sent to the same server. There are currently two modes:\nThe documentation lists the modes disabled and random, but I wonder if it's\nworth expanding the docs to mention that \"disabled\" is pretty much a round\nrobin load balancing scheme? It reads a bit odd to present load balancing\nwithout a mention of round robin balancing given how common it is.\n\n- conn->addrlist_family = hint.ai_family = AF_UNSPEC;\n+ hint.ai_family = AF_UNSPEC;\nThis removes all uses of conn->addrlist_family and that struct member can be\nremoved.\n\n+ to, for example, load balance over stanby servers only. Once successfully\ns/stanby/standby/\n\n+ Postgres servers.\ns/Postgres/<productname>PostgreSQL</productname>/\n\n+ more addresses than others. So if you want uniform load balancing,\n+ this is something to keep in mind. However, non-uniform load\n+ balancing can also be used to your advantage, e.g. by providing the\nThe documentation typically uses a less personal form, I would suggest something\nalong the lines of:\n\n \"If uniform load balancing is required then an external load balancing tool\n must be used. Non-uniform load balancing can also be used to skew the\n results, e.g. by providing the..\"\n\n+ if (!libpq_prng_init(conn))\n+ return false;\nThis needs to set a returned error message with libpq_append_conn_error (feel\nfree to come up with a better wording of course):\n\n libpq_append_conn_error(conn, \"unable to initiate random number generator\");\n\n-#ifndef WIN32\n+/* MinGW has sys/time.h, but MSVC doesn't */\n+#ifndef _MSC_VER\n #include <sys/time.h>\nThis seems unrelated to the patch in question, and should be a separate commit IMO.\n\n+ LOAD_BALANCE_RANDOM, /* Read-write server */\nI assume this comment is a copy/paste and should have been reflecting random order?\n\n+++ b/src/interfaces/libpq/t/003_loadbalance_host_list.pl\nNitpick, but we should probably rename this load_balance to match the parameter\nbeing tested.\n\n+++ b/src/interfaces/libpq/t/004_loadbalance_dns.pl\nOn the subject of tests, I'm a bit torn. I appreciate that the patch includes\na thorough test, but I'm not convinced we should add that to the tree. A test\nwhich requires root permission level manual system changes stands a very low\nchance of ever being executed, and as such will equate to dead code that may\neasily be broken or subtly broken.\n \nI am also not a fan of the random_seed parameter. A parameter only useful for\ntesting, and which enables testing by circumventing the mechanism to test\n(making randomness predictable), seems like expensive baggage to carry around.\nFrom experience we also know this risks ending up in production configs for all\nthe wrong reasons.\n\nGiven the implementation of this feature, the actual connection phase isn't any\ndifferent from just passing multiple hostnames and operating in the round-robin\nfashion. What we want to ensure is that the randomization isn't destroying the\nconnection array. Let's see what we can do while still having tests that can\nbe executed in the buildfarm.\n\nA few ideas:\n\n * Add basic tests for the load_balance_host connection param only accepting\n sane values\n\n * Alter the connect_ok() tests in 003_loadbalance_host_list.pl to not require\n random_seed but instead use randomization. Thinking out loud, how about\n something along these lines?\n - Passing a list of unreachable hostnames with a single good hostname\n should result in a connection.\n - Passing a list of one good hostname should result in a connection\n - Passing a list of n good hostnames (where all are equal) should result in\n a connection\n - Passing a list of n unreachable hostnames should result in log entries\n for n broken resolves in some order, and running that test n' times\n shouldn't - statistically - result in the same order for a large enough n'.\n\n * Remove random_seed and 004_loadbalance_dns.pl\n\nThoughts?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 16 Mar 2023 11:47:39 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "> The documentation lists the modes disabled and random, but I wonder if it's\n> worth expanding the docs to mention that \"disabled\" is pretty much a round\n> robin load balancing scheme? It reads a bit odd to present load balancing\n> without a mention of round robin balancing given how common it is.\n\nI think you misunderstood what I meant in that section, so I rewrote\nit to hopefully be clearer. Because disabled really isn't the same as\nround-robin.\n\n> This removes all uses of conn->addrlist_family and that struct member can be\n> removed.\n\ndone\n\n> s/stanby/standby/\n> s/Postgres/<productname>PostgreSQL</productname>/\n\ndone\n\n> The documentation typically use a less personal form, I would suggest something\n> along the lines of:\n>\n> \"If uniform load balancing is required then an external load balancing tool\n> must be used. Non-uniform load balancing can also be used to skew the\n> results, e.g. by providing the..\"\n\nrewrote this to stop using \"you\" and expanded a bit on the topic\n\n> libpq_append_conn_error(conn, \"unable to initiate random number generator\");\n\ndone\n\n> -#ifndef WIN32\n> +/* MinGW has sys/time.h, but MSVC doesn't */\n> +#ifndef _MSC_VER\n> #include <sys/time.h>\n> This seems unrelated to the patch in question, and should be a separate commit IMO.\n\nIt's not really unrelated. This only started to be needed because\nlibpq_prng_init calls gettimeofday . That did not work on MinGW\nsystems. Before this patch libpq was never calling gettimeofday. 
So I\nthink it makes sense to leave it in the commit.\n\n> + LOAD_BALANCE_RANDOM, /* Read-write server */\n> I assume this comment is a copy/paste and should have been reflecting random order?\n\nYes, done\n\n> +++ b/src/interfaces/libpq/t/003_loadbalance_host_list.pl\n> Nitpick, but we should probably rename this load_balance to match the parameter\n> being tested.\n\nDone\n\n> A test\n> which require root permission level manual system changes stand a very low\n> chance of ever being executed, and as such will equate to dead code that may\n> easily be broken or subtly broken.\n\nWhile I definitely agree that it makes it hard to execute, I don't\nthink that means it will be executed nearly as few times as you\nsuggest. Maybe you missed it, but I modified the .cirrus.yml file to\nconfigure the hosts file for both Linux and Windows runs. So, while I\nagree it is unlikely to be executed manually by many people, it would\nstill be run on every commit fest entry (which should capture most\nissues that I can imagine could occur).\n\n> I am also not a fan of the random_seed parameter.\n\nFair enough. Removed\n\n> A few ideas:\n>\n> * Add basic tests for the load_balance_host connection param only accepting\n> sane values\n>\n> * Alter the connect_ok() tests in 003_loadbalance_host_list.pl to not require\n> random_seed but instead using randomization. Thinking out loud, how about\n> something along these lines?\n> - Passing a list of unreachable hostnames with a single good hostname\n> should result in a connection.\n> - Passing a list of one good hostname should result in a connection\n> - Passing a list on n good hostname (where all are equal) should result in\n> a connection\n\nImplemented all these.\n\n> - Passing a list of n unreachable hostnames should result in log entries\n> for n broken resolves in some order, and running that test n' times\n> shouldn't - statistically - result in the same order for a large enough n'.\n\nI didn't implement this one. 
Instead I went for another statistics-based\napproach with working hosts (see test for details).\n\n> * Remove random_seed and 004_loadbalance_dns.pl\n\nI moved 004_load_balance_dns.pl to a separate commit (after making\nsimilar random_seed removal related changes to it). As explained above\nI think it's worth it to have it because it gets executed in CI. But\nfeel free to commit only the main patch, if you disagree.",
"msg_date": "Fri, 17 Mar 2023 09:50:41 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "Rebased patch after conflicts with bfc9497ece01c7c45437bc36387cb1ebe346f4d2",
"msg_date": "Wed, 22 Mar 2023 13:27:00 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "> On 17 Mar 2023, at 09:50, Jelte Fennema <postgres@jeltef.nl> wrote:\n> \n>> The documentation lists the modes disabled and random, but I wonder if it's\n>> worth expanding the docs to mention that \"disabled\" is pretty much a round\n>> robin load balancing scheme? It reads a bit odd to present load balancing\n>> without a mention of round robin balancing given how common it is.\n> \n> I think you misunderstood what I meant in that section, so I rewrote\n> it to hopefully be clearer. Because disabled really isn't the same as\n> round-robin.\n\nThinking more about it I removed that section since it adds more confusion than\nit resolves I think. It would be interesting to make it a true round-robin\nwith some form of locally stored pointer to the last connection but that's for\nfuture hacking.\n\n>> -#ifndef WIN32\n>> +/* MinGW has sys/time.h, but MSVC doesn't */\n>> +#ifndef _MSC_VER\n>> #include <sys/time.h>\n>> This seems unrelated to the patch in question, and should be a separate commit IMO.\n> \n> It's not really unrelated. This only started to be needed because\n> libpq_prng_init calls gettimeofday . That did not work on MinGW\n> systems. Before this patch libpq was never calling gettimeofday. So I\n> think it makes sense to leave it in the commit.\n\nGotcha.\n\n>> A test\n>> which require root permission level manual system changes stand a very low\n>> chance of ever being executed, and as such will equate to dead code that may\n>> easily be broken or subtly broken.\n> \n> While I definitely agree that it makes it hard to execute, I don't\n> think that means it will be executed nearly as few times as you\n> suggest. Maybe you missed it, but I modified the .cirrus.yml file to\n> configure the hosts file for both Linux and Windows runs. So, while I\n> agree it is unlikely to be executed manually by many people, it would\n> still be run on every commit fest entry (which should capture most\n> issues that I can imagine could occur).\n\nI did see it was used in the CI since the jobs there are containerized; what\nI'm less happy about is that we won't be able to test this in the BF. That\nbeing said, not having the test at all would mean even less testing so in the\nend I agree that including it is the least bad option. Longer term I would\nlike to rework this into something less of a do-this-manually test, but I have\nno good ideas right now.\n\nI've played around some more with this and came up with the attached v15 which\nI think is close to the final state. The changes I've made are:\n\n * Added the DNS test back into the main commit\n * A few incorrect (referred to how the test worked previously) comments in\n the tests fixed.\n * The check against PG_TEST_EXTRA performed before any processing done\n * Reworked the check for hosts content attempting to make it a bit more\n robust\n * Changed store_conn_addrinfo to return int like how all the functions\n dealing with addrinfo do. Also moved the error reporting to inside there\n where the error happened.\n * Made the prng init function void as it always returned true anyways.\n * Minor comment and docs tweaking.\n * I removed the change to geqo, while I don't think it's incorrect it also\n hardly seems worth the churn.\n * Commit messages are reworded.\n\nI would like to see this wrapped up in the current CF, what do you think about\nthe attached?\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 27 Mar 2023 11:43:03 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "Looks good overall. I attached a new version with a few small changes:\n\n> * Changed store_conn_addrinfo to return int like how all the functions\n> dealing with addrinfo does. Also moved the error reporting to inside there\n> where the error happened.\n\nI don't feel strong about the int vs bool return type. The existing\nstatic libpq functions are a bit of a mixed bag around this, so either\nway seems fine to me. And moving the log inside the function seems\nfine too. But it seems you accidentally removed the \"goto\nerror_return\" part as well, so now we're completely ignoring the\nallocation failure. The attached patch fixes that.\n\n>+ok($node1_occurences > 1, \"expected at least one execution on node1, found none\");\n>+ok($node2_occurences > 1, \"expected at least one execution on node2, found none\");\n>+ok($node3_occurences > 1, \"expected at least one execution on node3, found none\");\n\nI changed the message to be a description of the expected case,\ninstead of the failure case. This is in line with the way these\nmessages are used in other tests, and indeed seems like the correct\nway because you get output from \"meson test -v postgresql:libpq /\nlibpq/003_load_balance_host_list\" like this:\n▶ 6/6 - received at least one connection on node1 OK\n▶ 6/6 - received at least one connection on node2 OK\n▶ 6/6 - received at least one connection on node3 OK\n▶ 6/6 - received 50 connections across all nodes OK\n\nFinally, I changed a few small typos in your updated commit message\n(some of which originated from my earlier commit messages)",
"msg_date": "Mon, 27 Mar 2023 13:50:39 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
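The statistical checks above (each node receives at least one of the connections) can be mirrored in a self-contained C sketch. Nothing below is libpq code; `tally_first_hosts` is a made-up model in which each simulated connection tries the first host of a freshly shuffled three-host list, and the tiny LCG stands in for the real PRNG.

```c
#include <assert.h>

#define NHOSTS 3

/* Tiny LCG so the sketch is self-contained (not the PRNG libpq uses). */
static unsigned int
prng_next(unsigned int *state)
{
	*state = *state * 1103515245u + 12345u;
	return (*state >> 16) & 0x7fff;
}

/*
 * Model of the test above: for each of nconns simulated connections,
 * shuffle the host list (inside-out Fisher-Yates, as in the patch) and
 * count which host ends up being tried first.
 */
static void
tally_first_hosts(int nconns, unsigned int seed, int counts[NHOSTS])
{
	unsigned int state = seed;
	int			c,
				h;

	for (h = 0; h < NHOSTS; h++)
		counts[h] = 0;

	for (c = 0; c < nconns; c++)
	{
		int			hosts[NHOSTS] = {0, 1, 2};
		int			i;

		for (i = 1; i < NHOSTS; i++)
		{
			int			j = (int) (prng_next(&state) % (unsigned int) (i + 1));
			int			tmp = hosts[j];

			hosts[j] = hosts[i];
			hosts[i] = tmp;
		}
		counts[hosts[0]]++;		/* this "connection" went to hosts[0] */
	}
}
```

With enough simulated connections, every host should be tried first at least once, which is exactly the shape of the ok() checks in the test above.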
{
"msg_contents": "> On 27 Mar 2023, at 13:50, Jelte Fennema <postgres@jeltef.nl> wrote:\n> \n> Looks good overall. I attached a new version with a few small changes:\n> \n>> * Changed store_conn_addrinfo to return int like how all the functions\n>> dealing with addrinfo does. Also moved the error reporting to inside there\n>> where the error happened.\n> \n> I don't feel strong about the int vs bool return type. The existing\n> static libpq functions are a bit of a mixed bag around this, so either\n> way seems fine to me. And moving the log inside the function seems\n> fine too. But it seems you accidentally removed the \"goto\n> error_return\" part as well, so now we're completely ignoring the\n> allocation failure. The attached patch fixes that.\n\nUgh, thanks. I had a conflict here when rebasing with the load balancing\ncommit in place and clearly fat-fingered that one.\n\n>> +ok($node1_occurences > 1, \"expected at least one execution on node1, found none\");\n>> +ok($node2_occurences > 1, \"expected at least one execution on node2, found none\");\n>> +ok($node3_occurences > 1, \"expected at least one execution on node3, found none\");\n> \n> I changed the message to be a description of the expected case,\n> instead of the failure case. This is in line with the way these\n> messages are used in other tests, and indeed seems like the correct\n> way because you get output from \"meson test -v postgresql:libpq /\n> libpq/003_load_balance_host_list\" like this:\n> ▶ 6/6 - received at least one connection on node1 OK\n> ▶ 6/6 - received at least one connection on node2 OK\n> ▶ 6/6 - received at least one connection on node3 OK\n> ▶ 6/6 - received 50 connections across all nodes OK\n\nGood point.\n\n> Finally, I changed a few small typos in your updated commit message\n> (some of which originated from my earlier commit messages)\n\n+1\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 27 Mar 2023 13:57:46 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "Hi,\n\n> > ▶ 6/6 - received at least one connection on node1 OK\n> > ▶ 6/6 - received at least one connection on node2 OK\n> > ▶ 6/6 - received at least one connection on node3 OK\n> > ▶ 6/6 - received 50 connections across all nodes OK\n>\n> Good point.\n>\n> > Finally, I changed a few small typos in your updated commit message\n> > (some of which originated from my earlier commit messages)\n>\n> +1\n\nHi,\n\n> I would like to see this wrapped up in the current CF, what do you think about\n> the attached?\n\nIn v15-0001:\n\n```\n+ conn->addr = calloc(conn->naddr, sizeof(AddrInfo));\n+ if (conn->addr == NULL)\n+ {\n+ libpq_append_conn_error(conn, \"out of memory\");\n+ return 1;\n+ }\n```\n\nAccording to the man pages, in a corner case when naddr is 0 calloc\ncan return NULL which will not indicate an error.\n\nSo I think it should be:\n\n```\nif (conn->addr == NULL && conn->naddr != 0)\n```\n\nOther than that v15 looked very good. It was checked on Linux and\nMacOS including running it under sanitizer.\n\nI will take a look at v16 now.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 27 Mar 2023 15:01:36 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
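The calloc corner case pointed out above can be shown with a stand-alone sketch. The helper name is made up for illustration; the point is only that calloc(0, size) may legitimately return NULL, so a NULL result proves out-of-memory only when the element count is nonzero.

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Illustrative allocation helper (not libpq code).  Returns 0 on
 * success and -1 on genuine out-of-memory; a NULL result for a
 * zero-element request is treated as success, since the C standard
 * allows calloc(0, size) to return either NULL or a freeable pointer.
 */
static int
alloc_array(void **out, size_t nelem, size_t elemsize)
{
	void	   *p = calloc(nelem, elemsize);

	if (p == NULL && nelem != 0)
		return -1;				/* out of memory */

	*out = p;
	return 0;
}
```

As the follow-up message notes, the patch ends up not needing this distinction, because getaddrinfo already fails before a zero-length address list can reach the allocation.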
{
"msg_contents": "Hi,\n\n> So I think it should be:\n>\n> ```\n> if (conn->addr == NULL && conn->naddr != 0)\n> ```\n>\n> [...]\n>\n> I will take a look at v16 now.\n\nThe code coverage could be slightly better.\n\nIn v16-0001:\n\n```\n+ ret = store_conn_addrinfo(conn, addrlist);\n+ pg_freeaddrinfo_all(hint.ai_family, addrlist);\n+ if (ret)\n+ goto error_return; /* message already logged */\n```\n\nThe goto path is not test-covered.\n\nIn v16-0002:\n\n```\n+ }\n+ else\n+ conn->load_balance_type = LOAD_BALANCE_DISABLE;\n```\n\nThe else branch is never executed.\n\n```\n if (ret)\n goto error_return; /* message already logged */\n\n+ /*\n+ * If random load balancing is enabled we shuffle the addresses.\n+ */\n+ if (conn->load_balance_type == LOAD_BALANCE_RANDOM)\n+ {\n+ /*\n+ * This is the \"inside-out\" variant of the Fisher-Yates shuffle\n[...]\n+ */\n+ for (int i = 1; i < conn->naddr; i++)\n+ {\n+ int j =\npg_prng_uint64_range(&conn->prng_state, 0, i);\n+ AddrInfo temp = conn->addr[j];\n+\n+ conn->addr[j] = conn->addr[i];\n+ conn->addr[i] = temp;\n+ }\n+ }\n```\n\nStrangely enough the body of the for loop is never executed either.\nApparently only one address is used and there is nothing to shuffle?\n\nHere is the exact command I used to build the code coverage report:\n\n```\ngit clean -dfx && meson setup --buildtype debug -Db_coverage=true\n-Dcassert=true -DPG_TEST_EXTRA=\"kerberos ldap ssl load_balance\"\n-Dldap=disabled -Dssl=openssl -Dtap_tests=enabled\n-Dprefix=/home/eax/projects/pginstall build && ninja -C build &&\nPG_TEST_EXTRA=1 meson test -C build && ninja -C build coverage-html\n```\n\nI'm sharing this for the sake of completeness. I don't have a strong\nopinion on whether we should bother with covering every new line of\ncode with tests.\n\nExcept for the named nitpicks v16 looks good to me.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 27 Mar 2023 16:32:20 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "Hi,\n\n> ```\n> + ret = store_conn_addrinfo(conn, addrlist);\n> + pg_freeaddrinfo_all(hint.ai_family, addrlist);\n> + if (ret)\n> + goto error_return; /* message already logged */\n> ```\n> The goto path is not test-covered.\n\nD'oh, this one is fine since store_conn_addrinfo() is going to fail\nonly when we are out of memory.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 27 Mar 2023 16:35:50 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "> > ```\n> > if (conn->addr == NULL && conn->naddr != 0)\n> > ```\n\nAfaict this is not necessary, since getaddrinfo already returns an\nerror if the host could not be resolved to any addresses. A quick test\ngives me this error:\nerror: could not translate host name \"doesnotexist\" to address: Name\nor service not known\n\n>\n> ```\n> + }\n> + else\n> + conn->load_balance_type = LOAD_BALANCE_DISABLE;\n> ```\n>\n> The else branch is never executed.\n\nI don't think that line is coverable then. There's definitely places\nin the test suite where load_balance_hosts is not explicitly set. But\neven in those cases I guess the argument parsing logic will use\nDefaultLoadBalanceHosts instead of NULL as a value for\nconn->load_balance_type.\n\n> Strangely enough the body of the for loop is never executed either.\n> Apparently only one address is used and there is nothing to shuffle?\n>\n> Here is the exact command I used to build the code coverage report:\n\nI guess you didn't set up the hostnames in /etc/hosts as described in\n004_load_balance_dns.pl. Then it's expected that the loop body isn't\ncovered. As discussed upthread, running this test manually is much\nmore cumbersome than is desirable, but it's still better than not\nhaving the test at all, because it is run in CI.\n\n\n",
"msg_date": "Mon, 27 Mar 2023 16:51:56 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "Hi,\n\n> I guess you didn't set up the hostnames in /etc/hosts as described in\n> 004_load_balance_dns.pl. Then it's expected that the loop body isn't\n> covered. As discussed upthread, running this test manually is much\n> more cumbersome than is desirable, but it's still better than not\n> having the test at all, because it is run in CI.\n\nGot it, thanks.\n\nI guess I'm completely out of nitpicks then!\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 27 Mar 2023 19:19:34 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "Hi,\n\n\"unlikely\" macro is used in libpq_prng_init() in the patch. I wonder\nif the place is really 'hot' to use \"unlikely\" macro.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Tue, 28 Mar 2023 16:16:28 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "> On 28 Mar 2023, at 09:16, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> \"unlikely\" macro is used in libpq_prng_init() in the patch. I wonder\n> if the place is really 'hot' to use \"unlikely\" macro.\n\nI don't think it is, I was thinking to rewrite as the below sketch:\n\n{\n if (pg_prng_strong_seed(&conn->prng_state)))\n return;\n\n /* fallback seeding */\n}\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 28 Mar 2023 09:21:57 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "I think it's fine to remove it. It originated from postmaster.c, where\nI copied the original implementation of libpq_prng_init from.\n\nOn Tue, 28 Mar 2023 at 09:22, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 28 Mar 2023, at 09:16, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n> > \"unlikely\" macro is used in libpq_prng_init() in the patch. I wonder\n> > if the place is really 'hot' to use \"unlikely\" macro.\n>\n> I don't think it is, I was thinking to rewrite as the below sketch:\n>\n> {\n> if (pg_prng_strong_seed(&conn->prng_state)))\n> return;\n>\n> /* fallback seeding */\n> }\n>\n> --\n> Daniel Gustafsson\n>\n\n\n",
"msg_date": "Tue, 28 Mar 2023 09:33:27 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": ">> \"unlikely\" macro is used in libpq_prng_init() in the patch. I wonder\n>> if the place is really 'hot' to use \"unlikely\" macro.\n> \n> I don't think it is, I was thinking to rewrite as the below sketch:\n> \n> {\n> if (pg_prng_strong_seed(&conn->prng_state)))\n> return;\n> \n> /* fallback seeding */\n> }\n\n+1.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Tue, 28 Mar 2023 16:36:18 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "> I think it's fine to remove it. It originated from postmaster.c, where\n> I copied the original implementation of libpq_prng_init from.\n\nI agree to remove unlikely macro here.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Tue, 28 Mar 2023 16:37:07 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "I took another couple of looks at this and pushed it after a few small tweaks\nto the docs.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 29 Mar 2023 22:18:43 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "Dear Daniel, Jelte\n\nThank you for creating a good feature!\nWhile checking the buildfarm, I found a failure on NetBSD caused by the added code[1]:\n\n```\nfe-connect.c: In function 'libpq_prng_init':\nfe-connect.c:1048:11: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]\n 1048 | rseed = ((uint64) conn) ^\n | ^\ncc1: all warnings being treated as errors\n```\n\nThis failure seemed to occurr when the pointer is casted to different size.\nAnd while checking more, I found that this machine seemed that size of pointer is 4 byte [2],\nwhereas sizeof(uint64) is 8.\n\n```\nchecking size of void *... (cached) 4\n```\n\nI could not test because I do not have NetBSD, but I have come up with\nFollowing solution to avoid the failure. sizeof(uintptr_t) will be addressed\nbased on the environment. How do you think?\n\n```\ndiff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c\nindex a13ec16b32..bb7347cb0c 100644\n--- a/src/interfaces/libpq/fe-connect.c\n+++ b/src/interfaces/libpq/fe-connect.c\n@@ -1045,7 +1045,7 @@ libpq_prng_init(PGconn *conn)\n \n gettimeofday(&tval, NULL);\n \n- rseed = ((uint64) conn) ^\n+ rseed = ((uintptr_t) conn) ^\n ((uint64) getpid()) ^\n ((uint64) tval.tv_usec) ^\n ((uint64) tval.tv_sec);\n```\n\n[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2023-03-29%2023%3A24%3A44\n[2]: https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=mamba&dt=2023-03-29%2023%3A24%3A44&stg=configure\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Thu, 30 Mar 2023 01:48:29 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "> On 30 Mar 2023, at 03:48, Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com> wrote:\n\n> While checking the buildfarm, I found a failure on NetBSD caused by the added code[1]:\n\nThanks for reporting, I see that lapwing which runs Linux (Debian 7, gcc 4.7.2)\nhas the same error. I'll look into it today to get a fix committed.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 30 Mar 2023 09:02:15 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 3:03 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 30 Mar 2023, at 03:48, Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com> wrote:\n>\n> > While checking the buildfarm, I found a failure on NetBSD caused by the added code[1]:\n>\n> Thanks for reporting, I see that lapwing which runs Linux (Debian 7, gcc 4.7.2)\n> has the same error. I'll look into it today to get a fix committed.\n\nThis is an i686 machine, so it probably has the same void *\ndifference. Building with -m32 might be enough to reproduce the\nproblem.\n\n\n",
"msg_date": "Thu, 30 Mar 2023 16:00:29 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "> On 30 Mar 2023, at 10:00, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> On Thu, Mar 30, 2023 at 3:03 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 30 Mar 2023, at 03:48, Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com> wrote:\n>> \n>>> While checking the buildfarm, I found a failure on NetBSD caused by the added code[1]:\n>> \n>> Thanks for reporting, I see that lapwing which runs Linux (Debian 7, gcc 4.7.2)\n>> has the same error. I'll look into it today to get a fix committed.\n> \n> This is an i686 machine, so it probably has the same void *\n> difference. Building with -m32 might be enough to reproduce the\n> problem.\n\nMakes sense. I think the best option is to simply remove conn from being part\nof the seed and rely on the other values. Will apply that after a testrun.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 30 Mar 2023 10:21:56 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
},
{
"msg_contents": "> On 30 Mar 2023, at 10:21, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 30 Mar 2023, at 10:00, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> \n>> On Thu, Mar 30, 2023 at 3:03 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> \n>>>> On 30 Mar 2023, at 03:48, Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com> wrote:\n>>> \n>>>> While checking the buildfarm, I found a failure on NetBSD caused by the added code[1]:\n>>> \n>>> Thanks for reporting, I see that lapwing which runs Linux (Debian 7, gcc 4.7.2)\n>>> has the same error. I'll look into it today to get a fix committed.\n>> \n>> This is an i686 machine, so it probably has the same void *\n>> difference. Building with -m32 might be enough to reproduce the\n>> problem.\n> \n> Makes sense. I think the best option is to simply remove conn from being part\n> of the seed and rely on the other values. Will apply that after a testrun.\n\nAfter some offlist discussion I ended up pushing the proposed uintptr_t fix\ninstead, now waiting for these animals to build.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 30 Mar 2023 11:49:11 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Support load balancing in libpq"
}
] |
[
{
"msg_contents": "I see we removed \"plpythonu\" when we removed \"plpython2u\" in PG 15. Is\nthere a good reason for that? We don't have the software version number\nin other server-side language names, as far as I know. We added\n\"plpython2u\" when we were adding \"plpython3u\", but now that we have\nremoved \"plpython2u\", why not just use \"plpythonu\"? Do we need to\nremove \"plpythonu\" for a while until everyone has upgraded?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 10 Jun 2022 14:34:13 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Removing \"plpythonu\" in PG 15"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I see we removed \"plpythonu\" when we removed \"plpython2u\" in PG 15. Is\n> there a good reason for that?\n\nThere was extensive discussion of that in the relevant threads,\nbut basically (1) risk of confusion and (2) most python installations\nhave removed \"python\", not redefined it to mean \"python3\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Jun 2022 14:53:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Removing \"plpythonu\" in PG 15"
},
{
"msg_contents": "On Fri, Jun 10, 2022 at 02:53:22PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I see we removed \"plpythonu\" when we removed \"plpython2u\" in PG 15. Is\n> > there a good reason for that?\n> \n> There was extensive discussion of that in the relevant threads,\n> but basically (1) risk of confusion and (2) most python installations\n> have removed \"python\", not redefined it to mean \"python3\".\n\nOkay, just confirming, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 10 Jun 2022 15:01:41 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Removing \"plpythonu\" in PG 15"
}
] |
[
{
"msg_contents": "I have just got to the bottom of why the new subscription tests\n027_nosuperuser.pl and 029_on_error.pl have been failing for me - it's\nbecause my test setup has log_error_verbosity set to 'verbose'. Either\nwe should force log_error_verbosity to 'default' for these tests, or we\nshould make the regexes we're testing for more forgiving as in the attached.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 11 Jun 2022 14:08:54 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Subscription tests vs log_error_verbosity"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I have just got to the bottom of why the new subscription tests\n> 027_nosuperuser.pl and 029_on_error.pl have been failing for me - it's\n> because my test setup has log_error_verbosity set to 'verbose'. Either\n> we should force log_error_verbosity to 'default' for these tests, or we\n> should make the regexes we're testing for more forgiving as in the attached.\n\n+1 for the second answer. I don't like forcing parameter settings\nthat we don't absolutely have to --- it reduces our test coverage.\n(Admittedly, changing log_error_verbosity in particular is probably\nnot giving up much coverage, but as a general principle it's bad.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Jun 2022 14:52:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests vs log_error_verbosity"
},
{
"msg_contents": "\nOn 2022-06-11 Sa 14:52, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I have just got to the bottom of why the new subscription tests\n>> 027_nosuperuser.pl and 029_on_error.pl have been failing for me - it's\n>> because my test setup has log_error_verbosity set to 'verbose'. Either\n>> we should force log_error_verbosity to 'default' for these tests, or we\n>> should make the regexes we're testing for more forgiving as in the attached.\n> +1 for the second answer. I don't like forcing parameter settings\n> that we don't absolutely have to --- it reduces our test coverage.\n> (Admittedly, changing log_error_verbosity in particular is probably\n> not giving up much coverage, but as a general principle it's bad.)\n>\n> \t\t\t\n\n\nYeah, Done that way.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 12 Jun 2022 10:19:23 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Subscription tests vs log_error_verbosity"
}
] |
[
{
"msg_contents": "Hi,\n\nI've noticed that JIT performance counter generation_counter seems to include\nactions, relevant for both jit_expressions and jit_tuple_deforming options. It\nmeans one can't directly see what is the influence of jit_tuple_deforming\nalone, which would be helpful when adjusting JIT options. To make it better a\nnew counter can be introduced, does it make sense?",
"msg_date": "Sun, 12 Jun 2022 11:12:53 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "[RFC] Add jit deform_counter"
},
{
"msg_contents": "2022年6月12日(日) 18:14 Dmitry Dolgov <9erthalion6@gmail.com>:\n>\n> Hi,\n>\n> I've noticed that JIT performance counter generation_counter seems to include\n> actions, relevant for both jit_expressions and jit_tuple_deforming options. It\n> means one can't directly see what is the influence of jit_tuple_deforming\n> alone, which would be helpful when adjusting JIT options. To make it better a\n> new counter can be introduced, does it make sense?\n\nHi Pavel\n\nI see you are added as reviewer in the CF app; have you been able to take a look\nat this?\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Sun, 11 Dec 2022 09:14:42 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
    "msg_contents": "Hi\n\nne 11. 12. 2022 v 1:14 odesílatel Ian Lawrence Barwick <barwick@gmail.com>\nnapsal:\n\n> 2022年6月12日(日) 18:14 Dmitry Dolgov <9erthalion6@gmail.com>:\n> >\n> > Hi,\n> >\n> > I've noticed that JIT performance counter generation_counter seems to\n> include\n> > actions, relevant for both jit_expressions and jit_tuple_deforming\n> options. It\n> > means one can't directly see what is the influence of jit_tuple_deforming\n> > alone, which would be helpful when adjusting JIT options. To make it\n> better a\n> > new counter can be introduced, does it make sense?\n>\n> Hi Pavel\n>\n> I see you are added as reviewer in the CF app; have you been able to take\n> a look\n> at this?\n>\n\nI hope so yes\n\nRegards\n\nPavel\n\n\n>\n> Regards\n>\n> Ian Barwick\n>\n",
"msg_date": "Sun, 11 Dec 2022 05:44:54 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
    "msg_contents": "Hi\n\n\nne 11. 12. 2022 v 5:44 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> ne 11. 12. 2022 v 1:14 odesílatel Ian Lawrence Barwick <barwick@gmail.com>\n> napsal:\n>\n>> 2022年6月12日(日) 18:14 Dmitry Dolgov <9erthalion6@gmail.com>:\n>> >\n>> > Hi,\n>> >\n>> > I've noticed that JIT performance counter generation_counter seems to\n>> include\n>> > actions, relevant for both jit_expressions and jit_tuple_deforming\n>> options. It\n>> > means one can't directly see what is the influence of\n>> jit_tuple_deforming\n>> > alone, which would be helpful when adjusting JIT options. To make it\n>> better a\n>> > new counter can be introduced, does it make sense?\n>>\n>> Hi Pavel\n>>\n>> I see you are added as reviewer in the CF app; have you been able to take\n>> a look\n>> at this?\n>>\n>\n> I hope so yes\n>\n\nthere are some problems with stability of regress tests\n\nhttp://cfbot.cputube.org/dmitry-dolgov.html\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> Regards\n>>\n>> Ian Barwick\n>>\n>\n",
"msg_date": "Sun, 25 Dec 2022 18:55:02 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "> On Sun, Dec 25, 2022 at 06:55:02PM +0100, Pavel Stehule wrote:\n> there are some problems with stability of regress tests\n>\n> http://cfbot.cputube.org/dmitry-dolgov.html\n\nLooks like this small change predates moving to meson, the attached\nversion should help.",
"msg_date": "Mon, 2 Jan 2023 17:55:50 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "Hi\r\n\r\n\r\npo 2. 1. 2023 v 17:55 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\r\nnapsal:\r\n\r\n> > On Sun, Dec 25, 2022 at 06:55:02PM +0100, Pavel Stehule wrote:\r\n> > there are some problems with stability of regress tests\r\n> >\r\n> > http://cfbot.cputube.org/dmitry-dolgov.html\r\n>\r\n> Looks like this small change predates moving to meson, the attached\r\n> version should help.\r\n>\r\n\r\nThe explain part is working, the part of pg_stat_statements doesn't\r\n\r\nset jit_above_cost to 10;\r\nset jit_optimize_above_cost to 10;\r\nset jit_inline_above_cost to 10;\r\n\r\n(2023-01-06 09:08:59) postgres=# explain analyze select\r\ncount(length(prosrc) > 0) from pg_proc;\r\n┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n│ Aggregate (cost=154.10..154.11 rows=1 width=8) (actual\r\ntime=132.320..132.321 rows=1 loops=1) │\r\n│ -> Seq Scan on pg_proc (cost=0.00..129.63 rows=3263 width=16) (actual\r\ntime=0.013..0.301 rows=3266 loops=1) │\r\n│ Planning Time: 0.070 ms\r\n │\r\n│ JIT:\r\n │\r\n│ Functions: 3\r\n │\r\n│ Options: Inlining true, Optimization true, Expressions true, Deforming\r\ntrue │\r\n│ Timing: Generation 0.597 ms, Deforming 0.407 ms, Inlining 8.943 ms,\r\nOptimization 79.403 ms, Emission 43.091 ms, Total 132.034 ms │\r\n│ Execution Time: 132.986 ms\r\n │\r\n└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(8 rows)\r\n\r\nI see the result of deforming in explain analyze, but related values in\r\npg_stat_statements are 0.\r\n\r\nMinimally, the values are assigned in wrong order\r\n\r\n+ if (api_version >= PGSS_V1_11)\r\n+ {\r\n+ values[i++] = Float8GetDatumFast(tmp.jit_deform_time);\r\n+ values[i++] 
= Int64GetDatumFast(tmp.jit_deform_count);\r\n+ }\r\n\r\nAfter reading the doc, I am confused what this metric means\r\n\r\n+ <row>\r\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n+ <structfield>jit_deform_count</structfield> <type>bigint</type>\r\n+ </para>\r\n+ <para>\r\n+ Number of times tuples have been deformed\r\n+ </para></entry>\r\n+ </row>\r\n+\r\n+ <row>\r\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n+ <structfield>jit_deform_time</structfield> <type>double\r\nprecision</type>\r\n+ </para>\r\n+ <para>\r\n+ Total time spent by the statement on deforming tuples, in\r\nmilliseconds\r\n+ </para></entry>\r\n+ </row>\r\n\r\nIt is not clean so these times and these numbers are related just to the\r\ncompilation of the deforming process, not by own deforming.\r\n\r\nRegards\r\n\r\nPavel\r\n",
"msg_date": "Fri, 6 Jan 2023 09:42:09 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "> On Fri, Jan 06, 2023 at 09:42:09AM +0100, Pavel Stehule wrote:\n> The explain part is working, the part of pg_stat_statements doesn't\n>\n> set jit_above_cost to 10;\n> set jit_optimize_above_cost to 10;\n> set jit_inline_above_cost to 10;\n>\n> (2023-01-06 09:08:59) postgres=# explain analyze select\n> count(length(prosrc) > 0) from pg_proc;\n> ┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n> │ QUERY PLAN\n> │\n> ╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\n> │ Aggregate (cost=154.10..154.11 rows=1 width=8) (actual\n> time=132.320..132.321 rows=1 loops=1) │\n> │ -> Seq Scan on pg_proc (cost=0.00..129.63 rows=3263 width=16) (actual\n> time=0.013..0.301 rows=3266 loops=1) │\n> │ Planning Time: 0.070 ms\n> │\n> │ JIT:\n> │\n> │ Functions: 3\n> │\n> │ Options: Inlining true, Optimization true, Expressions true, Deforming\n> true │\n> │ Timing: Generation 0.597 ms, Deforming 0.407 ms, Inlining 8.943 ms,\n> Optimization 79.403 ms, Emission 43.091 ms, Total 132.034 ms │\n> │ Execution Time: 132.986 ms\n> │\n> └────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n> (8 rows)\n>\n> I see the result of deforming in explain analyze, but related values in\n> pg_stat_statements are 0.\n\nI'm not sure why, but pgss jit metrics are always nulls for explain\nanalyze queries. I have noticed this with surprise myself, when recently\nwas reviewing the lazy jit patch, but haven't yet figure out what is the\nreason. 
Anyway, without \"explain analyze\" you'll get correct deforming\nnumbers in pgss.\n\n> Minimally, the values are assigned in wrong order\n>\n> + if (api_version >= PGSS_V1_11)\n> + {\n> + values[i++] = Float8GetDatumFast(tmp.jit_deform_time);\n> + values[i++] = Int64GetDatumFast(tmp.jit_deform_count);\n> + }\n\n(facepalm) Yep, will fix the order.\n\n> After reading the doc, I am confused what this metric means\n>\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>jit_deform_count</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + Number of times tuples have been deformed\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>jit_deform_time</structfield> <type>double\n> precision</type>\n> + </para>\n> + <para>\n> + Total time spent by the statement on deforming tuples, in\n> milliseconds\n> + </para></entry>\n> + </row>\n>\n> It is not clean so these times and these numbers are related just to the\n> compilation of the deforming process, not by own deforming.\n\nGood point, I need to formulate this more clearly.\n\n\n",
"msg_date": "Sat, 7 Jan 2023 16:47:05 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "so 7. 1. 2023 v 16:48 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\r\nnapsal:\r\n\r\n> > On Fri, Jan 06, 2023 at 09:42:09AM +0100, Pavel Stehule wrote:\r\n> > The explain part is working, the part of pg_stat_statements doesn't\r\n> >\r\n> > set jit_above_cost to 10;\r\n> > set jit_optimize_above_cost to 10;\r\n> > set jit_inline_above_cost to 10;\r\n> >\r\n> > (2023-01-06 09:08:59) postgres=# explain analyze select\r\n> > count(length(prosrc) > 0) from pg_proc;\r\n> >\r\n> ┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> > │ QUERY PLAN\r\n> > │\r\n> >\r\n> ╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n> > │ Aggregate (cost=154.10..154.11 rows=1 width=8) (actual\r\n> > time=132.320..132.321 rows=1 loops=1)\r\n> │\r\n> > │ -> Seq Scan on pg_proc (cost=0.00..129.63 rows=3263 width=16)\r\n> (actual\r\n> > time=0.013..0.301 rows=3266 loops=1) │\r\n> > │ Planning Time: 0.070 ms\r\n> > │\r\n> > │ JIT:\r\n> > │\r\n> > │ Functions: 3\r\n> > │\r\n> > │ Options: Inlining true, Optimization true, Expressions true,\r\n> Deforming\r\n> > true │\r\n> > │ Timing: Generation 0.597 ms, Deforming 0.407 ms, Inlining 8.943 ms,\r\n> > Optimization 79.403 ms, Emission 43.091 ms, Total 132.034 ms │\r\n> > │ Execution Time: 132.986 ms\r\n> > │\r\n> >\r\n> └────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> > (8 rows)\r\n> >\r\n> > I see the result of deforming in explain analyze, but related values in\r\n> > pg_stat_statements are 0.\r\n>\r\n> I'm not sure why, but pgss jit metrics are always nulls for explain\r\n> analyze queries. I have noticed this with surprise myself, when recently\r\n> was reviewing the lazy jit patch, but haven't yet figure out what is the\r\n> reason. 
Anyway, without \"explain analyze\" you'll get correct deforming\r\n> numbers in pgss.\r\n>\r\n\r\nIt was really strange, because I tested the queries without EXPLAIN ANALYZE\r\ntoo, and new columns were always zero on my comp. Other jit columns were\r\nfilled. But I didn't do a deeper investigation.\r\n\r\n\r\n\r\n> > Minimally, the values are assigned in wrong order\r\n> >\r\n> > + if (api_version >= PGSS_V1_11)\r\n> > + {\r\n> > + values[i++] = Float8GetDatumFast(tmp.jit_deform_time);\r\n> > + values[i++] = Int64GetDatumFast(tmp.jit_deform_count);\r\n> > + }\r\n>\r\n> (facepalm) Yep, will fix the order.\r\n>\r\n> > After reading the doc, I am confused what this metric means\r\n> >\r\n> > + <row>\r\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> > + <structfield>jit_deform_count</structfield> <type>bigint</type>\r\n> > + </para>\r\n> > + <para>\r\n> > + Number of times tuples have been deformed\r\n> > + </para></entry>\r\n> > + </row>\r\n> > +\r\n> > + <row>\r\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> > + <structfield>jit_deform_time</structfield> <type>double\r\n> > precision</type>\r\n> > + </para>\r\n> > + <para>\r\n> > + Total time spent by the statement on deforming tuples, in\r\n> > milliseconds\r\n> > + </para></entry>\r\n> > + </row>\r\n> >\r\n> > It is not clean so these times and these numbers are related just to the\r\n> > compilation of the deforming process, not by own deforming.\r\n>\r\n> Good point, I need to formulate this more clearly.\r\n>\r\n",
"msg_date": "Sat, 7 Jan 2023 19:09:11 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "> On Sat, Jan 07, 2023 at 07:09:11PM +0100, Pavel Stehule wrote:\n> so 7. 1. 2023 v 16:48 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\n> napsal:\n>\n> > > On Fri, Jan 06, 2023 at 09:42:09AM +0100, Pavel Stehule wrote:\n> > > The explain part is working, the part of pg_stat_statements doesn't\n> > >\n> > > set jit_above_cost to 10;\n> > > set jit_optimize_above_cost to 10;\n> > > set jit_inline_above_cost to 10;\n> > >\n> > > (2023-01-06 09:08:59) postgres=# explain analyze select\n> > > count(length(prosrc) > 0) from pg_proc;\n> > >\n> > ┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n> > > │ QUERY PLAN\n> > > │\n> > >\n> > ╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\n> > > │ Aggregate (cost=154.10..154.11 rows=1 width=8) (actual\n> > > time=132.320..132.321 rows=1 loops=1)\n> > │\n> > > │ -> Seq Scan on pg_proc (cost=0.00..129.63 rows=3263 width=16)\n> > (actual\n> > > time=0.013..0.301 rows=3266 loops=1) │\n> > > │ Planning Time: 0.070 ms\n> > > │\n> > > │ JIT:\n> > > │\n> > > │ Functions: 3\n> > > │\n> > > │ Options: Inlining true, Optimization true, Expressions true,\n> > Deforming\n> > > true │\n> > > │ Timing: Generation 0.597 ms, Deforming 0.407 ms, Inlining 8.943 ms,\n> > > Optimization 79.403 ms, Emission 43.091 ms, Total 132.034 ms │\n> > > │ Execution Time: 132.986 ms\n> > > │\n> > >\n> > └────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n> > > (8 rows)\n> > >\n> > > I see the result of deforming in explain analyze, but related values in\n> > > pg_stat_statements are 0.\n> >\n> > I'm not sure why, but pgss jit metrics are always nulls for explain\n> > analyze queries. 
I have noticed this with surprise myself, when recently\n> > was reviewing the lazy jit patch, but haven't yet figure out what is the\n> > reason. Anyway, without \"explain analyze\" you'll get correct deforming\n> > numbers in pgss.\n> >\n>\n> It was really strange, because I tested the queries without EXPLAIN ANALYZE\n> too, and new columns were always zero on my comp. Other jit columns were\n> filled. But I didn't do a deeper investigation.\n\nInteresting. I've verified it once more with the query and the\nparameters you've posted, got the following:\n\n jit_functions | 3\n jit_generation_time | 1.257522\n jit_deform_count | 1\n jit_deform_time | 10.381345\n jit_inlining_count | 1\n jit_inlining_time | 71.628168\n jit_optimization_count | 1\n jit_optimization_time | 48.146447\n jit_emission_count | 1\n jit_emission_time | 0.737822\n\nMaybe there is anything else special about how you run it?\n\nOtherwise addressed the rest of commentaries, thanks.",
"msg_date": "Sun, 8 Jan 2023 11:56:01 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "ne 8. 1. 2023 v 11:57 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\r\nnapsal:\r\n\r\n> > On Sat, Jan 07, 2023 at 07:09:11PM +0100, Pavel Stehule wrote:\r\n> > so 7. 1. 2023 v 16:48 odesílatel Dmitry Dolgov <9erthalion6@gmail.com>\r\n> > napsal:\r\n> >\r\n> > > > On Fri, Jan 06, 2023 at 09:42:09AM +0100, Pavel Stehule wrote:\r\n> > > > The explain part is working, the part of pg_stat_statements doesn't\r\n> > > >\r\n> > > > set jit_above_cost to 10;\r\n> > > > set jit_optimize_above_cost to 10;\r\n> > > > set jit_inline_above_cost to 10;\r\n> > > >\r\n> > > > (2023-01-06 09:08:59) postgres=# explain analyze select\r\n> > > > count(length(prosrc) > 0) from pg_proc;\r\n> > > >\r\n> > >\r\n> ┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> > > > │ QUERY\r\n> PLAN\r\n> > > > │\r\n> > > >\r\n> > >\r\n> ╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n> > > > │ Aggregate (cost=154.10..154.11 rows=1 width=8) (actual\r\n> > > > time=132.320..132.321 rows=1 loops=1)\r\n> > > │\r\n> > > > │ -> Seq Scan on pg_proc (cost=0.00..129.63 rows=3263 width=16)\r\n> > > (actual\r\n> > > > time=0.013..0.301 rows=3266 loops=1) │\r\n> > > > │ Planning Time: 0.070 ms\r\n> > > > │\r\n> > > > │ JIT:\r\n> > > > │\r\n> > > > │ Functions: 3\r\n> > > > │\r\n> > > > │ Options: Inlining true, Optimization true, Expressions true,\r\n> > > Deforming\r\n> > > > true │\r\n> > > > │ Timing: Generation 0.597 ms, Deforming 0.407 ms, Inlining 8.943\r\n> ms,\r\n> > > > Optimization 79.403 ms, Emission 43.091 ms, Total 132.034 ms │\r\n> > > > │ Execution Time: 132.986 ms\r\n> > > > │\r\n> > > >\r\n> > >\r\n> └────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> > > > (8 rows)\r\n> > > >\r\n> > > > I see the result of 
deforming in explain analyze, but related values\r\n> in\r\n> > > > pg_stat_statements are 0.\r\n> > >\r\n> > > I'm not sure why, but pgss jit metrics are always nulls for explain\r\n> > > analyze queries. I have noticed this with surprise myself, when\r\n> recently\r\n> > > was reviewing the lazy jit patch, but haven't yet figure out what is\r\n> the\r\n> > > reason. Anyway, without \"explain analyze\" you'll get correct deforming\r\n> > > numbers in pgss.\r\n> > >\r\n> >\r\n> > It was really strange, because I tested the queries without EXPLAIN\r\n> ANALYZE\r\n> > too, and new columns were always zero on my comp. Other jit columns were\r\n> > filled. But I didn't do a deeper investigation.\r\n>\r\n> Interesting. I've verified it once more with the query and the\r\n> parameters you've posted, got the following:\r\n>\r\n> jit_functions | 3\r\n> jit_generation_time | 1.257522\r\n> jit_deform_count | 1\r\n> jit_deform_time | 10.381345\r\n> jit_inlining_count | 1\r\n> jit_inlining_time | 71.628168\r\n> jit_optimization_count | 1\r\n> jit_optimization_time | 48.146447\r\n> jit_emission_count | 1\r\n> jit_emission_time | 0.737822\r\n>\r\n> Maybe there is anything else special about how you run it?\r\n>\r\n\r\nI hope not, but I'll see. I recheck updated patch\r\n\r\n\r\n\r\n>\r\n> Otherwise addressed the rest of commentaries, thanks.\r\n>\r\n",
"msg_date": "Sun, 8 Jan 2023 12:00:12 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "Hi\r\n\r\n> > I'm not sure why, but pgss jit metrics are always nulls for explain\r\n>> > > analyze queries. I have noticed this with surprise myself, when\r\n>> recently\r\n>> > > was reviewing the lazy jit patch, but haven't yet figure out what is\r\n>> the\r\n>> > > reason. Anyway, without \"explain analyze\" you'll get correct deforming\r\n>> > > numbers in pgss.\r\n>> > >\r\n>\r\n>\r\n>\r\nIt is working although I am not sure if it is correctly\r\n\r\nwhen I run EXPLAIN ANALYZE for query `explain analyze select\r\ncount(length(prosrc) > 0) from pg_proc;`\r\n\r\nI got plan and times\r\n\r\n┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n│ Aggregate (cost=154.10..154.11 rows=1 width=8) (actual\r\ntime=134.450..134.451 rows=1 loops=1)\r\n│\r\n│ -> Seq Scan on pg_proc (cost=0.00..129.63 rows=3263 width=16) (actual\r\ntime=0.013..0.287 rows=3266 loops=1) │\r\n│ Planning Time: 0.088 ms\r\n │\r\n│ JIT:\r\n │\r\n│ Functions: 3\r\n │\r\n│ Options: Inlining true, Optimization true, Expressions true, Deforming\r\ntrue │\r\n│ Timing: Generation 0.631 ms, Deforming 0.396 ms, Inlining 10.026 ms,\r\nOptimization 78.608 ms, Emission 44.915 ms, Total 134.181 ms │\r\n│ Execution Time: 135.173 ms\r\n │\r\n└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(8 rows)\r\n\r\n Deforming is 0.396ms\r\n\r\nWhen I run mentioned query, and when I look to pg_stat_statements table, I\r\nsee different times\r\n\r\ndeforming is about 10ms\r\n\r\nwal_bytes │ 0\r\njit_functions │ 9\r\njit_generation_time │ 1.9040409999999999\r\njit_deform_count │ 3\r\njit_deform_time │ 36.395131\r\njit_inlining_count │ 3\r\njit_inlining_time │ 
256.104205\r\njit_optimization_count │ 3\r\njit_optimization_time │ 132.45361300000002\r\njit_emission_count │ 3\r\njit_emission_time │ 1.210633\r\n\r\ncounts are correct, but times are strange - there is not consistency with\r\nvalues from EXPLAIN\r\n\r\nWhen I run this query on master, the values are correct\r\n\r\n jit_functions │ 6\r\n jit_generation_time │ 1.350521\r\n jit_inlining_count │ 2\r\n jit_inlining_time │ 24.018382000000003\r\n jit_optimization_count │ 2\r\n jit_optimization_time │ 173.405792\r\n jit_emission_count │ 2\r\n jit_emission_time │ 91.226655\r\n────────────────────────┴───────────────────\r\n\r\n│ JIT:\r\n │\r\n│ Functions: 3\r\n │\r\n│ Options: Inlining true, Optimization true, Expressions true, Deforming\r\ntrue │\r\n│ Timing: Generation 0.636 ms, Inlining 9.309 ms, Optimization 89.653 ms,\r\nEmission 45.812 ms, Total 145.410 ms │\r\n│ Execution Time: 146.410 ms\r\n │\r\n└────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n\r\nRegards\r\n\r\nPavel\r\n",
"msg_date": "Sun, 8 Jan 2023 21:06:33 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "> On Sun, Jan 08, 2023 at 09:06:33PM +0100, Pavel Stehule wrote:\n> It is working although I am not sure if it is correctly\n>\n> when I run EXPLAIN ANALYZE for query `explain analyze select\n> count(length(prosrc) > 0) from pg_proc;`\n>\n> I got plan and times\n>\n> ┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n> │ QUERY PLAN\n> │\n> ╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\n> │ Aggregate (cost=154.10..154.11 rows=1 width=8) (actual\n> time=134.450..134.451 rows=1 loops=1)\n> │\n> │ -> Seq Scan on pg_proc (cost=0.00..129.63 rows=3263 width=16) (actual\n> time=0.013..0.287 rows=3266 loops=1) │\n> │ Planning Time: 0.088 ms\n> │\n> │ JIT:\n> │\n> │ Functions: 3\n> │\n> │ Options: Inlining true, Optimization true, Expressions true, Deforming\n> true │\n> │ Timing: Generation 0.631 ms, Deforming 0.396 ms, Inlining 10.026 ms,\n> Optimization 78.608 ms, Emission 44.915 ms, Total 134.181 ms │\n> │ Execution Time: 135.173 ms\n> │\n> └─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n> (8 rows)\n>\n> Deforming is 0.396ms\n>\n> When I run mentioned query, and when I look to pg_stat_statements table, I\n> see different times\n>\n> deforming is about 10ms\n>\n> wal_bytes │ 0\n> jit_functions │ 9\n> jit_generation_time │ 1.9040409999999999\n> jit_deform_count │ 3\n> jit_deform_time │ 36.395131\n> jit_inlining_count │ 3\n> jit_inlining_time │ 256.104205\n> jit_optimization_count │ 3\n> jit_optimization_time │ 132.45361300000002\n> jit_emission_count │ 3\n> jit_emission_time │ 1.210633\n>\n> counts are correct, but times are strange - there is not consistency with\n> values from EXPLAIN\n>\n> When I run this query on master, the values are correct\n>\n> jit_functions │ 6\n> jit_generation_time │ 
1.350521\n> jit_inlining_count │ 2\n> jit_inlining_time │ 24.018382000000003\n> jit_optimization_count │ 2\n> jit_optimization_time │ 173.405792\n> jit_emission_count │ 2\n> jit_emission_time │ 91.226655\n> ────────────────────────┴───────────────────\n>\n> │ JIT:\n> │\n> │ Functions: 3\n> │\n> │ Options: Inlining true, Optimization true, Expressions true, Deforming\n> true │\n> │ Timing: Generation 0.636 ms, Inlining 9.309 ms, Optimization 89.653 ms,\n> Emission 45.812 ms, Total 145.410 ms │\n> │ Execution Time: 146.410 ms\n> │\n> └────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n\nThanks for noticing. Similarly to the previous issue, the order of\ncolumns was incorrect -- deform counters have to be the last columns in\nthe view.",
"msg_date": "Sun, 15 Jan 2023 14:57:37 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "Hi\n\n\nThanks for noticing. Similarly to the previous issue, the order of\n> columns was incorrect -- deform counters have to be the last columns in\n> the view.\n>\n\nI tested it and now looks well\n\ncheck-world passed\nmake doc passed\n\nI mark this patch as ready for committer\n\nRegards\n\nPavel",
"msg_date": "Sun, 15 Jan 2023 17:47:00 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "On Sun, 12 Jun 2022 at 21:14, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> I've noticed that JIT performance counter generation_counter seems to include\n> actions, relevant for both jit_expressions and jit_tuple_deforming options. It\n> means one can't directly see what is the influence of jit_tuple_deforming\n> alone, which would be helpful when adjusting JIT options. To make it better a\n> new counter can be introduced, does it make sense?\n\nI'm not so sure about this idea. As of now, if I look at EXPLAIN\nANALYZE's JIT summary, the individual times add up to the total time.\n\nIf we add this deform time, then that's no longer going to be true as\nthe \"Generation\" time includes the newly added deform time.\n\nmaster:\n JIT:\n Functions: 600\n Options: Inlining false, Optimization false, Expressions true, Deforming true\n Timing: Generation 37.758 ms, Inlining 0.000 ms, Optimization 6.736\nms, Emission 172.244 ms, Total 216.738 ms\n\n37.758 + 6.736 + 172.244 = 216.738\n\nI think if I was a DBA wondering why JIT was taking so long, I'd\nprobably either be very astonished or I'd report a bug if I noticed\nthat all the individual component JIT times didn't add up to the total\ntime.\n\nI don't think the solution is to subtract the deform time from the\ngeneration time either.\n\nCan't users just get this by looking at EXPLAIN ANALYZE with and\nwithout jit_tuple_deforming?\n\nDavid\n\n\n",
"msg_date": "Wed, 29 Mar 2023 13:50:37 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "> On Wed, Mar 29, 2023 at 01:50:37PM +1300, David Rowley wrote:\n> On Sun, 12 Jun 2022 at 21:14, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> > I've noticed that JIT performance counter generation_counter seems to include\n> > actions, relevant for both jit_expressions and jit_tuple_deforming options. It\n> > means one can't directly see what is the influence of jit_tuple_deforming\n> > alone, which would be helpful when adjusting JIT options. To make it better a\n> > new counter can be introduced, does it make sense?\n>\n> I'm not so sure about this idea. As of now, if I look at EXPLAIN\n> ANALYZE's JIT summary, the individual times add up to the total time.\n>\n> If we add this deform time, then that's no longer going to be true as\n> the \"Generation\" time includes the newly added deform time.\n>\n> master:\n> JIT:\n> Functions: 600\n> Options: Inlining false, Optimization false, Expressions true, Deforming true\n> Timing: Generation 37.758 ms, Inlining 0.000 ms, Optimization 6.736\n> ms, Emission 172.244 ms, Total 216.738 ms\n>\n> 37.758 + 6.736 + 172.244 = 216.738\n>\n> I think if I was a DBA wondering why JIT was taking so long, I'd\n> probably either be very astonished or I'd report a bug if I noticed\n> that all the individual component JIT times didn't add up to the total\n> time.\n>\n> I don't think the solution is to subtract the deform time from the\n> generation time either.\n>\n> Can't users just get this by looking at EXPLAIN ANALYZE with and\n> without jit_tuple_deforming?\n\nIt could be done this way, but then users need to know that tuple\ndeforming is included into generation time (I've skimmed through the\ndocs, there seems to be no direct statements about that, although it\ncould be guessed). At the same time I don't think it's very\nuser-friendly approach -- after all it could be the same for other\ntimings, i.e. 
only one counter for all JIT operations present,\nexpecting users to experiment how would it change if this or that option\nwill be different.\n\nI agree about adding up to the total time though. What about changing\nthe format to something like this?\n\n Options: Inlining false, Optimization false, Expressions true, Deforming true\n Timing: Generation 37.758 ms (Deforming 1.234 ms), Inlining 0.000 ms, Optimization 6.736 ms, Emission 172.244 ms, Total 216.738 ms\n\nThis way it doesn't look like deforming timing is in the same category\nas others, but rather a part of another value.\n\n\n",
"msg_date": "Fri, 31 Mar 2023 19:39:27 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "> On Fri, Mar 31, 2023 at 07:39:27PM +0200, Dmitry Dolgov wrote:\n> > On Wed, Mar 29, 2023 at 01:50:37PM +1300, David Rowley wrote:\n> > On Sun, 12 Jun 2022 at 21:14, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> > > I've noticed that JIT performance counter generation_counter seems to include\n> > > actions, relevant for both jit_expressions and jit_tuple_deforming options. It\n> > > means one can't directly see what is the influence of jit_tuple_deforming\n> > > alone, which would be helpful when adjusting JIT options. To make it better a\n> > > new counter can be introduced, does it make sense?\n> >\n> > I'm not so sure about this idea. As of now, if I look at EXPLAIN\n> > ANALYZE's JIT summary, the individual times add up to the total time.\n> >\n> > If we add this deform time, then that's no longer going to be true as\n> > the \"Generation\" time includes the newly added deform time.\n> >\n> > master:\n> > JIT:\n> > Functions: 600\n> > Options: Inlining false, Optimization false, Expressions true, Deforming true\n> > Timing: Generation 37.758 ms, Inlining 0.000 ms, Optimization 6.736\n> > ms, Emission 172.244 ms, Total 216.738 ms\n> >\n> > 37.758 + 6.736 + 172.244 = 216.738\n> >\n> > I think if I was a DBA wondering why JIT was taking so long, I'd\n> > probably either be very astonished or I'd report a bug if I noticed\n> > that all the individual component JIT times didn't add up to the total\n> > time.\n> >\n> > I don't think the solution is to subtract the deform time from the\n> > generation time either.\n> >\n> > Can't users just get this by looking at EXPLAIN ANALYZE with and\n> > without jit_tuple_deforming?\n>\n> It could be done this way, but then users need to know that tuple\n> deforming is included into generation time (I've skimmed through the\n> docs, there seems to be no direct statements about that, although it\n> could be guessed). 
At the same time I don't think it's very\n> user-friendly approach -- after all it could be the same for other\n> timings, i.e. only one counter for all JIT operations present,\n> expecting users to experiment how would it change if this or that option\n> will be different.\n>\n> I agree about adding up to the total time though. What about changing\n> the format to something like this?\n>\n> Options: Inlining false, Optimization false, Expressions true, Deforming true\n> Timing: Generation 37.758 ms (Deforming 1.234 ms), Inlining 0.000 ms, Optimization 6.736 ms, Emission 172.244 ms, Total 216.738 ms\n>\n> This way it doesn't look like deforming timing is in the same category\n> as others, but rather a part of another value.\n\nHere is the patch with the proposed variation.",
"msg_date": "Sat, 15 Apr 2023 16:40:57 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "> On 15 Apr 2023, at 16:40, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>> On Fri, Mar 31, 2023 at 07:39:27PM +0200, Dmitry Dolgov wrote:\n>>> On Wed, Mar 29, 2023 at 01:50:37PM +1300, David Rowley wrote:\n\nI had a look at this patch today and I agree that it would be good to give the\nuser an easier way to gain insights into this since we make it configurable.\n\n>>> If we add this deform time, then that's no longer going to be true as\n>>> the \"Generation\" time includes the newly added deform time.\n>>> \n>>> master:\n>>> JIT:\n>>> Functions: 600\n>>> Options: Inlining false, Optimization false, Expressions true, Deforming true\n>>> Timing: Generation 37.758 ms, Inlining 0.000 ms, Optimization 6.736\n>>> ms, Emission 172.244 ms, Total 216.738 ms\n>>> \n>>> 37.758 + 6.736 + 172.244 = 216.738\n>>> \n>>> I think if I was a DBA wondering why JIT was taking so long, I'd\n>>> probably either be very astonished or I'd report a bug if I noticed\n>>> that all the individual component JIT times didn't add up to the total\n>>> time.\n\nWhile true, the current EXPLAIN output for JIT isn't without confusing details\nas it is. The example above has \"Optimization false\" and \"Optimization 6.736\",\nand it takes reading the very last line on a docs page commenting on an example\nto understand why.\n\n>>> I don't think the solution is to subtract the deform time from the\n>>> generation time either.\n\nAgreed.\n\n>> I agree about adding up to the total time though. 
What about changing\n>> the format to something like this?\n>> \n>> Options: Inlining false, Optimization false, Expressions true, Deforming true\n>> Timing: Generation 37.758 ms (Deforming 1.234 ms), Inlining 0.000 ms, Optimization 6.736 ms, Emission 172.244 ms, Total 216.738 ms\n>> \n>> This way it doesn't look like deforming timing is in the same category\n>> as others, but rather a part of another value.\n\nI think this is a good trade-off, but the wording \"deforming\" makes it sound\nlike it's the act of tuple deforming and not that of compiling tuple deforming\ncode. I don't have too many better suggestions, but maybe \"Deform\" is enough\nto differentiate it?\n\n> Here is the patch with the proposed variation.\n\nThis version still leaves non-text EXPLAIN formats with timing which doesn't\nadd up. Below are JSON and XML examples:\n\n \"Timing\": { +\n \"Generation\": 0.564, +\n \"Deforming\": 0.111, +\n \"Inlining\": 0.000, +\n \"Optimization\": 0.358, +\n \"Emission\": 6.505, +\n \"Total\": 7.426 +\n } +\n\n <Timing> +\n <Generation>0.598</Generation> +\n <Deforming>0.117</Deforming> +\n <Inlining>0.000</Inlining> +\n <Optimization>0.367</Optimization> +\n <Emission>6.400</Emission> +\n <Total>7.365</Total> +\n </Timing> +\n\nIt's less obvious how the additional level of details should be represented\nhere.\n\n+ int64 jit_deform_count; /* number of times deform time has been >\n+ * 0 */\nWhile not a new problem with this patch, the comments on this struct yields\npretty awkward reflows by pgindent. I wonder if we should make a separate pass\nover this at some point to clean it up?\n\nThe patch also fails to update doc/src/sgml/jit.sgml with the new EXPLAIN\noutput.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 18 Jul 2023 15:32:43 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "> On Tue, Jul 18, 2023, 3:32 PM Daniel Gustafsson <daniel@yesql.se> wrote\n>> Here is the patch with the proposed variation.\n>\n> This version still leaves non-text EXPLAIN formats with timing which\ndoesn't\n> add up. Below are JSON and XML examples:\n\nGood point. For the structured formats it should be represented via a nested\nlevel. I'll try to do this and other proposed changes as soon as I'll get\nback.",
"msg_date": "Wed, 19 Jul 2023 17:18:29 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "> On Wed, Jul 19, 2023 at 05:18:29PM +0200, Dmitry Dolgov wrote:\n> > On Tue, Jul 18, 2023, 3:32 PM Daniel Gustafsson <daniel@yesql.se> wrote\n> >> Here is the patch with the proposed variation.\n> >\n> > This version still leaves non-text EXPLAIN formats with timing which\n> doesn't\n> > add up. Below are JSON and XML examples:\n>\n> Good point. For the structured formats it should be represented via a nested\n> level. I'll try to do this and other proposed changes as soon as I'll get\n> back.\n\nAnd here is it. The json version of EXPLAIN now looks like this:\n\n \"JIT\": {\n\t [...]\n \"Timing\": {\n \"Generation\": {\n \"Deform\": 0.000,\n \"Total\": 0.205\n },\n \"Inlining\": 0.065,\n \"Optimization\": 2.465,\n \"Emission\": 2.337,\n \"Total\": 5.072\n }\n },",
"msg_date": "Mon, 14 Aug 2023 16:36:42 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "> On 14 Aug 2023, at 16:36, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> And here is it. The json version of EXPLAIN now looks like this:\n> \n> \"JIT\": {\n> \t [...]\n> \"Timing\": {\n> \"Generation\": {\n> \"Deform\": 0.000,\n> \"Total\": 0.205\n> },\n> \"Inlining\": 0.065,\n> \"Optimization\": 2.465,\n> \"Emission\": 2.337,\n> \"Total\": 5.072\n> }\n> },\n\nI've gone over this version of the patch and I think it's ready to go in. I'm\nmarking this Ready for Committer and will go ahead with it shortly barring any\nobjections.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 5 Sep 2023 16:37:20 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "> On 5 Sep 2023, at 16:37, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> I've gone over this version of the patch and I think it's ready to go in. I'm\n> marking this Ready for Committer and will go ahead with it shortly barring any\n> objections.\n\nPushed, after another round of review with some minor fixes.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 8 Sep 2023 15:34:42 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "> On Fri, Sep 08, 2023 at 03:34:42PM +0200, Daniel Gustafsson wrote:\n> > On 5 Sep 2023, at 16:37, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > I've gone over this version of the patch and I think it's ready to go in. I'm\n> > marking this Ready for Committer and will go ahead with it shortly barring any\n> > objections.\n>\n> Pushed, after another round of review with some minor fixes.\n\nThanks!\n\n\n",
"msg_date": "Fri, 8 Sep 2023 15:45:27 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "Hi,\n\nOn Fri, 8 Sept 2023 at 20:22, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Fri, Sep 08, 2023 at 03:34:42PM +0200, Daniel Gustafsson wrote:\n> > > On 5 Sep 2023, at 16:37, Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > > I've gone over this version of the patch and I think it's ready to go in. I'm\n> > > marking this Ready for Committer and will go ahead with it shortly barring any\n> > > objections.\n> >\n> > Pushed, after another round of review with some minor fixes.\n\nI realized that pg_stat_statements is bumped to 1.11 with this patch\nbut oldextversions test is not updated. So, I attached a patch for\nupdating oldextversions.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Thu, 12 Oct 2023 16:37:36 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "> On 12 Oct 2023, at 15:37, Nazir Bilal Yavuz <byavuz81@gmail.com> wrote:\n> \n> Hi,\n> \n> On Fri, 8 Sept 2023 at 20:22, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>> \n>>> On Fri, Sep 08, 2023 at 03:34:42PM +0200, Daniel Gustafsson wrote:\n>>>> On 5 Sep 2023, at 16:37, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> \n>>>> I've gone over this version of the patch and I think it's ready to go in. I'm\n>>>> marking this Ready for Committer and will go ahead with it shortly barring any\n>>>> objections.\n>>> \n>>> Pushed, after another round of review with some minor fixes.\n> \n> I realized that pg_stat_statements is bumped to 1.11 with this patch\n> but oldextversions test is not updated. So, I attached a patch for\n> updating oldextversions.\n\nThanks for the patch, that was an oversight in the original commit for this.\nFrom a quick look it seems correct, I'll have another look later today and will\nthen apply it.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 12 Oct 2023 15:40:00 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Add jit deform_counter"
},
{
"msg_contents": "> On 12 Oct 2023, at 15:40, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 12 Oct 2023, at 15:37, Nazir Bilal Yavuz <byavuz81@gmail.com> wrote:\n\n>> I realized that pg_stat_statements is bumped to 1.11 with this patch\n>> but oldextversions test is not updated. So, I attached a patch for\n>> updating oldextversions.\n> \n> Thanks for the patch, that was an oversight in the original commit for this.\n> From a quick look it seems correct, I'll have another look later today and will\n> then apply it.\n\nApplied, thanks!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 13 Oct 2023 14:20:52 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Add jit deform_counter"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nIt's been a while since the last time I explored how PostgreSQL stores\nthe data on disk, so I decided to refresh my memory. All in all this\ntopic is well documented, but there is one question that I couldn't\nfind an answer to quickly.\n\n From README.HOT:\n\n> If an update changes any indexed column, or there is not room on the\n> same page for the new tuple, then the HOT chain ends: the last member\n> has a regular t_ctid link to the next version and is not marked\n> HEAP_HOT_UPDATED.\n\nSo t_ctid will point to the newer version of the tuple regardless of\nwhether HOT is used or not. But I couldn't find an answer to how\nt_ctid is used when a tuple is not a part of a HOT chain, or is the\nlast item in the chain. Which brings a question, maybe it shouldn't\ntake that much space on disk.\n\nProbably I missed something. Could you please point me to the document\nor comments that describe this topic? Or maybe we should add a brief\ncomment to HeapTupleHeaderData.t_ctid field and/or README.HOT that\nwould clarify this. For sure this could be learned from the code, but\nI believe clarifying this moment in the comments could simplify the\nlife of the newcomers a bit.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Sun, 12 Jun 2022 13:42:13 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Quick question regarding HeapTupleHeaderData.t_ctid"
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> So t_ctid will point to the newer version of the tuple regardless of\n> whether HOT is used or not. But I couldn't find an answer to how\n> t_ctid is used when a tuple is not a part of a HOT chain, or is the\n> last item in the chain.\n\nt_ctid points to the tuple itself if it's the latest version of its row.\n\n> Which brings a question, maybe it shouldn't\n> take that much space on disk.\n\nHow would you make it optional? In particular, what are you going to\nto when it's time to update a row (and therefore insert a ctid link)\nand the page is already completely full?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 12 Jun 2022 11:24:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Quick question regarding HeapTupleHeaderData.t_ctid"
},
{
"msg_contents": "Hi Tom,\n\n> > Which brings a question, maybe it shouldn't\n> > take that much space on disk.\n>\n> How would you make it optional? In particular, what are you going to\n> to when it's time to update a row (and therefore insert a ctid link)\n> and the page is already completely full?\n\nIn other words, if I have an ItemPointer to an old tuple and try to\nUPDATE it, t_ctid allows me to find the next page with another HOT\nchain and, if possible, add a new tuple to that HOT chain. And\nalthough there are newer versions of the tuple they are not\nnecessarily alive, e.g. if the corresponding transactions were\naborted, or they are running and it's not clear whether they will\nsucceed or not. I didn't think about this scenario.\n\nIt also explains why t_ctid can't be variable in size depending on\nwhether it points to a tuple in the same page or in the different one.\nNext time we change t_ctid its size may change which will require\nresizing the tuple, and the whole story becomes very complicated.\n\nI think I get it now. Many thanks!\n\nJust to clarify, is t_ctid used for anything _but_ HOT?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Sun, 12 Jun 2022 19:55:08 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: Quick question regarding HeapTupleHeaderData.t_ctid"
},
{
"msg_contents": "Hi again,\n\n> Just to clarify, is t_ctid used for anything _but_ HOT?\n\nApparently, I got carried away with HOT too much. htup_details.h\npretty much answers that it does.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Sun, 12 Jun 2022 20:10:34 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: Quick question regarding HeapTupleHeaderData.t_ctid"
}
] |
[
{
"msg_contents": "Hello!\n\nWe are seeing connection failures when using \"sslmode=require\" on forked\nconnections. Attached is example code that makes 2 passes. The first pass\nuses \"sslmode=disable\" and the second uses \"sslmode=require\". The first\npass completes successfully, but the second pass fails. I'm looking for\ninsight as to why this might be happening.\n\nNote: we are very aware of the dev notes about forking, however know that\nwe are not sharing the forked connection, we simply open the connection in\nthe parent thread and then pass that to the child thread to use.\n\nThank you for any insight,\n\n-Jim P.",
"msg_date": "Sun, 12 Jun 2022 10:05:59 -0400",
"msg_from": "Jim Popovitch <jim.popovitch@replatformtech.com>",
"msg_from_op": true,
"msg_subject": "connection failures on forked processes"
},
{
"msg_contents": "On Sun, Jun 12, 2022 at 10:05:59AM -0400, Jim Popovitch wrote:\n> We are seeing connection failures when using \"sslmode=require\" on forked\n> connections. Attached is example code that makes 2 passes. The first pass\n> uses \"sslmode=disable\" and the second uses \"sslmode=require\". The first\n> pass completes successfully, but the second pass fails. I'm looking for\n> insight as to why this might be happening.\n\nThe child's connection works fine, but the grandchild's connection doesn't.\n\nThe most obvious reason is that the first child exits which tears down the SSL\nconnection in a way that doesn't allow sending more data on it (maybe as a\ndeliberate security measure). \n\nYou'll see the same failure in both cases if you PQfinish() after PQclear().\n\n> Note: we are very aware of the dev notes about forking, however know that\n> we are not sharing the forked connection, we simply open the connection in\n> the parent thread and then pass that to the child thread to use.\n\nIn case there is any doubt: processes are not threads.\n(Also, I don't think the phrase \"parent thread\" is more misleading than\naccurate, although I'm sure some people use it.)\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 12 Jun 2022 09:57:00 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: connection failures on forked processes"
}
] |
[
{
"msg_contents": "Recently we added the error messages \"buffer for root directory too\nsmall\" and siblings to pg_upgrade. This means \"<new_cluster's\npgdata>/pg_upgrade_output.d\" was longer than MAXPGPATH.\n\nI feel that the \"root directory\" is obscure here, and moreover \"buffer\nis too small\" looks pointless since no user can do anything about the\nbuffer length. At least I can't tell from the message concretely\nwhat I should do next..\n\nThe root cause of the errors is that the user-provided directory path\nof new cluster's root was too long. Whichever one of the four buffers\nis overflowed, it doesn't make any difference for users and doesn't\noffer any further detail to supporters/developers. I find \"output\ndirectory path of new cluster too long\" clear enough.\n\nAbove all, this change reduces the number of messages that need\ntranslation:)\n\n# And the messages are missing trailing line breaks.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 13 Jun 2022 12:05:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "\"buffer too small\" or \"path too long\"?"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> The root cause of the errors is that the user-provided directory path\n> of new cluster's root was too long. Anywhich one of the four buffers\n> is overflowed, it doesn't makes any difference for users and doesn't\n> offer any further detail to suppoerters/developers. I see \"output\n> directory path of new cluster too long\" clear enough.\n\n+1, but I'm inclined to make it read \"... is too long\".\n\n> # And the messages are missing trailing line breaks.\n\nI was about to question that, but now I remember that pg_upgrade has\nits own logging facility with a different idea about who provides\nthe trailing newline than common/logging.[hc] has. Undoubtedly\nthat's the source of this mistake. We really need to get pg_upgrade\nout of the business of having its own logging conventions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Jun 2022 13:25:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"buffer too small\" or \"path too long\"?"
},
{
"msg_contents": "At Mon, 13 Jun 2022 13:25:01 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > The root cause of the errors is that the user-provided directory path\n> > of new cluster's root was too long. Anywhich one of the four buffers\n> > is overflowed, it doesn't makes any difference for users and doesn't\n> > offer any further detail to suppoerters/developers. I see \"output\n> > directory path of new cluster too long\" clear enough.\n> \n> +1, but I'm inclined to make it read \"... is too long\".\n\nYeah, I feel so and it is what I wondered about recently when I saw\nsome complete error messages. Is that because of the length of the\nsubject?\n\n> > # And the messages are missing trailing line breaks.\n> \n> I was about to question that, but now I remember that pg_upgrade has\n> its own logging facility with a different idea about who provides\n> the trailing newline than common/logging.[hc] has. Undoubtedly\n> that's the source of this mistake. We really need to get pg_upgrade\n> out of the business of having its own logging conventions.\n\nYes... I don't find a written reason excluding pg_upgrade in either the\ncommit 9a374b77fb or the thread [1]. But I guess that we decided\nthat we first provide the facility in the best style ignoring the\ncurrent implementation in pg_upgrade, then let pg_upgrade use it. So I\nthink it should emerge in the next cycle? I'll give it a shot if no\none is willing to do that for now. (I believe it is straightforward..)\n\n[1] https://www.postgresql.org/message-id/941719.1645587865%40sss.pgh.pa.us\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 14 Jun 2022 09:48:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: \"buffer too small\" or \"path too long\"?"
},
{
"msg_contents": "At Tue, 14 Jun 2022 09:48:26 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 13 Jun 2022 13:25:01 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> > +1, but I'm inclined to make it read \"... is too long\".\n> \n> Yeah, I feel so and it is what I wondered about recently when I saw\n> some complete error messages. Is that because of the length of the\n> subject?\n\nAnd I found that it is already done. Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 14 Jun 2022 09:52:52 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: \"buffer too small\" or \"path too long\"?"
},
{
"msg_contents": "On Tue, Jun 14, 2022 at 09:52:52AM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 14 Jun 2022 09:48:26 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n>> Yeah, I feel so and it is what I wondered about recently when I saw\n>> some complete error messages. Is that because of the length of the\n>> subject?\n> \n> And I found that it is alrady done. Thanks!\n\nI have noticed this thread and 4e54d23 as a result this morning. If\nyou want to spread this style more, wouldn't it be better to do that\nin all the places of pg_upgrade where we store paths to files? I can\nsee six code paths with log_opts.basedir that could do the same, as of\nthe attached. The hardcoded file names have various lengths, and some\nof them are quite long making the generated paths more exposed to\nbeing cut in the middle.\n--\nMichael",
"msg_date": "Tue, 14 Jun 2022 10:55:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: \"buffer too small\" or \"path too long\"?"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I have noticed this thread and 4e54d23 as a result this morning. If\n> you want to spread this style more, wouldn't it be better to do that\n> in all the places of pg_upgrade where we store paths to files? I can\n> see six code paths with log_opts.basedir that could do the same, as of\n> the attached. The hardcoded file names have various lengths, and some\n> of them are quite long making the generated paths more exposed to\n> being cut in the middle.\n\nWell, I just fixed the ones in make_outputdirs because it seemed weird\nthat that part of the function was not doing something the earlier parts\ndid. I didn't look around for more trouble.\n\nI think that pg_fatal'ing on the grounds of path-too-long once we've\nalready started the upgrade isn't all that great. Really we want to\nfail on that early on --- so coding make_outputdirs like this is\nfine, but maybe we need a different plan for files made later.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Jun 2022 22:04:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"buffer too small\" or \"path too long\"?"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Mon, 13 Jun 2022 13:25:01 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>> I was about to question that, but now I remember that pg_upgrade has\n>> its own logging facility with a different idea about who provides\n>> the trailing newline than common/logging.[hc] has. Undoubtedly\n>> that's the source of this mistake. We really need to get pg_upgrade\n>> out of the business of having its own logging conventions.\n\n> Yes... I don't find a written reason excluding pg_upgrade in both the\n> commit 9a374b77fb and or the thread [1].\n\nWell, as far as 9a374b77fb went, Peter had left pg_upgrade out of the\nmix in the original creation of common/logging.c, and I didn't think\nthat dealing with that was a reasonable part of my update patch.\n\n> But I guess that we decided\n> that we first provide the facility in the best style ignoring the\n> current impletent in pg_upgrade then let pg_upgrade use it. So I\n> think it should emerge in the next cycle? I'll give it a shot if no\n> one is willing to do that for now. (I believe it is straightforward..)\n\nActually, I spent some time earlier today looking into that, and I can\nsee why Peter stayed away from it :-(. There are a few issues:\n\n* The inconsistency with the rest of the world about trailing newlines.\nThat aspect actually seems fixable fairly easily, and I have a patch\nmostly done for it.\n\n* logging.c believes it should prefix every line of output with the\nprogram's name and so on. This doesn't seem terribly appropriate\nfor pg_upgrade's use --- at least, not unless we make pg_upgrade\nWAY less chatty. Perhaps that'd be fine, I dunno.\n\n* pg_upgrade's pg_log_v duplicates all (well, most) stdout messages\ninto the INTERNAL_LOG_FILE log file, something logging.c has no\nprovision for (and it'd not be too easy to do, because of the C\nstandard's restrictions on use of va_list). 
Personally I'd be okay\nwith nuking the INTERNAL_LOG_FILE log file from orbit, but I bet\nsomebody will fight to keep it.\n\n* pg_log_v has also got a bunch of specialized rules around how\nto format PG_STATUS message traffic. Again I wonder how useful\nthat whole behavior really is, but taking it out would be a big\nuser-visible change.\n\nIn short, it seems like pg_upgrade's logging habits are sufficiently\nfar out in left field that we couldn't rebase it on top of logging.c\nwithout some seriously large user-visible behavioral changes.\nI have better things to spend my time on than advocating for that.\n\nHowever, the inconsistency in newline handling is a problem:\nI found that there are already other bugs with missing or extra\nnewlines, and it will only get worse if we don't unify that\nbehavior. So my inclination for now is to fix that and let the\nother issues go. Patch coming.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Jun 2022 22:41:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"buffer too small\" or \"path too long\"?"
},
{
"msg_contents": "On Mon, Jun 13, 2022 at 10:41:41PM -0400, Tom Lane wrote:\n> * logging.c believes it should prefix every line of output with the\n> program's name and so on. This doesn't seem terribly appropriate\n> for pg_upgrade's use --- at least, not unless we make pg_upgrade\n> WAY less chatty. Perhaps that'd be fine, I dunno.\n\npg_upgrade was designed to be chatty because it felt it could fail under\nunpredictable circumstances --- I am not sure how true that is today.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Mon, 13 Jun 2022 22:59:47 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: \"buffer too small\" or \"path too long\"?"
},
{
"msg_contents": "On 14.06.22 03:55, Michael Paquier wrote:\n> On Tue, Jun 14, 2022 at 09:52:52AM +0900, Kyotaro Horiguchi wrote:\n>> At Tue, 14 Jun 2022 09:48:26 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>>> Yeah, I feel so and it is what I wondered about recently when I saw\n>>> some complete error messages. Is that because of the length of the\n>>> subject?\n>>\n>> And I found that it is alrady done. Thanks!\n> \n> I have noticed this thread and 4e54d23 as a result this morning. If\n> you want to spread this style more, wouldn't it be better to do that\n> in all the places of pg_upgrade where we store paths to files? I can\n> see six code paths with log_opts.basedir that could do the same, as of\n> the attached. The hardcoded file names have various lengths, and some\n> of them are quite long making the generated paths more exposed to\n> being cut in the middle.\n\nWe have this problem of long file names being silently truncated all \nover the source code. Instead of equipping each one of them with a \nlength check, why don't we get rid of the fixed-size buffers and \nallocate dynamically, as in the attached patch.",
"msg_date": "Wed, 15 Jun 2022 08:51:26 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: \"buffer too small\" or \"path too long\"?"
},
{
"msg_contents": "On Wed, Jun 15, 2022 at 2:51 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> We have this problem of long file names being silently truncated all\n> over the source code. Instead of equipping each one of them with a\n> length check, why don't we get rid of the fixed-size buffers and\n> allocate dynamically, as in the attached patch.\n\nI've always wondered why we rely on MAXPGPATH instead of dynamic\nallocation. It seems pretty lame.\n\nI don't know how much we gain by fixing one place and not all the\nothers, but maybe it would set a trend.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 Jun 2022 13:08:11 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"buffer too small\" or \"path too long\"?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jun 15, 2022 at 2:51 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> We have this problem of long file names being silently truncated all\n>> over the source code. Instead of equipping each one of them with a\n>> length check, why don't we get rid of the fixed-size buffers and\n>> allocate dynamically, as in the attached patch.\n\n> I don't know how much we gain by fixing one place and not all the\n> others, but maybe it would set a trend.\n\nYeah, that was what was bugging me about this proposal. Removing\none function's dependency on MAXPGPATH isn't much of a step forward.\n\nI note also that the patch leaks quite a lot of memory (a kilobyte or\nso per pathname, IIRC). That's probably negligible in this particular\ncontext, but anyplace that was called more than once per program run\nwould need to be more tidy.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Jun 2022 14:02:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"buffer too small\" or \"path too long\"?"
},
{
"msg_contents": "On Wed, Jun 15, 2022 at 02:02:03PM -0400, Tom Lane wrote:\n> Yeah, that was what was bugging me about this proposal. Removing\n> one function's dependency on MAXPGPATH isn't much of a step forward.\n\nThis comes down to out-of-memory vs path length at the end. Changing\nonly the paths of make_outputdirs() without touching all the paths in \ncheck.c and the one in function.c does not sound good to me, as this\nincreases the risk of failing pg_upgrade in the middle, and that's\nwhat we should avoid, as said upthread.\n\n> I note also that the patch leaks quite a lot of memory (a kilobyte or\n> so per pathname, IIRC). That's probably negligible in this particular\n> context, but anyplace that was called more than once per program run\n> would need to be more tidy.\n\nSurely.\n--\nMichael",
"msg_date": "Thu, 16 Jun 2022 08:48:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: \"buffer too small\" or \"path too long\"?"
},
{
"msg_contents": "On 15.06.22 19:08, Robert Haas wrote:\n> On Wed, Jun 15, 2022 at 2:51 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> We have this problem of long file names being silently truncated all\n>> over the source code. Instead of equipping each one of them with a\n>> length check, why don't we get rid of the fixed-size buffers and\n>> allocate dynamically, as in the attached patch.\n> \n> I've always wondered why we rely on MAXPGPATH instead of dynamic\n> allocation. It seems pretty lame.\n\nI think it came in before we had extensible string buffers APIs.\n\n\n",
"msg_date": "Fri, 17 Jun 2022 09:50:38 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: \"buffer too small\" or \"path too long\"?"
}
] |
[
{
"msg_contents": "Folks,\n\nPlease find attached a patch to do $Subject. As dates in a fair number\nof fields of endeavor are expressed this way, it seems reasonable to\nensure that we can parse them on input. Making it possible to use them\nin output is a more invasive patch, and would involve changes to\nto_date and similar that would require careful consideration.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Mon, 13 Jun 2022 05:51:37 +0000",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Parse CE and BCE in dates and times"
},
{
"msg_contents": "Op 13-06-2022 om 07:51 schreef David Fetter:\n> Folks,\n> \n> Please find attached a patch to do $Subject. As dates in a fair number\n> of fields of endeavor are expressed this way, it seems reasonable to\n> ensure tha we can parse them on input. Making it possible to use them\n> in output is a more invasive patch, and would involve changes to\n> to_date and similar that would require careful consideration.\n\nHi David,\n\nI find some unexpected results:\n\n# select '112-04-30 BC'::date;\n date\n---------------\n 0112-04-30 BC\n(1 row)\n\nbut the same with the ' BCE' suffix seems broken:\n\n# select '112-04-30 BCE'::date;\nERROR: invalid input syntax for type date: \"112-04-30 BCE\"\nLINE 1: select '112-04-30 BCE'::date;\n\nThe same goes for '112-04-30 AD' (works) and its CE version (errors out).\n\nOr is this as expected?\n\n\nErik Rijkers\n\n\n\n\n\n\n\n\n> \n> Best,\n> David.\n\n\n",
"msg_date": "Mon, 13 Jun 2022 09:11:56 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: Parse CE and BCE in dates and times"
},
{
"msg_contents": "On Mon, Jun 13, 2022 at 09:11:56AM +0200, Erik Rijkers wrote:\n> Op 13-06-2022 om 07:51 schreef David Fetter:\n> > Folks,\n> > \n> > Please find attached a patch to do $Subject. As dates in a fair number\n> > of fields of endeavor are expressed this way, it seems reasonable to\n> > ensure tha we can parse them on input. Making it possible to use them\n> > in output is a more invasive patch, and would involve changes to\n> > to_date and similar that would require careful consideration.\n> \n> Hi David,\n> \n> I find some unexpected results:\n> \n> # select '112-04-30 BC'::date;\n> date\n> ---------------\n> 0112-04-30 BC\n> (1 row)\n> \n> but the same with the ' BCE' suffix seems broken:\n> \n> # select '112-04-30 BCE'::date;\n> ERROR: invalid input syntax for type date: \"112-04-30 BCE\"\n> LINE 1: select '112-04-30 BCE'::date;\n> \n> The same goes for '112-04-30 AD' (works) and its CE version (errors out).\n> \n> Or is this as expected?\n\nIt's not, and thanks for looking at this. Will check to see what's\ngoing on here.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Mon, 13 Jun 2022 14:39:52 +0000",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Parse CE and BCE in dates and times"
},
{
"msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3682/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 2 Aug 2022 11:13:23 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Parse CE and BCE in dates and times"
}
] |
[
{
"msg_contents": "A little while ago, the pltcl tests starting crashing for me on macOS. \nI don't know what had changed, but I suspect it was either an operating \nsystem update or something like an xcode update.\n\nHere is a backtrace:\n\n * frame #0: 0x00007ff7b0e61853\n frame #1: 0x00007ff803a28751 libsystem_c.dylib`hash_search + 215\n frame #2: 0x0000000110357700 \npltcl.so`compile_pltcl_function(fn_oid=16418, tgreloid=0, \nis_event_trigger=false, pltrusted=true) at pltcl.c:1418:13\n frame #3: 0x0000000110355d50 \npltcl.so`pltcl_func_handler(fcinfo=0x00007fb6f1817028, \ncall_state=0x00007ff7b0e61b80, pltrusted=true) at pltcl.c:814:12\n...\n\nNote that the hash_search call goes into some system library, not postgres.\n\nThe command to link pltcl is:\n\ngcc ... -ltcl8.6 -lz -lpthread -framework CoreFoundation -lc \n-bundle_loader ../../../src/backend/postgres\n\nNotice the -lc in there. If I remove that, it works again.\n\nThe -lc is explicitly added in src/pl/tcl/Makefile, so it's our own \ndoing. I tracked this back, and it's been moved and rearranged in that \nmakefile a number of time. The original addition was\n\ncommit e3909672f12e0ddf3e202b824fda068ad2195ef2\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Mon Dec 14 00:46:49 1998\n\n Build pltcl.so correctly on platforms that want dependent\n shared libraries to be listed in the link command.\n\nHas anyone else seen this?\n\nNote, I'm using the tcl-tk package from Homebrew. The tcl installation \nprovided by macOS itself no longer appears to work for linking against.\n\n\n",
"msg_date": "Mon, 13 Jun 2022 08:53:36 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "pltcl crash on recent macOS"
},
{
"msg_contents": "On Mon, Jun 13, 2022 at 6:53 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> frame #1: 0x00007ff803a28751 libsystem_c.dylib`hash_search + 215\n> frame #2: 0x0000000110357700\n> pltcl.so`compile_pltcl_function(fn_oid=16418, tgreloid=0,\n\nHmm, I can’t reproduce that…. although that symbol is present in my\nlibSystem.B.dylib according to dlsym() and callable from a simple\nprogram not linked to anything else, pltcl.so is apparently reaching\npostgres’s hash_search for me, based on the fact that make -C\nsrc/pl/tcl check succeeds and nm -m on pltcl.so shows it as \"from\nexecutable\". It would be interesting to see what nm -m shows for you.\n\nArcheological note: That hash_search stuff, header <strhash.h>, seems\nto have been copied from ancient FreeBSD before it was dropped\nupstream for the crime of polluting the global symbol namespace with\njunk[1]. It's been languishing in Apple's libc for at least 19\nyears[2], though, so I'm not sure why it's showing up suddenly as a\nproblem for you now.\n\n> Note, I'm using the tcl-tk package from Homebrew. The tcl installation\n> provided by macOS itself no longer appears to work for linking against.\n\nI’m using tcl 8.6.12 installed by MacPorts on macOS 12.4, though, hmm,\nSDK 12.3. 
I see the explicit -lc when building pltcl.so, and I see\nthat libSystem.B.dylib is explicitly mentioned here, whether or not I\nhave -lc:\n\n% otool -L ./tmp_install/Users/tmunro/install/lib/postgresql/pltcl.so\n./tmp_install/Users/tmunro/install/lib/postgresql/pltcl.so:\n/opt/local/lib/libtcl8.6.dylib (compatibility version 8.6.0, current\nversion 8.6.12)\n/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current\nversion 1311.100.3)\n\nHere’s the complete link line:\n\nccache cc -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla\n-Werror=unguarded-availability-new -Wendif-labels\n-Wmissing-format-attribute -Wcast-function-type -Wformat-security\n-fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument\n-Wno-compound-token-split-by-macro -g -O0 -bundle -multiply_defined\nsuppress -o pltcl.so pltcl.o -L../../../src/port\n-L../../../src/common -isysroot\n/Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk\n-Wl,-dead_strip_dylibs -L/opt/local/lib -ltcl8.6 -lz -lpthread\n-framework CoreFoundation -lc -bundle_loader\n../../../src/backend/postgres\n\n[1] https://github.com/freebsd/freebsd-src/commit/dc196afb2e58dd05cd66e2da44872bb3d619910f\n[2] https://github.com/apple-open-source-mirror/Libc/blame/master/stdlib/FreeBSD/strhash.c\n\n\n",
"msg_date": "Mon, 13 Jun 2022 23:27:35 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pltcl crash on recent macOS"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, Jun 13, 2022 at 6:53 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> frame #1: 0x00007ff803a28751 libsystem_c.dylib`hash_search + 215\n>> frame #2: 0x0000000110357700\n>> pltcl.so`compile_pltcl_function(fn_oid=16418, tgreloid=0,\n\n> Hmm, I can’t reproduce that….\n\nI can't either, although I'm using the macOS-provided Tcl code,\nwhich still works fine for me. (I grant that Apple might desupport\nthat someday, but they haven't yet.) sifaka and longfin aren't\nunhappy either; although sifaka is close to identical to my laptop.\n\nHaving said that, I wonder whether the position of the -bundle_loader\nswitch in the command line is relevant to which way the hash_search\nreference is resolved. Seems like we could put it in front of the\nvarious -l options if that'd help.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Jun 2022 12:01:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pltcl crash on recent macOS"
},
{
"msg_contents": "On 13.06.22 13:27, Thomas Munro wrote:\n> On Mon, Jun 13, 2022 at 6:53 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> frame #1: 0x00007ff803a28751 libsystem_c.dylib`hash_search + 215\n>> frame #2: 0x0000000110357700\n>> pltcl.so`compile_pltcl_function(fn_oid=16418, tgreloid=0,\n> \n> Hmm, I can’t reproduce that…. although that symbol is present in my\n> libSystem.B.dylib according to dlsym() and callable from a simple\n> program not linked to anything else, pltcl.so is apparently reaching\n> postgres’s hash_search for me, based on the fact that make -C\n> src/pl/tcl check succeeds and nm -m on pltcl.so shows it as \"from\n> executable\". It would be interesting to see what nm -m shows for you.\n\n...\n (undefined) external _get_call_result_type (from executable)\n (undefined) external _getmissingattr (from executable)\n (undefined) external _hash_create (from libSystem)\n (undefined) external _hash_search (from libSystem)\n...\n\n> I’m using tcl 8.6.12 installed by MacPorts on macOS 12.4, though, hmm,\n> SDK 12.3. 
I see the explicit -lc when building pltcl.so, and I see\n> that libSystem.B.dylib is explicitly mentioned here, whether or not I\n> have -lc:\n> \n> % otool -L ./tmp_install/Users/tmunro/install/lib/postgresql/pltcl.so\n> ./tmp_install/Users/tmunro/install/lib/postgresql/pltcl.so:\n> /opt/local/lib/libtcl8.6.dylib (compatibility version 8.6.0, current\n> version 8.6.12)\n> /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current\n> version 1311.100.3)\n\nLooks the same here:\n\npltcl.so:\n\t/usr/local/opt/tcl-tk/lib/libtcl8.6.dylib (compatibility version 8.6.0, \ncurrent version 8.6.12)\n\t/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current \nversion 1311.100.3)\n\n> Here’s the complete link line:\n> \n> ccache cc -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Werror=vla\n> -Werror=unguarded-availability-new -Wendif-labels\n> -Wmissing-format-attribute -Wcast-function-type -Wformat-security\n> -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument\n> -Wno-compound-token-split-by-macro -g -O0 -bundle -multiply_defined\n> suppress -o pltcl.so pltcl.o -L../../../src/port\n> -L../../../src/common -isysroot\n> /Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk\n> -Wl,-dead_strip_dylibs -L/opt/local/lib -ltcl8.6 -lz -lpthread\n> -framework CoreFoundation -lc -bundle_loader\n> ../../../src/backend/postgres\n\nThe difference is that I use CC=gcc-11. I have changed to CC=cc, then it \nworks (nm output shows \"from executable\"). So it's gcc that gets thrown \noff by the -lc.\n\n\n",
"msg_date": "Mon, 13 Jun 2022 22:21:17 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pltcl crash on recent macOS"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> The difference is that I use CC=gcc-11. I have change to CC=cc, then it \n> works (nm output shows \"from executable\"). So it's gcc that gets thrown \n> off by the -lc.\n\nHah, that makes sense. So does changing the option order help?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Jun 2022 16:24:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pltcl crash on recent macOS"
},
{
"msg_contents": "On 13.06.22 18:01, Tom Lane wrote:\n> Having said that, I wonder whether the position of the -bundle_loader\n> switch in the command line is relevant to which way the hash_search\n> reference is resolved. Seems like we could put it in front of the\n> various -l options if that'd help.\n\nSwitching the order of -bundle_loader and -lc did not help.\n\n\n",
"msg_date": "Mon, 13 Jun 2022 23:05:32 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pltcl crash on recent macOS"
},
{
"msg_contents": "On Tue, Jun 14, 2022 at 8:21 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> The difference is that I use CC=gcc-11. I have change to CC=cc, then it\n> works (nm output shows \"from executable\"). So it's gcc that gets thrown\n> off by the -lc.\n\nHrmph, I changed my CC to \"ccache gcc-mp-11\" (what MacPorts calls GCC\n11), and I still can't reproduce the problem. I still get \"(from\nexecutable)\". In your original quote you showed \"gcc\", not \"gcc-11\",\nwhich (assuming it is found as /usr/bin/gcc) is just a little binary\nthat redirects to clang... trying that, this time without ccache in\nthe mix... and still no cigar. So something is different about GCC 11\nfrom homebrew, or the linker invocation it produces under the covers,\nor the linker it's using?\n\n\n",
"msg_date": "Tue, 14 Jun 2022 09:32:36 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pltcl crash on recent macOS"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Switching the order of -bundle_loader and -lc did not help.\n\nMeh. Well, it was worth a try.\n\nI'd be okay with just dropping the -lc from pl/tcl/Makefile and seeing\nwhat the buildfarm says. The fact that we needed it in 1998 doesn't\nmean that we still need it on supported versions of Tcl; nor was it\never anything but a hack for us to be overriding what TCL_LIBS says.\n\nAs a quick check, I tried it on prairiedog's host (which has the oldest\nTcl installation I still have in captivity), and it seemed fine.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Jun 2022 23:05:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pltcl crash on recent macOS"
},
{
"msg_contents": "On 13.06.22 23:32, Thomas Munro wrote:\n> Hrmph, I changed my CC to \"ccache gcc-mp-11\" (what MacPorts calls GCC\n> 11), and I still can't reproduce the problem. I still get \"(from\n> executable)\". In your original quote you showed \"gcc\", not \"gcc-11\",\n> which (assuming it is found as /usr/bin/gcc) is just a little binary\n> that redirects to clang... trying that, this time without ccache in\n> the mix... and still no cigar. So something is different about GCC 11\n> from homebrew, or the linker invocation it produces under the covers,\n> or the linker it's using?\n\nThe original quote said \"gcc\" but that just me attempting to simplify. \nI have now also figured out that it works with gcc-10 but not with \ngcc-11 and gcc-12. For example, below are the underlying linker \ninvocations from gcc-10 and gcc-11. Note that some of the options are \nordered quite differently. I don't know what all of that means yet, but \nit surely points to something in gcc or its packaging being the cause.\n\nHowever, I think ultimately the use of -lc is an error and we should get \nrid of it. 
This episode shows that it's very fragile in any case.\n\n\n \n\"/usr/local/Cellar/gcc@10/10.3.0/libexec/gcc/x86_64-apple-darwin20/10.3.0/collect2\" \n-dynamic -arch x86_64 -bundle -bundle_loader \n../../../src/backend/postgres -macosx_version_min 11.4.0 \n-multiply_defined suppress -syslibroot \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk \n-weak_reference_mismatches non-weak -o pltcl.so -L../../../src/port \n-L../../../src/common -L/usr/local/lib -L/usr/local/opt/openldap/lib \n\"-L/usr/local/opt/openssl@1.1/lib\" -L/usr/local/opt/readline/lib \n-L/usr/local/opt/krb5/lib -L/usr/local/opt/icu4c/lib \n-L/usr/local/opt/tcl-tk/lib -L/usr/local/Cellar/libxml2/2.9.14/lib \n-L/usr/local/Cellar/lz4/1.9.3/lib -L/usr/local/Cellar/zstd/1.5.2/lib \n-L/usr/local/Cellar/tcl-tk/8.6.12_1/lib \n\"-L/usr/local/Cellar/gcc@10/10.3.0/lib/gcc/10/gcc/x86_64-apple-darwin20/10.3.0\" \n\"-L/usr/local/Cellar/gcc@10/10.3.0/lib/gcc/10/gcc/x86_64-apple-darwin20/10.3.0/../../..\" \npltcl.o -dead_strip_dylibs -ltcl8.6 -lz -framework CoreFoundation -lc \n-lSystem -lgcc_ext.10.5 -lgcc -lSystem -no_compact_unwind -idsym\n\n \n/usr/local/Cellar/gcc/11.3.0_1/bin/../libexec/gcc/x86_64-apple-darwin21/11/collect2 \n-dynamic -arch x86_64 -syslibroot \n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk \n-macosx_version_min 12.4.0 -o pltcl.so -L../../../src/port \n-L../../../src/common -L/usr/local/lib -L/usr/local/opt/openldap/lib \n\"-L/usr/local/opt/openssl@1.1/lib\" -L/usr/local/opt/readline/lib \n-L/usr/local/opt/krb5/lib -L/usr/local/opt/icu4c/lib \n-L/usr/local/opt/tcl-tk/lib -L/usr/local/Cellar/libxml2/2.9.14/lib \n-L/usr/local/Cellar/lz4/1.9.3/lib -L/usr/local/Cellar/zstd/1.5.2/lib \n-L/usr/local/Cellar/tcl-tk/8.6.12_1/lib \n-L/usr/local/Cellar/gcc/11.3.0_1/bin/../lib/gcc/11/gcc/x86_64-apple-darwin21/11 \n-L/usr/local/Cellar/gcc/11.3.0_1/bin/../lib/gcc/11/gcc 
\n-L/usr/local/Cellar/gcc/11.3.0_1/bin/../lib/gcc/11/gcc/x86_64-apple-darwin21/11/../../.. \npltcl.o -dead_strip_dylibs -ltcl8.6 -lz -lc -bundle_loader \n../../../src/backend/postgres -bundle -framework CoreFoundation \n-multiply_defined suppress -lemutls_w -lgcc -lSystem -no_compact_unwind \n-idsym\n\n\n",
"msg_date": "Tue, 14 Jun 2022 19:58:06 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pltcl crash on recent macOS"
},
{
"msg_contents": "On 14.06.22 05:05, Tom Lane wrote:\n> I'd be okay with just dropping the -lc from pl/tcl/Makefile and seeing\n> what the buildfarm says. The fact that we needed it in 1998 doesn't\n> mean that we still need it on supported versions of Tcl; nor was it\n> ever anything but a hack for us to be overriding what TCL_LIBS says.\n\nOk, I propose to proceed with the attached patch (with a bit more \nexplanation added) for the master branch (for now) and see how it goes.",
"msg_date": "Mon, 20 Jun 2022 12:36:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pltcl crash on recent macOS"
},
{
"msg_contents": "\nOn 20.06.22 12:36, Peter Eisentraut wrote:\n> On 14.06.22 05:05, Tom Lane wrote:\n>> I'd be okay with just dropping the -lc from pl/tcl/Makefile and seeing\n>> what the buildfarm says. The fact that we needed it in 1998 doesn't\n>> mean that we still need it on supported versions of Tcl; nor was it\n>> ever anything but a hack for us to be overriding what TCL_LIBS says.\n> \n> Ok, I propose to proceed with the attached patch (with a bit more \n> explanation added) for the master branch (for now) and see how it goes.\n\ndone\n\n\n",
"msg_date": "Thu, 23 Jun 2022 09:54:28 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pltcl crash on recent macOS"
}
] |
[
{
"msg_contents": "Hello, hackers.\n\nWhile working on (1) in commit\n2871b4618af1acc85665eec0912c48f8341504c4 (2) from 2010 I noticed Simon\nRiggs was thinking about usage of memory barrier for KnownAssignedXids\naccess instead of spinlocks.\n\n> We could dispense with the spinlock if we were to\n> create suitable memory access barrier primitives and use those instead.\n\nKnownAssignedXids is array with xids and head/tail pointers. Array is\nchanged only by startup process. But access to head pointer protected\nby spinlock to guarantee new data in array is visible to other CPUs\nbefore head values.\n\n> To add XIDs to the array, we just insert\n> them into slots to the right of the head pointer and then advance the head\n> pointer. This wouldn't require any lock at all, except that on machines\n> with weak memory ordering we need to be careful that other processors\n> see the array element changes before they see the head pointer change.\n> We handle this by using a spinlock to protect reads and writes of the\n> head/tail pointers.\n\nNow we have memory barriers, so there is an WIP of patch to get rid of\n`known_assigned_xids_lck`. The idea is pretty simple - issue\npg_write_barrier after updating array, but before updating head.\n\nFirst potential positive effect I could see is\n(TransactionIdIsInProgress -> KnownAssignedXidsSearch) locking but\nseems like it is not on standby hotpath.\n\nSecond one - locking for KnownAssignedXidsGetAndSetXmin (build\nsnapshot). But I was unable to measure impact. 
It wasn’t visible\nseparately in (3) test.\n\nMaybe someone knows scenario causing known_assigned_xids_lck or\nTransactionIdIsInProgress become bottleneck on standby?\n\nBest regards,\nMichail.\n\n[1]: https://www.postgresql.org/message-id/flat/CANtu0ohzBFTYwdLtcanWo4%2B794WWUi7LY2rnbHyorJdE8_ZnGg%40mail.gmail.com#379c1be7b8134ada5a574078d51b64c6\n\n[2]: https://github.com/postgres/postgres/commit/2871b4618af1acc85665eec0912c48f8341504c4#diff-8879f0173be303070ab7931db7c757c96796d84402640b9e386a4150ed97b179R2409\n\n[3]: https://www.postgresql.org/message-id/flat/CANtu0ohzBFTYwdLtcanWo4%2B794WWUi7LY2rnbHyorJdE8_ZnGg%40mail.gmail.com#379c1be7b8134ada5a574078d51b64c6",
"msg_date": "Mon, 13 Jun 2022 11:30:29 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Any sense to get rid of known_assigned_xids_lck?"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nPlease see the attached draft of the release announcement for the \r\n2022-06-16 release.\r\n\r\nPlease review for technical accuracy and omissions. If you have \r\nfeedback. please provide it no later than Thu, June 16, 2022 0:00 AoE[1].\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth",
"msg_date": "Mon, 13 Jun 2022 10:20:46 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "2022-06-16 release announcement draft"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Please review for technical accuracy and omissions.\n\nA few minor thoughts:\n\n> The PostgreSQL Global Development Group has released PostgreSQL 14.4 to fix an\n> issue that could cause silent data corruption when using the\n> [`CREATE INDEX CONCURRENTLY`](https://www.postgresql.org/docs/current/sql-createindex.html)\n> and [`REINDEX CONCURRENTLY`](https://www.postgresql.org/docs/current/sql-reindex.html)\n> commands.\n\nMaybe s/and/or/ ?\n\n> PostgreSQL 14.4 fixes an issue with the\n> [`CREATE INDEX CONCURRENTLY`](https://www.postgresql.org/docs/current/sql-createindex.html)\n> and [`REINDEX CONCURRENTLY`](https://www.postgresql.org/docs/current/sql-reindex.html)\n> that could cause silent data corruption of indexes.\n\nEither leave out \"the\" or add \"commands\". That is, \"the FOO and BAR\ncommands\" reads fine, \"the FOO and BAR\" less so. Also, I'm inclined\nto be a bit more specific and say that the problem is missing index\nentries, so maybe like \"... fixes an issue that could cause the [CIC]\nand [RIC] commands to omit index entries for some rows\".\n\n> Once you upgrade your system to PostgreSQL 14.4, you can fix any silent data\n> corruption using `REINDEX CONCURRENTLY`.\n\nPerhaps it is also worth mentioning that you can use REINDEX without\nCONCURRENTLY, even before upgrading.\n\n> * Report implicitly-created operator families (`CREATE OPERATOR CLASS`) to event\n> triggers.\n\nMaybe \"(generated by `CREATE OPERATOR CLASS`)\"? As-is, the parenthetical\ncomment looks more like a mistake than anything else.\n\nThe rest looks good.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Jun 2022 13:38:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 2022-06-16 release announcement draft"
},
{
"msg_contents": "On 6/13/22 1:38 PM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> Please review for technical accuracy and omissions.\r\n> \r\n> A few minor thoughts:\r\n> \r\n>> The PostgreSQL Global Development Group has released PostgreSQL 14.4 to fix an\r\n>> issue that could cause silent data corruption when using the\r\n>> [`CREATE INDEX CONCURRENTLY`](https://www.postgresql.org/docs/current/sql-createindex.html)\r\n>> and [`REINDEX CONCURRENTLY`](https://www.postgresql.org/docs/current/sql-reindex.html)\r\n>> commands.\r\n> \r\n> Maybe s/and/or/ ?\r\n\r\nFixed.\r\n\r\n>> PostgreSQL 14.4 fixes an issue with the\r\n>> [`CREATE INDEX CONCURRENTLY`](https://www.postgresql.org/docs/current/sql-createindex.html)\r\n>> and [`REINDEX CONCURRENTLY`](https://www.postgresql.org/docs/current/sql-reindex.html)\r\n>> that could cause silent data corruption of indexes.\r\n> \r\n> Either leave out \"the\" or add \"commands\". That is, \"the FOO and BAR\r\n> commands\" reads fine, \"the FOO and BAR\" less so. \r\n\r\nYeah, that was likely an edit-o. Fixed.\r\n\r\n> Also, I'm inclined\r\n> to be a bit more specific and say that the problem is missing index\r\n> entries, so maybe like \"... fixes an issue that could cause the [CIC]\r\n> and [RIC] commands to omit index entries for some rows\".\r\n\r\nAgreed. Edited in attached.\r\n\r\n>> Once you upgrade your system to PostgreSQL 14.4, you can fix any silent data\r\n>> corruption using `REINDEX CONCURRENTLY`.\r\n> \r\n> Perhaps it is also worth mentioning that you can use REINDEX without\r\n> CONCURRENTLY, even before upgrading.\r\n\r\nI'm hesitant on giving too many options. We did put out the \"warning\" \r\nannouncement providing this as an option. 
I do think that folks who are \r\nrunning CIC/RIC are sensitive to locking, and a plain old \"REINDEX\" may \r\nbe viable except in an emergency.\r\n\r\n>> * Report implicitly-created operator families (`CREATE OPERATOR CLASS`) to event\r\n>> triggers.\r\n> \r\n> Maybe \"(generated by `CREATE OPERATOR CLASS`)\"? As-is, the parenthetical\r\n> comment looks more like a mistake than anything else.\r\n\r\nFixed.\r\n\r\n> The rest looks good.\r\n\r\nThanks for the review! Next version attached.\r\n\r\nJonathan",
"msg_date": "Mon, 13 Jun 2022 21:15:14 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: 2022-06-16 release announcement draft"
},
{
"msg_contents": "On Mon, Jun 13, 2022 at 6:15 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> > Perhaps it is also worth mentioning that you can use REINDEX without\n> > CONCURRENTLY, even before upgrading.\n>\n> I'm hesitant on giving too many options. We did put out the \"warning\"\n> announcement providing this as an option. I do think that folks who are\n> running CIC/RIC are sensitive to locking, and a plain old \"REINDEX\" may\n> be viable except in an emergency.\n\nThe locking implications for plain REINDEX are surprising IMV -- and\nso I suggest sticking with what you have here.\n\nIn many cases using plain REINDEX is not meaningfully different to\ntaking a full AccessExclusiveLock on the table (we only acquire an AEL\non the index, but in practice that can be a distinction without a\ndifference). We at least went some way towards making the situation\nwith REINDEX locking clearer in a doc patch that recently became\ncommit 8ac700ac.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 13 Jun 2022 18:34:56 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: 2022-06-16 release announcement draft"
}
] |
[
{
"msg_contents": "Hi.\n\nFWIW, I stumbled on this obscure possible typo (?) in src/pl/plperl/po/ro.po:\n\n~~~\n\n#: plperl.c:788\nmsgid \"while parsing Perl initialization\"\nmsgstr \"în timpul parsing inițializării Perl\"\n#: plperl.c:793\nmsgid \"while running Perl initialization\"\nmsgstr \"în timpul rulării intializării Perl\"\n\n~~~\n\n(Notice the missing 'i' - \"inițializării\" versus \"intializării\")\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 14 Jun 2022 13:34:34 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Typo in ro.po file?"
},
{
"msg_contents": "On 14.06.22 05:34, Peter Smith wrote:\n> FWIW, I stumbled on this obscure possible typo (?) in src/pl/plperl/po/ro.po:\n> \n> ~~~\n> \n> #: plperl.c:788\n> msgid \"while parsing Perl initialization\"\n> msgstr \"în timpul parsing inițializării Perl\"\n> #: plperl.c:793\n> msgid \"while running Perl initialization\"\n> msgstr \"în timpul rulării intializării Perl\"\n> \n> ~~~\n> \n> (Notice the missing 'i' - \"inițializării\" versus \"intializării\")\n\nFixed in translations repository. Thanks.\n\n\n",
"msg_date": "Wed, 15 Jun 2022 09:29:02 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in ro.po file?"
},
{
"msg_contents": "On Wed, Jun 15, 2022 at 09:29:02AM +0200, Peter Eisentraut wrote:\n> On 14.06.22 05:34, Peter Smith wrote:\n> > FWIW, I stumbled on this obscure possible typo (?) in src/pl/plperl/po/ro.po:\n> > \n> > ~~~\n> > \n> > #: plperl.c:788\n> > msgid \"while parsing Perl initialization\"\n> > msgstr \"în timpul parsing inițializării Perl\"\n> > #: plperl.c:793\n> > msgid \"while running Perl initialization\"\n> > msgstr \"în timpul rulării intializării Perl\"\n> > \n> > ~~~\n> > \n> > (Notice the missing 'i' - \"inițializării\" versus \"intializării\")\n> \n> Fixed in translations repository. Thanks.\n\nWhat email list should such fixes be posted to?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 16 Jun 2022 16:29:52 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Typo in ro.po file?"
},
{
"msg_contents": "On 16.06.22 22:29, Bruce Momjian wrote:\n> On Wed, Jun 15, 2022 at 09:29:02AM +0200, Peter Eisentraut wrote:\n>> On 14.06.22 05:34, Peter Smith wrote:\n>>> FWIW, I stumbled on this obscure possible typo (?) in src/pl/plperl/po/ro.po:\n>>>\n>>> ~~~\n>>>\n>>> #: plperl.c:788\n>>> msgid \"while parsing Perl initialization\"\n>>> msgstr \"în timpul parsing inițializării Perl\"\n>>> #: plperl.c:793\n>>> msgid \"while running Perl initialization\"\n>>> msgstr \"în timpul rulării intializării Perl\"\n>>>\n>>> ~~~\n>>>\n>>> (Notice the missing 'i' - \"inițializării\" versus \"intializării\")\n>>\n>> Fixed in translations repository. Thanks.\n> \n> What email list should such fixes be posted to?\n\npgsql-translators@ would be ideal, but here is ok.\n\n\n",
"msg_date": "Fri, 17 Jun 2022 09:01:42 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in ro.po file?"
},
{
"msg_contents": "On Fri, Jun 17, 2022 at 09:01:42AM +0200, Peter Eisentraut wrote:\n> > > Fixed in translations repository. Thanks.\n> > \n> > What email list should such fixes be posted to?\n> \n> pgsql-translators@ would be ideal, but here is ok.\n\nThanks. I see these posts occasionally and wanted to know where I\nshould route them to, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 17 Jun 2022 12:05:11 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Typo in ro.po file?"
}
] |
[
{
"msg_contents": "Hi:I create a gin index for a bigint array. and then want to find the array which contains the key is start with special prefix. for example:\nrow1: { 112, 345, 118}row2: { 356, 258, 358}row3: { 116, 358, 369}\nI want find the key start \"11\",so the row1 and row3 will be return.of course it must be use GIN index not seq scan。\n\nis there any example ?\n\nThank You!\nHi:I create a gin index for a bigint array. and then want to find the array which contains the key is start with special prefix. for example:row1: { 112, 345, 118}row2: { 356, 258, 358}row3: { 116, 358, 369}I want find the key start \"11\",so the row1 and row3 will be return.of course it must be use GIN index not seq scan。is there any example ?Thank You!",
"msg_date": "Tue, 14 Jun 2022 03:35:52 +0000 (UTC)",
"msg_from": "\"huangning290@yahoo.com\" <huangning290@yahoo.com>",
"msg_from_op": true,
"msg_subject": "GIN index partial match"
},
{
"msg_contents": "On Tue, Jun 14, 2022 at 11:39 AM huangning290@yahoo.com <\nhuangning290@yahoo.com> wrote:\n\n> Hi:\n> I create a gin index for a bigint array. and then want to find the array\n> which contains the key is start with special prefix. for example:\n>\n> row1: { 112, 345, 118}\n> row2: { 356, 258, 358}\n> row3: { 116, 358, 369}\n>\n> I want find the key start \"11\",so the row1 and row3 will be return.of\n> course it must be use GIN index not seq scan。\n>\n\nI'd suppose:\n1. Create a helper intarray table with values i/10 (integer division) from\nvalues in original rows and connected with an original table by a unique\nkey.\n2. Create gin index on this helper table\n3. select rows containing value of exact 11 from the helper table\n\nOtherwise it could be possible to make functional index and functional\nselects. But AFAIK there is no division operator available for intarray\ntype.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nOn Tue, Jun 14, 2022 at 11:39 AM huangning290@yahoo.com <huangning290@yahoo.com> wrote:Hi:I create a gin index for a bigint array. and then want to find the array which contains the key is start with special prefix. for example:row1: { 112, 345, 118}row2: { 356, 258, 358}row3: { 116, 358, 369}I want find the key start \"11\",so the row1 and row3 will be return.of course it must be use GIN index not seq scan。I'd suppose:1. Create a helper intarray table with values i/10 (integer division) from values in original rows and connected with an original table by a unique key.2. Create gin index on this helper table3. select rows containing value of exact 11 from the helper tableOtherwise it could be possible to make functional index and functional selects. But AFAIK there is no division operator available for intarray type.-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com",
"msg_date": "Wed, 15 Jun 2022 02:01:18 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GIN index partial match"
}
] |
[
{
"msg_contents": "We are in the process of migrating from Oracle to Postgres and the following query does much less work with Oracle vs Postgres.\n\nexplain (analyze, buffers)\nselect favoritegr0_.FAVORITE_GROUP_SID as favorite1_2_, favoritegr0_.CHANGED as changed2_2_, favoritegr0_.TYPE_DISCRIMINATOR as type_dis3_2_,\n favoritegr0_.GROUP_NAME as group_na4_2_, favoritegr0_.IS_DELETED as is_delet5_2_, favoritegr0_.LAST_USED as last_use6_2_, favoritegr0_.POSITION as position7_2_,\n favoritegr0_.PRISM_GUID as prism_gu8_2_, favoritegr0_.PRODUCT_SID as product_9_2_,\n favoritegr0_.PRODUCT_VIEW as product10_2_, favoritegr0_.USAGE_TYPE as usage_t11_2_, favoritegr0_.ROW_VERSION as row_ver12_2_\n from cf0.FAVORITE_GROUP favoritegr0_\n where 'FORMS.WESTLAW' = favoritegr0_.PRODUCT_SID\n and favoritegr0_.PRODUCT_VIEW in ('DefaultProductView')\n and (favoritegr0_.FAVORITE_GROUP_SID not in\n (select favoriteen1_.FAVORITE_GROUP_SID\n from cf0.FAVORITE_GROUP_MEMBER favoriteen1_\n cross join cf0.CATEGORY_PAGE categorypa2_\n where favoriteen1_.CATEGORY_PAGE_SID=categorypa2_.CATEGORY_PAGE_SID\n and categorypa2_.UNIQUE_NAME='Florida'\n and categorypa2_.IS_DELETED=0\n and favoriteen1_.IS_DELETED=0))\n and favoritegr0_.IS_DELETED=0\n and (favoritegr0_.USAGE_TYPE=0 or favoritegr0_.USAGE_TYPE is null)\n and favoritegr0_.PRISM_GUID='ia74483420000012ca23eacf87bb0ed56'\norder by favoritegr0_.POSITION desc;\n\nHere is the plan in Postgres. It did 1426 shared block hits. 
If you look at this plan it is not pushing filtering into the NOT IN subquery- it is fully resolving that part of the query driving off where UNIQUE_NAME = 'Florida'.\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=5198.22..5198.22 rows=1 width=144) (actual time=6.559..6.560 rows=1 loops=1)\n Sort Key: favoritegr0_.\"position\" DESC\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=1426\n -> Index Scan using favorite_group_idx01 on favorite_group favoritegr0_ (cost=5190.18..5198.21 rows=1 width=144) (actual time=6.514..6.515 rows=1 loops=1)\n Index Cond: (((prism_guid)::text = 'ia74483420000012ca23eacf87bb0ed56'::text) AND (is_deleted = 0))\n Filter: ((NOT (hashed SubPlan 1)) AND ((usage_type = 0) OR (usage_type IS NULL)) AND ('FORMS.WESTLAW'::text = (product_sid)::text) AND ((product_view)::text = 'DefaultProductView'::text))\n Buffers: shared hit=1423\n SubPlan 1\n -> Nested Loop (cost=0.70..5189.90 rows=1 width=33) (actual time=6.459..6.459 rows=0 loops=1)\n Buffers: shared hit=1417\n -> Index Scan using category_page_idx04 on category_page categorypa2_ (cost=0.42..5131.71 rows=7 width=33) (actual time=0.035..6.138 rows=92 loops=1)\n Index Cond: ((unique_name)::text = 'Florida'::text)\n Filter: (is_deleted = 0)\n Buffers: shared hit=1233\n -> Index Scan using favorite_group_member_idx03 on favorite_group_member favoriteen1_ (cost=0.28..8.30 rows=1 width=66) (actual time=0.003..0.003 rows=0 loops=92)\n Index Cond: ((category_page_sid)::text = (categorypa2_.category_page_sid)::text)\n Filter: (is_deleted = 0)\n Buffers: shared hit=184\nPlanning Time: 1.624 ms\nExecution Time: 6.697 ms\n\nIf I compare that to the plan Oracle uses it pushes the favoritegr0_.FAVORITE_GROUP_SID predicate into the NOT IN. 
I'm able to get a similar plan with Postgres if I change the NOT IN to a NOT EXISTS:\n\nexplain (analyze, buffers)\nselect favoritegr0_.FAVORITE_GROUP_SID as favorite1_2_, favoritegr0_.CHANGED as changed2_2_, favoritegr0_.TYPE_DISCRIMINATOR as type_dis3_2_,\n favoritegr0_.GROUP_NAME as group_na4_2_, favoritegr0_.IS_DELETED as is_delet5_2_, favoritegr0_.LAST_USED as last_use6_2_, favoritegr0_.POSITION as position7_2_,\n favoritegr0_.PRISM_GUID as prism_gu8_2_, favoritegr0_.PRODUCT_SID as product_9_2_,\n favoritegr0_.PRODUCT_VIEW as product10_2_, favoritegr0_.USAGE_TYPE as usage_t11_2_, favoritegr0_.ROW_VERSION as row_ver12_2_\n from cf0.FAVORITE_GROUP favoritegr0_\n where 'FORMS.WESTLAW' = favoritegr0_.PRODUCT_SID\n and favoritegr0_.PRODUCT_VIEW in ('DefaultProductView')\n and not exists (\n select 'x'\n from cf0.FAVORITE_GROUP_MEMBER favoriteen1_\n cross join cf0.CATEGORY_PAGE categorypa2_\n where favoriteen1_.CATEGORY_PAGE_SID=categorypa2_.CATEGORY_PAGE_SID\n and categorypa2_.UNIQUE_NAME='Florida'\n and categorypa2_.IS_DELETED=0\n and favoriteen1_.IS_DELETED=0\n and favoritegr0_.FAVORITE_GROUP_SID = favoriteen1_.FAVORITE_GROUP_SID)\n and favoritegr0_.IS_DELETED=0\n and (favoritegr0_.USAGE_TYPE=0 or favoritegr0_.USAGE_TYPE is null)\n and favoritegr0_.PRISM_GUID='ia74483420000012ca23eacf87bb0ed56'\norder by favoritegr0_.POSITION desc;\n\nHere you can see the query did 5 shared block hits- much better than the plan above. 
It's pushing the predicate into the NOT EXISTS with a Nested Loop Anti Join.\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=121.50..121.51 rows=1 width=144) (actual time=0.027..0.028 rows=1 loops=1)\n Sort Key: favoritegr0_.\"position\" DESC\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=5\n -> Nested Loop Anti Join (cost=5.11..121.49 rows=1 width=144) (actual time=0.021..0.022 rows=1 loops=1)\n Buffers: shared hit=5\n -> Index Scan using favorite_group_idx01 on favorite_group favoritegr0_ (cost=0.28..8.30 rows=1 width=144) (actual time=0.012..0.012 rows=1 loops=1)\n Index Cond: (((prism_guid)::text = 'ia74483420000012ca23eacf87bb0ed56'::text) AND (is_deleted = 0))\n Filter: (((usage_type = 0) OR (usage_type IS NULL)) AND ('FORMS.WESTLAW'::text = (product_sid)::text) AND ((product_view)::text = 'DefaultProductView'::text))\n Buffers: shared hit=3\n -> Nested Loop (cost=4.83..113.18 rows=1 width=33) (actual time=0.008..0.009 rows=0 loops=1)\n Buffers: shared hit=2\n -> Bitmap Heap Scan on favorite_group_member favoriteen1_ (cost=4.41..56.40 rows=17 width=66) (actual time=0.007..0.008 rows=0 loops=1)\n Recheck Cond: ((favoritegr0_.favorite_group_sid)::text = (favorite_group_sid)::text)\n Filter: (is_deleted = 0)\n Buffers: shared hit=2\n -> Bitmap Index Scan on favorite_group_member_idx02 (cost=0.00..4.41 rows=17 width=0) (actual time=0.003..0.003 rows=0 loops=1)\n Index Cond: ((favorite_group_sid)::text = (favoritegr0_.favorite_group_sid)::text)\n Buffers: shared hit=2\n -> Index Scan using category_page_pkey on category_page categorypa2_ (cost=0.42..3.30 rows=1 width=33) (never executed)\n Index Cond: ((category_page_sid)::text = (favoriteen1_.category_page_sid)::text)\n Filter: (((unique_name)::text = 'Florida'::text) AND (is_deleted = 0))\nPlanning Time: 0.554 ms\nExecution Time: 0.071 ms\n\nIs 
Postgres able to drive the query the same way with the NOT IN as the NOT EXISTS is doing or is that only available if the query has a NOT EXISTS? I don't see an option to push predicate or something like that using pg_hint_plan. I'm not sure if there are any optimizer settings that may tell Postgres to treat the NOT IN like a NOT EXISTS when optimizing this type of query.\n\nThanks in advance\nSteve\nThis e-mail is for the sole use of the intended recipient and contains information that may be privileged and/or confidential. If you are not an intended recipient, please notify the sender by return e-mail and delete this e-mail and any attachments. Certain required legal entity disclosures can be accessed on our website: https://www.thomsonreuters.com/en/resources/disclosures.html\n\n\n\n\n\n\n\n\n\nWe are in the process of migrating from Oracle to Postgres and the following query does much less work with Oracle vs Postgres.\n \nexplain (analyze, buffers)\nselect favoritegr0_.FAVORITE_GROUP_SID as favorite1_2_, favoritegr0_.CHANGED as changed2_2_, favoritegr0_.TYPE_DISCRIMINATOR as type_dis3_2_,\n\n favoritegr0_.GROUP_NAME as group_na4_2_, favoritegr0_.IS_DELETED as is_delet5_2_, favoritegr0_.LAST_USED as last_use6_2_, favoritegr0_.POSITION as position7_2_,\n\n favoritegr0_.PRISM_GUID as prism_gu8_2_, favoritegr0_.PRODUCT_SID as product_9_2_,\n\n favoritegr0_.PRODUCT_VIEW as product10_2_, favoritegr0_.USAGE_TYPE as usage_t11_2_, favoritegr0_.ROW_VERSION as row_ver12_2_\n\n from cf0.FAVORITE_GROUP favoritegr0_\n\n where 'FORMS.WESTLAW' = favoritegr0_.PRODUCT_SID\n\n and favoritegr0_.PRODUCT_VIEW in ('DefaultProductView')\n\n and (favoritegr0_.FAVORITE_GROUP_SID not in \n\n (select favoriteen1_.FAVORITE_GROUP_SID\n\n from cf0.FAVORITE_GROUP_MEMBER favoriteen1_\n\n cross join cf0.CATEGORY_PAGE categorypa2_\n\n where favoriteen1_.CATEGORY_PAGE_SID=categorypa2_.CATEGORY_PAGE_SID\n\n and categorypa2_.UNIQUE_NAME='Florida'\n\n and categorypa2_.IS_DELETED=0\n\n and 
favoriteen1_.IS_DELETED=0))\n\n and favoritegr0_.IS_DELETED=0\n\n and (favoritegr0_.USAGE_TYPE=0 or favoritegr0_.USAGE_TYPE is null)\n\n and favoritegr0_.PRISM_GUID='ia74483420000012ca23eacf87bb0ed56'\n\norder by favoritegr0_.POSITION desc;\n \nHere is the plan in Postgres. It did 1426 shared block hits. If you look at this plan it is not pushing filtering into the NOT IN subquery- it is fully resolving that part of the query driving off where UNIQUE_NAME = 'Florida'. \n\n \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=5198.22..5198.22 rows=1 width=144) (actual time=6.559..6.560 rows=1 loops=1)\n Sort Key: favoritegr0_.\"position\" DESC\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=1426\n -> Index Scan using favorite_group_idx01 on favorite_group favoritegr0_ (cost=5190.18..5198.21 rows=1 width=144) (actual time=6.514..6.515 rows=1 loops=1)\n Index Cond: (((prism_guid)::text = 'ia74483420000012ca23eacf87bb0ed56'::text) AND (is_deleted = 0))\n Filter: ((NOT (hashed SubPlan 1)) AND ((usage_type = 0) OR (usage_type IS NULL)) AND ('FORMS.WESTLAW'::text = (product_sid)::text) AND ((product_view)::text\n = 'DefaultProductView'::text))\n Buffers: shared hit=1423\n SubPlan 1\n -> Nested Loop (cost=0.70..5189.90 rows=1 width=33) (actual time=6.459..6.459 rows=0 loops=1)\n Buffers: shared hit=1417\n -> Index Scan using category_page_idx04 on category_page categorypa2_ (cost=0.42..5131.71 rows=7 width=33) (actual time=0.035..6.138 rows=92\n loops=1)\n Index Cond: ((unique_name)::text = 'Florida'::text)\n Filter: (is_deleted = 0)\n Buffers: shared hit=1233\n -> Index Scan using favorite_group_member_idx03 on favorite_group_member favoriteen1_ (cost=0.28..8.30 rows=1 width=66) (actual time=0.003..0.003\n rows=0 loops=92)\n Index Cond: ((category_page_sid)::text = 
(categorypa2_.category_page_sid)::text)\n Filter: (is_deleted = 0)\n Buffers: shared hit=184\nPlanning Time: 1.624 ms\nExecution Time: 6.697 ms\n \nIf I compare that to the plan Oracle uses it pushes the favoritegr0_.FAVORITE_GROUP_SID predicate into the NOT IN. I'm able to get a similar plan with Postgres if I change the NOT IN to a NOT EXISTS:\n \nexplain (analyze, buffers)\nselect favoritegr0_.FAVORITE_GROUP_SID as favorite1_2_, favoritegr0_.CHANGED as changed2_2_, favoritegr0_.TYPE_DISCRIMINATOR as type_dis3_2_,\n\n favoritegr0_.GROUP_NAME as group_na4_2_, favoritegr0_.IS_DELETED as is_delet5_2_, favoritegr0_.LAST_USED as last_use6_2_, favoritegr0_.POSITION as position7_2_,\n\n favoritegr0_.PRISM_GUID as prism_gu8_2_, favoritegr0_.PRODUCT_SID as product_9_2_,\n\n favoritegr0_.PRODUCT_VIEW as product10_2_, favoritegr0_.USAGE_TYPE as usage_t11_2_, favoritegr0_.ROW_VERSION as row_ver12_2_\n\n from cf0.FAVORITE_GROUP favoritegr0_\n\n where 'FORMS.WESTLAW' = favoritegr0_.PRODUCT_SID\n\n and favoritegr0_.PRODUCT_VIEW in ('DefaultProductView')\n\n and not exists (\n select 'x'\n\n from cf0.FAVORITE_GROUP_MEMBER favoriteen1_\n\n cross join cf0.CATEGORY_PAGE categorypa2_\n\n where favoriteen1_.CATEGORY_PAGE_SID=categorypa2_.CATEGORY_PAGE_SID\n\n and categorypa2_.UNIQUE_NAME='Florida'\n\n and categorypa2_.IS_DELETED=0\n\n and favoriteen1_.IS_DELETED=0\n and favoritegr0_.FAVORITE_GROUP_SID = favoriteen1_.FAVORITE_GROUP_SID)\n and favoritegr0_.IS_DELETED=0\n\n and (favoritegr0_.USAGE_TYPE=0 or favoritegr0_.USAGE_TYPE is null)\n\n and favoritegr0_.PRISM_GUID='ia74483420000012ca23eacf87bb0ed56'\n\norder by favoritegr0_.POSITION desc;\n \nHere you can see the query did 5 shared block hits- much better than the plan above. 
It's pushing the predicate into the NOT EXISTS with a Nested Loop Anti Join.\n \n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=121.50..121.51 rows=1 width=144) (actual time=0.027..0.028 rows=1 loops=1)\n Sort Key: favoritegr0_.\"position\" DESC\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=5\n -> Nested Loop Anti Join (cost=5.11..121.49 rows=1 width=144) (actual time=0.021..0.022 rows=1 loops=1)\n Buffers: shared hit=5\n -> Index Scan using favorite_group_idx01 on favorite_group favoritegr0_ (cost=0.28..8.30 rows=1 width=144) (actual time=0.012..0.012 rows=1 loops=1)\n Index Cond: (((prism_guid)::text = 'ia74483420000012ca23eacf87bb0ed56'::text) AND (is_deleted = 0))\n Filter: (((usage_type = 0) OR (usage_type IS NULL)) AND ('FORMS.WESTLAW'::text = (product_sid)::text) AND ((product_view)::text = 'DefaultProductView'::text))\n Buffers: shared hit=3\n -> Nested Loop (cost=4.83..113.18 rows=1 width=33) (actual time=0.008..0.009 rows=0 loops=1)\n Buffers: shared hit=2\n -> Bitmap Heap Scan on favorite_group_member favoriteen1_ (cost=4.41..56.40 rows=17 width=66) (actual time=0.007..0.008 rows=0 loops=1)\n Recheck Cond: ((favoritegr0_.favorite_group_sid)::text = (favorite_group_sid)::text)\n Filter: (is_deleted = 0)\n Buffers: shared hit=2\n -> Bitmap Index Scan on favorite_group_member_idx02 (cost=0.00..4.41 rows=17 width=0) (actual time=0.003..0.003 rows=0 loops=1)\n Index Cond: ((favorite_group_sid)::text = (favoritegr0_.favorite_group_sid)::text)\n Buffers: shared hit=2\n -> Index Scan using category_page_pkey on category_page categorypa2_ (cost=0.42..3.30 rows=1 width=33) (never executed)\n Index Cond: ((category_page_sid)::text = (favoriteen1_.category_page_sid)::text)\n Filter: (((unique_name)::text = 'Florida'::text) AND (is_deleted = 0))\nPlanning Time: 0.554 ms\nExecution Time: 0.071 ms\n \nIs 
Postgres able to drive the query the same way with the NOT IN as the NOT EXISTS is doing or is that only available if the query has a NOT EXISTS? I don't see an option to push predicate or something like that using pg_hint_plan. I'm\n not sure if there are any optimizer settings that may tell Postgres to treat the NOT IN like a NOT EXISTS when optimizing this type of query.\n \nThanks in advance\nSteve\n\nThis e-mail is for the sole use of the intended recipient and contains information that may be privileged and/or confidential. If you are not an intended recipient, please notify the sender by return e-mail and delete this e-mail and any attachments. Certain\n required legal entity disclosures can be accessed on our website: https://www.thomsonreuters.com/en/resources/disclosures.html",
"msg_date": "Tue, 14 Jun 2022 15:58:39 +0000",
"msg_from": "\"Dirschel, Steve\" <steve.dirschel@thomsonreuters.com>",
"msg_from_op": true,
"msg_subject": "Postgres NOT IN vs NOT EXISTS optimization"
},
{
"msg_contents": "I think this explains the situation well:\n\nhttps://wiki.postgresql.org/wiki/Don't_Do_This#Don.27t_use_NOT_IN\n\nOn Tue, Jun 14, 2022 at 11:59 AM Dirschel, Steve <\nsteve.dirschel@thomsonreuters.com> wrote:\n\n> We are in the process of migrating from Oracle to Postgres and the\n> following query does much less work with Oracle vs Postgres.\n>\n>\n>\n> explain (analyze, buffers)\n>\n> select favoritegr0_.FAVORITE_GROUP_SID as favorite1_2_,\n> favoritegr0_.CHANGED as changed2_2_, favoritegr0_.TYPE_DISCRIMINATOR as\n> type_dis3_2_,\n>\n> favoritegr0_.GROUP_NAME as group_na4_2_, favoritegr0_.IS_DELETED as\n> is_delet5_2_, favoritegr0_.LAST_USED as last_use6_2_, favoritegr0_.POSITION\n> as position7_2_,\n>\n> favoritegr0_.PRISM_GUID as prism_gu8_2_, favoritegr0_.PRODUCT_SID\n> as product_9_2_,\n>\n> favoritegr0_.PRODUCT_VIEW as product10_2_, favoritegr0_.USAGE_TYPE\n> as usage_t11_2_, favoritegr0_.ROW_VERSION as row_ver12_2_\n>\n> from cf0.FAVORITE_GROUP favoritegr0_\n>\n> where 'FORMS.WESTLAW' = favoritegr0_.PRODUCT_SID\n>\n> and favoritegr0_.PRODUCT_VIEW in ('DefaultProductView')\n>\n> and (favoritegr0_.FAVORITE_GROUP_SID not in\n>\n> (select favoriteen1_.FAVORITE_GROUP_SID\n>\n> from cf0.FAVORITE_GROUP_MEMBER favoriteen1_\n>\n> cross join cf0.CATEGORY_PAGE categorypa2_\n>\n> where\n> favoriteen1_.CATEGORY_PAGE_SID=categorypa2_.CATEGORY_PAGE_SID\n>\n> and categorypa2_.UNIQUE_NAME='Florida'\n>\n> and categorypa2_.IS_DELETED=0\n>\n> and favoriteen1_.IS_DELETED=0))\n>\n> and favoritegr0_.IS_DELETED=0\n>\n> and (favoritegr0_.USAGE_TYPE=0 or favoritegr0_.USAGE_TYPE is null)\n>\n> and favoritegr0_.PRISM_GUID='ia74483420000012ca23eacf87bb0ed56'\n>\n> order by favoritegr0_.POSITION desc;\n>\n>\n>\n> Here is the plan in Postgres. It did 1426 shared block hits. 
If you look\n> at this plan it is not pushing filtering into the NOT IN subquery- it is\n> fully resolving that part of the query driving off where UNIQUE_NAME =\n> 'Florida'.\n>\n>\n>\n>\n> QUERY PLAN\n>\n>\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Sort (cost=5198.22..5198.22 rows=1 width=144) (actual time=6.559..6.560\n> rows=1 loops=1)\n>\n> Sort Key: favoritegr0_.\"position\" DESC\n>\n> Sort Method: quicksort Memory: 25kB\n>\n> Buffers: shared hit=1426\n>\n> -> Index Scan using favorite_group_idx01 on favorite_group\n> favoritegr0_ (cost=5190.18..5198.21 rows=1 width=144) (actual\n> time=6.514..6.515 rows=1 loops=1)\n>\n> Index Cond: (((prism_guid)::text =\n> 'ia74483420000012ca23eacf87bb0ed56'::text) AND (is_deleted = 0))\n>\n> Filter: ((NOT (hashed SubPlan 1)) AND ((usage_type = 0) OR\n> (usage_type IS NULL)) AND ('FORMS.WESTLAW'::text = (product_sid)::text) AND\n> ((product_view)::text = 'DefaultProductView'::text))\n>\n> Buffers: shared hit=1423\n>\n> SubPlan 1\n>\n> -> Nested Loop (cost=0.70..5189.90 rows=1 width=33) (actual\n> time=6.459..6.459 rows=0 loops=1)\n>\n> Buffers: shared hit=1417\n>\n> -> Index Scan using category_page_idx04 on category_page\n> categorypa2_ (cost=0.42..5131.71 rows=7 width=33) (actual\n> time=0.035..6.138 rows=92 loops=1)\n>\n> Index Cond: ((unique_name)::text = 'Florida'::text)\n>\n> Filter: (is_deleted = 0)\n>\n> Buffers: shared hit=1233\n>\n> -> Index Scan using favorite_group_member_idx03 on\n> favorite_group_member favoriteen1_ (cost=0.28..8.30 rows=1 width=66)\n> (actual time=0.003..0.003 rows=0 loops=92)\n>\n> Index Cond: ((category_page_sid)::text =\n> (categorypa2_.category_page_sid)::text)\n>\n> Filter: (is_deleted = 0)\n>\n> Buffers: shared hit=184\n>\n> Planning Time: 1.624 ms\n>\n> Execution Time: 6.697 ms\n>\n>\n>\n> If I compare that to the plan 
Oracle uses it pushes the\n> favoritegr0_.FAVORITE_GROUP_SID predicate into the NOT IN. I'm able to get\n> a similar plan with Postgres if I change the NOT IN to a NOT EXISTS:\n>\n>\n>\n> explain (analyze, buffers)\n>\n> select favoritegr0_.FAVORITE_GROUP_SID as favorite1_2_,\n> favoritegr0_.CHANGED as changed2_2_, favoritegr0_.TYPE_DISCRIMINATOR as\n> type_dis3_2_,\n>\n> favoritegr0_.GROUP_NAME as group_na4_2_, favoritegr0_.IS_DELETED as\n> is_delet5_2_, favoritegr0_.LAST_USED as last_use6_2_, favoritegr0_.POSITION\n> as position7_2_,\n>\n> favoritegr0_.PRISM_GUID as prism_gu8_2_, favoritegr0_.PRODUCT_SID\n> as product_9_2_,\n>\n> favoritegr0_.PRODUCT_VIEW as product10_2_, favoritegr0_.USAGE_TYPE\n> as usage_t11_2_, favoritegr0_.ROW_VERSION as row_ver12_2_\n>\n> from cf0.FAVORITE_GROUP favoritegr0_\n>\n> where 'FORMS.WESTLAW' = favoritegr0_.PRODUCT_SID\n>\n> and favoritegr0_.PRODUCT_VIEW in ('DefaultProductView')\n>\n> and not exists (\n>\n> select 'x'\n>\n> from cf0.FAVORITE_GROUP_MEMBER favoriteen1_\n>\n> cross join cf0.CATEGORY_PAGE categorypa2_\n>\n> where\n> favoriteen1_.CATEGORY_PAGE_SID=categorypa2_.CATEGORY_PAGE_SID\n>\n> and categorypa2_.UNIQUE_NAME='Florida'\n>\n> and categorypa2_.IS_DELETED=0\n>\n> and favoriteen1_.IS_DELETED=0\n>\n> and favoritegr0_.FAVORITE_GROUP_SID =\n> favoriteen1_.FAVORITE_GROUP_SID)\n>\n> and favoritegr0_.IS_DELETED=0\n>\n> and (favoritegr0_.USAGE_TYPE=0 or favoritegr0_.USAGE_TYPE is null)\n>\n> and favoritegr0_.PRISM_GUID='ia74483420000012ca23eacf87bb0ed56'\n>\n> order by favoritegr0_.POSITION desc;\n>\n>\n>\n> Here you can see the query did 5 shared block hits- much better than the\n> plan above. 
It's pushing the predicate into the NOT EXISTS with a Nested\n> Loop Anti Join.\n>\n>\n>\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Sort (cost=121.50..121.51 rows=1 width=144) (actual time=0.027..0.028\n> rows=1 loops=1)\n>\n> Sort Key: favoritegr0_.\"position\" DESC\n>\n> Sort Method: quicksort Memory: 25kB\n>\n> Buffers: shared hit=5\n>\n> -> Nested Loop Anti Join (cost=5.11..121.49 rows=1 width=144) (actual\n> time=0.021..0.022 rows=1 loops=1)\n>\n> Buffers: shared hit=5\n>\n> -> Index Scan using favorite_group_idx01 on favorite_group\n> favoritegr0_ (cost=0.28..8.30 rows=1 width=144) (actual time=0.012..0.012\n> rows=1 loops=1)\n>\n> Index Cond: (((prism_guid)::text =\n> 'ia74483420000012ca23eacf87bb0ed56'::text) AND (is_deleted = 0))\n>\n> Filter: (((usage_type = 0) OR (usage_type IS NULL)) AND\n> ('FORMS.WESTLAW'::text = (product_sid)::text) AND ((product_view)::text =\n> 'DefaultProductView'::text))\n>\n> Buffers: shared hit=3\n>\n> -> Nested Loop (cost=4.83..113.18 rows=1 width=33) (actual\n> time=0.008..0.009 rows=0 loops=1)\n>\n> Buffers: shared hit=2\n>\n> -> Bitmap Heap Scan on favorite_group_member favoriteen1_\n> (cost=4.41..56.40 rows=17 width=66) (actual time=0.007..0.008 rows=0\n> loops=1)\n>\n> Recheck Cond:\n> ((favoritegr0_.favorite_group_sid)::text = (favorite_group_sid)::text)\n>\n> Filter: (is_deleted = 0)\n>\n> Buffers: shared hit=2\n>\n> -> Bitmap Index Scan on favorite_group_member_idx02\n> (cost=0.00..4.41 rows=17 width=0) (actual time=0.003..0.003 rows=0 loops=1)\n>\n> Index Cond: ((favorite_group_sid)::text =\n> (favoritegr0_.favorite_group_sid)::text)\n>\n> Buffers: shared hit=2\n>\n> -> Index Scan using category_page_pkey on category_page\n> categorypa2_ (cost=0.42..3.30 rows=1 width=33) (never executed)\n>\n> Index Cond: ((category_page_sid)::text =\n> 
(favoriteen1_.category_page_sid)::text)\n>\n> Filter: (((unique_name)::text = 'Florida'::text) AND\n> (is_deleted = 0))\n>\n> Planning Time: 0.554 ms\n>\n> Execution Time: 0.071 ms\n>\n>\n>\n> Is Postgres able to drive the query the same way with the NOT IN as the\n> NOT EXISTS is doing or is that only available if the query has a NOT\n> EXISTS? I don't see an option to push predicate or something like that\n> using pg_hint_plan. I'm not sure if there are any optimizer settings that\n> may tell Postgres to treat the NOT IN like a NOT EXISTS when optimizing\n> this type of query.\n>\n>\n>\n> Thanks in advance\n>\n> Steve\n> This e-mail is for the sole use of the intended recipient and contains\n> information that may be privileged and/or confidential. If you are not an\n> intended recipient, please notify the sender by return e-mail and delete\n> this e-mail and any attachments. Certain required legal entity disclosures\n> can be accessed on our website:\n> https://www.thomsonreuters.com/en/resources/disclosures.html\n>\n\nI think this explains the situation well:https://wiki.postgresql.org/wiki/Don't_Do_This#Don.27t_use_NOT_INOn Tue, Jun 14, 2022 at 11:59 AM Dirschel, Steve <steve.dirschel@thomsonreuters.com> wrote:\n\n\nWe are in the process of migrating from Oracle to Postgres and the following query does much less work with Oracle vs Postgres.\n \nexplain (analyze, buffers)\nselect favoritegr0_.FAVORITE_GROUP_SID as favorite1_2_, favoritegr0_.CHANGED as changed2_2_, favoritegr0_.TYPE_DISCRIMINATOR as type_dis3_2_,\n\n favoritegr0_.GROUP_NAME as group_na4_2_, favoritegr0_.IS_DELETED as is_delet5_2_, favoritegr0_.LAST_USED as last_use6_2_, favoritegr0_.POSITION as position7_2_,\n\n favoritegr0_.PRISM_GUID as prism_gu8_2_, favoritegr0_.PRODUCT_SID as product_9_2_,\n\n favoritegr0_.PRODUCT_VIEW as product10_2_, favoritegr0_.USAGE_TYPE as usage_t11_2_, favoritegr0_.ROW_VERSION as row_ver12_2_\n\n from cf0.FAVORITE_GROUP favoritegr0_\n\n where 'FORMS.WESTLAW' = 
favoritegr0_.PRODUCT_SID\n\n and favoritegr0_.PRODUCT_VIEW in ('DefaultProductView')\n\n and (favoritegr0_.FAVORITE_GROUP_SID not in \n\n (select favoriteen1_.FAVORITE_GROUP_SID\n\n from cf0.FAVORITE_GROUP_MEMBER favoriteen1_\n\n cross join cf0.CATEGORY_PAGE categorypa2_\n\n where favoriteen1_.CATEGORY_PAGE_SID=categorypa2_.CATEGORY_PAGE_SID\n\n and categorypa2_.UNIQUE_NAME='Florida'\n\n and categorypa2_.IS_DELETED=0\n\n and favoriteen1_.IS_DELETED=0))\n\n and favoritegr0_.IS_DELETED=0\n\n and (favoritegr0_.USAGE_TYPE=0 or favoritegr0_.USAGE_TYPE is null)\n\n and favoritegr0_.PRISM_GUID='ia74483420000012ca23eacf87bb0ed56'\n\norder by favoritegr0_.POSITION desc;\n \nHere is the plan in Postgres. It did 1426 shared block hits. If you look at this plan it is not pushing filtering into the NOT IN subquery- it is fully resolving that part of the query driving off where UNIQUE_NAME = 'Florida'. \n\n \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=5198.22..5198.22 rows=1 width=144) (actual time=6.559..6.560 rows=1 loops=1)\n Sort Key: favoritegr0_.\"position\" DESC\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=1426\n -> Index Scan using favorite_group_idx01 on favorite_group favoritegr0_ (cost=5190.18..5198.21 rows=1 width=144) (actual time=6.514..6.515 rows=1 loops=1)\n Index Cond: (((prism_guid)::text = 'ia74483420000012ca23eacf87bb0ed56'::text) AND (is_deleted = 0))\n Filter: ((NOT (hashed SubPlan 1)) AND ((usage_type = 0) OR (usage_type IS NULL)) AND ('FORMS.WESTLAW'::text = (product_sid)::text) AND ((product_view)::text\n = 'DefaultProductView'::text))\n Buffers: shared hit=1423\n SubPlan 1\n -> Nested Loop (cost=0.70..5189.90 rows=1 width=33) (actual time=6.459..6.459 rows=0 loops=1)\n Buffers: shared hit=1417\n -> Index Scan using category_page_idx04 on category_page 
categorypa2_ (cost=0.42..5131.71 rows=7 width=33) (actual time=0.035..6.138 rows=92\n loops=1)\n Index Cond: ((unique_name)::text = 'Florida'::text)\n Filter: (is_deleted = 0)\n Buffers: shared hit=1233\n -> Index Scan using favorite_group_member_idx03 on favorite_group_member favoriteen1_ (cost=0.28..8.30 rows=1 width=66) (actual time=0.003..0.003\n rows=0 loops=92)\n Index Cond: ((category_page_sid)::text = (categorypa2_.category_page_sid)::text)\n Filter: (is_deleted = 0)\n Buffers: shared hit=184\nPlanning Time: 1.624 ms\nExecution Time: 6.697 ms\n \nIf I compare that to the plan Oracle uses it pushes the favoritegr0_.FAVORITE_GROUP_SID predicate into the NOT IN. I'm able to get a similar plan with Postgres if I change the NOT IN to a NOT EXISTS:\n \nexplain (analyze, buffers)\nselect favoritegr0_.FAVORITE_GROUP_SID as favorite1_2_, favoritegr0_.CHANGED as changed2_2_, favoritegr0_.TYPE_DISCRIMINATOR as type_dis3_2_,\n\n favoritegr0_.GROUP_NAME as group_na4_2_, favoritegr0_.IS_DELETED as is_delet5_2_, favoritegr0_.LAST_USED as last_use6_2_, favoritegr0_.POSITION as position7_2_,\n\n favoritegr0_.PRISM_GUID as prism_gu8_2_, favoritegr0_.PRODUCT_SID as product_9_2_,\n\n favoritegr0_.PRODUCT_VIEW as product10_2_, favoritegr0_.USAGE_TYPE as usage_t11_2_, favoritegr0_.ROW_VERSION as row_ver12_2_\n\n from cf0.FAVORITE_GROUP favoritegr0_\n\n where 'FORMS.WESTLAW' = favoritegr0_.PRODUCT_SID\n\n and favoritegr0_.PRODUCT_VIEW in ('DefaultProductView')\n\n and not exists (\n select 'x'\n\n from cf0.FAVORITE_GROUP_MEMBER favoriteen1_\n\n cross join cf0.CATEGORY_PAGE categorypa2_\n\n where favoriteen1_.CATEGORY_PAGE_SID=categorypa2_.CATEGORY_PAGE_SID\n\n and categorypa2_.UNIQUE_NAME='Florida'\n\n and categorypa2_.IS_DELETED=0\n\n and favoriteen1_.IS_DELETED=0\n and favoritegr0_.FAVORITE_GROUP_SID = favoriteen1_.FAVORITE_GROUP_SID)\n and favoritegr0_.IS_DELETED=0\n\n and (favoritegr0_.USAGE_TYPE=0 or favoritegr0_.USAGE_TYPE is null)\n\n and 
favoritegr0_.PRISM_GUID='ia74483420000012ca23eacf87bb0ed56'\n\norder by favoritegr0_.POSITION desc;\n \nHere you can see the query did 5 shared block hits- much better than the plan above. It's pushing the predicate into the NOT EXISTS with a Nested Loop Anti Join.\n \n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=121.50..121.51 rows=1 width=144) (actual time=0.027..0.028 rows=1 loops=1)\n Sort Key: favoritegr0_.\"position\" DESC\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=5\n -> Nested Loop Anti Join (cost=5.11..121.49 rows=1 width=144) (actual time=0.021..0.022 rows=1 loops=1)\n Buffers: shared hit=5\n -> Index Scan using favorite_group_idx01 on favorite_group favoritegr0_ (cost=0.28..8.30 rows=1 width=144) (actual time=0.012..0.012 rows=1 loops=1)\n Index Cond: (((prism_guid)::text = 'ia74483420000012ca23eacf87bb0ed56'::text) AND (is_deleted = 0))\n Filter: (((usage_type = 0) OR (usage_type IS NULL)) AND ('FORMS.WESTLAW'::text = (product_sid)::text) AND ((product_view)::text = 'DefaultProductView'::text))\n Buffers: shared hit=3\n -> Nested Loop (cost=4.83..113.18 rows=1 width=33) (actual time=0.008..0.009 rows=0 loops=1)\n Buffers: shared hit=2\n -> Bitmap Heap Scan on favorite_group_member favoriteen1_ (cost=4.41..56.40 rows=17 width=66) (actual time=0.007..0.008 rows=0 loops=1)\n Recheck Cond: ((favoritegr0_.favorite_group_sid)::text = (favorite_group_sid)::text)\n Filter: (is_deleted = 0)\n Buffers: shared hit=2\n -> Bitmap Index Scan on favorite_group_member_idx02 (cost=0.00..4.41 rows=17 width=0) (actual time=0.003..0.003 rows=0 loops=1)\n Index Cond: ((favorite_group_sid)::text = (favoritegr0_.favorite_group_sid)::text)\n Buffers: shared hit=2\n -> Index Scan using category_page_pkey on category_page categorypa2_ (cost=0.42..3.30 rows=1 width=33) (never executed)\n Index Cond: 
((category_page_sid)::text = (favoriteen1_.category_page_sid)::text)\n Filter: (((unique_name)::text = 'Florida'::text) AND (is_deleted = 0))\nPlanning Time: 0.554 ms\nExecution Time: 0.071 ms\n \nIs Postgres able to drive the query the same way with the NOT IN as the NOT EXISTS is doing or is that only available if the query has a NOT EXISTS? I don't see an option to push predicate or something like that using pg_hint_plan. I'm\n not sure if there are any optimizer settings that may tell Postgres to treat the NOT IN like a NOT EXISTS when optimizing this type of query.\n \nThanks in advance\nSteve\n\nThis e-mail is for the sole use of the intended recipient and contains information that may be privileged and/or confidential. If you are not an intended recipient, please notify the sender by return e-mail and delete this e-mail and any attachments. Certain\n required legal entity disclosures can be accessed on our website: https://www.thomsonreuters.com/en/resources/disclosures.html",
"msg_date": "Tue, 14 Jun 2022 12:06:52 -0400",
"msg_from": "Jeremy Smith <jeremy@musicsmith.net>",
"msg_from_op": false,
"msg_subject": "Re: Postgres NOT IN vs NOT EXISTS optimization"
},
{
"msg_contents": "\"Dirschel, Steve\" <steve.dirschel@thomsonreuters.com> writes:\n> Is Postgres able to drive the query the same way with the NOT IN as the\n> NOT EXISTS is doing or is that only available if the query has a NOT\n> EXISTS?\n\nNOT IN is not optimized very well in PG, because of the strange\nsemantics that the SQL spec demands when the sub-query produces any\nnull values. There's been some interest in detecting cases where\nwe can prove that the subquery produces no nulls and then optimizing\nit into NOT EXISTS, but it seems like a lot of work for not-great\nreturn, so nothing's happened (yet). Perhaps Oracle does something\nlike that already, or perhaps they're just ignoring the semantics\nproblem; they do not have a reputation for hewing closely to the\nspec on behavior regarding nulls.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jun 2022 12:09:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgres NOT IN vs NOT EXISTS optimization"
},
{
"msg_contents": "On Tue, Jun 14, 2022 at 12:09:16PM -0400, Tom Lane wrote:\n> \"Dirschel, Steve\" <steve.dirschel@thomsonreuters.com> writes:\n> > Is Postgres able to drive the query the same way with the NOT IN as the\n> > NOT EXISTS is doing or is that only available if the query has a NOT\n> > EXISTS?\n> \n> NOT IN is not optimized very well in PG, because of the strange\n> semantics that the SQL spec demands when the sub-query produces any\n> null values. There's been some interest in detecting cases where\n> we can prove that the subquery produces no nulls and then optimizing\n> it into NOT EXISTS, but it seems like a lot of work for not-great\n> return, so nothing's happened (yet). Perhaps Oracle does something\n> like that already, or perhaps they're just ignoring the semantics\n> problem; they do not have a reputation for hewing closely to the\n> spec on behavior regarding nulls.\n\nI was just now researching NOT IN behavior and remembered this thread,\nso wanted to give a simplified example. If you set up tables like this:\n\n\tCREATE TABLE small AS\n\t\tSELECT * FROM generate_series(1, 10) AS t(x);\n\n\tCREATE TABLE large AS SELECT small.x\n\t\tFROM small CROSS JOIN generate_series(1, 1000) AS t(x);\n\n\tINSERT INTO small VALUES (11), (12);\n\n\tANALYZE small, large;\n\nThese IN and EXISTS/NOT EXISTS queries look fine. 
using hash joins:\n\n\tEXPLAIN SELECT small.x\n\tFROM small\n\tWHERE small.x IN (SELECT large.x FROM large);\n\t QUERY PLAN\n\t-----------------------------------------------------------------------------\n\t Hash Join (cost=170.22..171.49 rows=10 width=4)\n\t Hash Cond: (small.x = large.x)\n\t -> Seq Scan on small (cost=0.00..1.12 rows=12 width=4)\n\t -> Hash (cost=170.10..170.10 rows=10 width=4)\n\t -> HashAggregate (cost=170.00..170.10 rows=10 width=4)\n\t Group Key: large.x\n\t -> Seq Scan on large (cost=0.00..145.00 rows=10000 width=4)\n\t\n\tEXPLAIN SELECT small.x\n\tFROM small\n\tWHERE EXISTS (SELECT large.x FROM large WHERE large.x = small.x);\n\t QUERY PLAN\n\t-----------------------------------------------------------------------------\n\t Hash Join (cost=170.22..171.49 rows=10 width=4)\n\t Hash Cond: (small.x = large.x)\n\t -> Seq Scan on small (cost=0.00..1.12 rows=12 width=4)\n\t -> Hash (cost=170.10..170.10 rows=10 width=4)\n\t -> HashAggregate (cost=170.00..170.10 rows=10 width=4)\n\t Group Key: large.x\n\t -> Seq Scan on large (cost=0.00..145.00 rows=10000 width=4)\n\t\n\tEXPLAIN SELECT small.x\n\tFROM small\n\tWHERE NOT EXISTS (SELECT large.x FROM large WHERE large.x = small.x);\n\t QUERY PLAN\n\t-----------------------------------------------------------------------\n\t Hash Anti Join (cost=270.00..271.20 rows=2 width=4)\n\t Hash Cond: (small.x = large.x)\n\t -> Seq Scan on small (cost=0.00..1.12 rows=12 width=4)\n\t -> Hash (cost=145.00..145.00 rows=10000 width=4)\n\t -> Seq Scan on large (cost=0.00..145.00 rows=10000 width=4)\n\nThese NOT IN queries all use sequential scans, and IS NOT NULL does not help:\n\n\tEXPLAIN SELECT small.x\n\tFROM small\n\tWHERE small.x NOT IN (SELECT large.x FROM large);\n\t QUERY PLAN\n\t-------------------------------------------------------------------\n\t Seq Scan on small (cost=170.00..171.15 rows=6 width=4)\n\t Filter: (NOT (hashed SubPlan 1))\n\t SubPlan 1\n\t -> Seq Scan on large (cost=0.00..145.00 rows=10000 
width=4)\n\t\n\tEXPLAIN SELECT small.x\n\tFROM small\n\tWHERE small.x NOT IN (SELECT large.x FROM large WHERE large.x IS NOT NULL);\n\t QUERY PLAN\n\t-------------------------------------------------------------------\n\t Seq Scan on small (cost=170.00..171.15 rows=6 width=4)\n\t Filter: (NOT (hashed SubPlan 1))\n\t SubPlan 1\n\t -> Seq Scan on large (cost=0.00..145.00 rows=10000 width=4)\n\t Filter: (x IS NOT NULL)\n\t\nIs converting NOT IN to NOT EXISTS our only option? Couldn't we start\nto create the hash and just switch to always returning NULL if we see\nany NULLs while we are creating the hash?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 11 Aug 2022 16:12:33 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgres NOT IN vs NOT EXISTS optimization"
},
{
"msg_contents": "On Tue, Jun 14, 2022 at 12:09:16PM -0400, Tom Lane wrote:\n> \"Dirschel, Steve\" <steve.dirschel@thomsonreuters.com> writes:\n> > Is Postgres able to drive the query the same way with the NOT IN as the\n> > NOT EXISTS is doing or is that only available if the query has a NOT\n> > EXISTS?\n> \n> NOT IN is not optimized very well in PG, because of the strange\n> semantics that the SQL spec demands when the sub-query produces any\n> null values. There's been some interest in detecting cases where\n> we can prove that the subquery produces no nulls and then optimizing\n> it into NOT EXISTS, but it seems like a lot of work for not-great\n> return, so nothing's happened (yet). Perhaps Oracle does something\n> like that already, or perhaps they're just ignoring the semantics\n> problem; they do not have a reputation for hewing closely to the\n> spec on behavior regarding nulls.\n\n[ Now sent to hackers, where it really belongs. ]\n\nI was just now researching NOT IN behavior and remembered this thread,\nso wanted to give a simplified example. If you set up tables like this:\n\n\tCREATE TABLE small AS\n\t\tSELECT * FROM generate_series(1, 10) AS t(x);\n\n\tCREATE TABLE large AS SELECT small.x\n\t\tFROM small CROSS JOIN generate_series(1, 1000) AS t(x);\n\n\tINSERT INTO small VALUES (11), (12);\n\n\tANALYZE small, large;\n\nThese IN and EXISTS/NOT EXISTS queries look fine. 
using hash joins:\n\n\tEXPLAIN SELECT small.x\n\tFROM small\n\tWHERE small.x IN (SELECT large.x FROM large);\n\t QUERY PLAN\n\t-----------------------------------------------------------------------------\n\t Hash Join (cost=170.22..171.49 rows=10 width=4)\n\t Hash Cond: (small.x = large.x)\n\t -> Seq Scan on small (cost=0.00..1.12 rows=12 width=4)\n\t -> Hash (cost=170.10..170.10 rows=10 width=4)\n\t -> HashAggregate (cost=170.00..170.10 rows=10 width=4)\n\t Group Key: large.x\n\t -> Seq Scan on large (cost=0.00..145.00 rows=10000 width=4)\n\t\n\tEXPLAIN SELECT small.x\n\tFROM small\n\tWHERE EXISTS (SELECT large.x FROM large WHERE large.x = small.x);\n\t QUERY PLAN\n\t-----------------------------------------------------------------------------\n\t Hash Join (cost=170.22..171.49 rows=10 width=4)\n\t Hash Cond: (small.x = large.x)\n\t -> Seq Scan on small (cost=0.00..1.12 rows=12 width=4)\n\t -> Hash (cost=170.10..170.10 rows=10 width=4)\n\t -> HashAggregate (cost=170.00..170.10 rows=10 width=4)\n\t Group Key: large.x\n\t -> Seq Scan on large (cost=0.00..145.00 rows=10000 width=4)\n\t\n\tEXPLAIN SELECT small.x\n\tFROM small\n\tWHERE NOT EXISTS (SELECT large.x FROM large WHERE large.x = small.x);\n\t QUERY PLAN\n\t-----------------------------------------------------------------------\n\t Hash Anti Join (cost=270.00..271.20 rows=2 width=4)\n\t Hash Cond: (small.x = large.x)\n\t -> Seq Scan on small (cost=0.00..1.12 rows=12 width=4)\n\t -> Hash (cost=145.00..145.00 rows=10000 width=4)\n\t -> Seq Scan on large (cost=0.00..145.00 rows=10000 width=4)\n\nThese NOT IN queries all use sequential scans, and IS NOT NULL does not help:\n\n\tEXPLAIN SELECT small.x\n\tFROM small\n\tWHERE small.x NOT IN (SELECT large.x FROM large);\n\t QUERY PLAN\n\t-------------------------------------------------------------------\n\t Seq Scan on small (cost=170.00..171.15 rows=6 width=4)\n\t Filter: (NOT (hashed SubPlan 1))\n\t SubPlan 1\n\t -> Seq Scan on large (cost=0.00..145.00 rows=10000 
width=4)\n\t\n\tEXPLAIN SELECT small.x\n\tFROM small\n\tWHERE small.x NOT IN (SELECT large.x FROM large WHERE large.x IS NOT NULL);\n\t QUERY PLAN\n\t-------------------------------------------------------------------\n\t Seq Scan on small (cost=170.00..171.15 rows=6 width=4)\n\t Filter: (NOT (hashed SubPlan 1))\n\t SubPlan 1\n\t -> Seq Scan on large (cost=0.00..145.00 rows=10000 width=4)\n\t Filter: (x IS NOT NULL)\n\t\nIs converting NOT IN to NOT EXISTS our only option? Couldn't we start\nto create the hash and just switch to always returning NULL if we see\nany NULLs while we are creating the hash?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 11 Aug 2022 16:50:48 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgres NOT IN vs NOT EXISTS optimization"
}
] |
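An aside for readers following the thread above: the NULL behavior Tom Lane and Bruce Momjian describe is plain SQL three-valued logic, reproducible outside PostgreSQL. The sketch below uses Python's sqlite3 module (SQLite applies the same semantics to NOT IN) with a scaled-down version of Bruce's small/large tables; the table contents here are illustrative, not the thread's benchmark data.

```python
import sqlite3

# Scaled-down version of the small/large setup from the thread, using
# SQLite, which follows the same SQL three-valued logic for NOT IN.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE small(x INTEGER);
    CREATE TABLE large(x INTEGER);
    INSERT INTO small VALUES (1), (2), (11);
    INSERT INTO large VALUES (1), (2);
""")

q_not_in = "SELECT x FROM small WHERE x NOT IN (SELECT x FROM large)"
q_not_exists = ("SELECT x FROM small WHERE NOT EXISTS "
                "(SELECT 1 FROM large WHERE large.x = small.x)")

# No NULLs in the subquery: the two forms agree, both return 11.
print(cur.execute(q_not_in).fetchall())      # [(11,)]
print(cur.execute(q_not_exists).fetchall())  # [(11,)]

# One NULL in the subquery: "x NOT IN (...)" is never true, because
# "x <> NULL" is unknown for every row, so NOT IN returns nothing --
# while NOT EXISTS (the anti-join form) still returns 11.
cur.execute("INSERT INTO large VALUES (NULL)")
print(cur.execute(q_not_in).fetchall())      # []
print(cur.execute(q_not_exists).fetchall())  # [(11,)]
```

This difference is exactly why proving the subquery NULL-free is a precondition for the NOT IN to NOT EXISTS rewrite discussed in the thread.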
[
{
"msg_contents": "Here's a couple of small patches I came up with while doing some related\nwork on TAP tests.\n\nThe first makes the argument for $node->config_data() optional. If it's\nnot supplied, pg_config is called without an argument and the whole\nresult is returned. Currently, if you try that you get back a nasty and\ncryptic error.\n\nThe second changes the new GUCs TAP test to check against the installed\npostgresql.conf.sample rather than the one in the original source\nlocation. There are probably arguments both ways, but if we ever decided\nto postprocess the file before installation, this would do the right thing.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 14 Jun 2022 12:08:16 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Small TAP improvements"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> The first makes the argument for $node->config_data() optional. If it's\n> not supplied, pg_config is called without an argument and the whole\n> result is returned. Currently, if you try that you get back a nasty and\n> cryptic error.\n\nNo opinion about whether that's useful.\n\n> The second changes the new GUCs TAP test to check against the installed\n> postgresql.conf.sample rather than the one in the original source\n> location. There are probably arguments both ways, but if we ever decided\n> to postprocess the file before installation, this would do the right thing.\n\nSeems like a good idea, especially since it also makes the test code\nshorter and more robust(-looking).\n\nLooking at the patch itself,\n\n+my $share_dir = $node->config_data('--sharedir');\n+chomp $share_dir;\n+$share_dir =~ s/^SHAREDIR = //;\n+my $sample_file = \"$share_dir/postgresql.conf.sample\";\n\nI kind of wonder why config_data() isn't doing the chomp itself;\nwhat caller would not want that? Pulling off the variable name\nmight be helpful too, since it's hard to conceive of a use-case\nwhere you don't also need that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jun 2022 12:20:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Small TAP improvements"
},
{
"msg_contents": "The comment atop config_data still mentions $option, but after the patch that's no longer a name used in the function. (I have to admit that using @_ in the body of the function was a little bit confusing to me at first. Did you do that in order to allow multiple options to be passed?)\n\nAlso: if you give an option to pg_config, the output is not prefixed with the variable name. So you don't need to strip the \"SHAREDIR =\" bit: there isn't any. This is true even if you give multiple options:\n\nschmee: master 0$ pg_config --sharedir --includedir\n/home/alvherre/Code/pgsql-install/REL9_6_STABLE/share\n/home/alvherre/Code/pgsql-install/REL9_6_STABLE/include\n\n\n",
"msg_date": "Tue, 14 Jun 2022 18:44:09 +0200",
"msg_from": "=?UTF-8?Q?=C3=81lvaro_Herrera?= <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Small TAP improvements"
},
{
"msg_contents": "\nOn 2022-06-14 Tu 12:20, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> The first makes the argument for $node->config_data() optional. If it's\n>> not supplied, pg_config is called without an argument and the whole\n>> result is returned. Currently, if you try that you get back a nasty and\n>> cryptic error.\n> No opinion about whether that's useful.\n>\n>> The second changes the new GUCs TAP test to check against the installed\n>> postgresql.conf.sample rather than the one in the original source\n>> location. There are probably arguments both ways, but if we ever decided\n>> to postprocess the file before installation, this would do the right thing.\n> Seems like a good idea, especially since it also makes the test code\n> shorter and more robust(-looking).\n>\n> Looking at the patch itself,\n>\n> +my $share_dir = $node->config_data('--sharedir');\n> +chomp $share_dir;\n> +$share_dir =~ s/^SHAREDIR = //;\n> +my $sample_file = \"$share_dir/postgresql.conf.sample\";\n>\n> I kind of wonder why config_data() isn't doing the chomp itself;\n> what caller would not want that? Pulling off the variable name\n> might be helpful too, since it's hard to conceive of a use-case\n> where you don't also need that.\n\n\nIt already chomps the output, and pg_config doesn't output \"SETTING = \"\nif given an option argument, so we could just remove those two lines -\nthey are remnants of an earlier version. I'll do it that way.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 14 Jun 2022 13:14:15 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Small TAP improvements"
},
{
"msg_contents": "On 2022-06-14 Tu 12:44, Álvaro Herrera wrote:\n> The comment atop config_data still mentions $option, but after the patch that's no longer a name used in the function. (I have to admit that using @_ in the body of the function was a little bit confusing to me at first. Did you do that in order to allow multiple options to be passed?)\n>\n> Also: if you give an option to pg_config, the output is not prefixed with the variable name. So you don't need to strip the \"SHAREDIR =\" bit: there isn't any. This is true even if you give multiple options:\n>\n> schmee: master 0$ pg_config --sharedir --includedir\n> /home/alvherre/Code/pgsql-install/REL9_6_STABLE/share\n> /home/alvherre/Code/pgsql-install/REL9_6_STABLE/include\n\n\nOK, here's a more principled couple of patches. For config_data, if you\ngive multiple options it gives you back the list of values. If you don't\nspecify any, in scalar context it just gives you back all of pg_config's\noutput, but in array context it gives you a map, so you should be able\nto say things like:\n\n my %node_config = $node->config_data;\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 14 Jun 2022 16:21:48 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Small TAP improvements"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> OK, here's a more principled couple of patches. For config_data, if you\n> give multiple options it gives you back the list of values. If you don't\n> specify any, in scalar context it just gives you back all of pg_config's\n> output, but in array context it gives you a map, so you should be able\n> to say things like:\n> my %node_config = $node->config_data;\n\nMight be overkill, but since you wrote it already, looks OK to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jun 2022 17:08:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Small TAP improvements"
},
{
"msg_contents": "On Tue, Jun 14, 2022 at 12:20:56PM -0400, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> The second changes the new GUCs TAP test to check against the installed\n>> postgresql.conf.sample rather than the one in the original source\n>> location. There are probably arguments both ways, but if we ever decided\n>> to postprocess the file before installation, this would do the right thing.\n> \n> Seems like a good idea, especially since it also makes the test code\n> shorter and more robust(-looking).\n\nIt seems to me that you did not look at the git history very closely.\nThe first version of 003_check_guc.pl did exactly what 0002 is\nproposing to do, see b0a55f4. That's also why config_data() has been\nintroduced in the first place. This original logic has been reverted\nonce shortly after, as of 52377bb, per a complain by Christoph Berg\nbecause this broke some of the assumptions the custom patches of\nDebian relied on:\nhttps://www.postgresql.org/message-id/YgYw25OXV5men8Fj@msg.df7cb.de\n\nAnd it was also pointed out that we'd better use the version in the\nsource tree rather than a logic that depends on finding the path from\nthe output of pg_config with an installation tree assumed to exist\n(there should be one for installcheck anyway), as of:\nhttps://www.postgresql.org/message-id/2023925.1644591595@sss.pgh.pa.us\n\nIf the change of 0002 is applied, we will just loop back to the\noriginal issue with Debian. So I am adding Christoph in CC, as he has\nalso mentioned that the patch applied to PG for Debian that\nmanipulates the installation paths has been removed, but I may be\nwrong in assuming that it is the case.\n--\nMichael",
"msg_date": "Wed, 15 Jun 2022 08:13:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Small TAP improvements"
},
{
"msg_contents": "On Tue, Jun 14, 2022 at 05:08:28PM -0400, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > OK, here's a more principled couple of patches. For config_data, if you\n> > give multiple options it gives you back the list of values. If you don't\n> > specify any, in scalar context it just gives you back all of pg_config's\n> > output, but in array context it gives you a map, so you should be able\n> > to say things like:\n> > my %node_config = $node->config_data;\n> \n> Might be overkill, but since you wrote it already, looks OK to me.\n\n+ # exactly one option: hand back the output (minus LF)\n+ return $stdout if (@options == 1);\n+ my @lines = split(/\\n/, $stdout);\n+ # more than one option: hand back the list of values;\n+ return @lines if (@options);\n+ # no options, array context: return a map\n+ my @map;\n+ foreach my $line (@lines)\n+ {\n+ my ($k,$v) = split (/ = /,$line,2);\n+ push(@map, $k, $v);\n+ }\n\nThis patch is able to handle the case of no option and one option\nspecified by the caller of the routine. However, pg_config is able to\nreturn a set of values when specifying multiple switches, respecting\nthe order of the switches, so wouldn't it be better to return a map\nmade of ($option, $line)? For example, on a command like `pg_config\n--sysconfdir --`, we would get back:\n(('--sysconfdir', sysconfdir_val), ('--localedir', localedir_val))\n\nIf this is not worth the trouble, I think that you'd better die() hard\nif the caller specifies more than two option switches.\n--\nMichael",
"msg_date": "Wed, 15 Jun 2022 08:24:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Small TAP improvements"
},
{
"msg_contents": "\nOn 2022-06-14 Tu 19:24, Michael Paquier wrote:\n> On Tue, Jun 14, 2022 at 05:08:28PM -0400, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> OK, here's a more principled couple of patches. For config_data, if you\n>>> give multiple options it gives you back the list of values. If you don't\n>>> specify any, in scalar context it just gives you back all of pg_config's\n>>> output, but in array context it gives you a map, so you should be able\n>>> to say things like:\n>>> my %node_config = $node->config_data;\n>> Might be overkill, but since you wrote it already, looks OK to me.\n> + # exactly one option: hand back the output (minus LF)\n> + return $stdout if (@options == 1);\n> + my @lines = split(/\\n/, $stdout);\n> + # more than one option: hand back the list of values;\n> + return @lines if (@options);\n> + # no options, array context: return a map\n> + my @map;\n> + foreach my $line (@lines)\n> + {\n> + my ($k,$v) = split (/ = /,$line,2);\n> + push(@map, $k, $v);\n> + }\n>\n> This patch is able to handle the case of no option and one option\n> specified by the caller of the routine. However, pg_config is able to\n> return a set of values when specifying multiple switches, respecting\n> the order of the switches, so wouldn't it be better to return a map\n> made of ($option, $line)? For example, on a command like `pg_config\n> --sysconfdir --`, we would get back:\n> (('--sysconfdir', sysconfdir_val), ('--localedir', localedir_val))\n>\n> If this is not worth the trouble, I think that you'd better die() hard\n> if the caller specifies more than two option switches.\n\n\nMy would we do that? If you want a map don't pass any switches. 
But as\nwritten you could do:\n\n\nmy ($incdir, $localedir, $sharedir) = $node->config_data(qw(--includedir --localedir --sharedir));\n\n\nNo map needed to get what you want, in fact a map would get in the way.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 15 Jun 2022 07:59:10 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Small TAP improvements"
},
{
"msg_contents": "\nOn 2022-06-14 Tu 19:13, Michael Paquier wrote:\n> On Tue, Jun 14, 2022 at 12:20:56PM -0400, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> The second changes the new GUCs TAP test to check against the installed\n>>> postgresql.conf.sample rather than the one in the original source\n>>> location. There are probably arguments both ways, but if we ever decided\n>>> to postprocess the file before installation, this would do the right thing.\n>> Seems like a good idea, especially since it also makes the test code\n>> shorter and more robust(-looking).\n> It seems to me that you did not look at the git history very closely.\n> The first version of 003_check_guc.pl did exactly what 0002 is\n> proposing to do, see b0a55f4. That's also why config_data() has been\n> introduced in the first place. This original logic has been reverted\n> once shortly after, as of 52377bb, per a complain by Christoph Berg\n> because this broke some of the assumptions the custom patches of\n> Debian relied on:\n> https://www.postgresql.org/message-id/YgYw25OXV5men8Fj@msg.df7cb.de\n\n\nQuite right, I missed that. Still, it now seems to be moot, given what\nChristoph said at the bottom of the thread. If I'd seen the thread I\nwould probably have been inclined to say that is Debian can patch\npg_config they can also patch the test :-)\n\n\n>\n> And it was also pointed out that we'd better use the version in the\n> source tree rather than a logic that depends on finding the path from\n> the output of pg_config with an installation tree assumed to exist\n> (there should be one for installcheck anyway), as of:\n> https://www.postgresql.org/message-id/2023925.1644591595@sss.pgh.pa.us\n>\n> If the change of 0002 is applied, we will just loop back to the\n> original issue with Debian. 
So I am adding Christoph in CC, as he has\n> also mentioned that the patch applied to PG for Debian that\n> manipulates the installation paths has been removed, but I may be\n> wrong in assuming that it is the case.\n\n\n\nHonestly, I don't care all that much. I noticed these issues when\ndealing with something for EDB that turned out not to be related to\nthese things. I can see arguments both ways on this one.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 15 Jun 2022 08:11:40 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Small TAP improvements"
},
{
"msg_contents": "On Wed, Jun 15, 2022 at 07:59:10AM -0400, Andrew Dunstan wrote:\n> My would we do that? If you want a map don't pass any switches. But as\n> written you could do:\n> \n> my ($incdir, $localedir, $sharedir) = $node->config_data(qw(--includedir --localedir --sharedir));\n> \n> No map needed to get what you want, in fact a map would get in the\n> way.\n\nNice, I didn't know you could do that. That's a pattern worth\nmentioning in the perldoc part as an example, in my opinion.\n--\nMichael",
"msg_date": "Thu, 16 Jun 2022 08:58:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Small TAP improvements"
},
{
"msg_contents": "On 2022-Jun-14, Andrew Dunstan wrote:\n\n> OK, here's a more principled couple of patches. For config_data, if you\n> give multiple options it gives you back the list of values. If you don't\n> specify any, in scalar context it just gives you back all of pg_config's\n> output, but in array context it gives you a map, so you should be able\n> to say things like:\n> \n> my %node_config = $node->config_data;\n\nHi, it looks to me like these were forgotten?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sun, 6 Nov 2022 14:51:52 +0100",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Small TAP improvements"
},
{
"msg_contents": "\nOn 2022-11-06 Su 08:51, Álvaro Herrera wrote:\n> On 2022-Jun-14, Andrew Dunstan wrote:\n>\n>> OK, here's a more principled couple of patches. For config_data, if you\n>> give multiple options it gives you back the list of values. If you don't\n>> specify any, in scalar context it just gives you back all of pg_config's\n>> output, but in array context it gives you a map, so you should be able\n>> to say things like:\n>>\n>> my %node_config = $node->config_data;\n> Hi, it looks to me like these were forgotten?\n>\n\nYeah, will get to it this week.\n\n\nThanks for the reminder.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 9 Nov 2022 05:35:33 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Small TAP improvements"
},
{
"msg_contents": "\nOn 2022-11-09 We 05:35, Andrew Dunstan wrote:\n> On 2022-11-06 Su 08:51, Álvaro Herrera wrote:\n>> On 2022-Jun-14, Andrew Dunstan wrote:\n>>\n>>> OK, here's a more principled couple of patches. For config_data, if you\n>>> give multiple options it gives you back the list of values. If you don't\n>>> specify any, in scalar context it just gives you back all of pg_config's\n>>> output, but in array context it gives you a map, so you should be able\n>>> to say things like:\n>>>\n>>> my %node_config = $node->config_data;\n>> Hi, it looks to me like these were forgotten?\n>>\n> Yeah, will get to it this week.\n>\n>\n> Thanks for the reminder.\n>\n>\n\nPushed now.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 14 Nov 2022 10:18:47 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Small TAP improvements"
}
] |
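For readers who do not speak Perl context rules, the contract the final config_data() converges on in the thread above -- one switch returns a single value, several switches return values in switch order, no switches return a name/value map -- can be sketched in Python. The sample pg_config output below is hypothetical, invented purely for illustration.

```python
# Hypothetical sample of what `pg_config` prints with no switches:
# one "NAME = value" line per setting.  This is why the Perl helper
# splits each line on " = " only in the no-switch case.
sample_output = (
    "BINDIR = /usr/lib/postgresql/15/bin\n"
    "SHAREDIR = /usr/share/postgresql/15\n"
    "INCLUDEDIR = /usr/include/postgresql\n"
)

def config_map(output):
    """Parse no-argument pg_config output into a dict, analogous to the
    Perl array-context case: my %node_config = $node->config_data;"""
    result = {}
    for line in output.splitlines():
        key, sep, value = line.partition(" = ")
        if sep:
            result[key] = value
    return result

cfg = config_map(sample_output)
print(cfg["SHAREDIR"])  # /usr/share/postgresql/15

# With explicit switches, pg_config prints bare values in switch order,
# so there is no "NAME = " prefix to strip -- the point Alvaro raised.
switch_output = "/usr/share/postgresql/15\n/usr/include/postgresql\n"
sharedir, includedir = switch_output.splitlines()
print(includedir)  # /usr/include/postgresql
```

The two-mode behavior mirrors why the committed Perl version can hand back a flat list for explicit switches but must build key/value pairs when called with none.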
[
{
"msg_contents": "Per the discussion at [1], pg_upgrade currently doesn't use\ncommon/logging.c's functions. Making it do so looks like a\nbigger lift than is justified, but there is one particular\ninconsistency that I think we ought to remove: pg_upgrade\nexpects (most) message strings to end in newlines, while logging.c\nexpects them not to. This is bad for a couple of reasons:\n\n* Translatable strings that otherwise could be shared with other\ncode are different.\n\n* Developers might mistakenly add or leave off a newline because of\nfamiliarity with how it's done elsewhere. This is especially bad for\npg_fatal() which is otherwise caller-compatible with the version\nprovided by logging.c. We fixed a couple of bugs of exactly that\ndescription recently, and I found a few more as I went through\npg_upgrade for the attached patch. It doesn't help any that as it\nstands, pg_upgrade requires some messages to end in newline and\nothers not: there are some places that are adding an extra newline,\napparently because whoever coded them was confused about which\nconvention applied.\n\nHence, the patch below removes trailing newlines from all of\npg_upgrade's message strings, and teaches its logging infrastructure\nto print them where appropriate. As in logging.c, there's now an\nAssert that no format string passed to pg_log() et al ends with\na newline.\n\nThis doesn't quite exactly match the code's prior behavior. Aside\nfrom the buggy-looking newlines mentioned above, there are a few\nmessages that formerly ended with a double newline, thus intentionally\nproducing a blank line, and now they don't. I could have removed just\none of their newlines, but I'd have had to give up the Assert about\nit, and I did not think that the extra blank lines were important\nenough to justify that.\n\nBTW, as I went through the code I realized just how badly pg_upgrade\nneeds a visit from the message style police. 
Its messages are not\neven consistent with each other, let alone with our message style\nguidelines. I have refrained (mostly) from doing any re-wording\nhere, but it could stand to be done.\n\nI'll stick this in the CF queue, but I wonder if there is any case\nfor squeezing it into v15 instead of waiting for v16.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/4036037.1655174501%40sss.pgh.pa.us",
"msg_date": "Tue, 14 Jun 2022 14:57:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Remove trailing newlines from pg_upgrade's messages"
},
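The convention the patch enforces -- message strings carry no trailing newline, the logging layer appends one itself, and an Assert catches violations -- can be illustrated with a small Python analogue. This is a sketch only, not pg_upgrade's actual C code; the helper name is invented.

```python
def pg_log_sketch(fmt, *args):
    """Toy analogue of the patched pg_log(): the format string must not
    end in a newline; the logger supplies the newline itself."""
    assert not fmt.endswith("\n"), "trailing newline in format string"
    return (fmt % args) + "\n"

# New-style message: no trailing newline in the string, one is added.
print(pg_log_sketch('check for "%s" failed: not a regular file',
                    "/tmp/postgres"), end="")

# An old-style message that still embeds a trailing newline now trips
# the check immediately, instead of silently printing a blank line.
try:
    pg_log_sketch("Failure, exiting\n")
except AssertionError as err:
    print("rejected:", err)
```

The payoff is the one described in the message above: callers can no longer get the newline convention wrong in either direction, and format strings become shareable with common/logging.c users.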
{
"msg_contents": "At Tue, 14 Jun 2022 14:57:40 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Per the discussion at [1], pg_upgrade currently doesn't use\n> common/logging.c's functions. Making it do so looks like a\n> bigger lift than is justified, but there is one particular\n> inconsistency that I think we ought to remove: pg_upgrade\n> expects (most) message strings to end in newlines, while logging.c\n> expects them not to. This is bad for a couple of reasons:\n> \n> * Translatable strings that otherwise could be shared with other\n> code are different.\n\nYes. Also it is annoying that we need to care about ending new lines..\n\n> * Developers might mistakenly add or leave off a newline because of\n> familiarity with how it's done elsewhere. This is especially bad for\n> pg_fatal() which is otherwise caller-compatible with the version\n> provided by logging.c. We fixed a couple of bugs of exactly that\n> description recently, and I found a few more as I went through\n> pg_upgrade for the attached patch. It doesn't help any that as it\n> stands, pg_upgrade requires some messages to end in newline and\n> others not: there are some places that are adding an extra newline,\n> apparently because whoever coded them was confused about which\n> convention applied.\n> \n> Hence, the patch below removes trailing newlines from all of\n> pg_upgrade's message strings, and teaches its logging infrastructure\n> to print them where appropriate. As in logging.c, there's now an\n> Assert that no format string passed to pg_log() et al ends with\n> a newline.\n\nI think it's the least-bad way to control ending newline.\n\n-\tPG_STATUS,\n+\tPG_STATUS,\t\t\t\t\t/* these messages do not get a newline added */\n\nReally?\n\n+\tPG_REPORT_NONL,\t\t\t\t/* these too */\n \tPG_REPORT,\n\n> This doesn't quite exactly match the code's prior behavior. Aside\n> from the buggy-looking newlines mentioned above, there are a few\n> messages that formerly ended with a double newline, thus intentionally\n> producing a blank line, and now they don't. I could have removed just\n> one of their newlines, but I'd have had to give up the Assert about\n> it, and I did not think that the extra blank lines were important\n> enough to justify that.\n\nI don't think trailing double-newlines for pg_fatal are useful so I\nagree with you on this point.\n\nAlso leading newlines and just \"\\n\" bug me when I edit message\ncatalogues with poedit. I might want a line-spacing function like\npg_log_newline(PG_REPORT) if we need line-breaks at the ends of a\nmessage.\n\n> BTW, as I went through the code I realized just how badly pg_upgrade\n> needs a visit from the message style police. Its messages are not\n> even consistent with each other, let alone with our message style\n> guidelines. I have refrained (mostly) from doing any re-wording\n> here, but it could stand to be done.\n\nA bit apart from this, I experience a bit of a hard time finding an\nappropriate translation for \"Your installation\", which I finally\ntranslate into (a literal translation of) \"This cluster\" or\nsuch..\n\n> I'll stick this in the CF queue, but I wonder if there is any case\n> for squeezing it into v15 instead of waiting for v16.\n\nI think we can, as it doesn't seem to make a functional change. But I\nhaven't checked whether the patch breaks anything..\n\n> \t\t\tregards, tom lane\n> \n> [1] https://www.postgresql.org/message-id/4036037.1655174501%40sss.pgh.pa.us\n> \n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 15 Jun 2022 12:56:19 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove trailing newlines from pg_upgrade's messages"
},
{
"msg_contents": "By the way, I noticed that pg_upgrade complains wrong way when the\nspecified binary path doesn't contain \"postgres\" file.\n\n$ pg_upgrade -b /tmp -B /tmp -d /tmp -D /tmp\n\ncheck for \"/tmp/postgres\" failed: not a regular file\nFailure, exiting\n\nI think it should be a quite common mistake to specify the parent\ndirectory of the binary directory..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 15 Jun 2022 13:05:52 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove trailing newlines from pg_upgrade's messages"
},
{
"msg_contents": "At Wed, 15 Jun 2022 13:05:52 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> By the way, I noticed that pg_upgrade complains wrong way when the\n> specified binary path doesn't contain \"postgres\" file.\n> \n> $ pg_upgrade -b /tmp -B /tmp -d /tmp -D /tmp\n> \n> check for \"/tmp/postgres\" failed: not a regular file\n> Failure, exiting\n> \n> I think it should be a quite common mistake to specify the parent\n> directory of the binary directory..\n\nFWIW, the following change makes sense to me according to the spec of\nvalidate_exec()...\n\ndiff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c\nindex fadeea12ca..3cff186213 100644\n--- a/src/bin/pg_upgrade/exec.c\n+++ b/src/bin/pg_upgrade/exec.c\n@@ -430,10 +430,10 @@ check_exec(const char *dir, const char *program, bool check_version)\n \tret = validate_exec(path);\n \n \tif (ret == -1)\n-\t\tpg_fatal(\"check for \\\"%s\\\" failed: not a regular file\\n\",\n+\t\tpg_fatal(\"check for \\\"%s\\\" failed: does not exist or inexecutable\\n\",\n \t\t\t\t path);\n \telse if (ret == -2)\n-\t\tpg_fatal(\"check for \\\"%s\\\" failed: cannot execute (permission denied)\\n\",\n+\t\tpg_fatal(\"check for \\\"%s\\\" failed: cannot read (permission denied)\\n\",\n \t\t\t\t path);\n \n \tsnprintf(cmd, sizeof(cmd), \"\\\"%s\\\" -V\", path);\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 15 Jun 2022 13:14:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove trailing newlines from pg_upgrade's messages"
},
{
"msg_contents": "On 14.06.22 20:57, Tom Lane wrote:\n> I'll stick this in the CF queue, but I wonder if there is any case\n> for squeezing it into v15 instead of waiting for v16.\n\nLet's stick this into 16 and use it as a starting point of tidying all \nthis up in pg_upgrade.\n\n\n",
"msg_date": "Wed, 15 Jun 2022 08:53:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove trailing newlines from pg_upgrade's messages"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> Also leading newlines and just \"\\n\" bug me when I edit message\n> catalogues with poedit. I might want a line-spacing function like\n> pg_log_newline(PG_REPORT) if we need line-breaks in the ends of a\n> message.\n\nYeah, that is sort of the inverse problem. I think those are there\nto ensure that the text appears on a fresh line even if the current\nline has transient status on it. We could get rid of those perhaps\nif we teach pg_log_v to remember whether it ended the last output\nwith a newline or not, and then put out a leading newline only if\nnecessary, rather than hard-wiring one into the message texts.\n\nThis might take a little bit of fiddling to make it work, because\nwe'd not want the extra newline when completing an incomplete line\nby adding status. That would mean that report_status would have\nto do something special, plus we'd have to be sure that all such\ncases do go through report_status rather than calling pg_log\ndirectly. (I'm fairly sure that the code is sloppy about that\ntoday :-(.) It seems probably do-able, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Jun 2022 11:53:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Remove trailing newlines from pg_upgrade's messages"
},
{
"msg_contents": "On Wed, 15 Jun 2022 at 11:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Yeah, that is sort of the inverse problem. I think those are there\n> to ensure that the text appears on a fresh line even if the current\n> line has transient status on it. We could get rid of those perhaps\n> if we teach pg_log_v to remember whether it ended the last output\n> with a newline or not, and then put out a leading newline only if\n> necessary, rather than hard-wiring one into the message texts.\n\nIs the problem that pg_upgrade doesn't know what the utilities it's\ncalling are outputting to the same terminal?\n\nAnother thing I wonder is if during development and testing there\nmight have been more output from utilities or even the backend going\non that are\nnot happening now.\n\n-- \ngreg\n\n\n",
"msg_date": "Mon, 20 Jun 2022 14:29:25 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Remove trailing newlines from pg_upgrade's messages"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> On Wed, 15 Jun 2022 at 11:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, that is sort of the inverse problem. I think those are there\n>> to ensure that the text appears on a fresh line even if the current\n>> line has transient status on it. We could get rid of those perhaps\n>> if we teach pg_log_v to remember whether it ended the last output\n>> with a newline or not, and then put out a leading newline only if\n>> necessary, rather than hard-wiring one into the message texts.\n\n> Is the problem that pg_upgrade doesn't know what the utilities it's\n> calling are outputting to the same terminal?\n\nHmmm ... that's a point I'd not considered, but I think it's not an\nissue here. The subprograms generally have their output redirected\nto their own log files.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Jun 2022 14:57:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Remove trailing newlines from pg_upgrade's messages"
},
{
"msg_contents": "\nOn 14.06.22 20:57, Tom Lane wrote:\n> Hence, the patch below removes trailing newlines from all of\n> pg_upgrade's message strings, and teaches its logging infrastructure\n> to print them where appropriate. As in logging.c, there's now an\n> Assert that no format string passed to pg_log() et al ends with\n> a newline.\n\nThis patch looks okay to me. I compared the output before and after in \na few scenarios and didn't see any problematic differences.\n\n> This doesn't quite exactly match the code's prior behavior. Aside\n> from the buggy-looking newlines mentioned above, there are a few\n> messages that formerly ended with a double newline, thus intentionally\n> producing a blank line, and now they don't. I could have removed just\n> one of their newlines, but I'd have had to give up the Assert about\n> it, and I did not think that the extra blank lines were important\n> enough to justify that.\n\nIn this particular patch, the few empty lines that disappeared don't \nbother me. In general, however, I think we can just fprintf(stderr, \n\"\\n\") directly as necessary.\n\n\n",
"msg_date": "Tue, 12 Jul 2022 07:21:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove trailing newlines from pg_upgrade's messages"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 14.06.22 20:57, Tom Lane wrote:\n>> Hence, the patch below removes trailing newlines from all of\n>> pg_upgrade's message strings, and teaches its logging infrastructure\n>> to print them where appropriate. As in logging.c, there's now an\n>> Assert that no format string passed to pg_log() et al ends with\n>> a newline.\n\n> This patch looks okay to me. I compared the output before and after in \n> a few scenarios and didn't see any problematic differences.\n\nThanks, pushed after rebasing and adjusting some recently-added messages.\n\n> In this particular patch, the few empty lines that disappeared don't \n> bother me. In general, however, I think we can just fprintf(stderr, \n> \"\\n\") directly as necessary.\n\nHmm, if anyone wants to do that I think it'd be advisable to invent\n\"pg_log_blank_line()\" or something like that, so as to preserve the\nlogging abstraction layer. But it's moot unless anyone's interested\nenough to send a patch for that. I'm not.\n\n(I think it *would* be a good idea to try to get rid of the leading\nnewlines that appear in some of the messages, as discussed upthread.\nBut I'm not going to trouble over that right now either.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Jul 2022 15:41:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Remove trailing newlines from pg_upgrade's messages"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> FWIW, the following change makes sense to me according to the spec of\n> validate_exec()...\n\n> diff --git a/src/bin/pg_upgrade/exec.c b/src/bin/pg_upgrade/exec.c\n> index fadeea12ca..3cff186213 100644\n> --- a/src/bin/pg_upgrade/exec.c\n> +++ b/src/bin/pg_upgrade/exec.c\n> @@ -430,10 +430,10 @@ check_exec(const char *dir, const char *program, bool check_version)\n> \tret = validate_exec(path);\n \n> \tif (ret == -1)\n> -\t\tpg_fatal(\"check for \\\"%s\\\" failed: not a regular file\\n\",\n> +\t\tpg_fatal(\"check for \\\"%s\\\" failed: does not exist or inexecutable\\n\",\n> \t\t\t\t path);\n> \telse if (ret == -2)\n> -\t\tpg_fatal(\"check for \\\"%s\\\" failed: cannot execute (permission denied)\\n\",\n> +\t\tpg_fatal(\"check for \\\"%s\\\" failed: cannot read (permission denied)\\n\",\n> \t\t\t\t path);\n \n> \tsnprintf(cmd, sizeof(cmd), \"\\\"%s\\\" -V\", path);\n\nI initially did this, but then I wondered why validate_exec() was\nmaking it so hard: why can't we just report the failure with %m?\nIt turns out to take only a couple extra lines of code to ensure\nthat something more-or-less appropriate is returned, so we don't\nneed to guess about it here. Pushed that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Jul 2022 15:45:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Remove trailing newlines from pg_upgrade's messages"
}
]