[ { "msg_contents": "Hi hackers!\n\n$subj was recently observed on one of our installations.\n\nStartup process backtrace\n#0 0x00007fd216660d27 in epoll_wait (epfd=525, events=0x55c688dfbde8, maxevents=maxevents@entry=1, timeout=timeout@entry=-1)\n#1 0x000055c687264be9 in WaitEventSetWaitBlock (nevents=1, occurred_events=0x7ffdf8089f00, cur_timeout=-1, set=0x55c688dfbd78)\n#2 WaitEventSetWait (set=set@entry=0x55c688dfbd78, timeout=timeout@entry=-1, occurred_events=occurred_events@entry=0x7ffdf8089f00, nevents=nevents@entry=1, wait_event_info=wait_event_info@entry=67108864) at /build/../src/backend/storage/ipc/latch.c:1000\n#3 0x000055c687265038 in WaitLatchOrSocket (latch=0x7fd1fa735454, wakeEvents=wakeEvents@entry=1, sock=sock@entry=-1, timeout=-1, timeout@entry=0, wait_event_info=wait_event_info@entry=67108864)\n#4 0x000055c6872650f5 in WaitLatch (latch=<optimized out>, wakeEvents=wakeEvents@entry=1, timeout=timeout@entry=0, wait_event_info=wait_event_info@entry=67108864\n#5 0x000055c687276399 in ProcWaitForSignal (wait_event_info=wait_event_info@entry=67108864)\n#6 0x000055c68726c898 in ResolveRecoveryConflictWithBufferPin ()\n#7 0x000055c6872582c5 in LockBufferForCleanup (buffer=292159)\n#8 0x000055c687259447 in ReadBuffer_common (smgr=0x55c688deae40, relpersistence=relpersistence@entry=112 'p', forkNum=forkNum@entry=MAIN_FORKNUM, blockNum=blockNum@entry=3751242, mode=mode@entry=RBM_ZERO_AND_CLEANUP_LOCK, strategy=strategy@entry=0x0, hit=0x7ffdf808a117 \"\\001\")\n#9 0x000055c687259b6b in ReadBufferWithoutRelcache (rnode=..., forkNum=forkNum@entry=MAIN_FORKNUM, blockNum=blockNum@entry=3751242, mode=mode@entry=RBM_ZERO_AND_CLEANUP_LOCK, strategy=strategy@entry=0x0)\n#10 0x000055c68705655f in XLogReadBufferExtended (rnode=..., forknum=MAIN_FORKNUM, blkno=3751242, mode=RBM_ZERO_AND_CLEANUP_LOCK)\n#11 0x000055c687056706 in XLogReadBufferForRedoExtended (record=record@entry=0x55c688dd2378, block_id=block_id@entry=0 '\\000', mode=mode@entry=RBM_NORMAL, 
get_cleanup_lock=get_cleanup_lock@entry=1 '\\001', buf=buf@entry=0x7ffdf808a218)\n#12 0x000055c68700728b in heap_xlog_clean (record=0x55c688dd2378)\n#13 heap2_redo (record=0x55c688dd2378)\n#14 0x000055c68704a7eb in StartupXLOG ()\n\nBackend holding the buffer pin:\n\n#0 0x00007fd216660d27 in epoll_wait (epfd=5, events=0x55c688d67ca8, maxevents=maxevents@entry=1, timeout=timeout@entry=-1)\n#1 0x000055c687264be9 in WaitEventSetWaitBlock (nevents=1, occurred_events=0x7ffdf808e070, cur_timeout=-1, set=0x55c688d67c38)\n#2 WaitEventSetWait (set=0x55c688d67c38, timeout=timeout@entry=-1, occurred_events=occurred_events@entry=0x7ffdf808e070, nevents=nevents@entry=1, wait_event_info=wait_event_info@entry=100663297)\n#3 0x000055c687185d7e in secure_write (port=0x55c688db4a60, ptr=ptr@entry=0x55c688dd338e, len=len@entry=2666)\n#4 0x000055c687190f4c in internal_flush ()\n#5 0x000055c68719118a in internal_putbytes (s=0x55c68f8dcc35 \"\", s@entry=0x55c68f8dcc08 \"\", len=65)\n#6 0x000055c687191262 in socket_putmessage (msgtype=<optimized out>, s=0x55c68f8dcc08 \"\", len=<optimized out>)\n#7 0x000055c687193431 in pq_endmessage (buf=buf@entry=0x7ffdf808e1a0)\n#8 0x000055c686fd1442 in printtup (slot=0x55c6894c2dc0, self=0x55c689326b40)\n#9 0x000055c687151962 in ExecutePlan (execute_once=<optimized out>, dest=0x55c689326b40, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x55c689281a28, estate=0x55c6892816f8)\n#10 standard_ExecutorRun (queryDesc=0x55c689289628, direction=<optimized out>, count=0, execute_once=<optimized out>)\n#11 0x00007fd2074100c5 in pgss_ExecutorRun (queryDesc=0x55c689289628, direction=ForwardScanDirection, count=0, execute_once=<optimized out>)\n#12 0x000055c68728a356 in PortalRunSelect (portal=portal@entry=0x55c688d858b8, forward=forward@entry=1 '\\001', count=0, count@entry=9223372036854775807, dest=dest@entry=0x55c689326b40)\n#13 0x000055c68728b988 in PortalRun 
(portal=portal@entry=0x55c688d858b8, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\\001', run_once=run_once@entry=1 '\\001', dest=dest@entry=0x55c689326b40, altdest=altdest@entry=0x55c689326b40, completionTag=0x7ffdf808e580 \"\")\n#14 0x000055c687287425 in exec_simple_query (query_string=0x55c688ec5e38 \"select\\n '/capacity/created-at-counter-by-time-v2' as sensor,\\n round(extract(epoch from shipment_date))\n#15 0x000055c687289418 in PostgresMain (argc=<optimized out>, argv=argv@entry=0x55c688dd3e28, dbname=<optimized out>, username=<optimized out>)\n\n\nI think the problem here is that secure_write() uses infinite timeout.\nProbably we rely here on tcp keepalives, but they did not fire for some reason. Seems like the client was alive but sluggish.\nDoes it make sense to look for infinite timeouts in communication and replace them with a loop checking for interrupts?\n\nBest regards, Andrey Borodin.\n\n\n\n", "msg_date": "Wed, 17 Nov 2021 10:39:58 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Slow client can delay replication despite max_standby_streaming_delay\n set" }, { "msg_contents": "\n\nOn 2021/11/17 14:39, Andrey Borodin wrote:\n> Hi hackers!\n> \n> $subj was recently observed on one of our installations.\n> \n> Startup process backtrace\n> #0 0x00007fd216660d27 in epoll_wait (epfd=525, events=0x55c688dfbde8, maxevents=maxevents@entry=1, timeout=timeout@entry=-1)\n> #1 0x000055c687264be9 in WaitEventSetWaitBlock (nevents=1, occurred_events=0x7ffdf8089f00, cur_timeout=-1, set=0x55c688dfbd78)\n> #2 WaitEventSetWait (set=set@entry=0x55c688dfbd78, timeout=timeout@entry=-1, occurred_events=occurred_events@entry=0x7ffdf8089f00, nevents=nevents@entry=1, wait_event_info=wait_event_info@entry=67108864) at /build/../src/backend/storage/ipc/latch.c:1000\n> #3 0x000055c687265038 in WaitLatchOrSocket (latch=0x7fd1fa735454, wakeEvents=wakeEvents@entry=1, sock=sock@entry=-1, timeout=-1, 
timeout@entry=0, wait_event_info=wait_event_info@entry=67108864)\n> #4 0x000055c6872650f5 in WaitLatch (latch=<optimized out>, wakeEvents=wakeEvents@entry=1, timeout=timeout@entry=0, wait_event_info=wait_event_info@entry=67108864\n> #5 0x000055c687276399 in ProcWaitForSignal (wait_event_info=wait_event_info@entry=67108864)\n> #6 0x000055c68726c898 in ResolveRecoveryConflictWithBufferPin ()\n> #7 0x000055c6872582c5 in LockBufferForCleanup (buffer=292159)\n> #8 0x000055c687259447 in ReadBuffer_common (smgr=0x55c688deae40, relpersistence=relpersistence@entry=112 'p', forkNum=forkNum@entry=MAIN_FORKNUM, blockNum=blockNum@entry=3751242, mode=mode@entry=RBM_ZERO_AND_CLEANUP_LOCK, strategy=strategy@entry=0x0, hit=0x7ffdf808a117 \"\\001\")\n> #9 0x000055c687259b6b in ReadBufferWithoutRelcache (rnode=..., forkNum=forkNum@entry=MAIN_FORKNUM, blockNum=blockNum@entry=3751242, mode=mode@entry=RBM_ZERO_AND_CLEANUP_LOCK, strategy=strategy@entry=0x0)\n> #10 0x000055c68705655f in XLogReadBufferExtended (rnode=..., forknum=MAIN_FORKNUM, blkno=3751242, mode=RBM_ZERO_AND_CLEANUP_LOCK)\n> #11 0x000055c687056706 in XLogReadBufferForRedoExtended (record=record@entry=0x55c688dd2378, block_id=block_id@entry=0 '\\000', mode=mode@entry=RBM_NORMAL, get_cleanup_lock=get_cleanup_lock@entry=1 '\\001', buf=buf@entry=0x7ffdf808a218)\n> #12 0x000055c68700728b in heap_xlog_clean (record=0x55c688dd2378)\n> #13 heap2_redo (record=0x55c688dd2378)\n> #14 0x000055c68704a7eb in StartupXLOG ()\n> \n> Backend holding the buffer pin:\n> \n> #0 0x00007fd216660d27 in epoll_wait (epfd=5, events=0x55c688d67ca8, maxevents=maxevents@entry=1, timeout=timeout@entry=-1)\n> #1 0x000055c687264be9 in WaitEventSetWaitBlock (nevents=1, occurred_events=0x7ffdf808e070, cur_timeout=-1, set=0x55c688d67c38)\n> #2 WaitEventSetWait (set=0x55c688d67c38, timeout=timeout@entry=-1, occurred_events=occurred_events@entry=0x7ffdf808e070, nevents=nevents@entry=1, wait_event_info=wait_event_info@entry=100663297)\n> #3 0x000055c687185d7e 
in secure_write (port=0x55c688db4a60, ptr=ptr@entry=0x55c688dd338e, len=len@entry=2666)\n> #4 0x000055c687190f4c in internal_flush ()\n> #5 0x000055c68719118a in internal_putbytes (s=0x55c68f8dcc35 \"\", s@entry=0x55c68f8dcc08 \"\", len=65)\n> #6 0x000055c687191262 in socket_putmessage (msgtype=<optimized out>, s=0x55c68f8dcc08 \"\", len=<optimized out>)\n> #7 0x000055c687193431 in pq_endmessage (buf=buf@entry=0x7ffdf808e1a0)\n> #8 0x000055c686fd1442 in printtup (slot=0x55c6894c2dc0, self=0x55c689326b40)\n> #9 0x000055c687151962 in ExecutePlan (execute_once=<optimized out>, dest=0x55c689326b40, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x55c689281a28, estate=0x55c6892816f8)\n> #10 standard_ExecutorRun (queryDesc=0x55c689289628, direction=<optimized out>, count=0, execute_once=<optimized out>)\n> #11 0x00007fd2074100c5 in pgss_ExecutorRun (queryDesc=0x55c689289628, direction=ForwardScanDirection, count=0, execute_once=<optimized out>)\n> #12 0x000055c68728a356 in PortalRunSelect (portal=portal@entry=0x55c688d858b8, forward=forward@entry=1 '\\001', count=0, count@entry=9223372036854775807, dest=dest@entry=0x55c689326b40)\n> #13 0x000055c68728b988 in PortalRun (portal=portal@entry=0x55c688d858b8, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\\001', run_once=run_once@entry=1 '\\001', dest=dest@entry=0x55c689326b40, altdest=altdest@entry=0x55c689326b40, completionTag=0x7ffdf808e580 \"\")\n> #14 0x000055c687287425 in exec_simple_query (query_string=0x55c688ec5e38 \"select\\n '/capacity/created-at-counter-by-time-v2' as sensor,\\n round(extract(epoch from shipment_date))\n> #15 0x000055c687289418 in PostgresMain (argc=<optimized out>, argv=argv@entry=0x55c688dd3e28, dbname=<optimized out>, username=<optimized out>)\n> \n> \n> I think the problem here is that secure_write() uses infinite timeout.\n\nIs this the same issue as one reported at [1]?\n\n[1] 
https://www.postgresql.org/message-id/adce2c09-3bfc-4666-997a-c21991cb1eb1.mengjuan.cmj@alibaba-inc.com\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 18 Nov 2021 13:47:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Slow client can delay replication despite\n max_standby_streaming_delay set" }, { "msg_contents": "\n\n> On 18 Nov 2021, at 09:47, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n> Is this the same issue as one reported at [1]?\n> \n> [1] https://www.postgresql.org/message-id/adce2c09-3bfc-4666-997a-c21991cb1eb1.mengjuan.cmj@alibaba-inc.com\n\nYes, that's the very same issue. I should join that discussion, sorry for the noise. And thanks for the pointer.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 18 Nov 2021 10:32:24 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: Slow client can delay replication despite\n max_standby_streaming_delay set" } ]
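The remedy Andrey hints at in the thread above — replacing the infinite timeout in secure_write() with a bounded wait that re-checks for interrupts between slices — can be sketched in isolation. This is a minimal model, not PostgreSQL's actual latch code: `WaitState`, `wait_for_write_ready`, and the simulated interrupt counter are hypothetical stand-ins for calling WaitEventSetWait() with a finite timeout (instead of -1, as in the backtraces) plus the usual pending-interrupt flag test.

```c
#include <assert.h>
#include <stdbool.h>

typedef enum { WAIT_WRITABLE, WAIT_INTERRUPTED } WaitResult;

/* Illustrative stand-in for the real wait-event machinery. */
typedef struct WaitState
{
    bool socket_writable;   /* would come from epoll/poll on the client socket */
    bool interrupt_pending; /* e.g. set when a recovery-conflict signal arrives */
    int  polls;             /* bounded-timeout slices consumed so far */
    int  interrupt_after;   /* simulation: the signal arrives after this many slices */
} WaitState;

/*
 * Instead of a single wait with timeout = -1 (which is where the backend in
 * the backtrace is stuck), wait in bounded slices and check for a pending
 * interrupt after every slice, so a recovery conflict can be serviced even
 * while the client is too slow to drain its output.
 */
WaitResult
wait_for_write_ready(WaitState *st)
{
    for (;;)
    {
        /* real code: WaitEventSetWait(set, SLICE_TIMEOUT_MS, ...) */
        st->polls++;
        if (st->polls >= st->interrupt_after)
            st->interrupt_pending = true;   /* simulate signal delivery */

        if (st->interrupt_pending)
            return WAIT_INTERRUPTED;        /* caller can bail out / release its pin */
        if (st->socket_writable)
            return WAIT_WRITABLE;           /* safe to flush more output */
    }
}
```

The trade-off is the usual one: shorter slices bound how long a startup-process conflict can be delayed, at the cost of extra wakeups on idle-but-healthy connections.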
[ { "msg_contents": "Hi hackers,\n\nI am currently working on improving the cardinality estimation component in\nPostgreSQL with machine learning. I came up with a solution that mutates\nthe bounds for different columns. For example, assume that we have a query\n\n```\nselect * from test where X<10 and Y<20;\n```\n\nOur approach tries to learn the relation between X and Y. For example, if\nwe have a linear relation, Y=X+10. Then Y<20 is essentially equivalent to\nX<10. Therefore we can mutate the Y<20 to Y<INT_MAX so that the selectivity\nwill be 1, and we will have a more accurate estimation.\n\nIt seems to me that we can achieve something similar by mutating the\npg_statistics, however, mutating the bounds is something more\nstraightforward to me and less expensive.\n\nI am wondering if it is possible to have such an extension? Or if there is\na better solution to this? I have already implemented this stuff in a\nprivate repository, and if this is something you like, I can further\npropose the patch to the list.\n\nBest regards,\nXiaozhe\n\nHi hackers,I am currently working on improving the cardinality estimation component in PostgreSQL with machine learning. I came up with a solution that mutates the bounds for different columns. For example, assume that we have a query```select * from test where X<10 and Y<20;```Our approach tries to learn the relation between X and Y. For example, if we have a linear relation, Y=X+10. Then Y<20 is essentially equivalent to X<10. Therefore we can mutate the Y<20 to Y<INT_MAX so that the selectivity will be 1, and we will have a more accurate estimation.It seems to me that we can achieve something similar by mutating the pg_statistics, however, mutating the bounds is something more straightforward to me and less expensive. I am wondering if it is possible to have such an extension? Or if there is a better solution to this? 
I have already implemented this stuff in a private repository, and if this is something you like, I can further propose the patch to the list.Best regards,Xiaozhe", "msg_date": "Wed, 17 Nov 2021 14:24:17 +0100", "msg_from": "Xiaozhe Yao <askxzyao@gmail.com>", "msg_from_op": true, "msg_subject": "Propose a new hook for mutating the query bounds" }, { "msg_contents": "\n\nOn 11/17/21 2:24 PM, Xiaozhe Yao wrote:\n> Hi hackers,\n> \n> I am currently working on improving the cardinality estimation component\n> in PostgreSQL with machine learning. I came up with a solution that\n> mutates the bounds for different columns. For example, assume that we\n> have a query\n> \n> ```\n> select * from test where X<10 and Y<20;\n> ```\n> \n> Our approach tries to learn the relation between X and Y. For example,\n> if we have a linear relation, Y=X+10. Then Y<20 is essentially\n> equivalent to X<10. Therefore we can mutate the Y<20 to Y<INT_MAX so\n> that the selectivity will be 1, and we will have a more accurate estimation.\n> \n\nOK. FWIW the extended statistics patch originally included a patch for\nmulti-dimensional histograms, and that would have worked for this\nexample just fine, I guess. But yeah, there are various other\ndependencies for which a histogram would not help. And ML might discover\nthat and help ...\n\n> It seems to me that we can achieve something similar by mutating the\n> pg_statistics, however, mutating the bounds is something more\n> straightforward to me and less expensive.\n> \n\nI don't understand how you could achieve this by mutating pg_statistic,\nwithout also breaking estimation for queries that only have Y<20.\n\n> I am wondering if it is possible to have such an extension? Or if there\n> is a better solution to this? 
I have already implemented this stuff in a\n> private repository, and if this is something you like, I can further\n> propose the patch to the list.\n> \n\nMaybe, but it's really hard to comment on this without seeing any PoC\npatches. We don't know where you'd like the hook called, what info\nwould it have access to, how would it tweak the selectivities etc.\n\nIf you think this would work, write a PoC patch and we'll see.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 17 Nov 2021 14:49:21 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Propose a new hook for mutating the query bounds" }, { "msg_contents": "Hi Tomas and Hackers,\n\nThanks for your reply and feedback!\n\n> I don't understand how you could achieve this by mutating pg_statistic,\nwithout also breaking estimation for queries that only have Y<20.\n\nI agree, if we mutate pg_statistics, we will break lots of stuff and the\nprocess becomes complicated. That's also why I think mutating the bounds\nmakes more sense and is easier to achieve.\n\n> Maybe, but it's really hard to comment on this without seeing any PoC\npatches.\n\nI have attached a PoC patch to this mail. Essentially in this patch, I only\ntry to pass the pointer of the constval in ```scalarineqsel``` function. It\nis enough from the Postgres side. With that, I can handle other things in\nan independent extension.\n\nI hope this makes sense.\n\nBest regards,\nXiaozhe\n\nOn Wed, Nov 17, 2021 at 2:49 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n>\n>\n> On 11/17/21 2:24 PM, Xiaozhe Yao wrote:\n> > Hi hackers,\n> >\n> > I am currently working on improving the cardinality estimation component\n> > in PostgreSQL with machine learning. 
I came up with a solution that\n> > mutates the bounds for different columns. For example, assume that we\n> > have a query\n> >\n> > ```\n> > select * from test where X<10 and Y<20;\n> > ```\n> >\n> > Our approach tries to learn the relation between X and Y. For example,\n> > if we have a linear relation, Y=X+10. Then Y<20 is essentially\n> > equivalent to X<10. Therefore we can mutate the Y<20 to Y<INT_MAX so\n> > that the selectivity will be 1, and we will have a more accurate\n> estimation.\n> >\n>\n> OK. FWIW the extended statistics patch originally included a patch for\n> multi-dimensional histograms, and that would have worked for this\n> example just fine, I guess. But yeah, there are various other\n> dependencies for which a histogram would not help. And ML might discover\n> that and help ...\n>\n> > It seems to me that we can achieve something similar by mutating the\n> > pg_statistics, however, mutating the bounds is something more\n> > straightforward to me and less expensive.\n> >\n>\n> I don't understand how you could achieve this by mutating pg_statistic,\n> without also breaking estimation for queries that only have Y<20.\n>\n> > I am wondering if it is possible to have such an extension? Or if there\n> > is a better solution to this? I have already implemented this stuff in a\n> > private repository, and if this is something you like, I can further\n> > propose the patch to the list.\n> >\n>\n> Maybe, but it's really hard to comment on this without seeing any PoC\n> patches. 
We don't know where you'd like the hook called, what info\n> would it have access to, how would it tweak the selectivities etc.\n>\n> If you think this would work, write a PoC patch and we'll see.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>", "msg_date": "Wed, 17 Nov 2021 15:28:53 +0100", "msg_from": "Xiaozhe Yao <askxzyao@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Propose a new hook for mutating the query bounds" }, { "msg_contents": "Xiaozhe Yao <askxzyao@gmail.com> writes:\n\n+\tif (mutate_bounds_hook) {\n+\t\tmutate_bounds_hook(root, &constval, isgt, iseq);\n+\t}\n\nIt seems unlikely that this could do anything actually useful,\nand impossible that it could do anything useful without enormous waste\nof cycles along the way. Basically, each time one calls scalarineqsel,\nthe hook would have to re-analyze the entire query to see if it should do\nanything. Most of the time the answer would be \"no\", after a lot of\ncycles wasted. It would also have to keep some state (where?) to\ncoordinate mutation of different Consts in a WHERE clause. And why only\na hook in scalarineqsel? Is that really the only context that you'd need\nto adjust the results in?\n\nAnother important deficiency in this API spec is that the hook has no\nidea *which* constant it's being called on, so I don't see how it could\nreally deliver correct answers at all.\n\nI can buy that ML techniques might provide a way to improve selectivity\nestimates overall, but I think inserting them would be better done with\na much higher-level hook, maybe about at the level of\nclauselist_selectivity.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Nov 2021 09:49:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Propose a new hook for mutating the query bounds" }, { "msg_contents": "Hi Tom,\n\nThanks for your feedback. 
I completely agree with you that a higher-level\nhook is better suited for this case. I have adjusted the PoC patch to this\nemail.\n\nNow it is located in the clauselist_selectivity_ext function, where we\nfirst check if the hook is defined. If so, we let the hook estimate the\nselectivity and return the result. With this one, I can also develop\nextensions to better estimate the selectivity.\n\nI hope it makes more sense. Also please forgive me if I am understanding\nPostgres somehow wrong, as I am quite new to this community :)\n\nBest regards,\nXiaozhe", "msg_date": "Wed, 17 Nov 2021 16:39:37 +0100", "msg_from": "Xiaozhe Yao <askxzyao@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Propose a new hook for mutating the query bounds" }, { "msg_contents": "On 11/17/21 16:39, Xiaozhe Yao wrote:\n> Hi Tom,\n> \n> Thanks for your feedback. I completely agree with you that a \n> higher-level hook is better suited for this case. I have adjusted the \n> PoC patch to this email.\n> \n> Now it is located in the clauselist_selectivity_ext function, where we \n> first check if the hook is defined. If so, we let the hook estimate the \n> selectivity and return the result. With this one, I can also develop \n> extensions to better estimate the selectivity.\n> \n\nI think clauselist_selectivity is the right level, because this is \npretty similar to what extended statistics are doing. I'm not sure if \nthe hook should be called in clauselist_selectivity_ext or in the plain \nclauselist_selectivity. But it should be in clauselist_selectivity_or \ntoo, probably.\n\nThe way the hook is used seems pretty inconvenient, though. I mean, if \nyou do this\n\n if (clauselist_selectivity_hook)\n return clauselist_selectivity_hook(...);\n\nthen what will happen when the ML model has no information applicable to \na query? This is called for all relations, all conditions, etc. and \nyou've short-circuited all the regular code, so the hook will have to \ncopy all of that. 
Seems pretty silly and fragile.\n\nIMO the right approach is what statext_clauselist_selectivity is doing, \ni.e. estimate clauses, mark them as estimated in a bitmap, and let the \nrest of the existing code take care of the remaining clauses. So more \nsomething like\n\n if (clauselist_selectivity_hook)\n s1 *= clauselist_selectivity_hook(..., &estimatedclauses);\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 17 Nov 2021 19:47:29 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Propose a new hook for mutating the query bounds" }, { "msg_contents": "Hi,\n\nThanks for the previous feedbacks!\n\n> The way the hook is used seems pretty inconvenient, though.\n\nI see the problem, and I agree.\n\nI looked into how other hooks work, and I am wondering if it looks ok if\nwe: pass a pointer to the hook, and let the hook check if there is any\ninformation applicable. If there is none, the hook just returns False and\nwe let the rest of the code handle. If it is true, we get the selectivity\nfrom the hook and return it. So something like\n\n```\nif (clauselist_selectivity_hook &&\n(*clauselist_selectivity_hook) (root, clauses, varRelid, jointype, sjinfo,\nuse_extended_stats, &s1))\n{\nreturn s1;\n}\n```\n\nWhat I am trying to mock is the get_index_stats_hook (\nhttps://github.com/taminomara/psql-hooks/blob/master/Detailed.md#get_index_stats_hook).\n\n\nAm I understanding your idea correctly and does this look somehow better?\n\nBest regards,\nXiaozhe\n\nOn Wed, Nov 17, 2021 at 7:47 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> On 11/17/21 16:39, Xiaozhe Yao wrote:\n> > Hi Tom,\n> >\n> > Thanks for your feedback. I completely agree with you that a\n> > higher-level hook is better suited for this case. 
I have adjusted the\n> > PoC patch to this email.\n> >\n> > Now it is located in the clauselist_selectivity_ext function, where we\n> > first check if the hook is defined. If so, we let the hook estimate the\n> > selectivity and return the result. With this one, I can also develop\n> > extensions to better estimate the selectivity.\n> >\n>\n> I think clauselist_selectivity is the right level, because this is\n> pretty similar to what extended statistics are doing. I'm not sure if\n> the hook should be called in clauselist_selectivity_ext or in the plain\n> clauselist_selectivity. But it should be in clauselist_selectivity_or\n> too, probably.\n>\n> The way the hook is used seems pretty inconvenient, though. I mean, if\n> you do this\n>\n> if (clauselist_selectivity_hook)\n> return clauselist_selectivity_hook(...);\n>\n> then what will happen when the ML model has no information applicable to\n> a query? This is called for all relations, all conditions, etc. and\n> you've short-circuited all the regular code, so the hook will have to\n> copy all of that. Seems pretty silly and fragile.\n>\n> IMO the right approach is what statext_clauselist_selectivity is doing,\n> i.e. estimate clauses, mark them as estimated in a bitmap, and let the\n> rest of the existing code take care of the remaining clauses. 
So more\n> something like\n>\n> if (clauselist_selectivity_hook)\n> s1 *= clauselist_selectivity_hook(..., &estimatedclauses);\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>", "msg_date": "Thu, 18 Nov 2021 10:59:56 +0100", "msg_from": "Xiaozhe Yao <askxzyao@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Propose a new hook for mutating the query bounds" }, { "msg_contents": "On 11/18/21 10:59, Xiaozhe Yao wrote:\n> Hi,\n> \n> Thanks for the previous feedbacks!\n> \n> > The way the hook is used seems pretty inconvenient, though.\n> \n> I see the problem, and I agree.\n> \n> I looked into how other hooks work, and I am wondering if it looks ok if \n> we: pass a pointer to the hook, and let the hook check if there is any \n> information applicable. If there is none, the hook just returns False \n> and we let the rest of the code handle. If it is true, we get the \n> selectivity from the hook and return it. So something like\n> \n> ```\n> if (clauselist_selectivity_hook &&\n> (*clauselist_selectivity_hook) (root, clauses, varRelid, jointype, \n> sjinfo, use_extended_stats, &s1))\n> {\n> return s1;\n> }\n> ```\n> \n\nNo, that doesn't really solve the issue, because it's all or nothing \napproach. What if you ML can help estimating just a subset of clauses? \nIMHO the hooks should allow estimating the clauses the ML model was \nbuilt on, and then do the usual estimation for the remaining ones. \nOtherwise you still have to copy large parts of the code.\n\n> What I am trying to mock is the get_index_stats_hook \n> (https://github.com/taminomara/psql-hooks/blob/master/Detailed.md#get_index_stats_hook \n> <https://github.com/taminomara/psql-hooks/blob/master/Detailed.md#get_index_stats_hook>). \n> \n\nBut that hook only deals with a single index at a time - either it finds \nstats for it or not. 
But this new hook deals with a list of clauses, so it \nshould allow processing just a subset of them, I think.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 18 Nov 2021 15:07:39 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Propose a new hook for mutating the query bounds" } ]
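The hook shape Tomas converges on in the thread above — the extension estimates only the clauses its ML model covers, marks them in a bitmap, and the ordinary per-clause path handles the rest — can be sketched self-contained. Everything here is illustrative: real code would work with PlannerInfo, a List of clauses, and a Bitmapset rather than an array of doubles and an unsigned bitmap, and `demo_ml_hook` stands in for a hypothetical learned model.

```c
#include <assert.h>
#include <stddef.h>

typedef double Selectivity;

/* bit i of *estimatedclauses set => clause i was handled by the hook */
typedef Selectivity (*clauselist_selectivity_hook_type)(const double *clause_sels,
                                                        int nclauses,
                                                        unsigned *estimatedclauses);

clauselist_selectivity_hook_type clauselist_selectivity_hook = NULL;

/*
 * Core path: let the hook estimate whatever subset of clauses its model
 * covers, then apply the default per-clause (independence) estimates to
 * the clauses it left unmarked — instead of all-or-nothing short-circuiting.
 */
Selectivity
clauselist_selectivity_sketch(const double *clause_sels, int nclauses)
{
    unsigned    estimatedclauses = 0;
    Selectivity s1 = 1.0;

    if (clauselist_selectivity_hook)
        s1 *= clauselist_selectivity_hook(clause_sels, nclauses, &estimatedclauses);

    for (int i = 0; i < nclauses; i++)
        if (!(estimatedclauses & (1u << i)))
            s1 *= clause_sels[i];   /* default independence estimate */

    return s1;
}

/*
 * Toy extension hook: pretend the model has learned that clauses 0 and 1
 * are perfectly correlated (like X<10 and Y<20 with Y=X+10), so their
 * joint selectivity is just clause 0's selectivity.
 */
Selectivity
demo_ml_hook(const double *clause_sels, int nclauses, unsigned *estimatedclauses)
{
    if (nclauses >= 2)
    {
        *estimatedclauses |= 3u;    /* clauses 0 and 1 estimated here */
        return clause_sels[0];
    }
    return 1.0;                     /* nothing applicable: defer everything */
}
```

With this shape, a query the model knows nothing about simply leaves the bitmap empty and falls through to the existing estimation code, which is the failure mode the thread was worried about.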
[ { "msg_contents": "Hi,\n\nCurrently docs about pg_upgrade says:\n\n\"\"\"\n <para>\n The <option>--jobs</option> option allows multiple CPU cores to be used\n for copying/linking of files and to dump and reload database schemas\n in parallel; a good place to start is the maximum of the number of\n CPU cores and tablespaces. This option can dramatically reduce the\n time to upgrade a multi-database server running on a multiprocessor\n machine.\n </para>\n\"\"\"\n\nWhich make the user think that the --jobs option could use all CPU\ncores. Which is not true. Or that it has anything to do with multiple\ndatabases, which is true only to some extent.\n\nWhat that option really improves are upgrading servers with multiple\ntablespaces, of course if --link or --clone are used pg_upgrade is still\nvery fast but used with the --copy option is not what one could expect.\n\nAs an example, a customer with a 25Tb database, 40 cores and lots of ram\nused --jobs=35 and got only 7 processes (they have 6 tablespaces) and\nthe disks where not used at maximum speed either. They expected 35\nprocesses copying lots of files at the same time.\n\nSo, first I would like to improve documentation. What about something\nlike the attached? \n\nNow, a couple of questions:\n\n- in src/bin/pg_upgrade/file.c at copyFile() we define a buffer to\n determine the amount of bytes that should be used in read()/write() to\n copy the relfilenode segments. And we define it as (50 * BLCKSZ),\n which is 400Kb. Isn't this too small?\n\n- why we read()/write() at all? is not a faster way of copying the file?\n i'm asking that because i don't actually know.\n\nI'm trying to add more parallelism by copying individual segments\nof a relfilenode in different processes. Does anyone one see a big\nproblem in trying to do that? 
I'm asking because no one did it before,\nthat could not be a good sign.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL", "msg_date": "Wed, 17 Nov 2021 14:44:52 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "pg_upgrade parallelism" }, { "msg_contents": "On Wed, 2021-11-17 at 14:44 -0500, Jaime Casanova wrote:\r\n> I'm trying to add more parallelism by copying individual segments\r\n> of a relfilenode in different processes. Does anyone one see a big\r\n> problem in trying to do that? I'm asking because no one did it before,\r\n> that could not be a good sign.\r\n\r\nI looked into speeding this up a while back, too. For the use case I\r\nwas looking at -- Greenplum, which has huge numbers of relfilenodes --\r\nspinning disk I/O was absolutely the bottleneck and that is typically\r\nnot easily parallelizable. (In fact I felt at the time that Andres'\r\nwork on async I/O might be a better way forward, at least for some\r\nfilesystems.)\r\n\r\nBut you mentioned that you were seeing disks that weren't saturated, so\r\nmaybe some CPU optimization is still valuable? I am a little skeptical\r\nthat more parallelism is the way to do that, but numbers trump my\r\nskepticism.\r\n\r\n> - why we read()/write() at all? is not a faster way of copying the file?\r\n> i'm asking that because i don't actually know.\r\n\r\nI have idly wondered if something based on splice() would be faster,\r\nbut I haven't actually tried it.\r\n\r\nBut there is now support for copy-on-write with the clone mode, isn't\r\nthere? 
Or are you not able to take advantage of it?\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 17 Nov 2021 20:04:41 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade parallelism" }, { "msg_contents": "On Wed, Nov 17, 2021 at 02:44:52PM -0500, Jaime Casanova wrote:\n> Hi,\n> \n> Currently docs about pg_upgrade says:\n> \n> \"\"\"\n> <para>\n> The <option>--jobs</option> option allows multiple CPU cores to be used\n> for copying/linking of files and to dump and reload database schemas\n> in parallel; a good place to start is the maximum of the number of\n> CPU cores and tablespaces. This option can dramatically reduce the\n> time to upgrade a multi-database server running on a multiprocessor\n> machine.\n> </para>\n> \"\"\"\n> \n> Which make the user think that the --jobs option could use all CPU\n> cores. Which is not true. Or that it has anything to do with multiple\n> databases, which is true only to some extent.\n> \n> What that option really improves are upgrading servers with multiple\n> tablespaces, of course if --link or --clone are used pg_upgrade is still\n> very fast but used with the --copy option is not what one could expect.\n\n> As an example, a customer with a 25Tb database, 40 cores and lots of ram\n> used --jobs=35 and got only 7 processes (they have 6 tablespaces) and\n> the disks where not used at maximum speed either. They expected 35\n> processes copying lots of files at the same time.\n\nI would test this. How long does it take to cp -r the data dirs vs pg_upgrade\nthem ? If running 7 \"cp\" in parallel is faster than the \"copy\" portion of\npg_upgrade -j7, then pg_upgrade's file copy should be optimized.\n\nBut if it's not faster, then maybe should look at other options, like your idea\nto copy filenodes (or their segments) in parallel.\n\n> So, first I would like to improve documentation. What about something\n> like the attached? 
\n\nThe relevant history is in commits\n6f1b9e4efd94fc644f5de5377829d42e48c3c758\na89c46f9bc314ed549245d888da09b8c5cace104\n\n--jobs originally parallelized pg_dump and pg_restore, and then added\ncopying/linking. So the docs should mention tablespaces, as you said, but\nshould also mention databases. It may not be an issue for you, but pg_restore\nis the slowest part of our pg_upgrades, since we have many partitions.\n\n> Now, a couple of questions:\n> \n> - in src/bin/pg_upgrade/file.c at copyFile() we define a buffer to\n> determine the amount of bytes that should be used in read()/write() to\n> copy the relfilenode segments. And we define it as (50 * BLCKSZ),\n> which is 400Kb. Isn't this too small?\n\nMaybe - you'll have to check :)\n\n> - why we read()/write() at all? is not a faster way of copying the file?\n> i'm asking that because i don't actually know.\n\nNo portable way. Linux has this:\nhttps://man7.org/linux/man-pages/man2/copy_file_range.2.html\n\nBut I just read:\n\n| First support for cross-filesystem copies was introduced in Linux\n| 5.3. Older kernels will return -EXDEV when cross-filesystem\n| copies are attempted.\n\nTo me that sounds like it may not be worth it, at least not quite yet.\nBut it would be good to test.\n\n> I'm trying to add more parallelism by copying individual segments\n> of a relfilenode in different processes. Does anyone one see a big\n> problem in trying to do that? I'm asking because no one did it before,\n> that could not be a good sign.\n\nMy concern would be if there's too many jobs and the storage bogs down, then it\ncould be slower.\n\nI think something like that should have a separate option, not just --jobs.\nLike --parallel-in-tablespace. The original implementation puts processes\nacross CPUs (for pg_dump/restore) and tablespaces (for I/O). 
Maybe it should\nbe possible to control those with separate options, too.\n\nFWIW, we typically have only one database of any significance, but we do use\ntablespaces, and I've used pg_upgrade --link since c. v9.0. --jobs probably\nhelps pg_dump/restore at few customers who have multiple DBs. But it probably\ndoesn't help to parallelize --link across tablespaces (since our tablespaces\nare actually on the same storage devices, but with different filesystems).\nI anticipate it might even make a few customers upgrade a bit slower, since\n--link is a metadata operation and probably involves a lot of FS barriers, for\nwhich the storage may be inadequate to support in parallel.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 17 Nov 2021 14:34:03 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade parallelism" }, { "msg_contents": "On Wed, Nov 17, 2021 at 02:44:52PM -0500, Jaime Casanova wrote:\n> Hi,\n> \n> Currently docs about pg_upgrade says:\n> \n> \"\"\"\n> <para>\n> The <option>--jobs</option> option allows multiple CPU cores to be used\n> for copying/linking of files and to dump and reload database schemas\n> in parallel; a good place to start is the maximum of the number of\n> CPU cores and tablespaces. This option can dramatically reduce the\n> time to upgrade a multi-database server running on a multiprocessor\n> machine.\n> </para>\n> \"\"\"\n> \n> Which make the user think that the --jobs option could use all CPU\n> cores. Which is not true. Or that it has anything to do with multiple\n> databases, which is true only to some extent.\n\nUh, the behavior is a little more complicated. 
The --jobs option in\npg_upgrade is used to parallelize three operations:\n\n* copying relation files\n\n* dumping old cluster objects (via parallel_exec_prog())\n\n* creating objects in the new cluster (via parallel_exec_prog())\n\nThe last two basically operate on databases in parallel --- they can't\ndump/load a single database in parallel, but they can dump/load several\ndatabases in parallel.\n\nThe documentation you quote above is saying that you set jobs based on\nthe number of CPUs (for dump/reload which are assumed to be CPU bound)\nand the number of tablespaces (which is assumed to be I/O bound).\n\nI am not sure how we can improve that text.  We could just say the max\nof the number of databases and tablespaces, but then the number of CPUs\nneeds to be involved since, if you only have one CPU core, you don't\nwant parallel dumps/loads happening since that will just cause CPU\ncontention with little benefit.  We mention tablespaces because even if\nyou only have one CPU core, since tablespace copying is I/O bound, you\ncan still benefit from --jobs.\n\n> What that option really improves are upgrading servers with multiple\n> tablespaces, of course if --link or --clone are used pg_upgrade is still\n> very fast but used with the --copy option is not what one could expect.\n> \n> As an example, a customer with a 25Tb database, 40 cores and lots of ram\n> used --jobs=35 and got only 7 processes (they have 6 tablespaces) and\n> the disks where not used at maximum speed either. They expected 35\n> processes copying lots of files at the same time.\n> \n> So, first I would like to improve documentation. What about something\n> like the attached? \n> \n> Now, a couple of questions:\n> \n> - in src/bin/pg_upgrade/file.c at copyFile() we define a buffer to\n> determine the amount of bytes that should be used in read()/write() to\n> copy the relfilenode segments. And we define it as (50 * BLCKSZ),\n> which is 400Kb. 
Isn't this too small?\n\nUh, if you find that increasing that helps, we can increase it --- I\ndon't know how that value was chosen. However, we are really just\ncopying the data into the kernel, not forcing it to storage, so I don't\nknow if a larger value would help.\n\n> - why we read()/write() at all? is not a faster way of copying the file?\n> i'm asking that because i don't actually know.\n\nUh, we could use buffered I/O, I guess, but again, would there be a\nbenefit?\n\n> I'm trying to add more parallelism by copying individual segments\n> of a relfilenode in different processes. Does anyone one see a big\n> problem in trying to do that? I'm asking because no one did it before,\n> that could not be a good sign.\n\nI think we were assuming the copy would be I/O bound and that\nparallelism wouldn't help in a single tablespace.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 18 Nov 2021 17:43:24 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade parallelism" }, { "msg_contents": "On Wed, 2021-11-17 at 14:34 -0600, Justin Pryzby wrote:\r\n> On Wed, Nov 17, 2021 at 02:44:52PM -0500, Jaime Casanova wrote:\r\n> > \r\n> > - why we read()/write() at all? is not a faster way of copying the file?\r\n> > i'm asking that because i don't actually know.\r\n> \r\n> No portable way. 
Linux has this:\r\n> https://man7.org/linux/man-pages/man2/copy_file_range.2.html\r\n> \r\n> But I just read:\r\n> \r\n> > First support for cross-filesystem copies was introduced in Linux\r\n> > 5.3.  Older kernels will return -EXDEV when cross-filesystem\r\n> > copies are attempted.\r\n> \r\n> To me that sounds like it may not be worth it, at least not quite yet.\r\n> But it would be good to test.\r\n\r\nI think a downside of copy_file_range() is that filesystems might\r\nperform a reflink under us, and to me that seems like something that\r\nneeds to be opted into via clone mode.\r\n\r\n(https://lwn.net/Articles/846403/ is also good reading on some sharp\r\nedges, though I doubt many of them apply to our use case.)\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 23 Nov 2021 18:54:03 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade parallelism" }, { "msg_contents": "On Tue, Nov 23, 2021 at 06:54:03PM +0000, Jacob Champion wrote:\n> On Wed, 2021-11-17 at 14:34 -0600, Justin Pryzby wrote:\n> > On Wed, Nov 17, 2021 at 02:44:52PM -0500, Jaime Casanova wrote:\n> > > \n> > > - why we read()/write() at all? is not a faster way of copying the file?\n> > > i'm asking that because i don't actually know.\n> > \n> > No portable way. 
Linux has this:\n> > https://man7.org/linux/man-pages/man2/copy_file_range.2.html\n> > \n> > But I just read:\n> > \n> > > First support for cross-filesystem copies was introduced in Linux\n> > > 5.3.  Older kernels will return -EXDEV when cross-filesystem\n> > > copies are attempted.\n> > \n> > To me that sounds like it may not be worth it, at least not quite yet.\n> > But it would be good to test.\n\nI realized that pg_upgrade doesn't copy between filesystems - it copies from\n$tablespace/PG13/NNN to $tablespace/PG14/NNN.  So that's no issue.\n\nAnd I did a bit of testing with this last weekend, and saw no performance\nbenefit from a larger buffer size, nor from copy_file_range, nor from libc stdio\n(fopen/fread/fwrite/fclose).\n\n> I think a downside of copy_file_range() is that filesystems might\n> perform a reflink under us, and to me that seems like something that\n> needs to be opted into via clone mode.\n\nYou're referring to this:\n\n| copy_file_range() gives filesystems an opportunity to implement \"copy\n|\tacceleration\" techniques, such as the use of reflinks (i.e., two or more\n|\ti-nodes that share pointers to the same copy-on-write disk blocks) or\n|\tserver-side-copy (in the case of NFS).\n\nI don't see why that's an issue though?  It's COW, not hardlink.  It'd be the\nsame as if the filesystem implemented deduplication, right?  postgres shouldn't\nnotice nor care.\n\nI guess you're concerned for someone who wants to be able to run pg_upgrade and\npreserve the ability to start the old cluster in addition to the new. 
But\nthat'd work fine on a COW filesystem, right ?\n\n> (https://lwn.net/Articles/846403/ is also good reading on some sharp\n> edges, though I doubt many of them apply to our use case.)\n\nYea, it doesn't seem the issues are relevant, other than to indicate that the\nsyscall is still evolving, which supports my initial conclusion.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 23 Nov 2021 13:51:09 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade parallelism" }, { "msg_contents": "On Tue, 2021-11-23 at 13:51 -0600, Justin Pryzby wrote:\r\n> \r\n> I guess you're concerned for someone who wants to be able to run pg_upgrade and\r\n> preserve the ability to start the old cluster in addition to the new.\r\n\r\nRight. What I'm worried about is, if disk space or write performance on\r\nthe new cluster is a concern, then having a copy-mode upgrade silently\r\nuse copy-on-write could be a problem if the DBA needs copy mode to\r\nactually copy.\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 23 Nov 2021 21:37:34 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade parallelism" }, { "msg_contents": "Jacob Champion <pchampion@vmware.com> writes:\n> Right. What I'm worried about is, if disk space or write performance on\n> the new cluster is a concern, then having a copy-mode upgrade silently\n> use copy-on-write could be a problem if the DBA needs copy mode to\n> actually copy.\n\nParticularly for the cross-filesystem case, where it would not be\nunreasonable to expect that one could dismount or destroy the old FS\nimmediately afterward. 
I don't know if recent kernels try to make\nthat safe/transparent.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Nov 2021 16:43:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade parallelism" }, { "msg_contents": "On Wed, Nov 17, 2021 at 08:04:41PM +0000, Jacob Champion wrote:\n> On Wed, 2021-11-17 at 14:44 -0500, Jaime Casanova wrote:\n> > I'm trying to add more parallelism by copying individual segments\n> > of a relfilenode in different processes. Does anyone one see a big\n> > problem in trying to do that? I'm asking because no one did it before,\n> > that could not be a good sign.\n> \n> I looked into speeding this up a while back, too. For the use case I\n> was looking at -- Greenplum, which has huge numbers of relfilenodes --\n> spinning disk I/O was absolutely the bottleneck and that is typically\n> not easily parallelizable. (In fact I felt at the time that Andres'\n> work on async I/O might be a better way forward, at least for some\n> filesystems.)\n> \n> But you mentioned that you were seeing disks that weren't saturated, so\n> maybe some CPU optimization is still valuable? I am a little skeptical\n> that more parallelism is the way to do that, but numbers trump my\n> skepticism.\n> \n\nSorry for being unresponsive too long. I did add a new --jobs-per-disk\noption, this is a simple patch I made for the customer and ignored all\nWIN32 parts because I don't know anything about that part. I was wanting\nto complete that part but it has been in the same state two months now.\n\nAFAIU, it seems there is a different struct for the parameters of the\nfunction that will be called on the thread.\n\nI also decided to create a new reap_*_child() function for using with\nthe new parameter.\n\nNow, the customer went from copy 25Tb in 6 hours to 4h 45min, which is\nan improvement of 20%!\n\n\n> > - why we read()/write() at all? 
is not a faster way of copying the file?\n> > i'm asking that because i don't actually know.\n> \n> I have idly wondered if something based on splice() would be faster,\n> but I haven't actually tried it.\n> \n\nI tried and got no better result.\n\n> But there is now support for copy-on-write with the clone mode, isn't\n> there? Or are you not able to take advantage of it?\n> \n\nThat's sadly not possible because those are different disks, and yes I\nknow that's something that pg_upgrade normally doesn't allow but is not\ndifficult to make it happen.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL", "msg_date": "Tue, 11 Jan 2022 23:51:07 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade parallelism" } ]
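The read()/write() loop discussed in this thread can be sketched in portable C. This is a hypothetical standalone sketch, not pg_upgrade's actual copyFile() code: the function name and error handling are illustrative, and the buffer size is left as a parameter so that values other than the 50 * BLCKSZ (400 kB) default mentioned above can be benchmarked.

```c
/*
 * Hypothetical sketch of a buffered file copy, in the spirit of the
 * copyFile() loop discussed above (not the actual pg_upgrade code).
 * Returns 0 on success, -1 on error with errno set.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

static int
copy_file_buffered(const char *src, const char *dst, size_t bufsize)
{
	int			srcfd;
	int			dstfd;
	char	   *buf;
	ssize_t		nread;
	int			rc = 0;

	if ((srcfd = open(src, O_RDONLY)) < 0)
		return -1;
	if ((dstfd = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0600)) < 0)
	{
		int			save_errno = errno;

		close(srcfd);
		errno = save_errno;
		return -1;
	}
	if ((buf = malloc(bufsize)) == NULL)
		rc = -1;

	/* Copy until EOF; handle short writes, which write() permits. */
	while (rc == 0 && (nread = read(srcfd, buf, bufsize)) != 0)
	{
		char	   *p = buf;

		if (nread < 0)
		{
			rc = -1;
			break;
		}
		while (nread > 0)
		{
			ssize_t		nwritten = write(dstfd, p, (size_t) nread);

			if (nwritten < 0)
			{
				rc = -1;
				break;
			}
			p += nwritten;
			nread -= nwritten;
		}
	}

	free(buf);
	close(srcfd);
	if (close(dstfd) < 0)
		rc = -1;
	return rc;
}
```

Timing this with different bufsize values against a plain cp of the same data directory would answer the "isn't 400 kB too small?" question directly; note that in the testing reported above, a larger buffer made no measurable difference.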
[ { "msg_contents": "Dear hackers,\n\nI recently had a hard time finding the root cause of some weird behavior \nwith the async API of libpq when running client and server on Windows. \nWhen the connection aborts with an error - most notably with an error at \nconnection setup - it sometimes fails with a wrong error message:\n\nInstead of:\n\n     connection to server at \"::1\", port 5433 failed: FATAL:  role \"a\" \ndoes not exist\n\nit fails with:\n\n     connection to server at \"::1\", port 5433 failed: server closed the \nconnection unexpectedly\n\nI found out that the recv() function of the Winsock API has some weird \nbehavior. If the connection receives a TCP RST flag, recv() immediately \nreturns -1, regardless of whether all previous data has been retrieved. So \nwhen the connection is closed hard, the behavior is timing dependent on the \nclient side. It may drop the last packet, or it may deliver it to libpq, if \nlibpq calls recv() quickly enough.\n\nThis behavior is described at closesocket() here:\nhttps://docs.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-closesocket\n\n> This is called a hard or abortive close, because the socket's virtual \n> circuit is reset immediately, and any unsent data is lost. On Windows, \n> any *recv* call on the remote side of the circuit will fail with \n> WSAECONNRESET \n> <https://docs.microsoft.com/en-us/windows/desktop/WinSock/windows-sockets-error-codes-2>.\n\nUnfortunately, each connection is closed hard by a Windows PostgreSQL \nserver with the TCP RST flag. That in turn is another Winsock API behavior: \nevery socket that wasn't closed by the application is closed hard with the \nRST flag at process termination. I didn't find any official documentation \nabout this behavior.\n\nExplicitly closing the socket before process termination leads to a \ngraceful close even on Windows. That is done by the attached patch. 
I \nthink delivering the correct error message to the user is much more \nimportant than closing the process in sync with the socket.\n\n\nSome background: I'm the maintainer of ruby-pg, the PostgreSQL client \nlibrary for Ruby. The next version of ruby-pg will switch to the async \nAPI for connection setup. Using this API changes the timing of socket \noperations and therefore often leads to the above wrong message. \nPrevious versions made use of the sync API, which usually doesn't suffer \nfrom this issue. The original issue is here: \nhttps://github.com/ged/ruby-pg/issues/404\n\n--\n\nKind Regards\nLars Kanis", "msg_date": "Wed, 17 Nov 2021 22:13:33 +0100", "msg_from": "Lars Kanis <lars@greiz-reinsdorf.de>", "msg_from_op": true, "msg_subject": "Windows: Wrong error message at connection termination" }, { "msg_contents": "Lars Kanis <lars@greiz-reinsdorf.de> writes:\n> Explicit closing the socket before process termination leads to a \n> graceful close even on Windows. That is done by the attached patch. I \n> think delivering the correct error message to the user is much more \n> important than closing the process in sync with the socket.\n\nPer the comment immediately above this, it's intentional that we don't\nclose the socket.  I'm not really convinced that this is an improvement.\n\nCan we get anywhere by using shutdown(2) instead of close(), ie do a\nhalf-close?  I have no idea what Windows thinks the semantics of that\nare, but it might be worth trying.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Nov 2021 17:01:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "On Thu, Nov 18, 2021 at 10:13 AM Lars Kanis <lars@greiz-reinsdorf.de> wrote:\n> Unfortunately each connection is closed hard by a Windows PostgreSQL server with TCP flag RST. 
That in turn is another Winsock API behavior, that is that every socket, that wasn't closed by the application is closed hard with the RST flag at process termination. I didn't find any official documentation about this behavior.\n\nInteresting discovery. I think you might get the same behaviour from\na Unix system if you set SO_LINGER to 0 before you exit[1]. I suppose\nif a TCP implementation is partially in user space (I have no idea if\nthis is true for Windows, I never use it, but I recall that Winsock\nwas at some point a DLL) and can't handle the existence of any socket\nstate after the process is gone, you might want to nuke everything and\ntell the peer immediately that you're doing so on exit?\n\nI realise now that the experiments we did a while back to try to\nunderstand this across a few different operating systems[2] had missed\nthis subtlety, because that Python script had an explicit close()\ncall, whereas PostgreSQL exits. It still revealed that the client\nisn't allowed to read any data after its write failed, which is a\nknown source of error messages being eaten. 
What I missed is that the\nclient doesn't just get an RST and enter this\nno-you-can't-have-the-error-message-I-have-received state in response\nto data sent by the client (the usual way you expect to get RST), like\nin that test, but it also does so proactively when the server process\nexits, as you've explained (in other words, it's not necessary for the\nclient to try to write to reach this error-eating state).\n\n[1] https://stackoverflow.com/questions/3757289/when-is-tcp-option-so-linger-0-required\n[2] https://www.postgresql.org/message-id/flat/20190306030706.GA3967%40f01898859afd.ant.amazon.com#32f9f16f9be8da5ee5c3b405d6d1829c\n\n\n", "msg_date": "Thu, 18 Nov 2021 11:26:57 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Interesting discovery. I think you might get the same behaviour from\n> a Unix system if you set SO_LINGER to 0 before you exit[1]. I suppose\n> if a TCP implementation is partially in user space (I have no idea if\n> this is true for Windows, I never use it, but I recall that Winsock\n> was at some point a DLL) and can't handle the existence of any socket\n> state after the process is gone, you might want to nuke everything and\n> tell the peer immediately that you're doing so on exit?\n\nIt's definitely plausible that Windows does this because it can't\nhandle retransmits once the sender's state is gone. However, it\nseems to me that any such state would be tied to the open socket,\nnot to the sender process as such. Which would suggest that an\nearly close() as Lars suggests would make things worse not better.\nThis is all just speculation unfortunately. 
(Man, I hate dealing\nwith closed-source software.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Nov 2021 17:55:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I realise now that the experiments we did a while back to try to\n> understand this across a few different operating systems[2] had missed\n> this subtlety, because that Python script had an explicit close()\n> call, whereas PostgreSQL exits. It still revealed that the client\n> isn't allowed to read any data after its write failed, which is a\n> known source of error messages being eaten.\n\nYeah. After re-reading that thread, I'm a bit confused about how\nto square the results we got then with Lars' report. The Windows\ndocumentation he pointed to does claim that the default behavior if you\nissue closesocket() is to do a \"graceful close in the background\", which\none would think means allowing sent data to be received. That's not what\nwe saw. It's possible that we would get different results if we re-tested\nwith a scenario where the client doesn't attempt to send data after the\nserver-side close; but I'm not sure how much it's worth to improve that\ncase if the other case still fails hard. 
In any case, our previous\nresults definitely show that issuing an explicit close() is no panacea.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Nov 2021 21:04:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "Am 18.11.21 um 03:04 schrieb Tom Lane:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> I realise now that the experiments we did a while back to try to\n>> understand this across a few different operating systems[2] had missed\n>> this subtlety, because that Python script had an explicit close()\n>> call, whereas PostgreSQL exits.  It still revealed that the client\n>> isn't allowed to read any data after its write failed, which is a\n>> known source of error messages being eaten.\n> Yeah.  After re-reading that thread, I'm a bit confused about how\n> to square the results we got then with Lars' report.  The Windows\n> documentation he pointed to does claim that the default behavior if you\n> issue closesocket() is to do a \"graceful close in the background\", which\n> one would think means allowing sent data to be received.  That's not what\n> we saw.  It's possible that we would get different results if we re-tested\n> with a scenario where the client doesn't attempt to send data after the\n> server-side close; but I'm not sure how much it's worth to improve that\n> case if the other case still fails hard.\n\nFrom my experimentation the Winsock implementation has the two issues \nwhich I explained. First it drops all received but not yet retrieved \ndata as soon as it receives a RST packet. 
And secondly it always sends a \nRST packet on every socket, that wasn't send-closed at process \ntermination, regardless if there is any pending data.\n\nSending data to a socket, that was already closed from the other side is \nonly one way to trigger a RST packet, but closing a socket with \nl_linger=0 is another way and process termination is the third. They all \ncan lead to data loss on the receiver side, presumably because of the \nRST flag.\n\nAn alternative to closesocket() is shutdown(sock, SD_SEND). It doesn't \nfree the socket resource, but leads to a graceful shutdown. However the \nFIN packet is send when the shutdown() or closesocket() function is \ncalled and that's still short before the process terminates. I did some \nmore testing with different linger options, but it didn't change the \nbehavior substantial. So I didn't find any way to close the socket with \na FIN packet at the point in time of the process termination.\n\nThe other way around would be to make sure on the client side, that the \nlast message is retrieved before the RST packet arrives, so that no data \nis lost. This works mostly well through the sync API of libpq, but with \nthe async API the trigger for data reception is outside of the scope of \nlibpq, so that there's no way to ensure recv() is called quick enough, \nafter the data was received but before RST arrives. On a local \nclient+server combination there is only a gap of 0.5 milliseconds or so. \nI also didn't find a way to retrieve the enqueued data after RST \narrived. Maybe there's a nasty hack to retrieve the data afterwards, but \nI didn't dig into assembly code and memory layout of Winsock internals.\n\n\n> In any case, our previous\n> results definitely show that issuing an explicit close() is no panacea.\nI don't fully understand the issue with closing the socket before \nprocess termination. Sure, it can be a valuable information that the \ncorresponding backend process has definitely terminated. 
At least in the \ncontext of regression testing or so. But I think that losing messages \nfrom the backend is way more critical than a non-sync process \ntermination. Am I missing something?\n\n--\n\nRegards,\nLars Kanis\n\n\n\n\n", "msg_date": "Sun, 21 Nov 2021 20:19:29 +0100", "msg_from": "Lars Kanis <lars@greiz-reinsdorf.de>", "msg_from_op": true, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "On Mon, Nov 22, 2021 at 8:19 AM Lars Kanis <lars@greiz-reinsdorf.de> wrote:\n> The other way around would be to make sure on the client side, that the\n> last message is retrieved before the RST packet arrives, so that no data\n> is lost. This works mostly well through the sync API of libpq, but with\n> the async API the trigger for data reception is outside of the scope of\n> libpq, so that there's no way to ensure recv() is called quick enough,\n> after the data was received but before RST arrives. On a local\n> client+server combination there is only a gap of 0.5 milliseconds or so.\n> I also didn't find a way to retrieve the enqueued data after RST\n> arrived. Maybe there's a nasty hack to retrieve the data afterwards, but\n> I didn't dig into assembly code and memory layout of Winsock internals.\n\nHmm.  Well, if I understand how this works (and I'm not too familiar\nwith this Windows code so maybe I don't), the postmaster duplicates\nthe socket into the child process (see\n{write,read}_inheritable_socket()) and then closes its own handle (see\nServerLoop()'s call to StreamClose(port->sock)).  What if the\npostmaster kept the socket open, and then closed its copy after the\nchild exits?  Then, I guess, maybe, Winsock socket state would live on\nwith a non-zero reference count and be able to perform the proper\ngraceful TCP shutdown dance, at least as long as the postmaster itself\nis up. 
Various other ideas: don't do that, but duplicate the socket\nback into the postmaster before exit, or into some other process, or\nrewrite PostgreSQL to use threads...\n\n\n", "msg_date": "Mon, 22 Nov 2021 09:24:21 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "On Mon, Nov 22, 2021 at 9:24 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Hmm. Well, if I understand how this works (and I'm not too familiar\n> with this Windows code so I maybe I don't), the postmaster duplicates\n> the socket into the child process (see\n> {write,read}_inheritable_socket()) and then closes its own handle (see\n> ServerLoop()'s call to StreamClose(port->sock)). What if the\n> postmaster kept the socket open, and then closed its copy after the\n> child exits? Then, I guess, maybe, Winsock socket state would live on\n> with a non-zero reference count and be able to perform the proper\n> graceful TCP shutdown dance, at least as long as the postmaster itself\n> is up. Various other ideas: don't do that, but duplicate the socket\n> back into the postmaster before exit, or into some other process, or\n> rewrite PostgreSQL to use threads...\n\nHmm, maybe it's still not enough. Now that I have coffee, I thought\nabout the well known failure of idle_in_transaction_timeout to report\nerrors on Windows[1]. 
There'd be no RST on timeout with the above\napproach, which is good, but the next time you try to send a query,\nperhaps a race begins: the server's TCP stack receives the query\npacket and replies with RST (the \"normal\" kind that is a response to\nunreceivable data, not the linger=0 kind that is proactively sent),\nmeanwhile the client begins to read, and *probably* reads the already\nbuffered idle-in-transaction-timeout error message, but with unlucky\nscheduling the RST arrives first and drops the buffered data (unlike\non Unix), right?\n\n[1] https://www.postgresql.org/message-id/CAP3o3PdzM0BLmNBELA5wV6YoN_1yYBVdoOvz9kYbOuK-YQGFAw%40mail.gmail.com\n\n\n", "msg_date": "Mon, 22 Nov 2021 10:10:23 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Hmm. Well, if I understand how this works (and I'm not too familiar\n> with this Windows code so I maybe I don't), the postmaster duplicates\n> the socket into the child process (see\n> {write,read}_inheritable_socket()) and then closes its own handle (see\n> ServerLoop()'s call to StreamClose(port->sock)). What if the\n> postmaster kept the socket open, and then closed its copy after the\n> child exits?\n\nUgh :-(. For starters, we risk running out of FDs in the postmaster,\ndon't we?\n\nI did some tracing just now and convinced myself that socket_close is\nthe first on_proc_exit callback registered in an ordinary backend,\nand therefore the last action done by proc_exit_prepare. 
The only\nthings that happen after that are PROFILE_PID_DIR setup (not relevant\nin production builds), an elog(DEBUG) call, and any atexit callbacks\nthat third-party code might have registered.\n\nIf you're willing to avert your eyes from the question of what atexit\ncallbacks might do, then it'd be okay to do closesocket in socket_close,\nreasoning that the backend has certainly disconnected itself from shmem\nand so on, and thus is effectively done even if it is still a live process\nso far as the kernel is concerned. So maybe Lars' proposed patch is\nacceptable after all. It feels a bit shaky, but when we're sitting atop\na piece-of-junk TCP stack, we can't really have the guarantees we'd like.\n\nThe main way in which it's shaky is that future rearrangements of the\nshutdown sequence, or additions of new on_proc_exit callbacks, could\ncreate a situation where socket_close is no longer the last interesting\naction. We could imagine doing something to make it less likely for\nthat to happen accidentally, but I'm not sure it's worth the trouble.\n\nEssentially this is reverting 268313a95 of 2003-05-29. The commit\nmessage for that fails to cite any mailing-list discussion, but after\nsome digging in the archives I think I did it in response to\n\nhttps://www.postgresql.org/message-id/flat/009c01c31ce9%24eeaf00f0%24fb02a8c0%40muskrat\n\nwhere the complaint was that a DB couldn't be dropped because a\njust-closed connection was still live so far as the server was concerned.\nWe didn't do anything to make PQclose() synchronous, so the problem is\nreally still there; but the idea was that other client libraries could\nmake session-close synchronous if they wanted. 
For that purpose,\nbeing out of the ProcArray is really sufficient, and I think it's safe\nto suppose that socket_close must run after that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Nov 2021 16:31:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Hmm, maybe it's still not enough. Now that I have coffee, I thought\n> about the well known failure of idle_in_transaction_timeout to report\n> errors on Windows[1].\n\nYeah, I think that may well be a manifestation of the same problem:\nonce the backend exits, Winsock issues RST which prevents the client\nfrom reading the queued data. We had been analyzing that under the\nassumption that Windows obeys the TCP RFCs ... but having now been\ndisabused of that optimism, it seems to match up pretty well.\nIt'd be useful to check if Lars' patch cures that symptom.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Nov 2021 16:42:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "On Mon, Nov 22, 2021 at 10:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Hmm, maybe it's still not enough. Now that I have coffee, I thought\n> > about the well known failure of idle_in_transaction_timeout to report\n> > errors on Windows[1].\n>\n> Yeah, I think that may well be a manifestation of the same problem:\n> once the backend exits, Winsock issues RST which prevents the client\n> from reading the queued data. We had been analyzing that under the\n> assumption that Windows obeys the TCP RFCs ... 
but having now been\n> disabused of that optimism, it seems to match up pretty well.\n> It'd be useful to check if Lars' patch cures that symptom.\n\nYeah, it sounds like it might solve at least the server-side problem.\nLet's call that weird behaviour #1: RST on process exit. (I wonder if\nmy keep-the-socket-open-in-another-process thought experiment is\ntheoretically better: a lingering socket should be capable of\nresending data that hasn't been ack'd yet in FIN-WAIT-1 state after\nclose, which I suspect might not happen if the TCP stack nukes the\nsocket. If close() avoids the proactive RST but still doesn't really\nfollow the shutdown protocol then it's papering over a crack in the\nwall, but I'm not planning to argue about that...)\n\nIIUC we'd still have weird behaviour #2 on the client side: TCP stack\ndrops buffered received data on the floor on receipt of RST.\n\nSo yeah, it'd be interesting to know if by avoiding/hiding weird\nbehaviour #1, idle_in_transaction_timeout works as desired most of the\ntime by tilting the race in favour of eager clients and favourable\nscheduling. 
If a client sends a new query and then immediately begins\nto read the response, there's a good chance it'll be able to read the\nalready-buffered error message before the query->RST ping pong...\nWhich I now understand is exactly what Lars was explaining: that sync\nAPIs (like the psql command shown in that other thread) might have a\ngood chance of winning that race, but for async APIs, the author of\nthe async API has no idea what its client is going to do.\n\n\n", "msg_date": "Mon, 22 Nov 2021 11:33:08 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, Nov 22, 2021 at 10:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It'd be useful to check if Lars' patch cures that symptom.\n\n> Yeah, it sounds like it might solve at least the server-side problem.\n> Let's call that weird behaviour #1: RST on process exit. (I wonder if\n> my keep-the-socket-open-in-another-process thought experiment is\n> theoretically better: a lingering socket should be capable of\n> resending data that hasn't been ack'd yet in FIN-WAIT-1 state after\n> close, which I suspect might not happen if the TCP stack nukes the\n> socket. If close() avoids the proactive RST but still doesn't really\n> follow the shutdown protocol then it's papering over a crack in the\n> wall, but I'm not planning to argue about that...)\n\nThe language about \"graceful shutdown\" in the Windows docs at least\nsuggests that they finish out the TCP connection cleanly; failing\nto retransmit at need would hardly qualify as \"graceful\". 
Of course,\nRedmond keeps finding ways to fail to meet reasonable expectations.\n\n> IIUC we'd still have weird behaviour #2 on the client side: TCP stack\n> drops buffered received data on the floor on receipt of RST.\n\nDo we know that that actually happens in an arm's-length connection\n(ie two separate machines)? I wonder if the data loss is strictly\nan artifact of a localhost connection. There'd be a lot more pressure\non them to make cross-machine TCP work per spec, one would think.\nBut in any case, if we can avoid sending RST in this situation,\nit seems mostly moot for our usage.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Nov 2021 18:04:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "Am 22.11.21 um 00:04 schrieb Tom Lane:\n> Do we know that that actually happens in an arm's-length connection\n> (ie two separate machines)? I wonder if the data loss is strictly\n> an artifact of a localhost connection. There'd be a lot more pressure\n> on them to make cross-machine TCP work per spec, one would think.\n> But in any case, if we can avoid sending RST in this situation,\n> it seems mostly moot for our usage.\n\nSorry it took some days to get a setup to check this!\n\nThe result is as expected:\n\n 1. Windows client to Linux server works without dropping the error message\n 2. Linux client to Windows server works without dropping the error message\n 3. Windows client to remote Windows server drops the error message,\n depending on the timing of the event loop\n\nIn 1. the Linux server doesn't end the connection with a RST packet, so \nthat the Windows client enqueues the error message properly and doesn't \ndrop it.\n\nIn 2. the Linux client doesn't care about the RST packet of the Windows \nserver and properly enqueues and raises the error message.\n\nIn 3. 
the combination of the bad RST behavior of client and server leads \nto data loss. It depends on the network timing. A delay of 0.5 ms in the \nevent loop was enough in a localhost setup as well as in some LAN \nsetup. On the contrary, over some slower WLAN connection a delay of less \nthan 15 ms did not lose data, but higher delays still did.\n\nThe idea of running a second process, pass the socket handle to it, \nobserve the parent process and close the socket when it exited, could \nwork, but I guess it's overly complicated and creates more issues than \nit solves. Probably the same if the master process handles the socket \nclosing.\n\nSo I still think it's best to close the socket as proposed in the patch.\n\n--\n\nRegards,\nLars Kanis\n\n\n\n\n", "msg_date": "Sat, 27 Nov 2021 12:39:00 +0100", "msg_from": "Lars Kanis <lars@greiz-reinsdorf.de>", "msg_from_op": true, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "Hello Lars,\n27.11.2021 14:39, Lars Kanis wrote:\n>\n> So I still think it's best to close the socket as proposed in the patch.\nPlease see also the previous discussion of the topic:\nhttps://www.postgresql.org/message-id/flat/16678-253e48d34dc0c376%40postgresql.org\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 29 Nov 2021 12:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> 27.11.2021 14:39, Lars Kanis wrote:\n>> So I still think it's best to close the socket as proposed in the patch.\n\n> Please see also the previous discussion of the topic:\n> https://www.postgresql.org/message-id/flat/16678-253e48d34dc0c376%40postgresql.org\n\nHm, yeah, that discussion seems to have slipped through the cracks.\nNot sure why it didn't end up in pushing something.\n\nAfter re-reading that thread and re-studying relevant 
Windows\ndocumentation [1][2], I think the main open question is whether\nwe need to issue shutdown() or not, and if so, whether to use\nSD_BOTH or just SD_SEND. I'm inclined to prefer not calling\nshutdown(), because [1] is self-contradictory as to whether it\ncan block, and [2] is pretty explicit that it's not necessary.\n\n\t\t\tregards, tom lane\n\n[1] https://docs.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-shutdown\n[2] https://docs.microsoft.com/en-us/windows/win32/winsock/graceful-shutdown-linger-options-and-socket-closure-2\n\n\n", "msg_date": "Mon, 29 Nov 2021 14:16:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "Hello Tom,\n29.11.2021 22:16, Tom Lane wrote:\n> Hm, yeah, that discussion seems to have slipped through the cracks.\n> Not sure why it didn't end up in pushing something.\n>\n> After re-reading that thread and re-studying relevant Windows\n> documentation [1][2], I think the main open question is whether\n> we need to issue shutdown() or not, and if so, whether to use\n> SD_BOTH or just SD_SEND. I'm inclined to prefer not calling\n> shutdown(), because [1] is self-contradictory as to whether it\n> can block, and [2] is pretty explicit that it's not necessary.\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://docs.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-shutdown\n> [2] https://docs.microsoft.com/en-us/windows/win32/winsock/graceful-shutdown-linger-options-and-socket-closure-2\nI've tested the close-only patch with pg_sleep() in pqReadData(), and it\nworks too. So I wonder how to understand \"To assure that all data is\nsent and received on a connected socket before it is closed, an\napplication should use shutdown to close connection before calling\nclosesocket.\" in [1].\nMaybe they mean that shutdown should be used before, but not after\nclosesocket(). 
Or maybe the Windows' behaviour somehow evolved over\ntime. (With the patch I cannot reproduce the FATAL message loss even on\nWindows 2012 R2.) So without practical evidence of the importance of\nshutdown() I'm inclined to a simpler solution too.\n\nAs to 268313a95, back in 2003 it was possible to compile the server on\nWindows only using Cygwin (though you could compile libpq with Visual C,\nsee [3]). So the \"#ifdef WIN32\" that is proposed now will not affect that\nscenario anyway.\n\nBest regards,\nAlexander\n\n[3]\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=doc/src/sgml/install-win32.sgml;hb=268313a95\n\n\n", "msg_date": "Thu, 2 Dec 2021 22:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> 29.11.2021 22:16, Tom Lane wrote:\n>> After re-reading that thread and re-studying relevant Windows\n>> documentation [1][2], I think the main open question is whether\n>> we need to issue shutdown() or not, and if so, whether to use\n>> SD_BOTH or just SD_SEND. I'm inclined to prefer not calling\n>> shutdown(), because [1] is self-contradictory as to whether it\n>> can block, and [2] is pretty explicit that it's not necessary.\n\n> I've tested the close-only patch with pg_sleep() in pqReadData(), and it\n> works too.\n\nThanks for testing!\n\n> So I wonder how to understand \"To assure that all data is\n> sent and received on a connected socket before it is closed, an\n> application should use shutdown to close connection before calling\n> closesocket.\" in [1].\n\nI suppose their documentation has evolved over time. This sentence\nprobably predates their explicit acknowledgement in [2] that you don't\nhave to call shutdown(). 
Maybe, once upon a time with very old\nversions of Winsock, you did have to do so if you wanted graceful close.\n\nI'll push the close-only change in a little bit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Dec 2021 14:31:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "On 02.12.2021 22:31, Tom Lane wrote:\n> I'll push the close-only change in a little bit.\n\nUnexpectedly, this changes the error message:\n\n\tpostgres=# set idle_session_timeout = '1s';\n\tSET\n\tpostgres=# select 1;\n\tcould not receive data from server: Software caused connection abort \n(0x00002745/10053)\n\tThe connection to the server was lost. Succeeded.\n\tpostgres=#\n\nWithout shutdown/closesocket it would most likely be:\n\n\tserver closed the connection unexpectedly\n\t This probably means the server terminated abnormally\n\t before or while processing the request.\n\nWhen the timeout expires, the server sends the error message and \ngracefully closes the connection by sending a FIN. Later, psql sends \nanother query to the server, and the server responds with a RST. But \nnow recv() returns WSAECONNABORTED(10053) instead of WSAECONNRESET(10054).\n\nWithout shutdown/closesocket, after the timeout expires, the server \nsends the error message, the client sends an ACK, and the server \nresponds with a RST. 
Then psql tries to send the next query, but \nnothing is sent at the TCP level, and the next recv() returns WSAECONNRESET.\n\nIIUIC, in both cases we may or may not recv() the error message from the \nserver depending on how fast the RST arrives from the server.\n\nShould we handle ECONNABORTED similarly to ECONNRESET in pqsecure_raw_read?\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n", "msg_date": "Fri, 14 Jan 2022 13:01:39 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "On 14.01.2022 13:01, Sergey Shinderuk wrote:\n> When the timeout expires, the server sends the error message and \n> gracefully closes the connection by sending a FIN.  Later, psql sends \n> another query to the server, and the server responds with a RST.  But \n> now recv() returns WSAECONNABORTED(10053) instead of WSAECONNRESET(10054).\n\nOn the other hand, I cannot reproduce this behavior with a remote server \neven if I pause psql just before the recv() call to let the RST win the race.\n\nSo I get:\n\npostgres=# set idle_session_timeout = '1s';\nrecv() returned 15 errno 0\nSET\nrecv() returned -1 errno 10035 (WSAEWOULDBLOCK)\npostgres=# select 1;\nrecv() returned 116 errno 0\nrecv() returned 0 errno 0\nrecv() returned 0 errno 0\nFATAL:  terminating connection due to idle-session timeout\nserver closed the connection unexpectedly\n    This probably means the server terminated abnormally\n    before or while processing the request.\n\nrecv() signals EOF like on Unix.\n\nHere I connected from a Windows virtual machine to the macOS host, but \nthe Wireshark dump looks the same (there is a RST) as for a localhost \nconnection.\n\nIs this \"error-eating\" behavior of RST on Windows specific only to \nlocalhost connections?\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n", "msg_date": "Fri, 14 Jan 2022 15:15:05 +0300", "msg_from": "Sergey 
Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "On 14.01.2022 13:01, Sergey Shinderuk wrote:\n> Unexpectedly, this changes the error message:\n> \n>     postgres=# set idle_session_timeout = '1s';\n>     SET\n>     postgres=# select 1;\n>     could not receive data from server: Software caused connection \n> abort (0x00002745/10053)\n\nFor the record, after more poking I realized that it depends on timing. \n By injecting delays I can get any of the following from libpq:\n\n* could not receive data from server: Software caused connection abort\n* server closed the connection unexpectedly\n* no connection to the server\n\n\n> Should we handle ECONNABORTED similarly to ECONNRESET in pqsecure_raw_read?\n\nSo this doesn't make sense anymore.\n\nSorry for the noise.\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n", "msg_date": "Sat, 15 Jan 2022 02:15:25 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" }, { "msg_contents": "Sergey Shinderuk <s.shinderuk@postgrespro.ru> writes:\n> On 14.01.2022 13:01, Sergey Shinderuk wrote:\n>> Unexpectedly, this changes the error message:\n> ...\n> For the record, after more poking I realized that it depends on timing. \n> By injecting delays I can get any of the following from libpq:\n> * could not receive data from server: Software caused connection abort\n> * server closed the connection unexpectedly\n> * no connection to the server\n\nThanks for the follow-up. At the moment I'm not planning to do anything\npending the results of the other thread [1]. 
It seems likely though that\nwe'll end up reverting this explicit-close behavior in the back branches,\nas the other changes involved look too invasive for back-patching.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com\n\n\n", "msg_date": "Sat, 15 Jan 2022 13:58:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows: Wrong error message at connection termination" } ]
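The graceful-close semantics the thread above keeps contrasting with Windows' RST behaviour can be sketched in a few lines. This is a minimal illustrative stand-in (plain POSIX stream sockets via Python, not PostgreSQL's actual backend or libpq code, and the message text is invented for the demo): the "server" queues its final error message and half-closes with a FIN, and the "client" can still read the buffered message before seeing a clean EOF. Under the Windows behaviour discussed above, a closesocket()-time RST can instead discard that buffered message.

```python
import socket

def graceful_close_demo():
    # client/server stand in for libpq and the backend. On Unix, close()
    # after a half-close sends FIN, so data queued before the close survives.
    client, server = socket.socketpair()
    server.sendall(b"FATAL: terminating connection due to idle-session timeout")
    server.shutdown(socket.SHUT_WR)  # explicit half-close: FIN, not RST
    server.close()
    msg = client.recv(1024)          # the buffered error message is delivered
    eof = client.recv(1024)          # ...followed by a clean EOF (b"")
    client.close()
    return msg, eof
```

This is why the fixes discussed in the thread focus on avoiding the proactive RST on the server side rather than relying on the client to win the read-before-RST race.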
[ { "msg_contents": "I spent a lot of time trying to figure out why xlog.c has global\nvariables ReadRecPtr and EndRecPtr instead of just relying on the\neponymous structure members inside the XLogReaderState. I concluded\nthat the values are the same at most points in the code, and thus that\nwe could just use xlogreaderstate->{Read,End}RecPtr instead. There are\ntwo places where this wouldn't produce the same results we're getting\ntoday. Both of those appear to be bugs.\n\nThe reason why it's generally the case that ReadRecPtr ==\nxlogreaderstate->ReadRecPtr and likewise for EndRecPtr is that\nReadRecord() calls XLogReadRecord(), which sets the values inside the\nXLogReaderState, and then immediately assigns the values stored there\nto the global variables. There's no other code that changes the other\nglobal variables, and the only other code that changes the structure\nmembers is XLogBeginRead(). So the values can be unequal from the time\nXLogBeginRead() is called up until the time that the XLogReadRecord()\ncall inside ReadRecord() returns. In practice, StartupXLOG() always\ncalls ReadRecord() right after it calls XLogBeginRead(), and\nReadRecord() does not reference either global variable before calling\nXLogReadRecord(), so the problem surface is limited to code that runs\nunderneath XLogReadRecord(). XLogReadRecord() is part of xlogreader.c,\nbut it uses a callback interface: the callback is XLogPageRead(),\nwhich itself references EndRecPtr, and also calls\nWaitForWALToBecomeAvailable(), which in turn calls\nrescanLatestTimeLine(), which also references EndRecPtr. So these are\nthe two problem cases: XLogPageRead(), and rescanLatestTimeLine().\n\nIn rescanLatestTimeLine(), the problem is IMHO probably serious enough\nto justify a separate commit with back-patching. 
The problem is that\nEndRecPtr is being used here to reject impermissible attempts to\nswitch to a bad timeline, but if pg_wal starts out empty, EndRecPtr\nwill be 0 here, which causes the code to fail to detect a case that\nshould be prohibited. Consider the following test case:\n\n- create a primary\n- create standby #1 from the primary\n- start standby #1 and promote it\n- take a backup from the primary using -Xnone to create standby #2\n- clear primary_conninfo on standby #2 and then start it\n- copy 00000002.history from standby #1 to standby #2\n\nYou get:\n\n2021-11-17 15:34:26.213 EST [7474] LOG: selected new timeline ID: 2\n\nBut with the attached patch, you get:\n\n2021-11-17 16:12:01.566 EST [20900] LOG: new timeline 2 forked off\ncurrent database system timeline 1 before current recovery point\n0/A000060\n\nHad the problem occurred at some later point in the WAL stream rather\nthan before fetching the very first record, I think everything is\nfine; at that point, I think that the global variable EndRecPtr will\nbe initialized. I'm not entirely sure that it contains exactly the\nright value, but it's someplace in the right ballpark, at least.\n\nIn XLogPageRead(), the problem is just cosmetic. We're only using\nEndRecPtr as an argument to emode_for_corrupt_record(), which is all\nabout suppressing duplicate complaints about the same LSN. But if the\nxlogreader has been repositioned using XLogBeginRead() since the last\ncall to ReadRecord(), or if there are no preceding calls to\nReadRecord(), then the value of EndRecPtr here is left over from the\nprevious read position and is not particularly related to the record\nwe're reading now. xlogreader->EndRecPtr, OTOH, is. 
This doesn't seem\nworth a separate commit to me, or a back-patch, but it seems worth\nfixing while I'm cleaning up these global variables.\n\nReview appreciated.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 17 Nov 2021 17:01:36 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "xlog.c: removing ReadRecPtr and EndRecPtr" }, { "msg_contents": "On Thu, Nov 18, 2021 at 3:31 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> I spent a lot of time trying to figure out why xlog.c has global\n> variables ReadRecPtr and EndRecPtr instead of just relying on the\n> eponymous structure members inside the XLogReaderState. I concluded\n> that the values are the same at most points in the code, and thus that\n> we could just use xlogreaderstate->{Read,End}RecPtr instead. There are\n> two places where this wouldn't produce the same results we're getting\n> today. Both of those appear to be bugs.\n>\n> The reason why it's generally the case that ReadRecPtr ==\n> xlogreaderstate->ReadRecPtr and likewise for EndRecPtr is that\n> ReadRecord() calls XLogReadRecord(), which sets the values inside the\n> XLogReaderState, and then immediately assigns the values stored there\n> to the global variables. There's no other code that changes the other\n> global variables, and the only other code that changes the structure\n> members is XLogBeginRead(). So the values can be unequal from the time\n> XLogBeginRead() is called up until the time that the XLogReadRecord()\n> call inside ReadRecord() returns. In practice, StartupXLOG() always\n> calls ReadRecord() right after it calls XLogBeginRead(), and\n> ReadRecord() does not reference either global variable before calling\n> XLogReadRecord(), so the problem surface is limited to code that runs\n> underneath XLogReadRecord(). 
XLogReadRecord() is part of xlogreader.c,\n> but it uses a callback interface: the callback is XLogPageRead(),\n> which itself references EndRecPtr, and also calls\n> WaitForWALToBecomeAvailable(), which in turn calls\n> rescanLatestTimeLine(), which also references EndRecPtr. So these are\n> the two problem cases: XLogPageRead(), and rescanLatestTimeLine().\n>\n> In rescanLatestTimeLine(), the problem is IMHO probably serious enough\n> to justify a separate commit with back-patching. The problem is that\n> EndRecPtr is being used here to reject impermissible attempts to\n> switch to a bad timeline, but if pg_wal starts out empty, EndRecPtr\n> will be 0 here, which causes the code to fail to detect a case that\n> should be prohibited. Consider the following test case:\n>\n> - create a primary\n> - create standby #1 from the primary\n> - start standby #1 and promote it\n> - take a backup from the primary using -Xnone to create standby #2\n> - clear primary_conninfo on standby #2 and then start it\n> - copy 00000002.history from standby #1 to standby #2\n>\n> You get:\n>\n> 2021-11-17 15:34:26.213 EST [7474] LOG: selected new timeline ID: 2\n>\n> But with the attached patch, you get:\n>\n> 2021-11-17 16:12:01.566 EST [20900] LOG: new timeline 2 forked off\n> current database system timeline 1 before current recovery point\n> 0/A000060\n>\n\nSomehow with and without patch I am getting the same log.\n\n> Had the problem occurred at some later point in the WAL stream rather\n> than before fetching the very first record, I think everything is\n> fine; at that point, I think that the global variable EndRecPtr will\n> be initialized. I'm not entirely sure that it contains exactly the\n> right value, but it's someplace in the right ballpark, at least.\n>\n\nAgree, change seems pretty much reasonable.\n\n> In XLogPageRead(), the problem is just cosmetic. 
We're only using\n> EndRecPtr as an argument to emode_for_corrupt_record(), which is all\n> about suppressing duplicate complaints about the same LSN. But if the\n> xlogreader has been repositioned using XLogBeginRead() since the last\n> call to ReadRecord(), or if there are no preceding calls to\n> ReadRecord(), then the value of EndRecPtr here is left over from the\n> previous read position and is not particularly related to the record\n> we're reading now. xlogreader->EndRecPtr, OTOH, is. This doesn't seem\n> worth a separate commit to me, or a back-patch, but it seems worth\n> fixing while I'm cleaning up these global variables.\n>\nLGTM.\n\nRegards,\nAmul\n\n\n", "msg_date": "Thu, 18 Nov 2021 17:59:31 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xlog.c: removing ReadRecPtr and EndRecPtr" }, { "msg_contents": "On Thu, Nov 18, 2021 at 7:30 AM Amul Sul <sulamul@gmail.com> wrote:\n> Somehow with and without patch I am getting the same log.\n\nTry applying the attached 0001-dubious-test-cast.patch for you and see\nif that fails. It does for me. If so, then try applying\n0002-fix-the-bug.patch and see if that makes it pass.\n\nUnfortunately, this test case isn't remotely committable as-is, and I\ndon't know how to make it so. The main problem is that, although you\ncan start up a server with nothing in pg_wal, no restore_command, and\nno archive command, pg_ctl will not believe that it has started. I\nworked around that problem by telling pg_ctl to ignore failures, but\nit still waits for a long time before timing out, which sucks both\nbecause (1) hackers are impatient and (2) some hackers run extremely\nslow buildfarm machines where almost any timeout won't be long enough.\nThere's a second place where the patch needs to wait for something\nalso, and that one I've crudely kludged with sleep(10). 
If anybody\naround here who is good at figuring out how to write clever TAP tests\ncan tell me how to fix this test to be non-stupid, I will happily do\nso.\n\nOtherwise, I think I will just need to commit and back-patch the\nactual bug fix without a test, and then rebase the rest of the patch I\nposted previously over top of those changes for master only.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 18 Nov 2021 14:13:18 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: xlog.c: removing ReadRecPtr and EndRecPtr" }, { "msg_contents": "On 2021-Nov-18, Robert Haas wrote:\n\n> Unfortunately, this test case isn't remotely committable as-is, and I\n> don't know how to make it so. The main problem is that, although you\n> can start up a server with nothing in pg_wal, no restore_command, and\n> no archive command, pg_ctl will not believe that it has started.\n\nWould it work to start postmaster directly instad of using pg_ctl, and\nthen rely on (say) pg_isready?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"El que vive para el futuro es un iluso, y el que vive para el pasado,\nun imbécil\" (Luis Adler, \"Los tripulantes de la noche\")\n\n\n", "msg_date": "Thu, 18 Nov 2021 16:21:09 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: xlog.c: removing ReadRecPtr and EndRecPtr" }, { "msg_contents": "On Thu, Nov 18, 2021 at 2:21 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Would it work to start postmaster directly instad of using pg_ctl, and\n> then rely on (say) pg_isready?\n\nI *think* that pg_isready would also fail, because the documentation\nsays \"pg_isready returns 0 to the shell if the server is accepting\nconnections normally, 1 if the server is rejecting connections (for\nexample during startup) ...\" and I believe the \"during startup\" case\nwould apply here.\n\nStarting postmaster directly is 
a thought. Is there any existing\nprecedent for that approach?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Nov 2021 14:33:38 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: xlog.c: removing ReadRecPtr and EndRecPtr" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> There's a second place where the patch needs to wait for something\n> also, and that one I've crudely kludged with sleep(10). If anybody\n> around here who is good at figuring out how to write clever TAP tests\n> can tell me how to fix this test to be non-stupid, I will happily do\n> so.\n\nAs far as that goes, if you conceptualize it as \"wait for this text\nto appear in the log file\", there's prior art in existing TAP tests.\nBasically, sleep for some reasonable short period and check the\nlog file; if not there, repeat until timeout.\n\nI'm a little dubious that this test case is valuable enough to\nmess around with a nonstandard postmaster startup protocol, though.\nThe main reason I dislike that idea is that any fixes we apply to\nthe TAP tests' normal postmaster-startup code would almost inevitably\nmiss fixing this test. IIRC there have been security-related fixes in\nthat area (e.g. where do we put the postmaster's socket), so I find\nthat prospect pretty scary.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Nov 2021 15:14:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: xlog.c: removing ReadRecPtr and EndRecPtr" }, { "msg_contents": "On Thu, Nov 18, 2021 at 3:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > There's a second place where the patch needs to wait for something\n> > also, and that one I've crudely kludged with sleep(10). 
If anybody\n> > around here who is good at figuring out how to write clever TAP tests\n> > can tell me how to fix this test to be non-stupid, I will happily do\n> > so.\n>\n> As far as that goes, if you conceptualize it as \"wait for this text\n> to appear in the log file\", there's prior art in existing TAP tests.\n> Basically, sleep for some reasonable short period and check the\n> log file; if not there, repeat until timeout.\n\nYeah, there's something like that in the form of find_in_log in\n019_replslot_init.pl. I thought about copying that, but that didn't\nseem great, and I also thought about trying to move into a common\nmodule, which seems maybe better but also more work, and thus not\nworth doing unless we have agreement that it's what we should do.\n\n> I'm a little dubious that this test case is valuable enough to\n> mess around with a nonstandard postmaster startup protocol, though.\n> The main reason I dislike that idea is that any fixes we apply to\n> the TAP tests' normal postmaster-startup code would almost inevitably\n> miss fixing this test. IIRC there have been security-related fixes in\n> that area (e.g. where do we put the postmaster's socket), so I find\n> that prospect pretty scary.\n\nThe problem that I have with the present situation is that the test\ncoverage of xlog.c is pretty abysmal. It actually doesn't look bad if\nyou just run a coverage report, but there are a shazillion flag\nvariables in that file and bugs like this make it quite clear that we\ndon't come close to testing all the possible combinations. It's really\nborderline unmaintainable. I don't know whether there's a specific\nindividual who wrote most of this code and didn't get the memo that\nglobal variables are best avoided, or whether this is sort of case\nwhere we started over 1 or 2 and then it just gradually ballooned into\nthe giant mess that it now is, but the present situation is pretty\noutrageous. 
It's taking me weeks of time to figure out how to make\nchanges that would normally take days, or maybe hours. We clearly need\nto try both to get the number of cases under control by eliminating\nstupid variables that are almost but not quite the same as something\nelse, and also get proper test coverage for the things that remain so\nthat it's possible to modify the code without excessive risk of\nshooting ourselves in the foot.\n\nThat said, I'm not wedded to this particular test case, either. It's\nan extremely specific bug that is unlikely to reappear once squashed,\nand making the test case robust enough to avoid having the buildfarm\ncomplain seems fraught.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Nov 2021 16:12:15 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: xlog.c: removing ReadRecPtr and EndRecPtr" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Nov 18, 2021 at 3:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm a little dubious that this test case is valuable enough to\n>> mess around with a nonstandard postmaster startup protocol, though.\n\n> The problem that I have with the present situation is that the test\n> coverage of xlog.c is pretty abysmal.\n\nAgreed, but this one test case isn't going to move the needle much.\nTo get to reasonable coverage we're going to need more tests, and\nI definitely don't want multiple versions of ad-hoc postmaster startup\ncode. If we need that, it'd be smarter to extend Cluster.pm so that\nthe mainline code could do what's needful.\n\nHaving said that, it wasn't entirely clear to me why you needed a\nweird startup method. Why couldn't you drop the bogus history file\ninto place *before* starting the charlie postmaster? 
If you have\nto do it after, aren't there race/timing problems anyway?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Nov 2021 16:49:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: xlog.c: removing ReadRecPtr and EndRecPtr" }, { "msg_contents": "On Thu, Nov 18, 2021 at 4:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Thu, Nov 18, 2021 at 3:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I'm a little dubious that this test case is valuable enough to\n> >> mess around with a nonstandard postmaster startup protocol, though.\n>\n> > The problem that I have with the present situation is that the test\n> > coverage of xlog.c is pretty abysmal.\n>\n> Agreed, but this one test case isn't going to move the needle much.\n> To get to reasonable coverage we're going to need more tests, and\n> I definitely don't want multiple versions of ad-hoc postmaster startup\n> code. If we need that, it'd be smarter to extend Cluster.pm so that\n> the mainline code could do what's needful.\n\nPerhaps so. I don't have a clear view on what a full set of good tests\nwould look like, so it's hard for me to guess which needs are general\nand which are not.\n\n> Having said that, it wasn't entirely clear to me why you needed a\n> weird startup method. Why couldn't you drop the bogus history file\n> into place *before* starting the charlie postmaster? If you have\n> to do it after, aren't there race/timing problems anyway?\n\nWell, I need rescanLatestTimeLine() to be called. I'm not sure that\nwill happen if the file is there originally -- that sounds like more\nof a scan than a rescan, but I haven't poked at that angle. 
I also\nthink it doesn't matter whether the file is dropped in or whether it\nis restored via restore_command, so having the server restore the file\nrather than discover that it has appeared might be another and more\nsatisfying option, but I also have not tested whether that reproduces\nthe issue. This has been extremely time-consuming to hunt down.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 19 Nov 2021 10:24:57 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: xlog.c: removing ReadRecPtr and EndRecPtr" }, { "msg_contents": "On 2021-Nov-18, Robert Haas wrote:\n\n> On Thu, Nov 18, 2021 at 2:21 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > Would it work to start postmaster directly instead of using pg_ctl, and\n> > then rely on (say) pg_isready?\n> \n> I *think* that pg_isready would also fail, because the documentation\n> says \"pg_isready returns 0 to the shell if the server is accepting\n> connections normally, 1 if the server is rejecting connections (for\n> example during startup) ...\" and I believe the \"during startup\" case\n> would apply here.\n\nHmm, right ... I suppose there are other ways to check, but I'm not sure\nthat the value of adding this particular test is large enough to justify\nsuch hacks.\n\nI think one possibly useful technique might be Alexander Korotkov's stop\nevents[1], except that it is designed around having working SQL access\nto the server in order to control it. You'd need some frosting on top\nin order to control the startup sequence without SQL access.\n\n> Starting postmaster directly is a thought. 
Is there any existing\n> precedent for that approach?\n\nNot as far as I can see.\n\n[1] https://postgr.es/m/CAPpHfdtSEOHX8dSk9Qp+Z++i4BGQoffKip6JDWngEA+g7Z-XmQ@mail.gmail.com\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Entristecido, Wutra (canción de Las Barreras)\necha a Freyr a rodar\ny a nosotros al mar\"\n\n\n", "msg_date": "Fri, 19 Nov 2021 16:59:12 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: xlog.c: removing ReadRecPtr and EndRecPtr" }, { "msg_contents": "On Fri, Nov 19, 2021 at 12:43 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Nov 18, 2021 at 7:30 AM Amul Sul <sulamul@gmail.com> wrote:\n> > Somehow with and without patch I am getting the same log.\n>\n> Try applying the attached 0001-dubious-test-cast.patch for you and see\n> if that fails. It does for me. If so, then try applying\n> 0002-fix-the-bug.patch and see if that makes it pass.\n>\n\nThanks, I can see the reported behavior -- 0001 alone fails & 0002\ncorrects that.\n\n> Unfortunately, this test case isn't remotely committable as-is, and I\n> don't know how to make it so. The main problem is that, although you\n> can start up a server with nothing in pg_wal, no restore_command, and\n> no archive command, pg_ctl will not believe that it has started. I\n> worked around that problem by telling pg_ctl to ignore failures, but\n> it still waits for a long time before timing out, which sucks both\n> because (1) hackers are impatient and (2) some hackers run extremely\n> slow buildfarm machines where almost any timeout won't be long enough.\n\nYeah :(\n\n> There's a second place where the patch needs to wait for something\n> also, and that one I've crudely kludged with sleep(10). 
If anybody\n> around here who is good at figuring out how to write clever TAP tests\n> can tell me how to fix this test to be non-stupid, I will happily do\n> so.\n>\n> Otherwise, I think I will just need to commit and back-patch the\n> actual bug fix without a test, and then rebase the rest of the patch I\n> posted previously over top of those changes for master only.\n>\n\n+1.\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 22 Nov 2021 11:32:10 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xlog.c: removing ReadRecPtr and EndRecPtr" }, { "msg_contents": "On Wed, Nov 17, 2021 at 5:01 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> In rescanLatestTimeLine(), the problem is IMHO probably serious enough\n> to justify a separate commit with back-patching.\n\nOn second thought, I think it's better not to back-patch this fix. It\nturns out that, while it's easy enough to back-patch the proposed\npatch, it doesn't make the test case pass in pre-v14 versions. The\ntest case itself requires some changes to work at all because of the\nperl module renaming, but that's not a big deal. The issue is that, in\nthe back-branches, when starting up the server without any local WAL,\nrescanLatestTimeLine() is checking with not only the wrong LSN but\nalso with the wrong TLI. That got fixed in master by commit\n4a92a1c3d1c361ffb031ed05bf65b801241d7cdd even though, rather\nunfortunately, the commit message does not say so. So to back-patch\nthis commit we would need to also back-patch much of that commit. But\nthat commit depends on the other commits that reduced use of\nThisTimeLineID. Untangling that seems like an undue amount of work and\nrisk for a corner-case bug fix that was discovered in the lab rather\nthan in the field and which won't matter anyway if you do things\ncorrectly. 
So now I'm intending to commit just to master only.\n\nAttached please find the test case not for commit as\nv2-0001-dubious-test-case.patch; the fix, for commit and now with a\nproper commit message as\nv2-0002-Fix-corner-case-failure-to-detect-improper-timeli.patch; and\nthe back-ported test case for v14 as v14-0001-dubious-test-case.patch\nin case anyone wants to play around with that. (with apologies for\nusing the idea of a version number in two different and conflicting\nsenses)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 23 Nov 2021 13:36:56 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: xlog.c: removing ReadRecPtr and EndRecPtr" }, { "msg_contents": "On Tue, Nov 23, 2021 at 1:36 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Attached please find the test case not for commit as\n> v2-0001-dubious-test-case.patch; the fix, for commit and now with a\n> proper commit message as\n> v2-0002-Fix-corner-case-failure-to-detect-improper-timeli.patch; and\n> the back-ported test case for v14 as v14-0001-dubious-test-case.patch\n> in case anyone wants to play around with that. (with apologies for\n> using the idea of a version number in two different and conflicting\n> senses)\n\nOK, I have committed this patch, rebased the original combined patch\nover top of that, and committed that too. By my count that means I've\nnow removed a total of 3 global variables from this file and Amul got\nrid of one as well so that makes 4 in total. If we continue at this\nbrisk pace the code may become understandable and maintainable\nsometime prior to the heat death of the universe, which I think would\nbe rather nice.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Nov 2021 11:37:28 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: xlog.c: removing ReadRecPtr and EndRecPtr" } ]
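(Editor's aside between threads: the wait-for-text-in-log technique Tom Lane describes in the thread above -- sleep for a short period, check the log file, repeat until timeout -- can be sketched as below. This is a hedged Python stand-in for the Perl TAP helper; the function name `wait_for_log` and its parameters are illustrative inventions, not PostgreSQL's actual `find_in_log`.)

```python
import time


def wait_for_log(path, needle, timeout=30.0, interval=0.1):
    """Poll a log file until `needle` appears in it, or raise on timeout.

    Re-reads the whole file on each poll, which is fine for the small
    logs a TAP test produces.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with open(path, "r", errors="replace") as f:
                if needle in f.read():
                    return True
        except FileNotFoundError:
            pass  # the server may not have created the log file yet
        time.sleep(interval)
    raise TimeoutError(f"{needle!r} did not appear in {path} within {timeout}s")
```

A test would call this right after triggering the action it wants to observe; the timeout has to be generous enough for slow buildfarm machines, which is exactly the tension discussed in the thread.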
[ { "msg_contents": "Commit 2fd8685e7f simplified the checking of modified attributes that\ntakes place within heap_update(). This included a micro-optimization\nthat affects pages marked PageIsFull(): when the target page is marked\nwith PD_PAGE_FULL (which must have been set by a previous heap_update\ncall), don't even try to use HOT -- assume that we have no chance in\norder to save a few cycles on determining HOT safety.\n\nI doubt that this micro-optimization actually pays for itself, though.\nPlus heap_update() is very complicated; do we really need to keep this\nspecial case? The benefit is that we avoid work that we do ~99% of the\ntime anyway.\n\nAttached patch removes the micro-optimization entirely. This isn't\njust refactoring work -- at least to me. I'm also concerned that this\ntest unnecessarily prevents HOT updates with certain workloads.\n\nThere is a reasonable chance that the last updater couldn't fit a new\nversion on the same heap page (and so marked the page PD_PAGE_FULL) at\na point where the page didn't quite have enough free space for *their*\nnew tuple, while *almost* having enough space. And so it's worth being\nopen to the possibility that our own heap_update() call has a smaller\ntuple than the first updater, perhaps only by chance (or perhaps\nbecause the original updater couldn't use HOT specifically because\ntheir new tuple was unusually large). Not all tables naturally have\nequisized rows, obviously. And even when they do we should still\nexpect TOAST compression to create variation in the size of physical\nheap tuples (some toastable attributes have more entropy than others,\nmaking them less effective targets for compression, etc).\n\nIt's not just variability in the size of heap tuples. Comments\ndescribing the micro-optimization claim that there is no chance of\ncleanup happening concurrently, so that can't be a factor. But that's\nreally not true anymore. 
While it is still true that heap_update holds\na pin on the original page, blocking concurrent pruning (e.g., while\nit waits for a tuple heavyweight lock), that in itself doesn't mean\nthat nobody else can free up space when heap_update() drops its buffer\nlock -- things have changed. Commit 8523492d4e taught VACUUM to set\nLP_DEAD line pointers to LP_UNUSED, while only holding an exclusive\nlock (not a super-exclusive/cleanup lock) on the target heap\npage/buffer. That's enough to allow concurrent processing by VACUUM to\ngo ahead (excluding pruning). And so PageGetHeapFreeSpace() can go\nfrom indicating that the page has 0 space to more than enough space,\ndue only to concurrent activity by VACUUM (a pin won't prevent that\nanymore). This is not especially unlikely with a small table.\n\nI think that it's possible that Tom would have found it easier to\ndebug an issue that led to a PANIC inside heap_update() earlier this\nyear (see commit 34f581c39e). That bug was judged to be an old bug in\nheap_update(), but we only started to see PANICs when the\naforementioned enhancement to VACUUM went in.\n\n-- \nPeter Geoghegan", "msg_date": "Wed, 17 Nov 2021 19:18:45 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Why not try for a HOT update, even when PageIsFull()?" }, { "msg_contents": "On 2021-Nov-17, Peter Geoghegan wrote:\n\n> Commit 2fd8685e7f simplified the checking of modified attributes that\n> takes place within heap_update(). 
This included a micro-optimization\n> that affects pages marked PageIsFull(): when the target page is marked\n> with PD_PAGE_FULL (which must have been set by a previous heap_update\n> call), don't even try to use HOT -- assume that we have no chance in\n> order to save a few cycles on determining HOT safety.\n\nHmm, I don't have any memory of introducing this; and if you look at the\nthread, you'll notice that it got there between the first patch I posted\nand the second one, without any mention of the reason. I probably got\nthat code from the WARM patch series at some point, thinking that it was\nan obvious optimization; but I'm fairly certain that we didn't run any\ntailored micro-benchmark to justify it. Pavan may have something to say\nabout it, so I CC him.\n\nI certainly do not object to removing it.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Find a bug in a program, and fix it, and the program will work today.\nShow the program how to find and fix a bug, and the program\nwill work forever\" (Oliver Silfridge)\n\n\n", "msg_date": "Fri, 19 Nov 2021 16:50:54 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Why not try for a HOT update, even when PageIsFull()?" }, { "msg_contents": "On Fri, Nov 19, 2021 at 11:51 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Hmm, I don't have any memory of introducing this; and if you look at the\n> thread, you'll notice that it got there between the first patch I posted\n> and the second one, without any mention of the reason. I probably got\n> that code from the WARM patch series at some point, thinking that it was\n> an obvious optimization; but I'm fairly certain that we didn't run any\n> tailored micro-benchmark to justify it.\n\nI suspected that it was something like that. I agree that it's\nunlikely that we'll be able to do another HOT update for as long as\nthe page has PD_PAGE_FULL set. 
But that's not saying much; it's also\nunlikely that heap_update will find that PD_PAGE_FULL is set to begin\nwith. And, the chances of successfully applying HOT again are workload\ndependent.\n\n> I certainly do not object to removing it.\n\nI'd like to do so soon. I'll wait a few more days, in case Pavan objects.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 21 Nov 2021 16:29:11 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Why not try for a HOT update, even when PageIsFull()?" }, { "msg_contents": "On Sun, Nov 21, 2021 at 4:29 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Fri, Nov 19, 2021 at 11:51 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > I certainly do not object to removing it.\n>\n> I'd like to do so soon. I'll wait a few more days, in case Pavan objects.\n\nPushed just now.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 26 Nov 2021 10:59:25 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Why not try for a HOT update, even when PageIsFull()?" } ]
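(Editor's aside between threads: the argument above -- that a full-page hint set by one oversized update can wrongly veto later, smaller updates that would still fit -- can be illustrated with a toy model. This is not PostgreSQL code; the page size accounting, byte counts, and the `trust_hint` switch are invented for the sketch, with the hint-checking branch standing in for the removed PD_PAGE_FULL micro-optimization.)

```python
PAGE_SIZE = 8192  # invented page size for the toy model


class Page:
    def __init__(self):
        self.used = 0
        self.full_hint = False  # toy analogue of the PD_PAGE_FULL flag

    def free_space(self):
        return PAGE_SIZE - self.used

    def try_hot_update(self, tuple_len, trust_hint=True):
        """Return True if the new tuple version fits on the same page."""
        if trust_hint and self.full_hint:
            return False  # the micro-optimization: don't even check
        if tuple_len <= self.free_space():
            self.used += tuple_len
            return True
        self.full_hint = True  # a failed fit marks the page as full
        return False


page = Page()
page.used = PAGE_SIZE - 200          # 200 bytes of free space left
assert not page.try_hot_update(500)  # a large tuple fails and sets the hint
# A smaller tuple would fit, but trusting the hint vetoes it:
assert not page.try_hot_update(100, trust_hint=True)
# Ignoring the hint (roughly the behavior after removal), it succeeds:
assert page.try_hot_update(100, trust_hint=False)
```

The model also ignores that, as the thread notes, concurrent VACUUM can now free space on the page between checks, which further weakens the hint.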
[ { "msg_contents": "Hi hackers,\n\nThere was some interest in implementing ASOF joins in Postgres, see\ne.g. this prototype patch by Konstantin Knizhnik:\nhttps://www.postgresql.org/message-id/flat/bc494762-26bd-b100-e1f9-a97901ddad57%40postgrespro.ru\nI'd like to discuss the possible ways of implementation, if there is\nstill any interest in that.\n\n\nIntroduction\n\nASOF join is often used to work with time series data such as stock\nquotes or IoT sensors. It is an interpolation where we want to relate\ntwo different time series measured at different points in time. For\neach value of the first time series, we take the most recent value of\nthe second.\n\nBesides an inequality condition on timestamp, such join can also have\nequality conditions on other columns. For example, this query joins\ntwo tables that contain bids and asks, finding the most recent ask\nfor each bid for a given financial instrument:\n\n```sql\nSELECT bids.ts timebid, bid, ask\nFROM bids\nASOF JOIN asks ON bids.instrument = asks.instrument\nAND asks.ts <= bids.ts;\n```\n\nSemantically, this is equivalent to the following correlated subquery:\n```sql\nSELECT bids.ts timebid, bid, ask\nFROM bids,\n LATERAL (select * from asks\n WHERE asks.instrument = bids.instrument AND asks.ts <= bids.ts\n ORDER BY ts DESC LIMIT 1) t;\n```\nThis form is useful to think about which optimizations we can perform\nwith an ASOF join, how it behaves with respect to other joins, and so\non.\n\nQuestDB has some good docs on this with more examples:\nhttps://questdb.io/docs/reference/sql/join/#asof-join\n\n\nWhat Conditions Work with ASOF Join\n\nConditions for an ASOF join consist of one inequality condition (>=\netc), and optionally a number of equality conditions. 
All these\nconditions must be \"mergejoinable\" in PG terms -- they must belong to\na btree operator family, which means there is a sorting operator that\ncorresponds to the condition, which means we can perform a merge join.\nThey also must support hashing because we'll probably need both\nsorting and hashing for implementation (see below). This holds for the\nusual data types like numeric. It is natural to think of the\ninequality column as \"time\", but technically it can be any column,\neven a string one, w/o changing the algorithm.\n\n\nJoin variants\n\nThe purpose of ASOF join is interpolation of one time series to match\nanother, so it is natural to think of it as an INNER join. The outer\nvariants might be less useful. Technically, it is easy to extend it to\nLEFT ASOF JOIN, where we would output nulls for the right hand columns\nif we haven’t yet seen a match. RIGHT and FULL variants also make\nsense, but the implementation may be impossible, depending on the\nalgorithm -- merge and hash joins can support these variants, but the\nnested loop cannot.\n\n\nUse in Combination with Normal Joins\n\nThe difference of ASOF join from normal join is that for the\ninequality condition, it does not output all the rows that match it,\nbut only the most recent one. We can think of it as first performing a\nnormal join and then applying a filter that selects the latest right\nhand row. Which row is the \"latest\" depends on the entire set of rows\nthat match the join conditions (same as with LIMIT). This means that\nthe result of ASOF join may depend on the place in the join tree where\nit is evaluated, because other joins may remove some rows. Similar to\nouter joins, we must respect the user-specified join order for an ASOF\njoin. It is useful to think about pushing another join below an ASOF\njoin as pushing a join below a correlated subquery with LIMIT (see\nabove). 
This transformation might be correct in some cases, so we\nmight later think about adding some optimization for join order for\nASOF join.\n\n\nProposed Syntax\n\nASOF join is semantically distinct from a normal join on the same\nconditions, so it requires separate grammar. ASOF modifier + listing\nall the conditions in the ON section, looks like a good baseline:\n`bids ASOF JOIN asks ON asks.timestamp <= bids.timestamp AND\nasks.instrument = bids.instrument`\n\n\nAlgorithms\n\nLet's see which algorithm we can use to perform an ASOF join if we\nhave a \"<=\" condition on timestamp and several \"=\" conditions on other\ncolumns (equi-columns).\n\n1. Hash on Equi-keys\n\nThis is what ClickHouse uses. It builds a hash table on equi columns,\nthen for each equi-key builds an array of timestamps, sorted on\ndemand. This requires bringing the entire right hand table into\nmemory, so not feasible for large tables.\n\n\n2. Merge Join on (equi-keys, timestamp) Sorting\n\nThis is a natural extension of the merge join algorithm, but instead\nof returning all keys for the timestamp column, it returns only the\nlatest one. A drawback of this algorithm is that the data must be\nsorted on timestamp last, so we can't reuse the natural ordering of\nthe time series data encoded by a (timestamp) index. We will have to\nsort both tables entirely in different order, which is prohibitively\ncostly for large tables. Another way is to create an index on\n(equi-keys, timestamp). This would allow us to perform a merge ASOF\njoin in linear time, but has several drawbacks. First, it requires\nmaintaining an additional index which costs space and time (the\n(timestamp) index we have to have anyway). 
Second, the time series\ndata is naturally ordered on timestamp, so even w/o CLUSTER, the\nlocality in time translates somewhat into the locality in page space.\nReading the table in (equi-keys, timestamp) order would require\nessentially random access with frequent switching between chunks, in\ncontrast to reading in (timestamp) order which reads from a single\nchunk. So this algorithm is probably going to be less performant than\nthe one using (timestamp) sorting, described next. The good part of\nthis algorithm is that with a dedicated (equi-keys, timestamp) index,\nit requires constant memory, so it still can be useful in case of high\ncardinality of equi-keys.\n\n\n3. Merge-Hash on (timestamp) Sorting\n\nIf we sort first on timestamp, we can reuse the natural order of\ntime-series data, often encoded by the index on (timestamp). This\napproach would allow us to process data in streaming fashion, w/o\nsorting everything again, which makes it feasible for really large\ntables. Let's see what algorithm we can use to perform an ASOF join in\nthis case. Suppose we have left and right input stream sorted on\n(timestamp). We will need to use an additional data structure -- a\nhash table indexed by the equi keys. The algorithm is as follows:\n\na. For a given left row, advance the right table until right timestamp\n> left timestamp.\n\nb. While we advance the right table, put each right hand row into the\nhash table indexed by the equi keys. Overwrite the previous row with\nthe same keys, if there was any.\n\nc. We have finished advancing the right table. The hash table now\ncontains the most recent right hand row for every value of equi-keys.\nMost recent because the right hand table is sorted by (timestamp).\n\nd. For the left row, look up a right row that matches it by the equi\nkeys in the hash table. 
This is the right hand row that matches the\nASOF join conditions (equi-keys are equal, left timestamp >= right\ntimestamp, right timestamp is maximal for the given equi-keys). Output\nthe result.\n\ne. Go to the next left row. The left table is also sorted on\n(timestamp), so we won't need to rewind the right table, only to\nadvance it forward.\n\nGiven the sorted input paths, this algorithm is linear time in size of\nthe tables. A drawback of this algorithm is that it requires memory\nproportional to the cardinality of the equi-columns. A possible\noptimization is to split the equi-key hash table into hot and cold\nparts by LRU, and dump the cold part to disk. This would help if each\nequi-key only occurs for a small period of time.\n\n\n4. Nested Loop\n\nAn efficient nested loop plan has to have a fast right-side subplan,\nsuch as an index lookup. Unfortunately, there seems to be no way to\nefficiently perform a last-point lookup for given equi-keys, if we\nhave separate btree indexes on timestamp and equi-keys. The nested\nloop plan could work if we have a (timestamp, equi-keys) btree index.\n\n\nPrototype Implementation\n\nFor a prototype, I'd go with #3 \"merge-something with a hash table of\nmost recent rows for equi-keys\", because it works for big tables and\ncan reuse the physical data ordering.\n\n\nI'll be glad to hear your thoughts on this.\n\n\n--\nAlexander Kuzmenkov\nTimescale\n\n\n", "msg_date": "Thu, 18 Nov 2021 17:11:16 +0300", "msg_from": "Alexander Kuzmenkov <akuzmenkov@timescale.com>", "msg_from_op": true, "msg_subject": "[RFC] ASOF Join" }, { "msg_contents": "On Thu, Nov 18, 2021 at 05:11:16PM +0300, Alexander Kuzmenkov wrote:\n> Hi hackers,\n> \n> There was some interest in implementing ASOF joins in Postgres, see\n> e.g. 
this prototype patch by Konstantin Knizhnik:\n> https://www.postgresql.org/message-id/flat/bc494762-26bd-b100-e1f9-a97901ddad57%40postgrespro.ru\n> I'd like to discuss the possible ways of implementation, if there is\n> still any interest in that.\n> \n> \n> Introduction\n> \n> ASOF join is often used to work with time series data such as stock\n> quotes or IoT sensors. It is an interpolation where we want to relate\n> two different time series measured at different points in time. For\n> each value of the first time series, we take the most recent value of\n> the second.\n\nDISCLAIMER: I am both seeing this for the first time and I don't have a\ngood understanding of the PostgreSQL development practices.\n\n But at first glance the syntax looks like pure evil. I see\nthat this is somewhat common as of now (clickhouse, your refer-\nenced questdb, quasar db) -- but it's bad anyway. And not really\na standard.\n\n This introduces a new keyword to the ridiculous list of key-\nwords.\n The syntax loosely defines what preference it wants, by extract-\ning some vague set of ordering operators from the join condition.\n Also, only one asof value is allowed (there may be more of them\nsometimes).\n\n Perhaps, if we've got to some syntax -- then something like\n ORDER BY ... LIMIT \nin joins, just before the ON join_condition or USING() could be\nmuch better.\n This allows us to\n 1) Easily separate just conditions from the likelihood ones. No\ndeep analysis of a condition expression necessary. No need to\nhave another list of all the possible ranking functions and oper-\nators.\n 2) Have ordering preference for many ranking conditions. Like\nORDER BY secondary_field IS NOT NULL DESC, time_difference, reli-\nability\n 3) Have more than one row returned for a joined table.\n\n\n But anyways this looks like just syntactic sugar. LATERAL\nJOINS should logically work just fine. 
Any optimisation should\ndeal with the LATERAL syntax style anyway.\n\n\n> \n> Besides an inequality condition on timestamp, such join can also have\n> equality conditions on other columns. For example, this query joins\n> two tables that contain bids and asks, finding the most recent ask\n> for each bid for a given financial instrument:\n> \n> ```sql\n> SELECT bids.ts timebid, bid, ask\n> FROM bids\n> ASOF JOIN asks ON bids.instrument = asks.instrument\n> AND asks.ts <= bids.ts;\n> ```\n> \n> Semantically, this is equivalent to the following correlated subquery:\n> ```sql\n> SELECT bids.ts timebid, bid, ask\n> FROM bids,\n> LATERAL (select * from asks\n> WHERE asks.instrument = bids.instrument AND asks.ts <= bids.ts\n> ORDER BY ts DESC LIMIT 1) t;\n> ```\n> This form is useful to think about which optimizations we can perform\n> with an ASOF join, how it behaves with respect to other joins, and so\n> on.\n> \n> QuestDB has some good docs on this with more examples:\n> https://questdb.io/docs/reference/sql/join/#asof-join\n> \n> \n> What Conditions Work with ASOF Join\n> \n> Conditions for an ASOF join consist of one inequality condition (>=\n> etc), and optionally a number of equality conditions. All these\n> conditions must be \"mergejoinable\" in PG terms -- they must belong to\n> a btree operator family, which means there is a sorting operator that\n> corresponds to the condition, which means we can perform a merge join.\n> They also must support hashing because we'll probably need both\n> sorting and hashing for implementation (see below). This holds for the\n> usual data types like numeric. It is natural to think of the\n> inequality column as \"time\", but technically it can be any column,\n> even a string one, w/o changing the algorithm.\n> \n> \n> Join variants\n> \n> The purpose of ASOF join is interpolation of one time series to match\n> another, so it is natural to think of it as an INNER join. The outer\n> variants might be less useful. 
Technically, it is easy to extend it to\n> LEFT ASOF JOIN, where we would output nulls for the right hand columns\n> if we haven't yet seen a match. RIGHT and FULL variants also make\n> sense, but the implementation may be impossible, depending on the\n> algorithm -- merge and hash joins can support these variants, but the\n> nested loop cannot.\n> \n> \n> Use in Combination with Normal Joins\n> \n> The difference of ASOF join from normal join is that for the\n> inequality condition, it does not output all the rows that match it,\n> but only the most recent one. We can think of it as first performing a\n> normal join and then applying a filter that selects the latest right\n> hand row. Which row is the \"latest\" depends on the entire set of rows\n> that match the join conditions (same as with LIMIT). This means that\n> the result of ASOF join may depend on the place in the join tree where\n> it is evaluated, because other joins may remove some rows. Similar to\n> outer joins, we must respect the user-specified join order for an ASOF\n> join. It is useful to think about pushing another join below an ASOF\n> join as pushing a join below a correlated subquery with LIMIT (see\n> above). This transformation might be correct in some cases, so we\n> might later think about adding some optimization for join order for\n> ASOF join.\n> \n> \n> Proposed Syntax\n> \n> ASOF join is semantically distinct from a normal join on the same\n> conditions, so it requires separate grammar. ASOF modifier + listing\n> all the conditions in the ON section, looks like a good baseline:\n> `bids ASOF JOIN asks ON asks.timestamp <= bids.timestamp AND\n> asks.instrument = bids.instrument`\n> \n> \n> Algorithms\n> \n> Let's see which algorithm we can use to perform an ASOF join if we\n> have a \"<=\" condition on timestamp and several \"=\" conditions on other\n> columns (equi-columns).\n> \n> 1. Hash on Equi-keys\n> \n> This is what ClickHouse uses. 
It builds a hash table on equi columns,\n> then for each equi-key builds an array of timestamps, sorted on\n> demand. This requires bringing the entire right hand table into\n> memory, so not feasible for large tables.\n> \n> \n> 2. Merge Join on (equi-keys, timestamp) Sorting\n> \n> This is a natural extension of the merge join algorithm, but instead\n> of returning all keys for the timestamp column, it returns only the\n> latest one. A drawback of this algorithm is that the data must be\n> sorted on timestamp last, so we can't reuse the natural ordering of\n> the time series data encoded by a (timestamp) index. We will have to\n> sort both tables entirely in different order, which is prohibitively\n> costly for large tables. Another way is to create an index on\n> (equi-keys, timestamp). This would allow us to perform a merge ASOF\n> join in linear time, but has several drawbacks. First, it requires\n> maintaining an additional index which costs space and time (the\n> (timestamp) index we have to have anyway). Second, the time series\n> data is naturally ordered on timestamp, so even w/o CLUSTER, the\n> locality in time translates somewhat into the locality in page space.\n> Reading the table in (equi-keys, timestamp) order would require\n> essentially random access with frequent switching between chunks, in\n> contrast to reading in (timestamp) order which reads from a single\n> chunk. So this algorithm is probably going to be less performant than\n> the one using (timestamp) sorting, described next. The good part of\n> this algorithm is that with a dedicated (equi-keys, timestamp) index,\n> it requires constant memory, so it still can be useful in case of high\n> cardinality of equi-keys.\n> \n> \n> 3. Merge-Hash on (timestamp) Sorting\n> \n> If we sort first on timestamp, we can reuse the natural order of\n> time-series data, often encoded by the index on (timestamp). 
This\n> approach would allow us to process data in streaming fashion, w/o\n> sorting everything again, which makes it feasible for really large\n> tables. Let's see what algorithm we can use to perform an ASOF join in\n> this case. Suppose we have left and right input stream sorted on\n> (timestamp). We will need to use an additional data structure -- a\n> hash table indexed by the equi keys. The algorithm is as follows:\n> \n> a. For a given left row, advance the right table until right timestamp\n> > left timestamp.\n> \n> b. While we advance the right table, put each right hand row into the\n> hash table indexed by the equi keys. Overwrite the previous row with\n> the same keys, if there was any.\n> \n> c. We have finished advancing the right table. The hash table now\n> contains the most recent right hand row for every value of equi-keys.\n> Most recent because the right hand table is sorted by (timestamp).\n> \n> d. For the left row, look up a right row that matches it by the equi\n> keys in the hash table. This is the right hand row that matches the\n> ASOF join conditions (equi-keys are equal, left timestamp >= right\n> timestamp, right timestamp is maximal for the given equi-keys). Output\n> the result.\n> \n> e. Go to the next left row. The left table is also sorted on\n> (timestamp), so we won't need to rewind the right table, only to\n> advance it forward.\n> \n> Given the sorted input paths, this algorithm is linear time in size of\n> the tables. A drawback of this algorithm is that it requires memory\n> proportional to the cardinality of the equi-columns. A possible\n> optimization is to split the equi-key hash table into hot and cold\n> parts by LRU, and dump the cold part to disk. This would help if each\n> equi-key only occurs for a small period of time.\n> \n> \n> 4. Nested Loop\n> \n> An efficient nested loop plan has to have a fast right-side subplan,\n> such as an index lookup. 
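The a-e procedure quoted above can be sketched as follows (an editorial illustration only, not from the original mail: the `Row` type and function names are made up, and both inputs are assumed pre-sorted by timestamp, as the algorithm requires):

```python
from collections import namedtuple

# (equi-key, timestamp, payload) -- a stand-in for a table row
Row = namedtuple("Row", ["key", "ts", "val"])

def asof_merge_hash(left, right):
    """ASOF inner join of two streams sorted ascending by ts: for each
    left row, find the most recent right row (right.ts <= left.ts)
    sharing the same equi-key."""
    latest = {}                      # equi-key -> most recent right row seen
    right_iter = iter(right)
    r = next(right_iter, None)
    result = []
    for l in left:
        # (a)+(b): advance the right side while right.ts <= left.ts,
        # overwriting the per-key entry with the newer row
        while r is not None and r.ts <= l.ts:
            latest[r.key] = r
            r = next(right_iter, None)
        # (c)+(d): the hash table now holds the latest right row per key;
        # emit a pair only when a match exists (INNER join behaviour)
        m = latest.get(l.key)
        if m is not None:
            result.append((l, m))
        # (e): the next left row only ever moves the right side forward
    return result
```

Because neither stream is ever rewound, this runs in linear time over the two inputs, with memory bounded by the number of distinct equi-keys, matching the analysis in the mail.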
Unfortunately, there seems to be no way to\n> efficiently perform a last-point lookup for given equi-keys, if we\n> have separate btree indexes on timestamp and equi-keys. The nested\n> loop plan could work if we have a (timestamp, equi-keys) btree index.\n> \n> \n> Prototype Implementation\n> \n> For a prototype, I'd go with #3 \"merge-something with a hash table of\n> most recent rows for equi-keys\", because it works for big tables and\n> can reuse the physical data ordering.\n> \n> \n> I'll be glad to hear your thoughts on this.\n> \n> \n> --\n> Alexander Kuzmenkov\n> Timescale\n> \n\n\n", "msg_date": "Sun, 21 Nov 2021 07:53:59 +0300", "msg_from": "Ilya Anfimov <ilan@tzirechnoy.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] ASOF Join" }, { "msg_contents": ">But anyways this looks like just a syntactic sugar. LATERAL\n>JOINS should logically work just fine. Any optimisation should\n>deal with the LATERAL syntax style anyway.\n\nAgreed.\n\nHowever, if a rewrite is implemented, it then becomes encoded into\nPostgreSQL code what ASOF maps to. Anyone who wants to use ASOF can use it,\nand anyone who wants to directly use LATERAL can do so also.\n\nOn Sun, 21 Nov 2021 at 15:54, Ilya Anfimov <ilan@tzirechnoy.com> wrote:\n\n> On Thu, Nov 18, 2021 at 05:11:16PM +0300, Alexander Kuzmenkov wrote:\n> > Hi hackers,\n> >\n> > There was some interest in implementing ASOF joins in Postgres, see\n> > e.g. this prototype patch by Konstantin Knizhnik:\n> >\n> https://www.postgresql.org/message-id/flat/bc494762-26bd-b100-e1f9-a97901ddad57%40postgrespro.ru\n> > I't like to discuss the possible ways of implementation, if there is\n> > still any interest in that.\n> >\n> >\n> > Introduction\n> >\n> > ASOF join is often used to work with time series data such as stock\n> > quotes or IoT sensors. It is an interpolation where we want to relate\n> > two different time series measured at different points in time. 
For\n> > each value of the first time series, we take the most recent value of\n> > the second.\n>\n> DISCLAIMER: I am both seeing this first time and I don't have a\n> good understanding of the PosgreSQL development practices.\n>\n> But at a first glance the syntax looks like pure evil. I see\n> that this is somewhat common as of now (clickhouse, your refer-\n> enced questdb, quasar db) -- but it's bad anyway. And not really\n> a standard.\n>\n> This introduces a new keyword to the ridiculous list of key-\n> words.\n> The syntax loosely defines what preference it wants, by extract-\n> ing some vague set of ordering operators from the join condition.\n> Also, only one asof value is allowed (there may be more of it\n> sometimes).\n>\n> Perhaps, if we've got to some syntax -- than something like\n> ORDER BY ... LIMIT\n> in joins, just before the ON join_condition or USING() could be\n> much better.\n> This allows to\n> 1) Easily separate just conditions from the likehood ones. No\n> deep analysis of a condition expression necessary. No need to\n> have another list of all the possible ranking functions and oper-\n> ators.\n> 2) Have ordering preference for many ranking conditions. Like\n> ORDER BY secondary_field IS NOT NULL DESC, time_difference, reli-\n> ability\n> 3) Have more than one row returned for a joined table.\n>\n>\n> But anyways this looks like just a syntactic sugar. LATERAL\n> JOINS should logically work just fine. Any optimisation should\n> deal with the LATERAL syntax style anyway.\n>\n>\n> >\n> > Besides an inequality condition on timestamp, such join can also have\n> > equality conditions on other columns. 
For example, this query joins\n> > two tables that contain bids and asks, finding the most recent ask\n> > for each bid for a given financial instrument:\n> >\n> > ```sql\n> > SELECT bids.ts timebid, bid, ask\n> > FROM bids\n> > ASOF JOIN asks ON bids.instrument = asks.instrument\n> > AND asks.ts <= bids.ts;\n> > ```\n> >\n> > Semantically, this is equivalent to the following correlated subquery:\n> > ```sql\n> > SELECT bids.ts timebid, bid, ask\n> > FROM bids,\n> >     LATERAL (select * from asks\n> >         WHERE asks.instrument = bids.instrument AND asks.ts <= bids.ts\n> >         ORDER BY ts DESC LIMIT 1) t;\n> > ```\n> > This form is useful to think about which optimizations we can perform\n> > with an ASOF join, how it behaves with respect to other joins, and so\n> > on.\n> >\n> > QuestDB has some good docs on this with more examples:\n> > https://questdb.io/docs/reference/sql/join/#asof-join\n> >\n> >\n> > What Conditions Work with ASOF Join\n> >\n> > Conditions for an ASOF join consist of one inequality condition (>=\n> > etc), and optionally a number of equality conditions. All these\n> > conditions must be \"mergejoinable\" in PG terms -- they must belong to\n> > a btree operator family, which means there is a sorting operator that\n> > corresponds to the condition, which means we can perform a merge join.\n> > They also must support hashing because we'll probably need both\n> > sorting and hashing for implementation (see below). This holds for the\n> > usual data types like numeric. It is natural to think of the\n> > inequality column as \"time\", but technically it can be any column,\n> > even a string one, w/o changing the algorithm.\n> >\n> >\n> > Join variants\n> >\n> > The purpose of ASOF join is interpolation of one time series to match\n> > another, so it is natural to think of it as an INNER join. The outer\n> > variants might be less useful. 
Technically, it is easy to extend it to\n> > LEFT ASOF JOIN, where we would output nulls for the right hand columns\n> > if we haven't yet seen a match. RIGHT and FULL variants also make\n> > sense, but the implementation may be impossible, depending on the\n> > algorithm -- merge and hash joins can support these variants, but the\n> > nested loop cannot.\n> >\n> >\n> > Use in Combination with Normal Joins\n> >\n> > The difference of ASOF join from normal join is that for the\n> > inequality condition, it does not output all the rows that match it,\n> > but only the most recent one. We can think of it as first performing a\n> > normal join and then applying a filter that selects the latest right\n> > hand row. Which row is the \"latest\" depends on the entire set of rows\n> > that match the join conditions (same as with LIMIT). This means that\n> > the result of ASOF join may depend on the place in the join tree where\n> > it is evaluated, because other joins may remove some rows. Similar to\n> > outer joins, we must respect the user-specified join order for an ASOF\n> > join. It is useful to think about pushing another join below an ASOF\n> > join as pushing a join below a correlated subquery with LIMIT (see\n> > above). This transformation might be correct in some cases, so we\n> > might later think about adding some optimization for join order for\n> > ASOF join.\n> >\n> >\n> > Proposed Syntax\n> >\n> > ASOF join is semantically distinct from a normal join on the same\n> > conditions, so it requires separate grammar. ASOF modifier + listing\n> > all the conditions in the ON section, looks like a good baseline:\n> > `bids ASOF JOIN asks ON asks.timestamp <= bids.timestamp AND\n> > asks.instrument = bids.instrument`\n> >\n> >\n> > Algorithms\n> >\n> > Let's see which algorithm we can use to perform an ASOF join if we\n> > have a \"<=\" condition on timestamp and several \"=\" conditions on other\n> > columns (equi-columns).\n> >\n> > 1. 
Hash on Equi-keys\n> >\n> > This is what ClickHouse uses. It builds a hash table on equi columns,\n> > then for each equi-key builds an array of timestamps, sorted on\n> > demand. This requires bringing the entire right hand table into\n> > memory, so not feasible for large tables.\n> >\n> >\n> > 2. Merge Join on (equi-keys, timestamp) Sorting\n> >\n> > This is a natural extension of the merge join algorithm, but instead\n> > of returning all keys for the timestamp column, it returns only the\n> > latest one. A drawback of this algorithm is that the data must be\n> > sorted on timestamp last, so we can't reuse the natural ordering of\n> > the time series data encoded by a (timestamp) index. We will have to\n> > sort both tables entirely in different order, which is prohibitively\n> > costly for large tables. Another way is to create an index on\n> > (equi-keys, timestamp). This would allow us to perform a merge ASOF\n> > join in linear time, but has several drawbacks. First, it requires\n> > maintaining an additional index which costs space and time (the\n> > (timestamp) index we have to have anyway). Second, the time series\n> > data is naturally ordered on timestamp, so even w/o CLUSTER, the\n> > locality in time translates somewhat into the locality in page space.\n> > Reading the table in (equi-keys, timestamp) order would require\n> > essentially random access with frequent switching between chunks, in\n> > contrast to reading in (timestamp) order which reads from a single\n> > chunk. So this algorithm is probably going to be less performant than\n> > the one using (timestamp) sorting, described next. The good part of\n> > this algorithm is that with a dedicated (equi-keys, timestamp) index,\n> > it requires constant memory, so it still can be useful in case of high\n> > cardinality of equi-keys.\n> >\n> >\n> > 3. 
Merge-Hash on (timestamp) Sorting\n> >\n> > If we sort first on timestamp, we can reuse the natural order of\n> > time-series data, often encoded by the index on (timestamp). This\n> > approach would allow us to process data in streaming fashion, w/o\n> > sorting everything again, which makes it feasible for really large\n> > tables. Let's see what algorithm we can use to perform an ASOF join in\n> > this case. Suppose we have left and right input stream sorted on\n> > (timestamp). We will need to use an additional data structure -- a\n> > hash table indexed by the equi keys. The algorithm is as follows:\n> >\n> > a. For a given left row, advance the right table until right timestamp\n> > > left timestamp.\n> >\n> > b. While we advance the right table, put each right hand row into the\n> > hash table indexed by the equi keys. Overwrite the previous row with\n> > the same keys, if there was any.\n> >\n> > c. We have finished advancing the right table. The hash table now\n> > contains the most recent right hand row for every value of equi-keys.\n> > Most recent because the right hand table is sorted by (timestamp).\n> >\n> > d. For the left row, look up a right row that matches it by the equi\n> > keys in the hash table. This is the right hand row that matches the\n> > ASOF join conditions (equi-keys are equal, left timestamp >= right\n> > timestamp, right timestamp is maximal for the given equi-keys). Output\n> > the result.\n> >\n> > e. Go to the next left row. The left table is also sorted on\n> > (timestamp), so we won't need to rewind the right table, only to\n> > advance it forward.\n> >\n> > Given the sorted input paths, this algorithm is linear time in size of\n> > the tables. A drawback of this algorithm is that it requires memory\n> > proportional to the cardinality of the equi-columns. A possible\n> > optimization is to split the equi-key hash table into hot and cold\n> > parts by LRU, and dump the cold part to disk. 
This would help if each\n> > equi-key only occurs for a small period of time.\n> >\n> >\n> > 4. Nested Loop\n> >\n> > An efficient nested loop plan has to have a fast right-side subplan,\n> > such as an index lookup. Unfortunately, there seems to be no way to\n> > efficiently perform a last-point lookup for given equi-keys, if we\n> > have separate btree indexes on timestamp and equi-keys. The nested\n> > loop plan could work if we have a (timestamp, equi-keys) btree index.\n> >\n> >\n> > Prototype Implementation\n> >\n> > For a prototype, I'd go with #3 \"merge-something with a hash table of\n> > most recent rows for equi-keys\", because it works for big tables and\n> > can reuse the physical data ordering.\n> >\n> >\n> > I'll be glad to hear your thoughts on this.\n> >\n> >\n> > --\n> > Alexander Kuzmenkov\n> > Timescale\n> >\n>\n>\n>\n\n-- \n--\nTodd Hubers\n
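The ASOF-to-LATERAL rewrite equivalence discussed above can be made concrete with a naive reference model of the semantics (an editorial sketch, not from the original mail: dictionary-based rows and O(n*m) evaluation, purely to pin down what `ORDER BY ts DESC LIMIT 1` selects, not a proposed plan):

```python
def asof_join_reference(bids, asks):
    """Reference semantics of `bids ASOF JOIN asks ON asks.instrument =
    bids.instrument AND asks.ts <= bids.ts`: for each bid, the single
    ask that the LATERAL (... ORDER BY ts DESC LIMIT 1) rewrite picks."""
    result = []
    for bid in bids:
        # the WHERE clause of the correlated subquery
        matches = [a for a in asks
                   if a["instrument"] == bid["instrument"] and a["ts"] <= bid["ts"]]
        if matches:  # INNER join: bids with no earlier ask are dropped
            best = max(matches, key=lambda a: a["ts"])  # ORDER BY ts DESC LIMIT 1
            result.append((bid["ts"], bid["bid"], best["ask"]))
    return result
```

Any candidate ASOF implementation (hash, merge, or nested loop) should produce the same rows as this model, which is what makes the correlated-subquery form a useful yardstick for the optimizations discussed in the thread.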
", "msg_date": "Sun, 21 Nov 2021 17:13:45 +1100", "msg_from": "Todd Hubers <todd.hubers@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] ASOF Join" }, { "msg_contents": "On 21.11.2021 07:53, Ilya Anfimov wrote:\n> DISCLAIMER: I am both seeing this first time and I don't have a\n> good understanding of the PosgreSQL development practices.\n\n> pure evil\n> ridiculous\nNo worries, at least you got the etiquette just right.\n\n\nThere are two points in your mail that I'd like to discuss. First, the ASOF grammar being bad because it's implicit. I do agree on the general idea that explicit is better UX than implicit, especially when we're talking about SQL where you spend half the time battling the query planner already. However, in the grammar I proposed it's unambiguous which conditions are ASOF and which are not -- all inequalities are ASOF, all equalities are not, and there can be no other kinds of conditions for this type of join. It can also support any number of ASOF conditions. Which grammar exactly do you suggest? Maybe something like this:\n\nasks JOIN bids ON asks.instrument = bids.instrument ASOF asks.timestamp <= bids.timestamp\n\nThis still does require a keyword.\n\n\nSecond, you say that we must first optimize the corresponding LATERAL. I was thinking about this as well, but _that_ is what's not explicit. I'm not sure if this optimization would have any value outside of optimizing ASOF joins. 
We might give better UX if we embrace the fact that we're doing an ASOF join and allow the user to state this explicitly and get an efficient and predictable plan, or an error, instead of trying to guess this from the rewritten queries and silently falling back to an inefficient plan for cryptic reasons.\n\n\n--\nAlexander Kuzmenkov\nTimescale\n\n\n\n", "msg_date": "Mon, 22 Nov 2021 15:44:37 +0300", "msg_from": "Alexander Kuzmenkov <akuzmenkov@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [RFC] ASOF Join" }, { "msg_contents": "On Mon, Nov 22, 2021 at 03:44:37PM +0300, Alexander Kuzmenkov wrote:\n> On 21.11.2021 07:53, Ilya Anfimov wrote:\n> > DISCLAIMER: I am both seeing this first time and I don't have a\n> > good understanding of the PosgreSQL development practices.\n> \n> > pure evil\n> > ridiculous\n> No worries, at least you got the etiquette just right.\n> \n> \n> There are two points in your mail that I'd like to discuss.\n> First, the ASOF grammar being bad because it's implicit. I do\n> agree on the general idea that explicit is better UX than\n> implicit, especially when we're talking about SQL where you spend half\n> the time battling the query planner already. However, in the\n> grammar I proposed it's unambiguous which conditions are ASOF and\n> which are not -- all inequalities are ASOF, all equalities are\n\n I see at least two operators in postgres that implement ordering\nwhile they are not being <= ( ~<=~ -- for text compare\nbyte-by-byte, and *<= for internal record compare)\n and four cases that are literally <= , but don't implement\nordering -- box, lseg, path and circle are compared by length and\nfuzzy floating-point comparison.\n\n Are you sure an implementor and a programmer will easily decide\nwhat is just a boolean test, and what is an order?\n\n What's worse, preference of values doesn't have a lot in common\nwith filters you want on them. 
Let's get your example of a time\nmatching: another reasonable business case is to match the near-\nest time point in any direction, within a reasonable time limit.\n Like timea BETWEEN timeb - '1s' AND timeb + '1s' ,\n and to choose something like min(@(timea-timeb)) among them (*We\nstrangely don't have an absolute value operator on interval, but\nI think you've got the point*).\n\n> not, and there can be no other kinds of conditions for this type\n> of join. It can also support any number of ASOF conditions. Which\n> grammar exactly do you suggest? Maybe something like this:\n\n\n\n\n> \n> asks JOIN bids ON asks.instrument = bids.instrument ASOF asks.timestamp <= bids.timestamp\n\n I suggest JOIN bids ORDER BY asks.timestamp DESC LIMIT 1\n \t\t\t\tON asks.instrument = bids.instrument AND asks.timestamp <= bids.timestamp\n\n LIMIT 1 could also be implied.\n\n\n\n\n", "msg_date": "Tue, 23 Nov 2021 10:29:29 +0300", "msg_from": "Ilya Anfimov <ilan@tzirechnoy.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] ASOF Join" }, { "msg_contents": "On 11/23/21 02:29, Ilya Anfimov wrote:\n> (*We\n> strangely don't have an absolute value operator on interval, but\n> I think you've got the point*).\n\nAlthough tangential to the topic, that might be because a PG interval\nis a triple of independently-signed months/days/seconds components.\nAn interval like '1 month -31 days +12:00:00' is positive or negative\ndepending on the absolute date you apply it to, so what its absolute\nvalue should be isn't clear in isolation.\n\nThat's also why there's no everywhere-defined conversion from PG interval\nto XML xs:duration or to java.time.Duration, etc.</tangent>\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 23 Nov 2021 09:44:46 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] ASOF Join" }, { "msg_contents": "On Tue, 23 Nov 2021 at 09:44, Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 11/23/21 02:29, Ilya Anfimov 
wrote:\n> > (*We\n> > strangely don't have an absolute value operator on interval, but\n> > I think you've got the point*).\n>\n> Although tangential to the topic, that might be because a PG interval\n> is a triple of independently-signed months/days/seconds components.\n> An interval like '1 month -31 days +12:00:00' is positive or negative\n> depending on the absolute date you apply it to, so what its absolute\n> value should be isn't clear in isolation.\n>\n\nUmm, it's definitely negative:\n\nodyssey=> select '1 month -31 days +12:00:00'::interval < '0\nmonths'::interval;\n ?column?\n----------\n t\n(1 row)\n\nIt's just that due to the complexities of our calendar/time systems, adding\nit to a timestamp can move the timestamp in either direction:\n\nodyssey=> select '2021-02-01'::timestamp + '1 month -31 days\n+12:00:00'::interval;\n ?column?\n---------------------\n 2021-01-29 12:00:00\n(1 row)\n\nodyssey=> select '2021-03-01'::timestamp + '1 month -31 days\n+12:00:00'::interval;\n ?column?\n---------------------\n 2021-03-01 12:00:00\n(1 row)\n\nI'm working on a patch to add abs(interval) so I noticed this. 
There are\nlots of oddities, including lots of intervals which compare equal to 0 but\nwhich can change a timestamp when added to it, but as presently designed,\nthis particular interval compares as negative.", "msg_date": "Tue, 23 Nov 2021 10:41:01 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] ASOF Join" }, { "msg_contents": "On 11/23/21 10:41, Isaac Morland wrote:\n> Umm, it's definitely negative:\n> \n> odyssey=> select '1 month -31 days +12:00:00'::interval < '0\n> months'::interval;\n> ----------\n> t\n\nWell, what you've shown here is that it's \"negative\" according to\nan arbitrary total ordering imposed in interval_cmp_value for the purpose\nof making it indexable in a btree ...\n\n> It's just that due to the complexities of our calendar/time systems, adding\n> it to a timestamp can move the timestamp in either direction:\n\n... and this is just another way of saying that said arbitrary choice of\nbtree ordering can't be used to tell you whether the interval is\nsemantically positive or negative. (Of course, for a great many intervals,\nthe two answers will be the same, but they're still answers to different\nquestions.)\n\n> I'm working on a patch to add abs(interval) so I noticed this. There are\n> lots of oddities, including lots of intervals which compare equal to 0 but\n> which can change a timestamp when added to it, but as presently designed,\n> this particular interval compares as negative.\n\nIt's no use—it's oddities all the way down. You can shove them off to one\nside of the desk or the other depending on your intentions of the moment,\nbut they're always there. If you want to put intervals in a btree, you\ncan define a total ordering where all days are 24 hours and all months\nare 30 days, and then there are no oddities in your btree, they're just\neverywhere else. 
Or you can compare your unknown interval to a known\none like '0 months' and say you know whether it's \"negative\", you just\ndon't know whether it moves a real date forward or back. Or you can see\nwhat it does to a real date, but not know whether it would precede or\nfollow some other interval in a btree.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 23 Nov 2021 11:18:30 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: [RFC] ASOF Join" } ]
[ { "msg_contents": "I noticed that, a week after Michael pushed 9ff47ea41 to silence\n-Wcompound-token-split-by-macro warnings, buildfarm member sidewinder\nis still spewing them. Investigation shows that it's building with\n\nconfigure: using compiler=cc (nb4 20200810) 7.5.0\nconfigure: using CLANG=ccache clang\n\nand the system cc doesn't know -Wcompound-token-split-by-macro,\nso we don't use it, but the modules that are built into bytecode\nstill produce the warnings because they're built with clang.\n\nI think this idea of using clang with switches selected for some other\ncompiler is completely horrid, and needs to be nuked from orbit before\nit causes problems worse than mere warnings. Why did we not simply\ninsist that if you want to use --with-llvm, the selected compiler must\nbe clang? I cannot see any benefit of mix-and-match here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Nov 2021 11:56:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Mixing CC and a different CLANG seems like a bad idea" }, { "msg_contents": "On 2021-11-18 17:56, Tom Lane wrote:\n> I noticed that, a week after Michael pushed 9ff47ea41 to silence\n> -Wcompound-token-split-by-macro warnings, buildfarm member sidewinder\n> is still spewing them. 
Investigation shows that it's building with\n> \n> configure: using compiler=cc (nb4 20200810) 7.5.0\n> configure: using CLANG=ccache clang\n\n\nHm, actually it's:\n\nCC => \"ccache cc\",\nCXX => \"ccache c++\",\nCLANG => \"ccache clang\",\n\nwant me to change it to:\n\nCC => \"ccache clang\",\nCXX => \"ccache c++\",\nCLANG => \"ccache clang\",\n\n?\n\n/Mikael\n\n\n", "msg_date": "Thu, 18 Nov 2021 18:24:36 +0100", "msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>", "msg_from_op": false, "msg_subject": "Re: Mixing CC and a different CLANG seems like a bad idea" }, { "msg_contents": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n> Hm, actually it's:\n\n> CC => \"ccache cc\",\n> CXX => \"ccache c++\",\n> CLANG => \"ccache clang\",\n\nRight.\n\n> want me to change it to:\n\n> CC => \"ccache clang\",\n> CXX => \"ccache c++\",\n> CLANG => \"ccache clang\",\n\nWhat I actually think is we should get rid of the separate CLANG\nvariable. But don't do anything to the animal's configuration\ntill that's settled.\n\nBTW, that would presumably lead to wanting to use CXX = \"ccache clang++\",\ntoo. I think we don't really want inconsistent CC/CXX either ...\nalthough at least configure knows it needs to probe CXX's flags\nseparately. I suppose another way to resolve this would be for\nconfigure to make a third set of probes to see what switches CLANG\nhas --- but ick.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Nov 2021 12:31:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Mixing CC and a different CLANG seems like a bad idea" }, { "msg_contents": "Hi,\n\nOn 2021-11-18 11:56:59 -0500, Tom Lane wrote:\n> I noticed that, a week after Michael pushed 9ff47ea41 to silence\n> -Wcompound-token-split-by-macro warnings, buildfarm member sidewinder\n> is still spewing them. 
Investigation shows that it's building with\n> \n> configure: using compiler=cc (nb4 20200810) 7.5.0\n> configure: using CLANG=ccache clang\n> \n> and the system cc doesn't know -Wcompound-token-split-by-macro,\n> so we don't use it, but the modules that are built into bytecode\n> still produce the warnings because they're built with clang.\n\n> I think this idea of using clang with switches selected for some other\n> compiler is completely horrid, and needs to be nuked from orbit before\n> it causes problems worse than mere warnings.\n\nWe can test separately for flags, see BITCODE_CFLAGS/BITCODE_CXXFLAGS.\n\n\n> Why did we not simply insist that if you want to use --with-llvm, the\n> selected compiler must be clang? I cannot see any benefit of mix-and-match\n> here.\n\nIt seemed like a problematic restriction at the time. And still does to\nme.\n\nFor one, gcc does generate somewhat better code. For another, extensions could\nstill compile with a different compiler, and generate bitcode with another\ncompiler, so it's good to have that easily testable.\n\nIt also just seems architecturally wrong: People pressed for making the choice\nof JIT runtime replaceable, and it now is, at some pain. And forcing the main\ncompiler seems problematic from that angle. With the meson port jit\ncompilation actually kinda works on windows - but it seems we shouldn't force\npeople to not use visual studio there, just for that?\n\n\nI think the issue is more with trying to be miserly in the choice of compiler\nflag tests to duplicate and how many places to change to choose the right flag\nvariable. 
It's taken a while for this to become a real issue, so it perhaps\nwas the right choice at the time.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 18 Nov 2021 09:32:44 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Mixing CC and a different CLANG seems like a bad idea" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-11-18 11:56:59 -0500, Tom Lane wrote:\n>> Why did we not simply insist that if you want to use --with-llvm, the\n>> selected compiler must be clang? I cannot see any benefit of mix-and-match\n>> here.\n\n> It also just seems architecturally wrong: People pressed for making the choice\n> of JIT runtime replaceable, and it now is, at some pain. And forcing the main\n> compiler seems problematic from that angle.\n\nOK, I concede that's a reasonable concern. So we need to look more\ncarefully at how the switches for CLANG are being selected.\n\n> I think the issue is more with trying to be miserly in the choice of compiler\n> flag tests to duplicate and how many places to change to choose the right flag\n> variable. It's taken a while for this to become a real issue, so it perhaps\n> was the right choice at the time.\n\nYeah. I'm inclined to think we ought to just bite the bullet and fold\nCLANG/CLANGXX into the main list of compiler switch probes, so that we\ncheck every interesting one four times. That sounds fairly horrid,\nbut as long as you are using an autoconf cache file it's not really going\nto cost that much. (BTW, does meson have any comparable optimization?\nIf it doesn't, I bet that is going to be a problem.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Nov 2021 12:43:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Mixing CC and a different CLANG seems like a bad idea" }, { "msg_contents": "I wrote:\n> Yeah. 
I'm inclined to think we ought to just bite the bullet and fold\n> CLANG/CLANGXX into the main list of compiler switch probes, so that we\n> check every interesting one four times.\n\nAfter studying configure's list more closely, that doesn't seem like\na great plan either. There's a lot of idiosyncrasy in the tests,\nsuch as things that only apply to C or to C++.\n\nMore, I think (though this ought to be documented in a comment) that\nthe policy is to not bother turning on extra -W options in the bitcode\nswitches, on the grounds that warning once in the main build is enough.\nI follow that idea --- but what we missed is that we still need to\nturn *off* the warnings we're actively disabling. I shall go do that,\nif no objections.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Nov 2021 13:39:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Mixing CC and a different CLANG seems like a bad idea" }, { "msg_contents": "Hi,\n\nOn 2021-11-18 12:43:15 -0500, Tom Lane wrote:\n> Yeah. I'm inclined to think we ought to just bite the bullet and fold\n> CLANG/CLANGXX into the main list of compiler switch probes, so that we\n> check every interesting one four times. That sounds fairly horrid,\n> but as long as you are using an accache file it's not really going\n> to cost that much. (BTW, does meson have any comparable optimization?\n> If it doesn't, I bet that is going to be a problem.)\n> \n> \t\t\tregards, tom lane\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 18 Nov 2021 12:42:57 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Mixing CC and a different CLANG seems like a bad idea" }, { "msg_contents": "On Thu, Nov 18, 2021 at 3:43 PM Andres Freund <andres@anarazel.de> wrote:\n> Hi,\n>\n> Greetings,\n>\n> Andres Freund\n\nGreetings to you too, Andres. 
:-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Nov 2021 16:13:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Mixing CC and a different CLANG seems like a bad idea" }, { "msg_contents": "Hi,\n\nOn 2021-11-18 13:39:04 -0500, Tom Lane wrote:\n> After studying configure's list more closely, that doesn't seem like\n> a great plan either. There's a lot of idiosyncrasy in the tests,\n> such as things that only apply to C or to C++.\n\nYea. It seems doable, but not really worth it for now.\n\n\n> More, I think (though this ought to be documented in a comment) that\n> the policy is to not bother turning on extra -W options in the bitcode\n> switches, on the grounds that warning once in the main build is enough.\n> I follow that idea --- but what we missed is that we still need to\n> turn *off* the warnings we're actively disabling. I shall go do that,\n> if no objections.\n\nThanks for doing that, that does sound like a good way, at least for now.\n\n\nOn 2021-11-18 13:39:04 -0500, Tom Lane wrote:\n> That sounds fairly horrid, but as long as you are using an autoconf cache file it's\n> not really going to cost that much.\n\n> (BTW, does meson have any comparable optimization?\n> If it doesn't, I bet that is going to be a problem.)\n\nYes - imo in a nicer, more reliable way. It caches most test results, but with\nthe complete input, including the commandline (i.e. compiler flags) as the\ncache key. 
So no more errors about compile flags changing...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 18 Nov 2021 13:34:35 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Mixing CC and a different CLANG seems like a bad idea" }, { "msg_contents": "Hi,\n\nOn 2021-11-18 16:13:50 -0500, Robert Haas wrote:\n> On Thu, Nov 18, 2021 at 3:43 PM Andres Freund <andres@anarazel.de> wrote:\n> > Hi,\n> >\n> > Greetings,\n> >\n> > Andres Freund\n> \n> Greetings to you too, Andres. :-)\n\nOops I sent the email that I copied text from, rather than the one I wanted to\nsend...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 18 Nov 2021 13:35:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Mixing CC and a different CLANG seems like a bad idea" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-11-18 13:39:04 -0500, Tom Lane wrote:\n>> More, I think (though this ought to be documented in a comment) that\n>> the policy is to not bother turning on extra -W options in the bitcode\n>> switches, on the grounds that warning once in the main build is enough.\n>> I follow that idea --- but what we missed is that we still need to\n>> turn *off* the warnings we're actively disabling. I shall go do that,\n>> if no objections.\n\n> Thanks for doing that, that does sound like a good way, at least for now.\n\nCool, thanks for confirming.\n\nFor the archives' sake: I thought originally that this was triggered\nby having CC different from CLANG, and even wrote that in the commit\nmessage; but I was mistaken. I was misled by the fact that sidewinder\nis the only animal still reporting the compound-token-split-by-macro\nwarnings, and jumped to the conclusion that its unusual configuration\nwas the cause. But actually that's not it, because the flags we\nfeed to CLANG are *not* dependent on what CC will take. 
I now think\nthe actual uniqueness is that sidewinder is the only animal that is\nusing clang >= 12 and has --with-llvm enabled.\n\n>> (BTW, does meson have any comparable optimization?\n>> If it doesn't, I bet that is going to be a problem.)\n\n> Yes - imo in a nicer, more reliable way. It caches most test results, but with\n> the complete input, including the commandline (i.e. compiler flags) as the\n> cache key. So no more errors about compile flags changing...\n\nNice!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Nov 2021 16:57:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Mixing CC and a different CLANG seems like a bad idea" } ]
[ { "msg_contents": "Hi,\n\nIt seems like some of the XLogReaderAllocate failure check errors are\nnot having errdetail \"Failed while allocating a WAL reading\nprocessor.\" but just the errmsg \"out of memory\". The \"out of memory\"\nmessage without the errdetail is too generic and let's add it for\nconsistency and readability of the message in the server logs.\n\nHere's a tiny patch. Thoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 19 Nov 2021 09:29:15 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "add missing errdetail for xlogreader allocation failure error" }, { "msg_contents": "> On 19 Nov 2021, at 04:59, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n\n> It seems like some of the XLogReaderAllocate failure check errors are\n> not having errdetail \"Failed while allocating a WAL reading\n> processor.\" but just the errmsg \"out of memory\". The \"out of memory\"\n> message without the errdetail is too generic and let's add it for\n> consistency and readability of the message in the server logs.\n> \n> Here's a tiny patch. Thoughts?\n\nNo objections. There are quite a few more \"out of memory\" errors without\nerrdetail but that doesn't mean we can't move the needle with these.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 22 Nov 2021 12:58:36 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: add missing errdetail for xlogreader allocation failure error" }, { "msg_contents": "On Mon, Nov 22, 2021 at 19:58, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 19 Nov 2021, at 04:59, Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > It seems like some of the XLogReaderAllocate failure check errors are\n> > not having errdetail \"Failed while allocating a WAL reading\n> > processor.\" but just the errmsg \"out of memory\". 
The \"out of memory\"\n> > message without the errdetail is too generic and let's add it for\n> > consistency and readability of the message in the server logs.\n> >\n> > Here's a tiny patch. Thoughts?\n>\n> No objections. There are quite a few more \"out of memory\" errors without\n> errdetail but that doesn't mean we can't move the needle with these.\n>\n\n+1, it's often annoying to find out which code path actually raised that\nerror so this would be quite handy.", "msg_date": "Mon, 22 Nov 2021 20:01:08 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add missing errdetail for xlogreader allocation failure error" }, { "msg_contents": "On 2021-Nov-19, Bharath Rupireddy wrote:\n\n> It seems like some of the XLogReaderAllocate failure check errors are\n> not having errdetail \"Failed while allocating a WAL reading\n> processor.\" but just the errmsg \"out of memory\". The \"out of memory\"\n> message without the errdetail is too generic and let's add it for\n> consistency and readability of the message in the server logs.\n> \n> Here's a tiny patch. 
Thoughts?\n\nYou're right -- and since in a few other callers of XLogReaderAllocate\nwe do include the exact errdetail you propose, your patch looks good to\nme.\n\nWhile looking I noticed a few other places that could be improved similarly. I\ncrammed it all in a single commit, and pushed.\n\nThank you,\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 22 Nov 2021 13:50:00 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: add missing errdetail for xlogreader allocation failure error" } ]
[ { "msg_contents": "Happened to notice this when reading around the codes. The BrinMemTuple\nwould be initialized in brin_new_memtuple(), right after being created.\nSo we don't need to initialize it again outside.\n\ndiff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c\nindex ccc9fa0959..67a277e1f9 100644\n--- a/src/backend/access/brin/brin.c\n+++ b/src/backend/access/brin/brin.c\n@@ -1261,8 +1261,6 @@ initialize_brin_buildstate(Relation idxRel,\nBrinRevmap *revmap,\n state->bs_bdesc = brin_build_desc(idxRel);\n state->bs_dtuple = brin_new_memtuple(state->bs_bdesc);\n\n- brin_memtuple_initialize(state->bs_dtuple, state->bs_bdesc);\n-\n return state;\n }\n\nThanks\nRichard", "msg_date": "Fri, 19 Nov 2021 15:43:14 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "A spot of redundant initialization of brin memtuple" }, { "msg_contents": "On Fri, Nov 19, 2021 at 1:13 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n> Happened to notice this when reading around the codes. 
The BrinMemTuple\n> would be initialized in brin_new_memtuple(), right after being created.\n> So we don't need to initialize it again outside.\n>\n> diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c\n> index ccc9fa0959..67a277e1f9 100644\n> --- a/src/backend/access/brin/brin.c\n> +++ b/src/backend/access/brin/brin.c\n> @@ -1261,8 +1261,6 @@ initialize_brin_buildstate(Relation idxRel, BrinRevmap *revmap,\n> state->bs_bdesc = brin_build_desc(idxRel);\n> state->bs_dtuple = brin_new_memtuple(state->bs_bdesc);\n>\n> - brin_memtuple_initialize(state->bs_dtuple, state->bs_bdesc);\n> -\n> return state;\n> }\n\nGood catch. +1 for the change. Please submit a patch.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 19 Nov 2021 21:53:21 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A spot of redundant initialization of brin memtuple" }, { "msg_contents": "On Sat, Nov 20, 2021 at 12:23 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Fri, Nov 19, 2021 at 1:13 PM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n> >\n> > Happened to notice this when reading around the codes. The BrinMemTuple\n> > would be initialized in brin_new_memtuple(), right after being created.\n> > So we don't need to initialize it again outside.\n> >\n> > diff --git a/src/backend/access/brin/brin.c\n> b/src/backend/access/brin/brin.c\n> > index ccc9fa0959..67a277e1f9 100644\n> > --- a/src/backend/access/brin/brin.c\n> > +++ b/src/backend/access/brin/brin.c\n> > @@ -1261,8 +1261,6 @@ initialize_brin_buildstate(Relation idxRel,\n> BrinRevmap *revmap,\n> > state->bs_bdesc = brin_build_desc(idxRel);\n> > state->bs_dtuple = brin_new_memtuple(state->bs_bdesc);\n> >\n> > - brin_memtuple_initialize(state->bs_dtuple, state->bs_bdesc);\n> > -\n> > return state;\n> > }\n>\n> Good catch. +1 for the change. Please submit a patch.\n>\n\nThanks for the review. 
Attached is the patch.\n\nThanks\nRichard", "msg_date": "Mon, 22 Nov 2021 11:23:42 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A spot of redundant initialization of brin memtuple" }, { "msg_contents": "On Mon, Nov 22, 2021 at 8:53 AM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> On Sat, Nov 20, 2021 at 12:23 AM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Fri, Nov 19, 2021 at 1:13 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>> >\n>> > Happened to notice this when reading around the codes. The BrinMemTuple\n>> > would be initialized in brin_new_memtuple(), right after being created.\n>> > So we don't need to initialize it again outside.\n>> >\n>> > diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c\n>> > index ccc9fa0959..67a277e1f9 100644\n>> > --- a/src/backend/access/brin/brin.c\n>> > +++ b/src/backend/access/brin/brin.c\n>> > @@ -1261,8 +1261,6 @@ initialize_brin_buildstate(Relation idxRel, BrinRevmap *revmap,\n>> > state->bs_bdesc = brin_build_desc(idxRel);\n>> > state->bs_dtuple = brin_new_memtuple(state->bs_bdesc);\n>> >\n>> > - brin_memtuple_initialize(state->bs_dtuple, state->bs_bdesc);\n>> > -\n>> > return state;\n>> > }\n>>\n>> Good catch. +1 for the change. Please submit a patch.\n>\n>\n> Thanks for the review. Attached is the patch.\n\nThanks. The patch looks good to me. 
Let's add it to the commitfest to\nnot lose track of it.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 22 Nov 2021 10:22:44 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A spot of redundant initialization of brin memtuple" }, { "msg_contents": "On Mon, Nov 22, 2021 at 12:52 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Mon, Nov 22, 2021 at 8:53 AM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n> >\n> >\n> > On Sat, Nov 20, 2021 at 12:23 AM Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote:\n> >>\n> >> On Fri, Nov 19, 2021 at 1:13 PM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n> >> >\n> >> > Happened to notice this when reading around the codes. The\n> BrinMemTuple\n> >> > would be initialized in brin_new_memtuple(), right after being\n> created.\n> >> > So we don't need to initialize it again outside.\n> >> >\n> >> > diff --git a/src/backend/access/brin/brin.c\n> b/src/backend/access/brin/brin.c\n> >> > index ccc9fa0959..67a277e1f9 100644\n> >> > --- a/src/backend/access/brin/brin.c\n> >> > +++ b/src/backend/access/brin/brin.c\n> >> > @@ -1261,8 +1261,6 @@ initialize_brin_buildstate(Relation idxRel,\n> BrinRevmap *revmap,\n> >> > state->bs_bdesc = brin_build_desc(idxRel);\n> >> > state->bs_dtuple = brin_new_memtuple(state->bs_bdesc);\n> >> >\n> >> > - brin_memtuple_initialize(state->bs_dtuple, state->bs_bdesc);\n> >> > -\n> >> > return state;\n> >> > }\n> >>\n> >> Good catch. +1 for the change. Please submit a patch.\n> >\n> >\n> > Thanks for the review. Attached is the patch.\n>\n> Thanks. The patch looks good to me. Let's add it to the commitfest to\n> not lose track of it.\n>\n\nDone. 
Here it is:\nhttps://commitfest.postgresql.org/36/3424/\n\nThanks again for the review.\n\nThanks\nRichard", "msg_date": "Mon, 22 Nov 2021 14:34:56 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A spot of redundant initialization of brin memtuple" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Mon, Nov 22, 2021 at 12:52 PM Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote:\n>> Thanks. The patch looks good to me. 
Let's add it to the commitfest to\n>> not lose track of it.\n\n> Done. Here it is:\n> https://commitfest.postgresql.org/36/3424/\n\nPushed, thanks for the patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Jan 2022 16:54:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A spot of redundant initialization of brin memtuple" } ]
[ { "msg_contents": "Hi,\n\npgfdw_report_error() in postgres_fdw is implemented to report the message\n\"could not obtain ...\" if message_primary is NULL as follows.\nBut, just before this ereport(), message_primary is set to\npchomp(PQerrorMessage()) if it's NULL. So ISTM that message_primary is\nalways not NULL in ereport() and the message \"could not obtain ...\" is\nnever reported. Is this a bug?\n\n-------------------\nif (message_primary == NULL)\n\tmessage_primary = pchomp(PQerrorMessage(conn));\n\nereport(elevel,\n\t\t(errcode(sqlstate),\n\t\t message_primary ? errmsg_internal(\"%s\", message_primary) :\n\t\t errmsg(\"could not obtain message string for remote error\"),\n-------------------\n\n\nIf this is a bug, IMO the following change needs to be applied. Thought?\n\n-------------------\n ereport(elevel,\n (errcode(sqlstate),\n- message_primary ? errmsg_internal(\"%s\", message_primary) :\n+ (message_primary != NULL && message_primary[0] != '\\0') ?\n+ errmsg_internal(\"%s\", message_primary) :\n errmsg(\"could not obtain message string for remote error\"),\n-------------------\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 19 Nov 2021 17:18:22 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "issue in pgfdw_report_error()?" }, { "msg_contents": "On Fri, Nov 19, 2021 at 1:48 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> pgfdw_report_error() in postgres_fdw is implemented to report the message\n> \"could not obtain ...\" if message_primary is NULL as follows.\n> But, just before this ereport(), message_primary is set to\n> pchomp(PQerrorMessage()) if it's NULL. So ISTM that message_primary is\n> always not NULL in ereport() and the message \"could not obtain ...\" is\n> never reported. 
Is this a bug?\n>\n> -------------------\n> if (message_primary == NULL)\n> message_primary = pchomp(PQerrorMessage(conn));\n>\n> ereport(elevel,\n> (errcode(sqlstate),\n> message_primary ? errmsg_internal(\"%s\", message_primary) :\n> errmsg(\"could not obtain message string for remote error\"),\n> -------------------\n>\n>\n> If this is a bug, IMO the following change needs to be applied. Thought?\n>\n> -------------------\n> ereport(elevel,\n> (errcode(sqlstate),\n> - message_primary ? errmsg_internal(\"%s\", message_primary) :\n> + (message_primary != NULL && message_primary[0] != '\\0') ?\n> + errmsg_internal(\"%s\", message_primary) :\n> errmsg(\"could not obtain message string for remote error\"),\n> -------------------\n\nWhat if conn->errorMessage.data is NULL and PQerrorMessage returns it?\nThe message_primary can still be NULL right?\n\nI see the other places where PQerrorMessage is used, they do check for\nthe NULL value in some places the others don't do.\n\nin dblink.c:\n msg = PQerrorMessage(conn);\n if (msg == NULL || msg[0] == '\\0')\n PG_RETURN_TEXT_P(cstring_to_text(\"OK\"));\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 19 Nov 2021 18:27:49 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: issue in pgfdw_report_error()?" }, { "msg_contents": "On 2021/11/19 21:57, Bharath Rupireddy wrote:\n>> If this is a bug, IMO the following change needs to be applied. Thought?\n>>\n>> -------------------\n>> ereport(elevel,\n>> (errcode(sqlstate),\n>> - message_primary ? 
errmsg_internal(\"%s\", message_primary) :\n>> + (message_primary != NULL && message_primary[0] != '\\0') ?\n>> + errmsg_internal(\"%s\", message_primary) :\n>> errmsg(\"could not obtain message string for remote error\"),\n>> -------------------\n\nI attached the patch.\n\n\n> What if conn->errorMessage.data is NULL and PQerrorMessage returns it?\n> The message_primary can still be NULL right?\n\nSince conn->errorMessage is initialized by initPQExpBuffer(),\nPQerrorMessage() seems not to return NULL. But *if* it returns NULL,\npchomp(NULL) is executed and would cause a segmentation fault.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Sat, 20 Nov 2021 00:18:15 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: issue in pgfdw_report_error()?" }, { "msg_contents": "On Fri, Nov 19, 2021 at 8:48 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2021/11/19 21:57, Bharath Rupireddy wrote:\n> >> If this is a bug, IMO the following change needs to be applied. Thought?\n> >>\n> >> -------------------\n> >> ereport(elevel,\n> >> (errcode(sqlstate),\n> >> - message_primary ? errmsg_internal(\"%s\", message_primary) :\n> >> + (message_primary != NULL && message_primary[0] != '\\0') ?\n> >> + errmsg_internal(\"%s\", message_primary) :\n> >> errmsg(\"could not obtain message string for remote error\"),\n> >> -------------------\n>\n> I attached the patch.\n\nWith the existing code, it emits \"\" for message_primary[0] == '\\0'\ncases but with the patch it emits \"could not obtain message string for\nremote error\".\n\n- message_primary ? 
errmsg_internal(\"%s\", message_primary) :\n+ (message_primary != NULL && message_primary[0] != '\\0') ?\n+ errmsg_internal(\"%s\", message_primary) :\n\n>\n> > What if conn->errorMessage.data is NULL and PQerrorMessage returns it?\n> > The message_primary can still be NULL right?\n>\n> Since conn->errorMessage is initialized by initPQExpBuffer(),\n> PQerrorMessage() seems not to return NULL. But *if* it returns NULL,\n> pchomp(NULL) is executed and would cause a segmentation fault.\n\nWell, in that case, why can't we get rid of \"(message_primary != NULL\"\nand just have \"message_primary[0] != '\\0' ? errmsg_internal(\"%s\",\nmessage_primary) : errmsg(\"could not obtain message string for remote\nerror\")\" ?\n\nBTW, we might have to fix it in dblink_res_error too?\n\n /*\n * If we don't get a message from the PGresult, try the PGconn. This is\n * needed because for connection-level failures, PQexec may just return\n * NULL, not a PGresult at all.\n */\n if (message_primary == NULL)\n message_primary = pchomp(PQerrorMessage(conn));\n\n ereport(level,\n (errcode(sqlstate),\n message_primary ? errmsg_internal(\"%s\", message_primary) :\n errmsg(\"could not obtain message string for remote error\"),\n message_detail ? errdetail_internal(\"%s\", message_detail) : 0,\n message_hint ? errhint(\"%s\", message_hint) : 0,\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 19 Nov 2021 21:46:50 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: issue in pgfdw_report_error()?" }, { "msg_contents": "On 2021/11/20 1:16, Bharath Rupireddy wrote:\n> With the existing code, it emits \"\" for message_primary[0] == '\\0'\n> cases but with the patch it emits \"could not obtain message string for\n> remote error\".\n\nYes.\n\n\n> Well, in that case, why can't we get rid of \"(message_primary != NULL\"\n> and just have \"message_primary[0] != '\\0' ? 
errmsg_internal(\"%s\",\n> message_primary) : errmsg(\"could not obtain message string for remote\n> error\")\" ?\n\nThat's possible if we can confirm that PQerrorMessage() never returns\nNULL all the cases. I'm not sure how much it's worth doing that, though..\nIt seems more robust to check also NULL there.\n\n\n> BTW, we might have to fix it in dblink_res_error too?\n\nYeah, that's good idea. I included that change in the patch. Attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 22 Nov 2021 11:47:03 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: issue in pgfdw_report_error()?" }, { "msg_contents": "On Mon, Nov 22, 2021 at 8:17 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > Well, in that case, why can't we get rid of \"(message_primary != NULL\"\n> > and just have \"message_primary[0] != '\\0' ? errmsg_internal(\"%s\",\n> > message_primary) : errmsg(\"could not obtain message string for remote\n> > error\")\" ?\n>\n> That's possible if we can confirm that PQerrorMessage() never returns\n> NULL all the cases. I'm not sure how much it's worth doing that, though..\n> It seems more robust to check also NULL there.\n\nOkay.\n\n> > BTW, we might have to fix it in dblink_res_error too?\n>\n> Yeah, that's good idea. I included that change in the patch. Attached.\n\nThanks. pgfdw_report_error_v2 patch looks good to me.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 22 Nov 2021 10:29:33 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: issue in pgfdw_report_error()?" 
}, { "msg_contents": "\n\nOn 2021/11/22 13:59, Bharath Rupireddy wrote:\n> On Mon, Nov 22, 2021 at 8:17 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> Well, in that case, why can't we get rid of \"(message_primary != NULL\"\n>>> and just have \"message_primary[0] != '\\0' ? errmsg_internal(\"%s\",\n>>> message_primary) : errmsg(\"could not obtain message string for remote\n>>> error\")\" ?\n>>\n>> That's possible if we can confirm that PQerrorMessage() never returns\n>> NULL all the cases. I'm not sure how much it's worth doing that, though..\n>> It seems more robust to check also NULL there.\n> \n> Okay.\n> \n>>> BTW, we might have to fix it in dblink_res_error too?\n>>\n>> Yeah, that's good idea. I included that change in the patch. Attached.\n> \n> Thanks. pgfdw_report_error_v2 patch looks good to me.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 3 Dec 2021 17:41:44 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: issue in pgfdw_report_error()?" } ]
[ { "msg_contents": "Hi,\n\npostgres_fdw reports no log message when it sends \"ABORT TRANSACTION\" etc\nand gives up getting a reply from a foreign server because of timeout or\nconnection trouble. This makes the troubleshooting a bit harder when\nusing postgres_fdw.\n\nSo how about making postgres_fdw report a warning in that case?\nSpecifically I'm thinking to change pgfdw_get_cleanup_result()\nin postgres_fdw/connection.c so that it reports a warning in case of\na timeout or connection failure (error of PQconsumeInput()).\n\nBTW, pgfdw_get_cleanup_result() does almost the same things as\nwhat pgfdw_get_result() does. So it might be good idea to refactor\nthose function to reduce the code duplication.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 19 Nov 2021 19:01:26 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Shouldn't postgres_fdw report warning when it gives up getting result\n from foreign server?" }, { "msg_contents": "On Fri, Nov 19, 2021 at 3:31 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> postgres_fdw reports no log message when it sends \"ABORT TRANSACTION\" etc\n> and gives up getting a reply from a foreign server because of timeout or\n> connection trouble. 
This makes the troubleshooting a bit harder when\n> using postgres_fdw.\n>\n> So how about making postgres_fdw report a warning in that case?\n> Specifically I'm thinking to change pgfdw_get_cleanup_result()\n> in postgres_fdw/connection.c so that it reports a warning in case of\n> a timeout or connection failure (error of PQconsumeInput()).\n\nHow about adding the warning message in pgfdw_abort_cleanup instead of\npgfdw_get_cleanup_result?\n\nJust before this in pgfdw_abort_cleanup seems better to me.\n\n /*\n * If a command has been submitted to the remote server by using an\n * asynchronous execution function, the command might not have yet\n * completed. Check to see if a command is still being processed by the\n * remote server, and if so, request cancellation of the command.\n */\n if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE &&\n !pgfdw_cancel_query(entry->conn))\n return; /* Unable to cancel running query */\n\n if (!pgfdw_exec_cleanup_query(entry->conn, sql, false))\n return;\n\n> BTW, pgfdw_get_cleanup_result() does almost the same things as\n> what pgfdw_get_result() does. So it might be good idea to refactor\n> those function to reduce the code duplication.\n\nYeah, this seems to be an opportunity. But, the function should deal\nwith the timeout separately, I'm concerned that the function will\neventually be having if (timeout_param_specified) { } else { } sort\nof code. We can see how much duplicate code we save here vs the\nreadability or complexity that comes with the single function.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 19 Nov 2021 18:43:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Shouldn't postgres_fdw report warning when it gives up getting\n result from foreign server?" 
}, { "msg_contents": "On 2021/11/19 22:13, Bharath Rupireddy wrote:\n> How about adding the warning message in pgfdw_abort_cleanup instead of\n> pgfdw_get_cleanup_result?\n> \n> Just before this in pgfdw_abort_cleanup seems better to me.\n\nI was thinking pgfdw_get_cleanup_result() is better because it can\neasily report different warning messages based on cases of a timeout\nor connection failure, respectively. Since pgfdw_get_cleanup_result()\nreturns true in both those cases, ISTM that it's not easy to\ndistinguish them in pgfdw_abort_cleanup().\n\nAnyway, attached is the patch (pgfdw_get_cleanup_result_v1.patch)\nthat makes pgfdw_get_cleanup_result() report a warning message.\n\n\n> Yeah, this seems to be an opportunity. But, the function should deal\n> with the timeout separately, I'm concerned that the function will\n> eventually be having if (timeout_param_specified) { } else { } sort\n> of code. We can see how much duplicate code we save here vs the\n> readability or complexity that comes with the single function.\n\nPlease see the attached patch (refactor_pgfdw_get_result_v1.patch).\nThis is still WIP, but you can check how much the refactoring can\nsimplify the code.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Sat, 20 Nov 2021 00:44:18 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Shouldn't postgres_fdw report warning when it gives up getting\n result from foreign server?" 
}, { "msg_contents": "On Fri, Nov 19, 2021 at 9:14 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/11/19 22:13, Bharath Rupireddy wrote:\n> > How about adding the warning message in pgfdw_abort_cleanup instead of\n> > pgfdw_get_cleanup_result?\n> >\n> > Just before this in pgfdw_abort_cleanup seems better to me.\n>\n> I was thinking pgfdw_get_cleanup_result() is better because it can\n> easily report different warning messages based on cases of a timeout\n> or connection failure, respectively. Since pgfdw_get_cleanup_result()\n> returns true in both those cases, ISTM that it's not easy to\n> distinguish them in pgfdw_abort_cleanup().\n>\n> Anyway, attached is the patch (pgfdw_get_cleanup_result_v1.patch)\n> that makes pgfdw_get_cleanup_result() report a warning message.\n\nIt reports \"remote SQL command: (cancel request)\" which isn't a sql\nquery, but it looks okay to me as we report (cancel request). The\npgfdw_get_cleanup_result_v1 patch LGTM.\n\n> > Yeah, this seems to be an opportunity. But, the function should deal\n> > with the timeout separately, I'm concerned that the function will\n> > eventually be having if (timeout_param_specified) { } else { } sort\n> > of code. We can see how much duplicate code we save here vs the\n> > readability or complexity that comes with the single function.\n>\n> Please see the attached patch (refactor_pgfdw_get_result_v1.patch).\n> This is still WIP, but you can check how much the refactoring can\n> simplify the code.\n\nI think we can discuss this refactoring patch separately. Thoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 19 Nov 2021 22:08:47 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Shouldn't postgres_fdw report warning when it gives up getting\n result from foreign server?" 
}, { "msg_contents": "\n\nOn 2021/11/20 1:38, Bharath Rupireddy wrote:\n> It reports \"remote SQL command: (cancel request)\" which isn't a sql\n> query, but it looks okay to me as we report (cancel request). The\n> pgfdw_get_cleanup_result_v1 patch LGTM.\n\nBTW, we can hide the message \"remote SQL command: ..\" in cancel request case,\nbut which would make the debug and troubleshooting harder. So I decided to\nuse the string \"(cancel request)\" as SQL command string. Probably what string\nshould be used as SQL command might be debatable.\n\n\n> I think we can discuss this refactoring patch separately. Thoughts?\n\nYes! I will consider again if it's worth doing the refactoring,\nif yes, I will start new thread for the discussion for that.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 22 Nov 2021 11:55:25 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Shouldn't postgres_fdw report warning when it gives up getting\n result from foreign server?" }, { "msg_contents": "On Mon, Nov 22, 2021 at 8:25 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2021/11/20 1:38, Bharath Rupireddy wrote:\n> > It reports \"remote SQL command: (cancel request)\" which isn't a sql\n> > query, but it looks okay to me as we report (cancel request). The\n> > pgfdw_get_cleanup_result_v1 patch LGTM.\n>\n> BTW, we can hide the message \"remote SQL command: ..\" in cancel request case,\n> but which would make the debug and troubleshooting harder.\n\nYeah, let's not hide the message.\n\n> So I decided to\n> use the string \"(cancel request)\" as SQL command string. 
Probably what string\n> should be used as SQL command might be debatable.\n\nFor a cancel request maybe we can just say without the errcontext:\n                 ereport(WARNING,\n                         (errmsg(\"could not get result of cancel\nrequest due to timeout\")));\n\nSee the below existing message using \"cancel request\":\n            errmsg(\"could not send cancel request: %s\",\n\nFor SQL command we can say:\n                 ereport(WARNING,\n                         (errmsg(\"could not get query result due to\ntimeout\"),\n                         query ? errcontext(\"remote SQL command:\n%s\", query) : 0));\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 22 Nov 2021 10:46:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Shouldn't postgres_fdw report warning when it gives up getting\n result from foreign server?" }, { "msg_contents": "\n\nOn 2021/11/22 14:16, Bharath Rupireddy wrote:\n>> BTW, we can hide the message \"remote SQL command: ..\" in cancel request case,\n>> but which would make the debug and troubleshooting harder.\n> \n> Yeah, let's not hide the message.\n\nYes!\n\n\n> For a cancel request maybe we can just say without the errcontext:\n>                  ereport(WARNING,\n>                          (errmsg(\"could not get result of cancel\n> request due to timeout\")));\n> \n> See the below existing message using \"cancel request\":\n>             errmsg(\"could not send cancel request: %s\",\n> \n> For SQL command we can say:\n>                  ereport(WARNING,\n>                          (errmsg(\"could not get query result due to\n> timeout\"),\n>                          query ? errcontext(\"remote SQL command:\n> %s\", query) : 0));\n\nI wonder how pgfdw_get_cleanup_result() can determine which\nlog message to report. Probably we can add new boolean argument\nto pgfdw_get_cleanup_result() so that it should be set to true\nfor cancel request case, but false for query case. Then\npgfdw_get_cleanup_result() can decide which message to log\nbased on that argument. 
But it seems not good design to me.\nThought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 3 Dec 2021 17:56:13 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Shouldn't postgres_fdw report warning when it gives up getting\n result from foreign server?" }, { "msg_contents": "On Fri, Dec 3, 2021 at 2:26 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > For a cancel request maybe we can just say without the errcontext:\n> >                  ereport(WARNING,\n> >                          (errmsg(\"could not get result of cancel\n> > request due to timeout\")));\n> >\n> > See the below existing message using \"cancel request\":\n> >             errmsg(\"could not send cancel request: %s\",\n> >\n> > For SQL command we can say:\n> >                  ereport(WARNING,\n> >                          (errmsg(\"could not get query result due to\n> > timeout\"),\n> >                          query ? errcontext(\"remote SQL command:\n> > %s\", query) : 0));\n>\n> I wonder how pgfdw_get_cleanup_result() can determine which\n> log message to report. Probably we can add new boolean argument\n> to pgfdw_get_cleanup_result() so that it should be set to true\n> for cancel request case, but false for query case. Then\n> pgfdw_get_cleanup_result() can decide which message to log\n> based on that argument. But it seems not good design to me.\n> Thought?\n\nLet's not use the boolean just for the cancel request which isn't\nscalable IMO. 
Maybe a macro/enum?\n\nOtherwise, we could just do, although it doesn't look elegant:\n\nif (pgfdw_get_cleanup_result(conn, endtime, &result, \"(cancel request)\"))\n\nif (strcmp(query, \"(cancel request)\") == 0)\n WARNING without \"remote SQL command:\nelse\n WARNING with \"remote SQL command:\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 3 Dec 2021 19:34:21 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Shouldn't postgres_fdw report warning when it gives up getting\n result from foreign server?" }, { "msg_contents": "On 2021/12/03 23:04, Bharath Rupireddy wrote:\n> Let's not use the boolean just for the cancel request which isn't\n> scalable IMO. Maybe a macro/enum?\n> \n> Otherwise, we could just do, although it doesn't look elegant:\n> \n> if (pgfdw_get_cleanup_result(conn, endtime, &result, \"(cancel request)\"))\n> \n> if (strcmp(query, \"(cancel request)\") == 0)\n> WARNING without \"remote SQL command:\n> else\n> WARNING with \"remote SQL command:\n\nYeah, I agree that's not elegant..\n\nSo I'd like to propose new patch with different design from\nwhat I proposed before. Patch attached.\n\nThis patch changes pgfdw_exec_cleanup_query() so that it tells\nits callers the information about whether the timeout expired\nor not. Then the callers (pgfdw_exec_cleanup_query and\npgfdw_cancel_query) report the warning messages based on\nthe results from pgfdw_exec_cleanup_query().\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 6 Dec 2021 17:16:59 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Shouldn't postgres_fdw report warning when it gives up getting\n result from foreign server?" 
}, { "msg_contents": "On Mon, Dec 6, 2021 at 1:47 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> Yeah, I agree that's not elegant..\n>\n> So I'd like to propose new patch with different design from\n> what I proposed before. Patch attached.\n>\n> This patch changes pgfdw_exec_cleanup_query() so that it tells\n> its callers the information about whether the timeout expired\n> or not. Then the callers (pgfdw_exec_cleanup_query and\n> pgfdw_cancel_query) report the warning messages based on\n> the results from pgfdw_exec_cleanup_query().\n\n+1 for adding a new timed_out param to pgfdw_get_cleanup_result.\npgfdw_get_cleanup_result_v2 patch looks good to me.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 6 Dec 2021 17:20:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Shouldn't postgres_fdw report warning when it gives up getting\n result from foreign server?" }, { "msg_contents": "\n\nOn 2021/12/06 20:50, Bharath Rupireddy wrote:\n> On Mon, Dec 6, 2021 at 1:47 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> Yeah, I agree that's not elegant..\n>>\n>> So I'd like to propose new patch with different design from\n>> what I proposed before. Patch attached.\n>>\n>> This patch changes pgfdw_exec_cleanup_query() so that it tells\n>> its callers the information about whether the timeout expired\n>> or not. Then the callers (pgfdw_exec_cleanup_query and\n>> pgfdw_cancel_query) report the warning messages based on\n>> the results from pgfdw_exec_cleanup_query().\n> \n> +1 for adding a new timed_out param to pgfdw_get_cleanup_result.\n> pgfdw_get_cleanup_result_v2 patch looks good to me.\n\nThanks for the review! 
I pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 8 Dec 2021 23:35:30 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Shouldn't postgres_fdw report warning when it gives up getting\n result from foreign server?" } ]
[ { "msg_contents": "Hi all,\n\n\nNow that the security policy is getting stronger, it is not uncommon to \ncreate users with a password expiration date (VALID UNTIL). The problem \nis that the user is only aware that his password has expired when he can \nno longer log in unless the application with which he is connecting \nnotifies him beforehand.\n\n\nI'm wondering if we might be interested in having this feature in psql? \nFor example for a user whose password expires in 3 days:\n\n\ngilles=# CREATE ROLE foo LOGIN PASSWORD 'foo' VALID UNTIL '2021-11-22';\nCREATE ROLE\ngilles=# \\c - foo\nPassword for user foo:\npsql (15devel, server 14.1 (Ubuntu 14.1-2.pgdg20.04+1))\n** Warning: your password expires in 3 days **\nYou are now connected to database \"gilles\" as user \"foo\".\n\n\nMy idea is to add a psql variable that can be defined in psqlrc to \nspecify the number of days before the user password expires to start \nprinting a warning. The warning message is only diplayed in interactive \nmode Example:\n\n$ cat /etc/postgresql-common/psqlrc\n\\set PASSWORD_EXPIRE_WARNING 7\n\nDefault value is 0 like today no warning at all.\n\n\nOf course any other client application have to write his own beforehand \nexpiration notice but with psql we don't have it for the moment. If \nthere is interest for this psql feature I can post the patch.\n\n\n-- \nGilles Darold\n\n\n", "msg_date": "Fri, 19 Nov 2021 15:49:37 +0100", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Pasword expiration warning" }, { "msg_contents": "On Fri, 19 Nov 2021 at 20:19, Gilles Darold <gilles@migops.com> wrote:\n\n> Hi all,\n>\n>\n> Now that the security policy is getting stronger, it is not uncommon to\n> create users with a password expiration date (VALID UNTIL). The problem\n> is that the user is only aware that his password has expired when he can no\n> longer log in unless the application with which he is connecting notifies\n> him beforehand.\n>\n>\n> I'm wondering if we might be interested in having this feature in psql? 
For\n> example for a user whose password expires in 3 days:\n>\n> gilles=# CREATE ROLE foo LOGIN PASSWORD 'foo' VALID UNTIL '2021-11-22';\n> CREATE ROLE\n> gilles=# \\c - foo\n> Password for user foo:\n> psql (15devel, server 14.1 (Ubuntu 14.1-2.pgdg20.04+1))\n> ** Warning: your password expires in 3 days **\n> You are now connected to database \"gilles\" as user \"foo\".\n>\n>\n> My idea is to add a psql variable that can be defined in psqlrc to specify\n> the number of days before the user password expires to start printing a\n> warning. The warning message is only diplayed in interactive mode Example:\n>\n> $ cat /etc/postgresql-common/psqlrc\n> \\set PASSWORD_EXPIRE_WARNING 7\n>\n> +1\n\nIt is useful to notify the users about their near account expiration,\nand we are doing that at client level.\n\n\n\n\n\nDefault value is 0 like today no warning at all.\n>\n>\n> Of course any other client application have to write his own beforehand expiration\n> notice but with psql we don't have it for the moment. If there is interest\n> for this psql feature I can post the patch.\n>\n> --\n> Gilles Darold\n>\n>\n", "msg_date": "Fri, 19 Nov 2021 20:46:29 +0530", "msg_from": "Dinesh Chemuduru <dinesh.kumar@migops.com>", "msg_from_op": false, "msg_subject": "Re: Pasword expiration warning" }, { "msg_contents": "Gilles Darold <gilles@migops.com> writes:\n> Now that the security policy is getting stronger, it is not uncommon to \n> create users with a password expiration date (VALID UNTIL).\n\nTBH, I thought people were starting to realize that forced password\nrotations are a net security negative. It's true that a lot of\nplaces haven't gotten the word yet.\n\n> I'm wondering if we might be interested in having this feature in psql? \n\n
\n\nThis proposal kind of seems like a hack, because\n(1) not everybody uses psql\n(2) psql can't really tell whether rolvaliduntil is relevant.\n (It can see whether the server demanded a password, but\n maybe that went to LDAP or some other auth method.)\n\nThat leads me to wonder about server-side solutions. It's easy\nenough for the server to see that it's used a password with an\nexpiration N days away, but how could that be reported to the\nclient? The only idea that comes to mind that doesn't seem like\na protocol break is to issue a NOTICE message, which doesn't\nseem like it squares with your desire to only do this interactively.\n(Although I'm not sure I believe that's a great idea. If your\napplication breaks at 2AM because its password expired, you\nwon't be any happier than if your interactive sessions start to\nfail. Maybe a message that would leave a trail in the server log\nwould be best after all.)\n\n> Default value is 0 like today no warning at all.\n\nOff-by-default is pretty much guaranteed to not help most people.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Nov 2021 10:55:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Pasword expiration warning" }, { "msg_contents": "Le 19/11/2021 à 16:55, Tom Lane a écrit :\n> Gilles Darold <gilles@migops.com> writes:\n>> Now that the security policy is getting stronger, it is not uncommon to\n>> create users with a password expiration date (VALID UNTIL).\n> TBH, I thought people were starting to realize that forced password\n> rotations are a net security negative. It's true that a lot of\n> places haven't gotten the word yet.\n>\n>> I'm wondering if we might be interested in having this feature in psql?\n> This proposal kind of seems like a hack, because\n> (1) not everybody uses psql\n\n\nYes, for me it's a comfort feature. 
When a user connects to a PG backend \nusing an account that has expired, you have no information that the \nproblem is a password expiration. The message returned to the user is \njust: \"FATAL: password authentication failed for user \"foo\".  We had to \nverify in the log file that the problem is related to \"DETAIL:  User \n\"foo\" has an expired password.\".  If the user was warned beforehand to \nchange the password it will probably save me some time.\n\n\n> (2) psql can't really tell whether rolvaliduntil is relevant.\n> (It can see whether the server demanded a password, but\n> maybe that went to LDAP or some other auth method.)\n\n\nI agree, I hope that in case of external authentication rolvaliduntil is \nnot set and in this case I guess that there are other notification \nchannels to inform the user that his password will expire. Otherwise yes \nthe warning message could be a false positive, but the rolvaliduntil can \nbe changed to infinity to fix this case.\n\n\n> That leads me to wonder about server-side solutions. It's easy\n> enough for the server to see that it's used a password with an\n> expiration N days away, but how could that be reported to the\n> client? The only idea that comes to mind that doesn't seem like\n> a protocol break is to issue a NOTICE message, which doesn't\n> seem like it squares with your desire to only do this interactively.\n> (Although I'm not sure I believe that's a great idea. If your\n> application breaks at 2AM because its password expired, you\n> won't be any happier than if your interactive sessions start to\n> fail. Maybe a message that would leave a trail in the server log\n> would be best after all.)\n\n\nI think that it is the responsibility of the client to display a \nwarning when the password is about to expire; the backend could help the \napplication by sending a NOTICE, but the application will still have to \nreport the notice. 
I mean that it can continue to do all the work to \nverify that the password is about to expire.\n\n\n>> Default value is 0 like today no warning at all.\n> Off-by-default is pretty much guaranteed to not help most people.\n\nRight, I was thinking of backward compatibility but this does not apply \nhere. So default to 7 days will be better.\n\n\nTo sum up as I said on top this is just a comfort notification dedicated \nto psql and for local pg account to avoid looking at log file for \nforgetting users.\n\n\n-- \nGilles Darold", "msg_date": "Fri, 19 Nov 2021 17:56:20 +0100", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: Pasword expiration warning" }, 
{ "msg_contents": "On 11/19/21, 7:56 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> That leads me to wonder about server-side solutions. It's easy\r\n> enough for the server to see that it's used a password with an\r\n> expiration N days away, but how could that be reported to the\r\n> client? The only idea that comes to mind that doesn't seem like\r\n> a protocol break is to issue a NOTICE message, which doesn't\r\n> seem like it squares with your desire to only do this interactively.\r\n> (Although I'm not sure I believe that's a great idea. If your\r\n> application breaks at 2AM because its password expired, you\r\n> won't be any happier than if your interactive sessions start to\r\n> fail. Maybe a message that would leave a trail in the server log\r\n> would be best after all.)\r\n\r\nI bet it's possible to use the ClientAuthentication_hook for this. In\r\nany case, I agree that it probably belongs server-side so that other\r\nclients can benefit from this.\r\n\r\nNathan\r\n\r\n", "msg_date": "Sat, 20 Nov 2021 00:17:53 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Pasword expiration warning" }, 
{ "msg_contents": "On Sat, Nov 20, 2021 at 12:17:53AM +0000, Bossart, Nathan wrote:\n> I bet it's possible to use the ClientAuthentication_hook for this. 
In\n> any case, I agree that it probably belongs server-side so that other\n> clients can benefit from this.\n\nClientAuthentication_hook is called before the user is informed of the\nauthentication result, FWIW, so that does not seem wise.\n--\nMichael", "msg_date": "Sat, 20 Nov 2021 14:46:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Pasword expiration warning" }, { "msg_contents": "\nOn 11/19/21 19:17, Bossart, Nathan wrote:\n> On 11/19/21, 7:56 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n>> That leads me to wonder about server-side solutions. It's easy\n>> enough for the server to see that it's used a password with an\n>> expiration N days away, but how could that be reported to the\n>> client? The only idea that comes to mind that doesn't seem like\n>> a protocol break is to issue a NOTICE message, which doesn't\n>> seem like it squares with your desire to only do this interactively.\n>> (Although I'm not sure I believe that's a great idea. If your\n>> application breaks at 2AM because its password expired, you\n>> won't be any happier than if your interactive sessions start to\n>> fail. Maybe a message that would leave a trail in the server log\n>> would be best after all.)\n> I bet it's possible to use the ClientAuthentication_hook for this. In\n> any case, I agree that it probably belongs server-side so that other\n> clients can benefit from this.\n>\n\n+1 for a server side solution. 
The people most likely to benefit from\nthis are the people least likely to be using psql IMNSHO.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 20 Nov 2021 08:48:35 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Pasword expiration warning" }, { "msg_contents": "Le 20/11/2021 à 14:48, Andrew Dunstan a écrit :\n> On 11/19/21 19:17, Bossart, Nathan wrote:\n>> On 11/19/21, 7:56 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n>>> That leads me to wonder about server-side solutions. It's easy\n>>> enough for the server to see that it's used a password with an\n>>> expiration N days away, but how could that be reported to the\n>>> client? The only idea that comes to mind that doesn't seem like\n>>> a protocol break is to issue a NOTICE message, which doesn't\n>>> seem like it squares with your desire to only do this interactively.\n>>> (Although I'm not sure I believe that's a great idea. If your\n>>> application breaks at 2AM because its password expired, you\n>>> won't be any happier than if your interactive sessions start to\n>>> fail. Maybe a message that would leave a trail in the server log\n>>> would be best after all.)\n>> I bet it's possible to use the ClientAuthentication_hook for this. In\n>> any case, I agree that it probably belongs server-side so that other\n>> clients can benefit from this.\n>>\n> +1 for a server side solution. The people most likely to benefit from\n> this are the people least likely to be using psql IMNSHO.\n\n\nOk, I can try to implement something at server side using a NOTICE message.\n\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Sun, 21 Nov 2021 10:49:57 +0100", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: Pasword expiration warning" } ]
[ { "msg_contents": "Why this happens ?\n\ncreate table t(i int);\nCREATE TABLE\ninsert into t values(1);\nINSERT 0 1\nselect (ctid::text::point)[1]::int, * from t;\n ctid | i\n------+---\n    1 | 1\n(1 row)\nupdate t set i = i;\nUPDATE 1\nselect (ctid::text::point)[1]::int, * from t;\n ctid | i\n------+---\n    2 | 1\n(1 row)\n\nIf nothing was changed, why create a new record, append data to wal, set\nold record as deleted, etc, etc ?\n\nregards,\nMarcos", "msg_date": "Fri, 19 Nov 2021 13:38:25 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "update with no changes" }, 
{ "msg_contents": "On Fri, Nov 19, 2021 at 9:38 AM Marcos Pegoraro <marcos@f10.com.br> wrote:\n\n> If nothing was changed, why create a new record, append data to wal, set\n> old record as deleted, etc, etc ?\n>\n\nBecause it takes resources to determine that nothing changed. If you want\nto opt-in into that there is even an extension trigger that makes doing so\nfairly simple. But it's off by default because the typical case is that\npeople don't frequently perform no-op updates so why eat the expense.\n\nDavid J.", "msg_date": "Fri, 19 Nov 2021 09:45:18 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: update with no changes" }, 
{ "msg_contents": ">\n> Because it takes resources to determine that nothing changed. If you want\n> to opt-in into that there is even an extension trigger that makes doing so\n> fairly simple. But it's off by default because the typical case is that\n> people don't frequently perform no-op updates so why eat the expense.\n>\nBut it takes resources for other operations, right ?\nI think this is not unusual. If an user double click on a grid, just sees a\nrecord and clicks ok to save, probably that application calls an update\ninstead of seeing if some field were changed before that.", "msg_date": "Fri, 19 Nov 2021 14:03:09 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: update with no changes" }, 
{ "msg_contents": "Hi, \n\nOn November 19, 2021 8:38:25 AM PST, Marcos Pegoraro <marcos@f10.com.br> wrote:\n>Why this happens ?\n>\n>create table t(i int);\n>CREATE TABLE\n>insert into t values(1);\n>INSERT 0 1\n>select (ctid::text::point)[1]::int, * from t;\n> ctid | i\n>------+---\n>    1 | 1\n>(1 row)\n>update t set i = i;\n>UPDATE 1\n>select (ctid::text::point)[1]::int, * from t;\n> ctid | i\n>------+---\n>    2 | 1\n>(1 row)\n>\n>If nothing was changed, why create a new record, append data to wal, set\n>old record as deleted, etc, etc ?\n\nYou can't just skip doing updates without causing problems. An update basically acquires an exclusive row lock (which in turn prevents foreign key references from being removed etc). Just skipping that would cause a lot of new deadlocks and correctness issues.\n\nThere's also cases where people intentionally perform updates to move records around etc.\n\nRegards,\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Fri, 19 Nov 2021 09:20:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: update with no changes" }, 
{ "msg_contents": "Marcos Pegoraro <marcos@f10.com.br> writes:\n> But it takes resources for other operations, right ?\n> I think this is not unusual. If an user double click on a grid, just sees a\n> record and clicks ok to save, probably that application calls an update\n> instead of seeing if some field were changed before that.\n\n[ shrug... 
] As David said, if you think that it's important to have\nsuch a check in a particular application, use a trigger to check it.\nThere's one built-in, you don't even need an extension:\n\nhttps://www.postgresql.org/docs/current/functions-trigger.html\n\nWe're not going to make that happen by default though, because it'd\nbe a net drag on better-written applications.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Nov 2021 12:22:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: update with no changes" }, 
{ "msg_contents": "On Fri, Nov 19, 2021 at 10:03 AM Marcos Pegoraro <marcos@f10.com.br> wrote:\n\n> Because it takes resources to determine that nothing changed. If you want\n>> to opt-in into that there is even an extension trigger that makes doing so\n>> fairly simple. But it's off by default because the typical case is that\n>> people don't frequently perform no-op updates so why eat the expense.\n>>\n> But it takes resources for other operations, right ?\n> I think this is not unusual. If an user double click on a grid, just sees\n> a record and clicks ok to save, probably that application calls an update\n> instead of seeing if some field were changed before that.\n>\n>\nThis has been the documented behavior for decades. I suggest you research\nprior discussions on the topic if you need more than what has been\nprovided. You'd need to bring up some novel points about why a change here\nwould be overall beneficial to get any interest, at least from me, in\ndiscussing the topic further.\n\nI get the idea of letting the server centralize logic like this - but\nfrankly if the application is choosing to send all that data across the\nwire just to have the server throw it away the application is wasting\nnetwork I/O. If it does manage its resources carefully then the server\nwill never even see an update and its behavior here becomes moot.\n\nDavid J.", "msg_date": "Fri, 19 Nov 2021 10:37:52 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: update with no changes" }, 
{ "msg_contents": ">\n> I get the idea of letting the server centralize logic like this - but\n> frankly if the application is choosing to send all that data across the\n> wire just to have the server throw it away the application is wasting\n> network I/O. If it does manage its resources carefully then the server\n> will never even see an update and its behavior here becomes moot.\n>\n> I understand your point, it´s responsability of application to do what it\nhas to do. But lots of times (maybe 98% of them) is not same people doing\nserver side and application side. So, Postgres guys will have to review all\ncode being done on apps ?\n\nAnd ok, thanks for explaining me.", "msg_date": "Fri, 19 Nov 2021 14:57:32 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: update with no changes" }, 
{ "msg_contents": "On Fri, Nov 19, 2021 at 10:57 AM Marcos Pegoraro <marcos@f10.com.br> wrote:\n\n> So, Postgres guys will have to review all code being done on apps ?\n>>\n>\n>\nI suppose if the application side cannot be trusted to code to a\nspecification without having the server side add validation and/or\ncompensation code to catch the bugs then, yes, one option is to have the\nserver side do extra work. There are other solutions, some of which are\nnot even technical in nature.\n\nDavid J.", "msg_date": "Fri, 19 Nov 2021 12:02:37 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: update with no changes" }, 
{ "msg_contents": ">\n> I suppose if the application side cannot be trusted to code to a\n> specification without having the server side add validation and/or\n> compensation code to catch the bugs then, yes, one option is to have the\n> server side do extra work. There are other solutions, some of which are\n> not even technical in nature.\n>\n> Just to show you my problem, since I wrote my first email of this\ndiscussion, I changed a little my auditing trigger to get total of records\nbeing updated. Only last 3 or 4 hours we´re talking, we had 12% of them\nwith no changes. It is a lot.\n\nThanks again, we have a huge code review here.\n\nregards,\nMarcos", "msg_date": "Fri, 19 Nov 2021 17:09:36 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: update with no changes" }, 
{ "msg_contents": "On Fri, Nov 19, 2021 at 10:20 AM Andres Freund <andres@anarazel.de> wrote:\n\n> You can't just skip doing updates without causing problems.\n>\n>\nGiven you can do exactly this by using a trigger this statement is either\nfalse or I'm missing some piece of knowledge it relies upon.\n\nDavid J.", "msg_date": "Fri, 19 Nov 2021 15:32:43 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: update with no changes" }, 
{ "msg_contents": "\nOn 11/19/21 12:57, Marcos Pegoraro wrote:\n>\n> I get the idea of letting the server centralize logic like this -\n> but frankly if the application is choosing to send all that data\n> across the wire just to have the server throw it away the\n> application is wasting network I/O.  If it does manage its\n> resources carefully then the server will never even see an update\n> and its behavior here becomes moot.\n>\n> I understand your point, it´s responsability of application to do what\n> it has to do. But lots of times (maybe 98% of them) is not same people\n> doing server side and application side. So, Postgres guys will have to\n> review all code being done on apps ?\n>\n> And ok, thanks for explaining me.\n\n\nsuppress_redundant_updates_trigger was created precisely because it's\nnot always easy to create application code in such a way that it\ngenerates no redundant updates. 
However, there is a cost to using it,\nand the break even point can be surprisingly high. It should therefore\nbe used with caution, and after appropriate benchmarks.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 20 Nov 2021 09:41:21 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: update with no changes" }, 
{ "msg_contents": ">\n> suppress_redundant_updates_trigger was created precisely because it's\n> not always easy to create application code in such a way that it\n> generates no redundant updates. However, there is a cost to using it,\n> and the break even point can be surprisingly high. It should therefore\n> be used with caution, and after appropriate benchmarks.\n>\n> well, there is a cost of not using it too. If lots of things needs to be\ndone when a record is stored, and if it doesn´t needed to be stored, all\nthese things will not be done. So, what are pros of changing a record\nwhich did not changed any value and what are cons of it ? So, I understood\nthe way it works and yes, my point of view is that this trigger is really\nneeded, for me, obviously.", "msg_date": "Sat, 20 Nov 2021 12:03:26 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: update with no changes" }, 
{ "msg_contents": "\nOn 11/20/21 10:03, Marcos Pegoraro wrote:\n>\n> suppress_redundant_updates_trigger was created precisely because it's\n> not always easy to create application code in such a way that it\n> generates no redundant updates. However, there is a cost to using it,\n> and the break even point can be surprisingly high. It should therefore\n> be used with caution, and after appropriate benchmarks.\n>\n> well, there is a cost of not using it too. If lots of things needs to\n> be done when a record is stored, and if it doesn´t needed to be\n> stored, all these things will not be done.  So, what are pros of\n> changing a record which did not changed any value and what are cons of\n> it ? So, I understood the way it works and yes, my point of view is\n> that this trigger is really needed, for me, obviously.\n\n\n\nIf you need it then use it. It's been built into postgres since release\n8.4. Just be aware that if you use it there is a cost incurred for every\nrecord updated whether or not the record is redundant. If only 12% of\nyour updates are redundant I suspect it will be a net loss for you, but\nas I said above you should benchmark it.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 20 Nov 2021 10:19:08 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: update with no changes" } ]
[ { "msg_contents": "Hi,\n\nWhile working on a patch, I noticed that we never clean the cache of \nsequence values, i.e. seqhashtab in sequence.c. That is, once we create \nan entry for a sequence (by calling nextval), it will stay forever \n(until the backend terminates). Even if the sequence gets dropped, the \nentry stays behind.\n\nThe SeqTableData entries are fairly small (~30B), but even considering \nthat it's still a memory leak. Not an issue for common workloads, which \nuse just a handful of sequences, but sometimes people create a lot of \ntemporary objects, including sequences.\n\nOr what happens when a connection calls nextval() on a sequence, the \nsequence gets dropped, the Oid gets reused for new sequence, and then we \ncall nextval() again? Seems like it might cause various issues with \nreturning bogus values from stale cache.\n\nAdmittedly, it doesn't seem like a critical issue - it's been like this \nsince 2002 (a2597ef179) [1] which separated the sequence cache from \nrelcache, to address issues with locking.\n\n\n[1] \nhttps://www.postgresql.org/message-id/flat/23899.1022076750%40sss.pgh.pa.us\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 19 Nov 2021 18:33:14 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "sequence cache is kept forever" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> While working on a patch, I noticed that we never clean the cache of \n> sequence values, i.e. seqhashtab in sequence.c. That is, once we create \n> an entry for a sequence (by calling nextval), it will stay forever \n> (until the backend terminates). Even if the sequence gets dropped, the \n> entry stays behind.\n\nIt might be reasonable to drop entries when their sequence is dropped,\nthough I wonder how much that would move the needle for real-world\nusages. 
Dropping an entry \"just because\" risks losing cached value\nassignments, which might be unpleasant (e.g. due to faster advancement\nof the sequence's counter, more WAL traffic, etc). With no actual\ncomplaints from the field, I'm disinclined to do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Nov 2021 12:50:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sequence cache is kept forever" }, { "msg_contents": "\n\nOn 11/19/21 18:50, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> While working on a patch, I noticed that we never clean the cache of\n>> sequence values, i.e. seqhashtab in sequence.c. That is, once we create\n>> an entry for a sequence (by calling nextval), it will stay forever\n>> (until the backend terminates). Even if the sequence gets dropped, the\n>> entry stays behind.\n> \n> It might be reasonable to drop entries when their sequence is dropped,\n> though I wonder how much that would move the needle for real-world\n> usages. Dropping an entry \"just because\" risks losing cached value\n> assignments, which might be unpleasant (e.g. due to faster advancement\n> of the sequence's counter, more WAL traffic, etc). With no actual\n> complaints from the field, I'm disinclined to do that.\n> \n\nMy point was about dropped sequences. I certainly agree we shouldn't \ndiscard entries for sequences that still exist.\n\nI ran into this while working on the \"decoding of sequences\" patch. \nHannu Krosing proposed to track sequences modified in a transaction and \nthen log the state just once at commit - the seqhashtab seems like a \ngood match. 
But at commit we have to walk the hashtable to see which \nsequences need this extra logging, and if we never discard anything it's \ngoing to be more expensive over time.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 19 Nov 2021 19:37:05 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: sequence cache is kept forever" } ]
[ { "msg_contents": "Why do Toast tables have it's own visibility map and xmin, xmax columns etc?\nIsn't it increasing row size in a toast table and adding more complexity?\n\nIdeally all the vacuum cleanup on a TOAST can be done based on Primary\ntable xmin,xmax and VM info. Yes, that makes any cleanup on TOAST to be\nglued up with the Primary table.\n\nWhy do Toast tables have it's own visibility map and xmin, xmax columns etc?Isn't it increasing row size in a toast table and adding more complexity?Ideally all the vacuum cleanup on a TOAST can be done based on Primary table xmin,xmax and VM info. Yes, that makes any cleanup on TOAST to be glued up with the Primary table.", "msg_date": "Sat, 20 Nov 2021 00:15:40 +0530", "msg_from": "Virender Singla <virender.cse@gmail.com>", "msg_from_op": true, "msg_subject": "TOAST - why separate visibility map" }, { "msg_contents": "Another point that currently manual VACUUM job does cleanup/freeze on\nprimary table first and then toast table. It looks easy pick to possibly\nhave a configurable parameter to run it on both the tables in parallel.\n\nOn Sat, Nov 20, 2021 at 12:15 AM Virender Singla <virender.cse@gmail.com>\nwrote:\n\n> Why do Toast tables have it's own visibility map and xmin, xmax columns\n> etc?\n> Isn't it increasing row size in a toast table and adding more complexity?\n>\n> Ideally all the vacuum cleanup on a TOAST can be done based on Primary\n> table xmin,xmax and VM info. Yes, that makes any cleanup on TOAST to be\n> glued up with the Primary table.\n>\n>\n>\n\nAnother point that currently manual VACUUM job does cleanup/freeze on primary table first and then toast table. It looks easy pick to possibly have a configurable parameter to run it on both the tables in parallel.  
On Sat, Nov 20, 2021 at 12:15 AM Virender Singla <virender.cse@gmail.com> wrote:Why do Toast tables have it's own visibility map and xmin, xmax columns etc?Isn't it increasing row size in a toast table and adding more complexity?Ideally all the vacuum cleanup on a TOAST can be done based on Primary table xmin,xmax and VM info. Yes, that makes any cleanup on TOAST to be glued up with the Primary table.", "msg_date": "Sat, 20 Nov 2021 01:18:58 +0530", "msg_from": "Virender Singla <virender.cse@gmail.com>", "msg_from_op": true, "msg_subject": "Re: TOAST - why separate visibility map" }, { "msg_contents": "Virender Singla <virender.cse@gmail.com> writes:\n> Why do Toast tables have it's own visibility map and xmin, xmax columns etc?\n> Isn't it increasing row size in a toast table and adding more complexity?\n\nThere are advantages to having the same low-level format for toast tables\nas regular tables --- for example, that you can look into a toast table\nfor debugging purposes with normal SQL queries. Even if we weren't tied\nto that format for disk-storage-compatibility reasons, I'd be disinclined\nto change it.\n\nIt might be feasible to drop the visibility map for toast tables, though.\nI agree that's not buying much, since ordinary queries don't consult it.\nNot sure if there'd be a win proportional to the added code complexity.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Nov 2021 15:31:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TOAST - why separate visibility map" }, { "msg_contents": "Hi, \n\nOn November 19, 2021 12:31:00 PM PST, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Virender Singla <virender.cse@gmail.com> writes:\n>> Why do Toast tables have it's own visibility map and xmin, xmax columns etc?\n>> Isn't it increasing row size in a toast table and adding more complexity?\n\nGiven the size of toasted data, the overhead is unlikely to be a significant overhead. 
It's much more an issue for the main table, where narrow rows are common.\n\nWe don't want to use the main table visibility information for vacuuming either - that'd practically prevent HOT cleanup, and we'd need a new expensive way of doing cleanup in toast tables using the main row's visibility information.\n\n\n>There are advantages to having the same low-level format for toast tables\n>as regular tables --- for example, that you can look into a toast table\n>for debugging purposes with normal SQL queries. Even if we weren't tied\n>to that format for disk-storage-compatibility reasons, I'd be disinclined\n>to change it.\n>\n>It might be feasible to drop the visibility map for toast tables, though.\n>I agree that's not buying much, since ordinary queries don't consult it.\n>Not sure if there'd be a win proportional to the added code complexity.\n\nI think it'd be a bad idea - the VM is used by vacuum to avoid rereading already vacuumed ranges. Losing that for large toast tables would be bad.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Fri, 19 Nov 2021 22:29:10 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: TOAST - why separate visibility map" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On November 19, 2021 12:31:00 PM PST, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It might be feasible to drop the visibility map for toast tables, though.\n\n> I think it'd be a bad idea - the VM is used by vacuum to avoid rereading already vacuumed ranges. Losing that for large toast tables would be bad.\n\nAh, right. 
I was thinking vacuuming depended on the other map fork,\nbut of course it needs this one.\n\nIn short, there are indeed good reasons why it works like this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Nov 2021 11:16:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TOAST - why separate visibility map" }, { "msg_contents": "\"Given the size of toasted data, the overhead is unlikely to be a\nsignificant overhead. It's much more an issue for the main table, where\nnarrow rows are common.\"\n\nCompletely agree, row size should not be a big concern for toast tables.\n\nHowever write amplification will happen with vacuum freeze, where\ntransaction ids need to be frozen in wider toast table tuples as well. I have\nnot explored if TOAST has separate hint bits info as well. In that case it\nmeans normal vacuum (or SELECT after WRITE) has to completely rewrite the\nbig toast table tuples along with the small main table to set the hint bits\n(commit/rollback).\n\nI believe B tree Index does not contain any separate visibility info so\nthat means the only work VACUUM does on Indexes is cleaning up dead tuples.\n\nWith maintaining one visibility info, above operations could be way faster.\nHowever now the main table and TOAST vacuuming process will be glued\ntogether, where optimization can be thought about like two synchronized\nthreads working together for main and TOAST table to do the cleanup job.\nAgree that hot updates are gone in TOAST if there is a common VM.\n\nOverall this looks complex.\n\nOn Sat, Nov 20, 2021 at 9:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > On November 19, 2021 12:31:00 PM PST, Tom Lane <tgl@sss.pgh.pa.us>\n> wrote:\n> >> It might be feasible to drop the visibility map for toast tables,\n> though.\n>\n> > I think it'd be a bad idea - the VM is used by vacuum to avoid rereading\n> already vacuumed ranges. 
Losing that for large toast tables would be bad.\n>\n> Ah, right. I was thinking vacuuming depended on the other map fork,\n> but of course it needs this one.\n>\n> In short, there are indeed good reasons why it works like this.\n>\n> regards, tom lane\n>", "msg_date": "Thu, 25 Nov 2021 19:52:01 +0530", "msg_from": "Virender Singla <virender.cse@gmail.com>", "msg_from_op": true, "msg_subject": "Re: TOAST - why separate visibility map" } ]
[ { "msg_contents": "Hi,\n\nIt seems like the same macro names for\nSnapBuildOnDiskNotChecksummedSize and SnapBuildOnDiskChecksummedSize\nare being used in slot.c and snapbuild.c. I think, in slot.c, we can\nrename them to ReplicationSlotOnDiskNotChecksummedSize and\nReplicationSlotOnDiskChecksummedSize\nsimilar to the other macros ReplicationSlotOnDiskConstantSize and\nReplicationSlotOnDiskV2Size.\n\nHere's a tiny patch for the above change.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sat, 20 Nov 2021 19:43:11 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "rename SnapBuild* macros in slot.c" }, { "msg_contents": "On Sat, Nov 20, 2021 at 7:43 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> It seems like the same macro names for\n> SnapBuildOnDiskNotChecksummedSize and SnapBuildOnDiskChecksummedSize\n> are being used in slot.c and snapbuild.c. I think, in slot.c, we can\n> rename them to ReplicationSlotOnDiskNotChecksummedSize and\n> ReplicationSlotOnDiskChecksummedSize\n> similar to the other macros ReplicationSlotOnDiskConstantSize and\n> ReplicationSlotOnDiskV2Size.\n>\n\n+1 for this change. This seems to be introduced by commit ec5896aed3\n[1] and I think it is just a typo to name these macros starting with\nSnapBuildOnDisk* unless I am missing something.\n\n\n[1]\ncommit ec5896aed3c01da24c1f335f138817e9890d68b6\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Wed Nov 12 18:52:49 2014 +0100\n\n Fix several weaknesses in slot and logical replication on-disk\nserialization.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 22 Nov 2021 17:12:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: rename SnapBuild* macros in slot.c" }, { "msg_contents": "On 2021-Nov-22, Amit Kapila wrote:\n\n> +1 for this change. 
This seems to be introduced by commit ec5896aed3\n> [1] and I think it is just a typo to name these macros starting with\n> SnapBuildOnDisk* unless I am missing something.\n\nYeah, it looks pretty weird. +1 for the change on consistency grounds.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 22 Nov 2021 11:17:35 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: rename SnapBuild* macros in slot.c" }, { "msg_contents": "On Mon, Nov 22, 2021 at 7:47 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Nov-22, Amit Kapila wrote:\n>\n> > +1 for this change. This seems to be introduced by commit ec5896aed3\n> > [1] and I think it is just a typo to name these macros starting with\n> > SnapBuildOnDisk* unless I am missing something.\n>\n> Yeah, it looks pretty weird. +1 for the change on consistency grounds.\n>\n\nOkay, I will push this tomorrow unless I see any objections.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 23 Nov 2021 08:12:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: rename SnapBuild* macros in slot.c" }, { "msg_contents": "On Tue, Nov 23, 2021 at 8:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 22, 2021 at 7:47 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Nov-22, Amit Kapila wrote:\n> >\n> > > +1 for this change. This seems to be introduced by commit ec5896aed3\n> > > [1] and I think it is just a typo to name these macros starting with\n> > > SnapBuildOnDisk* unless I am missing something.\n> >\n> > Yeah, it looks pretty weird. 
+1 for the change on consistency grounds.\n> >\n>\n> Okay, I will push this tomorrow unless I see any objections.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 24 Nov 2021 14:25:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: rename SnapBuild* macros in slot.c" } ]
[ { "msg_contents": "Hi,\n\nI have just joined to start a community consultation process for a\nproposal. I just finished the proposal document, I spent time writing a\nProblem and Solution section, and I have done quite a bit of upfront\nexploration of the code.\n\nSee:\n\n - Google Document with Commenting turned on\n https://docs.google.com/document/d/1u6mVKEHfKtR80UrMLNYrp5D6cCSW1_arcTaZ9HcAKlw/edit?usp=sharing.\n Feel free to request Edit access.\n - The current PDF version is attached too for archive purposes\n\nI am very keen to make this happen.\n\nThanks\n\n-- \n--\nTodd Hubers", "msg_date": "Sun, 21 Nov 2021 03:11:03 +1100", "msg_from": "Todd Hubers <todd.hubers@gmail.com>", "msg_from_op": true, "msg_subject": "Feature Proposal: Connection Pool Optimization - Change the\n Connection User" }, { "msg_contents": "Hi Todd!\n\n> I have just joined to start a community consultation process for a proposal. I just finished the proposal document, I spent time writing a Problem and Solution section, and I have done quite a bit of upfront exploration of the code.\n> \n> See:\n> \n> * Google Document with Commenting turned on https://docs.google.com/document/d/1u6mVKEHfKtR80UrMLNYrp5D6cCSW1_arcTaZ9HcAKlw/edit?usp=sharing. 
Feel free to request Edit access.\n> * The current PDF version is attached too for archive purposes\n\nI think you are right: the ability to reconnect to the same DB with another user would be beneficial for connection poolers.\nCurrently, PgBouncer\\Odyssey maintains a lot of small pools - one for each user, while having a big one would be great.\n From my experience, most of the cost of opening a new server connection comes from a fork() and subsequent CoW memory allocations.\nAnd these expenses would not be necessary if we could just send a new Startup message after the Terminate ('X') message.\nBut this effectively would empty out all backend caches.\nYet the feature seems useful from my PoV.\n\nBest regards, Andrey Borodin.\n\n\n", "msg_date": "Sat, 20 Nov 2021 22:27:28 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Feature Proposal: Connection Pool Optimization - Change the\n Connection User" }, { "msg_contents": "On Sun, Nov 21, 2021 at 03:11:03AM +1100, Todd Hubers wrote:\n> I have just joined to start a community consultation process for a\n> proposal. I just finished the proposal document, I spent time writing a\n> Problem and Solution section, and I have done quite a bit of upfront\n> exploration of the code.\n> \n> - Google Document with Commenting turned on\n> https://docs.google.com/document/d/1u6mVKEHfKtR80UrMLNYrp5D6cCSW1_arcTaZ9HcAKlw/edit?usp=sharing.\n\nThe stated goal is to support persistent connections with hundreds or thousands\nof different users.\n\nHowever, this doesn't attempt to address the similar issue if there's hundreds\nor thousands of DBs. Which I don't think could work at all, since switching to\na new DB requires connecting to a new backend process.\n\nYou proposed a PQ protocol version of SET ROLE/SET SESSION authorization.\nYou'd need to make sure that a client connected to the connection pooler cannot\nitself run the PQ \"set role\". 
The connection pooler would need to reject the\nrequest (or maybe ignore requests to switch to the same/current user). Maybe\nyou'd have two protocol requests \"begin switch user\" and \"end switch to user\",\nand then the server-side could enforce that \"begin switch\" is not nested.\nMaybe the \"begin switch\" could return some kind of \"nonce\" to the connection\npooler, and \"end switch\" would require the same nonce to be validated.\n\nIt'd be interesting to hear if you've tested with postgresql 14, which improves\nscalability to larger number of connections.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 20 Nov 2021 12:13:45 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Feature Proposal: Connection Pool Optimization - Change the\n Connection User" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sun, Nov 21, 2021 at 03:11:03AM +1100, Todd Hubers wrote:\n>> - Google Document with Commenting turned on\n>> https://docs.google.com/document/d/1u6mVKEHfKtR80UrMLNYrp5D6cCSW1_arcTaZ9HcAKlw/edit?usp=sharing.\n\n> You proposed a PQ protocol version of SET ROLE/SET SESSION authorization.\n> You'd need to make sure that a client connected to the connection pooler cannot\n> itself run the PQ \"set role\".\n\nIt's not really apparent to me how this mechanism wouldn't be a giant\nsecurity hole. In particular, I see no meaningful difference between\ngranting the proposed \"impersonate\" privilege and granting superuser.\nYou could restrict it to not allow impersonating superusers, which'd\nmake it a little better, but what of all the predefined roles we keep\ninventing that have privileges we don't want to be accessible to Joe\nUser? 
I think by the time you got to something where \"impersonate\"\nwasn't a security hole, it would be pretty much equivalent to SET ROLE,\nie you could only impersonate users you'd been specifically given the\nprivilege for.\n\nIt also seems like this is transferring the entire responsibility for\nuser authentication onto the middleware, with the server expected to\njust believe that the session should now be allowed to act as user X.\nAgain, that seems more like a security problem than a good design.\n\nAnother issue is that the proposed implementation seems pretty far\nshort of being an invisible substitute for actually logging in as\nthe target user. 
We are\nin any case trusting the pooler not to send commands from user A to\na session logged in as user B.\n\n\t\t\tregards, tom lane\n\nPS: I wonder how we test such a feature meaningfully without\nincorporating a pooler right into the Postgres tree. I don't\nwant to do that, for sure.\n\n\n", "msg_date": "Sat, 20 Nov 2021 16:16:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Feature Proposal: Connection Pool Optimization - Change the\n Connection User" }, { "msg_contents": "Hi Tom, Justin, and Andrey,\n\nThanks everybody for your feedback so far! I agree, there are a few\nunknowns for the design and impact and there are many details to iron out.\n\n*Benchmarking* - Overall I think it's best to explore improvements with\nbenchmarking. The key goal of this proposal pertains to performance:\n\"fast\". That means benchmarking is essential in the design phase, to ensure\nthere are measurable improvements, and reward for effort. This should also\nprimarily influence the selection of Solution Option (of which there are 6\nor 7 listed). *I have added the proposed benchmarks to the document.\n<https://docs.google.com/document/d/1u6mVKEHfKtR80UrMLNYrp5D6cCSW1_arcTaZ9HcAKlw/edit?usp=sharing>*\n\n*1) Andrey said*: \"And these expenses would be not necessary *if we could\njust send a new Startup message after the Terminate ('X') message*.\" -\n*Answer*: I have added the \"Terminate/Restartup\" *benchmark in the document\n*accordingly.\n\n*2) Justin said:* \"However, this doesn't attempt to address the similar\nissue *if there's hundreds or thousands of DBs*. Which I don't think could\nwork at all, since switching to a new DB requires connecting to a new\nbackend process.\"\n\nGood point. I do have some loose thinking around that. 
I do have some\ncomments throughout the proposal document to *tackle that in the future*.\nAs you point out, user switching seems to be a simpler well-trodden path.\nFor now, my proposal is that the middleware can assert a database change,\nbut that should create an error for now. In the future, it might be a\nsupported capability.\n\nI have also added database-switching as a consideration for the\n*benchmarking*.\n\n*3) Justin said*: \"You'd need to make sure that a client connected to the\nconnection pooler cannot itself run the PQ \"set role\". *The connection\npooler would need to reject the request *(or maybe ignore requests to\nswitch to the same/current user).\" - *Answer: Agreed*.\n\n*4) Justin said:* \"Maybe you'd have two protocol requests \"begin switch\nuser\" and \"end switch to user\", and then the server-side could enforce that\n\"begin switch\" is not nested. Maybe the \"begin switch\" could return some\nkind of \"nonce\" to the connection pooler, and \"end switch\" *would require\nthe same nonce* to be validated.\"\n\nAgreed, something like this should be suitable. For performance, *I would\nprefer to not use a Nonce* because that would not be a 0-RTT approach. The\nPQ level ImpersonateDatabaseUser is effectively the \"begin\". I don't think\nan \"end\" is necessary, because it is expected to switch the user very\noften. The next \"begin\" is effectively ending the last Impersonation. The\nmiddleware could switch back to its own username, but as per my draft\nproposal, that user should only be allowed (as a connection bound\nauthorisation for ImpersonateDatabaseUser) to impersonate and nothing else.\n\nAnd then there are yet 5 other solution options which have an explicit\nBegin/End pattern.\n\n*5) Justin said:* \"It'd be interesting to hear if you've tested with\npostgresql 14, which improves scalability to larger number of connections.\"\n\nAgreed. I will include a baseline in the benchmarks. 
Regardless of\nimprovements, it won't be infinite and there will still be a need to\n\"enable\" pooling where there are MAX_CONNECTIONS users using the system.\n\n*6) Tom said:* \"It's not really apparent to me how this mechanism wouldn't\nbe a giant security hole.\"\n\nI have added your important concerns to a Security Considerations section\nof the document:\n\n - \"What is the difference between granting the proposed \"impersonate\"\n privilege and granting superuser?\"\n - \"Is this merely transferring the entire responsibility for user\n authentication onto the middleware, with the server expected to just\n believe that the session should now be allowed to act as user X?\"\n\n*7) Tom said:* \"I think by the time you got to something where\n\"impersonate\" wasn't a security hole,* it would be pretty much equivalent\nto SET ROLE*\"\n\n*I don't think so*. This is already contemplated in the draft. See Solution\nOption 4. SET ROLE comes with RESET ROLE. The frontend client could call\nRESET ROLE then change to whatever role they like. That means that feature\nis not currently suitable to be used for the context of this proposal.\n\n*8) Tom said:* \"Another issue is that the proposed implementation seems\npretty far short of being an *invisible substitute for actually logging in*\nas the target user\"\n\n*The goal* of this proposal is performance and *to enable shared connection\npooling*. Direct logging in doesn't allow the reuse of existing connections\nin the pool.\n\n*9) Tom said:* \"For starters, what of *instantiating ALTER ROLE SET*\nparameter values that apply to the target user, and getting rid of such\nsettings that applied to the original user ID?\"\n\nI think you are suggesting to modify the already-logged-in user. That's\ninteresting. However, *many systems audit the logged in user by username* -\nwho they are. 
Furthermore, the modification of user privileges would not\nhelp to enable connection pooling as rehashed in my answer to [8].\n\n*10) Tom said:* \"How would this interact with the \"on login\" triggers that\npeople keep asking for?\n\nThat's a good point. I would imagine that SET ROLE (which is currently\nunsuitable) would have the same requirement. The answer is *Shared\nFunctions*. SET ROLE calls a function like \"*SetSessionUserId*\". Our\nimplementation should call the same function(s). If OnLogin functionality\nis implemented they should trigger from there.\n\n*11) Tom said:* \"Also, you'd still have to do DISCARD ALL (at least) when\nswitching users, or else integrate all that cache-flushing right into the\nswitching mechanism. So *I'm not entirely convinced about* how big\nthe *performance\nbenefits *are compared to a fresh session.\"\n\nAgreed, nor am I convinced. We should only be guided by benchmarks and\ntests, not subjective assumptions. *See the Benchmarking section* in the\ndocument for details.\n\n*12 Tom said:* \"...[Upon failure, no further commands will be processed\nuntil ImpersonateDatabaseUser succeeds.] *seems to require adding a huge\namount of complication on the server side*, and complication in the\nprotocol spec itself, to save a rather minimal amount of complication in\nthe middleware. Why can't we just say that a failed \"impersonate\" command\nleaves the session in the same state as before, and it's up to the pooler\nto do something about it? We are in any case trusting the pooler not to\nsend commands from user A to a session logged in as user B. We are in any\ncase trusting the pooler not to send commands from user A to a session\nlogged in as user B.\"\n\n*I think you are overstating the complexity*. It only requires a\nLastUserSwitchFailed boolean which is cleared to false when a UserSwitch\nsucceeds. If LastUserSwitchFailed is true, tcop ignores the messages and\nsends back errors. 
This detail has been added to the proposal document.\n\n*It's important that the implementation is objectively faster*. The 0-RTT\ndesign is proposed for efficiency. The Middleware might be able to fit BOTH\nthe UserSwitch AND the Query within a 1500 MTU. If not, it shouldn't wait\nfor a confirmation - for efficiency. The middleware might be on localhost,\nor it might be 1-5ms away on the LAN. Effectively, the UserSwitch is a sort\nof \"Header\" before a series of commands. Performance is the goal.\nTherefore, the connection cannot be left in the same state as before, or\nelse the pending Query will run in the incorrect context. This is a rare\nfailure mode, so failure is ideal.\n\nUltimately these are assumptions and *benchmarking results should drive\ndecisions* around the implementation of every aspect.\n\n*13. Tom said:* \"I wonder how we test such a feature meaningfully *without\nincorporating a pooler right into the Postgres tree*.\"\n\n*We can benchmark without a pooler* - see the Benchmark section for details\n*.* (Furthermore, I propose that general benchmark tooling does belong in\nPostgres for the benefit of the ecosystem of connection poolers. 
I have\nincluded such a remark in the Benchmarking section \"PostgreSQL is not\nplanning to incorporate Connection Pooling...\".)\n\nThanks again everyone for the tough questions and the ideas!\n\nRegards,\n\nTodd\n\nOn Sun, 21 Nov 2021 at 08:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Sun, Nov 21, 2021 at 03:11:03AM +1100, Todd Hubers wrote:\n> >> - Google Document with Commenting turned on\n> >>\n> https://docs.google.com/document/d/1u6mVKEHfKtR80UrMLNYrp5D6cCSW1_arcTaZ9HcAKlw/edit?usp=sharing\n> .\n>\n> > You proposed a PQ protocol version of SET ROLE/SET SESSION authorization.\n> > You'd need to make sure that a client connected to the connection pooler\n> cannot\n> > itself run the PQ \"set role\".\n>\n> It's not really apparent to me how this mechanism wouldn't be a giant\n> security hole. In particular, I see no meaningful difference between\n> granting the proposed \"impersonate\" privilege and granting superuser.\n> You could restrict it to not allow impersonating superusers, which'd\n> make it a little better, but what of all the predefined roles we keep\n> inventing that have privileges we don't want to be accessible to Joe\n> User? I think by the time you got to something where \"impersonate\"\n> wasn't a security hole, it would be pretty much equivalent to SET ROLE,\n> ie you could only impersonate users you'd been specifically given the\n> privlege for.\n>\n> It also seems like this is transferring the entire responsibility for\n> user authentication onto the middleware, with the server expected to\n> just believe that the session should now be allowed to act as user X.\n> Again, that seems more like a security problem than a good design.\n>\n> Another issue is that the proposed implementation seems pretty far\n> short of being an invisible substitute for actually logging in as\n> the target user. 
For starters, what of instantiating ALTER ROLE SET\n> parameter values that apply to the target user, and getting rid of\n> such settings that applied to the original user ID? How would this\n> interact with the \"on login\" triggers that people keep asking for?\n>\n> Also, you'd still have to do DISCARD ALL (at least) when switching\n> users, or else integrate all that cache-flushing right into the\n> switching mechanism. So I'm not entirely convinced about how big\n> the performance benefits are compared to a fresh session.\n>\n> One more point is that the proposed business about\n>\n> * ImpersonateDatabaseUser will either succeed silently (0-RTT), or\n> fail. Upon failure, no further commands will be processed until\n> ImpersonateDatabaseUser succeeds.\n>\n> seems to require adding a huge amount of complication on the server side,\n> and complication in the protocol spec itself, to save a rather minimal\n> amount of complication in the middleware. Why can't we just say that\n> a failed \"impersonate\" command leaves the session in the same state\n> as before, and it's up to the pooler to do something about it? We are\n> in any case trusting the pooler not to send commands from user A to\n> a session logged in as user B.\n>\n> regards, tom lane\n>\n> PS: I wonder how we test such a feature meaningfully without\n> incorporating a pooler right into the Postgres tree. I don't\n> want to do that, for sure.\n>\n\n\n-- \n--\nTodd Hubers", "msg_date": "Sun, 21 Nov 2021 13:05:16 +1100", "msg_from": "Todd Hubers <todd.hubers@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Feature Proposal: Connection Pool Optimization - Change the\n Connection User" }, { "msg_contents": "> On 21 Nov 2021, at 03:05, Todd Hubers <todd.hubers@gmail.com> wrote:\n\n> 10) Tom said: \"How would this interact with the \"on login\" triggers that people keep asking for?\n> \n> That's a good point. I would imagine that SET ROLE (which is currently unsuitable) would have the same requirement. The answer is Shared Functions. SET ROLE calls a function like \"SetSessionUserId\". Our implementation should call the same function(s). 
If OnLogin functionality is implemented they should trigger from there.\n\nI think that's conflating session_user and current_user, SET ROLE is not a login\nevent. This is by design and discussed in the documentation:\n\n \"SET ROLE does not process session variables as specified by the role's\n ALTER ROLE settings; this only happens during login.\"\n\nThe current patch proposal doesn't fire the login event trigger on SET ROLE,\nonly on actual logins. That patch needs more review before landing, but I'm\nnot sure tying it to SET ROLE is a good idea.\n\n> 13. Tom said: \"I wonder how we test such a feature meaningfully without incorporating a pooler right into the Postgres tree.\"\n> \n> We can benchmark without a pooler - see the Benchmark section for details. (Furthermore, I propose that general benchmark tooling does belong in Postgres for the benefit of the ecosystem of connection poolers. I have included such a remark in the Benchmarking section \"PostgreSQL is not planning to incorporate Connection Pooling...\".)\n\nWe might be talking about the same things using different words, but it's\nimportant to remember that we need to cover the functionality in terms of\n*tests* first, performance benchmarking is another concern.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 22 Nov 2021 11:22:05 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Feature Proposal: Connection Pool Optimization - Change the\n Connection User" }, { "msg_contents": "On Sat, 2021-11-20 at 16:16 -0500, Tom Lane wrote:\r\n> One more point is that the proposed business about \r\n> \r\n> * ImpersonateDatabaseUser will either succeed silently (0-RTT), or\r\n> fail. 
Upon failure, no further commands will be processed until\r\n> ImpersonateDatabaseUser succeeds.\r\n> \r\n> seems to require adding a huge amount of complication on the server side,\r\n> and complication in the protocol spec itself, to save a rather minimal\r\n> amount of complication in the middleware. Why can't we just say that\r\n> a failed \"impersonate\" command leaves the session in the same state\r\n> as before, and it's up to the pooler to do something about it? We are\r\n> in any case trusting the pooler not to send commands from user A to\r\n> a session logged in as user B.\r\n\r\nWhen combined with the 0-RTT goal, I think a silent ignore would just\r\ninvite more security problems. Todd is effectively proposing packet\r\npipelining, so the pipeline has to fail shut.\r\n\r\nA more modern approach might be to attach the authentication to the\r\npacket itself (e.g. cryptographically, with a MAC), if the goal is to\r\nenable per-statement authentication anyway. In theory that turns the\r\nmiddleware into a message passer instead of a confusable deputy. But it\r\nrequires more complicated setup between the client and server.\r\n\r\n> PS: I wonder how we test such a feature meaningfully without\r\n> incorporating a pooler right into the Postgres tree. I don't\r\n> want to do that, for sure.\r\n\r\nHaving protocol-level tests for bytes on the wire would not only help\r\nproposals like this but also get coverage for a huge number of edge\r\ncases. Magnus has added src/test/protocol for the server, written in\r\nPerl, in his PROXY proposal. And I've added a protocol suite for both\r\nthe client and server, written in Python/pytest, in my OAuth proof of\r\nconcept. 
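As a rough illustration of the per-statement MAC idea above, a minimal sketch (the shared-key provisioning, payload layout, and function names here are illustrative assumptions, not part of any proposal):

```python
import hashlib
import hmac

# Assumption: the pooler and the server share this key, provisioned out of band.
SHARED_KEY = b"pooler-server-shared-secret"

def sign_statement(username: str, query: str) -> bytes:
    # Bind the statement to the user it should run as; the NUL separator
    # keeps ("ab", "c") distinct from ("a", "bc").
    payload = username.encode() + b"\x00" + query.encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify_statement(username: str, query: str, mac: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign_statement(username, query), mac)
```

A real design would also need replay protection (e.g. a counter or nonce in the signed payload), which is part of the "more complicated setup" this approach requires.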
I think something is badly needed in this area.\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 23 Nov 2021 01:12:14 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Feature Proposal: Connection Pool Optimization - Change the\n Connection User" }, { "msg_contents": "Hi Jacob and Daniel,\n\nThanks for your feedback.\n\n>@Daniel - I think thats conflating session_user and current_user, SET ROLE\nis not a login event. This is by design and discussed in the documentation..\n\nAgreed, I am using those terms loosely. I have updated option 4 in the\nproposal document. I have crossed it out. Option 5 is more suitable \"SET\nSESSION AUTHORIZATION\" for further consideration.\n\n>@Daniel - but it's important to remember that we need to cover the\nfunctionality in terms of *tests* first, performance benchmarking is\nanother concern.\n\nFor implementation absolutely, but not for a basic feasibility prototype. A\nquick non-secure non-reliable prototype is probably an important first-step\nto confirming which options work best for the stated goals.\nImportantly, if the improvement is only 5% (whatever that might mean), then\nthe project is probably not worth starting. But I do expect that a benchmark\nwill prove benefits that justify the resources to build the feature(s).\n\n>@Jacob - A more modern approach might be to attach the authentication to\nthe packet itself (e.g. cryptographically, with a MAC), if the goal is to\nenable per-statement authentication anyway. In theory that turns the\nmiddleware into a message passer instead of a confusable deputy. But it\nrequires more complicated setup between the client and server.\n\nI did consider this, but I ruled it out. I have now added it to the\nproposal document, and included two Issues. 
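For context while reviewing, a rough sketch of how the draft proposal's protocol-message option could be framed on the wire, following the v3 protocol's type-byte-plus-length convention (the 'I' type byte and the null-terminated username body are assumptions drawn from the draft, not an existing message):

```python
import struct

def pg_message(msg_type: bytes, payload: bytes) -> bytes:
    # PostgreSQL v3 framing: one type byte, then a big-endian int32 length
    # that counts itself plus the payload, then the payload.
    return msg_type + struct.pack("!I", len(payload) + 4) + payload

def impersonate_message(username: str) -> bytes:
    # Hypothetical ImpersonateDatabaseUser message; 'I' is an assumed
    # type byte for illustration only.
    return pg_message(b"I", username.encode() + b"\x00")

# 0-RTT pipelining: the pooler could write the impersonation request and the
# SimpleQuery ('Q') in a single send, without waiting for a reply.
packet = impersonate_message("alice") + pg_message(b"Q", b"SELECT 1;\x00")
```

This is only a sketch of the framing; the server-side semantics (and the fail-shut behaviour on a rejected impersonation) are the substance of the proposal itself.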
Please review and let me know\nwhether I might be mistaken.\n\n>@Jacob - Having protocol-level tests for bytes on the wire would not only\nhelp proposals like this but also get coverage for a huge number of edge\ncases. Magnus has added src/test/protocol for the server, written in Perl,\nin his PROXY proposal. And I've added a protocol suite for both the client\nand server, written in Python/pytest, in my OAuth proof of concept. I think\nsomething is badly needed in this area.\n\nThanks for highlighting this emerging work. I have noted this in the\nproposal in the Next Steps section.\n\n--Todd\n\nNote: Here is the proposal document link again -\nhttps://docs.google.com/document/d/1u6mVKEHfKtR80UrMLNYrp5D6cCSW1_arcTaZ9HcAKlw/edit#\n\nOn Tue, 23 Nov 2021 at 12:12, Jacob Champion <pchampion@vmware.com> wrote:\n\n> On Sat, 2021-11-20 at 16:16 -0500, Tom Lane wrote:\n> > One more point is that the proposed business about\n> >\n> > * ImpersonateDatabaseUser will either succeed silently (0-RTT), or\n> > fail. Upon failure, no further commands will be processed until\n> > ImpersonateDatabaseUser succeeds.\n> >\n> > seems to require adding a huge amount of complication on the server side,\n> > and complication in the protocol spec itself, to save a rather minimal\n> > amount of complication in the middleware. Why can't we just say that\n> > a failed \"impersonate\" command leaves the session in the same state\n> > as before, and it's up to the pooler to do something about it? We are\n> > in any case trusting the pooler not to send commands from user A to\n> > a session logged in as user B.\n>\n> When combined with the 0-RTT goal, I think a silent ignore would just\n> invite more security problems. Todd is effectively proposing packet\n> pipelining, so the pipeline has to fail shut.\n>\n> A more modern approach might be to attach the authentication to the\n> packet itself (e.g. cryptographically, with a MAC), if the goal is to\n> enable per-statement authentication anyway. 
In theory that turns the\n> middleware into a message passer instead of a confusable deputy. But it\n> requires more complicated setup between the client and server.\n>\n> > PS: I wonder how we test such a feature meaningfully without\n> > incorporating a pooler right into the Postgres tree. I don't\n> > want to do that, for sure.\n>\n> Having protocol-level tests for bytes on the wire would not only help\n> proposals like this but also get coverage for a huge number of edge\n> cases. Magnus has added src/test/protocol for the server, written in\n> Perl, in his PROXY proposal. And I've added a protocol suite for both\n> the client and server, written in Python/pytest, in my OAuth proof of\n> concept. I think something is badly needed in this area.\n>\n> --Jacob\n>\n\n\n-- \n--\nTodd Hubers", "msg_date": "Wed, 24 Nov 2021 02:46:36 +1100", "msg_from": "Todd Hubers <todd.hubers@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Feature Proposal: Connection Pool Optimization - Change the\n Connection User" }, { "msg_contents": "Hi Everyone,\n\nI have started working on this:\n\n - Benchmarking - increasingly more comprehensive benchmarking\n - Prototyping - to simulate the change of users (toggling back and forth)\n - Draft Implementation - of OPTION-1 (New Protocol Message)\n - (Then: Working with Odyssey and PgBouncer to add support (when the\n GRANT role privilege is available))\n\nI hope to have a patch ready by the end of March.\n\nRegards,\n\nTodd\n\nOn Wed, 24 Nov 2021 at 02:46, Todd Hubers <todd.hubers@gmail.com> wrote:\n\n>\n> Hi Jacob and Daniel,\n>\n> Thanks for your feedback.\n>\n> >@Daniel - I think thats conflating session_user and current_user, SET\n> ROLE is not a login event. 
This is by design and discussed in the\n> documentation..\n>\n> Agreed, I am using those terms loosely. I have updated option 4 in the\n> proposal document. I have crossed it out. Option 5 is more suitable \"SET\n> SESSION AUTHORIZATION\" for further consideration.\n>\n> >@Daniel - but it's important to remember that we need to cover the\n> functionality in terms of *tests* first, performance benchmarking is\n> another concern.\n>\n> For implementation absolutely, but not for a basic feasibility prototype.\n> A quick non-secure non-reliable prototype is probably an important\n> first-step to confirming which options work best for the stated goals.\n> Importantly, if the improvement is only 5% (whatever that might mean), then\n> the project is probably not work starting. But I do expect that a benchmark\n> will prove benefits that justify the resources to build the feature(s).\n>\n> >@Jacob - A more modern approach might be to attach the authentication to\n> the packet itself (e.g. cryptographically, with a MAC), if the goal is to\n> enable per-statement authentication anyway. In theory that turns the\n> middleware into a message passer instead of a confusable deputy. But it\n> requires more complicated setup between the client and server.\n>\n> I did consider this, but I ruled it out. I have now added it to the\n> proposal document, and included two Issues. Please review and let me know\n> whether I might be mistaken.\n>\n> >@Jacob - Having protocol-level tests for bytes on the wire would not only\n> help proposals like this but also get coverage for a huge number of edge\n> cases. Magnus has added src/test/protocol for the server, written in Perl,\n> in his PROXY proposal. And I've added a protocol suite for both the client\n> and server, written in Python/pytest, in my OAuth proof of concept. I think\n> something is badly needed in this area.\n>\n> Thanks for highlighting this emerging work. 
I have noted this in the\n> proposal in the Next Steps section.\n>\n> --Todd\n>\n> Note: Here is the proposal document link again -\n> https://docs.google.com/document/d/1u6mVKEHfKtR80UrMLNYrp5D6cCSW1_arcTaZ9HcAKlw/edit#\n>\n> On Tue, 23 Nov 2021 at 12:12, Jacob Champion <pchampion@vmware.com> wrote:\n>\n>> On Sat, 2021-11-20 at 16:16 -0500, Tom Lane wrote:\n>> > One more point is that the proposed business about\n>> >\n>> > * ImpersonateDatabaseUser will either succeed silently (0-RTT), or\n>> > fail. Upon failure, no further commands will be processed until\n>> > ImpersonateDatabaseUser succeeds.\n>> >\n>> > seems to require adding a huge amount of complication on the server\n>> side,\n>> > and complication in the protocol spec itself, to save a rather minimal\n>> > amount of complication in the middleware. Why can't we just say that\n>> > a failed \"impersonate\" command leaves the session in the same state\n>> > as before, and it's up to the pooler to do something about it? We are\n>> > in any case trusting the pooler not to send commands from user A to\n>> > a session logged in as user B.\n>>\n>> When combined with the 0-RTT goal, I think a silent ignore would just\n>> invite more security problems. Todd is effectively proposing packet\n>> pipelining, so the pipeline has to fail shut.\n>>\n>> A more modern approach might be to attach the authentication to the\n>> packet itself (e.g. cryptographically, with a MAC), if the goal is to\n>> enable per-statement authentication anyway. In theory that turns the\n>> middleware into a message passer instead of a confusable deputy. But it\n>> requires more complicated setup between the client and server.\n>>\n>> > PS: I wonder how we test such a feature meaningfully without\n>> > incorporating a pooler right into the Postgres tree. 
I don't\n>> > want to do that, for sure.\n>>\n>> Having protocol-level tests for bytes on the wire would not only help\n>> proposals like this but also get coverage for a huge number of edge\n>> cases. Magnus has added src/test/protocol for the server, written in\n>> Perl, in his PROXY proposal. And I've added a protocol suite for both\n>> the client and server, written in Python/pytest, in my OAuth proof of\n>> concept. I think something is badly needed in this area.\n>>\n>> --Jacob\n>>\n>\n>\n> --\n> --\n> Todd Hubers\n>\n\n\n-- \n--\nTodd Hubers", "msg_date": "Fri, 7 Jan 2022 10:55:09 +1100", "msg_from": "Todd Hubers <todd.hubers@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Feature Proposal: Connection Pool Optimization - Change the\n Connection User" }, { "msg_contents": "Hi Everyone,\n\nBenchmarking work has commenced, and is ongoing.\n\n - *OPTIONS 5/6/7* - `SET SESSION AUTHORIZATION` takes double the time of\n a single separate SimpleQuery. 
This is to be expected, because double the\n amount of SimpleQuery messages are being sent, and that requires a full\n SimpleQuery/Result/Ready cycle. If there is significant latency between a\n Connection Pooler and the database, this delay is amplified. It would be\n possible to concatenate text into a single SimpleQuery. In the real world,\n the performance impact MAY be negligible.\n - *OPTION 0* - The time to reconnect (start a new connection from\n scratch with a different username/password) was found to be faster than\n using `SET SESSION AUTHORIZATION`.\n - *OPTION 1* - My team is continuing to explore a distinct Impersonate\n message (Option-1). We are completing a prototype-quality implementation,\n and then benchmarking it. Given that Option-1 is asynchronous (Request and\n expect to succeed) and it can even be included within the same TCP packet\n as the SimpleQuery (at times), we expect the performance will be better\n than restarting a connection, and not impacted by links of higher latency.\n\nI will be recording benchmark results in the document:\nhttps://docs.google.com/document/d/1u6mVKEHfKtR80UrMLNYrp5D6cCSW1_arcTaZ9HcAKlw/edit#\nafter completion of the OPTION-1 prototype and benchmarking of that\nprototype.\n\nNote: In order to accommodate something like OPTION-8, an Impersonation\nmessage might have a flag (valid for 1x SimpleQuery only, then\nautomatically restore back to the last user).\n\nRegards,\n\nTodd\n\n\nOn Fri, 7 Jan 2022 at 10:55, Todd Hubers <todd.hubers@gmail.com> wrote:\n\n> Hi Everyone,\n>\n> I have started working on this:\n>\n> - Benchmarking - increasingly more comprehensive benchmarking\n> - Prototyping - to simulate the change of users (toggling back and\n> forth)\n> - Draft Implementation - of OPTION-1 (New Protocol Message)\n> - (Then: Working with Odyssey and PgBouncer to add support (when the\n> GRANT role privilege is available))\n>\n> I hope to have a patch ready by the end of March.\n>\n> Regards,\n>\n> Todd\n>\n> On 
Wed, 24 Nov 2021 at 02:46, Todd Hubers <todd.hubers@gmail.com> wrote:\n>\n>>\n>> Hi Jacob and Daniel,\n>>\n>> Thanks for your feedback.\n>>\n>> >@Daniel - I think that's conflating session_user and current_user, SET\n>> ROLE is not a login event. This is by design and discussed in the\n>> documentation.\n>>\n>> Agreed, I am using those terms loosely. I have updated option 4 in the\n>> proposal document. I have crossed it out. Option 5 is more suitable \"SET\n>> SESSION AUTHORIZATION\" for further consideration.\n>>\n>> >@Daniel - but it's important to remember that we need to cover the\n>> functionality in terms of *tests* first, performance benchmarking is\n>> another concern.\n>>\n>> For implementation absolutely, but not for a basic feasibility prototype.\n>> A quick non-secure non-reliable prototype is probably an important\n>> first-step to confirming which options work best for the stated goals.\n>> Importantly, if the improvement is only 5% (whatever that might mean), then\n>> the project is probably not worth starting. But I do expect that a benchmark\n>> will prove benefits that justify the resources to build the feature(s).\n>>\n>> >@Jacob - A more modern approach might be to attach the authentication to\n>> the packet itself (e.g. cryptographically, with a MAC), if the goal is to\n>> enable per-statement authentication anyway. In theory that turns the\n>> middleware into a message passer instead of a confusable deputy. But it\n>> requires more complicated setup between the client and server.\n>>\n>> I did consider this, but I ruled it out. I have now added it to the\n>> proposal document, and included two Issues. Please review and let me know\n>> whether I might be mistaken.\n>>\n>> >@Jacob - Having protocol-level tests for bytes on the wire would not\n>> only help proposals like this but also get coverage for a huge number of\n>> edge cases. Magnus has added src/test/protocol for the server, written in\n>> Perl, in his PROXY proposal. 
And I've added a protocol suite for both the\n>> client and server, written in Python/pytest, in my OAuth proof of concept.\n>> I think something is badly needed in this area.\n>>\n>> Thanks for highlighting this emerging work. I have noted this in the\n>> proposal in the Next Steps section.\n>>\n>> --Todd\n>>\n>> Note: Here is the proposal document link again -\n>> https://docs.google.com/document/d/1u6mVKEHfKtR80UrMLNYrp5D6cCSW1_arcTaZ9HcAKlw/edit#\n>>\n>> On Tue, 23 Nov 2021 at 12:12, Jacob Champion <pchampion@vmware.com>\n>> wrote:\n>>\n>>> On Sat, 2021-11-20 at 16:16 -0500, Tom Lane wrote:\n>>> > One more point is that the proposed business about\n>>> >\n>>> > * ImpersonateDatabaseUser will either succeed silently (0-RTT), or\n>>> > fail. Upon failure, no further commands will be processed until\n>>> > ImpersonateDatabaseUser succeeds.\n>>> >\n>>> > seems to require adding a huge amount of complication on the server\n>>> side,\n>>> > and complication in the protocol spec itself, to save a rather minimal\n>>> > amount of complication in the middleware. Why can't we just say that\n>>> > a failed \"impersonate\" command leaves the session in the same state\n>>> > as before, and it's up to the pooler to do something about it? We are\n>>> > in any case trusting the pooler not to send commands from user A to\n>>> > a session logged in as user B.\n>>>\n>>> When combined with the 0-RTT goal, I think a silent ignore would just\n>>> invite more security problems. Todd is effectively proposing packet\n>>> pipelining, so the pipeline has to fail shut.\n>>>\n>>> A more modern approach might be to attach the authentication to the\n>>> packet itself (e.g. cryptographically, with a MAC), if the goal is to\n>>> enable per-statement authentication anyway. In theory that turns the\n>>> middleware into a message passer instead of a confusable deputy. 
But it\n>>> requires more complicated setup between the client and server.\n>>>\n>>> > PS: I wonder how we test such a feature meaningfully without\n>>> > incorporating a pooler right into the Postgres tree. I don't\n>>> > want to do that, for sure.\n>>>\n>>> Having protocol-level tests for bytes on the wire would not only help\n>>> proposals like this but also get coverage for a huge number of edge\n>>> cases. Magnus has added src/test/protocol for the server, written in\n>>> Perl, in his PROXY proposal. And I've added a protocol suite for both\n>>> the client and server, written in Python/pytest, in my OAuth proof of\n>>> concept. I think something is badly needed in this area.\n>>>\n>>> --Jacob\n>>>\n>>\n>>\n>> --\n>> --\n>> Todd Hubers\n>>\n>\n>\n> --\n> --\n> Todd Hubers\n>\n\n\n-- \n--\nTodd Hubers", "msg_date": "Wed, 2 Feb 2022 10:56:02 +1100", "msg_from": "Todd Hubers <todd.hubers@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Feature Proposal: Connection Pool Optimization - Change the\n Connection User" }, { "msg_contents": "Hi Everyone,\n\nHere is a progress update. 
I have an established team of two full-time systems\nprogrammers who have been working on this area for a couple of months now.\n\n - Past\n - *Impersonation* - a prototype has been completed for Option-1\n \"Impersonation\"\n - *Benchmarking* - has been completed on a range of options,\n including Option-0 \"Reconnection\".\n - Both Impersonation and Benchmarking are currently on the\n back burner\n - Current: *Notification Concentrator* - This is not PostgreSQL codebase\n work. This project makes NOTIFY/LISTEN work in Odyssey (and others) while\n in Transaction mode. (Until now, NOTIFY/LISTEN can only work in SESSION\n mode). We intend to also build patches for PgBouncer, and other popular\n Connection Pool systems.\n - It works in Session mode, but my organisation needs it to work in\n Transaction mode. It works by intercepting LISTEN/UNLISTEN in SQL and\n redirecting them to a single shared connection. There will be a Pub/Sub\n system within Odyssey. The LISTEN/UNLISTEN is only sent for the first\n subscriber or last unsubscriber accordingly. The NOTIFICATION\nmessages are\n then dispatched to the Subscriber list. 
At most only one SESSION\nconnection\n is required.\n - Next:\n - *Update Benchmarking:* I then expect to update Benchmarks with a range\n of prototype solutions, with both Impersonation and Notification\n Concentrator for final review.\n - *Publishing Benchmarking*: I will send our results here, and offer\n a patch for such benchmarking code.\n - *Final Implementation:* The team will finalise code for\n production-grade implementations, and tests\n - *Patches:* Then my team will submit a patch for PostgreSQL, Odyssey\n and others; working to polish anything else that might be required of us.\n\nTodd\n\nOn Wed, 2 Feb 2022 at 10:56, Todd Hubers <todd.hubers@gmail.com> wrote:\n\n> Hi Everyone,\n>\n> Benchmarking work has commenced, and is ongoing.\n>\n> - *OPTIONS 5/6/7* - `SET SESSION AUTHORIZATION` takes double the time\n> of a single separate SimpleQuery. This is to be expected, because double\n> the amount of SimpleQuery messages are being sent, and that requires a full\n> SimpleQuery/Result/Ready cycle. If there is significant latency between a\n> Connection Pooler and the database, this delay is amplified. It would be\n> possible to concatenate text into a single SimpleQuery. In the real world,\n> the performance impact MAY be negligible.\n> - *OPTION 0* - The time to reconnect (start a new connection from\n> scratch with a different username/password) was found to be faster than\n> using `SET SESSION AUTHORIZATION`.\n> - *OPTION 1* - My team is continuing to explore a distinct Impersonate\n> message (Option-1). We are completing a prototype-quality implementation,\n> and then benchmarking it. 
Given that Option-1 is asynchronous (Request and\n> expect to succeed) and it can even be included within the same TCP packet\n> as the SimpleQuery (at times), we expect the performance will be better\n> than restarting a connection, and not impacted by links of higher latency.\n>\n> I will be recording benchmark results in the document:\n> https://docs.google.com/document/d/1u6mVKEHfKtR80UrMLNYrp5D6cCSW1_arcTaZ9HcAKlw/edit#\n> after completion of the OPTION-1 prototype and benchmarking of that\n> prototype.\n>\n> Note: In order to accommodate something like OPTION-8, an Impersonation\n> message might have a flag (valid for 1x SimpleQuery only, then\n> automatically restore back to the last user).\n>\n> Regards,\n>\n> Todd\n>\n>\n> On Fri, 7 Jan 2022 at 10:55, Todd Hubers <todd.hubers@gmail.com> wrote:\n>\n>> Hi Everyone,\n>>\n>> I have started working on this:\n>>\n>> - Benchmarking - increasingly more comprehensive benchmarking\n>> - Prototyping - to simulate the change of users (toggling back and\n>> forth)\n>> - Draft Implementation - of OPTION-1 (New Protocol Message)\n>> - (Then: Working with Odyssey and PgBouncer to add support (when the\n>> GRANT role privilege is available))\n>>\n>> I hope to have a patch ready by the end of March.\n>>\n>> Regards,\n>>\n>> Todd\n>>\n>> On Wed, 24 Nov 2021 at 02:46, Todd Hubers <todd.hubers@gmail.com> wrote:\n>>\n>>>\n>>> Hi Jacob and Daniel,\n>>>\n>>> Thanks for your feedback.\n>>>\n>>> >@Daniel - I think thats conflating session_user and current_user, SET\n>>> ROLE is not a login event. This is by design and discussed in the\n>>> documentation..\n>>>\n>>> Agreed, I am using those terms loosely. I have updated option 4 in the\n>>> proposal document. I have crossed it out. 
Option 5 is more suitable \"SET\n>>> SESSION AUTHORIZATION\" for further consideration.\n>>>\n>>> >@Daniel - but it's important to remember that we need to cover the\n>>> functionality in terms of *tests* first, performance benchmarking is\n>>> another concern.\n>>>\n>>> For implementation absolutely, but not for a basic feasibility\n>>> prototype. A quick non-secure non-reliable prototype is probably an\n>>> important first-step to confirming which options work best for the stated\n>>> goals. Importantly, if the improvement is only 5% (whatever that might\n>>> mean), then the project is probably not work starting. But I do expect that\n>>> a benchmark will prove benefits that justify the resources to build the\n>>> feature(s).\n>>>\n>>> >@Jacob - A more modern approach might be to attach the authentication\n>>> to the packet itself (e.g. cryptographically, with a MAC), if the goal is\n>>> to enable per-statement authentication anyway. In theory that turns the\n>>> middleware into a message passer instead of a confusable deputy. But it\n>>> requires more complicated setup between the client and server.\n>>>\n>>> I did consider this, but I ruled it out. I have now added it to the\n>>> proposal document, and included two Issues. Please review and let me know\n>>> whether I might be mistaken.\n>>>\n>>> >@Jacob - Having protocol-level tests for bytes on the wire would not\n>>> only help proposals like this but also get coverage for a huge number of\n>>> edge cases. Magnus has added src/test/protocol for the server, written in\n>>> Perl, in his PROXY proposal. And I've added a protocol suite for both the\n>>> client and server, written in Python/pytest, in my OAuth proof of concept.\n>>> I think something is badly needed in this area.\n>>>\n>>> Thanks for highlighting this emerging work. 
I have noted this in the\n>>> proposal in the Next Steps section.\n>>>\n>>> --Todd\n>>>\n>>> Note: Here is the proposal document link again -\n>>> https://docs.google.com/document/d/1u6mVKEHfKtR80UrMLNYrp5D6cCSW1_arcTaZ9HcAKlw/edit#\n>>>\n>>> On Tue, 23 Nov 2021 at 12:12, Jacob Champion <pchampion@vmware.com>\n>>> wrote:\n>>>\n>>>> On Sat, 2021-11-20 at 16:16 -0500, Tom Lane wrote:\n>>>> > One more point is that the proposed business about\n>>>> >\n>>>> > * ImpersonateDatabaseUser will either succeed silently (0-RTT), or\n>>>> > fail. Upon failure, no further commands will be processed until\n>>>> > ImpersonateDatabaseUser succeeds.\n>>>> >\n>>>> > seems to require adding a huge amount of complication on the server\n>>>> side,\n>>>> > and complication in the protocol spec itself, to save a rather minimal\n>>>> > amount of complication in the middleware. Why can't we just say that\n>>>> > a failed \"impersonate\" command leaves the session in the same state\n>>>> > as before, and it's up to the pooler to do something about it? We are\n>>>> > in any case trusting the pooler not to send commands from user A to\n>>>> > a session logged in as user B.\n>>>>\n>>>> When combined with the 0-RTT goal, I think a silent ignore would just\n>>>> invite more security problems. Todd is effectively proposing packet\n>>>> pipelining, so the pipeline has to fail shut.\n>>>>\n>>>> A more modern approach might be to attach the authentication to the\n>>>> packet itself (e.g. cryptographically, with a MAC), if the goal is to\n>>>> enable per-statement authentication anyway. In theory that turns the\n>>>> middleware into a message passer instead of a confusable deputy. But it\n>>>> requires more complicated setup between the client and server.\n>>>>\n>>>> > PS: I wonder how we test such a feature meaningfully without\n>>>> > incorporating a pooler right into the Postgres tree. 
I don't\n>>>> > want to do that, for sure.\n>>>>\n>>>> Having protocol-level tests for bytes on the wire would not only help\n>>>> proposals like this but also get coverage for a huge number of edge\n>>>> cases. Magnus has added src/test/protocol for the server, written in\n>>>> Perl, in his PROXY proposal. And I've added a protocol suite for both\n>>>> the client and server, written in Python/pytest, in my OAuth proof of\n>>>> concept. I think something is badly needed in this area.\n>>>>\n>>>> --Jacob\n>>>>\n>>>\n>>>\n>>> --\n>>> --\n>>> Todd Hubers\n>>>\n>>\n>>\n>> --\n>> --\n>> Todd Hubers\n>>\n>\n>\n> --\n> --\n> Todd Hubers\n>\n\n\n-- \n--\nTodd Hubers", "msg_date": "Thu, 23 Jun 2022 12:21:56 +1000", "msg_from": "Todd Hubers <todd.hubers@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Feature Proposal: Connection Pool Optimization - Change the\n Connection User" } ]
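A note on the round-trip discussion in the thread above: the benchmarking post observed that `SET SESSION AUTHORIZATION` doubles the SimpleQuery/Result/Ready cycles, and suggested concatenating the text into a single SimpleQuery. A minimal sketch of what that looks like at the v3 wire-protocol level (pure byte framing, no live server; `app_user` is a placeholder role name, not one from the thread):

```python
import struct

def simple_query(sql: str) -> bytes:
    """Frame a v3 frontend/backend SimpleQuery ('Q') message:
    one type byte, an Int32 length (which counts itself but not the
    type byte), then the query text as a NUL-terminated string."""
    payload = sql.encode("utf-8") + b"\x00"
    return b"Q" + struct.pack("!I", 4 + len(payload)) + payload

# Two separate messages: two full Query/Result/ReadyForQuery cycles.
two_trips = [
    simple_query("SET SESSION AUTHORIZATION app_user"),
    simple_query("SELECT 1"),
]

# Concatenated into one message: a single round trip, and slightly
# fewer bytes on the wire, but note the caveat below.
one_trip = simple_query("SET SESSION AUTHORIZATION app_user; SELECT 1")
```

One trade-off the concatenation approach carries: multiple statements sent in a single simple-Query message execute inside one implicit transaction, which changes error and visibility semantics compared with two separate cycles.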
[ { "msg_contents": "Respected madam/sir,\n\nI am Raunak Parmar and I am pursuing Btech in computer science and engineering with AIML. I have just entered my first year of engineering and I am new to open source but I have sound knowledge in C++ and python, so I would like to contribute to your organization. Please guide me through and how to get started.\n\nHoping to hear from you soon.\n\nContact- raunakparmar1@gmail.com, 6295522657\n\nRegards,\nRaunak.\n\nSent from Mail for Windows \n", "msg_date": "Sun, 21 Nov 2021 11:57:19 +0530", "msg_from": "raunak parmar <raunakparmar1@gmail.com>", "msg_from_op": true, "msg_subject": "How to get started with contribution for GSoc 2022" }, { "msg_contents": "On Sun, Nov 28, 2021 at 11:37 PM raunak parmar <raunakparmar1@gmail.com> wrote:\n>\n> Respected madam/sir\n>\n> I am Raunak Parmar and I am pursuing Btech in computer science and engineering with AIML. I have just entered my first year of engineering and I am new to open source but I have sound knowledge in C++ and python, so I would like to contribute to your organization. Please guide me through and how to get started.\n>\n> Hoping to hear from you soon.\n\nThanks for your interest in contributing to postgres. In general,\nthere are many ways one can contribute by reviewing the proposed\npatches (C code) and features, documentation and tests improvements,\nperformance testing of features, new feature/ideas proposals and so\non. Just for reference, refer [1]. One can pick up the patches from\nthe in-progress or open commitfest at\nhttps://commitfest.postgresql.org/.\n\nRelated to GSoc 2022, some other hackers may have better ideas.\n\n[1] - https://blog.timescale.com/blog/how-and-why-to-become-a-postgresql-contributor/\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 29 Nov 2021 09:20:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to get started with contribution for GSoc 2022" } ]
[ { "msg_contents": "Attached WIP patch series significantly simplifies the definition of\nscanned_pages inside vacuumlazy.c. Apart from making several very\ntricky things a lot simpler, and moving more complex code outside of\nthe big \"blkno\" loop inside lazy_scan_heap (building on the Postgres\n14 work), this refactoring directly facilitates 2 new optimizations\n(also in the patch):\n\n1. We now collect LP_DEAD items into the dead_tuples array for all\nscanned pages -- even when we cannot get a cleanup lock.\n\n2. We now don't give up on advancing relfrozenxid during a\nnon-aggressive VACUUM when we happen to be unable to get a cleanup\nlock on a heap page.\n\nBoth optimizations are much more natural with the refactoring in\nplace. Especially #2, which can be thought of as making aggressive and\nnon-aggressive VACUUM behave similarly. Sure, we shouldn't wait for a\ncleanup lock in a non-aggressive VACUUM (by definition) -- and we\nstill don't in the patch (obviously). But why wouldn't we at least\n*check* if the page has tuples that need to be frozen in order for us\nto advance relfrozenxid? Why give up on advancing relfrozenxid in a\nnon-aggressive VACUUM when there's no good reason to?\n\nSee the draft commit messages from the patch series for many more\ndetails on the simplifications I am proposing.\n\nI'm not sure how much value the second optimization has on its own.\nBut I am sure that the general idea of teaching non-aggressive VACUUM\nto be conscious of the value of advancing relfrozenxid is a good one\n-- and so #2 is a good start on that work, at least. I've discussed\nthis idea with Andres (CC'd) a few times before now. Maybe we'll need\nanother patch that makes VACUUM avoid setting heap pages to\nall-visible without also setting them to all-frozen (and freezing as\nnecessary) in order to really get a benefit. 
Since, of course, a\nnon-aggressive VACUUM still won't be able to advance relfrozenxid when\nit skipped over all-visible pages that are not also known to be\nall-frozen.\n\nMasahiko (CC'd) has expressed interest in working on opportunistic\nfreezing. This refactoring patch seems related to that general area,\ntoo. At a high level, to me, this seems like the tuple freezing\nequivalent of the Postgres 14 work on bypassing index vacuuming when\nthere are very few LP_DEAD items (interpret that as 0 LP_DEAD items,\nwhich is close to the truth anyway). There are probably quite a few\ninteresting opportunities to make VACUUM better by not having such a\nsharp distinction between aggressive and non-aggressive VACUUM. Why\nshould they be so different? A good medium term goal might be to\ncompletely eliminate aggressive VACUUMs.\n\nI have heard many stories about anti-wraparound/aggressive VACUUMs\nwhere the cure (which suddenly made autovacuum workers\nnon-cancellable) was worse than the disease (not actually much danger\nof wraparound failure). For example:\n\nhttps://www.joyent.com/blog/manta-postmortem-7-27-2015\n\nYes, this problem report is from 2015, which is before we even had the\nfreeze map stuff. I still think that the point about aggressive\nVACUUMs blocking DDL (leading to chaos) remains valid.\n\nThere is another interesting area of future optimization within\nVACUUM, that also seems relevant to this patch: the general idea of\n*avoiding* pruning during VACUUM, when it just doesn't make sense to\ndo so -- better to avoid dirtying the page for now. Needlessly pruning\ninside lazy_scan_prune is hardly rare -- standard pgbench (maybe only\nwith heap fill factor reduced to 95) will have autovacuums that\n*constantly* do it (granted, it may not matter so much there because\nVACUUM is unlikely to re-dirty the page anyway). 
This patch seems\nrelevant to that area because it recognizes that pruning during VACUUM\nis not necessarily special -- a new function called lazy_scan_noprune\nmay be used instead of lazy_scan_prune (though only when a cleanup\nlock cannot be acquired). These pages are nevertheless considered\nfully processed by VACUUM (this is perhaps 99% true, so it seems\nreasonable to round up to 100% true).\n\nI find it easy to imagine generalizing the same basic idea --\nrecognizing more ways in which pruning by VACUUM isn't necessarily\nbetter than opportunistic pruning, at the level of each heap page. Of\ncourse we *need* to prune sometimes (e.g., might be necessary to do so\nto set the page all-visible in the visibility map), but why bother\nwhen we don't, and when there is no reason to think that it'll help\nanyway? Something to think about, at least.\n\n-- \nPeter Geoghegan", "msg_date": "Sun, 21 Nov 2021 18:13:51 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Removing more vacuumlazy.c special cases, relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2021-11-21 18:13:51 -0800, Peter Geoghegan wrote:\n> I have heard many stories about anti-wraparound/aggressive VACUUMs\n> where the cure (which suddenly made autovacuum workers\n> non-cancellable) was worse than the disease (not actually much danger\n> of wraparound failure). For example:\n> \n> https://www.joyent.com/blog/manta-postmortem-7-27-2015\n> \n> Yes, this problem report is from 2015, which is before we even had the\n> freeze map stuff. 
I still think that the point about aggressive\n> VACUUMs blocking DDL (leading to chaos) remains valid.\n\nAs I noted below, I think this is a bit of a separate issue from what your\nchanges address in this patch.\n\n\n> There is another interesting area of future optimization within\n> VACUUM, that also seems relevant to this patch: the general idea of\n> *avoiding* pruning during VACUUM, when it just doesn't make sense to\n> do so -- better to avoid dirtying the page for now. Needlessly pruning\n> inside lazy_scan_prune is hardly rare -- standard pgbench (maybe only\n> with heap fill factor reduced to 95) will have autovacuums that\n> *constantly* do it (granted, it may not matter so much there because\n> VACUUM is unlikely to re-dirty the page anyway).\n\nHm. I'm a bit doubtful that there's all that many cases where it's worth not\npruning during vacuum. However, it seems much more common for opportunistic\npruning during non-write accesses.\n\nPerhaps checking whether we'd log an FPW would be a better criterion for\ndeciding whether to prune or not compared to whether we're dirtying the page?\nIME the WAL volume impact of FPWs is a considerably bigger deal than\nunnecessarily dirtying a page that has previously been dirtied in the same\ncheckpoint \"cycle\".\n\n\n> This patch seems relevant to that area because it recognizes that pruning\n> during VACUUM is not necessarily special -- a new function called\n> lazy_scan_noprune may be used instead of lazy_scan_prune (though only when a\n> cleanup lock cannot be acquired). These pages are nevertheless considered\n> fully processed by VACUUM (this is perhaps 99% true, so it seems reasonable\n> to round up to 100% true).\n\nIDK, the potential of not having usable space on an overly fragmented page\ndoesn't seem that low.
We can't just mark such pages as all-visible because\nthen we'll potentially never reclaim that space.\n\n\n\n\n> Since any VACUUM (not just an aggressive VACUUM) can sometimes advance\n> relfrozenxid, we now make non-aggressive VACUUMs work just a little\n> harder in order to make that desirable outcome more likely in practice.\n> Aggressive VACUUMs have long checked contended pages with only a shared\n> lock, to avoid needlessly waiting on a cleanup lock (in the common case\n> where the contended page has no tuples that need to be frozen anyway).\n> We still don't make non-aggressive VACUUMs wait for a cleanup lock, of\n> course -- if we did that they'd no longer be non-aggressive.\n\nIMO the big difference between aggressive / non-aggressive isn't whether we\nwait for a cleanup lock, but that we don't skip all-visible pages...\n\n\n> But we now make the non-aggressive case notice that a failure to acquire a\n> cleanup lock on one particular heap page does not in itself make it unsafe\n> to advance relfrozenxid for the whole relation (which is what we usually see\n> in the aggressive case already).\n>\n> This new relfrozenxid optimization might not be all that valuable on its\n> own, but it may still facilitate future work that makes non-aggressive\n> VACUUMs more conscious of the benefit of advancing relfrozenxid sooner\n> rather than later. In general it would be useful for non-aggressive\n> VACUUMs to be \"more aggressive\" opportunistically (e.g., by waiting for\n> a cleanup lock once or twice if needed).\n\nWhat do you mean by \"waiting once or twice\"? A single wait may simply never\nend on a busy page that's constantly pinned by a lot of backends...\n\n\n> It would also be generally useful if aggressive VACUUMs were \"less\n> aggressive\" opportunistically (e.g. by being responsive to query\n> cancellations when the risk of wraparound failure is still very low).\n\nBeing cancelable is already a different concept than anti-wraparound\nvacuums.
We start aggressive autovacuums at vacuum_freeze_table_age, but\nanti-wrap only at autovacuum_freeze_max_age. The problem is that the\nautovacuum scheduling is way too naive for that to be a significant benefit -\nnothing tries to schedule autovacuums so that they have a chance to complete\nbefore anti-wrap autovacuums kick in. All that vacuum_freeze_table_age does is\nto promote an otherwise-scheduled (auto-)vacuum to an aggressive vacuum.\n\nThis is one of the most embarrassing issues around the whole anti-wrap\ntopic. We kind of define it as an emergency that there's an anti-wraparound\nvacuum. But we have *absolutely no mechanism* to prevent them from occurring.\n\n\n> We now also collect LP_DEAD items in the dead_tuples array in the case\n> where we cannot immediately get a cleanup lock on the buffer. We cannot\n> prune without a cleanup lock, but opportunistic pruning may well have\n> left some LP_DEAD items behind in the past -- no reason to miss those.\n\nThis has become *much* more important with the changes around deciding when to\nindex vacuum. It's not just that opportunistic pruning could have left LP_DEAD\nitems, it's that a previous vacuum is quite likely to have left them there,\nbecause the previous vacuum decided not to perform index cleanup.\n\n\n> Only VACUUM can mark these LP_DEAD items LP_UNUSED (no opportunistic\n> technique is independently capable of cleaning up line pointer bloat),\n\nOne thing we could do around this, btw, would be to aggressively replace\nLP_REDIRECT items with their target item. We can't do that in all situations\n(somebody might be following a ctid chain), but I think we have all the\ninformation needed to do so.
Probably would require a new HTSV RECENTLY_LIVE\nstate or something like that.\n\nI think that'd be quite a win - we right now often \"migrate\" to other pages\nfor modifications not because we're out of space on a page, but because we run\nout of itemids (for debatable reasons MaxHeapTuplesPerPage constrains the\nnumber of line pointers, not just the number of actual tuples). Effectively\ndoubling the number of available line items in a number of\nrealistic / common scenarios would be quite the win.\n\n\n> Note that we no longer report on \"pin skipped pages\" in VACUUM VERBOSE,\n> since there is barely any real practical sense in which we actually\n> miss doing useful work for these pages. Besides, this information\n> always seemed to have little practical value, even to Postgres hackers.\n\n-0.5. I think it provides some value, and I don't see why the removal of the\ninformation should be tied to this change. It's hard to diagnose why some dead\ntuples aren't cleaned up - a common cause for that on smaller tables is that\nnearly all pages are pinned nearly all the time.\n\n\nI wonder if we could have a more restrained version of heap_page_prune() that\ndoesn't require a cleanup lock? Obviously we couldn't defragment the page, but\nit's not immediately obvious that we need it if we constrain ourselves to only\nmodify tuple versions that cannot be visible to anybody.\n\nRandom note: I really dislike that we talk about cleanup locks in some parts\nof the code, and super-exclusive locks in others :(.\n\n\n> +\t/*\n> +\t * Aggressive VACUUM (which is the same thing as anti-wraparound\n> +\t * autovacuum for most practical purposes) exists so that we'll reliably\n> +\t * advance relfrozenxid and relminmxid sooner or later. But we can often\n> +\t * opportunistically advance them even in a non-aggressive VACUUM.\n> +\t * Consider if that's possible now.\n\nI don't agree with the \"most practical purposes\" bit.
There's a huge\ndifference because manual VACUUMs end up aggressive but not anti-wrap once\nolder than vacuum_freeze_table_age.\n\n\n> +\t * NB: We must use orig_rel_pages, not vacrel->rel_pages, since we want\n> +\t * the rel_pages used by lazy_scan_prune, from before a possible relation\n> +\t * truncation took place. (vacrel->rel_pages is now new_rel_pages.)\n> +\t */\n\nI think it should be doable to add an isolation test for this path. There have\nbeen quite a few bugs around the wider topic...\n\n\n> +\tif (vacrel->scanned_pages + vacrel->frozenskipped_pages < orig_rel_pages ||\n> +\t\t!vacrel->freeze_cutoffs_valid)\n> +\t{\n> +\t\t/* Cannot advance relfrozenxid/relminmxid -- just update pg_class */\n> +\t\tAssert(!aggressive);\n> +\t\tvac_update_relstats(rel, new_rel_pages, new_live_tuples,\n> +\t\t\t\t\t\t\tnew_rel_allvisible, vacrel->nindexes > 0,\n> +\t\t\t\t\t\t\tInvalidTransactionId, InvalidMultiXactId, false);\n> +\t}\n> +\telse\n> +\t{\n> +\t\t/* Can safely advance relfrozen and relminmxid, too */\n> +\t\tAssert(vacrel->scanned_pages + vacrel->frozenskipped_pages ==\n> +\t\t\t orig_rel_pages);\n> +\t\tvac_update_relstats(rel, new_rel_pages, new_live_tuples,\n> +\t\t\t\t\t\t\tnew_rel_allvisible, vacrel->nindexes > 0,\n> +\t\t\t\t\t\t\tFreezeLimit, MultiXactCutoff, false);\n> +\t}\n\nI wonder if this whole logic wouldn't become easier and less fragile if we\njust went for maintaining the \"actually observed\" horizon while scanning the\nrelation. If we skip a page via VM set the horizon to invalid. Otherwise we\ncan keep track of the accurate horizon and use that. No need to count pages\nand stuff.\n\n\n> @@ -1050,18 +1046,14 @@ lazy_scan_heap(LVRelState *vacrel, VacuumParams *params, bool aggressive)\n> \t\tbool\t\tall_visible_according_to_vm = false;\n> \t\tLVPagePruneState prunestate;\n> \n> -\t\t/*\n> -\t\t * Consider need to skip blocks. 
See note above about forcing\n> -\t\t * scanning of last page.\n> -\t\t */\n> -#define FORCE_CHECK_PAGE() \\\n> -\t\t(blkno == nblocks - 1 && should_attempt_truncation(vacrel))\n> -\n> \t\tpgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_SCANNED, blkno);\n> \n> \t\tupdate_vacuum_error_info(vacrel, NULL, VACUUM_ERRCB_PHASE_SCAN_HEAP,\n> \t\t\t\t\t\t\t\t blkno, InvalidOffsetNumber);\n> \n> +\t\t/*\n> +\t\t * Consider need to skip blocks\n> +\t\t */\n> \t\tif (blkno == next_unskippable_block)\n> \t\t{\n> \t\t\t/* Time to advance next_unskippable_block */\n> @@ -1110,13 +1102,19 @@ lazy_scan_heap(LVRelState *vacrel, VacuumParams *params, bool aggressive)\n> \t\telse\n> \t\t{\n> \t\t\t/*\n> -\t\t\t * The current block is potentially skippable; if we've seen a\n> -\t\t\t * long enough run of skippable blocks to justify skipping it, and\n> -\t\t\t * we're not forced to check it, then go ahead and skip.\n> -\t\t\t * Otherwise, the page must be at least all-visible if not\n> -\t\t\t * all-frozen, so we can set all_visible_according_to_vm = true.\n> +\t\t\t * The current block can be skipped if we've seen a long enough\n> +\t\t\t * run of skippable blocks to justify skipping it.\n> +\t\t\t *\n> +\t\t\t * There is an exception: we will scan the table's last page to\n> +\t\t\t * determine whether it has tuples or not, even if it would\n> +\t\t\t * otherwise be skipped (unless it's clearly not worth trying to\n> +\t\t\t * truncate the table). This avoids having lazy_truncate_heap()\n> +\t\t\t * take access-exclusive lock on the table to attempt a truncation\n> +\t\t\t * that just fails immediately because there are tuples in the\n> +\t\t\t * last page.\n> \t\t\t */\n> -\t\t\tif (skipping_blocks && !FORCE_CHECK_PAGE())\n> +\t\t\tif (skipping_blocks &&\n> +\t\t\t\t!(blkno == nblocks - 1 && should_attempt_truncation(vacrel)))\n> \t\t\t{\n> \t\t\t\t/*\n> \t\t\t\t * Tricky, tricky. 
If this is in aggressive vacuum, the page\n\nI find the FORCE_CHECK_PAGE macro decidedly unhelpful. But I don't like\nmixing such changes within a larger change doing many other things.\n\n\n\n> @@ -1204,156 +1214,52 @@ lazy_scan_heap(LVRelState *vacrel, VacuumParams *params, bool aggressive)\n> \n> \t\tbuf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno,\n> \t\t\t\t\t\t\t\t RBM_NORMAL, vacrel->bstrategy);\n> +\t\tpage = BufferGetPage(buf);\n> +\t\tvacrel->scanned_pages++;\n\nI don't particularly like doing BufferGetPage() before holding a lock on the\npage. Perhaps I'm too influenced by rust etc, but ISTM that at some point it'd\nbe good to have a crosscheck that BufferGetPage() is only allowed when holding\na page level lock.\n\n\n> \t\t/*\n> -\t\t * We need buffer cleanup lock so that we can prune HOT chains and\n> -\t\t * defragment the page.\n> +\t\t * We need a buffer cleanup lock to prune HOT chains and defragment\n> +\t\t * the page in lazy_scan_prune. But when it's not possible to acquire\n> +\t\t * a cleanup lock right away, we may be able to settle for reduced\n> +\t\t * processing in lazy_scan_noprune.\n> \t\t */\n\ns/in lazy_scan_noprune/via lazy_scan_noprune/?\n\n\n> \t\tif (!ConditionalLockBufferForCleanup(buf))\n> \t\t{\n> \t\t\tbool\t\thastup;\n> \n> -\t\t\t/*\n> -\t\t\t * If we're not performing an aggressive scan to guard against XID\n> -\t\t\t * wraparound, and we don't want to forcibly check the page, then\n> -\t\t\t * it's OK to skip vacuuming pages we get a lock conflict on. 
They\n> -\t\t\t * will be dealt with in some future vacuum.\n> -\t\t\t */\n> -\t\t\tif (!aggressive && !FORCE_CHECK_PAGE())\n> +\t\t\tLockBuffer(buf, BUFFER_LOCK_SHARE);\n> +\n> +\t\t\t/* Check for new or empty pages before lazy_scan_noprune call */\n> +\t\t\tif (lazy_scan_new_or_empty(vacrel, buf, blkno, page, true,\n> +\t\t\t\t\t\t\t\t\t vmbuffer))\n> \t\t\t{\n> -\t\t\t\tReleaseBuffer(buf);\n> -\t\t\t\tvacrel->pinskipped_pages++;\n> +\t\t\t\t/* Lock and pin released for us */\n> +\t\t\t\tcontinue;\n> +\t\t\t}\n\nWhy isn't this done in lazy_scan_noprune()?\n\n\n> +\t\t\tif (lazy_scan_noprune(vacrel, buf, blkno, page, &hastup))\n> +\t\t\t{\n> +\t\t\t\t/* No need to wait for cleanup lock for this page */\n> +\t\t\t\tUnlockReleaseBuffer(buf);\n> +\t\t\t\tif (hastup)\n> +\t\t\t\t\tvacrel->nonempty_pages = blkno + 1;\n> \t\t\t\tcontinue;\n> \t\t\t}\n\nDo we really need all of buf, blkno, page for both of these functions? Quite\npossible that yes, if so, could we add an assertion that\nBufferGetBlockNumber(buf) == blkno?\n\n\n> +\t\t/* Check for new or empty pages before lazy_scan_prune call */\n> +\t\tif (lazy_scan_new_or_empty(vacrel, buf, blkno, page, false, vmbuffer))\n> \t\t{\n\nMaybe worth a note mentioning that we need to redo this even in the aggressive\ncase, because we didn't continually hold a lock on the page?\n\n\n\n> +/*\n> + * Empty pages are not really a special case -- they're just heap pages that\n> + * have no allocated tuples (including even LP_UNUSED items). You might\n> + * wonder why we need to handle them here all the same. It's only necessary\n> + * because of a rare corner-case involving a hard crash during heap relation\n> + * extension. If we ever make relation-extension crash safe, then it should\n> + * no longer be necessary to deal with empty pages here (or new pages, for\n> + * that matter).\n\nI don't think it's actually that rare - the window for this is huge.
You just\nneed to crash / immediate shutdown at any time between the relation having\nbeen extended and the new page contents being written out (checkpoint or\nbuffer replacement / ring writeout). That's often many minutes.\n\nI don't really see that as a realistic thing to ever reliably avoid, FWIW. I\nthink the overhead would be prohibitive. We'd need to do synchronous WAL\nlogging while holding the extension lock I think. Um, not fun.\n\n\n> + * Caller can either hold a buffer cleanup lock on the buffer, or a simple\n> + * shared lock.\n> + */\n\nKinda sounds like it'd be incorrect to call this with an exclusive lock, which\nmade me wonder why that could be true. Perhaps just say that it needs to be\ncalled with at least a shared lock?\n\n\n> +static bool\n> +lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf, BlockNumber blkno,\n> +\t\t\t\t\t Page page, bool sharelock, Buffer vmbuffer)\n\nIt'd be good to document the return value - for me it's not a case where it's\nso obvious that it's not worth it.\n\n\n\n> +/*\n> + *\tlazy_scan_noprune() -- lazy_scan_prune() variant without pruning\n> + *\n> + * Caller need only hold a pin and share lock on the buffer, unlike\n> + * lazy_scan_prune, which requires a full cleanup lock.\n\nI'd add something like \"returns whether a cleanup lock is required\".
Having to\nread multiple paragraphs to understand the basic meaning of the return value\nisn't great.\n\n\n> +\t\tif (ItemIdIsRedirected(itemid))\n> +\t\t{\n> +\t\t\t*hastup = true;\t\t/* page won't be truncatable */\n> +\t\t\tcontinue;\n> +\t\t}\n\nIt's not really new, but this comment is now a bit confusing, because it can\nbe understood to be about PageTruncateLinePointerArray().\n\n\n> +\t\t\tcase HEAPTUPLE_DEAD:\n> +\t\t\tcase HEAPTUPLE_RECENTLY_DEAD:\n> +\n> +\t\t\t\t/*\n> +\t\t\t\t * We count DEAD and RECENTLY_DEAD tuples in new_dead_tuples.\n> +\t\t\t\t *\n> +\t\t\t\t * lazy_scan_prune only does this for RECENTLY_DEAD tuples,\n> +\t\t\t\t * and never has to deal with DEAD tuples directly (they\n> +\t\t\t\t * reliably become LP_DEAD items through pruning). Our\n> +\t\t\t\t * approach to DEAD tuples is a bit arbitrary, but it seems\n> +\t\t\t\t * better than totally ignoring them.\n> +\t\t\t\t */\n> +\t\t\t\tnew_dead_tuples++;\n> +\t\t\t\tbreak;\n\nWhy does it make sense to track DEAD tuples this way? Isn't that going to lead\nto counting them over-and-over again? I think it's quite misleading to include\nthem in \"dead but not yet removable\".\n\n\n> +\t/*\n> +\t * Now save details of the LP_DEAD items from the page in the dead_tuples\n> +\t * array iff VACUUM uses two-pass strategy case\n> +\t */\n\nDo we really need to have separate code for this in lazy_scan_prune() and\nlazy_scan_noprune()?\n\n\n\n> +\t}\n> +\telse\n> +\t{\n> +\t\t/*\n> +\t\t * We opt to skip FSM processing for the page on the grounds that it\n> +\t\t * is probably being modified by concurrent DML operations. Seems\n> +\t\t * best to assume that the space is best left behind for future\n> +\t\t * updates of existing tuples. This matches what opportunistic\n> +\t\t * pruning does.\n\nWhy can we assume that there is concurrent DML rather than concurrent read-only\noperations?
IME it's much more common for read-only operations to block\ncleanup locks than read-write ones (partially because the frequency makes it\neasier, partially because cursors allow long-held pins, partially because the\nEXCLUSIVE lock of a r/w operation wouldn't let us get here)\n\n\n\nI think this is a change mostly in the right direction. But as formulated this\ncommit does *WAY* too much at once.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Nov 2021 11:29:56 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Mon, Nov 22, 2021 at 11:29 AM Andres Freund <andres@anarazel.de> wrote:\n> Hm. I'm a bit doubtful that there's all that many cases where it's worth not\n> pruning during vacuum. However, it seems much more common for opportunistic\n> pruning during non-write accesses.\n\nFair enough. I just wanted to suggest an exploratory conversation\nabout pruning (among several other things). I'm mostly saying: hey,\npruning during VACUUM isn't actually that special, at least not with\nthis refactoring patch in place. So maybe it makes sense to go\nfurther, in light of that general observation about pruning in VACUUM.\n\nMaybe it wasn't useful to even mention this aspect now. I would rather\nfocus on freezing optimizations for now -- that's much more promising.\n\n> Perhaps checking whether we'd log an FPW would be a better criteria for\n> deciding whether to prune or not compared to whether we're dirtying the page?\n> IME the WAL volume impact of FPWs is a considerably bigger deal than\n> unnecessarily dirtying a page that has previously been dirtied in the same\n> checkpoint \"cycle\".\n\nAgreed. (I tend to say the former when I really mean the latter, which\nI should try to avoid.)\n\n> IDK, the potential of not having usable space on an overfly fragmented page\n> doesn't seem that low. 
We can't just mark such pages as all-visible because\n> then we'll potentially never reclaim that space.\n\nDon't get me started on this - because I'll never stop.\n\nIt makes zero sense that we don't think about free space holistically,\nusing the whole context of what changed in the recent past. As I think\nyou know already, a higher level concept (like open and closed pages)\nseems like the right direction to me -- because it isn't sensible to\ntreat X bytes of free space in one heap page as essentially\ninterchangeable with any other space on any other heap page. That\nmisses an enormous amount of things that matter. The all-visible\nstatus of a page is just one such thing.\n\n> IMO the big difference between aggressive / non-aggressive isn't whether we\n> wait for a cleanup lock, but that we don't skip all-visible pages...\n\nI know what you mean by that, of course. But FWIW that definition\nseems too focused on what actually happens today, rather than what is\nessential given the invariants we have for VACUUM. And so I personally\nprefer to define it as \"a VACUUM that *reliably* advances\nrelfrozenxid\". This looser definition will probably \"age\" well (ahem).\n\n> > This new relfrozenxid optimization might not be all that valuable on its\n> > own, but it may still facilitate future work that makes non-aggressive\n> > VACUUMs more conscious of the benefit of advancing relfrozenxid sooner\n> > rather than later. In general it would be useful for non-aggressive\n> > VACUUMs to be \"more aggressive\" opportunistically (e.g., by waiting for\n> > a cleanup lock once or twice if needed).\n>\n> What do you mean by \"waiting once or twice\"? A single wait may simply never\n> end on a busy page that's constantly pinned by a lot of backends...\n\nI was speculating about future work again. I think that you've taken\nmy words too literally. 
This is just a draft commit message, just a\nway of framing what I'm really trying to do.\n\nSure, it wouldn't be okay to wait *indefinitely* for any one pin in a\nnon-aggressive VACUUM -- so \"at least waiting for one or two pins\nduring non-aggressive VACUUM\" might not have been the best way of\nexpressing the idea that I wanted to express. The important point is\nthat _we can make a choice_ about stuff like this dynamically, based\non the observed characteristics of the table, and some general ideas\nabout the costs and benefits (of waiting or not waiting, or of how\nlong we want to wait in total, whatever might be important). This\nprobably just means adding some heuristics that are pretty sensitive\nto any reason to not do more work in a non-aggressive VACUUM, without\n*completely* balking at doing even a tiny bit more work.\n\nFor example, we can definitely afford to wait a few more milliseconds\nto get a cleanup lock just once, especially if we're already pretty\nsure that that's all the extra work that it would take to ultimately\nbe able to advance relfrozenxid in the ongoing (non-aggressive) VACUUM\n-- it's easy to make that case. Once you agree that it makes sense\nunder these favorable circumstances, you've already made\n\"aggressiveness\" a continuous thing conceptually, at a high level.\n\nThe current binary definition of \"aggressive\" is needlessly\nrestrictive -- that much seems clear to me. I'm much less sure of what\nspecific alternative should replace it.\n\nI've already prototyped advancing relfrozenxid using a dynamically\ndetermined value, so that our final relfrozenxid is just about the\nmost recent safe value (not the original FreezeLimit). That's been\ninteresting. 
Consider this log output from an autovacuum with the\nprototype patch (also uses my new instrumentation), based on standard\npgbench (just tuned heap fill factor a bit):\n\nLOG: automatic vacuum of table \"regression.public.pgbench_accounts\":\nindex scans: 0\npages: 0 removed, 909091 remain, 33559 skipped using visibility map\n(3.69% of total)\ntuples: 297113 removed, 50090880 remain, 90880 are dead but not yet removable\nremoval cutoff: oldest xmin was 29296744, which is now 203341 xact IDs behind\nindex scan not needed: 0 pages from table (0.00% of total) had 0 dead\nitem identifiers removed\nI/O timings: read: 55.574 ms, write: 0.000 ms\navg read rate: 17.805 MB/s, avg write rate: 4.389 MB/s\nbuffer usage: 1728273 hits, 23150 misses, 5706 dirtied\nWAL usage: 594211 records, 0 full page images, 35065032 bytes\nsystem usage: CPU: user: 6.85 s, system: 0.08 s, elapsed: 10.15 s\n\nAll of the autovacuums against the accounts table look similar to this\none -- you don't see anything about relfrozenxid being advanced\n(because it isn't). Whereas for the smaller pgbench tables, every\nsingle VACUUM successfully advances relfrozenxid to a fairly recent\nXID (without there ever being an aggressive VACUUM) -- just because\nVACUUM needs to visit every page for the smaller tables. While the\naccounts table doesn't generally need to have 100% of all pages\ntouched by VACUUM -- it's more like 95% there. Does that really make\nsense, though?\n\nI'm pretty sure that less aggressive VACUUMing (e.g. higher\nscale_factor setting) would lead to more aggressive setting of\nrelfrozenxid here. I'm always suspicious when I see insignificant\ndifferences that lead to significant behavioral differences. Am I\nworried over nothing here? Perhaps -- we don't really need to advance\nrelfrozenxid early with this table/workload anyway. But I'm not so\nsure.\n\nAgain, my point is that there is a good chance that redefining\naggressiveness in some way will be helpful. 
A more creative, flexible\ndefinition might be just what we need. The details are very much up in\nthe air, though.\n\n> > It would also be generally useful if aggressive VACUUMs were \"less\n> > aggressive\" opportunistically (e.g. by being responsive to query\n> > cancellations when the risk of wraparound failure is still very low).\n>\n> Being cancelable is already a different concept than anti-wraparound\n> vacuums. We start aggressive autovacuums at vacuum_freeze_table_age, but\n> anti-wrap only at autovacuum_freeze_max_age.\n\nYou know what I meant. Also, did *you* mean \"being cancelable is\nalready a different concept to *aggressive* vacuums\"? :-)\n\n> The problem is that the\n> autovacuum scheduling is way too naive for that to be a significant benefit -\n> nothing tries to schedule autovacuums so that they have a chance to complete\n> before anti-wrap autovacuums kick in. All that vacuum_freeze_table_age does is\n> to promote an otherwise-scheduled (auto-)vacuum to an aggressive vacuum.\n\nNot sure what you mean about scheduling, since vacuum_freeze_table_age\nis only in place to make overnight (off hours low activity scripted\nVACUUMs) freeze tuples before any autovacuum worker gets the chance\n(since the latter may run at a much less convenient time). Sure,\nvacuum_freeze_table_age might also force a regular autovacuum worker\nto do an aggressive VACUUM -- but I think it's mostly intended for a\nmanual overnight VACUUM. Not usually very helpful, but also not\nharmful.\n\nOh, wait. I think that you're talking about how autovacuum workers in\nparticular tend to be affected by this. We launch an av worker that\nwants to clean up bloat, but it ends up being aggressive (and maybe\ntaking way longer), perhaps quite randomly, only due to\nvacuum_freeze_table_age (not due to autovacuum_freeze_max_age). Is\nthat it?\n\n> This is one of the most embarrassing issues around the whole anti-wrap\n> topic.
We kind of define it as an emergency that there's an anti-wraparound\n> vacuum. But we have *absolutely no mechanism* to prevent them from occurring.\n\nWhat do you mean? Only an autovacuum worker can do an anti-wraparound\nVACUUM (which is not quite the same thing as an aggressive VACUUM).\n\nI agree that anti-wraparound autovacuum is way too unfriendly, though.\n\n> > We now also collect LP_DEAD items in the dead_tuples array in the case\n> > where we cannot immediately get a cleanup lock on the buffer. We cannot\n> > prune without a cleanup lock, but opportunistic pruning may well have\n> > left some LP_DEAD items behind in the past -- no reason to miss those.\n>\n> This has become *much* more important with the changes around deciding when to\n> index vacuum. It's not just that opportunistic pruning could have left LP_DEAD\n> items, it's that a previous vacuum is quite likely to have left them there,\n> because the previous vacuum decided not to perform index cleanup.\n\nI haven't seen any evidence of that myself (with the optimization\nadded to Postgres 14 by commit 5100010ee4). I still don't understand\nwhy you doubted that work so much. I'm not saying that you're wrong\nto; I'm saying that I don't think that I understand your perspective\non it.\n\nWhat I have seen in my own tests (particularly with BenchmarkSQL) is\nthat most individual tables either never apply the optimization even\nonce (because the table reliably has heap pages with many more LP_DEAD\nitems than the 2%-of-relpages threshold), or will never need to\n(because there are precisely zero LP_DEAD items anyway). Remaining\ntables that *might* use the optimization tend to not go very long\nwithout actually getting a round of index vacuuming. It's just too\neasy for updates (and even aborted xact inserts) to introduce new\nLP_DEAD items for us to go long without doing index vacuuming.\n\nIf you can be more concrete about a problem you've seen, then I might\nbe able to help. 
It's not like there are no options in this already. I
already thought about introducing a small degree of randomness into
the process of deciding to skip or to not skip (in the
consider_bypass_optimization path of lazy_vacuum() on Postgres 14).
The optimization is mostly valuable because it allows us to do more
useful work in VACUUM -- not because it allows us to do less useless
work in VACUUM. In particular, it allows us to tune
autovacuum_vacuum_insert_scale_factor very aggressively with an
append-only table, without useless index vacuuming making it all but
impossible for autovacuum to get to the useful work.

> > Only VACUUM can mark these LP_DEAD items LP_UNUSED (no opportunistic
> > technique is independently capable of cleaning up line pointer bloat),
>
> One thing we could do around this, btw, would be to aggressively replace
> LP_REDIRECT items with their target item. We can't do that in all situations
> (somebody might be following a ctid chain), but I think we have all the
> information needed to do so. Probably would require a new HTSV RECENTLY_LIVE
> state or something like that.

Another idea is to truncate the line pointer during pruning (including
opportunistic pruning). Matthias van de Meent has a patch for that.

I am not aware of a specific workload where the patch helps, but that
doesn't mean that there isn't one, or that it doesn't matter. It's
subtle enough that I might have just missed something. I *expect* the
true damage over time to be very hard to model or understand -- I
imagine the potential for weird feedback loops is there.

> I think that'd be quite a win - we right now often "migrate" to other pages
> for modifications not because we're out of space on a page, but because we run
> out of itemids (for debatable reasons MaxHeapTuplesPerPage constrains the
> number of line pointers, not just the number of actual tuples). Effectively
> doubling the number of available line items in common cases in a number of
> realistic / common scenarios would be quite the win.

I believe Masahiko is working on this in the current cycle. It would
be easier if we had a better sense of how increasing
MaxHeapTuplesPerPage will affect tidbitmap.c. But the idea of
increasing that seems sound to me.

> > Note that we no longer report on "pin skipped pages" in VACUUM VERBOSE,
> > since there is barely any real practical sense in which we actually
> > miss doing useful work for these pages. Besides, this information
> > always seemed to have little practical value, even to Postgres hackers.
>
> -0.5. I think it provides some value, and I don't see why the removal of the
> information should be tied to this change. It's hard to diagnose why some dead
> tuples aren't cleaned up - a common cause for that on smaller tables is that
> nearly all pages are pinned nearly all the time.

Is that still true, though? If it turns out that we need to leave it
in, then I can do that. But I'd prefer to wait until we have more
information before making a final decision. Remember, the high level
idea of this whole patch is that we do as much work as possible for
any scanned_pages, which now includes pages that we never successfully
acquired a cleanup lock on. And so we're justified in assuming that
they're exactly equivalent to pages that we did get a cleanup lock on --
that's now the working assumption. I know that that's not literally
true, but that doesn't mean it's not a useful fiction -- it should be
very close to the truth.

Also, I would like to put more information (much more useful
information) in the same log output. Perhaps that will be less
controversial if I take something useless away first.

> I wonder if we could have a more restrained version of heap_page_prune() that
> doesn't require a cleanup lock? Obviously we couldn't defragment the page, but
> it's not immediately obvious that we need it if we constrain ourselves to only
> modify tuple versions that cannot be visible to anybody.
>
> Random note: I really dislike that we talk about cleanup locks in some parts
> of the code, and super-exclusive locks in others :(.

Somebody should normalize that.

> > + /*
> > + * Aggressive VACUUM (which is the same thing as anti-wraparound
> > + * autovacuum for most practical purposes) exists so that we'll reliably
> > + * advance relfrozenxid and relminmxid sooner or later. But we can often
> > + * opportunistically advance them even in a non-aggressive VACUUM.
> > + * Consider if that's possible now.
>
> I don't agree with the "most practical purposes" bit. There's a huge
> difference because manual VACUUMs end up aggressive but not anti-wrap once
> older than vacuum_freeze_table_age.

Okay.

> > + * NB: We must use orig_rel_pages, not vacrel->rel_pages, since we want
> > + * the rel_pages used by lazy_scan_prune, from before a possible relation
> > + * truncation took place. (vacrel->rel_pages is now new_rel_pages.)
> > + */
>
> I think it should be doable to add an isolation test for this path. There have
> been quite a few bugs around the wider topic...

I would argue that we already have one -- vacuum-reltuples.spec. I had
to update its expected output in the patch. I would argue that the
behavioral change (count tuples on a pinned-by-cursor heap page) that
necessitated updating the expected output for the test is an
improvement overall.

> > + {
> > + /* Can safely advance relfrozen and relminmxid, too */
> > + Assert(vacrel->scanned_pages + vacrel->frozenskipped_pages ==
> > + orig_rel_pages);
> > + vac_update_relstats(rel, new_rel_pages, new_live_tuples,
> > + new_rel_allvisible, vacrel->nindexes > 0,
> > + FreezeLimit, MultiXactCutoff, false);
> > + }
>
> I wonder if this whole logic wouldn't become easier and less fragile if we
> just went for maintaining the "actually observed" horizon while scanning the
> relation. If we skip a page via VM set the horizon to invalid. Otherwise we
> can keep track of the accurate horizon and use that. No need to count pages
> and stuff.

There is no question that that makes sense as an optimization -- my
prototype convinced me of that already. But I don't think that it can
simplify anything (not even the call to vac_update_relstats itself, to
actually update relfrozenxid at the end). Fundamentally, this will
only work if we decide to only skip all-frozen pages, which (by
definition) only happens within aggressive VACUUMs. Isn't it that
simple?

You recently said (on the heap-pruning-14-bug thread) that you don't
think it would be practical to always set a page all-frozen when we
see that we're going to set it all-visible -- apparently you feel that
we could never opportunistically freeze early such that all-visible
but not all-frozen pages practically cease to exist. I'm still not
sure why you believe that (though you may be right, or I might have
misunderstood, since it's complicated). It would certainly benefit
this dynamic relfrozenxid business if it was possible, though.
If we
could somehow make that work, then almost every VACUUM would be able
to advance relfrozenxid, independently of aggressive-ness -- because
we wouldn't have any all-visible-but-not-all-frozen pages to skip
(that important detail wouldn't be left to chance).

> > - if (skipping_blocks && !FORCE_CHECK_PAGE())
> > + if (skipping_blocks &&
> > + !(blkno == nblocks - 1 && should_attempt_truncation(vacrel)))
> > {
> > /*
> > * Tricky, tricky. If this is in aggressive vacuum, the page
>
> I find the FORCE_CHECK_PAGE macro decidedly unhelpful. But I don't like
> mixing such changes within a larger change doing many other things.

I got rid of FORCE_CHECK_PAGE() itself in this patch (not a later
patch) because the patch also removes the only other
FORCE_CHECK_PAGE() call -- and the latter change is very much in scope
for the big patch (can't be broken down into smaller changes, I
think). And so this felt natural to me. But if you prefer, I can break
it out into a separate commit.

> > @@ -1204,156 +1214,52 @@ lazy_scan_heap(LVRelState *vacrel, VacuumParams *params, bool aggressive)
> >
> > buf = ReadBufferExtended(vacrel->rel, MAIN_FORKNUM, blkno,
> > RBM_NORMAL, vacrel->bstrategy);
> > + page = BufferGetPage(buf);
> > + vacrel->scanned_pages++;
>
> I don't particularly like doing BufferGetPage() before holding a lock on the
> page. Perhaps I'm too influenced by rust etc, but ISTM that at some point it'd
> be good to have a crosscheck that BufferGetPage() is only allowed when holding
> a page level lock.

I have occasionally wondered if the whole idea of reading heap pages
with only a pin (and having cleanup locks in VACUUM) is really worth
it -- alternative designs seem possible. Obviously that's a BIG
discussion, and not one to have right now. But it seems kind of
relevant.

Since it is often legit to read a heap page without a buffer lock
(only a pin), I can't see why BufferGetPage() without a buffer lock
shouldn't also be okay -- if anything it seems safer. I think that I
would agree with you if it wasn't for that inconsistency (which is
rather a big "if", to be sure -- even for me).

> > + /* Check for new or empty pages before lazy_scan_noprune call */
> > + if (lazy_scan_new_or_empty(vacrel, buf, blkno, page, true,
> > + vmbuffer))
> > {
> > - ReleaseBuffer(buf);
> > - vacrel->pinskipped_pages++;
> > + /* Lock and pin released for us */
> > + continue;
> > + }
>
> Why isn't this done in lazy_scan_noprune()?

No reason, really -- could be done that way (we'd then also give
lazy_scan_prune the same treatment). I thought that it made a certain
amount of sense to keep some of this in the main loop, but I can
change it if you want.

> > + if (lazy_scan_noprune(vacrel, buf, blkno, page, &hastup))
> > + {
> > + /* No need to wait for cleanup lock for this page */
> > + UnlockReleaseBuffer(buf);
> > + if (hastup)
> > + vacrel->nonempty_pages = blkno + 1;
> > continue;
> > }
>
> Do we really need all of buf, blkno, page for both of these functions? Quite
> possible that yes, if so, could we add an assertion that
> BufferGetBlockNumber(buf) == blkno?

This just matches the existing lazy_scan_prune function (which doesn't
mean all that much, since it was only added in Postgres 14). Will add
the assertion to both.

> > + /* Check for new or empty pages before lazy_scan_prune call */
> > + if (lazy_scan_new_or_empty(vacrel, buf, blkno, page, false, vmbuffer))
> > {
>
> Maybe worth a note mentioning that we need to redo this even in the aggressive
> case, because we didn't continually hold a lock on the page?

Isn't that obvious? Either way it isn't the kind of thing that I'd try
to optimize away. It's such a narrow issue.

> > +/*
> > + * Empty pages are not really a special case -- they're just heap pages that
> > + * have no allocated tuples (including even LP_UNUSED items). You might
> > + * wonder why we need to handle them here all the same. It's only necessary
> > + * because of a rare corner-case involving a hard crash during heap relation
> > + * extension. If we ever make relation-extension crash safe, then it should
> > + * no longer be necessary to deal with empty pages here (or new pages, for
> > + * that matter).
>
> I don't think it's actually that rare - the window for this is huge.

I can just remove the comment, though it still makes sense to me.

> I don't really see that as a realistic thing to ever reliably avoid, FWIW. I
> think the overhead would be prohibitive. We'd need to do synchronous WAL
> logging while holding the extension lock I think. Um, not fun.

My long term goal for the FSM (the lease based design I talked about
earlier this year) includes soft ownership of free space from
preallocated pages by individual xacts -- the smgr layer itself
becomes transactional and crash safe (at least to a limited degree).
This includes bulk extension of relations, to make up for the new
overhead implied by crash safe rel extension. I don't think that we
should require VACUUM (or anything else) to be cool with random
uninitialized pages -- to me that just seems backwards.

We can't do true bulk extension right now (just an inferior version
that doesn't give specific pages to specific backends) because the
risk of losing a bunch of empty pages for way too long is not
acceptable. But that doesn't seem fundamental to me -- that's one of
the things we'd be fixing at the same time (through what I call soft
ownership semantics). I think we'd come out ahead on performance, and
*also* have a more robust approach to relation extension.

> > + * Caller can either hold a buffer cleanup lock on the buffer, or a simple
> > + * shared lock.
> > + */
>
> Kinda sounds like it'd be incorrect to call this with an exclusive lock, which
> made me wonder why that could be true. Perhaps just say that it needs to be
> called with at least a shared lock?

Okay.

> > +static bool
> > +lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf, BlockNumber blkno,
> > + Page page, bool sharelock, Buffer vmbuffer)
>
> It'd be good to document the return value - for me it's not a case where it's
> so obvious that it's not worth it.

Okay.

> > +/*
> > + * lazy_scan_noprune() -- lazy_scan_prune() variant without pruning
> > + *
> > + * Caller need only hold a pin and share lock on the buffer, unlike
> > + * lazy_scan_prune, which requires a full cleanup lock.
>
> I'd add something like "returns whether a cleanup lock is required". Having to
> read multiple paragraphs to understand the basic meaning of the return value
> isn't great.

Will fix.

> > + if (ItemIdIsRedirected(itemid))
> > + {
> > + *hastup = true; /* page won't be truncatable */
> > + continue;
> > + }
>
> It's not really new, but this comment is now a bit confusing, because it can
> be understood to be about PageTruncateLinePointerArray().

I didn't think of that. Will address it in the next version.

> Why does it make sense to track DEAD tuples this way? Isn't that going to lead
> to counting them over-and-over again? I think it's quite misleading to include
> them in "dead but not yet removable".

Compared to what? Do we really want to invent a new kind of DEAD tuple
(e.g., to report on), just to handle this rare case?

I accept that this code is lying about the tuples being RECENTLY_DEAD,
kind of. But isn't it still strictly closer to the truth, compared to
HEAD? Counting it as RECENTLY_DEAD is far closer to the truth than not
counting it at all.

Note that we don't remember LP_DEAD items here, either (not here, in
lazy_scan_noprune, and not in lazy_scan_prune on HEAD). Because we
pretty much interpret LP_DEAD items as "future LP_UNUSED items"
instead -- we make a soft assumption that we're going to go on to mark
the same items LP_UNUSED during a second pass over the heap. My point
is that there is no natural way to count "fully DEAD tuple that
autovacuum didn't deal with" -- and so I picked RECENTLY_DEAD.

> > + /*
> > + * Now save details of the LP_DEAD items from the page in the dead_tuples
> > + * array iff VACUUM uses the two-pass strategy
> > + */
>
> Do we really need to have separate code for this in lazy_scan_prune() and
> lazy_scan_noprune()?

There is hardly any repetition, though.

> > + }
> > + else
> > + {
> > + /*
> > + * We opt to skip FSM processing for the page on the grounds that it
> > + * is probably being modified by concurrent DML operations. Seems
> > + * best to assume that the space is best left behind for future
> > + * updates of existing tuples. This matches what opportunistic
> > + * pruning does.
>
> Why can we assume that there is concurrent DML rather than concurrent read-only
> operations? IME it's much more common for read-only operations to block
> cleanup locks than read-write ones (partially because the frequency makes it
> easier, partially because cursors allow long-held pins, partially because the
> EXCLUSIVE lock of a r/w operation wouldn't let us get here)

I actually agree. It still probably isn't worth dealing with the FSM
here, though. It's just too much mechanism for too little benefit in a
very rare case.
What do you think?

--
Peter Geoghegan <pg@bowt.ie>
Mon, 22 Nov 2021 17:07:46 -0800
Subject: Re: Removing more vacuumlazy.c special cases, relfrozenxid optimizations


Hi,

On 2021-11-22 17:07:46 -0800, Peter Geoghegan wrote:
> Sure, it wouldn't be okay to wait *indefinitely* for any one pin in a
> non-aggressive VACUUM -- so "at least waiting for one or two pins
> during non-aggressive VACUUM" might not have been the best way of
> expressing the idea that I wanted to express. The important point is
> that _we can make a choice_ about stuff like this dynamically, based
> on the observed characteristics of the table, and some general ideas
> about the costs and benefits (of waiting or not waiting, or of how
> long we want to wait in total, whatever might be important). This
> probably just means adding some heuristics that are pretty sensitive
> to any reason to not do more work in a non-aggressive VACUUM, without
> *completely* balking at doing even a tiny bit more work.

> For example, we can definitely afford to wait a few more milliseconds
> to get a cleanup lock just once

We currently have no infrastructure to wait for an lwlock or pincount for a
limited time. And at least for the former it'd not be easy to add. It may be
worth adding that at some point, but I'm doubtful this is sufficient reason
for nontrivial new infrastructure in very performance sensitive areas.


> All of the autovacuums against the accounts table look similar to this
> one -- you don't see anything about relfrozenxid being advanced
> (because it isn't). Whereas for the smaller pgbench tables, every
> single VACUUM successfully advances relfrozenxid to a fairly recent
> XID (without there ever being an aggressive VACUUM) -- just because
> VACUUM needs to visit every page for the smaller tables.
> While the
> accounts table doesn't generally need to have 100% of all pages
> touched by VACUUM -- it's more like 95% there. Does that really make
> sense, though?

Does what really make sense?


> I'm pretty sure that less aggressive VACUUMing (e.g. higher
> scale_factor setting) would lead to more aggressive setting of
> relfrozenxid here. I'm always suspicious when I see insignificant
> differences that lead to significant behavioral differences. Am I
> worried over nothing here? Perhaps -- we don't really need to advance
> relfrozenxid early with this table/workload anyway. But I'm not so
> sure.

I think pgbench_accounts is just a really poor showcase. Most importantly,
there are no even slightly longer-running transactions that hold down the xid
horizon. But in real workloads that's incredibly common IME. It's also quite
uncommon in real workloads to have huge tables in which all records are
updated. It's more common to have value ranges that are nearly static, and a
more heavily changing range.

I think the most interesting cases where using the "measured" horizon will be
advantageous are anti-wrap vacuums. Those obviously have to happen for rarely
modified tables, including completely static ones, too. Using the "measured"
horizon will allow us to reduce the frequency of anti-wrap autovacuums on old
tables, because we'll be able to set a much more recent relfrozenxid.

This is becoming more common with the increased use of partitioning.


> > The problem is that the
> > autovacuum scheduling is way too naive for that to be a significant benefit -
> > nothing tries to schedule autovacuums so that they have a chance to complete
> > before anti-wrap autovacuums kick in. All that vacuum_freeze_table_age does is
> > to promote an otherwise-scheduled (auto-)vacuum to an aggressive vacuum.
>
> Not sure what you mean about scheduling, since vacuum_freeze_table_age
> is only in place to make overnight (off-hours, low-activity scripted
> VACUUMs) freeze tuples before any autovacuum worker gets the chance
> (since the latter may run at a much less convenient time). Sure,
> vacuum_freeze_table_age might also force a regular autovacuum worker
> to do an aggressive VACUUM -- but I think it's mostly intended for a
> manual overnight VACUUM. Not usually very helpful, but also not
> harmful.

> Oh, wait. I think that you're talking about how autovacuum workers in
> particular tend to be affected by this. We launch an av worker that
> wants to clean up bloat, but it ends up being aggressive (and maybe
> taking way longer), perhaps quite randomly, only due to
> vacuum_freeze_table_age (not due to autovacuum_freeze_max_age). Is
> that it?

No, not quite. We treat anti-wraparound vacuums as an emergency (including
logging messages, not cancelling). But the only mechanism we have against
anti-wrap vacuums happening is vacuum_freeze_table_age. But as you say, that's
not really a "real" mechanism, because it requires an "independent" reason to
vacuum a table.

I've seen cases where anti-wraparound vacuums weren't a problem / never
happened for important tables for a long time, because there always was an
"independent" reason for autovacuum to start doing its thing before the table
got to be autovacuum_freeze_max_age old. But at some point the important
tables started to be big enough that autovacuum didn't schedule vacuums that
got promoted to aggressive via vacuum_freeze_table_age before the anti-wrap
vacuums. Then things started to burn, because of the unpaced anti-wrap vacuums
clogging up all IO, or maybe it was the vacuums not cancelling - I don't quite
remember the details.

Behaviours that lead to a "sudden" falling over, rather than getting gradually
worse, are bad - they somehow tend to happen on Friday evenings :).


> > This is one of the most embarrassing issues around the whole anti-wrap
> > topic. We kind of define it as an emergency that there's an anti-wraparound
> > vacuum. But we have *absolutely no mechanism* to prevent them from occurring.
>
> What do you mean? Only an autovacuum worker can do an anti-wraparound
> VACUUM (which is not quite the same thing as an aggressive VACUUM).
>
> I agree that anti-wraparound autovacuum is way too unfriendly, though.

Just that autovacuum should have a mechanism to trigger aggressive vacuums
(i.e. ones that are guaranteed to be able to increase relfrozenxid unless
cancelled) before getting to the "emergency"-ish anti-wraparound state.

Or alternatively that we should have a separate threshold for the "harsher"
anti-wraparound measures.


> > > We now also collect LP_DEAD items in the dead_tuples array in the case
> > > where we cannot immediately get a cleanup lock on the buffer. We cannot
> > > prune without a cleanup lock, but opportunistic pruning may well have
> > > left some LP_DEAD items behind in the past -- no reason to miss those.
> >
> > This has become *much* more important with the changes around deciding when to
> > index vacuum. It's not just that opportunistic pruning could have left LP_DEAD
> > items, it's that a previous vacuum is quite likely to have left them there,
> > because the previous vacuum decided not to perform index cleanup.
>
> I haven't seen any evidence of that myself (with the optimization
> added to Postgres 14 by commit 5100010ee4). I still don't understand
> why you doubted that work so much. I'm not saying that you're wrong
> to; I'm saying that I don't think that I understand your perspective
> on it.

I didn't (nor do) doubt that it can be useful - to the contrary, I think the
unconditional index pass was a huge practical issue. I do however think that
there are cases where it can cause trouble. The comment above wasn't meant as
a criticism - just that it seems worth pointing out that one reason we might
encounter a lot of LP_DEAD items is previous vacuums that didn't perform index
cleanup.


> What I have seen in my own tests (particularly with BenchmarkSQL) is
> that most individual tables either never apply the optimization even
> once (because the table reliably has heap pages with many more LP_DEAD
> items than the 2%-of-relpages threshold), or will never need to
> (because there are precisely zero LP_DEAD items anyway). Remaining
> tables that *might* use the optimization tend to not go very long
> without actually getting a round of index vacuuming. It's just too
> easy for updates (and even aborted xact inserts) to introduce new
> LP_DEAD items for us to go long without doing index vacuuming.

I think real workloads are a bit more varied than any realistic set of
benchmarks that one person can run by themselves.

I gave you examples of cases that I see as likely being bitten by this,
e.g. when the skipped index cleanup prevents IOS scans. When both the
likely-to-be-modified and likely-to-be-queried value ranges are a small subset
of the entire data, the 2% threshold can prevent vacuum from cleaning up
LP_DEAD entries for a long time. Or when all index scans are bitmap index
scans, and nothing ends up cleaning up the dead index entries in certain
ranges, and even an explicit vacuum doesn't fix the issue.
Even a relatively\nsmall rollback / non-HOT update rate can start to be really painful.\n\n\n> > > Only VACUUM can mark these LP_DEAD items LP_UNUSED (no opportunistic\n> > > technique is independently capable of cleaning up line pointer bloat),\n> >\n> > One thing we could do around this, btw, would be to aggressively replace\n> > LP_REDIRECT items with their target item. We can't do that in all situations\n> > (somebody might be following a ctid chain), but I think we have all the\n> > information needed to do so. Probably would require a new HTSV RECENTLY_LIVE\n> > state or something like that.\n> \n> Another idea is to truncate the line pointer during pruning (including\n> opportunistic pruning). Matthias van de Meent has a patch for that.\n\nI'm a bit doubtful that's as important (which is not to say that it's not\nworth doing). For a heavily updated table the max space usage of the line\npointer array just isn't as big a factor as ending up with only half the\nusable line pointers.\n\n\n> > > Note that we no longer report on \"pin skipped pages\" in VACUUM VERBOSE,\n> > > since there is no barely any real practical sense in which we actually\n> > > miss doing useful work for these pages. Besides, this information\n> > > always seemed to have little practical value, even to Postgres hackers.\n> >\n> > -0.5. I think it provides some value, and I don't see why the removal of the\n> > information should be tied to this change. It's hard to diagnose why some dead\n> > tuples aren't cleaned up - a common cause for that on smaller tables is that\n> > nearly all pages are pinned nearly all the time.\n> \n> Is that still true, though? If it turns out that we need to leave it\n> in, then I can do that. But I'd prefer to wait until we have more\n> information before making a final decision. 
Remember, the high level\n> idea of this whole patch is that we do as much work as possible for\n> any scanned_pages, which now includes pages that we never successfully\n> acquired a cleanup lock on. And so we're justified in assuming that\n> they're exactly equivalent to pages that we did get a cleanup on --\n> that's now the working assumption. I know that that's not literally\n> true, but that doesn't mean it's not a useful fiction -- it should be\n> very close to the truth.\n\nIDK, it seems misleading to me. Small tables with a lot of churn - quite\ncommon - are highly reliant on LP_DEAD entries getting removed or the tiny\ntable suddenly isn't so tiny anymore. And it's harder to diagnose why the\ncleanup isn't happening without knowledge that pages needing cleanup couldn't\nbe cleaned up due to pins.\n\nIf you want to improve the logic so that we only count pages that would have\nsomething to clean up, I'd be happy as well. It doesn't have to mean exactly\nwhat it means today.\n\n\n> > > + * NB: We must use orig_rel_pages, not vacrel->rel_pages, since we want\n> > > + * the rel_pages used by lazy_scan_prune, from before a possible relation\n> > > + * truncation took place. (vacrel->rel_pages is now new_rel_pages.)\n> > > + */\n> >\n> > I think it should be doable to add an isolation test for this path. There have\n> > been quite a few bugs around the wider topic...\n> \n> I would argue that we already have one -- vacuum-reltuples.spec. I had\n> to update its expected output in the patch. 
I would argue that the\n> behavioral change (count tuples on a pinned-by-cursor heap page) that\n> necessitated updating the expected output for the test is an\n> improvement overall.\n\nI was thinking of truncations, which I don't think vacuum-reltuples.spec\ntests.\n\n\n> > > + {\n> > > + /* Can safely advance relfrozen and relminmxid, too */\n> > > + Assert(vacrel->scanned_pages + vacrel->frozenskipped_pages ==\n> > > + orig_rel_pages);\n> > > + vac_update_relstats(rel, new_rel_pages, new_live_tuples,\n> > > + new_rel_allvisible, vacrel->nindexes > 0,\n> > > + FreezeLimit, MultiXactCutoff, false);\n> > > + }\n> >\n> > I wonder if this whole logic wouldn't become easier and less fragile if we\n> > just went for maintaining the \"actually observed\" horizon while scanning the\n> > relation. If we skip a page via VM set the horizon to invalid. Otherwise we\n> > can keep track of the accurate horizon and use that. No need to count pages\n> > and stuff.\n> \n> There is no question that that makes sense as an optimization -- my\n> prototype convinced me of that already. But I don't think that it can\n> simplify anything (not even the call to vac_update_relstats itself, to\n> actually update relfrozenxid at the end).\n\nMaybe. But we've had quite a few bugs because we ended up changing some detail\nof what is excluded in one of the counters, leading to wrong determination\nabout whether we scanned everything or not.\n\n\n> Fundamentally, this will only work if we decide to only skip all-frozen\n> pages, which (by definition) only happens within aggressive VACUUMs.\n\nHm? 
Or if there's just no runs of all-visible pages of sufficient length, so
we don't end up skipping at all.


> You recently said (on the heap-pruning-14-bug thread) that you don't
> think it would be practical to always set a page all-frozen when we
> see that we're going to set it all-visible -- apparently you feel that
> we could never opportunistically freeze early such that all-visible
> but not all-frozen pages practically cease to exist. I'm still not
> sure why you believe that (though you may be right, or I might have
> misunderstood, since it's complicated).

Yes, I think it may not work out to do that. But it's not a very strongly held
opinion.

One reason for my doubt is the following:

We can set all-visible on a page without a FPW image (well, as long as hint
bits aren't logged). There's a significant difference between needing to WAL
log FPIs for every heap page or not, and it's not that rare for data to live
shorter than autovacuum_freeze_max_age or that limit never being reached.

On a table with 40 million individually inserted rows, fully hintbitted via
reads, I see a first VACUUM taking 1.6s and generating 11MB of WAL. A
subsequent VACUUM FREEZE takes 5s and generates 500MB of WAL. That's a quite
large multiplier...

If we ever managed to not have a per-page all-visible flag this'd get even
more extreme, because we'd then not even need to dirty the page for
insert-only pages. But if we want to freeze, we'd need to (unless we just got
rid of freezing).


> It would certainly benefit this dynamic relfrozenxid business if it was
> possible, though. If we could somehow make that work, then almost every
> VACUUM would be able to advance relfrozenxid, independently of
> aggressive-ness -- because we wouldn't have any
> all-visible-but-not-all-frozen pages to skip (that important detail wouldn't
> be left to chance).

Perhaps we can have most of the benefit even without that.
If we were to
freeze whenever it didn't cause an additional FPW, and perhaps didn't skip
all-visible but not !all-frozen pages if they were less than x% of the
to-be-scanned data, we should be able to still increase relfrozenxid in a
lot of cases?


> > I don't particularly like doing BufferGetPage() before holding a lock on the
> > page. Perhaps I'm too influenced by rust etc, but ISTM that at some point it'd
> > be good to have a crosscheck that BufferGetPage() is only allowed when holding
> > a page level lock.
>
> I have occasionally wondered if the whole idea of reading heap pages
> with only a pin (and having cleanup locks in VACUUM) is really worth
> it -- alternative designs seem possible. Obviously that's a BIG
> discussion, and not one to have right now. But it seems kind of
> relevant.

With 'reading' do you mean reads-from-os, or just references to buffer
contents?


> Since it is often legit to read a heap page without a buffer lock
> (only a pin), I can't see why BufferGetPage() without a buffer lock
> shouldn't also be okay -- if anything it seems safer. I think that I
> would agree with you if it wasn't for that inconsistency (which is
> rather a big "if", to be sure -- even for me).

At least for heap it's rarely legit to read buffer contents via
BufferGetPage() without a lock. It's legit to read data at already-determined
offsets, but you can't look at much other than the tuple contents.


> > Why does it make sense to track DEAD tuples this way? Isn't that going to lead
> > to counting them over-and-over again? I think it's quite misleading to include
> > them in "dead but not yet removable".
>
> Compared to what?
> Do we really want to invent a new kind of DEAD tuple
> (e.g., to report on), just to handle this rare case?

When looking at logs I use the
"tuples: %lld removed, %lld remain, %lld are dead but not yet removable, oldest xmin: %u\n"
line to see whether the user is likely to have issues around an old
transaction / slot / prepared xact preventing cleanup. If new_dead_tuples
doesn't identify those cases anymore that's not reliable anymore.


> I accept that this code is lying about the tuples being RECENTLY_DEAD,
> kind of. But isn't it still strictly closer to the truth, compared to
> HEAD? Counting it as RECENTLY_DEAD is far closer to the truth than not
> counting it at all.

I don't see how it's closer at all. There's imo a significant difference
between not being able to remove tuples because of the xmin horizon, and not
being able to remove them because we couldn't get a cleanup lock.


Greetings,

Andres Freund

Date: Mon, 22 Nov 2021 21:49:02 -0800
From: Andres Freund <andres@anarazel.de>
Subject: Re: Removing more vacuumlazy.c special cases, relfrozenxid optimizations


On Mon, Nov 22, 2021 at 9:49 PM Andres Freund <andres@anarazel.de> wrote:
> > For example, we can definitely afford to wait a few more milliseconds
> > to get a cleanup lock just once
>
> We currently have no infrastructure to wait for an lwlock or pincount for a
> limited time. And at least for the former it'd not be easy to add. It may be
> worth adding that at some point, but I'm doubtful this is sufficient reason
> for nontrivial new infrastructure in very performance sensitive areas.

It was a hypothetical example. To be more practical about it: it seems
likely that we won't really benefit from waiting some amount of time
(not forever) for a cleanup lock in non-aggressive VACUUM, once we
have some of the relfrozenxid stuff we've talked about in place.
In a
world where we're smarter about advancing relfrozenxid in
non-aggressive VACUUMs, the choice between waiting for a cleanup lock,
and not waiting (but also not advancing relfrozenxid at all) matters
less -- it's no longer a binary choice.

It's no longer a binary choice because we will have done away with the
current rigid way in which our new relfrozenxid for the relation is
either FreezeLimit, or nothing at all. So far we've only talked about
the case where we can update relfrozenxid with a value that happens to
be much newer than FreezeLimit. If we can do that, that's great. But
what about setting relfrozenxid to an *older* value than FreezeLimit
instead (in a non-aggressive VACUUM)? That's also pretty good! There
is still a decent chance that the final "suboptimal" relfrozenxid that
we determine can be safely set in pg_class at the end of our VACUUM
will still be far more recent than the preexisting relfrozenxid.
Especially with larger tables.

Advancing relfrozenxid should be thought of as a totally independent
thing to freezing tuples, at least in vacuumlazy.c itself. That's
kinda the case today, even, but *explicitly* decoupling advancing
relfrozenxid from actually freezing tuples seems like a good high
level goal for this project.

Remember, FreezeLimit is derived from vacuum_freeze_min_age in the
obvious way: OldestXmin for the VACUUM, minus vacuum_freeze_min_age
GUC/reloption setting. I'm pretty sure that this means that making
autovacuum freeze tuples more aggressively (by reducing
vacuum_freeze_min_age) could have the perverse effect of making
non-aggressive VACUUMs less likely to advance relfrozenxid -- which is
exactly backwards.
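To make the arithmetic concrete, here is a rough sketch of how the cutoff
falls out (simplified, with invented names -- this is not the actual
vacuum_set_xid_limits() code, and real XID comparisons use dedicated
wraparound-aware helpers rather than plain integers):

```python
# Rough sketch (assumed/simplified) of the FreezeLimit derivation:
# FreezeLimit = OldestXmin - vacuum_freeze_min_age.  XIDs are 32-bit
# counters that wrap around, and XIDs 0-2 are reserved, so the result
# is kept in the "normal" range.

XID_MODULUS = 2 ** 32
FIRST_NORMAL_XID = 3

def freeze_limit(oldest_xmin: int, vacuum_freeze_min_age: int) -> int:
    """FreezeLimit cutoff: OldestXmin minus the min-age GUC/reloption."""
    limit = (oldest_xmin - vacuum_freeze_min_age) % XID_MODULUS
    if limit < FIRST_NORMAL_XID:
        limit = FIRST_NORMAL_XID
    return limit
```

The sketch makes the dependency plain: the cutoff is purely a function of
OldestXmin and the GUC, with nothing in the derivation tying it to what the
VACUUM actually observed in the table.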
This effect could easily be missed, even by expert
users, since there is no convenient instrumentation that shows how and
when relfrozenxid is advanced.

> > All of the autovacuums against the accounts table look similar to this
> > one -- you don't see anything about relfrozenxid being advanced
> > (because it isn't).

> > Does that really make
> > sense, though?
>
> Does what really make sense?

Well, my accounts table example wasn't a particularly good one (it was
a conveniently available example). I am now sure that you got the
point I was trying to make here already, based on what you go on to
say about non-aggressive VACUUMs optionally *not* skipping
all-visible-not-all-frozen heap pages in the hopes of advancing
relfrozenxid earlier (more on that idea below, in my response).

On reflection, the simplest way of expressing the same idea is what I
just said about decoupling (decoupling advancing relfrozenxid from
freezing).

> I think pgbench_accounts is just a really poor showcase. Most importantly
> there's no even slightly longer running transactions that hold down the xid
> horizon. But in real workloads that's incredibly common IME. It's also quite
> uncommon in real workloads to have huge tables in which all records are
> updated. It's more common to have value ranges that are nearly static, and a
> more heavily changing range.

I agree.

> I think the most interesting cases where using the "measured" horizon will be
> advantageous are anti-wrap vacuums. Those obviously have to happen for rarely
> modified tables, including completely static ones, too. Using the "measured"
> horizon will allow us to reduce the frequency of anti-wrap autovacuums on old
> tables, because we'll be able to set a much more recent relfrozenxid.

That's probably true in practice -- but who knows these days, with the
autovacuum_vacuum_insert_scale_factor stuff? Either way I see no
reason to emphasize that case in the design itself.
The "decoupling"
concept now seems like the key design-level idea -- everything else
follows naturally from that.

> This is becoming more common with the increased use of partitioning.

Also with bulk loading. There could easily be a tiny number of
distinct XIDs that are close together in time, for many many rows --
practically one XID, or even exactly one XID.

> No, not quite. We treat anti-wraparound vacuums as an emergency (including
> logging messages, not cancelling). But the only mechanism we have against
> anti-wrap vacuums happening is vacuum_freeze_table_age. But as you say, that's
> not really a "real" mechanism, because it requires an "independent" reason to
> vacuum a table.

Got it.

> I've seen cases where anti-wraparound vacuums weren't a problem / never
> happened for important tables for a long time, because there always was an
> "independent" reason for autovacuum to start doing its thing before the table
> got to be autovacuum_freeze_max_age old. But at some point the important
> tables started to be big enough that autovacuum didn't schedule vacuums that
> got promoted to aggressive via vacuum_freeze_table_age before the anti-wrap
> vacuums.

Right. Not just because they were big; also because autovacuum runs at
geometric intervals -- the final reltuples from last time is used to
determine the point at which av runs this time. This might make sense,
or it might not make any sense -- it all depends (mostly on index
stuff).

> Then things started to burn, because of the unpaced anti-wrap vacuums
> clogging up all IO, or maybe it was the vacuums not cancelling - I don't quite
> remember the details.

Non-cancelling anti-wraparound VACUUMs that (all of a sudden) cause
chaos because they interact badly with automated DDL are something I've seen
several times -- I'm sure you have too.
That was what the Manta/Joyent
blogpost I referenced upthread went into.

> Behaviour that leads to a "sudden" falling over, rather than getting
> gradually worse, is bad - it somehow tends to happen on Friday evenings :).

These are among our most important challenges IMV.

> Just that autovacuum should have a mechanism to trigger aggressive vacuums
> (i.e. ones that are guaranteed to be able to increase relfrozenxid unless
> cancelled) before getting to the "emergency"-ish anti-wraparound state.

Maybe, but that runs into the problem of needing another GUC that
nobody will ever be able to remember the name of. I consider the idea
of adding a variety of measures that make non-aggressive VACUUM much
more likely to advance relfrozenxid in practice to be far more
promising.

> Or alternatively that we should have a separate threshold for the "harsher"
> anti-wraparound measures.

Or maybe just raise the default of autovacuum_freeze_max_age, which
many people don't change? That might be a lot safer than it once was.
Or will be, once we manage to teach VACUUM to advance relfrozenxid
more often in non-aggressive VACUUMs on Postgres 15. Imagine a world
in which we have that stuff in place, as well as related enhancements
added in earlier releases: autovacuum_vacuum_insert_scale_factor, the
freezemap, and the wraparound failsafe.

These add up to a lot; with all of that in place, the risk we'd be
introducing by increasing the default value of
autovacuum_freeze_max_age would be *far* lower than the risk of making
the same change back in 2006. I bring up 2006 because it was the year
that commit 48188e1621 added autovacuum_freeze_max_age -- the default
hasn't changed since that time.

> I think workloads are a bit more varied than a realistic set of benchmarks
> that one person can run themselves.

No question.
I absolutely accept that I only have to miss one
important detail with something like this -- that just goes with the
territory. Just saying that I have yet to see any evidence that the
bypass-indexes behavior really hurt anything. I do take the idea that
I might have missed something very seriously, despite all this.

> I gave you examples of cases that I see as likely being bitten by this,
> e.g. when the skipped index cleanup prevents IOS scans. When both the
> likely-to-be-modified and likely-to-be-queried value ranges are a small subset
> of the entire data, the 2% threshold can prevent vacuum from cleaning up
> LP_DEAD entries for a long time. Or when all index scans are bitmap index
> scans, and nothing ends up cleaning up the dead index entries in certain
> ranges, and even an explicit vacuum doesn't fix the issue. Even a relatively
> small rollback / non-HOT update rate can start to be really painful.

That does seem possible. But I consider it very unlikely to appear as
a regression caused by the bypass mechanism itself -- not in any way
that was consistent over time. As far as I can tell, autovacuum
scheduling just doesn't operate at that level of precision, and never
has.

I have personally observed that ANALYZE does a very bad job at
noticing LP_DEAD items in tables/workloads where LP_DEAD items (not
DEAD tuples) tend to concentrate [1]. The whole idea that ANALYZE
should count these items as if they were normal tuples seems pretty
bad to me.

Put it this way: imagine you run into trouble with the bypass thing,
and then you opt to disable it on that table (using the INDEX_CLEANUP
reloption). Why should this step solve the problem on its own? In
order for that to work, VACUUM would have to know to be very
aggressive about these LP_DEAD items.
But there is good reason to
believe that it just won't ever notice them, as long as ANALYZE is
expected to provide reliable statistics that drive autovacuum --
they're just too concentrated for the block-based approach to truly
work.

I'm not minimizing the risk. Just telling you my thoughts on this.

> I'm a bit doubtful that's as important (which is not to say that it's not
> worth doing). For a heavily updated table the max space usage of the line
> pointer array just isn't as big a factor as ending up with only half the
> usable line pointers.

Agreed; by far the best chance we have of improving the line pointer
bloat situation is preventing it in the first place, by increasing
MaxHeapTuplesPerPage. Once we actually do that, our remaining options
are going to be much less helpful -- then it really is mostly just up
to VACUUM.

> And it's harder to diagnose why the
> cleanup isn't happening without knowledge that pages needing cleanup couldn't
> be cleaned up due to pins.
>
> If you want to improve the logic so that we only count pages that would have
> something to clean up, I'd be happy as well. It doesn't have to mean exactly
> what it means today.

It seems like what you really care about here are remaining cases
where our inability to acquire a cleanup lock has real consequences --
you want to hear about it when it happens, however unlikely it may be.
In other words, you want to keep something in log_autovacuum_* that
indicates that "less than the expected amount of work was completed"
due to an inability to acquire a cleanup lock.
And so for you, this is
a question of keeping instrumentation that might still be useful, not
a question of how we define things fundamentally, at the design level.

Sound right?

If so, then this proposal might be acceptable to you:

* Remaining DEAD tuples with storage (though not LP_DEAD items from
previous opportunistic pruning) will get counted separately in the
lazy_scan_noprune (no cleanup lock) path. Also count the total number
of distinct pages that were found to contain one or more such DEAD
tuples.

* These two new counters will be reported on their own line in the log
output, though only in the cases where we actually have any such
tuples -- which will presumably be much rarer than simply failing to
get a cleanup lock (that's now no big deal at all, because we now
consistently do certain cleanup steps, and because FreezeLimit isn't
the only viable thing that we can set relfrozenxid to, at least in the
non-aggressive case).

* There is still a limited sense in which the same items get counted
as RECENTLY_DEAD -- though just those aspects that make the overall
design simpler. So the helpful aspects of this are still preserved.

We only need to tell pgstat_report_vacuum() that these items are
"deadtuples" (remaining dead tuples). That can work by having its
caller add a new int64 counter (same new tuple-based counter used for
the new log line) to vacrel->new_dead_tuples. We'd also add the same
new tuple counter in about the same way at the point where we
determine a final vacrel->new_rel_tuples.

So we wouldn't really be treating anything as RECENTLY_DEAD anymore --
pgstat_report_vacuum() and vacrel->new_dead_tuples don't specifically
expect anything about RECENTLY_DEAD-ness anyway.

> I was thinking of truncations, which I don't think vacuum-reltuples.spec
> tests.

Got it. I'll look into that for v2.

> Maybe.
> But we've had quite a few bugs because we ended up changing some detail
> of what is excluded in one of the counters, leading to wrong determination
> about whether we scanned everything or not.

Right. But let me just point out that my whole approach is to make
that impossible, by not needing to count pages, except in
scanned_pages (and in frozenskipped_pages + rel_pages). The processing
performed for any page that we actually read during VACUUM should be
uniform (or practically uniform), by definition. With minimal fudging
in the cleanup lock case (because we mostly do the same work there
too).

There should be no reason for any more page counters now, except for
non-critical instrumentation. For example, if you want to get the
total number of pages skipped via the visibility map (not just
all-frozen pages), then you simply subtract scanned_pages from
rel_pages.

> > Fundamentally, this will only work if we decide to only skip all-frozen
> > pages, which (by definition) only happens within aggressive VACUUMs.
>
> Hm? Or if there's just no runs of all-visible pages of sufficient length, so
> we don't end up skipping at all.

Of course. But my point was: who knows when that'll happen?

> One reason for my doubt is the following:
>
> We can set all-visible on a page without a FPW image (well, as long as hint
> bits aren't logged). There's a significant difference between needing to WAL
> log FPIs for every heap page or not, and it's not that rare for data to live
> shorter than autovacuum_freeze_max_age or that limit never being reached.

This sounds like an objection to one specific heuristic, and not an
objection to the general idea. The only essential part is
"opportunistic freezing during vacuum, when the cost is clearly very
low, and the benefit is probably high".
And so it now seems you were
making a far more limited statement than I first believed.

Obviously many variations are possible -- there is a spectrum.
Example: a heuristic that makes VACUUM notice when it is going to
freeze at least one tuple on a page, iff the page will be marked
all-visible in any case -- we should instead freeze every tuple on the
page, and mark the page all-frozen, batching work (could account for
LP_DEAD items here too, not counting them on the assumption that
they'll become LP_UNUSED during the second heap pass later on).

If we see these conditions, then the likely explanation is that the
tuples on the heap page happen to have XIDs that are "split" by the
not-actually-important FreezeLimit cutoff, despite being essentially
similar in any way that matters.

If you want to make the same heuristic more conservative: only do this
when no existing tuples are frozen, since that could be taken as a
sign of the original heuristic not quite working on the same heap page
at an earlier stage.

I suspect that even very conservative versions of the same basic idea
would still help a lot.

> Perhaps we can have most of the benefit even without that. If we were to
> freeze whenever it didn't cause an additional FPW, and perhaps didn't skip
> all-visible but not !all-frozen pages if they were less than x% of the
> to-be-scanned data, we should be able to still increase relfrozenxid in a
> lot of cases?

I bet that's true. I like that idea.

If we had this policy, then the number of "extra"
visited-in-non-aggressive-vacuum pages (all-visible but not yet
all-frozen pages) could be managed over time through more
opportunistic freezing. This might make it work even better.

These all-visible (but not all-frozen) heap pages could be considered
"tenured", since they have survived at least one full VACUUM cycle
without being unset.
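For concreteness, the "Example" heuristic above might be sketched as follows
(illustrative only -- the names, and the exact form of the conservative
variant, are mine rather than anything in the patch):

```python
# Sketch of the page-level batching heuristic under discussion: if VACUUM
# will mark the page all-visible anyway, and at least one tuple must be
# frozen regardless, freeze every tuple and set all-frozen instead.

def freeze_whole_page(will_mark_all_visible: bool,
                      tuples_needing_freeze: int,
                      tuples_already_frozen: int,
                      conservative: bool = False) -> bool:
    if not will_mark_all_visible:
        return False          # no batching opportunity on this page
    if tuples_needing_freeze == 0:
        return False          # nothing forces any freezing here
    if conservative and tuples_already_frozen > 0:
        return False          # heuristic already fired on this page before
    return True               # freeze all tuples, mark the page all-frozen
```

The conservative flag corresponds to the "only do this when no existing
tuples are frozen" variant mentioned above.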
So why not also freeze them based on the
assumption that they'll probably stay that way forever? There won't be
so many of the pages when we do this anyway, by definition -- since
we'd have a heuristic that limited the total number (say to no more
than 10% of the total relation size, something like that).

We're smoothing out the work that currently takes place all together
during an aggressive VACUUM this way.

Moreover, there is perhaps a good chance that the total number of
all-visible-but-not-all-frozen heap pages will *stay* low over time, as a
result of this policy actually working -- there may be a virtuous
cycle that totally prevents us from getting an aggressive VACUUM even
once.

> > I have occasionally wondered if the whole idea of reading heap pages
> > with only a pin (and having cleanup locks in VACUUM) is really worth
> > it -- alternative designs seem possible. Obviously that's a BIG
> > discussion, and not one to have right now. But it seems kind of
> > relevant.
>
> With 'reading' do you mean reads-from-os, or just references to buffer
> contents?

The latter.

[1] https://postgr.es/m/CAH2-Wz=9R83wcwZcPUH4FVPeDM4znzbzMvp3rt21+XhQWMU8+g@mail.gmail.com
--
Peter Geoghegan

Date: Tue, 23 Nov 2021 17:01:20 -0800
From: Peter Geoghegan <pg@bowt.ie>
Subject: Re: Removing more vacuumlazy.c special cases, relfrozenxid optimizations


Hi,

On 2021-11-23 17:01:20 -0800, Peter Geoghegan wrote:
> > One reason for my doubt is the following:
> >
> > We can set all-visible on a page without a FPW image (well, as long as hint
> > bits aren't logged).
> > There's a significant difference between needing to WAL
> > log FPIs for every heap page or not, and it's not that rare for data to live
> > shorter than autovacuum_freeze_max_age or that limit never being reached.
>
> This sounds like an objection to one specific heuristic, and not an
> objection to the general idea.

I understood you to propose that we do not have separate frozen and
all-visible states. Which I think will be problematic, because of scenarios
like the above.


> The only essential part is "opportunistic freezing during vacuum, when the
> cost is clearly very low, and the benefit is probably high". And so it now
> seems you were making a far more limited statement than I first believed.

I'm on board with freezing when we already dirty out the page, and when doing
so doesn't cause an additional FPI. And I don't think I've argued against that
in the past.


> These all-visible (but not all-frozen) heap pages could be considered
> "tenured", since they have survived at least one full VACUUM cycle
> without being unset. So why not also freeze them based on the
> assumption that they'll probably stay that way forever?

Because it's a potentially massive increase in write volume? E.g. if you have
an insert-only workload, and you discard old data by dropping old partitions,
this will often add yet another rewrite, despite your data likely never
getting old enough to need to be frozen.

Given that we often immediately need to start another vacuum just when one
finished, because the vacuum took long enough to reach thresholds of vacuuming
again, I don't think the (auto-)vacuum count is a good proxy.

Maybe you meant this as a more limited concept, i.e.
only doing so when the
percentage of all-visible but not all-frozen pages is small?


We could perhaps do better if we had information about the system-wide
rate of xid throughput and how often / how long past vacuums of a table took.


Greetings,

Andres Freund

Date: Tue, 23 Nov 2021 17:32:25 -0800
From: Andres Freund <andres@anarazel.de>
Subject: Re: Removing more vacuumlazy.c special cases, relfrozenxid optimizations


On Tue, Nov 23, 2021 at 5:01 PM Peter Geoghegan <pg@bowt.ie> wrote:
> > Behaviour that leads to a "sudden" falling over, rather than getting
> > gradually worse, is bad - it somehow tends to happen on Friday evenings :).
>
> These are among our most important challenges IMV.

I haven't had time to work through any of your feedback just yet --
though it's certainly a priority for me. I won't get to it until I return
home from PGConf NYC next week.

Even still, here is a rebased v2, just to fix the bitrot. This is just
a courtesy to anybody interested in the patch.

--
Peter Geoghegan

Date: Tue, 30 Nov 2021 11:52:28 -0800
From: Peter Geoghegan <pg@bowt.ie>
Subject: Re: Removing more vacuumlazy.c special cases, relfrozenxid optimizations


On Tue, Nov 30, 2021 at 11:52 AM Peter Geoghegan <pg@bowt.ie> wrote:
> I haven't had time to work through any of your feedback just yet --
> though it's certainly a priority for me. I won't get to it until I return
> home from PGConf NYC next week.

Attached is v3, which works through most of your (Andres') feedback.

Changes in v3:

* While the first patch still gets rid of the "pinskipped_pages"
instrumentation, the second patch adds back a replacement that's
better targeted: it tracks and reports "missed_dead_tuples".
This
means that log output will show the number of fully DEAD tuples with
storage that could not be pruned away due to the fact that that would
have required waiting for a cleanup lock. But we *don't* generally
report the number of pages that we couldn't get a cleanup lock on,
because that in itself doesn't mean that we skipped any useful work
(which is very much the point of all of the refactoring in the first
patch).

* We now have FSM processing in the lazy_scan_noprune case, which more
or less matches the standard lazy_scan_prune case.

* Many small tweaks, based on suggestions from Andres, and other
things that I noticed.

* Further simplification of the "consider skipping pages using
visibility map" logic -- now we never skip the last block in
the relation, without calling should_attempt_truncation() to make sure
we have a reason.

Note that this means that we'll always read the final page during
VACUUM, even when doing so is provably unhelpful. I'd prefer to keep
the code that deals with skipping pages using the visibility map as
simple as possible.
There isn't much downside to always doing that
once my refactoring is in place: there is no risk that we'll wait for
a cleanup lock (on the final page in the rel) for no good reason.
We're only wasting one page access, at most.

(I'm not 100% sure that this is the right trade-off, actually, but
it's at least worth considering.)

Not included in v3:

* Still haven't added the isolation test for rel truncation, though
it's on my TODO list.

* I'm still working on the optimization that we discussed on this
thread: the optimization that allows the final relfrozenxid (that we
set in pg_class) to be determined dynamically, based on the actual
XIDs we observed in the table (we don't just naively use FreezeLimit).

I'm not ready to post that today, but it shouldn't take too much
longer to be good enough to review.

Thanks
--
Peter Geoghegan

Date: Fri, 10 Dec 2021 13:48:00 -0800
From: Peter Geoghegan <pg@bowt.ie>
Subject: Re: Removing more vacuumlazy.c special cases, relfrozenxid optimizations


On Fri, Dec 10, 2021 at 1:48 PM Peter Geoghegan <pg@bowt.ie> wrote:
> * I'm still working on the optimization that we discussed on this
> thread: the optimization that allows the final relfrozenxid (that we
> set in pg_class) to be determined dynamically, based on the actual
> XIDs we observed in the table (we don't just naively use FreezeLimit).

Attached is v4 of the patch series, which now includes this
optimization, broken out into its own patch. In addition, it includes
a prototype of opportunistic freezing.

My emphasis here has been on making non-aggressive VACUUMs *always*
advance relfrozenxid, outside of certain obvious edge cases.
And so
with all the patches applied, up to and including the opportunistic
freezing patch, every autovacuum of every table manages to advance
relfrozenxid during benchmarking -- usually to a fairly recent value.
I've focussed on making aggressive VACUUMs (especially anti-wraparound
autovacuums) a rare occurrence, for truly exceptional cases (e.g.,
user keeps canceling autovacuums, maybe due to automated script that
performs DDL). That has taken priority over other goals, for now.

There is a kind of virtuous circle here, where successive
non-aggressive autovacuums never fall behind on freezing, and so never
fail to advance relfrozenxid (there are never any
all_visible-but-not-all_frozen pages, and we can cope with not
acquiring a cleanup lock quite well). When VACUUM chooses to freeze a
tuple opportunistically, the frozen XIDs naturally cannot hold back
the final safe relfrozenxid for the relation. Opportunistic freezing
avoids setting all_visible (without setting all_frozen) in the
visibility map.
It's impossible for VACUUM to just set a page to
all_visible now, which seems like an essential part of making a decent
amount of relfrozenxid advancement take place in almost every VACUUM
operation.

Here is an example of what I'm calling a virtuous circle -- all
pgbench_history autovacuums look like this with the patch applied:

LOG: automatic vacuum of table "regression.public.pgbench_history": index scans: 0
    pages: 0 removed, 35503 remain, 31930 skipped using visibility map (89.94% of total)
    tuples: 0 removed, 5568687 remain (547976 newly frozen), 0 are dead but not yet removable
    removal cutoff: oldest xmin was 5570281, which is now 1177 xact IDs behind
    relfrozenxid: advanced by 546618 xact IDs, new value: 5565226
    index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
    I/O timings: read: 0.003 ms, write: 0.000 ms
    avg read rate: 0.068 MB/s, avg write rate: 0.068 MB/s
    buffer usage: 7169 hits, 1 misses, 1 dirtied
    WAL usage: 7043 records, 1 full page images, 6974928 bytes
    system usage: CPU: user: 0.10 s, system: 0.00 s, elapsed: 0.11 s

Note that relfrozenxid is almost the same as oldest xmin here. Note also
that the log output shows the number of tuples newly frozen. I see the
same general trends with *every* pgbench_history autovacuum. Actually,
with every autovacuum. The history table tends to have ultra-recent
relfrozenxid values, which isn't always what we see, but that
difference may not matter. As far as I can tell, we can expect
practically every table to have a relfrozenxid that would (at least
traditionally) be considered very safe/recent. Barring weird
application issues that make it totally impossible to advance
relfrozenxid (e.g., idle cursors that hold onto a buffer pin forever),
it seems as if relfrozenxid will now steadily march forward.
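Mechanically, "determined dynamically" amounts to something like the
following sketch (simplified -- the names are mine, and real XIDs need
wraparound-aware comparisons rather than a plain min()):

```python
# Sketch of dynamically determining the new relfrozenxid: keep a running
# minimum of the unfrozen XIDs actually observed during the scan.  A page
# skipped via the visibility map is only safe for this purpose when it is
# all-frozen, since an all-frozen page contains no unfrozen XIDs at all.

def compute_new_relfrozenxid(pages, oldest_xmin):
    """pages: iterable of (skipped_via_vm, all_frozen, unfrozen_xids)."""
    new_relfrozenxid = oldest_xmin    # nothing older observed yet
    for skipped_via_vm, all_frozen, unfrozen_xids in pages:
        if skipped_via_vm:
            if not all_frozen:
                return None           # unknown XIDs remain; cannot advance
            continue                  # all-frozen page holds no unfrozen XIDs
        for xid in unfrozen_xids:
            new_relfrozenxid = min(new_relfrozenxid, xid)
    return new_relfrozenxid
```

This also shows why opportunistic freezing helps: every tuple frozen during
the scan drops out of the minimum, and every all-frozen (rather than merely
all-visible) page can be skipped without giving up on advancing relfrozenxid.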
Sure,
relfrozenxid advancement might be held back by the occasional
inability to acquire a cleanup lock, but the effect isn't noticeable
over time; what are the chances that a cleanup lock won't be available
on the same page (with the same old XID) more than once or twice? The
odds of that happening become astronomically tiny, long before there
is any real danger (barring pathological cases).

In the past, we've always talked about opportunistic freezing as a way
of avoiding re-dirtying heap pages during successive VACUUM operations
-- especially as a way of lowering the total volume of WAL. While I
agree that that's important, I have deliberately ignored it for now,
preferring to focus on the relfrozenxid stuff, and smoothing out the
cost of freezing (avoiding big shocks from aggressive/anti-wraparound
autovacuums). I care more about stable performance than absolute
throughput, but even still I believe that the approach I've taken to
opportunistic freezing is probably too aggressive. But it's dead
simple, which will make it easier to understand and discuss the issue
of central importance. It may be possible to optimize the WAL-logging
used during freezing, getting the cost down to the point where
freezing early just isn't a concern. The current prototype adds extra
WAL overhead, to be sure, but even that's not wildly unreasonable (you
make some of it back on FPIs, depending on the workload -- especially
with tables like pgbench_history, where delaying freezing is a total
loss).

--
Peter Geoghegan
[Peter Geoghegan <pg@bowt.ie>, Wed, 15 Dec 2021 12:26:47 -0800, "Re: Removing more vacuumlazy.c special cases, relfrozenxid optimizations"]


On Thu, Dec 16, 2021 at 5:27 AM Peter Geoghegan <pg@bowt.ie> wrote:
>
> On Fri, Dec 10, 2021 at 1:48 PM Peter Geoghegan <pg@bowt.ie> wrote:
> > * I'm still working on the optimization that we discussed on this
> > thread: the optimization that allows the final relfrozenxid (that we
> > set in pg_class) to be determined dynamically, based on the actual
> > XIDs we observed in the table (we don't just naively use FreezeLimit).
>
> Attached is v4 of the patch series, which now includes this
> optimization, broken out into its own patch. In addition, it includes
> a prototype of opportunistic freezing.
>
> My emphasis here has been on making non-aggressive VACUUMs *always*
> advance relfrozenxid, outside of certain obvious edge cases. And so
> with all the patches applied, up to and including the opportunistic
> freezing patch, every autovacuum of every table manages to advance
> relfrozenxid during benchmarking -- usually to a fairly recent value.
> I've focussed on making aggressive VACUUMs (especially anti-wraparound
> autovacuums) a rare occurrence, for truly exceptional cases (e.g.,
> user keeps canceling autovacuums, maybe due to automated script that
> performs DDL).
> That has taken priority over other goals, for now.

Great!

I've looked at the 0001 patch and here are some comments:

@@ -535,8 +540,16 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
                                                           xidFullScanLimit);
     aggressive |= MultiXactIdPrecedesOrEquals(rel->rd_rel->relminmxid,
                                               mxactFullScanLimit);
+    skipwithvm = true;
     if (params->options & VACOPT_DISABLE_PAGE_SKIPPING)
+    {
+        /*
+         * Force aggressive mode, and disable skipping blocks using the
+         * visibility map (even those set all-frozen)
+         */
         aggressive = true;
+        skipwithvm = false;
+    }

     vacrel = (LVRelState *) palloc0(sizeof(LVRelState));

@@ -544,6 +557,7 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
     vacrel->rel = rel;
     vac_open_indexes(vacrel->rel, RowExclusiveLock, &vacrel->nindexes,
                      &vacrel->indrels);
+    vacrel->aggressive = aggressive;
     vacrel->failsafe_active = false;
     vacrel->consider_bypass_optimization = true;

How about adding skipwithvm to LVRelState too?

---
     /*
-     * The current block is potentially skippable; if we've seen a
-     * long enough run of skippable blocks to justify skipping it, and
-     * we're not forced to check it, then go ahead and skip.
-     * Otherwise, the page must be at least all-visible if not
-     * all-frozen, so we can set all_visible_according_to_vm = true.
+     * The current page can be skipped if we've seen a long enough run
+     * of skippable blocks to justify skipping it -- provided it's not
+     * the last page in the relation (according to rel_pages/nblocks).
+     *
+     * We always scan the table's last page to determine whether it
+     * has tuples or not, even if it would otherwise be skipped
+     * (unless we're skipping every single page in the relation). This
+     * avoids having lazy_truncate_heap() take access-exclusive lock
+     * on the table to attempt a truncation that just fails
+     * immediately because there are tuples on the last page.
      */
-    if (skipping_blocks && !FORCE_CHECK_PAGE())
+    if (skipping_blocks && blkno < nblocks - 1)

Why do we always need to scan the last page even if heap truncation is
disabled (or in the failsafe mode)?

Regards,

-- 
Masahiko Sawada
EDB: https://www.enterprisedb.com/
[Masahiko Sawada <sawada.mshk@gmail.com>, Fri, 17 Dec 2021 15:46:02 +0900]


On Thu, Dec 16, 2021 at 10:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> > My emphasis here has been on making non-aggressive VACUUMs *always*
> > advance relfrozenxid, outside of certain obvious edge cases. And so
> > with all the patches applied, up to and including the opportunistic
> > freezing patch, every autovacuum of every table manages to advance
> > relfrozenxid during benchmarking -- usually to a fairly recent value.
> > I've focussed on making aggressive VACUUMs (especially anti-wraparound
> > autovacuums) a rare occurrence, for truly exceptional cases (e.g.,
> > user keeps canceling autovacuums, maybe due to automated script that
> > performs DDL). That has taken priority over other goals, for now.
>
> Great!

Maybe this is a good time to revisit basic questions about VACUUM. I
wonder if we can get rid of some of the GUCs for VACUUM now.

Can we fully get rid of vacuum_freeze_table_age? Maybe even get rid of
vacuum_freeze_min_age, too? Freezing tuples is a maintenance task for
physical blocks, but we use logical units (XIDs).

We probably shouldn't be using any units, but using XIDs "feels wrong"
to me.
Even with my patch, it is theoretically possible that we won't
be able to advance relfrozenxid very much, because we cannot get a
cleanup lock on one single heap page with one old XID. But even in
this extreme case, how relevant is the "age" of this old XID, really?
What really matters is whether or not we can advance relfrozenxid in
time (with time to spare). And so the wraparound risk of the system is
not affected all that much by the age of the single oldest XID. The
risk mostly comes from how much total work we still need to do to
advance relfrozenxid. If the single old XID is quite old indeed (~1.5
billion XIDs), but there is only one, then we just have to freeze one
tuple to be able to safely advance relfrozenxid (maybe advance it by a
huge amount!). How long can it take to freeze one tuple, with the
freeze map, etc?

On the other hand, the risk may be far greater if we have *many*
tuples that are still unfrozen, whose XIDs are only "middle aged"
right now. The idea behind vacuum_freeze_min_age seems to be to be
lazy about work (tuple freezing) in the hope that we'll never need to
do it, but that seems obsolete now. (It probably made a little more
sense before the visibility map.)

Using XIDs makes sense for things like autovacuum_freeze_max_age,
because there we have to worry about wraparound and relfrozenxid
(whether or not we like it). But with this patch, and with everything
else (the failsafe, insert-driven autovacuums, everything we've done
over the last several years) I think that it might be time to increase
the autovacuum_freeze_max_age default. Maybe even to something as high
as 800 million transaction IDs, but certainly to 400 million. What do
you think? (Maybe don't answer just yet, something to think about.)

> + vacrel->aggressive = aggressive;
> vacrel->failsafe_active = false;
> vacrel->consider_bypass_optimization = true;
>
> How about adding skipwithvm to LVRelState too?

Agreed -- it's slightly better that way. Will change this.

> */
> - if (skipping_blocks && !FORCE_CHECK_PAGE())
> + if (skipping_blocks && blkno < nblocks - 1)
>
> Why do we always need to scan the last page even if heap truncation is
> disabled (or in the failsafe mode)?

My goal here was to keep the behavior from commit e8429082, "Avoid
useless truncation attempts during VACUUM", while simplifying things
around skipping heap pages via the visibility map (including removing
the FORCE_CHECK_PAGE() macro). Of course you're right that this
particular change that you have highlighted does change the behavior a
little -- now we will always treat the final page as a "scanned page",
except perhaps when 100% of all pages in the relation are skipped
using the visibility map.

This was a deliberate choice (and perhaps even a good choice!). I
think that avoiding accessing the last heap page like this isn't worth
the complexity. Note that we may already access heap pages (making
them "scanned pages") despite the fact that we know it's unnecessary:
the SKIP_PAGES_THRESHOLD test leads to this behavior (and we don't
even try to avoid wasting CPU cycles on these
not-skipped-but-skippable pages). So I think that the performance cost
for the last page isn't going to be noticeable.

However, now that I think about it, I wonder...what do you think of
SKIP_PAGES_THRESHOLD, in general? Is the optimal value still 32 today?
SKIP_PAGES_THRESHOLD hasn't changed since commit bf136cf6e3, shortly
after the original visibility map implementation was committed in
2009.
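(To make the behavior in question concrete, here is a toy model of
SKIP_PAGES_THRESHOLD-style skipping -- an illustration with made-up
helper names, not the actual vacuumlazy.c code. As a simplification it
always treats the final heap page as scanned, even in the
100%-skippable case:)

```c
#include <stdbool.h>

#define SKIP_PAGES_THRESHOLD	32	/* same value as vacuumlazy.c */
#define MAX_BLOCKS				1024

/*
 * Toy model: given per-block "skippable according to the VM" flags,
 * return how many blocks a VACUUM-style scan would actually read.  A
 * run of skippable blocks is only skipped once it reaches
 * SKIP_PAGES_THRESHOLD; shorter runs are read anyway (the wasted CPU
 * cycles mentioned above).  The final block is always read, per the
 * lazy_truncate_heap() consideration (simplified here).
 */
static int
blocks_scanned(const bool *skippable, int nblocks)
{
	int			scanned = 0;
	int			blkno = 0;

	while (blkno < nblocks)
	{
		int			run = 0;

		/* measure the run of skippable blocks starting at blkno */
		while (blkno + run < nblocks && skippable[blkno + run])
			run++;

		if (run >= SKIP_PAGES_THRESHOLD)
		{
			if (blkno + run == nblocks)
			{
				scanned++;		/* run reaches the end: still read the last block */
				blkno = nblocks;
			}
			else
				blkno += run;	/* skip the whole run */
		}
		else
		{
			scanned += run;		/* short runs are read anyway */
			blkno += run;
		}

		if (blkno < nblocks)
		{
			scanned++;			/* the non-skippable block that ended the run */
			blkno++;
		}
	}
	return scanned;
}

/* convenience wrapper: nblocks blocks, all with the same flag */
static int
blocks_scanned_uniform(bool flag, int nblocks)
{
	bool		flags[MAX_BLOCKS];

	for (int i = 0; i < nblocks; i++)
		flags[i] = flag;
	return blocks_scanned(flags, nblocks);
}
```

(So a 64-block skippable run costs one page read, while a 16-block run
is read in full, exactly because it falls under the threshold.)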
The idea that it helps us to advance relfrozenxid outside of
aggressive VACUUMs (per commit message from bf136cf6e3) seems like it
might no longer matter with the patch -- because now we won't ever set
a page all-visible but not all-frozen. Plus the idea that we need to
do all this work just to get readahead from the OS
seems...questionable.

-- 
Peter Geoghegan
[Peter Geoghegan <pg@bowt.ie>, Fri, 17 Dec 2021 18:29:21 -0800]


On Sat, Dec 18, 2021 at 11:29 AM Peter Geoghegan <pg@bowt.ie> wrote:
>
> On Thu, Dec 16, 2021 at 10:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> > > My emphasis here has been on making non-aggressive VACUUMs *always*
> > > advance relfrozenxid, outside of certain obvious edge cases. And so
> > > with all the patches applied, up to and including the opportunistic
> > > freezing patch, every autovacuum of every table manages to advance
> > > relfrozenxid during benchmarking -- usually to a fairly recent value.
> > > I've focussed on making aggressive VACUUMs (especially anti-wraparound
> > > autovacuums) a rare occurrence, for truly exceptional cases (e.g.,
> > > user keeps canceling autovacuums, maybe due to automated script that
> > > performs DDL). That has taken priority over other goals, for now.
> >
> > Great!
>
> Maybe this is a good time to revisit basic questions about VACUUM. I
> wonder if we can get rid of some of the GUCs for VACUUM now.
>
> Can we fully get rid of vacuum_freeze_table_age?

Does it mean that a vacuum always is an aggressive vacuum? If
opportunistic freezing works well on all tables, we might no longer
need vacuum_freeze_table_age. But I’m not sure that’s true since the
cost of freezing tuples is not 0.

> We probably shouldn't be using any units, but using XIDs "feels wrong"
> to me.
> Even with my patch, it is theoretically possible that we won't
> be able to advance relfrozenxid very much, because we cannot get a
> cleanup lock on one single heap page with one old XID. But even in
> this extreme case, how relevant is the "age" of this old XID, really?
> What really matters is whether or not we can advance relfrozenxid in
> time (with time to spare). And so the wraparound risk of the system is
> not affected all that much by the age of the single oldest XID. The
> risk mostly comes from how much total work we still need to do to
> advance relfrozenxid. If the single old XID is quite old indeed (~1.5
> billion XIDs), but there is only one, then we just have to freeze one
> tuple to be able to safely advance relfrozenxid (maybe advance it by a
> huge amount!). How long can it take to freeze one tuple, with the
> freeze map, etc?

I think that that's true for (mostly) static tables. But regarding
constantly-updated tables, since autovacuum runs based on the number
of garbage tuples (or inserted tuples) and on how old the relfrozenxid
is, if an autovacuum could not advance the relfrozenxid because it
could not get a cleanup lock on the page that has the single oldest
XID, it's likely that when autovacuum runs next time it will have to
process other pages too, since the page will get dirty enough.

It might be a good idea to remember pages where we could not get a
cleanup lock somewhere, and revisit them after index cleanup. While
revisiting the pages, we don’t prune the page but only freeze tuples.

>
> On the other hand, the risk may be far greater if we have *many*
> tuples that are still unfrozen, whose XIDs are only "middle aged"
> right now. The idea behind vacuum_freeze_min_age seems to be to be
> lazy about work (tuple freezing) in the hope that we'll never need to
> do it, but that seems obsolete now. (It probably made a little more
> sense before the visibility map.)

Why is it obsolete now? I guess that it's still valid depending on the
case, for example, heavily updated tables.

>
> Using XIDs makes sense for things like autovacuum_freeze_max_age,
> because there we have to worry about wraparound and relfrozenxid
> (whether or not we like it). But with this patch, and with everything
> else (the failsafe, insert-driven autovacuums, everything we've done
> over the last several years) I think that it might be time to increase
> the autovacuum_freeze_max_age default. Maybe even to something as high
> as 800 million transaction IDs, but certainly to 400 million. What do
> you think? (Maybe don't answer just yet, something to think about.)

I don’t have an objection to increasing autovacuum_freeze_max_age for
now. One of my concerns with anti-wraparound vacuums is that too many
tables (or several large tables) will reach autovacuum_freeze_max_age
at once, using up autovacuum slots and preventing autovacuums from
being launched on tables that are being heavily updated. Given these
works, expanding the gap between vacuum_freeze_table_age and
autovacuum_freeze_max_age would give the tables a better chance to
advance their relfrozenxid by an aggressive vacuum instead of an
anti-wraparound aggressive vacuum. 400 million seems to be a good
start.

>
> > + vacrel->aggressive = aggressive;
> > vacrel->failsafe_active = false;
> > vacrel->consider_bypass_optimization = true;
> >
> > How about adding skipwithvm to LVRelState too?
>
> Agreed -- it's slightly better that way. Will change this.
>
> > */
> > - if (skipping_blocks && !FORCE_CHECK_PAGE())
> > + if (skipping_blocks && blkno < nblocks - 1)
> >
> > Why do we always need to scan the last page even if heap truncation is
> > disabled (or in the failsafe mode)?
>
> My goal here was to keep the behavior from commit e8429082, "Avoid
> useless truncation attempts during VACUUM", while simplifying things
> around skipping heap pages via the visibility map (including removing
> the FORCE_CHECK_PAGE() macro). Of course you're right that this
> particular change that you have highlighted does change the behavior a
> little -- now we will always treat the final page as a "scanned page",
> except perhaps when 100% of all pages in the relation are skipped
> using the visibility map.

Agreed.

>
> However, now that I think about it, I wonder...what do you think of
> SKIP_PAGES_THRESHOLD, in general? Is the optimal value still 32 today?
> SKIP_PAGES_THRESHOLD hasn't changed since commit bf136cf6e3, shortly
> after the original visibility map implementation was committed in
> 2009. The idea that it helps us to advance relfrozenxid outside of
> aggressive VACUUMs (per commit message from bf136cf6e3) seems like it
> might no longer matter with the patch -- because now we won't ever set
> a page all-visible but not all-frozen. Plus the idea that we need to
> do all this work just to get readahead from the OS
> seems...questionable.

Given the opportunistic freezing, that's true, but I'm concerned about
whether opportunistic freezing always works well on all tables, since
freezing tuples is not 0 cost.

Regards,

--
Masahiko Sawada
EDB: https://www.enterprisedb.com/
[Masahiko Sawada <sawada.mshk@gmail.com>, Tue, 21 Dec 2021 13:28:52 +0900]


On Mon, Dec 20, 2021 at 8:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> > Can we fully get rid of vacuum_freeze_table_age?
>
> Does it mean that a vacuum always is an aggressive vacuum?

No. Just somewhat more like one. Still no waiting for cleanup locks,
though. Also, autovacuum is still cancelable (that's technically from
anti-wraparound VACUUM, but you know what I mean). And there shouldn't
be a noticeable difference in terms of how many blocks can be skipped
using the VM.

> If opportunistic freezing works well on all tables, we might no longer
> need vacuum_freeze_table_age. But I’m not sure that’s true since the
> cost of freezing tuples is not 0.

That's true, of course, but right now the only goal of opportunistic
freezing is to advance relfrozenxid in every VACUUM. It needs to be
shown to be worth it, of course. But let's assume that it is worth it,
for a moment (perhaps only because we optimize freezing itself in
passing) -- then there is little use for vacuum_freeze_table_age, that
I can see.

> I think that that's true for (mostly) static tables.
> But regarding
> constantly-updated tables, since autovacuum runs based on the number
> of garbage tuples (or inserted tuples) and how old the relfrozenxid is
> if an autovacuum could not advance the relfrozenxid because it could
> not get a cleanup lock on the page that has the single oldest XID,
> it's likely that when autovacuum runs next time it will have to
> process other pages too since the page will get dirty enough.

I'm not arguing that the age of the single oldest XID is *totally*
irrelevant. Just that it's typically much less important than the
total amount of work we'd have to do (freezing) to be able to advance
relfrozenxid.

In any case, the extreme case where we just cannot get a cleanup lock
on one particular page with an old XID is probably very rare.

> It might be a good idea that we remember pages where we could not get
> a cleanup lock somewhere and revisit them after index cleanup. While
> revisiting the pages, we don’t prune the page but only freeze tuples.

Maybe, but I think that it would make more sense to not use
FreezeLimit for that at all. In an aggressive VACUUM (where we might
actually have to wait for a cleanup lock), why should we wait once the
age is over vacuum_freeze_min_age (usually 50 million XIDs)? The
official answer is "because we need to advance relfrozenxid". But why
not accept a much older relfrozenxid that is still sufficiently
young/safe, in order to avoid waiting for a cleanup lock?

In other words, what if our approach of "being diligent about
advancing relfrozenxid" makes the relfrozenxid problem worse, not
better? The problem with "being diligent" is that it is defined by
FreezeLimit (which is more or less the same thing as
vacuum_freeze_min_age), which is supposed to be about which tuples we
will freeze. That's a very different thing to how old relfrozenxid
should be or can be (after an aggressive VACUUM finishes).

> > On the other hand, the risk may be far greater if we have *many*
> > tuples that are still unfrozen, whose XIDs are only "middle aged"
> > right now. The idea behind vacuum_freeze_min_age seems to be to be
> > lazy about work (tuple freezing) in the hope that we'll never need to
> > do it, but that seems obsolete now. (It probably made a little more
> > sense before the visibility map.)
>
> Why is it obsolete now? I guess that it's still valid depending on the
> cases, for example, heavily updated tables.

Because after the 9.6 freezemap work we'll often set the all-visible
bit in the VM, but not the all-frozen bit (unless we have the
opportunistic freezing patch applied, which specifically avoids that).
When that happens, affected heap pages will still have
older-than-vacuum_freeze_min_age XIDs after VACUUM runs, until we get
to an aggressive VACUUM. There could be many VACUUMs before the
aggressive VACUUM.

This "freezing cliff" seems like it might be a big problem, in
general. That's what I'm trying to address here.

Either way, the system doesn't really respect vacuum_freeze_min_age in
the way that it did before 9.6 -- which is what I meant by "obsolete".

> I don’t have an objection to increasing autovacuum_freeze_max_age for
> now. One of my concerns with anti-wraparound vacuums is that too many
> tables (or several large tables) will reach autovacuum_freeze_max_age
> at once, using up autovacuum slots and preventing autovacuums from
> being launched on tables that are heavily being updated.

I think that the patch helps with that, actually -- there tends to be
"natural variation" in the relfrozenxid age of each table, which comes
from per-table workload characteristics.

> Given these
> works, expanding the gap between vacuum_freeze_table_age and
> autovacuum_freeze_max_age would have better chances for the tables to
> advance its relfrozenxid by an aggressive vacuum instead of an
> anti-wraparound-aggressive vacuum. 400 million seems to be a good
> start.

The idea behind getting rid of vacuum_freeze_table_age (not to be
confused with the other idea about getting rid of vacuum_freeze_min_age)
is this: with the patch series, we only tend to get an anti-wraparound
VACUUM in extreme and relatively rare cases. For example, we will get
aggressive anti-wraparound VACUUMs on tables that *never* grow, but
constantly get HOT updates (e.g. the pgbench_accounts table with heap
fill factor reduced to 90). We won't really be able to use the VM when
this happens, either.

With tables like this -- tables that still get aggressive VACUUMs --
maybe the patch doesn't make a huge difference. But that's truly the
extreme case -- that is true only because there is already zero chance
of there being a non-aggressive VACUUM. We'll get aggressive
anti-wraparound VACUUMs every time we reach autovacuum_freeze_max_age,
again and again -- no change, really.

But since it's only these extreme cases that continue to get
aggressive VACUUMs, why do we still need vacuum_freeze_table_age? It
helps right now (without the patch) by "escalating" a regular VACUUM
to an aggressive one. But the cases where we still expect an aggressive
VACUUM (with the patch) are the cases where there is zero chance of
that happening. Almost by definition.

> Given the opportunistic freezing, that's true but I'm concerned
> whether opportunistic freezing always works well on all tables since
> freezing tuples is not 0 cost.

That is the big question for this patch.

-- 
Peter Geoghegan
[Peter Geoghegan <pg@bowt.ie>, Mon, 20 Dec 2021 21:35:08 -0800]


On Mon, Dec 20, 2021 at 9:35 PM Peter Geoghegan <pg@bowt.ie> wrote:
> > Given the opportunistic freezing, that's true but I'm concerned
> > whether opportunistic freezing always works well on all tables since
> > freezing tuples is not 0 cost.
>
> That is the big question for this patch.

Attached is a mechanical rebase of the patch series. This new version
just fixes bitrot, caused by Masahiko's recent vacuumlazy.c
refactoring work. In other words, this revision has no significant
changes compared to the v4 that I posted back in late December -- just
want to keep CFTester green.

I still have plenty of work to do here. Especially with the final
patch (the v5-0005-* "freeze early" patch), which is generally more
speculative than the other patches. I'm playing catch-up now, since I
just returned from vacation.

--
Peter Geoghegan
[Peter Geoghegan <pg@bowt.ie>, Thu, 6 Jan 2022 11:23:57 -0800]


On Fri, Dec 17, 2021 at 9:30 PM Peter Geoghegan <pg@bowt.ie> wrote:
> Can we fully get rid of vacuum_freeze_table_age? Maybe even get rid of
> vacuum_freeze_min_age, too?
> Freezing tuples is a maintenance task for
> physical blocks, but we use logical units (XIDs).

I don't see how we can get rid of these. We know that catastrophe will
ensue if we fail to freeze old XIDs for a sufficiently long time ---
where sufficiently long has to do with the number of XIDs that have
been subsequently consumed. So it's natural to decide whether or not
we're going to wait for cleanup locks on pages on the basis of how old
the XIDs they contain actually are. Admittedly, that decision doesn't
need to be made at the start of the vacuum, as we do today. We could
happily skip waiting for a cleanup lock on pages that contain only
newer XIDs, but if there is a page that both contains an old XID and
stays pinned for a long time, we eventually have to sit there and wait
for that pin to be released. And the best way to decide when to switch
to that strategy is really based on the age of that XID, at least as I
see it, because it is the age of that XID reaching 2 billion that is
going to kill us.

I think vacuum_freeze_min_age also serves a useful purpose: it
prevents us from freezing data that's going to be modified again or
even deleted in the near future. Since we can't know the future, we
must base our decision on the assumption that the future will be like
the past: if the page hasn't been modified for a while, then we should
assume it's not likely to be modified again soon; otherwise not. If we
knew the time at which the page had last been modified, it would be
very reasonable to use that here - say, freeze the XIDs if the page
hasn't been touched in an hour, or whatever. But since we lack such
timestamps the XID age is the closest proxy we have.

> The
> risk mostly comes from how much total work we still need to do to
> advance relfrozenxid. If the single old XID is quite old indeed (~1.5
> billion XIDs), but there is only one, then we just have to freeze one
> tuple to be able to safely advance relfrozenxid (maybe advance it by a
> huge amount!). How long can it take to freeze one tuple, with the
> freeze map, etc?

I don't really see any reason for optimism here. There could be a lot
of unfrozen pages in the relation, and we'd have to troll through all
of those in order to find that single old XID. Moreover, there is
nothing whatsoever to focus autovacuum's attention on that single old
XID rather than anything else. Nothing in the autovacuum algorithm
will cause it to focus its efforts on that single old XID at a time
when there's no pin on the page, or at a time when that XID becomes
the thing that's holding back vacuuming throughout the cluster. A lot
of vacuum problems that users experience today would be avoided if
autovacuum had perfect knowledge of what it ought to be prioritizing
at any given time, or even some knowledge. But it doesn't, and is
often busy fiddling while Rome burns.

IOW, the time that it takes to freeze that one tuple *in theory* might
be small. But in practice it may be very large, because we won't
necessarily get around to it on any meaningful time frame.

-- 
Robert Haas
EDB: http://www.enterprisedb.com
[Robert Haas <robertmhaas@gmail.com>, Thu, 6 Jan 2022 15:54:11 -0500]


On Thu, Jan 6, 2022 at 12:54 PM Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, Dec 17, 2021 at 9:30 PM Peter Geoghegan <pg@bowt.ie> wrote:
> > Can we fully get rid of vacuum_freeze_table_age? Maybe even get rid of
> > vacuum_freeze_min_age, too? Freezing tuples is a maintenance task for
> > physical blocks, but we use logical units (XIDs).
>
> I don't see how we can get rid of these.
> We know that catastrophe will
> ensue if we fail to freeze old XIDs for a sufficiently long time ---
> where sufficiently long has to do with the number of XIDs that have
> been subsequently consumed.

I don't really disagree with anything you've said, I think. There are
a few subtleties here. I'll try to tease them apart.

I agree that we cannot do without something like vacrel->FreezeLimit
for the foreseeable future -- but the closely related GUC
(vacuum_freeze_min_age) is another matter. Although everything you've
said in favor of the GUC seems true, the GUC is not a particularly
effective (or natural) way of constraining the problem. It just
doesn't make sense as a tunable.

One obvious reason for this is that the opportunistic freezing stuff
is expected to be the thing that usually forces freezing -- not
vacuum_freeze_min_age, nor FreezeLimit, nor any other XID-based
cutoff. As you more or less pointed out yourself, we still need
FreezeLimit as a backstop mechanism. But the value of FreezeLimit can
just come from autovacuum_freeze_max_age/2 in all cases (no separate
GUC), or something along those lines. We don't particularly expect the
value of FreezeLimit to matter, at least most of the time. It should
only noticeably affect our behavior during anti-wraparound VACUUMs,
which become rare with the patch (e.g. my pgbench_accounts example
upthread). Most individual tables will never get even one
anti-wraparound VACUUM -- it just doesn't ever come for most tables in
practice.

My big issue with vacuum_freeze_min_age is that it doesn't really work
with the freeze map work in 9.6, which creates problems that I'm
trying to address by freezing early and so on. After all, HEAD (and
all stable branches) can easily set a page to all-visible (but not
all-frozen) in the VM, meaning that the page's tuples won't be
considered for freezing until the next aggressive VACUUM.
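(To make the "FreezeLimit can just come from autovacuum_freeze_max_age/2"
idea concrete, a hypothetical sketch -- the function name and the
clamping details here are mine, loosely in the style of the existing
vacuum_set_xid_limits() cutoff computation, not anything from the
patch:)

```c
#include <stdint.h>

typedef uint32_t TransactionId;

#define FirstNormalTransactionId	((TransactionId) 3)
#define TransactionIdIsNormal(xid)	((xid) >= FirstNormalTransactionId)

/*
 * Hypothetical backstop freeze cutoff: instead of consulting a
 * separate vacuum_freeze_min_age GUC, derive FreezeLimit as half of
 * autovacuum_freeze_max_age behind the next XID to be assigned,
 * clamping so the cutoff never lands on a permanent/bootstrap XID.
 */
static TransactionId
backstop_freeze_limit(TransactionId next_xid, uint32_t freeze_max_age)
{
	TransactionId limit = next_xid - (freeze_max_age / 2);

	if (!TransactionIdIsNormal(limit))
		limit = FirstNormalTransactionId;
	return limit;
}
```

(So with the current autovacuum_freeze_max_age default of 200 million,
tuples would only be force-frozen once they are 100 million XIDs old,
with no second GUC to tune.)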
This means\nthat vacuum_freeze_min_age is already frequently ignored by the\nimplementation -- it's conditioned on other things that are practically\nimpossible to predict.\n\nCurious about your thoughts on this existing issue with\nvacuum_freeze_min_age. I am concerned about the \"freezing cliff\" that\nit creates.\n\n> So it's natural to decide whether or not\n> we're going to wait for cleanup locks on pages on the basis of how old\n> the XIDs they contain actually are.\n\nI agree, but again, it's only a backstop. With the patch we'd have to\nbe rather unlucky to ever need to wait like this.\n\nWhat are the chances that we keep failing to freeze an old XID from\none particular page, again and again? My testing indicates that it's a\nnegligible concern in practice (barring pathological cases with idle\ncursors, etc).\n\n> I think vacuum_freeze_min_age also serves a useful purpose: it\n> prevents us from freezing data that's going to be modified again or\n> even deleted in the near future. Since we can't know the future, we\n> must base our decision on the assumption that the future will be like\n> the past: if the page hasn't been modified for a while, then we should\n> assume it's not likely to be modified again soon; otherwise not.\n\nBut the \"freeze early\" heuristics work a bit like that anyway. We\nwon't freeze all the tuples on a whole heap page early if we won't\notherwise set the heap page to all-visible (not all-frozen) in the VM\nanyway.\n\n> If we\n> knew the time at which the page had last been modified, it would be\n> very reasonable to use that here - say, freeze the XIDs if the page\n> hasn't been touched in an hour, or whatever. But since we lack such\n> timestamps the XID age is the closest proxy we have.\n\nXID age is a *terrible* proxy. 
The age of an XID in a tuple header may\nadvance quickly, even when nobody modifies the same table at all.\n\nI concede that it is true that we are (in some sense) \"gambling\" by\nfreezing early -- we may end up freezing a tuple that we subsequently\nupdate anyway. But aren't we also \"gambling\" by *not* freezing early?\nBy not freezing, we risk getting into \"freezing debt\" that will have\nto be paid off in one ruinously large installment. I would much rather\n\"gamble\" on something where we can tolerate consistently \"losing\" than\ngamble on something where I cannot ever afford to lose (even if it's\nmuch less likely that I'll lose during any given VACUUM operation).\n\nBesides all this, I think that we have a rather decent chance of\ncoming out ahead in practice by freezing early. In practice the\nmarginal cost of freezing early is consistently pretty low.\nCost-control-driven (as opposed to need-driven) freezing is *supposed*\nto be cheaper, of course. And like it or not, freezing is really just part of\nthe cost of storing data using Postgres (for the time being, at least).\n\n> > The\n> > risk mostly comes from how much total work we still need to do to\n> > advance relfrozenxid. If the single old XID is quite old indeed (~1.5\n> > billion XIDs), but there is only one, then we just have to freeze one\n> > tuple to be able to safely advance relfrozenxid (maybe advance it by a\n> > huge amount!). How long can it take to freeze one tuple, with the\n> > freeze map, etc?\n>\n> I don't really see any reason for optimism here.\n\n> IOW, the time that it takes to freeze that one tuple *in theory* might\n> be small. But in practice it may be very large, because we won't\n> necessarily get around to it on any meaningful time frame.\n\nOn second thought I agree that my specific example of 1.5 billion XIDs\nwas a little too optimistic of me. But 50 million XIDs (i.e. the\nvacuum_freeze_min_age default) is too pessimistic. 
The important point\nis that FreezeLimit could plausibly become nothing more than a\nbackstop mechanism, with the design from the patch series -- something\nthat typically has no effect on what tuples actually get frozen.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 6 Jan 2022 14:45:51 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Jan 6, 2022 at 2:45 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> But the \"freeze early\" heuristics work a bit like that anyway. We\n> won't freeze all the tuples on a whole heap page early if we won't\n> otherwise set the heap page to all-visible (not all-frozen) in the VM\n> anyway.\n\nI believe that applications tend to update rows according to\npredictable patterns. Andy Pavlo made an observation about this at one\npoint:\n\nhttps://youtu.be/AD1HW9mLlrg?t=3202\n\nI think that we don't do a good enough job of keeping logically\nrelated tuples (tuples inserted around the same time) together, on the\nsame original heap page, which motivated a lot of my experiments with\nthe FSM from last year. Even still, it seems like a good idea for us\nto err in the direction of assuming that tuples on the same heap page\nare logically related. The tuples should all be frozen together when\npossible. 
And *not* frozen early when the heap page as a whole can't\nbe frozen (barring cases with one *much* older XID before\nFreezeLimit).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 6 Jan 2022 15:58:48 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Jan 6, 2022 at 5:46 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> One obvious reason for this is that the opportunistic freezing stuff\n> is expected to be the thing that usually forces freezing -- not\n> vacuum_freeze_min_age, nor FreezeLimit, nor any other XID-based\n> cutoff. As you more or less pointed out yourself, we still need\n> FreezeLimit as a backstop mechanism. But the value of FreezeLimit can\n> just come from autovacuum_freeze_max_age/2 in all cases (no separate\n> GUC), or something along those lines. We don't particularly expect the\n> value of FreezeLimit to matter, at least most of the time. It should\n> only noticeably affect our behavior during anti-wraparound VACUUMs,\n> which become rare with the patch (e.g. my pgbench_accounts example\n> upthread). Most individual tables will never get even one\n> anti-wraparound VACUUM -- it just doesn't ever come for most tables in\n> practice.\n\nThis seems like a weak argument. Sure, you COULD hard-code the limit\nto be autovacuum_freeze_max_age/2 rather than making it a separate\ntunable, but I don't think it's better. I am generally very skeptical\nabout the idea of using the same GUC value for multiple purposes,\nbecause it often turns out that the optimal value for one purpose is\ndifferent than the optimal value for some other purpose. For example,\nthe optimal amount of memory for a hash table is likely different than\nthe optimal amount for a sort, which is why we now have\nhash_mem_multiplier. 
When it's not even the same value that's being\nused in both places, but the original value in one place and a value\nderived from some formula in the other, the chances of things working\nout are even less.\n\nI feel generally that a lot of the argument you're making here\nsupposes that tables are going to get vacuumed regularly. I agree that\nIF tables are being vacuumed on a regular basis, and if as part of\nthat we always push relfrozenxid forward as far as we can, we will\nrarely have a situation where aggressive strategies to avoid\nwraparound are required. However, I disagree strongly with the idea\nthat we can assume that tables will get vacuumed regularly. That can\nfail to happen for all sorts of reasons. One of the common ones is a\npoor choice of autovacuum configuration. The most common problem in my\nexperience is a cost limit that is too low to permit the amount of\nvacuuming that is actually required, but other kinds of problems like\nnot enough workers (so tables get starved), too many workers (so the\ncost limit is being shared between many processes), autovacuum=off\neither globally or on one table (because of ... reasons),\nautovacuum_vacuum_insert_threshold = -1 plus not many updates (so\nnothing ever triggers the vacuum), autovacuum_naptime=1d (actually seen\nin the real world! ... and, no, it didn't work well), or stats\ncollector problems are all possible. We can *hope* that there are\ngoing to be regular vacuums of the table long before wraparound\nbecomes a danger, but realistically, we better not assume that in our\nchoice of algorithms, because the real world is a messy place where\nall sorts of crazy things happen.\n\nNow, I agree with you in part: I don't think it's obvious that it's\nuseful to tune vacuum_freeze_table_age. 
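To make that list of failure modes concrete: each one is a single setting. A hedged postgresql.conf sketch, where the values are invented examples of the problems described above, not recommendations:

```ini
# Each of these, on its own, can starve tables of routine vacuuming:
autovacuum_vacuum_cost_limit = 50        # far too low for the vacuum throughput needed
autovacuum_max_workers = 1               # tables get starved behind large ones
autovacuum = off                         # "because of ... reasons"
autovacuum_vacuum_insert_threshold = -1  # insert-only tables never trigger a vacuum
autovacuum_naptime = 1d                  # actually seen in the real world
```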
When I advise customers on how\nto fix vacuum problems, I am usually telling them to increase\nautovacuum_vacuum_cost_limit, possibly also with an increase in\nautovacuum_workers; or to increase or decrease\nautovacuum_freeze_max_age depending on which problem they have; or\noccasionally to adjust settings like autovacuum_naptime. It doesn't\noften seem to be necessary to change vacuum_freeze_table_age or, for\nthat matter, vacuum_freeze_min_age. But if we remove them and then\ndiscover scenarios where tuning them would have been useful, we'll\nhave no options for fixing PostgreSQL systems in the field. Waiting\nfor the next major release in such a scenario, or even the next minor\nrelease, is not good. We should be VERY conservative about removing\nexisting settings if there's any chance that somebody could use them\nto tune their way out of trouble.\n\n> My big issue with vacuum_freeze_min_age is that it doesn't really work\n> with the freeze map work in 9.6, which creates problems that I'm\n> trying to address by freezing early and so on. After all, HEAD (and\n> all stable branches) can easily set a page to all-visible (but not\n> all-frozen) in the VM, meaning that the page's tuples won't be\n> considered for freezing until the next aggressive VACUUM. This means\n> that vacuum_freeze_min_age is already frequently ignored by the\n> implementation -- it's conditioned on other things that are practically\n> impossible to predict.\n>\n> Curious about your thoughts on this existing issue with\n> vacuum_freeze_min_age. I am concerned about the \"freezing cliff\" that\n> it creates.\n\nSo, let's see: if we see a page where the tuples are all-visible and\nwe seize the opportunity to freeze it, we can spare ourselves the need\nto ever visit that page again (unless it gets modified). But if we\nonly mark it all-visible and leave the freezing for later, the next\naggressive vacuum will have to scan and dirty the page. 
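The trade-off just described reduces to a small page-level decision rule. As a hedged sketch (a simplified model with invented names, not actual PostgreSQL code):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t xid_t;

/*
 * Decide whether VACUUM should freeze all of a page's tuples now.
 * "will_mark_all_visible" means pruning left only tuples visible to
 * everyone, so we are about to dirty the page and write WAL to set
 * the all-visible bit anyway.
 */
bool
should_freeze_page(bool will_mark_all_visible,
                   xid_t oldest_tuple_xid,
                   xid_t freeze_limit)
{
    /*
     * Backstop: tuples older than freeze_limit must be frozen, or
     * relfrozenxid can never be advanced past them.  (Plain < stands
     * in for PostgreSQL's circular XID comparison here.)
     */
    if (oldest_tuple_xid < freeze_limit)
        return true;

    /*
     * Opportunistic case: since the page is being dirtied and WAL
     * logged regardless, freezing now also lets the all-frozen bit be
     * set, sparing a future aggressive VACUUM a visit - i.e. behave
     * as if vacuum_freeze_min_age were 0.
     */
    if (will_mark_all_visible)
        return true;

    /* Otherwise leave recently modified tuples alone for now. */
    return false;
}
```

This is only the shape of the heuristic under discussion; the actual criteria in the patch series are still in flux.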
I'm prepared\nto believe that it's worth the cost of freezing the page in that\nscenario. We've already dirtied the page and written some WAL and\nmaybe generated an FPW, so doing the rest of the work now rather than\nsaving it until later seems likely to be a win. I think it's OK to\nbehave, in this situation, as if vacuum_freeze_min_age=0.\n\nThere's another situation in which vacuum_freeze_min_age could apply,\nthough: suppose the page isn't all-visible yet. I'd argue that in that\ncase we don't want to run around freezing stuff unless it's quite old\n- like older than vacuum_freeze_table_age, say. Because we know we're\ngoing to have to revisit this page in the next vacuum anyway, and\nexpending effort to freeze tuples that may be about to be modified\nagain doesn't seem prudent. So, hmm, on further reflection, maybe it's\nOK to remove vacuum_freeze_min_age. But if we do, then I think we had\nbetter carefully distinguish between the case where the page can\nthereby be marked all-frozen and the case where it cannot. I guess you\nsay the same, further down.\n\n> > So it's natural to decide whether or not\n> > we're going to wait for cleanup locks on pages on the basis of how old\n> > the XIDs they contain actually are.\n>\n> I agree, but again, it's only a backstop. With the patch we'd have to\n> be rather unlucky to ever need to wait like this.\n>\n> What are the chances that we keep failing to freeze an old XID from\n> one particular page, again and again? My testing indicates that it's a\n> negligible concern in practice (barring pathological cases with idle\n> cursors, etc).\n\nI mean, those kinds of pathological cases happen *all the time*. Sure,\nthere are plenty of users who don't leave cursors open. But the ones\nwho do don't leave them around for short periods of time on randomly\nselected pages of the table. 
They are disproportionately likely to\nleave them on the same table pages over and over, just like data can't\nin general be assumed to be uniformly accessed. And not uncommonly,\nthey leave them around until the snow melts.\n\nAnd we need to worry about those kinds of users, actually much more\nthan we need to worry about users doing normal things. Honestly,\nautovacuum on a system where things are mostly \"normal\" - no\nlong-running transactions, adequate resources for autovacuum to do its\njob, reasonable configuration settings - isn't that bad. It's true\nthat there are people who get surprised by an aggressive autovacuum\nkicking off unexpectedly, but it's usually the first one during the\ncluster lifetime (which is typically the biggest, since the initial\nload tends to be bigger than later ones) and it's usually annoying but\nsurvivable. The places where autovacuum becomes incredibly frustrating\nare the pathological cases. When insufficient resources are available\nto complete the work in a timely fashion, or difficult trade-offs have\nto be made, autovacuum is too dumb to make the right choices. And even\nif you call your favorite PostgreSQL support provider and they provide\nan expert, once it gets behind, autovacuum isn't very tractable: it\nwill insist on vacuuming everything, right now, in an order that it\nchooses, and it's not going to take any nonsense from some\nhuman being who thinks they might have some useful advice to provide!\n\n> But the \"freeze early\" heuristics work a bit like that anyway. We\n> won't freeze all the tuples on a whole heap page early if we won't\n> otherwise set the heap page to all-visible (not all-frozen) in the VM\n> anyway.\n\nHmm, I didn't realize that we had that. Is that an existing thing or\nsomething new you're proposing to do? If existing, where is it?\n\n> > IOW, the time that it takes to freeze that one tuple *in theory* might\n> > be small. 
But in practice it may be very large, because we won't\n> > necessarily get around to it on any meaningful time frame.\n>\n> On second thought I agree that my specific example of 1.5 billion XIDs\n> was a little too optimistic of me. But 50 million XIDs (i.e. the\n> vacuum_freeze_min_age default) is too pessimistic. The important point\n> is that FreezeLimit could plausibly become nothing more than a\n> backstop mechanism, with the design from the patch series -- something\n> that typically has no effect on what tuples actually get frozen.\n\nI agree that it's OK for this to become a purely backstop mechanism\n... but again, I think that the design of such backstop mechanisms\nshould be done as carefully as we know how, because users seem to hit\nthe backstop all the time. We want it to be made of, you know, nylon\ntwine, rather than, say, sharp nails. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Jan 2022 15:24:20 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Jan 7, 2022 at 12:24 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> This seems like a weak argument. Sure, you COULD hard-code the limit\n> to be autovacuum_freeze_max_age/2 rather than making it a separate\n> tunable, but I don't think it's better. I am generally very skeptical\n> about the idea of using the same GUC value for multiple purposes,\n> because it often turns out that the optimal value for one purpose is\n> different than the optimal value for some other purpose.\n\nI thought I was being conservative by suggesting\nautovacuum_freeze_max_age/2. My first thought was to teach VACUUM to\nmake its FreezeLimit \"OldestXmin - autovacuum_freeze_max_age\". 
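Because XIDs are 32-bit counters that wrap around, "OldestXmin - autovacuum_freeze_max_age" has to be computed modulo 2^32 and kept away from the reserved XIDs. A hedged sketch of that arithmetic (a simplified model with invented names; vacuum_set_xid_limits() and TransactionIdPrecedes() are the real versions, which differ in detail):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t xid_t;

#define FIRST_NORMAL_XID ((xid_t) 3)    /* XIDs 0..2 are reserved */

/* Circular comparison: is "a" older than "b" modulo 2^32? */
bool
xid_precedes(xid_t a, xid_t b)
{
    return (int32_t) (a - b) < 0;
}

/* Backstop FreezeLimit = OldestXmin - autovacuum_freeze_max_age. */
xid_t
backstop_freeze_limit(xid_t oldest_xmin, uint32_t freeze_max_age)
{
    xid_t limit = oldest_xmin - freeze_max_age; /* wraps modulo 2^32 */

    /* Never return one of the reserved XIDs. */
    if (limit < FIRST_NORMAL_XID)
        limit = FIRST_NORMAL_XID;

    return limit;
}
```

Note that when the subtraction wraps, the result is still "in the past" under the circular comparison, which is all that matters for a backstop cutoff.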
To me\nthese two concepts really *are* the same thing: vacrel->FreezeLimit\nbecomes a backstop, just as anti-wraparound autovacuum (the\nautovacuum_freeze_max_age cutoff) becomes a backstop.\n\nOf course, an anti-wraparound VACUUM will do early freezing in the\nsame way as any other VACUUM will (with the patch series). So even\nwhen the FreezeLimit backstop XID cutoff actually affects the behavior\nof a given VACUUM operation, it may well not be the reason why most\nindividual tuples that we freeze get frozen. That is, most individual\nheap pages will probably have tuples frozen for some other reason.\nThough it depends on workload characteristics, most individual heap\npages will typically be frozen as a group, even here. This is a\nlogical consequence of the fact that tuple freezing and advancing\nrelfrozenxid are now only loosely coupled -- it's about as loose as\nthe current relfrozenxid invariant will allow.\n\n> I feel generally that a lot of the argument you're making here\n> supposes that tables are going to get vacuumed regularly.\n\n> I agree that\n> IF tables are being vacuumed on a regular basis, and if as part of\n> that we always push relfrozenxid forward as far as we can, we will\n> rarely have a situation where aggressive strategies to avoid\n> wraparound are required.\n\nIt's all relative. We hope that (with the patch) cases that only ever\nget anti-wraparound VACUUMs are limited to tables where nothing else\ndrives VACUUM, for sensible reasons related to workload\ncharacteristics (like the pgbench_accounts example upthread). It's\ninevitable that some users will misconfigure the system, though -- no\nquestion about that.\n\nI don't see why users that misconfigure the system in this way should\nbe any worse off than they would be today. They probably won't do\nsubstantially less freezing (usually somewhat more), and will advance\npg_class.relfrozenxid in exactly the same way as today (usually a bit\nbetter, actually). 
What have I missed?\n\nAdmittedly the design of the \"Freeze tuples early to advance\nrelfrozenxid\" patch (i.e. v5-0005-*patch) is still unsettled; I need\nto verify that my claims about it are really robust. But as far as I\nknow they are. Reviewers should certainly look at that with a critical\neye.\n\n> Now, I agree with you in part: I don't think it's obvious that it's\n> useful to tune vacuum_freeze_table_age.\n\nThat's definitely the easier argument to make. After all,\nvacuum_freeze_table_age will do nothing unless VACUUM runs before the\nanti-wraparound threshold (autovacuum_freeze_max_age) is reached. The\npatch series should be strictly better than that. Primarily because\nit's \"continuous\", and so isn't limited to cases where the table age\nfalls within the \"vacuum_freeze_table_age - autovacuum_freeze_max_age\"\ngoldilocks age range.\n\n> We should be VERY conservative about removing\n> existing settings if there's any chance that somebody could use them\n> to tune their way out of trouble.\n\nI agree, I suppose, but right now I honestly can't think of a reason\nwhy they would be useful.\n\nIf I am wrong about this then I'm probably also wrong about some basic\nfacet of the high-level design, in which case I should change course\naltogether. In other words, removing the GUCs is not an incidental\nthing. It's possible that I would never have pursued this project if I\ndidn't first notice how wrong-headed the GUCs are.\n\n> So, let's see: if we see a page where the tuples are all-visible and\n> we seize the opportunity to freeze it, we can spare ourselves the need\n> to ever visit that page again (unless it gets modified). But if we\n> only mark it all-visible and leave the freezing for later, the next\n> aggressive vacuum will have to scan and dirty the page. 
I'm prepared\n> to believe that it's worth the cost of freezing the page in that\n> scenario.\n\nThat's certainly the most compelling reason to perform early freezing.\nIt's not completely free of downsides, but it's pretty close.\n\n> There's another situation in which vacuum_freeze_min_age could apply,\n> though: suppose the page isn't all-visible yet. I'd argue that in that\n> case we don't want to run around freezing stuff unless it's quite old\n> - like older than vacuum_freeze_table_age, say. Because we know we're\n> going to have to revisit this page in the next vacuum anyway, and\n> expending effort to freeze tuples that may be about to be modified\n> again doesn't seem prudent. So, hmm, on further reflection, maybe it's\n> OK to remove vacuum_freeze_min_age. But if we do, then I think we had\n> better carefully distinguish between the case where the page can\n> thereby be marked all-frozen and the case where it cannot. I guess you\n> say the same, further down.\n\nI do. Although v5-0005-*patch still freezes early when the page is\ndirtied by pruning, I have my doubts about that particular \"freeze\nearly\" criteria. I believe that everything I just said about\nmisconfigured autovacuums doesn't rely on anything more than the \"most\ncompelling scenario for early freezing\" mechanism that arranges to\nmake us set the all-frozen bit (not just the all-visible bit).\n\n> I mean, those kinds of pathological cases happen *all the time*. Sure,\n> there are plenty of users who don't leave cursors open. But the ones\n> who do don't leave them around for short periods of time on randomly\n> selected pages of the table. They are disproportionately likely to\n> leave them on the same table pages over and over, just like data can't\n> in general be assumed to be uniformly accessed. 
And not uncommonly,\n> they leave them around until the snow melts.\n\n> And we need to worry about those kinds of users, actually much more\n> than we need to worry about users doing normal things.\n\nI couldn't agree more. In fact, I was mostly thinking about how to\n*help* these users. Insisting on waiting for a cleanup lock before it\nbecomes strictly necessary (when the table age is only 50\nmillion/vacuum_freeze_min_age) is actually a big part of the problem\nfor these users. vacuum_freeze_min_age enforces a false dichotomy on\naggressive VACUUMs, that just isn't helpful. Why should waiting on a\ncleanup lock fix anything?\n\nEven in the extreme case where we are guaranteed to eventually have a\nwraparound failure in the end (due to an idle cursor in an\nunsupervised database), the user is still much better off, I think. We\nwill have at least managed to advance relfrozenxid to the exact oldest\nXID on the one heap page that somebody holds an idle cursor\n(conflicting buffer pin) on. And we'll usually have frozen most of the\ntuples that need to be frozen. Sure, the user may need to use\nsingle-user mode to run a manual VACUUM, but at least this process\nonly needs to freeze approximately one tuple to get the system back\nonline again.\n\nIf the DBA notices the problem before the database starts to refuse to\nallocate XIDs, then they'll have a much better chance of avoiding a\nwraparound failure through simple intervention (like killing the\nbackend with the idle cursor). We can pay down 99.9% of the \"freeze\ndebt\" independently of this intractable problem of something holding\nonto an idle cursor.\n\n> Honestly,\n> autovacuum on a system where things are mostly \"normal\" - no\n> long-running transactions, adequate resources for autovacuum to do its\n> job, reasonable configuration settings - isn't that bad.\n\nRight. Autovacuum is \"too big to fail\".\n\n> > But the \"freeze early\" heuristics work a bit like that anyway. 
We\n> > won't freeze all the tuples on a whole heap page early if we won't\n> > otherwise set the heap page to all-visible (not all-frozen) in the VM\n> > anyway.\n>\n> Hmm, I didn't realize that we had that. Is that an existing thing or\n> something new you're proposing to do? If existing, where is it?\n\nIt's part of v5-0005-*patch. Still in flux to some degree, because\nit's necessary to balance a few things. That shouldn't undermine the\narguments I've made here.\n\n> I agree that it's OK for this to become a purely backstop mechanism\n> ... but again, I think that the design of such backstop mechanisms\n> should be done as carefully as we know how, because users seem to hit\n> the backstop all the time. We want it to be made of, you know, nylon\n> twine, rather than, say, sharp nails. :-)\n\nAbsolutely. But if autovacuum can only ever run due to\nage(relfrozenxid) reaching autovacuum_freeze_max_age, then I can't see\na downside.\n\nAgain, the v5-0005-*patch needs to meet the standard that I've laid\nout. If it doesn't then I've messed up already.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 7 Jan 2022 14:19:45 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Jan 7, 2022 at 5:20 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I thought I was being conservative by suggesting\n> autovacuum_freeze_max_age/2. My first thought was to teach VACUUM to\n> make its FreezeLimit \"OldestXmin - autovacuum_freeze_max_age\". To me\n> these two concepts really *are* the same thing: vacrel->FreezeLimit\n> becomes a backstop, just as anti-wraparound autovacuum (the\n> autovacuum_freeze_max_age cutoff) becomes a backstop.\n\nI can't follow this. 
If the idea is that we're going to\nopportunistically freeze a page whenever that allows us to mark it\nall-visible, then the remaining question is what XID age we should use\nto force freezing when that rule doesn't apply. It seems to me that\nthere is a rebuttable presumption that that case ought to work just as\nit does today - and I think I hear you saying that it should NOT work\nas it does today, but should use some other threshold. Yet I can't\nunderstand why you think that.\n\n> I couldn't agree more. In fact, I was mostly thinking about how to\n> *help* these users. Insisting on waiting for a cleanup lock before it\n> becomes strictly necessary (when the table age is only 50\n> million/vacuum_freeze_min_age) is actually a big part of the problem\n> for these users. vacuum_freeze_min_age enforces a false dichotomy on\n> aggressive VACUUMs, that just isn't unhelpful. Why should waiting on a\n> cleanup lock fix anything?\n\nBecause waiting on a lock means that we'll acquire it as soon as it's\navailable. If you repeatedly call your local Pizzeria Uno's and ask\nwhether there is a wait, and head to the restaurant only when the\nanswer is in the negative, you may never get there, because they may\nbe busy every time you call - especially if you always call around\nlunch or dinner time. Even if you eventually get there, it may take\nmultiple days before you find a time when a table is immediately\navailable, whereas if you had just gone over there and stood in line,\nyou likely would have been seated in under an hour and savoring the\ngoodness of quality deep-dish pizza not too long thereafter. The same\nprinciple applies here.\n\nI do think that waiting for a cleanup lock when the age of the page is\nonly vacuum_freeze_min_age seems like it might be too aggressive, but\nI don't think that's how it works. AFAICS, it's based on whether the\nvacuum is marked as aggressive, which has to do with\nvacuum_freeze_table_age, not vacuum_freeze_min_age. 
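That decision can be sketched roughly as follows. This is a hedged, simplified model with invented names, not the real code (which lives in vacuum_set_xid_limits()); in particular the 0.95 clamp shown is my reading of the existing behavior and should be treated as an assumption:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Is this VACUUM "aggressive" (i.e. will it scan all-visible pages
 * and wait for cleanup locks rather than skipping them)?
 */
bool
vacuum_is_aggressive(uint32_t relfrozenxid_age,  /* age(relfrozenxid) */
                     uint32_t freeze_table_age,  /* vacuum_freeze_table_age */
                     uint32_t freeze_max_age)    /* autovacuum_freeze_max_age */
{
    /*
     * vacuum_freeze_table_age is silently capped so that an
     * aggressive VACUUM is always forced some time before an
     * anti-wraparound autovacuum would be.
     */
    uint32_t cap = (uint32_t) (freeze_max_age * 0.95);

    if (freeze_table_age > cap)
        freeze_table_age = cap;

    return relfrozenxid_age >= freeze_table_age;
}
```

So with the defaults (150 million and 200 million), the wait-for-cleanup-lock behavior kicks in at 150 million, well before wraparound is a danger.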
Let's turn the\nquestion around: if the age of the oldest XID on the page is >150\nmillion transactions and the buffer cleanup lock is not available now,\nwhat makes you think that it's any more likely to be available when\nthe XID age reaches 200 million or 300 million or 700 million? There\nis perhaps an argument for some kind of tunable that eventually shoots\nthe other session in the head (if we can identify it, anyway) but it\nseems to me that regardless of what threshold we pick, polling is\nstrictly less likely to find a time when the page is available than\nwaiting for the cleanup lock. It has the counterbalancing advantage of\nallowing the autovacuum worker to do other useful work in the meantime\nand that is indeed a significant upside, but at some point you're\ngoing to have to give up and admit that polling is a failed strategy,\nand it's unclear why 150 million XIDs - or probably even 50 million\nXIDs - isn't long enough to say that we're not getting the job done\nwith half measures.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jan 2022 15:19:20 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Jan 13, 2022 at 12:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I can't follow this. If the idea is that we're going to\n> opportunistically freeze a page whenever that allows us to mark it\n> all-visible, then the remaining question is what XID age we should use\n> to force freezing when that rule doesn't apply.\n\nThat is the idea, yes.\n\n> It seems to me that\n> there is a rebuttable presumption that that case ought to work just as\n> it does today - and I think I hear you saying that it should NOT work\n> as it does today, but should use some other threshold. 
Yet I can't\n> understand why you think that.\n\nCases where we can not get a cleanup lock fall into 2 sharply distinct\ncategories in my mind:\n\n1. Cases where our inability to get a cleanup lock signifies nothing\nat all about the page in question, or any page in the same table, with\nthe same workload.\n\n2. Pathological cases. Cases where we're at least at the mercy of the\napplication to do something about an idle cursor, where the situation\nmay be entirely hopeless on a long enough timeline. (Whether or not it\nactually happens in the end is less significant.)\n\nAs far as I can tell, based on testing, category 1 cases are fixed by\nthe patch series: while a small number of pages from tables in\ncategory 1 cannot be cleanup-locked during each VACUUM, even with the\npatch series, it happens at random, with no discernable pattern. The\noverall result is that our ability to advance relfrozenxid is really\nnot impacted *over time*. It's reasonable to suppose that lightning\nwill not strike in the same place twice -- and it would really have to\nstrike several times to invalidate this assumption. It's not\nimpossible, but the chances over time are infinitesimal -- and the\naggregate effect over time (not any one VACUUM operation) is what\nmatters.\n\nThere are seldom more than 5 or so of these pages, even on large\ntables. What are the chances that some random not-yet-all-frozen block\n(that we cannot freeze tuples on) will also have the oldest\ncouldn't-be-frozen XID, even once? And when it is the oldest, why\nshould it be the oldest by very many XIDs? And what are the chances\nthat the same page has the same problem, again and again, without that\nbeing due to some pathological workload thing?\n\nAdmittedly you may see a blip from this -- you might notice that the\nfinal relfrozenxid value for that one single VACUUM isn't quite as new\nas you'd like. But then the next VACUUM should catch up with the\nstable long term average again. 
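The "blip" claim can be illustrated with a toy simulation (purely illustrative; every name and parameter here is invented, and this is not the patch's code): each round, some pages are modified with fresh XIDs, VACUUM freezes every unfrozen page except a small uncorrelated fraction where the cleanup lock is unavailable, and we track how far the computable relfrozenxid trails the current XID:

```c
#include <stdint.h>

#define NPAGES          1000
#define ROUNDS          50      /* successive VACUUM operations */
#define XIDS_PER_ROUND  10000

/* Tiny deterministic PRNG, so runs are reproducible. */
static uint64_t prng_state = 42;
static uint32_t prng(void)
{
    prng_state = prng_state * 6364136223846793005ULL + 1442695040888963407ULL;
    return (uint32_t) (prng_state >> 33);
}

/* Oldest unfrozen XID on each page; 0 means all-frozen. */
static uint32_t page_xid[NPAGES];

/* Worst observed gap between the current XID and relfrozenxid. */
uint32_t simulate_max_lag(void)
{
    uint32_t next_xid = 1;
    uint32_t max_lag = 0;

    for (int round = 0; round < ROUNDS; round++) {
        /* Some pages are modified and pick up fresh unfrozen XIDs. */
        for (int i = 0; i < NPAGES; i++)
            if (prng() % 4 == 0)
                page_xid[i] = next_xid + prng() % XIDS_PER_ROUND;
        next_xid += XIDS_PER_ROUND;

        /*
         * VACUUM: freeze every unfrozen page, except that the cleanup
         * lock is unavailable 1% of the time, at random ("category 1":
         * failures are uncorrelated across pages and rounds).
         */
        uint32_t relfrozenxid = next_xid;
        for (int i = 0; i < NPAGES; i++) {
            if (page_xid[i] != 0 && prng() % 100 != 0)
                page_xid[i] = 0;                /* frozen */
            if (page_xid[i] != 0 && page_xid[i] < relfrozenxid)
                relfrozenxid = page_xid[i];     /* skipped page holds us back */
        }

        uint32_t lag = next_xid - relfrozenxid;
        if (lag > max_lag)
            max_lag = lag;
    }
    return max_lag;
}
```

With uncorrelated failures the worst-case lag stays within a couple of rounds' worth of XIDs before the next VACUUM catches back up; make the failure sticky on one page (a "category 2" idle cursor) and relfrozenxid is pinned indefinitely instead.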
It's hard to describe exactly why this\neffect is robust, but as I said, empirically, in practice, it appears\nto be robust. That might not be good enough as an explanation that\njustifies committing the patch series, but that's what I see. And I\nthink I will be able to nail it down.\n\nAFAICT that just leaves concern for cases in category 2. More on that below.\n\n> Even if you eventually get there, it may take\n> multiple days before you find a time when a table is immediately\n> available, whereas if you had just gone over there and stood in line,\n> you likely would have been seated in under an hour and savoring the\n> goodness of quality deep-dish pizza not too long thereafter. The same\n> principle applies here.\n\nI think that you're focussing on individual VACUUM operations, whereas\nI'm more concerned about the aggregate effect of a particular policy\nover time.\n\nLet's assume for a moment that the only thing that we really care\nabout is reliably keeping relfrozenxid reasonably recent. Even then,\nwaiting for a cleanup lock (to freeze some tuples) might be the wrong\nthing to do. Waiting in line means that we're not freezing other\ntuples (nobody else can either). So we're allowing ourselves to fall\nbehind on necessary, routine maintenance work that allows us to\nadvance relfrozenxid....in order to advance relfrozenxid.\n\n> I do think that waiting for a cleanup lock when the age of the page is\n> only vacuum_freeze_min_age seems like it might be too aggressive, but\n> I don't think that's how it works. AFAICS, it's based on whether the\n> vacuum is marked as aggressive, which has to do with\n> vacuum_freeze_table_age, not vacuum_freeze_min_age. 
Let's turn the\n> question around: if the age of the oldest XID on the page is >150\n> million transactions and the buffer cleanup lock is not available now,\n> what makes you think that it's any more likely to be available when\n> the XID age reaches 200 million or 300 million or 700 million?\n\nThis is my concern -- what I've called category 2 cases have this\nexact quality. So given that, why not freeze what you can, elsewhere,\non other pages that don't have the same issue (presumably the vast\nvast majority in the table)? That way you have the best possible\nchance of recovering once the DBA gets a clue and fixes the issue.\n\n> There\n> is perhaps an argument for some kind of tunable that eventually shoots\n> the other session in the head (if we can identify it, anyway) but it\n> seems to me that regardless of what threshold we pick, polling is\n> strictly less likely to find a time when the page is available than\n> waiting for the cleanup lock. It has the counterbalancing advantage of\n> allowing the autovacuum worker to do other useful work in the meantime\n> and that is indeed a significant upside, but at some point you're\n> going to have to give up and admit that polling is a failed strategy,\n> and it's unclear why 150 million XIDs - or probably even 50 million\n> XIDs - isn't long enough to say that we're not getting the job done\n> with half measures.\n\nThat's kind of what I meant. The difference between 50 million and 150\nmillion is rather unclear indeed. So having accepted that that might\nbe true, why not be open to the possibility that it won't turn out to\nbe true in the long run, for any given table? 
With the enhancements\nfrom the patch series in place (particularly the early freezing\nstuff), what do we have to lose by making the FreezeLimit XID cutoff\nfor freezing much higher than your typical vacuum_freeze_min_age?\nMaybe the same as autovacuum_freeze_max_age or vacuum_freeze_table_age\n(it can't be higher than that without also making these other settings\nbecome meaningless, of course).\n\nTaking a wait-and-see approach like this (not being too quick to\ndecide that a table is in category 1 or category 2) doesn't seem to\nmake wraparound failure any more likely in any particular scenario,\nbut makes it less likely in other scenarios. It also gives us early\nvisibility into the problem, because we'll see that autovacuum can no\nlonger advance relfrozenxid (using the enhanced log output) where\nthat's generally expected.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 13 Jan 2022 13:27:10 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Jan 13, 2022 at 1:27 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Admittedly you may see a blip from this -- you might notice that the\n> final relfrozenxid value for that one single VACUUM isn't quite as new\n> as you'd like. But then the next VACUUM should catch up with the\n> stable long term average again. It's hard to describe exactly why this\n> effect is robust, but as I said, empirically, in practice, it appears\n> to be robust. That might not be good enough as an explanation that\n> justifies committing the patch series, but that's what I see. And I\n> think I will be able to nail it down.\n\nAttached is v6, which like v5 is a rebased version that I'm posting to\nkeep CFTester happy. I pushed a commit that consolidates VACUUM\nVERBOSE and autovacuum logging earlier (commit 49c9d9fc), which bitrot\nv5. 
No new real changes, nothing to note.\n\nAlthough it technically has nothing to do with this patch series, I\nwill point out that it's now a lot easier to debug using VACUUM\nVERBOSE, which will directly display information about how we've\nadvanced relfrozenxid, tuples frozen, etc:\n\npg@regression:5432 =# delete from mytenk2 where hundred < 15;\nDELETE 1500\npg@regression:5432 =# vacuum VERBOSE mytenk2;\nINFO: vacuuming \"regression.public.mytenk2\"\nINFO: finished vacuuming \"regression.public.mytenk2\": index scans: 1\npages: 0 removed, 345 remain, 0 skipped using visibility map (0.00% of total)\ntuples: 1500 removed, 8500 remain (8500 newly frozen), 0 are dead but\nnot yet removable\nremovable cutoff: 17411, which is 0 xids behind next\nnew relfrozenxid: 17411, which is 3 xids ahead of previous value\nindex scan needed: 341 pages from table (98.84% of total) had 1500\ndead item identifiers removed\nindex \"mytenk2_unique1_idx\": pages: 39 in total, 0 newly deleted, 0\ncurrently deleted, 0 reusable\nindex \"mytenk2_unique2_idx\": pages: 30 in total, 0 newly deleted, 0\ncurrently deleted, 0 reusable\nindex \"mytenk2_hundred_idx\": pages: 11 in total, 1 newly deleted, 1\ncurrently deleted, 0 reusable\nI/O timings: read: 0.011 ms, write: 0.000 ms\navg read rate: 1.428 MB/s, avg write rate: 2.141 MB/s\nbuffer usage: 1133 hits, 2 misses, 3 dirtied\nWAL usage: 1446 records, 1 full page images, 199702 bytes\nsystem usage: CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.01 s\nVACUUM\n\n-- \nPeter Geoghegan", "msg_date": "Fri, 14 Jan 2022 18:43:08 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Jan 13, 2022 at 4:27 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> 1. Cases where our inability to get a cleanup lock signifies nothing\n> at all about the page in question, or any page in the same table, with\n> the same workload.\n>\n> 2. 
Pathological cases. Cases where we're at least at the mercy of the\n> application to do something about an idle cursor, where the situation\n> may be entirely hopeless on a long enough timeline. (Whether or not it\n> actually happens in the end is less significant.)\n\nSure. I'm worrying about case (2). I agree that in case (1) waiting\nfor the lock is almost always the wrong idea.\n\n> I think that you're focussing on individual VACUUM operations, whereas\n> I'm more concerned about the aggregate effect of a particular policy\n> over time.\n\nI don't think so. I think I'm worrying about the aggregate effect of a\nparticular policy over time *in the pathological cases* i.e. (2).\n\n> This is my concern -- what I've called category 2 cases have this\n> exact quality. So given that, why not freeze what you can, elsewhere,\n> on other pages that don't have the same issue (presumably the vast\n> vast majority in the table)? That way you have the best possible\n> chance of recovering once the DBA gets a clue and fixes the issue.\n\nThat's the part I'm not sure I believe. Imagine a table with a\ngigantic number of pages that are not yet all-visible, a small number\nof all-visible pages, and one page containing very old XIDs on which a\ncursor holds a pin. I don't think it's obvious that not waiting is\nbest. Maybe you're going to end up vacuuming the table repeatedly and\ndoing nothing useful. If you avoid vacuuming it repeatedly, you still\nhave a lot of work to do once the DBA locates a clue.\n\nI think there's probably an important principle buried in here: the\nXID threshold that forces a vacuum had better also force waiting for\npins. If it doesn't, you can tight-loop on that table without getting\nanything done.\n\n> That's kind of what I meant. The difference between 50 million and 150\n> million is rather unclear indeed. 
So having accepted that that might\n> be true, why not be open to the possibility that it won't turn out to\n> be true in the long run, for any given table? With the enhancements\n> from the patch series in place (particularly the early freezing\n> stuff), what do we have to lose by making the FreezeLimit XID cutoff\n> for freezing much higher than your typical vacuum_freeze_min_age?\n> Maybe the same as autovacuum_freeze_max_age or vacuum_freeze_table_age\n> (it can't be higher than that without also making these other settings\n> become meaningless, of course).\n\nWe should probably distinguish between the situation where (a) an\nadverse pin is held continuously and effectively forever and (b)\nadverse pins are held frequently but for short periods of time. I\nthink it's possible to imagine a small, very hot table (or portion of\na table) where very high concurrency means there are often pins. In\ncase (a), it's not obvious that waiting will ever resolve anything,\nalthough it might prevent other problems like infinite looping. In\ncase (b), a brief wait will do a lot of good. But maybe that doesn't\neven matter. I think part of your argument is that if we fail to\nupdate relfrozenxid for a while, that really isn't that bad.\n\nI think I agree, up to a point. One consequence of failing to\nimmediately advance relfrozenxid might be that pg_clog and friends are\nbigger, but that's pretty minor. Another consequence might be that we\nmight vacuum the table more times, which is more serious. 
I'm not\nreally sure that can happen to a degree that is meaningful, apart from\nthe infinite loop case already described, but I'm also not entirely\nsure that it can't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Jan 2022 10:12:24 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Mon, Jan 17, 2022 at 7:12 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Jan 13, 2022 at 4:27 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > 1. Cases where our inability to get a cleanup lock signifies nothing\n> > at all about the page in question, or any page in the same table, with\n> > the same workload.\n> >\n> > 2. Pathological cases. Cases where we're at least at the mercy of the\n> > application to do something about an idle cursor, where the situation\n> > may be entirely hopeless on a long enough timeline. (Whether or not it\n> > actually happens in the end is less significant.)\n>\n> Sure. I'm worrying about case (2). I agree that in case (1) waiting\n> for the lock is almost always the wrong idea.\n\nI don't doubt that we'd each have little difficulty determining\nwhich category (1 or 2) a given real world case should be placed in,\nusing a variety of methods that put the issue in context (e.g.,\nlooking at the application code, talking to the developers or the\nDBA). Of course, it doesn't follow that it would be easy to teach\nvacuumlazy.c how to determine which category the same \"can't get\ncleanup lock\" falls under, since (just for starters) there is no\npractical way for VACUUM to see all that context.\n\nThat's what I'm effectively trying to work around with this \"wait and\nsee approach\" that demotes FreezeLimit to a backstop (and so justifies\nremoving the vacuum_freeze_min_age GUC that directly dictates our\nFreezeLimit today). 
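(For reference, a simplified sketch of how these cutoffs interact today -- illustrative Python that ignores XID wraparound arithmetic and clamping, and uses next_xid as a stand-in for OldestXmin, so treat the details as approximations of vacuum_set_xid_limits() rather than the real thing:)

```python
def xid_cutoffs(next_xid, relfrozenxid,
                vacuum_freeze_min_age=50_000_000,
                vacuum_freeze_table_age=150_000_000,
                autovacuum_freeze_max_age=200_000_000):
    """Approximate the cutoffs as plain integers (no wraparound math)."""
    # vacuum_freeze_table_age is silently capped at 95% of
    # autovacuum_freeze_max_age, so an aggressive VACUUM gets a chance
    # to run before an anti-wraparound autovacuum would be forced.
    table_age = min(vacuum_freeze_table_age,
                    int(autovacuum_freeze_max_age * 0.95))
    freeze_limit = next_xid - vacuum_freeze_min_age  # freeze XIDs older than this
    aggressive = next_xid - relfrozenxid > table_age
    antiwraparound = next_xid - relfrozenxid > autovacuum_freeze_max_age
    return freeze_limit, aggressive, antiwraparound
```

The complaint above is visible here: freeze_limit both decides which tuples get frozen and (during an aggressive VACUUM) decides when we wait for a cleanup lock, even though nothing forces those two thresholds to be the same.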
The cure may be worse than the disease, and the cure\nisn't actually all that great at the best of times, so we should wait\nuntil the disease visibly gets pretty bad before being\n\"interventionist\" by waiting for a cleanup lock.\n\nI've already said plenty about why I don't like vacuum_freeze_min_age\n(or FreezeLimit) due to XIDs being fundamentally the wrong unit. But\nthat's not the only fundamental problem that I see. The other problem\nis this: vacuum_freeze_min_age also dictates when an aggressive VACUUM\nwill start to wait for a cleanup lock. But why should the first thing\nbe the same as the second thing? I see absolutely no reason for it.\n(Hence the idea of making FreezeLimit a backstop, and getting rid of\nthe GUC itself.)\n\n> > This is my concern -- what I've called category 2 cases have this\n> > exact quality. So given that, why not freeze what you can, elsewhere,\n> > on other pages that don't have the same issue (presumably the vast\n> > vast majority in the table)? That way you have the best possible\n> > chance of recovering once the DBA gets a clue and fixes the issue.\n>\n> That's the part I'm not sure I believe.\n\nTo be clear, I think that I have yet to adequately demonstrate that\nthis is true. It's a bit tricky to do so -- absence of evidence isn't\nevidence of absence. I think that your principled skepticism makes\nsense right now.\n\nFortunately the early refactoring patches should be uncontroversial.\nThe controversial parts are all in the last patch in the patch series,\nwhich isn't too much code. (Plus another patch to at least get rid of\nvacuum_freeze_min_age, and maybe vacuum_freeze_table_age too, that\nhasn't been written just yet.)\n\n> Imagine a table with a\n> gigantic number of pages that are not yet all-visible, a small number\n> of all-visible pages, and one page containing very old XIDs on which a\n> cursor holds a pin. I don't think it's obvious that not waiting is\n> best. 
Maybe you're going to end up vacuuming the table repeatedly and\n> doing nothing useful. If you avoid vacuuming it repeatedly, you still\n> have a lot of work to do once the DBA locates a clue.\n\nMaybe this is a simpler way of putting it: I want to delay waiting on\na pin until it's pretty clear that we truly have a pathological case,\nwhich should in practice be limited to an anti-wraparound VACUUM,\nwhich will now be naturally rare -- most individual tables will\nliterally never have even one anti-wraparound VACUUM.\n\nWe don't need to reason about the vacuuming schedule this way, since\nanti-wraparound VACUUMs are driven by age(relfrozenxid) -- we don't\nreally have to predict anything. Maybe we'll need to do an\nanti-wraparound VACUUM immediately after a non-aggressive autovacuum\nruns, without getting a cleanup lock (due to an idle cursor\npathological case). We won't be able to advance relfrozenxid until the\nanti-wraparound VACUUM runs (at the earliest) in this scenario, but it\nmakes no difference. Rather than predicting the future, we're covering\nevery possible outcome (at least to the extent that that's possible).\n\n> I think there's probably an important principle buried in here: the\n> XID threshold that forces a vacuum had better also force waiting for\n> pins. If it doesn't, you can tight-loop on that table without getting\n> anything done.\n\nI absolutely agree -- that's why I think that we still need\nFreezeLimit. Just as a backstop, that in practice very rarely\ninfluences our behavior. Probably just in those remaining cases that\nare never vacuumed except for the occasional anti-wraparound VACUUM\n(even then it might not be very important).\n\n> We should probably distinguish between the situation where (a) an\n> adverse pin is held continuously and effectively forever and (b)\n> adverse pins are held frequently but for short periods of time.\n\nI agree. 
It's just hard to do that from vacuumlazy.c, during a routine\nnon-aggressive VACUUM operation.\n\n> I think it's possible to imagine a small, very hot table (or portion of\n> a table) where very high concurrency means there are often pins. In\n> case (a), it's not obvious that waiting will ever resolve anything,\n> although it might prevent other problems like infinite looping. In\n> case (b), a brief wait will do a lot of good. But maybe that doesn't\n> even matter. I think part of your argument is that if we fail to\n> update relfrozenxid for a while, that really isn't that bad.\n\nYeah, that is a part of it -- it doesn't matter (until it really\nmatters), and we should be careful to avoid making the situation worse\nby waiting for a cleanup lock unnecessarily. That's actually a very\ndrastic thing to do, at least in a world where freezing has been\ndecoupled from advancing relfrozenxid.\n\nUpdating relfrozenxid should now be thought of as a continuous thing,\nnot a discrete thing. And so it's highly unlikely that any given\nVACUUM will ever *completely* fail to advance relfrozenxid -- that\nfact alone signals a pathological case (things that are supposed to be\ncontinuous should not ever appear to be discrete). But you need multiple\nVACUUMs to see this \"signal\". It is only revealed over time.\n\nIt seems wise to make the most modest possible assumptions about\nwhat's going on here. We might well \"get lucky\" before the next VACUUM\ncomes around when we encounter what at first appears to be a\nproblematic case involving an idle cursor -- for all kinds of reasons.\nLike maybe an opportunistic prune gets rid of the old XID for us,\nwithout any freezing, during some brief window where the application\ndoesn't have a cursor. We're only talking about one or two heap pages\nhere.\n\nWe might also *not* \"get lucky\" with the application and its use of\nidle cursors, of course. But in that case we must have been doomed all\nalong. 
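(To make the skip-versus-wait policy concrete: a toy sketch in illustrative Python, not the real vacuumlazy.c code -- the page representation and helper names are invented. The one behavioral fact it encodes is that a non-aggressive VACUUM moves on from a pinned page while an aggressive one waits for the pin, and that whatever XIDs remain unfrozen bound the new relfrozenxid:)

```python
def scan_page(page, aggressive, freeze_limit):
    """Return the oldest XID left unfrozen on the page; the minimum of
    these across all scanned pages bounds how far relfrozenxid can
    advance at the end of the VACUUM."""
    if page["pinned"]:
        if not aggressive:
            # Skip: no freezing here, but the VACUUM as a whole can
            # still advance relfrozenxid, just no further than this
            # page's oldest remaining XID.
            return min(page["xids"], default=None)
        wait_for_pin_release(page)  # stand-in for LockBufferForCleanup()
    # Cleanup lock held: freeze every tuple XID older than freeze_limit.
    page["xids"] = [x for x in page["xids"] if x >= freeze_limit]
    return min(page["xids"], default=None)

def wait_for_pin_release(page):
    # Pretend the conflicting backend (e.g. an idle cursor) eventually
    # drops its pin; in reality this wait can be indefinite.
    page["pinned"] = False
```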
And we'll at least have put things on a much better footing in\nthis disaster scenario -- there is relatively little freezing left to\ndo in single user mode, and relfrozenxid should already be the same as\nthe exact oldest XID in that one page.\n\n> I think I agree, up to a point. One consequence of failing to\n> immediately advance relfrozenxid might be that pg_clog and friends are\n> bigger, but that's pretty minor.\n\nMy arguments are probabilistic (sort of), which makes it tricky.\nActual test cases/benchmarks should bear out the claims that I've\nmade. If anything fully convinces you, it'll be that, I think.\n\n> Another consequence might be that we\n> might vacuum the table more times, which is more serious. I'm not\n> really sure that can happen to a degree that is meaningful, apart from\n> the infinite loop case already described, but I'm also not entirely\n> sure that it can't.\n\nIt's definitely true that this overall strategy could result in there\nbeing more individual VACUUM operations. But that naturally\nfollows from teaching VACUUM to avoid waiting indefinitely.\n\nObviously the important question is whether we'll do\nmeaningfully more work for less benefit (in Postgres 15, relative to\nPostgres 14). Your concern is very reasonable. I just can't imagine\nhow we could lose out to any notable degree. Which is a start.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 17 Jan 2022 13:27:48 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Mon, Jan 17, 2022 at 4:28 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Updating relfrozenxid should now be thought of as a continuous thing,\n> not a discrete thing.\n\nI think that's pretty nearly 100% wrong. The most simplistic way of\nexpressing that is to say - clearly it can only happen when VACUUM\nruns, which is not all the time. 
That's a bit facile, though; let me\ntry to say something a little smarter. There are real production\nsystems that exist today where essentially all vacuums are\nanti-wraparound vacuums. And there are also real production systems\nthat exist today where virtually none of the vacuums are\nanti-wraparound vacuums. So if we ship your proposed patches, the\nfrequency with which relfrozenxid gets updated is going to increase by\na large multiple, perhaps 100x, for the second group of people, who\nwill then perceive the movement of relfrozenxid to be much closer to\ncontinuous than it is today even though, technically, it's still a\nstep function. But the people in the first category are not going to\nsee any difference at all.\n\nAnd therefore the reasoning that says - anti-wraparound vacuums just\naren't going to happen any more - or - relfrozenxid will advance\ncontinuously seems like dangerous wishful thinking to me. It's only\ntrue if (# of vacuums) / (# of wraparound vacuums) >> 1. And that need\nnot be true in any particular environment, which to me means that all\nconclusions based on the idea that it has to be true are pretty\ndubious. There's no doubt in my mind that advancing relfrozenxid\nopportunistically is a good idea. However, I'm not sure how reasonable\nit is to change any other behavior on the basis of the fact that we're\ndoing it, because we don't know how often it really happens.\n\nIf someone says \"every time I travel to Europe on business, I will use\nthe opportunity to bring you back a nice present,\" you can't evaluate\nhow much impact that will have on your life without knowing how often\nthey travel to Europe on business. 
And that varies radically from\n\"never\" to \"a lot\" based on the person.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Jan 2022 17:13:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Mon, Jan 17, 2022 at 2:13 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Jan 17, 2022 at 4:28 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Updating relfrozenxid should now be thought of as a continuous thing,\n> > not a discrete thing.\n>\n> I think that's pretty nearly 100% wrong. The most simplistic way of\n> expressing that is to say - clearly it can only happen when VACUUM\n> runs, which is not all the time.\n\nThat just seems like semantics to me. The very next sentence after the\none you quoted in your reply was \"And so it's highly unlikely that any\ngiven VACUUM will ever *completely* fail to advance relfrozenxid\".\nIt's continuous *within* each VACUUM. As far as I can tell there is\npretty much no way that the patch series will ever fail to advance\nrelfrozenxid *by at least a little bit*, barring pathological cases\nwith cursors and whatnot.\n\n> That's a bit facile, though; let me\n> try to say something a little smarter. There are real production\n> systems that exist today where essentially all vacuums are\n> anti-wraparound vacuums. And there are also real production systems\n> that exist today where virtually none of the vacuums are\n> anti-wraparound vacuums. So if we ship your proposed patches, the\n> frequency with which relfrozenxid gets updated is going to increase by\n> a large multiple, perhaps 100x, for the second group of people, who\n> will then perceive the movement of relfrozenxid to be much closer to\n> continuous than it is today even though, technically, it's still a\n> step function. 
But the people in the first category are not going to\n> see any difference at all.\n\nActually, I think that even the people in the first category might\nwell have about the same improved experience. Not just because of this\npatch series, mind you. It would also have a lot to do with the\nautovacuum_vacuum_insert_scale_factor stuff in Postgres 13. Not to\nmention the freeze map. What version are these users on?\n\nI have actually seen this for myself. With BenchmarkSQL, the largest\ntable (the order lines table) starts out having its autovacuums driven\nentirely by autovacuum_vacuum_insert_scale_factor, even though there\nis a fair amount of bloat from updates. It stays like that for hours\non HEAD. But even with my reasonably tuned setup, there is eventually\na switchover point. Eventually all autovacuums end up as aggressive\nanti-wraparound VACUUMs -- this happens once the table gets\nsufficiently large (this is one of the two that is append-only, with\none update to every inserted row from the delivery transaction, which\nhappens hours after the initial insert).\n\nWith the patch series, we have a kind of virtuous circle with freezing\nand with advancing relfrozenxid with the same order lines table. As\nfar as I can tell, we fix the problem with the patch series. Because\nthere are about 10 tuples inserted per new order transaction, the\nactual \"XID consumption rate of the table\" is much lower than the\n\"worst case XID consumption\" for such a table.\n\nIt's also true that even with the patch we still get anti-wraparound\nVACUUMs for two fixed-size, hot-update-only tables: the stock table,\nand the customers table. But that's no big deal. 
It only happens\nbecause nothing else will ever trigger an autovacuum, no matter the\nautovacuum_freeze_max_age setting.\n\n> And therefore the reasoning that says - anti-wraparound vacuums just\n> aren't going to happen any more - or - relfrozenxid will advance\n> continuously seems like dangerous wishful thinking to me.\n\nI never said that anti-wraparound vacuums just won't happen anymore. I\nsaid that they'll be limited to cases like the stock table or\ncustomers table case. I was very clear on that point.\n\nWith pgbench, whether or not you ever see any anti-wraparound VACUUMs\nwill depend on how heap fillfactor is set for the accounts table -- set it\nlow enough (maybe to 90) and you will still get them, since there\nwon't be any other reason to VACUUM. As for the branches table, and\nthe tellers table, they'll get VACUUMs in any case, regardless of heap\nfillfactor. And so they'll always advance relfrozenxid during each\nVACUUM, and never have even one anti-wraparound VACUUM.\n\n> It's only\n> true if (# of vacuums) / (# of wraparound vacuums) >> 1. And that need\n> not be true in any particular environment, which to me means that all\n> conclusions based on the idea that it has to be true are pretty\n> dubious. There's no doubt in my mind that advancing relfrozenxid\n> opportunistically is a good idea. However, I'm not sure how reasonable\n> it is to change any other behavior on the basis of the fact that we're\n> doing it, because we don't know how often it really happens.\n\nIt isn't that hard to see that the cases where we continue to get any\nanti-wraparound VACUUMs with the patch seem to be limited to cases\nlike the stock/customers table, or cases like the pathological idle\ncursor cases we've been discussing. 
Pretty narrow cases, overall.\nDon't take my word for it - see for yourself.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 17 Jan 2022 14:41:10 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Mon, Jan 17, 2022 at 5:41 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> That just seems like semantics to me. The very next sentence after the\n> one you quoted in your reply was \"And so it's highly unlikely that any\n> given VACUUM will ever *completely* fail to advance relfrozenxid\".\n> It's continuous *within* each VACUUM. As far as I can tell there is\n> pretty much no way that the patch series will ever fail to advance\n> relfrozenxid *by at least a little bit*, barring pathological cases\n> with cursors and whatnot.\n\nI mean this boils down to saying that VACUUM will advance relfrozenxid\nexcept when it doesn't.\n\n> Actually, I think that even the people in the first category might\n> well have about the same improved experience. Not just because of this\n> patch series, mind you. It would also have a lot to do with the\n> autovacuum_vacuum_insert_scale_factor stuff in Postgres 13. Not to\n> mention the freeze map. What version are these users on?\n\nI think it varies. I expect the increase in the default cost limit to\nhave had a much more salutary effect than\nautovacuum_vacuum_insert_scale_factor, but I don't know for sure. At\nany rate, if you make the database big enough and generate dirty data\nfast enough, it doesn't matter what the default limits are.\n\n> I never said that anti-wraparound vacuums just won't happen anymore. I\n> said that they'll be limited to cases like the stock table or\n> customers table case. I was very clear on that point.\n\nI don't know how I'm supposed to sensibly respond to a statement like\nthis. If you were very clear, then I'm being deliberately obtuse if I\nfail to understand. 
If I say you weren't very clear, then we're just\ncontradicting each other.\n\n> It isn't that hard to see that the cases where we continue to get any\n> anti-wraparound VACUUMs with the patch seem to be limited to cases\n> like the stock/customers table, or cases like the pathological idle\n> cursor cases we've been discussing. Pretty narrow cases, overall.\n> Don't take my word for it - see for yourself.\n\nI don't think that's really possible. Words like \"narrow\" and\n\"pathological\" are value judgments, not factual statements. If I do an\nexperiment where no wraparound autovacuums happen, as I'm sure I can,\nthen those are the normal cases where the patch helps. If I do an\nexperiment where they do happen, as I'm sure that I also can, you'll\nprobably say either that the case in question is like the\nstock/customers table, or that it's pathological. What will any of\nthis prove?\n\nI think we're reaching the point of diminishing returns in this\nconversation. What I want to know is that users aren't going to be\nharmed - even in cases where they have behavior that is like the\nstock/customers table, or that you consider pathological, or whatever\nother words we want to use to describe the weird things that happen to\npeople. And I think we've made perhaps a bit of modest progress in\nexploring that issue, but certainly less than I'd like. I don't want\nto spend the next several days going around in circles about it\nthough. That does not seem likely to make anyone happy.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Jan 2022 23:13:43 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Mon, Jan 17, 2022 at 8:13 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Jan 17, 2022 at 5:41 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > That just seems like semantics to me. 
The very next sentence after the\n> > one you quoted in your reply was \"And so it's highly unlikely that any\n> > given VACUUM will ever *completely* fail to advance relfrozenxid\".\n> > It's continuous *within* each VACUUM. As far as I can tell there is\n> > pretty much no way that the patch series will ever fail to advance\n> > relfrozenxid *by at least a little bit*, barring pathological cases\n> > with cursors and whatnot.\n>\n> I mean this boils down to saying that VACUUM will advance relfrozenxid\n> except when it doesn't.\n\nIt actually doesn't boil down, at all. The world is complicated and\nmessy, whether we like it or not.\n\n> > I never said that anti-wraparound vacuums just won't happen anymore. I\n> > said that they'll be limited to cases like the stock table or\n> > customers table case. I was very clear on that point.\n>\n> I don't know how I'm supposed to sensibly respond to a statement like\n> this. If you were very clear, then I'm being deliberately obtuse if I\n> fail to understand.\n\nI don't know if I'd accuse you of being obtuse, exactly. Mostly I just\nthink it's strange that you don't seem to take what I say seriously\nwhen it cannot be proven very easily. I don't think that you intend\nthis to be disrespectful, and I don't take it personally. I just don't\nunderstand it.\n\n> > It isn't that hard to see that the cases where we continue to get any\n> > anti-wraparound VACUUMs with the patch seem to be limited to cases\n> > like the stock/customers table, or cases like the pathological idle\n> > cursor cases we've been discussing. Pretty narrow cases, overall.\n> > Don't take my word for it - see for yourself.\n>\n> I don't think that's really possible. Words like \"narrow\" and\n> \"pathological\" are value judgments, not factual statements. If I do an\n> experiment where no wraparound autovacuums happen, as I'm sure I can,\n> then those are the normal cases where the patch helps. 
If I do an\n> experiment where they do happen, as I'm sure that I also can, you'll\n> probably say either that the case in question is like the\n> stock/customers table, or that it's pathological. What will any of\n> this prove?\n\nYou seem to be suggesting that I used words like \"pathological\" in\nsome kind of highly informal, totally subjective way, when I did no\nsuch thing.\n\nI quite clearly said that you'll only get an anti-wraparound VACUUM\nwith the patch applied when the only factor that *ever* causes *any*\nautovacuum worker to VACUUM the table (assuming the workload is\nstable) is the anti-wraparound/autovacuum_freeze_max_age cutoff. With\na table like this, even increasing autovacuum_freeze_max_age to its\nabsolute maximum of 2 billion would not make it any more likely that\nwe'd get a non-aggressive VACUUM -- it would merely make the\nanti-wraparound VACUUMs less frequent. No big change should be\nexpected with a table like that.\n\nAlso, since the patch is not magic, and doesn't even change the basic\ninvariants for relfrozenxid, it's still true that any scenario in\nwhich it's fundamentally impossible for VACUUM to keep up will also\nhave anti-wraparound VACUUMs. But that's the least of the user's\ntrouble -- in the long run we're going to have the system refuse to\nallocate new XIDs with such a workload.\n\nThe claim that I have made is 100% testable. Even if it was flat out\nincorrect, not getting anti-wraparound VACUUMs per se is not the\nimportant part. The important part is that the work is managed\nintelligently, and the burden is spread out over time. I am\nparticularly concerned about the \"freezing cliff\" we get when many\npages are all-visible but not also all-frozen. Consistently avoiding\nan anti-wraparound VACUUM (except with very particular workload\ncharacteristics) is really just a side effect -- it's something that\nmakes the overall benefit relatively obvious, and relatively easy to\nmeasure. 
I thought that you'd appreciate that.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 17 Jan 2022 21:14:02 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Tue, Jan 18, 2022 at 12:14 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I quite clearly said that you'll only get an anti-wraparound VACUUM\n> with the patch applied when the only factor that *ever* causes *any*\n> autovacuum worker to VACUUM the table (assuming the workload is\n> stable) is the anti-wraparound/autovacuum_freeze_max_age cutoff. With\n> a table like this, even increasing autovacuum_freeze_max_age to its\n> absolute maximum of 2 billion would not make it any more likely that\n> we'd get a non-aggressive VACUUM -- it would merely make the\n> anti-wraparound VACUUMs less frequent. No big change should be\n> expected with a table like that.\n\nSure, I don't disagree with any of that. I don't see how I could. But\nI don't see how it detracts from the points I was trying to make\neither.\n\n> Also, since the patch is not magic, and doesn't even change the basic\n> invariants for relfrozenxid, it's still true that any scenario in\n> which it's fundamentally impossible for VACUUM to keep up will also\n> have anti-wraparound VACUUMs. But that's the least of the user's\n> trouble -- in the long run we're going to have the system refuse to\n> allocate new XIDs with such a workload.\n\nAlso true. But again, it's just about making sure that the patch\ndoesn't make other decisions that make things worse for people in that\nsituation. That's what I was expressing uncertainty about.\n\n> The claim that I have made is 100% testable. Even if it was flat out\n> incorrect, not getting anti-wraparound VACUUMs per se is not the\n> important part. The important part is that the work is managed\n> intelligently, and the burden is spread out over time. 
I am\n> particularly concerned about the \"freezing cliff\" we get when many\n> pages are all-visible but not also all-frozen. Consistently avoiding\n> an anti-wraparound VACUUM (except with very particular workload\n> characteristics) is really just a side effect -- it's something that\n> makes the overall benefit relatively obvious, and relatively easy to\n> measure. I thought that you'd appreciate that.\n\nI do.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Jan 2022 09:11:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Tue, Jan 18, 2022 at 6:11 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Jan 18, 2022 at 12:14 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I quite clearly said that you'll only get an anti-wraparound VACUUM\n> > with the patch applied when the only factor that *ever* causes *any*\n> > autovacuum worker to VACUUM the table (assuming the workload is\n> > stable) is the anti-wraparound/autovacuum_freeze_max_age cutoff. With\n> > a table like this, even increasing autovacuum_freeze_max_age to its\n> > absolute maximum of 2 billion would not make it any more likely that\n> > we'd get a non-aggressive VACUUM -- it would merely make the\n> > anti-wraparound VACUUMs less frequent. No big change should be\n> > expected with a table like that.\n>\n> Sure, I don't disagree with any of that. I don't see how I could. But\n> I don't see how it detracts from the points I was trying to make\n> either.\n\nYou said \"...the reasoning that says - anti-wraparound vacuums just\naren't going to happen any more - or - relfrozenxid will advance\ncontinuously seems like dangerous wishful thinking to me\". You then\nproceeded to attack a straw man -- a view that I couldn't possibly\nhold. 
This certainly surprised me, because my actual claims seemed\nwell within the bounds of what is possible, and in any case can be\nverified with a fairly modest effort.\n\nThat's what I was reacting to -- it had nothing to do with any\nconcerns you may have had. I wasn't thinking about long-idle cursors\nat all. I was defending myself, because I was put in a position where\nI had to defend myself.\n\n> > Also, since the patch is not magic, and doesn't even change the basic\n> > invariants for relfrozenxid, it's still true that any scenario in\n> > which it's fundamentally impossible for VACUUM to keep up will also\n> > have anti-wraparound VACUUMs. But that's the least of the user's\n> > trouble -- in the long run we're going to have the system refuse to\n> > allocate new XIDs with such a workload.\n>\n> Also true. But again, it's just about making sure that the patch\n> doesn't make other decisions that make things worse for people in that\n> situation. That's what I was expressing uncertainty about.\n\nI am not just trying to avoid making things worse when users are in\nthis situation. I actually want to give users every chance to avoid\nbeing in this situation in the first place. In fact, almost everything\nI've said about this aspect of things was about improving things for\nthese users. It was not about covering myself -- not at all. It would\nbe easy for me to throw up my hands, and change nothing here (keep the\nbehavior that makes FreezeLimit derived from the vacuum_freeze_min_age\nGUC), since it's all incidental to the main goals of this patch\nseries.\n\nI still don't understand why you think that my idea (not yet\nimplemented) of making FreezeLimit into a backstop (making it\nautovacuum_freeze_max_age/2 or something) and relying on the new\n\"early freezing\" criteria for almost everything is going to make the\nsituation worse in this scenario with long idle cursors. 
It's intended\nto make it better.\n\nWhy do you think that the current vacuum_freeze_min_age-based\nFreezeLimit isn't actually the main problem in these scenarios? I\nthink that the way that that works right now (in particular during\naggressive VACUUMs) is just an accident of history. It's all path\ndependence -- each incremental step may have made sense, but what we\nhave now doesn't seem to. Waiting for a cleanup lock might feel like\nthe diligent thing to do, but that doesn't make it so.\n\nMy sense is that there are very few apps that are hopelessly incapable\nof advancing relfrozenxid from day one. I find it much easier to\nbelieve that users that had this experience got away with it for a\nvery long time, until their luck ran out, somehow. I would like to\nminimize the chance of that ever happening, to the extent that that's\npossible within the confines of the basic heapam/vacuumlazy.c\ninvariants.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 18 Jan 2022 10:48:24 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Tue, Jan 18, 2022 at 1:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> That's what I was reacting to -- it had nothing to do with any\n> concerns you may have had. I wasn't thinking about long-idle cursors\n> at all. I was defending myself, because I was put in a position where\n> I had to defend myself.\n\nI don't think I've said anything on this thread that is an attack on\nyou. I am getting pretty frustrated with the tenor of the discussion,\nthough. 
I feel like you're the one attacking me, and I don't like it.\n\n> I still don't understand why you think that my idea (not yet\n> implemented) of making FreezeLimit into a backstop (making it\n> autovacuum_freeze_max_age/2 or something) and relying on the new\n> \"early freezing\" criteria for almost everything is going to make the\n> situation worse in this scenario with long idle cursors. It's intended\n> to make it better.\n\nI just don't understand how I haven't been able to convey my concern\nhere by now. I've already written multiple emails about it. If none of\nthem were clear enough for you to understand, I'm not sure how saying\nthe same thing over again can help. When I say I've already written\nabout this, I'm referring specifically to the following:\n\n- https://postgr.es/m/CA+TgmobKJm9BsZR3ETeb6MJdLKWxKK5ZXx0XhLf-W9kUgvOcNA@mail.gmail.com\nin the second-to-last paragraph, beginning with \"I don't really see\"\n- https://www.postgresql.org/message-id/CA%2BTgmoaGoZ2wX6T4sj0eL5YAOQKW3tS8ViMuN%2BtcqWJqFPKFaA%40mail.gmail.com\nin the second paragraph beginning with \"Because waiting on a lock\"\n- https://www.postgresql.org/message-id/CA%2BTgmoZYri_LUp4od_aea%3DA8RtjC%2B-Z1YmTc7ABzTf%2BtRD2Opw%40mail.gmail.com\nin the paragraph beginning with \"That's the part I'm not sure I\nbelieve.\"\n\nFor all of that, I'm not even convinced that you're wrong. I just\nthink you might be wrong. I don't really know. It seems to me however\nthat you're understating the value of waiting, which I've tried to\nexplain in the above places. Waiting does have the very real\ndisadvantage of starving the rest of the system of the work that\nautovacuum worker would have been doing, and that's why I think you\nmight be right. However, there are cases where waiting, and only\nwaiting, gets the job done. If you're not willing to admit that those\ncases exist, or you think they don't matter, then we disagree. 
If you\nadmit that they exist and think they matter but believe that there's\nsome reason why increasing FreezeLimit can't cause any damage, then\neither (a) you have a good reason for that belief which I have thus\nfar been unable to understand or (b) you're more optimistic about the\nproposed change than can be entirely justified.\n\n> My sense is that there are very few apps that are hopelessly incapable\n> of advancing relfrozenxid from day one. I find it much easier to\n> believe that users that had this experience got away with it for a\n> very long time, until their luck ran out, somehow. I would like to\n> minimize the chance of that ever happening, to the extent that that's\n> possible within the confines of the basic heapam/vacuumlazy.c\n> invariants.\n\nI agree with the idea that most people are OK at the beginning and\nthen at some point their luck runs out and catastrophe strikes. I\nthink there are a couple of different kinds of catastrophe that can\nhappen. For instance, somebody could park a cursor in the middle of a\ntable someplace and leave it there until the snow melts. Or, somebody\ncould take a table lock and sit on it forever. Or, there could be a\ncorrupted page in the table that causes VACUUM to error out every time\nit's reached. In the second and third situations, it doesn't matter a\nbit what we do with FreezeLimit, but in the first one it might. If the\nuser is going to leave that cursor sitting there literally forever,\nthe best solution is to raise FreezeLimit as high as we possibly can.\nThe system is bound to shut down due to wraparound at some point, but\nwe at least might as well vacuum other stuff while we're waiting for\nthat to happen. 
On the other hand if that user is going to close that\ncursor after 10 minutes and open a new one in the same place 10\nseconds later, the best thing to do is to keep FreezeLimit as low as\npossible, because the first time we wait for the pin to be released\nwe're guaranteed to advance relfrozenxid within 10 minutes, whereas if\nwe don't do that we may keep missing the brief windows in which no\ncursor is held for a very long time. But we have absolutely no way of\nknowing which of those things is going to happen on any particular\nsystem, or of estimating which one is more common in general.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Jan 2022 09:56:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Wed, Jan 19, 2022 at 6:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I don't think I've said anything on this thread that is an attack on\n> you. I am getting pretty frustrated with the tenor of the discussion,\n> though. I feel like you're the one attacking me, and I don't like it.\n\n\"Attack\" is a strong word (much stronger than \"defend\"), and I don't\nthink I'd use it to describe anything that has happened on this\nthread. All I said was that you misrepresented my views when you\npounced on my use of the word \"continuous\". Which, honestly, I was\nvery surprised by.\n\n> For all of that, I'm not even convinced that you're wrong. I just\n> think you might be wrong. I don't really know.\n\nI agree that I might be wrong, though of course I think that I'm\nprobably correct. I value your input as a critical voice -- that's\ngenerally how we get really good designs.\n\n> However, there are cases where waiting, and only\n> waiting, gets the job done. 
If you're not willing to admit that those\n> cases exist, or you think they don't matter, then we disagree.\n\nThey exist, of course. That's why I don't want to completely eliminate\nthe idea of waiting for a cleanup lock. Rather, I want to change the\ndesign to recognize that that's an extreme measure, that should be\ndelayed for as long as possible. There are many ways that the problem\ncould naturally resolve itself.\n\nWaiting for a cleanup lock after only 50 million XIDs (the\nvacuum_freeze_min_age default) is like performing brain surgery to\ntreat somebody with a headache (at least with the infrastructure from\nthe earlier patches in place). It's not impossible that \"surgery\"\ncould help, in theory (could be a tumor, better to catch these things\nearly!), but that fact alone can hardly justify such a drastic\nmeasure. That doesn't mean that brain surgery isn't ever appropriate,\nof course. It should be delayed until it starts to become obvious that\nit's really necessary (but before it really is too late).\n\n> If you\n> admit that they exist and think they matter but believe that there's\n> some reason why increasing FreezeLimit can't cause any damage, then\n> either (a) you have a good reason for that belief which I have thus\n> far been unable to understand or (b) you're more optimistic about the\n> proposed change than can be entirely justified.\n\nI don't deny that it's just about possible that the changes that I'm\nthinking of could make the situation worse in some cases, but I think\nthat the overwhelming likelihood is that things will be improved\nacross the board.\n\nConsider the age of the tables from BenchmarkSQL, with the patch series:\n\n relname │ age │ mxid_age\n──────────────────┼─────────────┼──────────\n bmsql_district │ 657 │ 0\n bmsql_warehouse │ 696 │ 0\n bmsql_item │ 1,371,978 │ 0\n bmsql_config │ 1,372,061 │ 0\n bmsql_new_order │ 3,754,163 │ 0\n bmsql_history │ 11,545,940 │ 0\n bmsql_order_line │ 23,095,678 │ 0\n bmsql_oorder │ 
40,653,743 │ 0\n bmsql_customer │ 51,371,610 │ 0\n bmsql_stock │ 51,371,610 │ 0\n(10 rows)\n\nWe see significant \"natural variation\" here, unlike HEAD, where the\nage of all tables is exactly the same at all times, or close to it\n(incidentally, this leads to the largest tables all being\nanti-wraparound VACUUMed at the same time). There is a kind of natural\nebb and flow for each table over time, as relfrozenxid is advanced,\ndue in part to workload characteristics. Less than half of all XIDs\nwill ever modify the two largest tables, for example, and so\nautovacuum should probably never be launched because of the age of\neither table (barring some change in workload conditions, perhaps). As\nI've said a few times now, XIDs are generally \"the wrong unit\", except\nwhen needed as a backstop against wraparound failure.\n\nThe natural variation that I see contributes to my optimism. A\nsituation where we cannot get a cleanup lock may well resolve itself,\nfor many reasons, that are hard to precisely nail down but are\nnevertheless very real.\n\nThe vacuum_freeze_min_age design (particularly within an aggressive\nVACUUM) is needlessly rigid, probably just because the assumption\nbefore now has always been that we can only advance relfrozenxid in an\naggressive VACUUM (it might happen in a non-aggressive VACUUM if we\nget very lucky, which cannot be accounted for). Because it is rigid,\nit is brittle. Because it is brittle, it will (on a long enough\ntimeline, for a susceptible workload) actually break.\n\n> On the other hand if that user is going to close that\n> cursor after 10 minutes and open a new one in the same place 10\n> seconds later, the best thing to do is to keep FreezeLimit as low as\n> possible, because the first time we wait for the pin to be released\n> we're guaranteed to advance relfrozenxid within 10 minutes, whereas if\n> we don't do that we may keep missing the brief windows in which no\n> cursor is held for a very long time. 
But we have absolutely no way of\n> knowing which of those things is going to happen on any particular\n> system, or of estimating which one is more common in general.\n\nI agree with all that, and I think that this particular scenario is\nthe crux of the issue.\n\nThe first time this happens (and we don't get a cleanup lock), then we\nwill at least be able to set relfrozenxid to the exact oldest unfrozen\nXID. So that'll already have bought us some wallclock time -- often a\ngreat deal (why should the oldest XID on such a page be particularly\nold?). Furthermore, there will often be many more VACUUMs before we\nneed to do an aggressive VACUUM -- each of these VACUUM operations is\nan opportunity to freeze the oldest tuple that holds up cleanup. Or\nmaybe this XID is in a dead tuple, and so somebody's opportunistic\npruning operation does the right thing for us. Never underestimate the\npower of dumb luck, especially in a situation where there are many\nindividual \"trials\", and we only have to get lucky once.\n\nIf and when that doesn't work out, and we actually have to do an\nanti-wraparound VACUUM, then something will have to give. Since\nanti-wraparound VACUUMs are naturally confined to certain kinds of\ntables/workloads with the patch series, we can now be pretty confident\nthat the problem really is with this one problematic heap page, with\nthe idle cursor. We could even verify this directly if we wanted to,\nby noticing that the preexisting relfrozenxid is an exact match for\none XID on some can't-cleanup-lock page -- we could emit a WARNING\nabout the page/tuple if we wanted to. To return to my colorful analogy\nfrom earlier, we now know that the patient almost certainly has a\nbrain tumor.\n\nWhat new risk is implied by delaying the wait like this? Very little,\nI believe. Let's say we derive FreezeLimit from\nautovacuum_freeze_max_age/2 (instead of vacuum_freeze_min_age). 
We\nstill ought to have the opportunity to wait for the cleanup lock for\nrather a long time -- if the XID consumption rate is so high that that\nisn't true, then we're doomed anyway. All told, there seems to be a\nhuge net reduction in risk with this design.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 19 Jan 2022 11:53:59 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Wed, Jan 19, 2022 at 2:54 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On the other hand if that user is going to close that\n> > cursor after 10 minutes and open a new one in the same place 10\n> > seconds later, the best thing to do is to keep FreezeLimit as low as\n> > possible, because the first time we wait for the pin to be released\n> > we're guaranteed to advance relfrozenxid within 10 minutes, whereas if\n> > we don't do that we may keep missing the brief windows in which no\n> > cursor is held for a very long time. But we have absolutely no way of\n> > knowing which of those things is going to happen on any particular\n> > system, or of estimating which one is more common in general.\n>\n> I agree with all that, and I think that this particular scenario is\n> the crux of the issue.\n\nGreat, I'm glad we agree on that much. I would be interested in\nhearing what other people think about this scenario.\n\n> The first time this happens (and we don't get a cleanup lock), then we\n> will at least be able to set relfrozenxid to the exact oldest unfrozen\n> XID. So that'll already have bought us some wallclock time -- often a\n> great deal (why should the oldest XID on such a page be particularly\n> old?). Furthermore, there will often be many more VACUUMs before we\n> need to do an aggressive VACUUM -- each of these VACUUM operations is\n> an opportunity to freeze the oldest tuple that holds up cleanup. 
Or\n> maybe this XID is in a dead tuple, and so somebody's opportunistic\n> pruning operation does the right thing for us. Never underestimate the\n> power of dumb luck, especially in a situation where there are many\n> individual \"trials\", and we only have to get lucky once.\n>\n> If and when that doesn't work out, and we actually have to do an\n> anti-wraparound VACUUM, then something will have to give. Since\n> anti-wraparound VACUUMs are naturally confined to certain kinds of\n> tables/workloads with the patch series, we can now be pretty confident\n> that the problem really is with this one problematic heap page, with\n> the idle cursor. We could even verify this directly if we wanted to,\n> by noticing that the preexisting relfrozenxid is an exact match for\n> one XID on some can't-cleanup-lock page -- we could emit a WARNING\n> about the page/tuple if we wanted to. To return to my colorful analogy\n> from earlier, we now know that the patient almost certainly has a\n> brain tumor.\n>\n> What new risk is implied by delaying the wait like this? Very little,\n> I believe. Let's say we derive FreezeLimit from\n> autovacuum_freeze_max_age/2 (instead of vacuum_freeze_min_age). We\n> still ought to have the opportunity to wait for the cleanup lock for\n> rather a long time -- if the XID consumption rate is so high that that\n> isn't true, then we're doomed anyway. All told, there seems to be a\n> huge net reduction in risk with this design.\n\nI'm just being honest here when I say that I can't see any huge\nreduction in risk. Nor a huge increase in risk. It just seems\nspeculative to me. If I knew something about the system or the\nworkload, then I could say what would likely work out best on that\nsystem, but in the abstract I neither know nor understand how it's\npossible to know.\n\nMy gut feeling is that it's going to make very little difference\neither way. 
People who never release their cursors or locks or\nwhatever are going to be sad either way, and people who usually do\nwill be happy either way. There's some in-between category of people\nwho release sometimes but not too often for whom it may matter,\npossibly quite a lot. It also seems possible that one decision rather\nthan another will make the happy people MORE happy, or the sad people\nMORE sad. For most people, though, I think it's going to be\nirrelevant. The fact that you seem to view the situation quite\ndifferently is a big part of what worries me here. At least one of us\nis missing something.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Jan 2022 09:55:39 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Jan 20, 2022 at 6:55 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Great, I'm glad we agree on that much. I would be interested in\n> hearing what other people think about this scenario.\n\nAgreed.\n\n> I'm just being honest here when I say that I can't see any huge\n> reduction in risk. Nor a huge increase in risk. It just seems\n> speculative to me. If I knew something about the system or the\n> workload, then I could say what would likely work out best on that\n> system, but in the abstract I neither know nor understand how it's\n> possible to know.\n\nI think that it's very hard to predict the timeline with a scenario\nlike this -- no question. But I often imagine idealized scenarios like\nthe one you brought up with cursors, with the intention of lowering\nthe overall exposure to problems to the extent that that's possible;\nif it was obvious, we'd have fixed it by now already. I cannot think\nof any reason why making FreezeLimit into what I've been calling a\nbackstop introduces any new risk, but I can think of ways in which it\navoids risk. 
We shouldn't be waiting indefinitely for something\ntotally outside our control or understanding, and so blocking all\nfreezing and other maintenance on the table, until it's provably\nnecessary.\n\nMore fundamentally, freezing should be thought of as an overhead of\nstoring tuples in heap blocks, as opposed to an overhead of\ntransactions (that allocate XIDs). Meaning that FreezeLimit becomes\nalmost an emergency thing, closely associated with aggressive\nanti-wraparound VACUUMs.\n\n> My gut feeling is that it's going to make very little difference\n> either way. People who never release their cursors or locks or\n> whatever are going to be sad either way, and people who usually do\n> will be happy either way.\n\nIn a real world scenario, the rate at which XIDs are used could be\nvery low. Buying a few hundred million more XIDs until the pain begins\ncould amount to buying weeks or months for the user in practice. Plus\nthey have visibility into the issue, in that they can potentially see\nexactly when they stopped being able to advance relfrozenxid by\nlooking at the autovacuum logs.\n\nMy thinking on vacuum_freeze_min_age has shifted very slightly. I now\nthink that I'll probably need to keep it around, just so things like\nVACUUM FREEZE (which sets vacuum_freeze_min_age to 0 internally)\ncontinue to work. So maybe its default should be changed to -1, which\nis interpreted as \"whatever autovacuum_freeze_max_age/2 is\". But it\nshould still be greatly deemphasized in user docs.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 20 Jan 2022 08:45:07 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Jan 20, 2022 at 11:45 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> My thinking on vacuum_freeze_min_age has shifted very slightly. 
I now\n> think that I'll probably need to keep it around, just so things like\n> VACUUM FREEZE (which sets vacuum_freeze_min_age to 0 internally)\n> continue to work. So maybe its default should be changed to -1, which\n> is interpreted as \"whatever autovacuum_freeze_max_age/2 is\". But it\n> should still be greatly deemphasized in user docs.\n\nI like that better, because it lets us retain an escape valve in case\nwe should need it. I suggest that the documentation should say things\nlike \"The default is believed to be suitable for most use cases\" or\n\"We are not aware of a reason to change the default\" rather than\nsomething like \"There is almost certainly no good reason to change\nthis\" or \"What kind of idiot are you, anyway?\" :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Jan 2022 14:33:03 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Jan 20, 2022 at 11:33 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Jan 20, 2022 at 11:45 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > My thinking on vacuum_freeze_min_age has shifted very slightly. I now\n> > think that I'll probably need to keep it around, just so things like\n> > VACUUM FREEZE (which sets vacuum_freeze_min_age to 0 internally)\n> > continue to work. So maybe its default should be changed to -1, which\n> > is interpreted as \"whatever autovacuum_freeze_max_age/2 is\". But it\n> > should still be greatly deemphasized in user docs.\n>\n> I like that better, because it lets us retain an escape valve in case\n> we should need it.\n\nI do see some value in that, too. 
Though it's not going to be a way of\nturning off the early freezing stuff, which seems unnecessary (though\nI do still have work to do on getting the overhead for that down).\n\n> I suggest that the documentation should say things\n> like \"The default is believed to be suitable for most use cases\" or\n> \"We are not aware of a reason to change the default\" rather than\n> something like \"There is almost certainly no good reason to change\n> this\" or \"What kind of idiot are you, anyway?\" :-)\n\nI will admit to having a big bias here: I absolutely *loathe* these\nGUCs. I really, really hate them.\n\nConsider how we have to include messy caveats about\nautovacuum_freeze_min_age when talking about\nautovacuum_vacuum_insert_scale_factor. Then there's the fact that you\nreally cannot think about the rate of XID consumption intuitively --\nit has at best a weak, unpredictable relationship with anything that\nusers can understand, such as data stored or wall clock time.\n\nThen there are the problems with the equivalent MultiXact GUCs, which\nsomehow, against all odds, are even worse:\n\nhttps://buttondown.email/nelhage/archive/notes-on-some-postgresql-implementation-details/\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 20 Jan 2022 14:00:12 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, 20 Jan 2022 at 17:01, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> Then there's the fact that you\n> really cannot think about the rate of XID consumption intuitively --\n> it has at best a weak, unpredictable relationship with anything that\n> users can understand, such as data stored or wall clock time.\n\nThis confuses me. \"Transactions per second\" is a headline database\nmetric that lots of users actually focus on quite heavily -- rather\ntoo heavily imho. 
Ok, XID consumption is only a subset of transactions\nthat are not read-only but that's a detail that's pretty easy to\nexplain and users get pretty quickly.\n\nThere are corner cases like transactions that look read-only but are\nactually read-write or transactions that consume multiple xids but\ncomplex systems are full of corner cases and people don't seem too\nsurprised about these things.\n\nWhat I find confuses people much more is the concept of the\noldestxmin. I think most of the autovacuum problems I've seen come\nfrom cases where autovacuum is happily kicking off useless vacuums\nbecause the oldestxmin hasn't actually advanced enough for them to do\nany useful work.\n\n-- \ngreg\n\n\n", "msg_date": "Fri, 21 Jan 2022 15:06:24 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Jan 21, 2022 at 12:07 PM Greg Stark <stark@mit.edu> wrote:\n> This confuses me. \"Transactions per second\" is a headline database\n> metric that lots of users actually focus on quite heavily -- rather\n> too heavily imho.\n\nBut transactions per second is for the whole database, not for\nindividual tables. It's also really a benchmarking thing, where the\nsize and variety of transactions is fixed. With something like pgbench\nit actually is exactly the same thing, but such a workload is not at\nall realistic. Even BenchmarkSQL/TPC-C isn't like that, despite the\nfact that it is a fairly synthetic workload (it's just not super\nsynthetic).\n\n> Ok, XID consumption is only a subset of transactions\n> that are not read-only but that's a detail that's pretty easy to\n> explain and users get pretty quickly.\n\nMy point was mostly this: the number of distinct extant unfrozen tuple\nheaders (and the range of the relevant XIDs) is generally highly\nunpredictable today. 
And the number of tuples we'll have to freeze to\nbe able to advance relfrozenxid by a good amount is quite variable, in\ngeneral.\n\nFor example, if we bulk extend a relation as part of an ETL process,\nthen the number of distinct XIDs could be as low as 1, even though we\ncan expect a great deal of \"freeze debt\" that will have to be paid off\nat some point (with the current design, in the common case where the\nuser doesn't account for this effect because they're not already an\nexpert). There are other common cases that are not quite as extreme as\nthat, that still have the same effect -- even an expert will find it\nhard or impossible to tune autovacuum_freeze_min_age for that.\n\nAnother case of interest (that illustrates the general principle) is\nsomething like pgbench_tellers. We'll never have an aggressive VACUUM\nof the table with the patch, and we shouldn't ever need to freeze any\ntuples. But, owing to workload characteristics, we'll constantly be\nable to keep its relfrozenxid very current, because (even if we\nintroduce skew) each individual row cannot go very long without being\nupdated, allowing old XIDs to age out that way.\n\nThere is also an interesting middle ground, where you get a mixture of\nboth tendencies due to skew. The tuple that's most likely to get\nupdated was the one that was just updated. How are you as a DBA ever\nsupposed to tune autovacuum_freeze_min_age if tuples happen to be\nqualitatively different in this way?\n\n> What I find confuses people much more is the concept of the\n> oldestxmin. I think most of the autovacuum problems I've seen come\n> from cases where autovacuum is happily kicking off useless vacuums\n> because the oldestxmin hasn't actually advanced enough for them to do\n> any useful work.\n\nAs it happens, the proposed log output won't use the term oldestxmin\nanymore -- I think that it makes sense to rename it to \"removable\ncutoff\". 
Here's an example:\n\nLOG: automatic vacuum of table \"regression.public.bmsql_oorder\": index scans: 1\npages: 0 removed, 317308 remain, 250258 skipped using visibility map\n(78.87% of total)\ntuples: 70 removed, 34105925 remain (6830471 newly frozen), 2528 are\ndead but not yet removable\nremovable cutoff: 37574752, which is 230115 xids behind next\nnew relfrozenxid: 35221275, which is 5219310 xids ahead of previous value\nindex scan needed: 55540 pages from table (17.50% of total) had\n3339809 dead item identifiers removed\nindex \"bmsql_oorder_pkey\": pages: 144257 in total, 0 newly deleted, 0\ncurrently deleted, 0 reusable\nindex \"bmsql_oorder_idx2\": pages: 330083 in total, 0 newly deleted, 0\ncurrently deleted, 0 reusable\nI/O timings: read: 7928.207 ms, write: 1386.662 ms\navg read rate: 33.107 MB/s, avg write rate: 26.218 MB/s\nbuffer usage: 220825 hits, 443331 misses, 351084 dirtied\nWAL usage: 576110 records, 364797 full page images, 2046767817 bytes\nsystem usage: CPU: user: 10.62 s, system: 7.56 s, elapsed: 104.61 s\n\nNote also that I deliberately made the \"new relfrozenxid\" line that\nimmediately follows (information that we haven't shown before now)\nsimilar, to highlight that they're now closely related concepts. Now\nif you VACUUM a table that is either empty or has only frozen tuples,\nVACUUM will set relfrozenxid to oldestxmin/removable cutoff.\nInternally, oldestxmin is the \"starting point\" for our final/target\nrelfrozenxid for the table. 
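To make that bookkeeping concrete, here's a toy model of it in Python. The names are mine and wraparound is ignored (the real code compares XIDs with TransactionIdPrecedes()); this is a sketch of the idea only, not the patch's actual implementation:

```python
def final_relfrozenxid(oldest_xmin: int, unfrozen_xids: list[int]) -> int:
    """Toy model: the target relfrozenxid starts out at oldestxmin (the
    "removable cutoff"), and every older XID that VACUUM ends up leaving
    unfrozen ratchets the target back."""
    target = oldest_xmin
    for xid in unfrozen_xids:
        if xid < target:  # real code: TransactionIdPrecedes(xid, target)
            target = xid
    return target

# A table whose tuples were all frozen (or removed) gets
# relfrozenxid == removable cutoff:
assert final_relfrozenxid(37574752, []) == 37574752
# One unfreezable older XID (say, on a page we couldn't cleanup-lock)
# drags the final value back:
assert final_relfrozenxid(37574752, [35221275, 37000000]) == 35221275
```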
We ratchet it back dynamically, whenever\nwe see an older-than-current-target XID that cannot be immediately\nfrozen (e.g., when we can't easily get a cleanup lock on the page).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 21 Jan 2022 12:42:23 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Jan 20, 2022 at 2:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I do see some value in that, too. Though it's not going to be a way of\n> turning off the early freezing stuff, which seems unnecessary (though\n> I do still have work to do on getting the overhead for that down).\n\nAttached is v7, a revision that overhauls the algorithm that decides\nwhat to freeze. I'm now calling it block-driven freezing in the commit\nmessage. Also included is a new patch, that makes VACUUM record zero\nfree space in the FSM for an all-visible page, unless the total amount\nof free space happens to be greater than one half of BLCKSZ.\n\nThe fact that I am now including this new FSM patch (v7-0006-*patch)\nmay seem like a case of expanding the scope of something that could\nwell do without it. But hear me out! It's true that the new FSM patch\nisn't essential. I'm including it now because it seems relevant to the\napproach taken with block-driven freezing -- it may even make my\ngeneral approach easier to understand. The new approach to freezing is\nto freeze every tuple on a block that is about to be set all-visible\n(and thus set it all-frozen too), or to not freeze anything on the\npage at all (at least until one XID gets really old, which should be\nrare). 
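A toy model of that page-level choice, in Python -- the names and the "really old" age threshold are made up for illustration; the real decision is of course made in vacuumlazy.c's C code:

```python
def page_freeze_plan(becoming_all_visible: bool,
                     oldest_xid_age: int,
                     force_freeze_age: int = 50_000_000) -> str:
    """Toy model: freeze all of a page's tuples when the page is about to
    be set all-visible (so it can be marked all-frozen at the same time);
    otherwise freeze nothing, unless some XID on the page has grown so
    old that leaving it unfrozen is no longer acceptable (rare)."""
    if becoming_all_visible:
        return "freeze whole page"
    if oldest_xid_age >= force_freeze_age:
        return "freeze old tuples"  # rare fallback for the edge case
    return "freeze nothing"

assert page_freeze_plan(True, 1_000) == "freeze whole page"
assert page_freeze_plan(False, 1_000) == "freeze nothing"
assert page_freeze_plan(False, 60_000_000) == "freeze old tuples"
```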
This approach has all the benefits that I described upthread,\nand a new benefit: it effectively encourages the application to allow\npages to \"become settled\".\n\nThe main difference in how we freeze here (relative to v6 of the\npatch) is that I'm *not* freezing a page just because it was\ndirtied/pruned. I now think about freezing as an essentially\npage-level thing, barring edge cases where we have to freeze\nindividual tuples, just because the XIDs really are getting old (it's\nan edge case when we can't freeze all the tuples together due to a mix\nof new and old, which is something we specifically set out to avoid\nnow).\n\nFreezing whole pages\n====================\n\nWhen VACUUM sees that all remaining/unpruned tuples on a page are\nall-visible, it isn't just important because of cost control\nconsiderations. It's deeper than that. It's also treated as a\ntentative signal from the application itself, about the data itself.\nWhich is: this page looks \"settled\" -- it may never be updated again,\nbut if there is an update it likely won't change too much about the\nwhole page. Also, if the page is ever updated in the future, it's\nlikely that that will happen at a much later time than you should\nexpect for those *other* nearby pages, that *don't* appear to be\nsettled. And so VACUUM infers that the page is *qualitatively*\ndifferent to these other nearby pages. VACUUM therefore makes it hard\n(though not impossible) for future inserts or updates to disturb these\nsettled pages, via this FSM behavior -- it is short sighted to just\nsee the space remaining on the page as free space, equivalent to any\nother. This holistic approach seems to work well for\nTPC-C/BenchmarkSQL, and perhaps even in general. More on TPC-C below.\n\nThis is not unlike the approach taken by other DB systems, where free\nspace management is baked into concurrency control, and the concept of\nphysical data independence as we know it from Postgres never really\nexisted. 
My approach also seems related to the concept of a \"tenured\ngeneration\", which is key to generational garbage collection. The\nwhole basis of generational garbage collection is the generational\nhypothesis: \"most objects die young\". This is an empirical observation\nabout how applications written in GC'd programming languages actually\nbehave, not a rigorous principle, and yet in practice it appears to\nalways hold. Intuitively, it seems to me like the hypothesis must work\nin practice because if it didn't then a counterexample nemesis\napplication's behavior would be totally chaotic, in every way.\nTheoretically possible, but of no real concern, since the program\nmakes zero practical sense *as an actual program*. A Java program must\nmake sense to *somebody* (at least the person that wrote it), which,\nit turns out, helpfully constrains the space of possibilities that any\nindustrial strength GC implementation needs to handle well.\n\nThe same principles seem to apply here, with VACUUM. Grouping logical\nrows into pages that become their \"permanent home until further\nnotice\" may be somewhat arbitrary, at first, but that doesn't mean it\nwon't end up sticking. Just like with generational garbage collection,\nwhere the application isn't expected to instruct the GC about its\nplans for memory that it allocates, that can nevertheless be usefully\norganized into distinct generations through an adaptive process.\n\nSecond order effects\n====================\n\nRelating the FSM to page freezing/all-visible setting makes much more\nsense if you consider the second order effects.\n\nThere is bound to be competition for free space among backends that\naccess the free space map. By *not* freezing a page during VACUUM\nbecause it looks unsettled, we make its free space available in the\ntraditional way instead. 
It follows that unsettled pages (in tables\nwith lots of updates) are now the only place that backends that need\nmore free space from the FSM can look -- unsettled pages therefore\nbecome a hot commodity, freespace-wise. A page that initially appeared\n\"unsettled\", that went on to become settled in this newly competitive\nenvironment might have that happen by pure chance -- but probably not.\nIt *could* happen by chance, of course -- in which case the page will\nget dirtied again, and the cycle continues, for now. There will be\nfurther opportunities to figure it out, and freezing the tuples on the\npage \"prematurely\" still has plenty of benefits.\n\nLocality matters a lot, obviously. The goal with the FSM stuff is\nmerely to make it *possible* for pages to settle naturally, to the\nextent that we can. We really just want to avoid hindering a naturally\noccurring process -- we want to avoid destroying naturally occurring\nlocality. We must be willing to accept some cost for that. Even if it\ntakes a few attempts for certain pages, constraining the application's\nchoice of where to get free space from (can't be a page marked\nall-visible) allows pages to *systematically* become settled over\ntime.\n\nThe application is in charge, really -- not VACUUM. This is already\nthe case, whether we like it or not. VACUUM needs to learn to live in\nthat reality, rather than fighting it. When VACUUM considers a page\nsettled, and the physical page still has a relatively large amount of\nfree space (say 45% of BLCKSZ, a borderline case in the new FSM\npatch), \"losing\" so much free space certainly is unappealing. We set\nthe free space to 0 in the free space map all the same, because we're\ncutting our losses at that point. 
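Expressed as a toy model (Python, illustrative names only -- the real change is to how VACUUM records space in the FSM):

```python
BLCKSZ = 8192  # default Postgres block size

def fsm_space_to_record(all_visible: bool, free_bytes: int) -> int:
    """Toy model of the experimental FSM heuristic: a page that VACUUM
    sets all-visible ("settled") advertises zero free space, unless more
    than half of the block is still free."""
    if all_visible and free_bytes <= BLCKSZ // 2:
        return 0  # cut our losses: steer new tuples elsewhere
    return free_bytes

# The borderline case of a settled page that is 45% free still records zero:
assert fsm_space_to_record(True, int(BLCKSZ * 0.45)) == 0
# A mostly-empty all-visible page keeps advertising its space:
assert fsm_space_to_record(True, int(BLCKSZ * 0.60)) == int(BLCKSZ * 0.60)
# Unsettled pages are unaffected:
assert fsm_space_to_record(False, 3000) == 3000
```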
While the exact threshold I've\nproposed is tentative, the underlying theory seems pretty sound to me.\nThe BLCKSZ/2 cutoff (and the way that it extends the general rules for\nwhole-page freezing) is intended to catch pages that are qualitatively\ndifferent, as well as quantitatively different. It is a balancing act,\nbetween not wasting space, and the risk of systemic problems involving\nexcessive amounts of non-HOT updates that must move a successor\nversion to another page.\n\nIt's possible that a higher cutoff (for example a cutoff of 80% of\nBLCKSZ, not 50%) will actually lead to *worse* space utilization, in\naddition to the downsides from fragmentation -- it's far from a simple\ntrade-off. (Not that you should believe that 50% is special, it's just\na starting point for me.)\n\nTPC-C\n=====\n\nI'm going to talk about a benchmark that ran throughout the week,\nstarting on Monday. Each run lasted 24 hours, and there were 2 runs in\ntotal, for both the patch and for master/baseline. So this benchmark\nlasted 4 days, not including the initial bulk loading, with databases\nthat were over 450GB in size by the time I was done (that's 450GB+ for\nboth the patch and master). Benchmarking for days at a time is pretty\ninconvenient, but it seems necessary to see certain effects in play.\nWe need to wait until the baseline/master case starts to have\nanti-wraparound VACUUMs with default, realistic settings, which just\ntakes days and days.\n\nI make available all of my data for the benchmark in question, which\nis way more information than anybody is likely to want -- I dump\nanything that even might be useful from the system views in an\nautomated way. 
There are html reports for all 4 24 hour long runs.\nGoogle drive link:\n\nhttps://drive.google.com/drive/folders/1A1g0YGLzluaIpv-d_4o4thgmWbVx3LuR?usp=sharing\n\nWhile the patch did well overall, and I will get to the particulars\ntowards the end of the email, I want to start with what I consider to\nbe the important part: the user/admin experience with VACUUM, and\nVACUUM's performance stability. This is about making VACUUM less\nscary.\n\nAs I've said several times now, with an append-only table like\npgbench_history we see a consistent pattern where relfrozenxid is set\nto a value very close to the same VACUUM's OldestXmin value (even\nprecisely equal to OldestXmin) during each VACUUM operation, again and\nagain, forever -- that case is easy to understand and appreciate, and\nhas already been discussed. Now (with v7's new approach to freezing),\na related pattern can be seen in the case of the two big, troublesome\nTPC-C tables, the orders and order lines tables.\n\nTo recap, these tables are somewhat like the history table, in that\nnew orders insert into both tables, again and again, forever. But they\nalso have one huge difference to simple append-only tables too, which\nis the source of most of our problems with TPC-C. The difference is:\nthere are also delayed, correlated updates of each row from each\ntable. Exactly one such update per row for both tables, which takes\nplace hours after each order's insert, when the earlier order is\nprocessed by TPC-C's delivery transaction. In the long run we need the\ndata to age out and not get re-dirtied, as the table grows and grows\nindefinitely, much like with a simple append-only table. At the same\ntime, we don't want to have poor free space management for these\ndeferred updates. 
It's adversarial, sort of, but in a way that is\ngrounded in reality.\n\nWith the order and order lines tables, relfrozenxid tends to be\nadvanced up to the OldestXmin used by the *previous* VACUUM operation\n-- an unmistakable pattern. I'll show you all of the autovacuum log\noutput for the orders table during the second 24 hour long benchmark\nrun:\n\n2022-01-27 01:46:27 PST LOG: automatic vacuum of table\n\"regression.public.bmsql_oorder\": index scans: 1\npages: 0 removed, 1205349 remain, 887225 skipped using visibility map\n(73.61% of total)\ntuples: 253872 removed, 134182902 remain (26482225 newly frozen),\n27193 are dead but not yet removable\nremovable cutoff: 243783407, older by 728844 xids when operation ended\nnew relfrozenxid: 215400514, which is 26840669 xids ahead of previous value\n...\n2022-01-27 05:54:39 PST LOG: automatic vacuum of table\n\"regression.public.bmsql_oorder\": index scans: 1\npages: 0 removed, 1345302 remain, 993924 skipped using visibility map\n(73.88% of total)\ntuples: 261656 removed, 150022816 remain (29757570 newly frozen),\n29216 are dead but not yet removable\nremovable cutoff: 276319403, older by 826850 xids when operation ended\nnew relfrozenxid: 243838706, which is 28438192 xids ahead of previous value\n...\n2022-01-27 10:37:24 PST LOG: automatic vacuum of table\n\"regression.public.bmsql_oorder\": index scans: 1\npages: 0 removed, 1504707 remain, 1110002 skipped using visibility map\n(73.77% of total)\ntuples: 316086 removed, 167990124 remain (33754949 newly frozen),\n33326 are dead but not yet removable\nremovable cutoff: 313328445, older by 987732 xids when operation ended\nnew relfrozenxid: 276309397, which is 32470691 xids ahead of previous value\n...\n2022-01-27 15:49:51 PST LOG: automatic vacuum of table\n\"regression.public.bmsql_oorder\": index scans: 1\npages: 0 removed, 1680649 remain, 1250525 skipped using visibility map\n(74.41% of total)\ntuples: 343946 removed, 187739072 remain (37346315 newly frozen),\n38037 
are dead but not yet removable\nremovable cutoff: 354149019, older by 1222160 xids when operation ended\nnew relfrozenxid: 313332249, which is 37022852 xids ahead of previous value\n...\n2022-01-27 21:55:34 PST LOG: automatic vacuum of table\n\"regression.public.bmsql_oorder\": index scans: 1\npages: 0 removed, 1886336 remain, 1403800 skipped using visibility map\n(74.42% of total)\ntuples: 389748 removed, 210899148 remain (43453900 newly frozen),\n45802 are dead but not yet removable\nremovable cutoff: 401955979, older by 1458514 xids when operation ended\nnew relfrozenxid: 354134615, which is 40802366 xids ahead of previous value\n\nThis mostly speaks for itself, I think. (Anybody that's interested can\ndrill down to the logs for order lines, which looks similar.)\n\nThe effect we see with the order/order lines table isn't perfectly\nreliable. Actually, it depends on how you define it. It's possible\nthat we won't be able to acquire a cleanup lock on the wrong page at\nthe wrong time, and as a result fail to advance relfrozenxid by the\nusual amount, once. But that effect appears to be both rare and of no\nreal consequence. One could reasonably argue that we never fell\nbehind, because we still did 99.9%+ of the required freezing -- we\njust didn't immediately get to advance relfrozenxid, because of a\ntemporary hiccup on one page. We will still advance relfrozenxid by a\nsmall amount. Sometimes it'll be by only hundreds of XIDs when\nmillions or tens of millions of XIDs were expected. Once we advance it\nby some amount, we can reasonably suppose that the issue was just a\nhiccup.\n\nOn the master branch, the first 24 hour period has no anti-wraparound\nVACUUMs, and so looking at that first 24 hour period gives you some\nidea of how worse off we are in the short term -- the freezing stuff\nwon't really start to pay for itself until the second 24 hour run with\nthese mostly-default freeze related settings. 
The second 24 hour run\non master almost exclusively has anti-wraparound VACUUMs for all the\nlargest tables, though -- all at the same time. And not just the first\ntime, either! This causes big spikes that the patch totally avoids,\nsimply by avoiding anti-wraparound VACUUMs. With the patch, there are\nno anti-wraparound VACUUMs, barring tables that will never be vacuumed\nfor any other reason, where it's still inevitable, limited to the\nstock table and customers table.\n\nIt was a mistake for me to emphasize \"no anti-wraparound VACUUMs\noutside pathological cases\" before now. I stand by those statements as\naccurate, but anti-wraparound VACUUMs should not have been given so\nmuch emphasis. Let's assume that somehow we really were to get an\nanti-wraparound VACUUM against one of the tables where that's just not\nexpected, like this orders table -- let's suppose that I got that part\nwrong, in some way. It would hardly matter at all! We'd still have\navoided the freezing cliff during this anti-wraparound VACUUM, which\nis the real benefit. Chances are good that we needed to VACUUM anyway,\njust to clean any very old garbage tuples up -- relfrozenxid is now\npredictive of the age of the oldest garbage tuples, which might have\nbeen a good enough reason to VACUUM anyway. The stampede of\nanti-wraparound VACUUMs against multiple tables seems like it would\nstill be fixed, since relfrozenxid now actually tells us something\nabout the table (as opposed to telling us only about what the user set\nvacuum_freeze_min_age to). The only concerns that this leaves for me\nare all usability related, and not of primary importance (e.g. 
do we\nreally need to make anti-wraparound VACUUMs non-cancelable now?).\n\nTPC-C raw numbers\n=================\n\nThe single most important number for the patch might be the decrease\nin both buffer misses and buffer hits, which I believe is caused by\nthe patch being able to use index-only scans much more effectively\n(with modifications to BenchmarkSQL to improve the indexing strategy\n[1]). This is quite clear from pg_stat_database state at the end.\n\nPatch:\n\nxact_commit | 440,515,133\nxact_rollback | 1,871,142\nblks_read | 3,754,614,188\nblks_hit | 174,551,067,731\ntup_returned | 341,222,714,073\ntup_fetched | 124,797,772,450\ntup_inserted | 2,900,197,655\ntup_updated | 4,549,948,092\ntup_deleted | 165,222,130\n\nHere is the same pg_stat_database info for master:\n\nxact_commit | 440,402,505\nxact_rollback | 1,871,536\nblks_read | 4,002,682,052\nblks_hit | 283,015,966,386\ntup_returned | 346,448,070,798\ntup_fetched | 237,052,965,901\ntup_inserted | 2,899,735,420\ntup_updated | 4,547,220,642\ntup_deleted | 165,103,426\n\nThe blks_read is x0.938 of master/baseline for the patch -- not bad.\nMore importantly, blks_hit is x0.616 for the patch -- quite a\nsignificant reduction in a key cost. Note that we start to get this\nparticular benefit for individual read queries pretty early on --\navoiding unsetting visibility map bits like this matters right from\nthe start. In TPC-C terms, the ORDER_STATUS transaction will have much\nlower latency, particularly tail latency, since it uses index-only\nscans to good effect. There are 5 distinct transaction types from the\nbenchmark, and an improvement to one particular transaction type isn't\nunusual -- so you often have to drill down, and look at the full html\nreport. The latency situation is improved across the board with the\npatch, by quite a bit, especially after the second run. 
This server\ncan sustain much more throughput than the TPC-C spec formally permits,\neven though I've increased the TPM rate from the benchmark by 10x the\nspec legal limit, so query latency is the main TPC-C metric of\ninterest here.\n\nWAL\n===\n\nThen there's the WAL overhead. Like practically any workload, the WAL\nconsumption for this workload is dominated by FPIs, despite the fact\nthat I've tuned checkpoints reasonably well. The patch *does* write\nmore WAL in the first set of runs -- it writes a total of ~3.991 TiB,\nversus ~3.834 TiB for master. In other words, during the first 24 hour\nrun (before the trouble with the anti-wraparound freeze cliff even\nbegins for the master branch), the patch writes x1.040 as much WAL in\ntotal. The good news is that the patch comes out ahead by the end,\nafter the second set of 24 hour runs. By the time the second run\nfinishes, it's 8.332 TiB of WAL total for the patch, versus 8.409 TiB\nfor master, putting the patch at x0.990 in the end -- a small\nimprovement. I believe that most of the WAL doesn't get generated by\nVACUUM here anyway -- opportunistic pruning works well for this\nworkload.\n\nI expect to be able to commit the first 2 patches in a couple of\nweeks, since that won't need to block on making the case for the final\n3 or 4 patches from the patch series. The early stuff is mostly just\nrefactoring work that removes needless differences between aggressive\nand non-aggressive VACUUM operations. It makes a lot of sense on its\nown.\n\n[1] https://github.com/pgsql-io/benchmarksql/pull/16\n--\nPeter Geoghegan", "msg_date": "Sat, 29 Jan 2022 20:42:36 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Sat, Jan 29, 2022 at 11:43 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Jan 20, 2022 at 2:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I do see some value in that, too. 
Though it's not going to be a way of\n> > turning off the early freezing stuff, which seems unnecessary (though\n> > I do still have work to do on getting the overhead for that down).\n>\n> Attached is v7, a revision that overhauls the algorithm that decides\n> what to freeze. I'm now calling it block-driven freezing in the commit\n> message. Also included is a new patch, that makes VACUUM record zero\n> free space in the FSM for an all-visible page, unless the total amount\n> of free space happens to be greater than one half of BLCKSZ.\n>\n> The fact that I am now including this new FSM patch (v7-0006-*patch)\n> may seem like a case of expanding the scope of something that could\n> well do without it. But hear me out! It's true that the new FSM patch\n> isn't essential. I'm including it now because it seems relevant to the\n> approach taken with block-driven freezing -- it may even make my\n> general approach easier to understand.\n\nWithout having looked at the latest patches, there was something in\nthe back of my mind while following the discussion upthread -- the\nproposed opportunistic freezing made a lot more sense if the\nearlier-proposed open/closed pages concept was already available.\n\n> Freezing whole pages\n> ====================\n\n> It's possible that a higher cutoff (for example a cutoff of 80% of\n> BLCKSZ, not 50%) will actually lead to *worse* space utilization, in\n> addition to the downsides from fragmentation -- it's far from a simple\n> trade-off. (Not that you should believe that 50% is special, it's just\n> a starting point for me.)\n\nHow was the space utilization with the 50% cutoff in the TPC-C test?\n\n> TPC-C raw numbers\n> =================\n>\n> The single most important number for the patch might be the decrease\n> in both buffer misses and buffer hits, which I believe is caused by\n> the patch being able to use index-only scans much more effectively\n> (with modifications to BenchmarkSQL to improve the indexing strategy\n> [1]). 
This is quite clear from pg_stat_database state at the end.\n>\n> Patch:\n\n> blks_hit | 174,551,067,731\n> tup_fetched | 124,797,772,450\n\n> Here is the same pg_stat_database info for master:\n\n> blks_hit | 283,015,966,386\n> tup_fetched | 237,052,965,901\n\nThat's impressive.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Feb 2022 14:00:00 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 4, 2022 at 2:00 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n> Without having looked at the latest patches, there was something in\n> the back of my mind while following the discussion upthread -- the\n> proposed opportunistic freezing made a lot more sense if the\n> earlier-proposed open/closed pages concept was already available.\n\nYeah, sorry about that. The open/closed pages concept is still\nsomething I plan on working on. My prototype (which I never posted to\nthe list) will be rebased, and I'll try to target Postgres 16.\n\n> > Freezing whole pages\n> > ====================\n>\n> > It's possible that a higher cutoff (for example a cutoff of 80% of\n> > BLCKSZ, not 50%) will actually lead to *worse* space utilization, in\n> > addition to the downsides from fragmentation -- it's far from a simple\n> > trade-off. (Not that you should believe that 50% is special, it's just\n> > a starting point for me.)\n>\n> How was the space utilization with the 50% cutoff in the TPC-C test?\n\nThe picture was mixed. 
To get the raw numbers, compare\npg-relation-sizes-after-patch-2.out and\npg-relation-sizes-after-master-2.out files from the drive link I\nprovided (to repeat, get them from\nhttps://drive.google.com/drive/u/1/folders/1A1g0YGLzluaIpv-d_4o4thgmWbVx3LuR)\n\nHighlights: the largest table (the bmsql_order_line table) had a total\nsize of x1.006 relative to master, meaning that we did slightly worse\nthere. However, the index on the same table was slightly smaller\ninstead, probably because reducing heap fragmentation tends to make\nthe index deletion stuff work a bit better than before.\n\nCertain small tables (bmsql_district and bmsql_warehouse) were\nactually significantly smaller (less than half their size on master),\nprobably just because the patch can reliably remove LP_DEAD items from\nheap pages, even when a cleanup lock isn't available.\n\nThe bmsql_new_order table was quite a bit larger, but it's not that\nlarge anyway (1250 MB on master at the very end, versus 1433 MB with\nthe patch). This is a clear trade-off, since we get much less\nfragmentation in the same table (as evidenced by the VACUUM output,\nwhere there are fewer pages with any LP_DEAD items per VACUUM with the\npatch). The workload for that table is characterized by inserting new\norders together, and deleting the same orders as a group later on. So\nwe're bound to pay a cost in space utilization to lower the\nfragmentation.\n\n> > blks_hit | 174,551,067,731\n> > tup_fetched | 124,797,772,450\n>\n> > Here is the same pg_stat_database info for master:\n>\n> > blks_hit | 283,015,966,386\n> > tup_fetched | 237,052,965,901\n>\n> That's impressive.\n\nThanks!\n\nIt's still possible to get a big improvement like that with something\nlike TPC-C because there are certain behaviors that are clearly\nsuboptimal -- once you look at the details of the workload, and\ncompare an imaginary ideal to the actual behavior of the system. 
In\nparticular, there is really only one way that the free space\nmanagement can work for the two big tables that will perform\nacceptably -- the orders have to be stored in the same place to begin\nwith, and stay in the same place forever (at least to the extent that\nthat's possible).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 4 Feb 2022 14:41:19 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Sat, Jan 29, 2022 at 11:43 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> When VACUUM sees that all remaining/unpruned tuples on a page are\n> all-visible, it isn't just important because of cost control\n> considerations. It's deeper than that. It's also treated as a\n> tentative signal from the application itself, about the data itself.\n> Which is: this page looks \"settled\" -- it may never be updated again,\n> but if there is an update it likely won't change too much about the\n> whole page.\n\nWhile I agree that there's some case to be made for leaving settled\npages well enough alone, your criterion for settled seems pretty much\naccidental. Imagine a system where there are two applications running,\nA and B. Application A runs all the time and all the transactions\nwhich it performs are short. Therefore, when a certain page is not\nmodified by transaction A for a short period of time, the page will\nbecome all-visible and will be considered settled. Application B runs\nonce a month and performs various transactions all of which are long,\nperhaps on a completely separate set of tables. While application B is\nrunning, pages take longer to settle not only for application B but\nalso for application A. 
It doesn't make sense to say that the\napplication is in control of the behavior when, in reality, it may be\nsome completely separate application that is controlling the behavior.\n\n> The application is in charge, really -- not VACUUM. This is already\n> the case, whether we like it or not. VACUUM needs to learn to live in\n> that reality, rather than fighting it. When VACUUM considers a page\n> settled, and the physical page still has a relatively large amount of\n> free space (say 45% of BLCKSZ, a borderline case in the new FSM\n> patch), \"losing\" so much free space certainly is unappealing. We set\n> the free space to 0 in the free space map all the same, because we're\n> cutting our losses at that point. While the exact threshold I've\n> proposed is tentative, the underlying theory seems pretty sound to me.\n> The BLCKSZ/2 cutoff (and the way that it extends the general rules for\n> whole-page freezing) is intended to catch pages that are qualitatively\n> different, as well as quantitatively different. It is a balancing act,\n> between not wasting space, and the risk of systemic problems involving\n> excessive amounts of non-HOT updates that must move a successor\n> version to another page.\n\nI can see that this could have significant advantages under some\ncircumstances. But I think it could easily be far worse under other\ncircumstances. I mean, you can have workloads where you do some amount\nof read-write work on a table and then go read only and sequential\nscan it an infinite number of times. An algorithm that causes the\ntable to be smaller at the point where we switch to read-only\noperations, even by a modest amount, wins infinitely over anything\nelse. But even if you have no change in the access pattern, is it a\ngood idea to allow the table to be, say, 5% larger if it means that\ncorrelated data is colocated? In general, probably yes. 
If that means\nthat the table fails to fit in shared_buffers instead of fitting, no.\nIf that means that the table fails to fit in the OS cache instead of\nfitting, definitely no.\n\nAnd to me, that kind of effect is why it's hard to gain much\nconfidence in regards to stuff like this via laboratory testing. I\nmean, I'm glad you're doing such tests. But in a laboratory test, you\ntend not to have things like a sudden and complete change in the\nworkload, or a random other application sometimes sharing the machine,\nor only being on the edge of running out of memory. I think in general\npeople tend to avoid such things in benchmarking scenarios, but even\nif you include stuff like this, it's hard to know what to include that\nwould be representative of real life, because just about anything\n*could* happen in real life.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Feb 2022 14:45:36 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 4, 2022 at 2:45 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> While I agree that there's some case to be made for leaving settled\n> pages well enough alone, your criterion for settled seems pretty much\n> accidental.\n\nI fully admit that I came up with the FSM heuristic with TPC-C in\nmind. But you have to start somewhere.\n\nFortunately, the main benefit of this patch series (avoiding the\nfreeze cliff during anti-wraparound VACUUMs, often avoiding\nanti-wraparound VACUUMs altogether) doesn't depend on the experimental\nFSM patch at all. I chose to post that now because it seemed to help\nwith my more general point about qualitatively different pages, and\nfreezing at the page level.\n\n> Imagine a system where there are two applications running,\n> A and B. Application A runs all the time and all the transactions\n> which it performs are short. 
Therefore, when a certain page is not\n> modified by transaction A for a short period of time, the page will\n> become all-visible and will be considered settled. Application B runs\n> once a month and performs various transactions all of which are long,\n> perhaps on a completely separate set of tables. While application B is\n> running, pages take longer to settle not only for application B but\n> also for application A. It doesn't make sense to say that the\n> application is in control of the behavior when, in reality, it may be\n> some completely separate application that is controlling the behavior.\n\nApplication B will already block pruning by VACUUM operations against\napplication A's table, and so effectively blocks recording of the\nresultant free space in the FSM in your scenario. And so application A\nand application B should be considered the same application already.\nThat's just how VACUUM works.\n\nVACUUM isn't a passive observer of the system -- it's another\nparticipant. It both influences and is influenced by almost everything\nelse in the system.\n\n> I can see that this could have significant advantages under some\n> circumstances. But I think it could easily be far worse under other\n> circumstances. I mean, you can have workloads where you do some amount\n> of read-write work on a table and then go read only and sequential\n> scan it an infinite number of times. An algorithm that causes the\n> table to be smaller at the point where we switch to read-only\n> operations, even by a modest amount, wins infinitely over anything\n> else. But even if you have no change in the access pattern, is it a\n> good idea to allow the table to be, say, 5% larger if it means that\n> correlated data is colocated? In general, probably yes. 
If that means\n> that the table fails to fit in shared_buffers instead of fitting, no.\n> If that means that the table fails to fit in the OS cache instead of\n> fitting, definitely no.\n\n5% larger seems like a lot more than would be typical, based on what\nI've seen. I don't think that the regression in this scenario can be\ncharacterized as \"infinitely worse\", or anything like it. On a long\nenough timeline, the potential upside of something like this is nearly\nunlimited -- it could avoid a huge amount of write amplification. But\nthe potential downside seems to be small and fixed -- which is the\npoint (bounding the downside). The mere possibility of getting that\nbig benefit (avoiding the costs from heap fragmentation) is itself a\nbenefit, even when it turns out not to pay off in your particular\ncase. It can be seen as insurance.\n\n> And to me, that kind of effect is why it's hard to gain much\n> confidence in regards to stuff like this via laboratory testing. I\n> mean, I'm glad you're doing such tests. But in a laboratory test, you\n> tend not to have things like a sudden and complete change in the\n> workload, or a random other application sometimes sharing the machine,\n> or only being on the edge of running out of memory. 
I think in general\n> people tend to avoid such things in benchmarking scenarios, but even\n> if you include stuff like this, it's hard to know what to include that\n> would be representative of real life, because just about anything\n> *could* happen in real life.\n\nThen what could you have confidence in?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 4 Feb 2022 15:30:37 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 4, 2022 at 3:31 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Application B will already block pruning by VACUUM operations against\n> application A's table, and so effectively blocks recording of the\n> resultant free space in the FSM in your scenario. And so application A\n> and application B should be considered the same application already.\n> That's just how VACUUM works.\n\nSure ... but that also sucks. If we consider application A and\napplication B to be the same application, then we're basing our\ndecision about what to do on information that is inaccurate.\n\n> 5% larger seems like a lot more than would be typical, based on what\n> I've seen. I don't think that the regression in this scenario can be\n> characterized as \"infinitely worse\", or anything like it. On a long\n> enough timeline, the potential upside of something like this is nearly\n> unlimited -- it could avoid a huge amount of write amplification. But\n> the potential downside seems to be small and fixed -- which is the\n> point (bounding the downside). The mere possibility of getting that\n> big benefit (avoiding the costs from heap fragmentation) is itself a\n> benefit, even when it turns out not to pay off in your particular\n> case. It can be seen as insurance.\n\nI don't see it that way. There are cases where avoiding writes is\nbetter, and cases where trying to cram everything into the fewest\npossible pages is better. 
With the right test case you can make either\nstrategy look superior. What I think your test case has going for it\nis that it is similar to something that a lot of people, really a ton\nof people, actually do with PostgreSQL. However, it's not going to be\nan accurate model of what everybody does, and therein lies some\nelement of danger.\n\n> Then what could you have confidence in?\n\nReal-world experience. Which is hard to get if we don't ever commit\nany patches, but a good argument for (a) having them tested by\nmultiple different hackers who invent test cases independently and (b)\nsome configurability where we can reasonably include it, so that if\nanyone does experience problems they have an escape.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Feb 2022 16:18:47 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 4, 2022 at 4:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Feb 4, 2022 at 3:31 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Application B will already block pruning by VACUUM operations against\n> > application A's table, and so effectively blocks recording of the\n> > resultant free space in the FSM in your scenario. And so application A\n> > and application B should be considered the same application already.\n> > That's just how VACUUM works.\n>\n> Sure ... but that also sucks. If we consider application A and\n> application B to be the same application, then we're basing our\n> decision about what to do on information that is inaccurate.\n\nI agree that it sucks, but I don't think that it's particularly\nrelevant to the FSM prototype patch that I included with v7 of the\npatch series. 
A heap page cannot be considered \"closed\" (either in the\nspecific sense from the patch, or in any informal sense) when it has\nrecently dead tuples.\n\nAt some point we should invent a fallback path for pruning, that\nmigrates recently dead tuples to some other subsidiary structure,\nretaining only forwarding information in the heap page. But even that\nwon't change what I just said about closed pages (it'll just make it\neasier to return and fix things up later on).\n\n> I don't see it that way. There are cases where avoiding writes is\n> better, and cases where trying to cram everything into the fewest\n> possible ages is better. With the right test case you can make either\n> strategy look superior.\n\nThe cost of reads is effectively much lower than writes with modern\nSSDs, in TCO terms. Plus when a FSM strategy like the one from the\npatch does badly according to a naive measure such as total table\nsize, that in itself doesn't mean that we do worse with reads. In\nfact, it's quite the opposite.\n\nThe benchmark showed that v7 of the patch did very slightly worse on\noverall space utilization, but far, far better on reads. In fact, the\nbenefits for reads were far in excess of any efficiency gains for\nwrites/with WAL. The greatest bottleneck is almost always latency on\nmodern hardware [1]. It follows that keeping logically related data\ngrouped together is crucial. Far more important than potentially using\nvery slightly more space.\n\nThe story I wanted to tell with the FSM patch was about open and\nclosed pages being the right long term direction. More generally, we\nshould emphasize managing page-level costs, and deemphasize managing\ntuple-level costs, which are much less meaningful.\n\n> What I think your test case has going for it\n> is that it is similar to something that a lot of people, really a ton\n> of people, actually do with PostgreSQL. 
However, it's not going to be\n> an accurate model of what everybody does, and therein lies some\n> element of danger.\n\nNo question -- agreed.\n\n> > Then what could you have confidence in?\n>\n> Real-world experience. Which is hard to get if we don't ever commit\n> any patches, but a good argument for (a) having them tested by\n> multiple different hackers who invent test cases independently and (b)\n> some configurability where we can reasonably include it, so that if\n> anyone does experience problems they have an escape.\n\nI agree.\n\n[1] https://dl.acm.org/doi/10.1145/1022594.1022596\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 4 Feb 2022 17:26:46 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Wed, 15 Dec 2021 at 15:30, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> My emphasis here has been on making non-aggressive VACUUMs *always*\n> advance relfrozenxid, outside of certain obvious edge cases. And so\n> with all the patches applied, up to and including the opportunistic\n> freezing patch, every autovacuum of every table manages to advance\n> relfrozenxid during benchmarking -- usually to a fairly recent value.\n> I've focussed on making aggressive VACUUMs (especially anti-wraparound\n> autovacuums) a rare occurrence, for truly exceptional cases (e.g.,\n> user keeps canceling autovacuums, maybe due to automated script that\n> performs DDL). 
That has taken priority over other goals, for now.\n\nWhile I've seen all the above cases trigger anti-wraparound vacuums,\nby far the majority of the cases are not of these pathological forms.\n\nBy far the majority of anti-wraparound vacuums are triggered by tables\nthat are very large and so don't trigger regular vacuums for \"long\nperiods\" of time and consistently hit the anti-wraparound threshold\nfirst.\n\nThere's nothing limiting how long \"long periods\" is and nothing tying\nit to the rate of xid consumption. It's quite common to have some\n*very* large mostly static tables in databases that have other tables\nthat are *very* busy.\n\nThe worst I've seen is a table that took 36 hours to vacuum in a\ndatabase that consumed about a billion transactions per day... That's\nextreme but these days it's quite common to see tables that get\nanti-wraparound vacuums every week or so despite having < 1% modified\ntuples. And databases are only getting bigger and transaction rates\nfaster...\n\n\n-- \ngreg\n\n\n", "msg_date": "Fri, 4 Feb 2022 22:20:49 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 4, 2022 at 10:21 PM Greg Stark <stark@mit.edu> wrote:\n> On Wed, 15 Dec 2021 at 15:30, Peter Geoghegan <pg@bowt.ie> wrote:\n> > My emphasis here has been on making non-aggressive VACUUMs *always*\n> > advance relfrozenxid, outside of certain obvious edge cases. And so\n> > with all the patches applied, up to and including the opportunistic\n> > freezing patch, every autovacuum of every table manages to advance\n> > relfrozenxid during benchmarking -- usually to a fairly recent value.\n> > I've focussed on making aggressive VACUUMs (especially anti-wraparound\n> > autovacuums) a rare occurrence, for truly exceptional cases (e.g.,\n> > user keeps canceling autovacuums, maybe due to automated script that\n> > performs DDL). 
That has taken priority over other goals, for now.\n>\n> While I've seen all the above cases triggering anti-wraparound cases\n> by far the majority of the cases are not of these pathological forms.\n\nRight - it's practically inevitable that you'll need an\nanti-wraparound VACUUM to advance relfrozenxid right now. Technically\nit's possible to advance relfrozenxid in any VACUUM, but in practice\nit just never happens on a large table. You only need to get unlucky\nwith one heap page, either by failing to get a cleanup lock, or (more\nlikely) by setting even one single page all-visible but not all-frozen\njust once (once in any VACUUM that takes place between anti-wraparound\nVACUUMs).\n\n> By far the majority of anti-wraparound vacuums are triggered by tables\n> that are very large and so don't trigger regular vacuums for \"long\n> periods\" of time and consistently hit the anti-wraparound threshold\n> first.\n\nautovacuum_vacuum_insert_scale_factor can help with this on 13 and 14,\nbut only if you tune autovacuum_freeze_min_age with that goal in mind.\nWhich probably doesn't happen very often.\n\n> There's nothing limiting how long \"long periods\" is and nothing tying\n> it to the rate of xid consumption. It's quite common to have some\n> *very* large mostly static tables in databases that have other tables\n> that are *very* busy.\n>\n> The worst I've seen is a table that took 36 hours to vacuum in a\n> database that consumed about a billion transactions per day... That's\n> extreme but these days it's quite common to see tables that get\n> anti-wraparound vacuums every week or so despite having < 1% modified\n> tuples. And databases are only getting bigger and transaction rates\n> faster...\n\nSounds very much like what I've been calling the freezing cliff. An\nanti-wraparound VACUUM throws things off by suddenly dirtying many\nmore pages than the expected amount for a VACUUM against the table,\ndespite there being no change in workload characteristics. 
If you just\nhad to remove the dead tuples in such a table, then it probably\nwouldn't matter if it happened earlier than expected.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 4 Feb 2022 22:44:47 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 4, 2022 at 10:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Right - it's practically inevitable that you'll need an\n> anti-wraparound VACUUM to advance relfrozenxid right now. Technically\n> it's possible to advance relfrozenxid in any VACUUM, but in practice\n> it just never happens on a large table. You only need to get unlucky\n> with one heap page, either by failing to get a cleanup lock, or (more\n> likely) by setting even one single page all-visible but not all-frozen\n> just once (once in any VACUUM that takes place between anti-wraparound\n> VACUUMs).\n\nMinor correction: That's a slight exaggeration, since we won't skip\ngroups of all-visible pages that don't exceed SKIP_PAGES_THRESHOLD\nblocks (32 blocks).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 4 Feb 2022 23:20:50 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 4, 2022 at 10:21 PM Greg Stark <stark@mit.edu> wrote:\n> By far the majority of anti-wraparound vacuums are triggered by tables\n> that are very large and so don't trigger regular vacuums for \"long\n> periods\" of time and consistently hit the anti-wraparound threshold\n> first.\n\nThat's interesting, because my experience is different. Most of the\ntime when I get asked to look at a system, it turns out that there is\na prepared transaction or a forgotten replication slot and nobody\nnoticed until the system hit the wraparound threshold. 
Or occasionally\na long-running transaction or a failing/stuck vacuum that has the same\neffect.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Feb 2022 09:53:13 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 4, 2022 at 10:45 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > While I've seen all the above cases triggering anti-wraparound cases\n> > by far the majority of the cases are not of these pathological forms.\n>\n> Right - it's practically inevitable that you'll need an\n> anti-wraparound VACUUM to advance relfrozenxid right now. Technically\n> it's possible to advance relfrozenxid in any VACUUM, but in practice\n> it just never happens on a large table. You only need to get unlucky\n> with one heap page, either by failing to get a cleanup lock, or (more\n> likely) by setting even one single page all-visible but not all-frozen\n> just once (once in any VACUUM that takes place between anti-wraparound\n> VACUUMs).\n\nBut ... if I'm not mistaken, in the kind of case that Greg is\ndescribing, relfrozenxid will be advanced exactly as often as it is\ntoday. That's because, if VACUUM is only ever getting triggered by XID\nage advancement and not by bloat, there's no opportunity for your\npatch set to advance relfrozenxid any sooner than we're doing now. 
So\nI think that people in this kind of situation will potentially be\nhelped or hurt by other things the patch set does, but the eager\nrelfrozenxid stuff won't make any difference for them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Feb 2022 10:07:57 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Mon, Feb 7, 2022 at 10:08 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> But ... if I'm not mistaken, in the kind of case that Greg is\n> describing, relfrozenxid will be advanced exactly as often as it is\n> today.\n\nBut what happens today in a scenario like Greg's is pathological,\ndespite being fairly common (common in large DBs). It doesn't seem\ninformative to extrapolate too much from current experience for that\nreason.\n\n> That's because, if VACUUM is only ever getting triggered by XID\n> age advancement and not by bloat, there's no opportunity for your\n> patch set to advance relfrozenxid any sooner than we're doing now.\n\nWe must distinguish between:\n\n1. \"VACUUM is fundamentally never going to need to run unless it is\nforced to, just to advance relfrozenxid\" -- this applies to tables\nlike the stock and customers tables from the benchmark.\n\nand:\n\n2. \"VACUUM must sometimes run to mark newly appended heap pages\nall-visible, and maybe to also remove dead tuples, but not that often\n-- and yet we currently only get expensive and inconveniently timed\nanti-wraparound VACUUMs, no matter what\" -- this applies to all the\nother big tables in the benchmark, in particular to the orders and\norder lines tables, but also to simpler cases like pgbench_history.\n\nAs I've said a few times now, the patch doesn't change anything for 1.\nBut Greg's problem tables very much sound like they're from category\n2. 
And what we see with the master branch for such tables is that they\nalways get anti-wraparound VACUUMs, past a certain size (depends on\nthings like exact XID rate and VACUUM settings, the insert-driven\nautovacuum scheduling stuff matters). While the patch never reaches\nthat point in practice, during my testing -- and doesn't come close.\n\nIt is true that in theory, as the size of ones of these \"category 2\"\ntables tends to infinity, the patch ends up behaving the same as\nmaster anyway. But I'm pretty sure that that usually doesn't matter at\nall, or matters less than you'd think. As I emphasized when presenting\nthe recent v7 TPC-C benchmark, neither of the two \"TPC-C big problem\ntables\" (which are particularly interesting/tricky examples of tables\nfrom category 2) come close to getting an anti-wraparound VACUUM\n(plus, as I said in the same email, wouldn't matter if they did).\n\n> So I think that people in this kind of situation will potentially be\n> helped or hurt by other things the patch set does, but the eager\n> relfrozenxid stuff won't make any difference for them.\n\nTo be clear, I think it would if everything was in place, including\nthe basic relfrozenxid advancement thing, plus the new freezing stuff\n(though you wouldn't need the experimental FSM thing to get this\nbenefit).\n\nHere is a thought experiment that may make the general idea a bit clearer:\n\nImagine I reran the same benchmark as before, with the same settings,\nand the expectation that everything would be the same as first time\naround for the patch series. But to make things more interesting, this\ntime I add an adversarial element: I add an adversarial gizmo that\nburns XIDs steadily, without doing any useful work. This gizmo doubles\nthe rate of XID consumption for the database as a whole, perhaps by\ncalling \"SELECT txid_current()\" in a loop, followed by a timed sleep\n(with a delay chosen with the goal of doubling XID consumption). 
I\nimagine that this would also burn CPU cycles, but probably not enough\nto make more than a noise level impact -- so we're severely stressing\nthe implementation by adding this gizmo, but the stress is precisely\ntargeted at XID consumption and related implementation details. It's a\npretty clean experiment. What happens now?\n\nI believe (though haven't checked for myself) that nothing important\nwould change. We'd still see the same VACUUM operations occur at\napproximately the same times (relative to the start of the benchmark)\nthat we saw with the original benchmark, and each VACUUM operation\nwould do approximately the same amount of physical work on each\noccasion. Of course, the autovacuum log output would show that the\nOldestXmin for each individual VACUUM operation had larger values than\nfirst time around for this newly initdb'd TPC-C database (purely as a\nconsequence of the XID burning gizmo), but it would *also* show\n*concomitant* increases for our newly set relfrozenxid. The system\nshould therefore hardly behave differently at all compared to the\noriginal benchmark run, despite this adversarial gizmo.\n\nIt's fair to wonder: okay, but what if it was 4x, 8x, 16x? What then?\nThat does get a bit more complicated, and we should get into why that\nis. But for now I'll just say that I think that even that kind of\nextreme would make much less difference than you might think -- since\nrelfrozenxid advancement has been qualitatively improved by the patch\nseries. It is especially likely that nothing would change if you were\nwilling to increase autovacuum_freeze_max_age to get a bit more\nbreathing room -- room to allow the autovacuums to run at their\n\"natural\" times. You wouldn't necessarily have to go too far -- the\nextra breathing room from increasing autovacuum_freeze_max_age buys\nmore wall clock time *between* any two successive \"naturally timed\nautovacuums\". Again, a virtuous cycle.\n\nDoes that make sense? 
It's pretty subtle, admittedly, and you no doubt\nhave (very reasonable) concerns about the extremes, even if you accept\nall that. I just want to get the general idea across here, as a\nstarting point for further discussion.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 7 Feb 2022 11:42:34 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Mon, Feb 7, 2022 at 11:43 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > That's because, if VACUUM is only ever getting triggered by XID\n> > age advancement and not by bloat, there's no opportunity for your\n> > patch set to advance relfrozenxid any sooner than we're doing now.\n>\n> We must distinguish between:\n>\n> 1. \"VACUUM is fundamentally never going to need to run unless it is\n> forced to, just to advance relfrozenxid\" -- this applies to tables\n> like the stock and customers tables from the benchmark.\n>\n> and:\n>\n> 2. \"VACUUM must sometimes run to mark newly appended heap pages\n> all-visible, and maybe to also remove dead tuples, but not that often\n> -- and yet we current only get expensive and inconveniently timed\n> anti-wraparound VACUUMs, no matter what\" -- this applies to all the\n> other big tables in the benchmark, in particular to the orders and\n> order lines tables, but also to simpler cases like pgbench_history.\n\nIt's not really very understandable for me when you refer to the way\ntable X behaves in Y benchmark, because I haven't studied that in\nenough detail to know. If you say things like insert-only table, or a\ncontinuous-random-updates table, or whatever the case is, it's a lot\neasier to wrap my head around it.\n\n> Does that make sense? It's pretty subtle, admittedly, and you no doubt\n> have (very reasonable) concerns about the extremes, even if you accept\n> all that. 
I just want to get the general idea across here, as a\n> starting point for further discussion.\n\nNot really. I think you *might* be saying tables which currently get\nonly wraparound vacuums will end up getting other kinds of vacuums\nwith your patch because things will improve enough for other tables in\nthe system that they will be able to get more attention than they do\ncurrently. But I'm not sure I am understanding you correctly, and even\nif I am I don't understand why that would be so, and even if it is I\nthink it doesn't help if essentially all the tables in the system are\nsuffering from the problem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Feb 2022 12:20:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Mon, Feb 7, 2022 at 12:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Feb 7, 2022 at 11:43 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > That's because, if VACUUM is only ever getting triggered by XID\n> > > age advancement and not by bloat, there's no opportunity for your\n> > > patch set to advance relfrozenxid any sooner than we're doing now.\n> >\n> > We must distinguish between:\n> >\n> > 1. \"VACUUM is fundamentally never going to need to run unless it is\n> > forced to, just to advance relfrozenxid\" -- this applies to tables\n> > like the stock and customers tables from the benchmark.\n> >\n> > and:\n> >\n> > 2. 
\"VACUUM must sometimes run to mark newly appended heap pages\n> > all-visible, and maybe to also remove dead tuples, but not that often\n> > -- and yet we current only get expensive and inconveniently timed\n> > anti-wraparound VACUUMs, no matter what\" -- this applies to all the\n> > other big tables in the benchmark, in particular to the orders and\n> > order lines tables, but also to simpler cases like pgbench_history.\n>\n> It's not really very understandable for me when you refer to the way\n> table X behaves in Y benchmark, because I haven't studied that in\n> enough detail to know. If you say things like insert-only table, or a\n> continuous-random-updates table, or whatever the case is, it's a lot\n> easier to wrap my head around it.\n\nWhat I've called category 2 tables are the vast majority of big tables\nin practice. They include pure append-only tables, but also tables\nthat grow and grow from inserts, but also have some updates. The point\nof the TPC-C order + order lines examples was to show how broad the\ncategory really is. And how mixtures of inserts and bloat from updates\non one single table confuse the implementation in general.\n\n> > Does that make sense? It's pretty subtle, admittedly, and you no doubt\n> > have (very reasonable) concerns about the extremes, even if you accept\n> > all that. I just want to get the general idea across here, as a\n> > starting point for further discussion.\n>\n> Not really. 
I think you *might* be saying tables which currently get\n> only wraparound vacuums will end up getting other kinds of vacuums\n> with your patch because things will improve enough for other tables in\n> the system that they will be able to get more attention than they do\n> currently.\n\nYes, I am.\n\n> But I'm not sure I am understanding you correctly, and even\n> if I am I don't understand why that would be so, and even if it is I\n> think it doesn't help if essentially all the tables in the system are\n> suffering from the problem.\n\nWhen I say \"relfrozenxid advancement has been qualitatively improved\nby the patch\", what I mean is that we are much closer to a rate of\nrelfrozenxid advancement that is far closer to the theoretically\noptimal rate for our current design, with freezing and with 32-bit\nXIDs, and with the invariants for freezing.\n\nConsider the extreme case, and generalize. In the simple append-only\ntable case, it is most obvious. The final relfrozenxid is very close\nto OldestXmin (only tiny noise level differences appear), regardless\nof XID consumption by the system in general, and even within the\nappend-only table in particular. Other cases are somewhat trickier,\nbut have roughly the same quality, to a surprising degree. Lots of\nthings that never really should have affected relfrozenxid to begin\nwith do not, for the first time.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 7 Feb 2022 13:26:32 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Sat, Jan 29, 2022 at 8:42 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v7, a revision that overhauls the algorithm that decides\n> what to freeze. I'm now calling it block-driven freezing in the commit\n> message. 
Also included is a new patch, that makes VACUUM record zero\n> free space in the FSM for an all-visible page, unless the total amount\n> of free space happens to be greater than one half of BLCKSZ.\n\nI pushed the earlier refactoring and instrumentation patches today.\n\nAttached is v8. No real changes -- just a rebased version.\n\nIt will be easier to benchmark and test the page-driven freezing stuff\nnow, since the master/baseline case will now output instrumentation\nshowing how relfrozenxid has been advanced (if at all) -- whether (and\nto what extent) each VACUUM operation advances relfrozenxid can now be\ndirectly compared, just by monitoring the log_autovacuum_min_duration\noutput for a given table over time.\n\n--\nPeter Geoghegan", "msg_date": "Fri, 11 Feb 2022 20:30:30 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 11, 2022 at 8:30 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v8. No real changes -- just a rebased version.\n\nConcerns about my general approach to this project (and even the\nPostgres 14 VACUUM work) were expressed by Robert and Andres over on\nthe \"Nonrandom scanned_pages distorts pg_class.reltuples set by\nVACUUM\" thread. Some of what was said honestly shocked me. It now\nseems unwise to pursue this project on my original timeline. I even\nthought about shelving it indefinitely (which is still on the table).\n\nI propose the following compromise: the least contentious patch alone\nwill be in scope for Postgres 15, while the other patches will not be.\nI'm referring to the first patch from v8, which adds dynamic tracking\nof the oldest extant XID in each heap table, in order to be able to\nuse it as our new relfrozenxid. I can't imagine that I'll have\ndifficulty convincing Andres of the merits of this idea, for one,\nsince it was his idea in the first place. 
It makes a lot of sense,\nindependent of any change to how and when we freeze.\n\nThe first patch is tricky, but at least it won't require elaborate\nperformance validation. It doesn't change any of the basic performance\ncharacteristics of VACUUM. It sometimes allows us to advance\nrelfrozenxid to a value beyond FreezeLimit (typically only possible in\nan aggressive VACUUM), which is an intrinsic good. If it isn't\neffective then the overhead seems very unlikely to be noticeable. It's\npretty much a strictly additive improvement.\n\nAre there any objections to this plan?\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 18 Feb 2022 12:41:30 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 18, 2022 at 3:41 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Concerns about my general approach to this project (and even the\n> Postgres 14 VACUUM work) were expressed by Robert and Andres over on\n> the \"Nonrandom scanned_pages distorts pg_class.reltuples set by\n> VACUUM\" thread. Some of what was said honestly shocked me. It now\n> seems unwise to pursue this project on my original timeline. I even\n> thought about shelving it indefinitely (which is still on the table).\n>\n> I propose the following compromise: the least contentious patch alone\n> will be in scope for Postgres 15, while the other patches will not be.\n> I'm referring to the first patch from v8, which adds dynamic tracking\n> of the oldest extant XID in each heap table, in order to be able to\n> use it as our new relfrozenxid. I can't imagine that I'll have\n> difficulty convincing Andres of the merits of this idea, for one,\n> since it was his idea in the first place. It makes a lot of sense,\n> independent of any change to how and when we freeze.\n>\n> The first patch is tricky, but at least it won't require elaborate\n> performance validation. 
It doesn't change any of the basic performance\n> characteristics of VACUUM. It sometimes allows us to advance\n> relfrozenxid to a value beyond FreezeLimit (typically only possible in\n> an aggressive VACUUM), which is an intrinsic good. If it isn't\n> effective then the overhead seems very unlikely to be noticeable. It's\n> pretty much a strictly additive improvement.\n>\n> Are there any objections to this plan?\n\nI really like the idea of reducing the scope of what is being changed\nhere, and I agree that eagerly advancing relfrozenxid carries much\nless risk than the other changes.\n\nI'd like to have a clearer idea of exactly what is in each of the\nremaining patches before forming a final opinion.\n\nWhat's tricky about 0001? Does it change any other behavior, either as\na necessary component of advancing relfrozenxid more eagerly, or\notherwise?\n\nIf there's a way you can make the precise contents of 0002 and 0003\nmore clear, I would like that, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 18 Feb 2022 15:54:19 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 18, 2022 at 12:54 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I'd like to have a clearer idea of exactly what is in each of the\n> remaining patches before forming a final opinion.\n\nGreat.\n\n> What's tricky about 0001? Does it change any other behavior, either as\n> a necessary component of advancing relfrozenxid more eagerly, or\n> otherwise?\n\nIt does not change any other behavior. It's totally mechanical.\n\n0001 is tricky in the sense that there are a lot of fine details, and\nif you get any one of them wrong the result might be a subtle bug. 
For\nexample, the heap_tuple_needs_freeze() code path is only used when we\ncannot get a cleanup lock, which is rare -- and some of the branches\nwithin the function are relatively rare themselves. The obvious\nconcern is: What if some detail of how we track the new relfrozenxid\nvalue (and new relminmxid value) in this seldom-hit codepath is just\nwrong, in whatever way we didn't think of?\n\nOn the other hand, we must already be precise in almost the same way\nwithin heap_tuple_needs_freeze() today -- it's not all that different\n(we currently need to avoid leaving any XIDs < FreezeLimit behind,\nwhich isn't made that less complicated by the fact that it's a static\nXID cutoff). Plus, we have experience with bugs like this. There was\nhardening added to catch stuff like this back in 2017, following the\n\"freeze the dead\" bug.\n\n> If there's a way you can make the precise contents of 0002 and 0003\n> more clear, I would like that, too.\n\nThe really big one is 0002 -- even 0003 (the FSM PageIsAllVisible()\nthing) wasn't on the table before now. 0002 is the patch that changes\nthe basic criteria for freezing, making it block-based rather than\nbased on the FreezeLimit cutoff (barring edge cases that are important\nfor correctness, but shouldn't noticeably affect freezing overhead).\n\nThe single biggest practical improvement from 0002 is that it\neliminates what I've called the freeze cliff, which is where many old\ntuples (much older than FreezeLimit/vacuum_freeze_min_age) must be\nfrozen all at once, in a balloon payment during an eventual aggressive\nVACUUM. Although it's easy to see that that could be useful, it is\nharder to justify (much harder) than anything else. Because we're\nfreezing more eagerly overall, we're also bound to do more freezing\nwithout benefit in certain cases. 
Although I think that this can be\njustified as the cost of doing business, that's a hard argument to\nmake.\n\nIn short, 0001 is mechanically tricky, but easy to understand at a\nhigh level. Whereas 0002 is mechanically simple, but tricky to\nunderstand at a high level (and therefore far trickier than 0001\noverall).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 18 Feb 2022 13:09:45 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 18, 2022 at 4:10 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> It does not change any other behavior. It's totally mechanical.\n>\n> 0001 is tricky in the sense that there are a lot of fine details, and\n> if you get any one of them wrong the result might be a subtle bug. For\n> example, the heap_tuple_needs_freeze() code path is only used when we\n> cannot get a cleanup lock, which is rare -- and some of the branches\n> within the function are relatively rare themselves. The obvious\n> concern is: What if some detail of how we track the new relfrozenxid\n> value (and new relminmxid value) in this seldom-hit codepath is just\n> wrong, in whatever way we didn't think of?\n\nRight. I think we have no choice but to accept such risks if we want\nto make any progress here, and every patch carries them to some\ndegree. I hope that someone else will review this patch in more depth\nthan I have just now, but what I notice reading through it is that\nsome of the comments seem pretty opaque. For instance:\n\n+ * Also maintains *NewRelfrozenxid and *NewRelminmxid, which are the current\n+ * target relfrozenxid and relminmxid for the relation. Assumption is that\n\n\"maintains\" is fuzzy. I think you should be saying something much more\nexplicit, and the thing you are saying should make it clear that these\narguments are input-output arguments: i.e. 
the caller must set them\ncorrectly before calling this function, and they will be updated by\nthe function. I don't think you have to spell all of that out in every\nplace where this comes up in the patch, but it needs to be clear from\nwhat you do say. For example, I would be happier with a comment that\nsaid something like \"Every call to this function will either set\nHEAP_XMIN_FROZEN in the xl_heap_freeze_tuple struct passed as an\nargument, or else reduce *NewRelfrozenxid to the xmin of the tuple if\nit is currently newer than that. Thus, after a series of calls to this\nfunction, *NewRelfrozenxid represents a lower bound on unfrozen xmin\nvalues in the tuples examined. Before calling this function, caller\nshould initialize *NewRelfrozenxid to <something>.\"\n\n+ * Changing nothing, so might have to ratchet\nback NewRelminmxid,\n+ * NewRelfrozenxid, or both together\n\nThis comment I like.\n\n+ * New multixact might have remaining XID older than\n+ * NewRelfrozenxid\n\nThis one's good, too.\n\n+ * Also maintains *NewRelfrozenxid and *NewRelminmxid, which are the current\n+ * target relfrozenxid and relminmxid for the relation. Assumption is that\n+ * caller will never freeze any of the XIDs from the tuple, even when we say\n+ * that they should. If caller opts to go with our recommendation to freeze,\n+ * then it must account for the fact that it shouldn't trust how we've set\n+ * NewRelfrozenxid/NewRelminmxid. 
(In practice aggressive VACUUMs always take\n+ * our recommendation because they must, and non-aggressive VACUUMs always opt\n+ * to not freeze, preferring to ratchet back NewRelfrozenxid instead).\n\nI don't understand this one.\n\n+ * (Actually, we maintain NewRelminmxid differently here, because we\n+ * assume that XIDs that should be frozen according to cutoff_xid won't\n+ * be, whereas heap_prepare_freeze_tuple makes the opposite assumption.)\n\nThis one either.\n\nI haven't really grokked exactly what is happening in\nheap_tuple_needs_freeze yet, and may not have time to study it further\nin the near future. Not saying it's wrong, although improving the\ncomments above would likely help me out.\n\n> > If there's a way you can make the precise contents of 0002 and 0003\n> > more clear, I would like that, too.\n>\n> The really big one is 0002 -- even 0003 (the FSM PageIsAllVisible()\n> thing) wasn't on the table before now. 0002 is the patch that changes\n> the basic criteria for freezing, making it block-based rather than\n> based on the FreezeLimit cutoff (barring edge cases that are important\n> for correctness, but shouldn't noticeably affect freezing overhead).\n>\n> The single biggest practical improvement from 0002 is that it\n> eliminates what I've called the freeze cliff, which is where many old\n> tuples (much older than FreezeLimit/vacuum_freeze_min_age) must be\n> frozen all at once, in a balloon payment during an eventual aggressive\n> VACUUM. Although it's easy to see that that could be useful, it is\n> harder to justify (much harder) than anything else. Because we're\n> freezing more eagerly overall, we're also bound to do more freezing\n> without benefit in certain cases. Although I think that this can be\n> justified as the cost of doing business, that's a hard argument to\n> make.\n\nYou've used the term \"freezing cliff\" repeatedly in earlier emails,\nand this is the first time I've been able to understand what you\nmeant. 
I'm glad I do, now.\n\nBut can you describe the algorithm that 0002 uses to accomplish this\nimprovement? Like \"if it sees that the page meets criteria X, then it\nfreezes all tuples on the page, else if it sees that that individual\ntuples on the page meet criteria Y, then it freezes just those.\" And\nlike explain what of that is same/different vs. now.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 18 Feb 2022 16:56:37 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-02-18 13:09:45 -0800, Peter Geoghegan wrote:\n> 0001 is tricky in the sense that there are a lot of fine details, and\n> if you get any one of them wrong the result might be a subtle bug. For\n> example, the heap_tuple_needs_freeze() code path is only used when we\n> cannot get a cleanup lock, which is rare -- and some of the branches\n> within the function are relatively rare themselves. The obvious\n> concern is: What if some detail of how we track the new relfrozenxid\n> value (and new relminmxid value) in this seldom-hit codepath is just\n> wrong, in whatever way we didn't think of?\n\nI think it'd be good to add a few isolationtest cases for the\ncan't-get-cleanup-lock paths. I think it shouldn't be hard using cursors. 
The\nslightly harder part is verifying that VACUUM did something reasonable, but\nthat still should be doable?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 18 Feb 2022 14:11:17 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-02-18 15:54:19 -0500, Robert Haas wrote:\n> > Are there any objections to this plan?\n> \n> I really like the idea of reducing the scope of what is being changed\n> here, and I agree that eagerly advancing relfrozenxid carries much\n> less risk than the other changes.\n\nSounds good to me too!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 18 Feb 2022 14:11:45 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Fri, Feb 18, 2022 at 1:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> + * Also maintains *NewRelfrozenxid and *NewRelminmxid, which are the current\n> + * target relfrozenxid and relminmxid for the relation. Assumption is that\n>\n> \"maintains\" is fuzzy. I think you should be saying something much more\n> explicit, and the thing you are saying should make it clear that these\n> arguments are input-output arguments: i.e. the caller must set them\n> correctly before calling this function, and they will be updated by\n> the function.\n\nMakes sense.\n\n> I don't think you have to spell all of that out in every\n> place where this comes up in the patch, but it needs to be clear from\n> what you do say. For example, I would be happier with a comment that\n> said something like \"Every call to this function will either set\n> HEAP_XMIN_FROZEN in the xl_heap_freeze_tuple struct passed as an\n> argument, or else reduce *NewRelfrozenxid to the xmin of the tuple if\n> it is currently newer than that. 
Thus, after a series of calls to this\n> function, *NewRelfrozenxid represents a lower bound on unfrozen xmin\n> values in the tuples examined. Before calling this function, caller\n> should initialize *NewRelfrozenxid to <something>.\"\n\nWe have to worry about XIDs from MultiXacts (and xmax values more\ngenerally). And we have to worry about the case where we start out\nwith only xmin frozen (by an earlier VACUUM), and then have to freeze\nxmax too. I believe that we have to generally consider xmin and xmax\nindependently. For example, we cannot ignore xmax, just because we\nlooked at xmin, since in general xmin alone might have already been\nfrozen.\n\n> + * Also maintains *NewRelfrozenxid and *NewRelminmxid, which are the current\n> + * target relfrozenxid and relminmxid for the relation. Assumption is that\n> + * caller will never freeze any of the XIDs from the tuple, even when we say\n> + * that they should. If caller opts to go with our recommendation to freeze,\n> + * then it must account for the fact that it shouldn't trust how we've set\n> + * NewRelfrozenxid/NewRelminmxid. (In practice aggressive VACUUMs always take\n> + * our recommendation because they must, and non-aggressive VACUUMs always opt\n> + * to not freeze, preferring to ratchet back NewRelfrozenxid instead).\n>\n> I don't understand this one.\n>\n> + * (Actually, we maintain NewRelminmxid differently here, because we\n> + * assume that XIDs that should be frozen according to cutoff_xid won't\n> + * be, whereas heap_prepare_freeze_tuple makes the opposite assumption.)\n>\n> This one either.\n\nThe difference between the cleanup lock path (in\nlazy_scan_prune/heap_prepare_freeze_tuple) and the share lock path (in\nlazy_scan_noprune/heap_tuple_needs_freeze) is what is at issue in both\nof these confusing comment blocks, really. 
Note that cutoff_xid is the\nname that both heap_prepare_freeze_tuple and heap_tuple_needs_freeze\nhave for FreezeLimit (maybe we should rename every occurrence of\ncutoff_xid in heapam.c to FreezeLimit).\n\nAt a high level, we aren't changing the fundamental definition of an\naggressive VACUUM in any of the patches -- we still need to advance\nrelfrozenxid up to FreezeLimit in an aggressive VACUUM, just like on\nHEAD, today (we may be able to advance it *past* FreezeLimit, but\nthat's just a bonus). But in a non-aggressive VACUUM, where there is\nstill no strict requirement to advance relfrozenxid (by any amount),\nthe code added by 0001 can set relfrozenxid to any known safe value,\nwhich could either be from before FreezeLimit, or after FreezeLimit --\nalmost anything is possible (provided we respect the relfrozenxid\ninvariant, and provided we see that we didn't skip any\nall-visible-not-all-frozen pages).\n\nSince we still need to \"respect FreezeLimit\" in an aggressive VACUUM,\nthe aggressive case might need to wait for a full cleanup lock the\nhard way, having tried and failed to do it the easy way within\nlazy_scan_noprune (lazy_scan_noprune will still return false when any\ncall to heap_tuple_needs_freeze for any tuple returns true) -- same\nas on HEAD, today.\n\nAnd so the difference at issue here is: FreezeLimit/cutoff_xid only\nneeds to affect the new NewRelfrozenxid value we use for relfrozenxid in\nheap_prepare_freeze_tuple, which is involved in real freezing -- not\nin heap_tuple_needs_freeze, whose main purpose is still to help us\navoid freezing where a cleanup lock isn't immediately available. 
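In both paths, by the way, the NewRelfrozenxid bookkeeping itself is just a simple ratchet. A wraparound-naive sketch, with invented names (not the actual heapam.c code, which has to use wraparound-aware XID comparisons, and has to track xmin, xmax, and MultiXact member XIDs independently):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/*
 * Illustrative sketch only: any XID that will remain unfrozen in the
 * table pulls the caller's candidate relfrozenxid back, so that once
 * every tuple has been visited the candidate is a lower bound on all
 * remaining unfrozen XIDs.
 */
static void
track_unfrozen_xid(TransactionId xid, bool will_be_frozen,
                   TransactionId *new_relfrozenxid)
{
    if (will_be_frozen)
        return;                  /* XID won't remain, so can't hold us back */
    if (xid < *new_relfrozenxid) /* wraparound-naive comparison */
        *new_relfrozenxid = xid;
}
```

Provided every extant XID passes through a ratchet like this before VACUUM ends, the final candidate is always safe to use as the new pg_class.relfrozenxid.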
While\nthe purpose of FreezeLimit/cutoff_xid within heap_tuple_needs_freeze\nis to determine its bool return value, which will only be of interest\nto the aggressive case (which might have to get a cleanup lock and do\nit the hard way), not the non-aggressive case (where ratcheting back\nNewRelfrozenxid is generally possible, and generally leaves us with\nalmost as good of a value).\n\nIn other words: the calls to heap_tuple_needs_freeze made from\nlazy_scan_noprune are simply concerned with the page as it actually\nis, whereas the similar/corresponding calls to\nheap_prepare_freeze_tuple from lazy_scan_prune are concerned with\n*what the page will actually become*, after freezing finishes, and\nafter lazy_scan_prune is done with the page entirely (ultimately\nthe final NewRelfrozenxid value set in pg_class.relfrozenxid only has\nto be <= the oldest extant XID *at the time the VACUUM operation is\njust about to end*, not some earlier time, so \"being versus becoming\"\nis an interesting distinction for us).\n\nMaybe the way that FreezeLimit/cutoff_xid is overloaded can be fixed\nhere, to make all of this less confusing. I only now fully realized\nhow confusing all of this stuff is -- very.\n\n> I haven't really grokked exactly what is happening in\n> heap_tuple_needs_freeze yet, and may not have time to study it further\n> in the near future. Not saying it's wrong, although improving the\n> comments above would likely help me out.\n\nDefinitely needs more polishing.\n\n> You've used the term \"freezing cliff\" repeatedly in earlier emails,\n> and this is the first time I've been able to understand what you\n> meant. I'm glad I do, now.\n\nUgh. I thought that a snappy term like that would catch on quickly. Guess not!\n\n> But can you describe the algorithm that 0002 uses to accomplish this\n> improvement? 
Like \"if it sees that the page meets criteria X, then it\n> freezes all tuples on the page, else if it sees that individual\n> tuples on the page meet criteria Y, then it freezes just those.\" And\n> like explain what of that is same/different vs. now.\n\nThe mechanics themselves are quite simple (again, understanding the\nimplications is the hard part). The approach taken within 0002 is\nstill rough, to be honest, but wouldn't take long to clean up (there\nare XXX/FIXME comments about this in 0002).\n\nAs a general rule, we try to freeze all of the remaining live tuples\non a page (following pruning) together, as a group, or none at all.\nMost of the time this is triggered by our noticing that the page is\nabout to be set all-visible (but not all-frozen), and doing work\nsufficient to mark it fully all-frozen instead. Occasionally there is\nFreezeLimit to consider, which is now more of a backstop thing, used\nto make sure that we never get too far behind in terms of unfrozen\nXIDs. This is useful in part because it avoids any future\nnon-aggressive VACUUM that is fundamentally unable to advance\nrelfrozenxid (you can't skip all-visible pages if there are only\nall-frozen pages in the VM in practice).\n\nWe're generally doing a lot more freezing with 0002, but we still\nmanage to avoid freezing too much in tables like pgbench_tellers or\npgbench_branches -- tables where it makes the least sense. Such tables\nwill be updated so frequently that VACUUM is relatively unlikely to\never mark any page all-visible, avoiding the main criteria for\nfreezing implicitly. 
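In pseudocode-like C, the page-level policy amounts to something like this (a hypothetical sketch with invented names, not the actual vacuumlazy.c code; the real XID comparison must be wraparound-aware):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/*
 * Hypothetical sketch of the block-driven policy: freezing is
 * all-or-nothing at the page level -- either every remaining live
 * tuple on the page is frozen together, or none of them are.
 */
static bool
should_freeze_whole_page(bool will_become_all_visible,
                         TransactionId oldest_unfrozen_xid,
                         TransactionId freeze_limit)
{
    /*
     * Main trigger: the page is about to be set all-visible anyway, so
     * do a little extra work now and mark it all-frozen instead.
     */
    if (will_become_all_visible)
        return true;

    /*
     * Backstop: never leave XIDs < FreezeLimit behind, so that unfrozen
     * XIDs can never fall arbitrarily far behind (wraparound-naive here).
     */
    if (oldest_unfrozen_xid < freeze_limit)
        return true;

    /* Heavily updated pages (the pgbench_tellers case) are left alone. */
    return false;
}
```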
It's also unlikely that they'll ever have an XID that is so\nold as to trigger the fallback FreezeLimit-style criteria for freezing.\n\nIn practice, freezing tuples like this is generally not that expensive in\nmost tables where VACUUM freezes the majority of pages immediately\n(tables that aren't like pgbench_tellers or pgbench_branches), because\nthey're generally big tables, where the overhead of FPIs tends\nto dominate anyway (gambling that we can avoid more FPIs later on is not a\nbad gamble, as gambles go). This seems to make the overhead\nacceptable, on balance. Granted, you might be able to poke holes in\nthat argument, and reasonable people might disagree on what acceptable\nshould mean. There are many value judgements here, which makes it\ncomplicated. (On the other hand we might be able to do better if there\nwas a particularly bad case for the 0002 work, if one came to light.)\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 18 Feb 2022 16:12:00 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 18, 2022 at 2:11 PM Andres Freund <andres@anarazel.de> wrote:\n> I think it'd be good to add a few isolationtest cases for the\n> can't-get-cleanup-lock paths. I think it shouldn't be hard using cursors. The\n> slightly harder part is verifying that VACUUM did something reasonable, but\n> that still should be doable?\n\nWe could even just extend existing, related tests, from vacuum-reltuples.spec.\n\nAnother testing strategy occurs to me: we could stress-test the\nimplementation by simulating an environment where the no-cleanup-lock\npath is hit an unusually large number of times, possibly a fixed\npercentage of the time (like 1%, 5%), say by making vacuumlazy.c's\nConditionalLockBufferForCleanup() call return false randomly. 
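Such a gizmo might look something like this (hypothetical sketch only -- the real change would wrap the call site in vacuumlazy.c, probably just for assert-enabled test builds):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Hypothetical fault-injection wrapper: with probability fail_percent,
 * pretend that the conditional cleanup lock attempt failed, forcing
 * VACUUM through the lazy_scan_noprune path far more often than it
 * would be hit naturally.
 */
static bool
conditional_cleanup_lock_with_failures(bool real_result, int fail_percent)
{
    if (real_result && (rand() % 100) < fail_percent)
        return false;           /* inject a spurious "couldn't get lock" */
    return real_result;
}
```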
Now that\nwe have lazy_scan_noprune for the no-cleanup-lock path (which is as\nsimilar to the regular lazy_scan_prune path as possible), I wouldn't\nexpect this ConditionalLockBufferForCleanup() testing gizmo to be too\ndisruptive.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 18 Feb 2022 17:00:47 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 18, 2022 at 5:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Another testing strategy occurs to me: we could stress-test the\n> implementation by simulating an environment where the no-cleanup-lock\n> path is hit an unusually large number of times, possibly a fixed\n> percentage of the time (like 1%, 5%), say by making vacuumlazy.c's\n> ConditionalLockBufferForCleanup() call return false randomly. Now that\n> we have lazy_scan_noprune for the no-cleanup-lock path (which is as\n> similar to the regular lazy_scan_prune path as possible), I wouldn't\n> expect this ConditionalLockBufferForCleanup() testing gizmo to be too\n> disruptive.\n\nI tried this out, using the attached patch. It was quite interesting,\neven when run against HEAD. I think that I might have found a bug on\nHEAD, though I'm not really sure.\n\nIf you modify the patch to simulate conditions under which\nConditionalLockBufferForCleanup() fails about 2% of the time, you get\nmuch better coverage of lazy_scan_noprune/heap_tuple_needs_freeze,\nwithout it being so aggressive as to make \"make check-world\" fail --\nwhich is exactly what I expected. If you are much more aggressive\nabout it, and make it 50% instead (which you can get just by using the\npatch as written), then some tests will fail, mostly for reasons that\naren't surprising or interesting (e.g. plan changes). This is also\nwhat I'd have guessed would happen.\n\nHowever, it gets more interesting. 
One thing that I did not expect to\nhappen at all also happened (with the current 50% rate of simulated\nConditionalLockBufferForCleanup() failure from the patch): if I run\n\"make check\" from the pg_surgery directory, then the Postgres backend\ngets stuck in an infinite loop inside lazy_scan_prune, which has been\na symptom of several tricky bugs in the past year (not every time, but\nusually). Specifically, the VACUUM statement launched by the SQL\ncommand \"vacuum freeze htab2;\" from the file\ncontrib/pg_surgery/sql/heap_surgery.sql, at line 54 leads to this\nmisbehavior.\n\nThis is a temp table, which is a choice made by the tests specifically\nbecause they need to \"use a temp table so that vacuum behavior doesn't\ndepend on global xmin\". This is convenient way of avoiding spurious\nregression tests failures (e.g. from autoanalyze), and relies on the\nGlobalVisTempRels behavior established by Andres' 2020 bugfix commit\n94bc27b5.\n\nIt's quite possible that this is nothing more than a bug in my\nadversarial gizmo patch -- since I don't think that\nConditionalLockBufferForCleanup() can ever fail with a temp buffer\n(though even that's not completely clear right now). Even if the\nbehavior that I saw does not indicate a bug on HEAD, it still seems\ninformative. 
At the very least, it wouldn't hurt to Assert() that the\ntarget table isn't a temp table inside lazy_scan_noprune, documenting\nour assumptions around temp tables and\nConditionalLockBufferForCleanup().\n\nI haven't actually tried to debug the issue just yet, so take all this\nwith a grain of salt.\n\n-- \nPeter Geoghegan", "msg_date": "Sat, 19 Feb 2022 15:08:41 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\n(On phone, so crappy formatting and no source access)\n\nOn February 19, 2022 3:08:41 PM PST, Peter Geoghegan <pg@bowt.ie> wrote:\n>On Fri, Feb 18, 2022 at 5:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> Another testing strategy occurs to me: we could stress-test the\n>> implementation by simulating an environment where the no-cleanup-lock\n>> path is hit an unusually large number of times, possibly a fixed\n>> percentage of the time (like 1%, 5%), say by making vacuumlazy.c's\n>> ConditionalLockBufferForCleanup() call return false randomly. Now that\n>> we have lazy_scan_noprune for the no-cleanup-lock path (which is as\n>> similar to the regular lazy_scan_prune path as possible), I wouldn't\n>> expect this ConditionalLockBufferForCleanup() testing gizmo to be too\n>> disruptive.\n>\n>I tried this out, using the attached patch. It was quite interesting,\n>even when run against HEAD. I think that I might have found a bug on\n>HEAD, though I'm not really sure.\n>\n>If you modify the patch to simulate conditions under which\n>ConditionalLockBufferForCleanup() fails about 2% of the time, you get\n>much better coverage of lazy_scan_noprune/heap_tuple_needs_freeze,\n>without it being so aggressive as to make \"make check-world\" fail --\n>which is exactly what I expected. 
If you are much more aggressive\n>about it, and make it 50% instead (which you can get just by using the\n>patch as written), then some tests will fail, mostly for reasons that\n>aren't surprising or interesting (e.g. plan changes). This is also\n>what I'd have guessed would happen.\n>\n>However, it gets more interesting. One thing that I did not expect to\n>happen at all also happened (with the current 50% rate of simulated\n>ConditionalLockBufferForCleanup() failure from the patch): if I run\n>\"make check\" from the pg_surgery directory, then the Postgres backend\n>gets stuck in an infinite loop inside lazy_scan_prune, which has been\n>a symptom of several tricky bugs in the past year (not every time, but\n>usually). Specifically, the VACUUM statement launched by the SQL\n>command \"vacuum freeze htab2;\" from the file\n>contrib/pg_surgery/sql/heap_surgery.sql, at line 54 leads to this\n>misbehavior.\n\n\n>This is a temp table, which is a choice made by the tests specifically\n>because they need to \"use a temp table so that vacuum behavior doesn't\n>depend on global xmin\". This is convenient way of avoiding spurious\n>regression tests failures (e.g. from autoanalyze), and relies on the\n>GlobalVisTempRels behavior established by Andres' 2020 bugfix commit\n>94bc27b5.\n\nWe don't have a blocking path for cleanup locks of temporary buffers IIRC (normally not reachable). So I wouldn't be surprised if a cleanup lock failing would cause some odd behavior.\n\n>It's quite possible that this is nothing more than a bug in my\n>adversarial gizmo patch -- since I don't think that\n>ConditionalLockBufferForCleanup() can ever fail with a temp buffer\n>(though even that's not completely clear right now). Even if the\n>behavior that I saw does not indicate a bug on HEAD, it still seems\n>informative. 
At the very least, it wouldn't hurt to Assert() that the\n>target table isn't a temp table inside lazy_scan_noprune, documenting\n>our assumptions around temp tables and\n>ConditionalLockBufferForCleanup().\n\nDefinitely worth looking into more.\n\n\nThis reminds me of a recent thing I noticed in the aio patch. Spgist can end up busy looping when buffers are locked, instead of blocking. Not actually related, of course.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Sat, 19 Feb 2022 16:12:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Sat, Feb 19, 2022 at 3:08 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> It's quite possible that this is nothing more than a bug in my\n> adversarial gizmo patch -- since I don't think that\n> ConditionalLockBufferForCleanup() can ever fail with a temp buffer\n> (though even that's not completely clear right now). Even if the\n> behavior that I saw does not indicate a bug on HEAD, it still seems\n> informative.\n\nThis very much looks like a bug in pg_surgery itself now -- attached\nis a draft fix.\n\nThe temp table thing was a red herring. I found I could get exactly\nthe same kind of failure when htab2 was a permanent table (which was\nhow it originally appeared, before commit 0811f766fd made it into a\ntemp table due to test flappiness issues). The relevant \"vacuum freeze\nhtab2\" happens at a point after the test has already deliberately\ncorrupted one of its tuples using heap_force_kill(). It's not that we\naren't careful enough about the corruption at some point in\nvacuumlazy.c, which was my second theory. 
But I quickly discarded that\nidea, and came up with a third theory: the relevant heap_surgery.c\npath does the relevant ItemIdSetDead() to kill items, without also\ndefragmenting the page to remove the tuples with storage, which is\nwrong.\n\nThis meant that we depended on pruning happening (in this case during\nVACUUM) and defragmenting the page in passing. But there is no reason\nto not defragment the page within pg_surgery (at least no obvious\nreason), since we have a cleanup lock anyway.\n\nTheoretically you could blame this on lazy_scan_noprune instead, since\nit thinks it can collect LP_DEAD items while assuming that they have\nno storage, but that doesn't make much sense to me. There has never\nbeen any way of setting a heap item to LP_DEAD without also\ndefragmenting the page. Since that's exactly what it means to prune a\nheap page. (Actually, the same used to be true about heap vacuuming,\nwhich worked more like heap pruning before Postgres 14, but that\ndoesn't seem important.)\n\n-- \nPeter Geoghegan", "msg_date": "Sat, 19 Feb 2022 16:22:23 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Sat, Feb 19, 2022 at 4:22 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> This very much looks like a bug in pg_surgery itself now -- attached\n> is a draft fix.\n\nWait, that's not it either. I jumped the gun -- this isn't sufficient\n(though the patch I posted might not be a bad idea anyway).\n\nLooks like pg_surgery isn't processing HOT chains as whole units,\nwhich it really should (at least in the context of killing items via\nthe heap_force_kill() function). 
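To illustrate in miniature what treating the chain as a unit buys us, here's a standalone toy model (entirely made-up code, not Postgres code -- ToyItem and friends only loosely mimic ItemIdData/t_ctid, and the real heap_force_kill() obviously doesn't look like this):

```c
#include <assert.h>
#include <stdbool.h>

enum lp_state { LP_UNUSED, LP_NORMAL, LP_REDIRECT, LP_DEAD };

typedef struct
{
    enum lp_state state;
    int  next_offnum;   /* t_ctid-style forward link, 0 means end of chain */
    bool heap_only;     /* HEAP_ONLY_TUPLE: no index entry points here */
} ToyItem;

/* Kill only the TID the user named -- roughly what heap_force_kill() does */
static void
kill_single_item(ToyItem *items, int offnum)
{
    items[offnum].state = LP_DEAD;  /* cf. ItemIdSetDead() */
}

/* Kill the chain rooted at offnum as one unit instead */
static void
kill_whole_chain(ToyItem *items, int offnum)
{
    while (offnum != 0)
    {
        int next = items[offnum].next_offnum;

        items[offnum].state = LP_DEAD;
        items[offnum].next_offnum = 0;
        offnum = next;
    }
}

/* Is some heap-only tuple unreachable from every root/redirect item? */
static bool
has_orphaned_heap_only(const ToyItem *items, int nitems)
{
    bool reachable[16] = {false};

    for (int i = 1; i < nitems; i++)
    {
        bool is_root = (items[i].state == LP_REDIRECT) ||
            (items[i].state == LP_NORMAL && !items[i].heap_only);

        if (!is_root)
            continue;
        for (int o = items[i].next_offnum; o != 0; o = items[o].next_offnum)
            reachable[o] = true;
    }
    for (int i = 1; i < nitems; i++)
        if (items[i].heap_only && items[i].state == LP_NORMAL && !reachable[i])
            return true;
    return false;
}
```

With a two-member chain (root at offnum 1, heap-only tuple at offnum 2), kill_single_item() on the root leaves offnum 2 with storage but with nothing pointing to it, while kill_whole_chain() leaves nothing behind.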
Killing a root item in a HOT chain is\njust hazardous -- disconnected/orphaned heap-only tuples are liable to\ncause chaos, and should be avoided everywhere (including during\npruning, and within pg_surgery).\n\nIt's likely that the hardening I already planned on adding to pruning\n[1] (as follow-up work to recent bugfix commit 18b87b201f) will\nprevent lazy_scan_prune from getting stuck like this, whatever the\ncause happens to be. The actual page image I see lazy_scan_prune choke\non (i.e. exhibit the same infinite loop unpleasantness we've seen\nbefore on) is not in a consistent state at all (its tuples consist of\ntuples from a single HOT chain, and the HOT chain is totally\ninconsistent on account of having an LP_DEAD line pointer root item).\npg_surgery could in principle do the right thing here by always\ntreating HOT chains as whole units.\n\nLeaving behind disconnected/orphaned heap-only tuples is pretty much\npointless anyway, since they'll never be accessible by index scans.\nEven after a REINDEX, since there is no root item from the heap page\nto go in the index. (A dump and restore might work better, though.)\n\n[1] https://postgr.es/m/CAH2-WzmNk6V6tqzuuabxoxM8HJRaWU6h12toaS-bqYcLiht16A@mail.gmail.com\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 19 Feb 2022 17:22:33 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-02-19 17:22:33 -0800, Peter Geoghegan wrote:\n> Looks like pg_surgery isn't processing HOT chains as whole units,\n> which it really should (at least in the context of killing items via\n> the heap_force_kill() function). 
Killing a root item in a HOT chain is\n> just hazardous -- disconnected/orphaned heap-only tuples are liable to\n> cause chaos, and should be avoided everywhere (including during\n> pruning, and within pg_surgery).\n\nHow does that cause the endless loop?\n\nIt doesn't do so on HEAD + 0001-Add-adversarial-ConditionalLockBuff[...] for\nme. So something needs have changed with your patch?\n\n\n> It's likely that the hardening I already planned on adding to pruning\n> [1] (as follow-up work to recent bugfix commit 18b87b201f) will\n> prevent lazy_scan_prune from getting stuck like this, whatever the\n> cause happens to be.\n\nYea, we should pick that up again. Not just for robustness or\nperformance. Also because it's just a lot easier to understand.\n\n\n> Leaving behind disconnected/orphaned heap-only tuples is pretty much\n> pointless anyway, since they'll never be accessible by index scans.\n> Even after a REINDEX, since there is no root item from the heap page\n> to go in the index. (A dump and restore might work better, though.)\n\nGiven that heap_surgery's raison d'etre is correcting corruption etc, I think\nit makes sense for it to do as minimal work as possible. Iterating through a\nHOT chain would be a problem if you e.g. tried to repair a page with HOT\ncorruption.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 19 Feb 2022 17:54:16 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Sat, Feb 19, 2022 at 5:54 PM Andres Freund <andres@anarazel.de> wrote:\n> How does that cause the endless loop?\n\nAttached is the page image itself, dumped via gdb (and gzip'd). This\nwas on recent HEAD (commit 8f388f6f, actually), plus\n0001-Add-adversarial-ConditionalLockBuff[...]. No other changes. 
No\ndefragmenting in pg_surgery, nothing like that.\n\n> It doesn't do so on HEAD + 0001-Add-adversarial-ConditionalLockBuff[...] for\n> me. So something needs have changed with your patch?\n\nIt doesn't always happen -- only about half the time on my machine.\nMaybe it's timing sensitive?\n\nWe hit the \"goto retry\" on offnum 2, which is the first tuple with\nstorage (you can see \"the ghost\" of the tuple from the LP_DEAD item at\noffnum 1, since the page isn't defragmented in pg_surgery). I think\nthat this happens because the heap-only tuple at offnum 2 is fully\nDEAD to lazy_scan_prune, but hasn't been recognized as such by\nheap_page_prune. There is no way that they'll ever \"agree\" on the\ntuple being DEAD right now, because pruning still doesn't assume that\nan orphaned heap-only tuple is fully DEAD.\n\nWe can either do that, or we can throw an error concerning corruption\nwhen heap_page_prune notices orphaned tuples. Neither seems\nparticularly appealing. But it definitely makes no sense to allow\nlazy_scan_prune to spin in a futile attempt to reach agreement with\nheap_page_prune about a DEAD tuple really being DEAD.\n\n> Given that heap_surgery's raison d'etre is correcting corruption etc, I think\n> it makes sense for it to do as minimal work as possible. Iterating through a\n> HOT chain would be a problem if you e.g. tried to repair a page with HOT\n> corruption.\n\nI guess that's also true. There is at least a legitimate argument to\nbe made for not leaving behind any orphaned heap-only tuples. The\ninterface is a TID, and so the user may already believe that they're\nkilling the heap-only, not just the root item (since ctid suggests\nthat the TID of a heap-only tuple is the TID of the root item, which\nis kind of misleading).\n\nAnyway, we can decide on what to do in heap_surgery later, once the\nmain issue is under control. My point was mostly just that orphaned\nheap-only tuples are definitely not okay, in general. 
They are the\nleast worst option when corruption has already happened, maybe -- but\nmaybe not.\n\n-- \nPeter Geoghegan", "msg_date": "Sat, 19 Feb 2022 18:16:54 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-02-19 18:16:54 -0800, Peter Geoghegan wrote:\n> On Sat, Feb 19, 2022 at 5:54 PM Andres Freund <andres@anarazel.de> wrote:\n> > How does that cause the endless loop?\n> \n> Attached is the page image itself, dumped via gdb (and gzip'd). This\n> was on recent HEAD (commit 8f388f6f, actually), plus\n> 0001-Add-adversarial-ConditionalLockBuff[...]. No other changes. No\n> defragmenting in pg_surgery, nothing like that.\n\n> > It doesn't do so on HEAD + 0001-Add-adversarial-ConditionalLockBuff[...] for\n> > me. So something needs have changed with your patch?\n> \n> It doesn't always happen -- only about half the time on my machine.\n> Maybe it's timing sensitive?\n\nAh, I'd only run the tests three times or so, without it happening. Trying a\nfew more times repro'd it.\n\n\nIt's kind of surprising that this needs this\n0001-Add-adversarial-ConditionalLockBuff to break. I suspect it's a question\nof hint bits changing due to lazy_scan_noprune(), which then makes\nHeapTupleHeaderIsHotUpdated() have a different return value, preventing the\n\"If the tuple is DEAD and doesn't chain to anything else\"\npath from being taken.\n\n\n> We hit the \"goto retry\" on offnum 2, which is the first tuple with\n> storage (you can see \"the ghost\" of the tuple from the LP_DEAD item at\n> offnum 1, since the page isn't defragmented in pg_surgery). I think\n> that this happens because the heap-only tuple at offnum 2 is fully\n> DEAD to lazy_scan_prune, but hasn't been recognized as such by\n> heap_page_prune. 
There is no way that they'll ever \"agree\" on the\n> tuple being DEAD right now, because pruning still doesn't assume that\n> an orphaned heap-only tuple is fully DEAD.\n\n> We can either do that, or we can throw an error concerning corruption\n> when heap_page_prune notices orphaned tuples. Neither seems\n> particularly appealing. But it definitely makes no sense to allow\n> lazy_scan_prune to spin in a futile attempt to reach agreement with\n> heap_page_prune about a DEAD tuple really being DEAD.\n\nYea, this sucks. I think we should go for the rewrite of the\nheap_prune_chain() logic. The current approach is just never going to be\nrobust.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 19 Feb 2022 19:01:28 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Sat, Feb 19, 2022 at 7:01 PM Andres Freund <andres@anarazel.de> wrote:\n> > We can either do that, or we can throw an error concerning corruption\n> > when heap_page_prune notices orphaned tuples. Neither seems\n> > particularly appealing. But it definitely makes no sense to allow\n> > lazy_scan_prune to spin in a futile attempt to reach agreement with\n> > heap_page_prune about a DEAD tuple really being DEAD.\n>\n> Yea, this sucks. I think we should go for the rewrite of the\n> heap_prune_chain() logic. The current approach is just never going to be\n> robust.\n\nNo, it just isn't robust enough. But it's not that hard to fix. My\npatch really wasn't invasive.\n\nI confirmed that HeapTupleSatisfiesVacuum() and\nheap_prune_satisfies_vacuum() agree that the heap-only tuple at offnum\n2 is HEAPTUPLE_DEAD -- they are in agreement, as expected (so no\nreason to think that there is a new bug involved). 
The problem here is\nindeed just that heap_prune_chain() can't \"get to\" the tuple, given\nits current design.\n\nFor anybody else that doesn't follow what we're talking about:\n\nThe \"doesn't chain to anything else\" code at the start of\nheap_prune_chain() won't get to the heap-only tuple at offnum 2, since\nthe tuple is itself HeapTupleHeaderIsHotUpdated() -- the expectation\nis that it'll be processed later on, once we locate the HOT chain's\nroot item. Since, of course, the \"root item\" was already LP_DEAD\nbefore we even reached heap_page_prune() (on account of the pg_surgery\ncorruption), there is no possible way that that can happen later on.\nAnd so we cannot find the same heap-only tuple and mark it LP_UNUSED\n(which is how we always deal with HEAPTUPLE_DEAD heap-only tuples)\nduring pruning.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 19 Feb 2022 19:07:39 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Sat, Feb 19, 2022 at 7:01 PM Andres Freund <andres@anarazel.de> wrote:\n> It's kind of surprising that this needs this\n> 0001-Add-adversarial-ConditionalLockBuff to break. I suspect it's a question\n> of hint bits changing due to lazy_scan_noprune(), which then makes\n> HeapTupleHeaderIsHotUpdated() have a different return value, preventing the\n> \"If the tuple is DEAD and doesn't chain to anything else\"\n> path from being taken.\n\nThat makes sense as an explanation. 
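To spell out the trap in miniature, here's a standalone toy simulation of the control flow (made-up code -- Item, prune_removes_tuple() and so on are not the real heap_prune_chain()/lazy_scan_prune(), just the shape of the disagreement):

```c
#include <assert.h>
#include <stdbool.h>

typedef enum { ITEM_UNUSED, ITEM_DEAD_LP, ITEM_TUPLE } ItemKind;

typedef struct
{
    ItemKind kind;
    bool heap_only;
    bool hot_updated;   /* HeapTupleHeaderIsHotUpdated() said true */
    bool htsv_dead;     /* HTSV says HEAPTUPLE_DEAD */
} Item;

/*
 * First-pass rule from heap_prune_chain(), boiled down: a DEAD tuple that
 * "doesn't chain to anything else" is removed directly, while a HotUpdated
 * tuple is left for the later pass that walks chains from their root items.
 * If the root item is already LP_DEAD, that later pass never reaches it.
 */
static bool
prune_removes_tuple(const Item *root, const Item *tup)
{
    if (tup->htsv_dead && !tup->hot_updated)
        return true;            /* the direct-removal path */
    if (root->kind == ITEM_TUPLE)
        return tup->htsv_dead;  /* chain walk from a live root gets it */
    return false;               /* orphaned: pruning never touches it */
}

/* lazy_scan_prune()-style loop: retry until no DEAD tuple remains */
static int
scan_prune_retries(const Item *root, const Item *tup, int max_retries)
{
    int retries = 0;

    while (retries < max_retries)
    {
        if (!tup->htsv_dead || prune_removes_tuple(root, tup))
            return retries;     /* page reached a consistent state */
        retries++;              /* the "goto retry" */
    }
    return retries;             /* would keep spinning forever */
}
```

An LP_DEAD root plus a HotUpdated HEAPTUPLE_DEAD heap-only tuple spins until the cap; give the chain a live root and the same tuple is removed on the first pass.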
Goes to show just how fragile the\n\"DEAD and doesn't chain to anything else\" logic at the top of\nheap_prune_chain really is.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 19 Feb 2022 19:18:25 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-02-19 19:07:39 -0800, Peter Geoghegan wrote:\n> On Sat, Feb 19, 2022 at 7:01 PM Andres Freund <andres@anarazel.de> wrote:\n> > > We can either do that, or we can throw an error concerning corruption\n> > > when heap_page_prune notices orphaned tuples. Neither seems\n> > > particularly appealing. But it definitely makes no sense to allow\n> > > lazy_scan_prune to spin in a futile attempt to reach agreement with\n> > > heap_page_prune about a DEAD tuple really being DEAD.\n> >\n> > Yea, this sucks. I think we should go for the rewrite of the\n> > heap_prune_chain() logic. The current approach is just never going to be\n> > robust.\n> \n> No, it just isn't robust enough. But it's not that hard to fix. My\n> patch really wasn't invasive.\n\nI think we're in agreement there. We might think at some point about\nbackpatching too, but I'd rather have it stew in HEAD for a bit first.\n\n\n> I confirmed that HeapTupleSatisfiesVacuum() and\n> heap_prune_satisfies_vacuum() agree that the heap-only tuple at offnum\n> 2 is HEAPTUPLE_DEAD -- they are in agreement, as expected (so no\n> reason to think that there is a new bug involved). 
The problem here is\n> indeed just that heap_prune_chain() can't \"get to\" the tuple, given\n> its current design.\n\nRight.\n\nThe reason that the \"adversarial\" patch makes a difference is solely that it\nchanges the heap_surgery test to actually kill an item, which it doesn't\nintend:\n\ncreate temp table htab2(a int);\ninsert into htab2 values (100);\nupdate htab2 set a = 200;\nvacuum htab2;\n\n-- redirected TIDs should be skipped\nselect heap_force_kill('htab2'::regclass, ARRAY['(0, 1)']::tid[]);\n\n\nIf the vacuum can get the cleanup lock due to the adversarial patch, the\nheap_force_kill() doesn't do anything, because the first item is a\nredirect. However if it *can't* get a cleanup lock, heap_force_kill() instead\ntargets the root item. Triggering the endless loop.\n\n\nHm. I think this might be a mild regression in 14. In < 14 we'd just skip the\ntuple in lazy_scan_heap(), but now we have an uninterruptible endless\nloop.\n\n\nWe'd do completely bogus stuff later in < 14 though, I think we'd just leave\nit in place despite being older than relfrozenxid, which obviously has its own\nset of issues.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 19 Feb 2022 19:28:33 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Sat, Feb 19, 2022 at 6:16 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Given that heap_surgery's raison d'etre is correcting corruption etc, I think\n> > it makes sense for it to do as minimal work as possible. Iterating through a\n> > HOT chain would be a problem if you e.g. tried to repair a page with HOT\n> > corruption.\n>\n> I guess that's also true. There is at least a legitimate argument to\n> be made for not leaving behind any orphaned heap-only tuples. 
The\n> interface is a TID, and so the user may already believe that they're\n> killing the heap-only, not just the root item (since ctid suggests\n> that the TID of a heap-only tuple is the TID of the root item, which\n> is kind of misleading).\n\nActually, I would say that heap_surgery's raison d'etre is making\nweird errors related to corruption of this or that TID go away, so\nthat the user can cut their losses. That's how it's advertised.\n\nLet's assume that we don't want to make VACUUM/pruning just treat\norphaned heap-only tuples as DEAD, regardless of their true HTSV-wise\nstatus -- let's say that we want to err in the direction of doing\nnothing at all with the page. Now we have to have a weird error in\nVACUUM instead (not great, but better than just spinning between\nlazy_scan_prune and heap_prune_page). And we've just created natural\ndemand for heap_surgery to deal with the problem by deleting whole HOT\nchains (not just root items).\n\nIf we allow VACUUM to treat orphaned heap-only tuples as DEAD right\naway, then we might as well do the same thing in heap_surgery, since\nthere is little chance that the user will get to the heap-only tuples\nbefore VACUUM does (not something to rely on, at any rate).\n\nEither way, I think we probably end up needing to teach heap_surgery\nto kill entire HOT chains as a group, given a TID.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 19 Feb 2022 19:31:21 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Sat, Feb 19, 2022 at 7:28 PM Andres Freund <andres@anarazel.de> wrote:\n> If the vacuum can get the cleanup lock due to the adversarial patch, the\n> heap_force_kill() doesn't do anything, because the first item is a\n> redirect. However if it *can't* get a cleanup lock, heap_force_kill() instead\n> targets the root item. 
Triggering the endless loop.\n\nBut it shouldn't matter if the root item is an LP_REDIRECT or a normal\n(not heap-only) tuple with storage. Either way it's the root of a HOT\nchain.\n\nThe fact that pg_surgery treats LP_REDIRECT items differently from the\nother kind of root items is just arbitrary. It seems to have more to\ndo with freezing tuples than killing tuples.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Sat, 19 Feb 2022 19:40:20 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-02-19 19:31:21 -0800, Peter Geoghegan wrote:\n> On Sat, Feb 19, 2022 at 6:16 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > Given that heap_surgery's raison d'etre is correcting corruption etc, I think\n> > > it makes sense for it to do as minimal work as possible. Iterating through a\n> > > HOT chain would be a problem if you e.g. tried to repair a page with HOT\n> > > corruption.\n> >\n> > I guess that's also true. There is at least a legitimate argument to\n> > be made for not leaving behind any orphaned heap-only tuples. The\n> > interface is a TID, and so the user may already believe that they're\n> > killing the heap-only, not just the root item (since ctid suggests\n> > that the TID of a heap-only tuple is the TID of the root item, which\n> > is kind of misleading).\n> \n> Actually, I would say that heap_surgery's raison d'etre is making\n> weird errors related to corruption of this or that TID go away, so\n> that the user can cut their losses. That's how it's advertised.\n\nI'm not that sure those are that different... Imagine some corruption leading\nto two hot chains ending in the same tid, which our fancy new secure pruning\nalgorithm might detect.\n\nEither way, I'm a bit surprised about the logic to not allow killing redirect\nitems? 
What if you have a redirect pointing to an unused item?\n\n\n> Let's assume that we don't want to make VACUUM/pruning just treat\n> orphaned heap-only tuples as DEAD, regardless of their true HTSV-wise\n> status\n\nI don't think that'd ever be a good idea. Those tuples are visible to a\nseqscan after all.\n\n\n> -- let's say that we want to err in the direction of doing\n> nothing at all with the page. Now we have to have a weird error in\n> VACUUM instead (not great, but better than just spinning between\n> lazy_scan_prune and heap_prune_page).\n\nNon-DEAD orphaned versions shouldn't cause a problem in lazy_scan_prune(). The\nproblem here is DEAD orphaned HOT tuples, and those we should be able to\ndelete with the new page pruning logic, right?\n\n\nI think it might be worth getting rid of the need for the retry approach by\nreusing the same HTSV status array between heap_prune_page and\nlazy_scan_prune. Then the only legitimate reason for seeing a DEAD item in\nlazy_scan_prune() would be some form of corruption. And it'd be a pretty\ndecent performance boost, HTSV ain't cheap.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 19 Feb 2022 19:47:32 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Sat, Feb 19, 2022 at 7:47 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm not that sure those are that different... Imagine some corruption leading\n> to two hot chains ending in the same tid, which our fancy new secure pruning\n> algorithm might detect.\n\nI suppose that's possible, but it doesn't seem all that likely to ever\nhappen, what with the xmin -> xmax cross-tuple matching stuff.\n\n> Either way, I'm a bit surprised about the logic to not allow killing redirect\n> items? 
What if you have a redirect pointing to an unused item?\n\nAgain, I simply think it boils down to having to treat HOT chains as a\nwhole unit when killing TIDs.\n\n> > Let's assume that we don't want to make VACUUM/pruning just treat\n> > orphaned heap-only tuples as DEAD, regardless of their true HTSV-wise\n> > status\n>\n> I don't think that'd ever be a good idea. Those tuples are visible to a\n> seqscan after all.\n\nI agree (I don't hate it completely, but it seems mostly bad). This is\nwhat leads me to the conclusion that pg_surgery has to be able to do\nthis instead. Surely it's not okay to have something that makes VACUUM\nalways end in error, that cannot even be fixed by pg_surgery.\n\n> > -- let's say that we want to err in the direction of doing\n> > nothing at all with the page. Now we have to have a weird error in\n> > VACUUM instead (not great, but better than just spinning between\n> > lazy_scan_prune and heap_prune_page).\n>\n> Non DEAD orphaned versions shouldn't cause a problem in lazy_scan_prune(). The\n> problem here is a DEAD orphaned HOT tuples, and those we should be able to\n> delete with the new page pruning logic, right?\n\nRight. But what good does that really do? The problematic page had a\nthird tuple (at offnum 3) that was LIVE. If we could have done\nsomething about the problematic tuple at offnum 2 (which is where we\ngot stuck), then we'd still be left with a very unpleasant choice\nabout what happens to the third tuple.\n\n> I think it might be worth getting rid of the need for the retry approach by\n> reusing the same HTSV status array between heap_prune_page and\n> lazy_scan_prune. Then the only legitimate reason for seeing a DEAD item in\n> lazy_scan_prune() would be some form of corruption. And it'd be a pretty\n> decent performance boost, HTSV ain't cheap.\n\nI guess it doesn't actually matter if we leave an aborted DEAD tuple\nbehind, that we could have pruned away, but didn't. 
The important\nthing is to be consistent at the level of the page.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 19 Feb 2022 19:56:53 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi, \n\nOn February 19, 2022 7:56:53 PM PST, Peter Geoghegan <pg@bowt.ie> wrote:\n>On Sat, Feb 19, 2022 at 7:47 PM Andres Freund <andres@anarazel.de> wrote:\n>> Non DEAD orphaned versions shouldn't cause a problem in lazy_scan_prune(). The\n>> problem here is a DEAD orphaned HOT tuples, and those we should be able to\n>> delete with the new page pruning logic, right?\n>\n>Right. But what good does that really do? The problematic page had a\n>third tuple (at offnum 3) that was LIVE. If we could have done\n>something about the problematic tuple at offnum 2 (which is where we\n>got stuck), then we'd still be left with a very unpleasant choice\n>about what happens to the third tuple.\n\nWhy does anything need to happen to it from vacuum's POV? It'll not be a problem for freezing etc. Until it's deleted vacuum doesn't need to care.\n\nProbably worth a WARNING, and amcheck definitely needs to detect it, but otherwise I think it's fine to just continue.\n\n\n>> I think it might be worth getting rid of the need for the retry approach by\n>> reusing the same HTSV status array between heap_prune_page and\n>> lazy_scan_prune. Then the only legitimate reason for seeing a DEAD item in\n>> lazy_scan_prune() would be some form of corruption. And it'd be a pretty\n>> decent performance boost, HTSV ain't cheap.\n>\n>I guess it doesn't actually matter if we leave an aborted DEAD tuple\n>behind, that we could have pruned away, but didn't. 
The important\n>thing is to be consistent at the level of the page.\n\nThat's not ok, because it opens up dangers of being interpreted differently after wraparound etc.\n\nBut I don't see any cases where it would happen with the new pruning logic in your patch and sharing the HTSV status array?\n\nAndres\n\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Sat, 19 Feb 2022 20:21:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Sat, Feb 19, 2022 at 8:21 PM Andres Freund <andres@anarazel.de> wrote:\n> Why does anything need to happen to it from vacuum's POV? It'll not be a problem for freezing etc. Until it's deleted vacuum doesn't need to care.\n>\n> Probably worth a WARNING, and amcheck definitely needs to detect it, but otherwise I think it's fine to just continue.\n\nMaybe that's true, but it's just really weird to imagine not having an\nLP_REDIRECT that points to the LIVE item here, without throwing an\nerror. Seems kind of iffy, to say the least.\n\n> >I guess it doesn't actually matter if we leave an aborted DEAD tuple\n> >behind, that we could have pruned away, but didn't. The important\n> >thing is to be consistent at the level of the page.\n>\n> That's not ok, because it opens up dangers of being interpreted differently after wraparound etc.\n>\n> But I don't see any cases where it would happen with the new pruning logic in your patch and sharing the HTSV status array?\n\nRight. 
Fundamentally, there isn't any reason why it should matter that\nVACUUM reached the heap page just before (rather than concurrent with\nor just after) some xact that inserted or updated on the page aborts.\nJust as long as we have a consistent idea about what's going on at the\nlevel of the whole page (or maybe the level of each HOT chain, but the\nwhole page level seems simpler to me).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 19 Feb 2022 20:28:21 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Sat, Feb 19, 2022 at 8:54 PM Andres Freund <andres@anarazel.de> wrote:\n> > Leaving behind disconnected/orphaned heap-only tuples is pretty much\n> > pointless anyway, since they'll never be accessible by index scans.\n> > Even after a REINDEX, since there is no root item from the heap page\n> > to go in the index. (A dump and restore might work better, though.)\n>\n> Given that heap_surgery's raison d'etre is correcting corruption etc, I think\n> it makes sense for it to do as minimal work as possible. Iterating through a\n> HOT chain would be a problem if you e.g. tried to repair a page with HOT\n> corruption.\n\nYeah, I agree. I don't have time to respond to all of these emails\nthoroughly right now, but I think it's really important that\npg_surgery do the exact surgery the user requested, and not any other\nwork. I don't think that page defragmentation should EVER be REQUIRED\nas a condition of other work. If other code is relying on that, I'd\nsay it's busted. I'm a little more uncertain about the case where we\nkill the root tuple of a HOT chain, because I can see that this might\nleave the page in a state where sequential scans see the tuple at the end\nof the chain and index scans don't. 
I'm not sure whether that should\nbe the responsibility of pg_surgery itself to avoid, or whether that's\nyour problem as a user of it -- although I lean mildly toward the\nlatter view, at the moment. But in any case surely the pruning code\ncan't just decide to go into an infinite loop if that happens. Code\nthat manipulates the states of data pages needs to be as robust\nagainst arbitrary on-disk states as we can reasonably make it, because\npages get garbled on disk all the time.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 20 Feb 2022 10:03:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 18, 2022 at 7:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> We have to worry about XIDs from MultiXacts (and xmax values more\n> generally). And we have to worry about the case where we start out\n> with only xmin frozen (by an earlier VACUUM), and then have to freeze\n> xmax too. I believe that we have to generally consider xmin and xmax\n> independently. For example, we cannot ignore xmax, just because we\n> looked at xmin, since in general xmin alone might have already been\n> frozen.\n\nRight, so we at least need to add a similar comment to what I proposed\nfor MXIDs, and maybe other changes are needed, too.\n\n> The difference between the cleanup lock path (in\n> lazy_scan_prune/heap_prepare_freeze_tuple) and the share lock path (in\n> lazy_scan_noprune/heap_tuple_needs_freeze) is what is at issue in both\n> of these confusing comment blocks, really. 
Note that cutoff_xid is the\n> name that both heap_prepare_freeze_tuple and heap_tuple_needs_freeze\n> have for FreezeLimit (maybe we should rename every occurrence of\n> cutoff_xid in heapam.c to FreezeLimit).\n>\n> At a high level, we aren't changing the fundamental definition of an\n> aggressive VACUUM in any of the patches -- we still need to advance\n> relfrozenxid up to FreezeLimit in an aggressive VACUUM, just like on\n> HEAD, today (we may be able to advance it *past* FreezeLimit, but\n> that's just a bonus). But in a non-aggressive VACUUM, where there is\n> still no strict requirement to advance relfrozenxid (by any amount),\n> the code added by 0001 can set relfrozenxid to any known safe value,\n> which could either be from before FreezeLimit, or after FreezeLimit --\n> almost anything is possible (provided we respect the relfrozenxid\n> invariant, and provided we see that we didn't skip any\n> all-visible-not-all-frozen pages).\n>\n> Since we still need to \"respect FreezeLimit\" in an aggressive VACUUM,\n> the aggressive case might need to wait for a full cleanup lock the\n> hard way, having tried and failed to do it the easy way within\n> lazy_scan_noprune (lazy_scan_noprune will still return false when any\n> call to heap_tuple_needs_freeze for any tuple returns true) -- same\n> as on HEAD, today.\n>\n> And so the difference at issue here is: FreezeLimit/cutoff_xid only\n> needs to affect the new NewRelfrozenxid value we use for relfrozenxid in\n> heap_prepare_freeze_tuple, which is involved in real freezing -- not\n> in heap_tuple_needs_freeze, whose main purpose is still to help us\n> avoid freezing where a cleanup lock isn't immediately available. 
While\n> the purpose of FreezeLimit/cutoff_xid within heap_tuple_needs_freeze\n> is to determine its bool return value, which will only be of interest\n> to the aggressive case (which might have to get a cleanup lock and do\n> it the hard way), not the non-aggressive case (where ratcheting back\n> NewRelfrozenxid is generally possible, and generally leaves us with\n> almost as good of a value).\n>\n> In other words: the calls to heap_tuple_needs_freeze made from\n> lazy_scan_noprune are simply concerned with the page as it actually\n> is, whereas the similar/corresponding calls to\n> heap_prepare_freeze_tuple from lazy_scan_prune are concerned with\n> *what the page will actually become*, after freezing finishes, and\n> after lazy_scan_prune is done with the page entirely (ultimately\n> the final NewRelfrozenxid value set in pg_class.relfrozenxid only has\n> to be <= the oldest extant XID *at the time the VACUUM operation is\n> just about to end*, not some earlier time, so \"being versus becoming\"\n> is an interesting distinction for us).\n>\n> Maybe the way that FreezeLimit/cutoff_xid is overloaded can be fixed\n> here, to make all of this less confusing. I only now fully realized\n> how confusing all of this stuff is -- very.\n\nRight. I think I understand all of this, or at least most of it -- but\nnot from the comment. The question is how the comment can be more\nclear. My general suggestion is that function header comments should\nhave more to do with the behavior of the function than how it fits\ninto the bigger picture. If it's clear to the reader what conditions\nmust hold before calling the function and which must hold on return,\nit helps a lot. 
IMHO, it's the job of the comments in the calling\nfunction to clarify why we then choose to call that function at the\nplace and in the way that we do.\n\n> As a general rule, we try to freeze all of the remaining live tuples\n> on a page (following pruning) together, as a group, or none at all.\n> Most of the time this is triggered by our noticing that the page is\n> about to be set all-visible (but not all-frozen), and doing work\n> sufficient to mark it fully all-frozen instead. Occasionally there is\n> FreezeLimit to consider, which is now more of a backstop thing, used\n> to make sure that we never get too far behind in terms of unfrozen\n> XIDs. This is useful in part because it avoids any future\n> non-aggressive VACUUM that is fundamentally unable to advance\n> relfrozenxid (you can't skip all-visible pages if there are only\n> all-frozen pages in the VM in practice).\n>\n> We're generally doing a lot more freezing with 0002, but we still\n> manage to avoid freezing too much in tables like pgbench_tellers or\n> pgbench_branches -- tables where it makes the least sense. Such tables\n> will be updated so frequently that VACUUM is relatively unlikely to\n> ever mark any page all-visible, avoiding the main criteria for\n> freezing implicitly. It's also unlikely that they'll ever have an XID that is so\n> old as to trigger the fallback FreezeLimit-style criteria for freezing.\n>\n> In practice, freezing tuples like this is generally not that expensive in\n> most tables where VACUUM freezes the majority of pages immediately\n> (tables that aren't like pgbench_tellers or pgbench_branches), because\n> they're generally big tables, where the overhead of FPIs tends\n> to dominate anyway (gambling that we can avoid more FPIs later on is not a\n> bad gamble, as gambles go). This seems to make the overhead\n> acceptable, on balance. Granted, you might be able to poke holes in\n> that argument, and reasonable people might disagree on what acceptable\n> should mean. 
There are many value judgements here, which makes it\n> complicated. (On the other hand we might be able to do better if there\n> was a particularly bad case for the 0002 work, if one came to light.)\n\nI think that the idea has potential, but I don't think that I\nunderstand yet what the *exact* algorithm is. Maybe I need to read the\ncode, when I have some time for that. I can't form an intelligent\nopinion at this stage about whether this is likely to be a net\npositive.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 20 Feb 2022 10:30:21 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Sun, Feb 20, 2022 at 7:30 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Right, so we at least need to add a similar comment to what I proposed\n> for MXIDs, and maybe other changes are needed, too.\n\nAgreed.\n\n> > Maybe the way that FreezeLimit/cutoff_xid is overloaded can be fixed\n> > here, to make all of this less confusing. I only now fully realized\n> > how confusing all of this stuff is -- very.\n>\n> Right. I think I understand all of this, or at least most of it -- but\n> not from the comment. The question is how the comment can be more\n> clear. My general suggestion is that function header comments should\n> have more to do with the behavior of the function than how it fits\n> into the bigger picture. If it's clear to the reader what conditions\n> must hold before calling the function and which must hold on return,\n> it helps a lot. IMHO, it's the job of the comments in the calling\n> function to clarify why we then choose to call that function at the\n> place and in the way that we do.\n\nYou've given me a lot of high quality feedback on all of this, which\nI'll work through soon. 
It's hard to get the balance right here, but\nit's made much easier by this kind of feedback.\n\n> I think that the idea has potential, but I don't think that I\n> understand yet what the *exact* algorithm is.\n\nThe algorithm seems to exploit a natural tendency that Andres once\ndescribed in a blog post about his snapshot scalability work [1]. To a\nsurprising extent, we can usefully bucket all tuples/pages into two\nsimple categories:\n\n1. Very, very old (\"infinitely old\" for all practical purposes).\n\n2. Very very new.\n\nThere doesn't seem to be much need for a third \"in-between\" category\nin practice. This seems to be at least approximately true all of the\ntime.\n\nPerhaps Andres wouldn't agree with this very general statement -- he\nactually said something more specific. I for one believe that the\npoint he made generalizes surprisingly well, though. I have my own\ntheories about why this appears to be true. (Executive summary: power\nlaws are weird, and it seems as if the sparsity-of-effects principle\nmakes it easy to bucket things at the highest level, in a way that\ngeneralizes well across disparate workloads.)\n\n> Maybe I need to read the\n> code, when I have some time for that. I can't form an intelligent\n> opinion at this stage about whether this is likely to be a net\n> positive.\n\nThe code in the v8-0002 patch is a bit sloppy right now. I didn't\nquite get around to cleaning it up -- I was focussed on performance\nvalidation of the algorithm itself. 
So bear that in mind if you do\nlook at v8-0002 (might want to wait for v9-0002 before looking).\n\nI believe that the only essential thing about the algorithm itself is\nthat it freezes all the tuples on a page when it anticipates setting\nthe page all-visible, or (barring edge cases) freezes none at all.\n(Note that setting the page all-visible/all-frozen may happen just\nafter lazy_scan_prune returns, or in the second pass over the heap,\nafter LP_DEAD items are set to LP_UNUSED -- lazy_scan_prune doesn't\ncare which way it will happen.)\n\nThere are one or two other design choices that we need to make, like\nwhat exact tuples we freeze in the edge case where FreezeLimit/XID age\nforces us to freeze in lazy_scan_prune. These other design choices\ndon't seem relevant to the issue of central importance, which is\nwhether or not we come out ahead overall with this new algorithm.\nFreezeLimit will seldom affect our choice to freeze or not freeze now,\nand so AFAICT the exact way that FreezeLimit affects which precise\nfreezing-eligible tuples we freeze doesn't complicate performance\nvalidation.\n\nRemember when I got excited about how my big TPC-C benchmark run\nshowed a predictable, tick/tock style pattern across VACUUM operations\nagainst the order and order lines table [2]? It seemed very\nsignificant to me that the OldestXmin of VACUUM operation n\nconsistently went on to become the new relfrozenxid for the same table\nin VACUUM operation n + 1. It wasn't exactly the same XID, but very\nclose to it (within the range of noise). This pattern was clearly\npresent, even though VACUUM operation n + 1 might happen as long as 4\nor 5 hours after VACUUM operation n (this was a big table).\n\nThis pattern was encouraging to me because it showed (at least for the\nworkload and tables in question) that the amount of unnecessary extra\nfreezing can't have been too bad -- the fact that we can always\nadvance relfrozenxid in the same way is evidence of that. 
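(As an aside, the bookkeeping being validated here is tiny. A standalone sketch -- hypothetical names, not the patch's actual code -- of how the candidate relfrozenxid ratchets down to the oldest XID left behind, using the usual wraparound-aware comparison for normal XIDs:)

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Wraparound-aware comparison for normal XIDs, as in TransactionIdPrecedes() */
int
xid_precedes(TransactionId a, TransactionId b)
{
    return (int32_t) (a - b) < 0;
}

/*
 * Ratchet the candidate relfrozenxid down whenever an unfrozen XID that
 * will remain in the table is observed.  The invariant: the final value
 * must be <= every extant XID at the moment the VACUUM operation ends.
 */
TransactionId
track_oldest_extant(TransactionId candidate_relfrozenxid,
                    TransactionId extant_xid)
{
    if (xid_precedes(extant_xid, candidate_relfrozenxid))
        return extant_xid;
    return candidate_relfrozenxid;
}
```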
Note that\nthe vacuum_freeze_min_age setting can't have affected our choice of\nwhat to freeze (given what we see in the logs), and yet there is a\nclear pattern where the pages (it's really pages, not tuples) that the\nnew algorithm doesn't freeze in VACUUM operation n will reliably get\nfrozen in VACUUM operation n + 1 instead.\n\nAnd so this pattern seems to lend support to the general idea of\nletting the workload itself be the primary driver of what pages we\nfreeze (not FreezeLimit, and not anything based on XIDs). That's\nreally the underlying principle behind the new algorithm -- freezing\nis driven by workload characteristics (or page/block characteristics,\nif you prefer). ISTM that vacuum_freeze_min_age is almost impossible\nto tune -- XID age is just too squishy a concept for that to ever\nwork.\n\n[1] https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/improving-postgres-connection-scalability-snapshots/ba-p/1806462#interlude-removing-the-need-for-recentglobalxminhorizon\n[2] https://postgr.es/m/CAH2-Wz=iLnf+0CsaB37efXCGMRJO1DyJw5HMzm7tp1AxG1NR2g@mail.gmail.com\n-- scroll down to \"TPC-C\", which has the relevant autovacuum log\noutput for the orders table, covering a 24 hour period\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Sun, 20 Feb 2022 12:27:25 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Sun, Feb 20, 2022 at 12:27 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> You've given me a lot of high quality feedback on all of this, which\n> I'll work through soon. It's hard to get the balance right here, but\n> it's made much easier by this kind of feedback.\n\nAttached is v9. Lots of changes. Highlights:\n\n* Much improved 0001 (\"loosen coupling\" dynamic relfrozenxid tracking\npatch). 
Some of the improvements are due to recent feedback from\nRobert.\n\n* Much improved 0002 (\"Make page-level characteristics drive freezing\"\npatch). Whole new approach to the implementation, though the same\nalgorithm as before.\n\n* No more FSM patch -- that was totally separate work, that I\nshouldn't have attached to this project.\n\n* There are 2 new patches (these are now 0003 and 0004), both of which\nare concerned with allowing non-aggressive VACUUM to consistently\nadvance relfrozenxid. I think that 0003 makes sense on general\nprinciple, but I'm much less sure about 0004. These aren't too\nimportant.\n\nWhile working on the new approach to freezing taken by v9-0002, I had\nsome insight about the issues that Robert raised around 0001, too. I\nwasn't expecting that to happen.\n\n0002 makes page-level freezing a first class thing.\nheap_prepare_freeze_tuple now has some (limited) knowledge of how this\nworks. heap_prepare_freeze_tuple's cutoff_xid argument is now always\nthe VACUUM caller's OldestXmin (not its FreezeLimit, as before). We\nstill have to pass FreezeLimit to heap_prepare_freeze_tuple, which\nhelps us to respect FreezeLimit as a backstop, and so now it's passed\nvia the new backstop_cutoff_xid argument instead. Whenever we opt to\n\"freeze a page\", the new page-level algorithm *always* uses the most\nrecent possible XID and MXID values (OldestXmin and oldestMxact) to\ndecide what XIDs/XMIDs need to be replaced. That might sound like it'd\nbe too much, but it only applies to those pages that we actually\ndecide to freeze (since page-level characteristics drive everything\nnow). 
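(Sketched as a hypothetical predicate -- not the patch's actual interface -- the page-level decision looks something like this:)

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Wraparound-aware comparison for normal XIDs */
int
xid_precedes(TransactionId a, TransactionId b)
{
    return (int32_t) (a - b) < 0;
}

/*
 * A page gets frozen either because VACUUM is about to mark it
 * all-visible anyway (page-characteristic trigger: make it all-frozen
 * while we're here), or because some remaining XID has crossed
 * FreezeLimit (the rare backstop trigger).
 */
int
should_freeze_page(int will_mark_all_visible,
                   TransactionId oldest_xid_on_page,
                   TransactionId freeze_limit)
{
    if (will_mark_all_visible)
        return 1;
    if (xid_precedes(oldest_xid_on_page, freeze_limit))
        return 1;
    return 0;
}
```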
FreezeLimit is only one way of triggering that now (and one of\nthe least interesting and rarest).\n\n0002 also adds an alternative set of relfrozenxid/relminmxid tracker\nvariables, to make the \"don't freeze the page\" path within\nlazy_scan_prune simpler (if you don't want to freeze the page, then\nuse the set of tracker variables that go with that choice, which\nheap_prepare_freeze_tuple knows about and helps with). With page-level\nfreezing, lazy_scan_prune wants to make a decision about the page as a\nwhole, at the last minute, after all heap_prepare_freeze_tuple calls\nhave already been made. So I think that heap_prepare_freeze_tuple\nneeds to know about that aspect of lazy_scan_prune's behavior.\n\nWhen we *don't* want to freeze the page, we more or less need\neverything related to freezing inside lazy_scan_prune to behave like\nlazy_scan_noprune, which never freezes the page (that's mostly the\npoint of lazy_scan_noprune). And that's almost what we actually do --\nheap_prepare_freeze_tuple now outsources maintenance of this\nalternative set of \"don't freeze the page\" relfrozenxid/relminmxid\ntracker variables to its sibling function, heap_tuple_needs_freeze.\nThat is the same function that lazy_scan_noprune itself actually\ncalls.\n\nNow back to Robert's feedback on 0001, which had very complicated\ncomments in the last version. This approach seems to make the \"being\nversus becoming\" or \"going to freeze versus not going to freeze\"\ndistinctions much clearer. This is less true if you assume that 0002\nwon't be committed but 0001 will be. Even if that happens with\nPostgres 15, I have to imagine that adding something like 0002 must be\nthe real goal, long term. Without 0002, the value from 0001 is far\nmore limited. 
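(A minimal sketch of the two tracker sets described above -- hypothetical names, heavily simplified -- showing why both candidates can be maintained in a single pass over the page, with the page-level decision picking one at the end:)

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Wraparound-aware comparison for normal XIDs */
int
xid_precedes(TransactionId a, TransactionId b)
{
    return (int32_t) (a - b) < 0;
}

/*
 * One candidate relfrozenxid per possible outcome.  frozen_case assumes
 * the page gets frozen, so XIDs that freezing would replace are ignored;
 * nofreeze_case assumes the page is left alone, so every extant XID
 * counts.
 */
struct xid_trackers
{
    TransactionId frozen_case;
    TransactionId nofreeze_case;
};

void
observe_xid(struct xid_trackers *t, TransactionId xid, int would_be_frozen)
{
    if (xid_precedes(xid, t->nofreeze_case))
        t->nofreeze_case = xid;
    if (!would_be_frozen && xid_precedes(xid, t->frozen_case))
        t->frozen_case = xid;
}
```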
You need both together to get the virtuous cycle I've\ndescribed.\n\nThe approach with always using OldestXmin as cutoff_xid and\noldestMxact as our cutoff_multi makes a lot of sense to me, in part\nbecause I think that it might well cut down on the tendency of VACUUM\nto allocate new MultiXacts in order to be able to freeze old ones.\nAFAICT the only reason that heap_prepare_freeze_tuple does that is\nbecause it has no flexibility on FreezeLimit and MultiXactCutoff.\nThese are derived from vacuum_freeze_min_age and\nvacuum_multixact_freeze_min_age, respectively, and so they're two\nindependent though fairly meaningless cutoffs. On the other hand,\nOldestXmin and OldestMxact are not independent in the same way. We get\nboth of them at the same time and the same place, in\nvacuum_set_xid_limits. OldestMxact really is very close to OldestXmin\n-- only the units differ.\n\nIt seems that heap_prepare_freeze_tuple allocates new MXIDs (when\nfreezing old ones) in large part so it can NOT freeze XIDs that it\nwould have been useful (and much cheaper) to remove anyway. On HEAD,\nFreezeMultiXactId() doesn't get passed down the VACUUM operation's\nOldestXmin at all (it actually just gets FreezeLimit passed as its\ncutoff_xid argument). It cannot possibly recognize any of this for\nitself.\n\nDoes that theory about MultiXacts sound plausible? I'm not claiming\nthat the patch makes it impossible that FreezeMultiXactId() will have\nto allocate a new MultiXact to freeze during VACUUM -- the\nfreeze-the-dead isolation tests already show that that's not true. I\njust think that page-level freezing based on page characteristics with\noldestXmin and oldestMxact (not FreezeLimit and MultiXactCutoff)\ncutoffs might make it a lot less likely in practice. 
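(To illustrate with a grossly simplified sketch -- the real FreezeMultiXactId also distinguishes lock-only members from an updater, and checks in-progress member XIDs precisely -- counting how many members must survive is what tells us whether a brand new MultiXact is truly needed:)

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Wraparound-aware comparison for normal XIDs */
int
xid_precedes(TransactionId a, TransactionId b)
{
    return (int32_t) (a - b) < 0;
}

/*
 * Treat any member XID >= oldest_xmin as possibly still running.
 * Return value 0 means xmax can simply be cleared; 1 means a plain
 * XID suffices; only a value > 1 forces allocating a new MultiXact.
 */
int
members_remaining(const TransactionId *members, int n,
                  TransactionId oldest_xmin)
{
    int         keep = 0;

    for (int i = 0; i < n; i++)
    {
        if (!xid_precedes(members[i], oldest_xmin))
            keep++;             /* member must survive freezing */
    }
    return keep;
}
```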
oldestXmin and\noldestMxact map to the same wall clock time, more or less -- that\nseems like it might be an important distinction, independent of\neverything else.\n\nThanks\n--\nPeter Geoghegan", "msg_date": "Thu, 24 Feb 2022 20:53:08 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-02-24 20:53:08 -0800, Peter Geoghegan wrote:\n> 0002 makes page-level freezing a first class thing.\n> heap_prepare_freeze_tuple now has some (limited) knowledge of how this\n> works. heap_prepare_freeze_tuple's cutoff_xid argument is now always\n> the VACUUM caller's OldestXmin (not its FreezeLimit, as before). We\n> still have to pass FreezeLimit to heap_prepare_freeze_tuple, which\n> helps us to respect FreezeLimit as a backstop, and so now it's passed\n> via the new backstop_cutoff_xid argument instead.\n\nI am not a fan of the backstop terminology. It's still the reason we need to\ndo freezing for correctness reasons. It'd make more sense to me to turn it\naround and call the \"non-backstop\" freezing opportunistic freezing or such.\n\n\n> Whenever we opt to\n> \"freeze a page\", the new page-level algorithm *always* uses the most\n> recent possible XID and MXID values (OldestXmin and oldestMxact) to\n> decide what XIDs/XMIDs need to be replaced. That might sound like it'd\n> be too much, but it only applies to those pages that we actually\n> decide to freeze (since page-level characteristics drive everything\n> now). FreezeLimit is only one way of triggering that now (and one of\n> the least interesting and rarest).\n\nThat largely makes sense to me and doesn't seem weird.\n\nI'm a tad concerned about replacing mxids that have some members that are\nolder than OldestXmin but not older than FreezeLimit. It's not too hard to\nimagine that accelerating mxid consumption considerably. 
But we can probably,\nif not already done, special case that.\n\n\n> It seems that heap_prepare_freeze_tuple allocates new MXIDs (when\n> freezing old ones) in large part so it can NOT freeze XIDs that it\n> would have been useful (and much cheaper) to remove anyway.\n\nWell, we may have to allocate a new mxid because some members are older than\nFreezeLimit but others are still running. When do we not remove xids that\nwould have been cheaper to remove once we decide to actually do work?\n\n\n> On HEAD, FreezeMultiXactId() doesn't get passed down the VACUUM operation's\n> OldestXmin at all (it actually just gets FreezeLimit passed as its\n> cutoff_xid argument). It cannot possibly recognize any of this for itself.\n\nIt does recognize something like OldestXmin in a more precise and expensive\nway - MultiXactIdIsRunning() and TransactionIdIsCurrentTransactionId().\n\n\n> Does that theory about MultiXacts sound plausible? I'm not claiming\n> that the patch makes it impossible that FreezeMultiXactId() will have\n> to allocate a new MultiXact to freeze during VACUUM -- the\n> freeze-the-dead isolation tests already show that that's not true. I\n> just think that page-level freezing based on page characteristics with\n> oldestXmin and oldestMxact (not FreezeLimit and MultiXactCutoff)\n> cutoffs might make it a lot less likely in practice.\n\nHm. I guess I'll have to look at the code for it. It doesn't immediately\n\"feel\" quite right.\n\n\n> oldestXmin and oldestMxact map to the same wall clock time, more or less --\n> that seems like it might be an important distinction, independent of\n> everything else.\n\nHm. Multis can be kept alive by fairly \"young\" member xids. So it may not be\nremovable (without creating a newer multi) until much later than its creation\ntime. 
So I don't think that's really true.\n\n\n\n> From 483bc8df203f9df058fcb53e7972e3912e223b30 Mon Sep 17 00:00:00 2001\n> From: Peter Geoghegan <pg@bowt.ie>\n> Date: Mon, 22 Nov 2021 10:02:30 -0800\n> Subject: [PATCH v9 1/4] Loosen coupling between relfrozenxid and freezing.\n>\n> When VACUUM set relfrozenxid before now, it set it to whatever value was\n> used to determine which tuples to freeze -- the FreezeLimit cutoff.\n> This approach was very naive: the relfrozenxid invariant only requires\n> that new relfrozenxid values be <= the oldest extant XID remaining in\n> the table (at the point that the VACUUM operation ends), which in\n> general might be much more recent than FreezeLimit. There is no fixed\n> relationship between the amount of physical work performed by VACUUM to\n> make it safe to advance relfrozenxid (freezing and pruning), and the\n> actual number of XIDs that relfrozenxid can be advanced by (at least in\n> principle) as a result. VACUUM might have to freeze all of the tuples\n> from a hundred million heap pages just to enable relfrozenxid to be\n> advanced by no more than one or two XIDs. On the other hand, VACUUM\n> might end up doing little or no work, and yet still be capable of\n> advancing relfrozenxid by hundreds of millions of XIDs as a result.\n>\n> VACUUM now sets relfrozenxid (and relminmxid) using the exact oldest\n> extant XID (and oldest extant MultiXactId) from the table, including\n> XIDs from the table's remaining/unfrozen MultiXacts. 
This requires that\n> VACUUM carefully track the oldest unfrozen XID/MultiXactId as it goes.\n> This optimization doesn't require any changes to the definition of\n> relfrozenxid, nor does it require changes to the core design of\n> freezing.\n\n\n> Final relfrozenxid values must still be >= FreezeLimit in an aggressive\n> VACUUM (FreezeLimit is still used as an XID-age based backstop there).\n> In non-aggressive VACUUMs (where there is still no strict guarantee that\n> relfrozenxid will be advanced at all), we now advance relfrozenxid by as\n> much as we possibly can. This exploits workload conditions that make it\n> easy to advance relfrozenxid by many more XIDs (for the same amount of\n> freezing/pruning work).\n\nDon't we now always advance relfrozenxid as much as we can, particularly also\nduring aggressive vacuums?\n\n\n\n> * FRM_RETURN_IS_MULTI\n> *\t\tThe return value is a new MultiXactId to set as new Xmax.\n> *\t\t(caller must obtain proper infomask bits using GetMultiXactIdHintBits)\n> + *\n> + * \"relfrozenxid_out\" is an output value; it's used to maintain target new\n> + * relfrozenxid for the relation. It can be ignored unless \"flags\" contains\n> + * either FRM_NOOP or FRM_RETURN_IS_MULTI, because we only handle multiXacts\n> + * here. This follows the general convention: only track XIDs that will still\n> + * be in the table after the ongoing VACUUM finishes. Note that it's up to\n> + * caller to maintain this when the Xid return value is itself an Xid.\n> + *\n> + * Note that we cannot depend on xmin to maintain relfrozenxid_out.\n\nWhat does it mean for xmin to maintain something?\n\n\n\n> + * See heap_prepare_freeze_tuple for information about the basic rules for the\n> + * cutoffs used here.\n> + *\n> + * Maintains *relfrozenxid_nofreeze_out and *relminmxid_nofreeze_out, which\n> + * are the current target relfrozenxid and relminmxid for the relation. 
We\n> + * assume that caller will never want to freeze its tuple, even when the tuple\n> + * \"needs freezing\" according to our return value.\n\nI don't understand the \"will never want to\" bit?\n\n\n> Caller should make temp\n> + * copies of global tracking variables before starting to process a page, so\n> + * that we can only scribble on copies. That way caller can just discard the\n> + * temp copies if it isn't okay with that assumption.\n> + *\n> + * Only aggressive VACUUM callers are expected to really care when a tuple\n> + * \"needs freezing\" according to us. It follows that non-aggressive VACUUMs\n> + * can use *relfrozenxid_nofreeze_out and *relminmxid_nofreeze_out in all\n> + * cases.\n\nCould it make sense to track can_freeze and need_freeze separately?\n\n\n> @@ -7158,57 +7256,59 @@ heap_tuple_needs_freeze(HeapTupleHeader tuple, TransactionId cutoff_xid,\n> \tif (tuple->t_infomask & HEAP_XMAX_IS_MULTI)\n> \t{\n> \t\tMultiXactId multi;\n> +\t\tMultiXactMember *members;\n> +\t\tint\t\t\tnmembers;\n>\n> \t\tmulti = HeapTupleHeaderGetRawXmax(tuple);\n> -\t\tif (!MultiXactIdIsValid(multi))\n> -\t\t{\n> -\t\t\t/* no xmax set, ignore */\n> -\t\t\t;\n> -\t\t}\n\n> -\t\telse if (HEAP_LOCKED_UPGRADED(tuple->t_infomask))\n> +\t\tif (MultiXactIdIsValid(multi) &&\n> +\t\t\tMultiXactIdPrecedes(multi, *relminmxid_nofreeze_out))\n> +\t\t\t*relminmxid_nofreeze_out = multi;\n\nI may be misreading the diff, but aren't we now continuing to use multi down\nbelow even if !MultiXactIdIsValid()?\n\n\n> +\t\tif (HEAP_LOCKED_UPGRADED(tuple->t_infomask))\n> \t\t\treturn true;\n> -\t\telse if (MultiXactIdPrecedes(multi, cutoff_multi))\n> -\t\t\treturn true;\n> -\t\telse\n> +\t\telse if (MultiXactIdPrecedes(multi, backstop_cutoff_multi))\n> +\t\t\tneeds_freeze = true;\n> +\n> +\t\t/* need to check whether any member of the mxact is too old */\n> +\t\tnmembers = GetMultiXactIdMembers(multi, &members, false,\n> +\t\t\t\t\t\t\t\t\t 
HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask));\n\nDoesn't this mean we unpack the members even if the multi is old enough to\nneed freezing? Just to then do it again during freezing? Accessing multis\nisn't cheap...\n\n\n> +\t\t\tif (TransactionIdPrecedes(members[i].xid, backstop_cutoff_xid))\n> +\t\t\t\tneeds_freeze = true;\n> +\t\t\tif (TransactionIdPrecedes(members[i].xid,\n> +\t\t\t\t\t\t\t\t\t *relfrozenxid_nofreeze_out))\n> +\t\t\t\t*relfrozenxid_nofreeze_out = xid;\n> \t\t}\n> +\t\tif (nmembers > 0)\n> +\t\t\tpfree(members);\n> \t}\n> \telse\n> \t{\n> \t\txid = HeapTupleHeaderGetRawXmax(tuple);\n> -\t\tif (TransactionIdIsNormal(xid) &&\n> -\t\t\tTransactionIdPrecedes(xid, cutoff_xid))\n> -\t\t\treturn true;\n> +\t\tif (TransactionIdIsNormal(xid))\n> +\t\t{\n> +\t\t\tif (TransactionIdPrecedes(xid, *relfrozenxid_nofreeze_out))\n> +\t\t\t\t*relfrozenxid_nofreeze_out = xid;\n> +\t\t\tif (TransactionIdPrecedes(xid, backstop_cutoff_xid))\n> +\t\t\t\tneeds_freeze = true;\n> +\t\t}\n> \t}\n>\n> \tif (tuple->t_infomask & HEAP_MOVED)\n> \t{\n> \t\txid = HeapTupleHeaderGetXvac(tuple);\n> -\t\tif (TransactionIdIsNormal(xid) &&\n> -\t\t\tTransactionIdPrecedes(xid, cutoff_xid))\n> -\t\t\treturn true;\n> +\t\tif (TransactionIdIsNormal(xid))\n> +\t\t{\n> +\t\t\tif (TransactionIdPrecedes(xid, *relfrozenxid_nofreeze_out))\n> +\t\t\t\t*relfrozenxid_nofreeze_out = xid;\n> +\t\t\tif (TransactionIdPrecedes(xid, backstop_cutoff_xid))\n> +\t\t\t\tneeds_freeze = true;\n> +\t\t}\n> \t}\n\nThis stanza is repeated a bunch. Perhaps put it in a small static inline\nhelper?\n\n\n> \t/* VACUUM operation's cutoff for freezing XIDs and MultiXactIds */\n> \tTransactionId FreezeLimit;\n> \tMultiXactId MultiXactCutoff;\n> -\t/* Are FreezeLimit/MultiXactCutoff still valid? 
*/\n> -\tbool\t\tfreeze_cutoffs_valid;\n> +\t/* Tracks oldest extant XID/MXID for setting relfrozenxid/relminmxid */\n> +\tTransactionId NewRelfrozenXid;\n> +\tMultiXactId NewRelminMxid;\n\nStruct member names starting with an upper case look profoundly ugly to\nme... But this isn't the first one, so I guess... :(\n\n\n\n\n> From d10f42a1c091b4dc52670fca80a63fee4e73e20c Mon Sep 17 00:00:00 2001\n> From: Peter Geoghegan <pg@bowt.ie>\n> Date: Mon, 13 Dec 2021 15:00:49 -0800\n> Subject: [PATCH v9 2/4] Make page-level characteristics drive freezing.\n>\n> Teach VACUUM to freeze all of the tuples on a page whenever it notices\n> that it would otherwise mark the page all-visible, without also marking\n> it all-frozen. VACUUM typically won't freeze _any_ tuples on the page\n> unless _all_ tuples (that remain after pruning) are all-visible. This\n> makes the overhead of vacuuming much more predictable over time. We\n> avoid the need for large balloon payments during aggressive VACUUMs\n> (typically anti-wraparound autovacuums). Freezing is proactive, so\n> we're much less likely to get into \"freezing debt\".\n\n\nI still suspect this will cause a very substantial increase in WAL traffic in\nrealistic workloads. It's common to have workloads where tuples are inserted\nonce, and deleted once / partition dropped. Freezing all the tuples is a lot\nmore expensive than just marking the page all visible. It's not uncommon to be\nbound by WAL traffic rather than buffer dirtying rate (since the latter may be\nameliorated by s_b and local storage, whereas WAL needs to be\nstreamed/archived).\n\nThis is particularly true because log_heap_visible() doesn't need an FPW if\nchecksums aren't enabled. A small record vs an FPI is a *huge* difference.\n\n\nI think we'll have to make this less aggressive or tunable. Random ideas for\nheuristics:\n\n- Is it likely that freezing would not require an FPI or conversely that\n log_heap_visible() will also need an fpi? 
If the page already was recently\n modified / checksums are enabled the WAL overhead of the freezing doesn't\n play much of a role.\n\n- #dead items / #force-frozen items on the page - if we already need to do\n more than just setting all-visible, we can probably afford the WAL traffic.\n\n- relfrozenxid vs max_freeze_age / FreezeLimit. The closer they get, the more\n aggressively we should freeze all-visible pages. Might even make sense to\n start vacuuming an increasing percentage of all-visible pages during\n non-aggressive vacuums, the closer we get to FreezeLimit.\n\n- Keep stats about the age of dead and frozen tuples over time. If all tuples are\n removed within a reasonable fraction of freeze_max_age, there's no point in\n freezing them.\n\n\n> The new approach to freezing also enables relfrozenxid advancement in\n> non-aggressive VACUUMs, which might be enough to avoid aggressive\n> VACUUMs altogether (with many individual tables/workloads). While the\n> non-aggressive case continues to skip all-visible (but not all-frozen)\n> pages (thereby making relfrozenxid advancement impossible), that in\n> itself will no longer hinder relfrozenxid advancement (outside of\n> pg_upgrade scenarios).\n\nI don't know how to parse \"thereby making relfrozenxid advancement impossible\n... will no longer hinder relfrozenxid advancement\"?\n\n\n> We now consistently avoid leaving behind all-visible (not all-frozen) pages.\n> This (as well as work from commit 44fa84881f) makes relfrozenxid advancement\n> in non-aggressive VACUUMs commonplace.\n\ns/consistently/try to/?\n\n\n> The system accumulates freezing debt in proportion to the number of\n> physical heap pages with unfrozen tuples, more or less. Anything based\n> on XID age is likely to be a poor proxy for the eventual cost of\n> freezing (during the inevitable anti-wraparound autovacuum). 
At a high\n> level, freezing is now treated as one of the costs of storing tuples in\n> physical heap pages -- not a cost of transactions that allocate XIDs.\n> Although vacuum_freeze_min_age and vacuum_multixact_freeze_min_age still\n> influence what we freeze, and when, they effectively become backstops.\n> It may still be necessary to \"freeze a page\" due to the presence of a\n> particularly old XID, from before VACUUM's FreezeLimit cutoff, though\n> that will be rare in practice -- FreezeLimit is just a backstop now.\n\nI don't really like the \"rare in practice\" bit. It'll be rare in some\nworkloads but others will likely be much less affected.\n\n\n\n> + * Although this interface is primarily tuple-based, vacuumlazy.c caller\n> + * cooperates with us to decide on whether or not to freeze whole pages,\n> + * together as a single group. We prepare for freezing at the level of each\n> + * tuple, but the final decision is made for the page as a whole. All pages\n> + * that are frozen within a given VACUUM operation are frozen according to\n> + * cutoff_xid and cutoff_multi. Caller _must_ freeze the whole page when\n> + * we've set *force_freeze to true!\n> + *\n> + * cutoff_xid must be caller's oldest xmin to ensure that any XID older than\n> + * it could neither be running nor seen as running by any open transaction.\n> + * This ensures that the replacement will not change anyone's idea of the\n> + * tuple state. Similarly, cutoff_multi must be the smallest MultiXactId used\n> + * by any open transaction (at the time that the oldest xmin was acquired).\n\nI think this means my concern above about increasing mxid creation rate\nsubstantially may be warranted.\n\n\n> + * backstop_cutoff_xid must be <= cutoff_xid, and backstop_cutoff_multi must\n> + * be <= cutoff_multi. 
When any XID/XMID from before these backstop cutoffs\n> + * is encountered, we set *force_freeze to true, making caller freeze the page\n> + * (freezing-eligible XIDs/XMIDs will be frozen, at least). \"Backstop\n> + * freezing\" ensures that VACUUM won't allow XIDs/XMIDs to ever get too old.\n> + * This shouldn't be necessary very often. VACUUM should prefer to freeze\n> + * when it's cheap (not when it's urgent).\n\nHm. Does this mean that we might call heap_prepare_freeze_tuple and then\ndecide not to freeze? Doesn't that mean we might create new multis over and\nover, because we don't end up pulling the trigger on freezing the page?\n\n\n> +\n> +\t\t\t/*\n> +\t\t\t * We allocated a MultiXact for this, so force freezing to avoid\n> +\t\t\t * wasting it\n> +\t\t\t */\n> +\t\t\t*force_freeze = true;\n\nAh, I guess not. But it'd be nicer if I didn't have to scroll down to the body\nof the function to figure it out...\n\n\n\n> From d2190abf366f148bae5307442e8a6245c6922e78 Mon Sep 17 00:00:00 2001\n> From: Peter Geoghegan <pg@bowt.ie>\n> Date: Mon, 21 Feb 2022 12:46:44 -0800\n> Subject: [PATCH v9 3/4] Remove aggressive VACUUM skipping special case.\n>\n> Since it's simply never okay to miss out on advancing relfrozenxid\n> during an aggressive VACUUM (that's the whole point), the aggressive\n> case treated any page from a next_unskippable_block-wise skippable block\n> range as an all-frozen page (not a merely all-visible page) during\n> skipping. 
Such a page might not be all-visible/all-frozen at the point\n> that it actually gets skipped, but it could nevertheless be safely\n> skipped, and then counted in frozenskipped_pages (the page must have\n> been all-frozen back when we determined the extent of the range of\n> blocks to skip, since aggressive VACUUMs _must_ scan all-visible pages).\n> This is necessary to ensure that aggressive VACUUMs are always capable\n> of advancing relfrozenxid.\n\n> The non-aggressive case behaved slightly differently: it rechecked the\n> visibility map for each page at the point of skipping, and only counted\n> pages in frozenskipped_pages when they were still all-frozen at that\n> time. But it skipped the page either way (since we already committed to\n> skipping the page at the point of the recheck). This was correct, but\n> sometimes resulted in non-aggressive VACUUMs needlessly wasting an\n> opportunity to advance relfrozenxid (when a page was modified in just\n> the wrong way, at just the wrong time). It also resulted in a needless\n> recheck of the visibility map for each and every page skipped during\n> non-aggressive VACUUMs.\n>\n> Avoid these problems by conditioning the \"skippable page was definitely\n> all-frozen when range of skippable pages was first determined\" behavior\n> on what the visibility map _actually said_ about the range as a whole\n> back when we first determined the extent of the range (don't deduce what\n> must have happened at that time on the basis of aggressive-ness). This\n> allows us to reliably count skipped pages in frozenskipped_pages when\n> they were initially all-frozen. In particular, when a page's visibility\n> map bit is unset after the point where a skippable range of pages is\n> initially determined, but before the point where the page is actually\n> skipped, non-aggressive VACUUMs now count it in frozenskipped_pages,\n> just like aggressive VACUUMs always have [1]. 
It's not critical for the\n> non-aggressive case to get this right, but there is no reason not to.\n>\n> [1] Actually, it might not work that way when there happens to be a mix\n> of all-visible and all-frozen pages in a range of skippable pages.\n> There is no chance of VACUUM advancing relfrozenxid in this scenario\n> either way, though, so it doesn't matter.\n\nI think this commit message needs a good amount of polishing - it's very\nconvoluted. It's late and I didn't sleep well, but I've tried to read it\nseveral times without really getting a sense of what this precisely does.\n\n\n\n\n> From 15dec1e572ac4da0540251253c3c219eadf46a83 Mon Sep 17 00:00:00 2001\n> From: Peter Geoghegan <pg@bowt.ie>\n> Date: Thu, 24 Feb 2022 17:21:45 -0800\n> Subject: [PATCH v9 4/4] Avoid setting a page all-visible but not all-frozen.\n\nTo me the commit message body doesn't actually describe what this is doing...\n\n\n> This is pretty much an addendum to the work in the \"Make page-level\n> characteristics drive freezing\" commit. It has been broken out like\n> this because I'm not even sure if it's necessary. It seems like we\n> might want to be paranoid about losing out on the chance to advance\n> relfrozenxid in non-aggressive VACUUMs, though.\n\n> The only test that will trigger this case is the \"freeze-the-dead\"\n> isolation test. It's incredibly narrow. On the other hand, why take a\n> chance? All it takes is one heap page that's all-visible (and not also\n> all-frozen) nestled between some all-frozen heap pages to lose out on\n> relfrozenxid advancement. The SKIP_PAGES_THRESHOLD stuff won't save us\n> then [1].\n\nFWIW, I'd really like to get rid of SKIP_PAGES_THRESHOLD. 
It often ends up\nspending a lot of time doing IO that we never need, completely trashing all CPU\ncaches, while not actually causing decent readahead IO from what I've seen.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Feb 2022 23:14:22 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Thu, Feb 24, 2022 at 11:14 PM Andres Freund <andres@anarazel.de> wrote:\n> I am not a fan of the backstop terminology. It's still the reason we need to\n> do freezing for correctness reasons.\n\nThanks for the review!\n\nI'm not wedded to that particular terminology, but I think that we\nneed something like it. Open to suggestions.\n\nHow about limit-based? Something like that?\n\n> It'd make more sense to me to turn it\n> around and call the \"non-backstop\" freezing opportunistic freezing or such.\n\nThe problem with that scheme is that it leads to a world where\n\"standard freezing\" is incredibly rare (it often literally never\nhappens), whereas \"opportunistic freezing\" is incredibly common. That\ndoesn't make much sense to me.\n\nWe tend to think of 50 million XIDs (the vacuum_freeze_min_age\ndefault) as being not that many. But I think that it can be a huge\nnumber, too. Even then, it's unpredictable -- I suspect that it can\nchange without very much changing in the application, from the point\nof view of users. That's a big part of the problem I'm trying to\naddress -- freezing outside of aggressive VACUUMs is way too rare (it\nmight barely happen at all). 
FreezeLimit/vacuum_freeze_min_age was\ndesigned at a time when there was no visibility map at all, when it\nmade somewhat more sense as the thing that drives freezing.\n\nIncidentally, this is part of the problem with anti-wraparound vacuums\nand freezing debt -- the fact that some quite busy databases take\nweeks or months to go through 50 million XIDs (or 200 million)\nincreases the pain of the eventual aggressive VACUUM. It's not\ncompletely unbounded -- autovacuum_freeze_max_age is not 100% useless\nhere. But the extent to which that stuff bounds the debt can vary\nenormously, for not-very-good reasons.\n\n> > Whenever we opt to\n> > \"freeze a page\", the new page-level algorithm *always* uses the most\n> > recent possible XID and MXID values (OldestXmin and oldestMxact) to\n> > decide what XIDs/XMIDs need to be replaced. That might sound like it'd\n> > be too much, but it only applies to those pages that we actually\n> > decide to freeze (since page-level characteristics drive everything\n> > now). FreezeLimit is only one way of triggering that now (and one of\n> > the least interesting and rarest).\n>\n> That largely makes sense to me and doesn't seem weird.\n\nI'm very pleased that the main intuition behind 0002 makes sense to\nyou. That's a start, at least.\n\n> I'm a tad concerned about replacing mxids that have some members that are\n> older than OldestXmin but not older than FreezeLimit. It's not too hard to\n> imagine that accelerating mxid consumption considerably. But we can probably,\n> if not already done, special case that.\n\nLet's assume for a moment that this is a real problem. I'm not sure if\nit is or not myself (it's complicated), but let's say that it is. The\nproblem may be more than offset by the positive impact on relminmxid\nadvancement. I have placed a large emphasis on enabling\nrelfrozenxid/relminmxid advancement in every non-aggressive VACUUM,\nfor a number of reasons -- this is one of the reasons. 
Finding a way\nfor every VACUUM operation to be \"vacrel->scanned_pages +\nvacrel->frozenskipped_pages == orig_rel_pages\" (i.e. making *some*\namount of relfrozenxid/relminmxid advancement possible in every\nVACUUM) has a great deal of value.\n\nAs I said recently on the \"do only critical work during single-user\nvacuum?\" thread, why should databases that consume too many MXIDs do\nso evenly, across all their tables? There\nare usually one or two large tables, and many more smaller tables. I\nthink it's much more likely that the largest tables consume\napproximately zero MultiXactIds in these databases -- actual\nMultiXactId consumption is probably concentrated in just one or two\nsmaller tables (even when we burn through MultiXacts very quickly).\nBut we don't recognize these kinds of distinctions at all right now.\n\nUnder these conditions, we will have many more opportunities to\nadvance relminmxid for most of the tables (including the larger\ntables) all the way up to current-oldestMxact with the patch series.\nWithout needing to freeze *any* MultiXacts early (just freezing some\nXIDs early) to get that benefit. The patch series is not just about\nspreading the burden of freezing, so that non-aggressive VACUUMs\nfreeze more -- it's also making relfrozenxid and relminmxid more\nrecent and therefore *reliable* indicators of which tables any\nwraparound problems *really* are in.\n\nDoes that make sense to you? This kind of \"virtuous cycle\" seems\nreally important to me. It's a subtle point, so I have to ask.\n\n> > It seems that heap_prepare_freeze_tuple allocates new MXIDs (when\n> > freezing old ones) in large part so it can NOT freeze XIDs that it\n> > would have been useful (and much cheaper) to remove anyway.\n>\n> Well, we may have to allocate a new mxid because some members are older than\n> FreezeLimit but others are still running. 
When do we not remove xids that\n> would have been cheaper to remove once we decide to actually do work?\n\nMy point was that today, on HEAD, there is nothing fundamentally\nspecial about FreezeLimit (aka cutoff_xid) as far as\nheap_prepare_freeze_tuple is concerned -- and yet that's the only\ncutoff it knows about, really. Why can't we do better, by \"exploiting\nthe difference\" between FreezeLimit and OldestXmin?\n\n> > On HEAD, FreezeMultiXactId() doesn't get passed down the VACUUM operation's\n> > OldestXmin at all (it actually just gets FreezeLimit passed as its\n> > cutoff_xid argument). It cannot possibly recognize any of this for itself.\n>\n> It does recognize something like OldestXmin in a more precise and expensive\n> way - MultiXactIdIsRunning() and TransactionIdIsCurrentTransactionId().\n\nIt doesn't look that way to me.\n\nWhile it's true that FreezeMultiXactId() will call\nMultiXactIdIsRunning(), that's only a cross-check. This cross-check is\nmade at a point where we've already determined that the MultiXact in\nquestion is < cutoff_multi. In other words, it catches cases where a\n\"MultiXactId < cutoff_multi\" Multi contains an XID *that's still\nrunning* -- a correctness issue. Nothing to do with being smart about\navoiding allocating new MultiXacts during freezing, or exploiting the\nfact that \"FreezeLimit < OldestXmin\" (which is almost always true,\nvery true).\n\nThis correctness issue is the same issue discussed in \"NB: cutoff_xid\n*must* be <= the current global xmin...\" comments that appear at the\ntop of heap_prepare_freeze_tuple. That's all.\n\n> Hm. I guess I'll have to look at the code for it. It doesn't immediately\n> \"feel\" quite right.\n\nI kinda think it might be. Please let me know if you see a problem\nwith what I've said.\n\n> > oldestXmin and oldestMxact map to the same wall clock time, more or less --\n> > that seems like it might be an important distinction, independent of\n> > everything else.\n>\n> Hm. 
Multis can be kept alive by fairly \"young\" member xids. So it may not be\n> removable (without creating a newer multi) until much later than its creation\n> time. So I don't think that's really true.\n\nMaybe what I said above is true, even though (at the same time) I have\n*also* created new problems with \"young\" member xids. I really don't\nknow right now, though.\n\n> > Final relfrozenxid values must still be >= FreezeLimit in an aggressive\n> > VACUUM (FreezeLimit is still used as an XID-age based backstop there).\n> > In non-aggressive VACUUMs (where there is still no strict guarantee that\n> > relfrozenxid will be advanced at all), we now advance relfrozenxid by as\n> > much as we possibly can. This exploits workload conditions that make it\n> > easy to advance relfrozenxid by many more XIDs (for the same amount of\n> > freezing/pruning work).\n>\n> Don't we now always advance relfrozenxid as much as we can, particularly also\n> during aggressive vacuums?\n\nI just meant \"we hope for the best and accept what we can get\". Will fix.\n\n> > * FRM_RETURN_IS_MULTI\n> > * The return value is a new MultiXactId to set as new Xmax.\n> > * (caller must obtain proper infomask bits using GetMultiXactIdHintBits)\n> > + *\n> > + * \"relfrozenxid_out\" is an output value; it's used to maintain target new\n> > + * relfrozenxid for the relation. It can be ignored unless \"flags\" contains\n> > + * either FRM_NOOP or FRM_RETURN_IS_MULTI, because we only handle multiXacts\n> > + * here. This follows the general convention: only track XIDs that will still\n> > + * be in the table after the ongoing VACUUM finishes. 
Note that it's up to\n> > + * caller to maintain this when the Xid return value is itself an Xid.\n> > + *\n> > + * Note that we cannot depend on xmin to maintain relfrozenxid_out.\n>\n> What does it mean for xmin to maintain something?\n\nWill fix.\n\n> > + * See heap_prepare_freeze_tuple for information about the basic rules for the\n> > + * cutoffs used here.\n> > + *\n> > + * Maintains *relfrozenxid_nofreeze_out and *relminmxid_nofreeze_out, which\n> > + * are the current target relfrozenxid and relminmxid for the relation. We\n> > + * assume that caller will never want to freeze its tuple, even when the tuple\n> > + * \"needs freezing\" according to our return value.\n>\n> I don't understand the \"will never want to\" bit?\n\nI meant \"even when it's a non-aggressive VACUUM, which will never want\nto wait for a cleanup lock the hard way, and will therefore always\nsettle for these relfrozenxid_nofreeze_out and\n*relminmxid_nofreeze_out values\". Note the convention here, which is\nrelfrozenxid_nofreeze_out is not the same thing as relfrozenxid_out --\nthe former variable name is used for values in cases where we *don't*\nfreeze, the latter for values in the cases where we do.\n\nWill try to clear that up.\n\n> > Caller should make temp\n> > + * copies of global tracking variables before starting to process a page, so\n> > + * that we can only scribble on copies. That way caller can just discard the\n> > + * temp copies if it isn't okay with that assumption.\n> > + *\n> > + * Only aggressive VACUUM callers are expected to really care when a tuple\n> > + * \"needs freezing\" according to us. It follows that non-aggressive VACUUMs\n> > + * can use *relfrozenxid_nofreeze_out and *relminmxid_nofreeze_out in all\n> > + * cases.\n>\n> Could it make sense to track can_freeze and need_freeze separately?\n\nYou mean to change the signature of heap_tuple_needs_freeze, so it\ndoesn't return a bool anymore? 
It just has two bool pointers as\narguments, can_freeze and need_freeze?\n\nI suppose that could make sense. Don't feel strongly either way.\n\n> I may be misreading the diff, but aren't we know continuing to use multi down\n> below even if !MultiXactIdIsValid()?\n\nWill investigate.\n\n> Doesn't this mean we unpack the members even if the multi is old enough to\n> need freezing? Just to then do it again during freezing? Accessing multis\n> isn't cheap...\n\nWill investigate.\n\n> This stanza is repeated a bunch. Perhaps put it in a small static inline\n> helper?\n\nWill fix.\n\n> Struct member names starting with an upper case look profoundly ugly to\n> me... But this isn't the first one, so I guess... :(\n\nI am in 100% agreement, actually. But you know how it goes...\n\n> I still suspect this will cause a very substantial increase in WAL traffic in\n> realistic workloads. It's common to have workloads where tuples are inserted\n> once, and deleted once/ partition dropped.\n\nI agree with the principle that this kind of use case should be\naccommodated in some way.\n\n> I think we'll have to make this less aggressive or tunable. Random ideas for\n> heuristics:\n\nThe problem that all of these heuristics have is that they will tend\nto make it impossible for future non-aggressive VACUUMs to be able to\nadvance relfrozenxid. All that it takes is one single all-visible page\nto make that impossible. As I said upthread, I think that being able\nto advance relfrozenxid (and especially relminmxid) by *some* amount\nin every VACUUM has non-obvious value.\n\nMaybe you can address that by changing the behavior of non-aggressive\nVACUUMs, so that they are directly sensitive to this. Maybe they don't\nskip any all-visible pages when there aren't too many, that kind of\nthing. That needs to be in scope IMV.\n\n> I don't know how to parse \"thereby making relfrozenxid advancement impossible\n> ... 
will no longer hinder relfrozenxid advancement\"?\n\nWill fix.\n\n> > We now consistently avoid leaving behind all-visible (not all-frozen) pages.\n> > This (as well as work from commit 44fa84881f) makes relfrozenxid advancement\n> > in non-aggressive VACUUMs commonplace.\n>\n> s/consistently/try to/?\n\nWill fix.\n\n> > The system accumulates freezing debt in proportion to the number of\n> > physical heap pages with unfrozen tuples, more or less. Anything based\n> > on XID age is likely to be a poor proxy for the eventual cost of\n> > freezing (during the inevitable anti-wraparound autovacuum). At a high\n> > level, freezing is now treated as one of the costs of storing tuples in\n> > physical heap pages -- not a cost of transactions that allocate XIDs.\n> > Although vacuum_freeze_min_age and vacuum_multixact_freeze_min_age still\n> > influence what we freeze, and when, they effectively become backstops.\n> > It may still be necessary to \"freeze a page\" due to the presence of a\n> > particularly old XID, from before VACUUM's FreezeLimit cutoff, though\n> > that will be rare in practice -- FreezeLimit is just a backstop now.\n>\n> I don't really like the \"rare in practice\" bit. It'll be rare in some\n> workloads but others will likely be much less affected.\n\nMaybe. The first time one XID crosses FreezeLimit now will be enough\nto trigger freezing the page. So it's still very different to today.\n\nI'll change this, though. It's not important.\n\n> I think this means my concern above about increasing mxid creation rate\n> substantially may be warranted.\n\nCan you think of an adversarial workload, to get a sense of the extent\nof the problem?\n\n> > + * backstop_cutoff_xid must be <= cutoff_xid, and backstop_cutoff_multi must\n> > + * be <= cutoff_multi. When any XID/XMID from before these backstop cutoffs\n> > + * is encountered, we set *force_freeze to true, making caller freeze the page\n> > + * (freezing-eligible XIDs/XMIDs will be frozen, at least). 
\"Backstop\n> > + * freezing\" ensures that VACUUM won't allow XIDs/XMIDs to ever get too old.\n> > + * This shouldn't be necessary very often. VACUUM should prefer to freeze\n> > + * when it's cheap (not when it's urgent).\n>\n> Hm. Does this mean that we might call heap_prepare_freeze_tuple and then\n> decide not to freeze?\n\nYes. And so heap_prepare_freeze_tuple is now a little more like its\nsibling function, heap_tuple_needs_freeze.\n\n> Doesn't that mean we might create new multis over and\n> over, because we don't end up pulling the trigger on freezing the page?\n\n> Ah, I guess not. But it'd be nicer if I didn't have to scroll down to the body\n> of the function to figure it out...\n\nWill fix.\n\n> I think this commit message needs a good amount of polishing - it's very\n> convoluted. It's late and I didn't sleep well, but I've tried to read it\n> several times without really getting a sense of what this precisely does.\n\nIt received much less polishing than the others.\n\nThink of 0003 like this:\n\nThe logic for skipping a range of blocks using the visibility map\nworks by deciding the next_unskippable_block-wise range of skippable\nblocks up front. Later, we actually execute the skipping of this range\nof blocks (assuming it exceeds SKIP_PAGES_THRESHOLD). These are two\nseparate steps.\n\nRight now, we do this:\n\n if (skipping_blocks && blkno < nblocks - 1)\n {\n /*\n * Tricky, tricky. If this is in aggressive vacuum, the page\n * must have been all-frozen at the time we checked whether it\n * was skippable, but it might not be any more. We must be\n * careful to count it as a skipped all-frozen page in that\n * case, or else we'll think we can't update relfrozenxid and\n * relminmxid. 
If it's not an aggressive vacuum, we don't\n * know whether it was initially all-frozen, so we have to\n * recheck.\n */\n if (vacrel->aggressive ||\n VM_ALL_FROZEN(vacrel->rel, blkno, &vmbuffer))\n vacrel->frozenskipped_pages++;\n continue;\n }\n\nThe fact that this is conditioned in part on \"vacrel->aggressive\"\nconcerns me here. Why should we have a special case for this, where we\ncondition something on aggressive-ness that isn't actually strictly\nrelated to that? Why not just remember that the range that we're\nskipping was all-frozen up-front?\n\nThat way non-aggressive VACUUMs are not unnecessarily at a\ndisadvantage, when it comes to being able to advance relfrozenxid.\nWhat if we end up not incrementing vacrel->frozenskipped_pages when we\neasily could have, just because this is a non-aggressive VACUUM? I\nthink that it's worth avoiding stuff like that whenever possible.\nMaybe this particular example isn't the most important one. For\nexample it probably isn't as bad as the one was fixed by the\nlazy_scan_noprune work. But why even take a chance? Seems easier to\nremove the special case -- which is what this really is.\n\n> FWIW, I'd really like to get rid of SKIP_PAGES_THRESHOLD. It often ends up\n> causing a lot of time doing IO that we never need, completely trashing all CPU\n> caches, while not actually causing decent readaead IO from what I've seen.\n\nI am also suspicious of SKIP_PAGES_THRESHOLD. 
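To make the behavior under discussion concrete, here is a simplified model of the SKIP_PAGES_THRESHOLD rule -- hypothetical illustration only, not the actual vacuumlazy.c code (the function name pages_to_scan and the flat bool array are made up; the real code consults the visibility map and also special-cases the last block):

```c
#include <stdbool.h>

#define SKIP_PAGES_THRESHOLD 32		/* value used by vacuumlazy.c */

/*
 * Count how many heap pages a VACUUM scan would actually read, given a
 * per-block all-visible flag.  A run of all-visible blocks is only
 * skipped when the whole run is at least SKIP_PAGES_THRESHOLD blocks
 * long; shorter runs are read anyway, on the theory that sequential
 * reads are cheap.  Simplified model, not the real implementation.
 */
static int
pages_to_scan(const bool *all_visible, int nblocks)
{
	int			scanned = 0;
	int			blkno = 0;

	while (blkno < nblocks)
	{
		if (all_visible[blkno])
		{
			/* measure the length of this all-visible run */
			int			run = 0;

			while (blkno + run < nblocks && all_visible[blkno + run])
				run++;
			if (run >= SKIP_PAGES_THRESHOLD)
			{
				blkno += run;	/* skip the whole run */
				continue;
			}
		}
		scanned++;				/* read this page */
		blkno++;
	}
	return scanned;
}
```

Under this model, a table whose all-visible pages only ever come in runs shorter than 32 blocks gets no skipping benefit at all, which is part of why the threshold draws suspicion.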
But if we want to get\nrid of it, we'll need to be sensitive to how that affects relfrozenxid\nadvancement in non-aggressive VACUUMs IMV.\n\nThanks again for the review!\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 25 Feb 2022 14:00:12 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-02-25 14:00:12 -0800, Peter Geoghegan wrote:\n> On Thu, Feb 24, 2022 at 11:14 PM Andres Freund <andres@anarazel.de> wrote:\n> > I am not a fan of the backstop terminology. It's still the reason we need to\n> > do freezing for correctness reasons.\n> \n> Thanks for the review!\n> \n> I'm not wedded to that particular terminology, but I think that we\n> need something like it. Open to suggestions.\n>\n> How about limit-based? Something like that?\n\nfreeze_required_limit, freeze_desired_limit? Or s/limit/cutoff/? Or\ns/limit/below/? I kind of like below because that answers < vs <= which I find\nhard to remember around freezing.\n\n\n> > I'm a tad concerned about replacing mxids that have some members that are\n> > older than OldestXmin but not older than FreezeLimit. It's not too hard to\n> > imagine that accelerating mxid consumption considerably. But we can probably,\n> > if not already done, special case that.\n> \n> Let's assume for a moment that this is a real problem. I'm not sure if\n> it is or not myself (it's complicated), but let's say that it is. The\n> problem may be more than offset by the positive impact on relminxmid\n> advancement. I have placed a large emphasis on enabling\n> relfrozenxid/relminxmid advancement in every non-aggressive VACUUM,\n> for a number of reasons -- this is one of the reasons. Finding a way\n> for every VACUUM operation to be \"vacrel->scanned_pages +\n> vacrel->frozenskipped_pages == orig_rel_pages\" (i.e. 
making *some*\n> amount of relfrozenxid/relminmxid advancement possible in every\n> VACUUM) has a great deal of value.\n\nThat may be true, but I think working more incrementally is better in this\narea. I'd rather have a smaller improvement for a release, collect some data,\nget another improvement in the next, than see a bunch of reports of large\nwins and large regressions.\n\n\n> As I said recently on the \"do only critical work during single-user\n> vacuum?\" thread, why should databases that consume too many MXIDs do\n> so evenly, across all their tables? There\n> are usually one or two large tables, and many more smaller tables. I\n> think it's much more likely that the largest tables consume\n> approximately zero MultiXactIds in these databases -- actual\n> MultiXactId consumption is probably concentrated in just one or two\n> smaller tables (even when we burn through MultiXacts very quickly).\n> But we don't recognize these kinds of distinctions at all right now.\n\nRecognizing those distinctions seems independent of freezing multixacts with\nlive members. I am happy with freezing them more aggressively if they don't\nhave live members. It's freezing mxids with live members that has me\nconcerned. The limits you're proposing are quite aggressive and can advance\nquickly.\n\nI've seen large tables with plenty of multixacts. Typically concentrated over a\nvalue range (often changing over time).\n\n\n> Under these conditions, we will have many more opportunities to\n> advance relminmxid for most of the tables (including the larger\n> tables) all the way up to current-oldestMxact with the patch series.\n> Without needing to freeze *any* MultiXacts early (just freezing some\n> XIDs early) to get that benefit. 
The patch series is not just about\n> spreading the burden of freezing, so that non-aggressive VACUUMs\n> freeze more -- it's also making relfrozenxid and relminmxid more\n> recent and therefore *reliable* indicators of which tables any\n> wraparound problems *really* are in.\n\nMy concern was explicitly about the case where we have to create new\nmultixacts...\n\n\n> Does that make sense to you?\n\nYes.\n\n\n> > > On HEAD, FreezeMultiXactId() doesn't get passed down the VACUUM operation's\n> > > OldestXmin at all (it actually just gets FreezeLimit passed as its\n> > > cutoff_xid argument). It cannot possibly recognize any of this for itself.\n> >\n> > It does recognize something like OldestXmin in a more precise and expensive\n> > way - MultiXactIdIsRunning() and TransactionIdIsCurrentTransactionId().\n> \n> It doesn't look that way to me.\n> \n> While it's true that FreezeMultiXactId() will call MultiXactIdIsRunning(),\n> that's only a cross-check.\n\n> This cross-check is made at a point where we've already determined that the\n> MultiXact in question is < cutoff_multi. In other words, it catches cases\n> where a \"MultiXactId < cutoff_multi\" Multi contains an XID *that's still\n> running* -- a correctness issue. Nothing to do with being smart about\n> avoiding allocating new MultiXacts during freezing, or exploiting the fact\n> that \"FreezeLimit < OldestXmin\" (which is almost always true, very true).\n\nIf there is <= 1 live member in a mxact, we replace it with a plain xid\niff the xid also would get frozen. With the current freezing logic I don't see\nwhat passing down OldestXmin would change. Or how it differs to a meaningful\ndegree from heap_prepare_freeze_tuple()'s logic. I don't see how it'd avoid a\nsingle new mxact from being allocated.\n\n\n\n> > > Caller should make temp\n> > > + * copies of global tracking variables before starting to process a page, so\n> > > + * that we can only scribble on copies. 
That way caller can just discard the\n> > > + * temp copies if it isn't okay with that assumption.\n> > > + *\n> > > + * Only aggressive VACUUM callers are expected to really care when a tuple\n> > > + * \"needs freezing\" according to us. It follows that non-aggressive VACUUMs\n> > > + * can use *relfrozenxid_nofreeze_out and *relminmxid_nofreeze_out in all\n> > > + * cases.\n> >\n> > Could it make sense to track can_freeze and need_freeze separately?\n> \n> You mean to change the signature of heap_tuple_needs_freeze, so it\n> doesn't return a bool anymore? It just has two bool pointers as\n> arguments, can_freeze and need_freeze?\n\nSomething like that. Or return true if there's anything to do, and then rely\non can_freeze and need_freeze for finer details. But it doesn't matter that much.\n\n\n> > I still suspect this will cause a very substantial increase in WAL traffic in\n> > realistic workloads. It's common to have workloads where tuples are inserted\n> > once, and deleted once/ partition dropped.\n> \n> I agree with the principle that this kind of use case should be\n> accommodated in some way.\n> \n> > I think we'll have to make this less aggressive or tunable. Random ideas for\n> > heuristics:\n> \n> The problem that all of these heuristics have is that they will tend\n> to make it impossible for future non-aggressive VACUUMs to be able to\n> advance relfrozenxid. All that it takes is one single all-visible page\n> to make that impossible. As I said upthread, I think that being able\n> to advance relfrozenxid (and especially relminmxid) by *some* amount\n> in every VACUUM has non-obvious value.\n\nI think that's a laudable goal. 
But I don't think we should go there unless we\nare quite confident we've mitigated the potential downsides.\n\nObserved horizons for "never vacuumed before" tables and for aggressive\nvacuums alone would be a huge win.\n\n\n> Maybe you can address that by changing the behavior of non-aggressive\n> VACUUMs, so that they are directly sensitive to this. Maybe they don't\n> skip any all-visible pages when there aren't too many, that kind of\n> thing. That needs to be in scope IMV.\n\nYea. I still like my idea to have vacuum process some all-visible pages\nevery time and to increase that percentage based on how old the relfrozenxid\nis.\n\nWe could slowly "refill" the number of all-visible pages VACUUM is allowed to\nprocess whenever dirtying a page for other reasons.\n\n\n\n> > I think this means my concern above about increasing mxid creation rate\n> > substantially may be warranted.\n> \n> Can you think of an adversarial workload, to get a sense of the extent\n> of the problem?\n\nI'll try to come up with something.\n\n\n> > FWIW, I'd really like to get rid of SKIP_PAGES_THRESHOLD. It often ends up\n> > spending a lot of time doing IO that we never need, completely trashing all CPU\n> > caches, while not actually causing decent readahead IO from what I've seen.\n> \n> I am also suspicious of SKIP_PAGES_THRESHOLD. But if we want to get\n> rid of it, we'll need to be sensitive to how that affects relfrozenxid\n> advancement in non-aggressive VACUUMs IMV.\n\nIt might make sense to separate the purposes of SKIP_PAGES_THRESHOLD. The\nrelfrozenxid advancement doesn't benefit from visiting all-frozen pages, just\nbecause there are only 30 of them in a row.\n\n\n> Thanks again for the review!\n\nNP, I think we need a lot of improvements in this area.\n\nI wish somebody would tackle merging heap_page_prune() with\nvacuuming. Primarily so we only do a single WAL record. But also because the\nseparation has caused a *lot* of complexity. 
I've already more projects than\nI should, otherwise I'd start on it...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 25 Feb 2022 15:26:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Fri, Feb 25, 2022 at 2:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Hm. I guess I'll have to look at the code for it. It doesn't immediately\n> > \"feel\" quite right.\n>\n> I kinda think it might be. Please let me know if you see a problem\n> with what I've said.\n\nOh, wait. I have a better idea of what you meant now. The loop towards\nthe end of FreezeMultiXactId() will indeed \"Determine whether to keep\nthis member or ignore it.\" when we need a new MultiXactId. The loop is\nexact in the sense that it will only include those XIDs that are truly\nneeded -- those that are still running.\n\nBut why should we ever get to the FreezeMultiXactId() loop with the\nstuff from 0002 in place? The whole purpose of the loop is to handle\ncases where we have to remove *some* (not all) XIDs from before\ncutoff_xid that appear in a MultiXact, which requires careful checking\nof each XID (this is only possible when the MultiXactId is <\ncutoff_multi to begin with, which is OldestMxact in the patch, which\nis presumably very recent).\n\nIt's not impossible that we'll get some number of \"skewed MultiXacts\"\nwith the patch -- cases that really do necessitate allocating a new\nMultiXact, just to \"freeze some XIDs from a MultiXact\". That is, there\nwill sometimes be some number of XIDs that are < OldestXmin, but\nnevertheless appear in some MultiXactIds >= OldestMxact. This seems\nlikely to be rare with the patch, though, since VACUUM calculates its\nOldestXmin and OldestMxact (which are what cutoff_xid and cutoff_multi\nreally are in the patch) at the same point in time. 
Which was the\npoint I made in my email yesterday.\n\nHow many of these \"skewed MultiXacts\" can we really expect? Seems like\nthere might be very few in practice. But I'm really not sure about\nthat.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 25 Feb 2022 15:28:17 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-02-25 15:28:17 -0800, Peter Geoghegan wrote:\n> But why should we ever get to the FreezeMultiXactId() loop with the\n> stuff from 0002 in place? The whole purpose of the loop is to handle\n> cases where we have to remove *some* (not all) XIDs from before\n> cutoff_xid that appear in a MultiXact, which requires careful checking\n> of each XID (this is only possible when the MultiXactId is <\n> cutoff_multi to begin with, which is OldestMxact in the patch, which\n> is presumably very recent).\n> \n> It's not impossible that we'll get some number of \"skewed MultiXacts\"\n> with the patch -- cases that really do necessitate allocating a new\n> MultiXact, just to \"freeze some XIDs from a MultiXact\". That is, there\n> will sometimes be some number of XIDs that are < OldestXmin, but\n> nevertheless appear in some MultiXactIds >= OldestMxact. This seems\n> likely to be rare with the patch, though, since VACUUM calculates its\n> OldestXmin and OldestMxact (which are what cutoff_xid and cutoff_multi\n> really are in the patch) at the same point in time. Which was the\n> point I made in my email yesterday.\n\nI don't see why it matters that OldestXmin and OldestMxact are computed at the\nsame time? It's a question of the workload, not vacuum algorithm.\n\nOldestMxact inherently lags OldestXmin. 
OldestMxact can only advance after all\nmembers are older than OldestXmin (not quite true, but that's the bound), and\nthey always have more than one member.\n\n\n> How many of these "skewed MultiXacts" can we really expect?\n\nI don't think they're skewed in any way. It's a fundamental aspect of\nmultixacts.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 25 Feb 2022 15:48:54 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Fri, Feb 25, 2022 at 3:48 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't see why it matters that OldestXmin and OldestMxact are computed at the\n> same time? It's a question of the workload, not vacuum algorithm.\n\nI think it's both.\n\n> OldestMxact inherently lags OldestXmin. OldestMxact can only advance after all\n> members are older than OldestXmin (not quite true, but that's the bound), and\n> they always have more than one member.\n>\n>\n> > How many of these "skewed MultiXacts" can we really expect?\n>\n> I don't think they're skewed in any way. It's a fundamental aspect of\n> multixacts.\n\nHaving this happen to some degree is fundamental to MultiXacts, sure.\nBut it also seems like the approach of using FreezeLimit and\nMultiXactCutoff in the way that we do right now might\nmake the problem a lot worse. Because they're completely meaningless\ncutoffs. They are magic numbers that have no relationship whatsoever\nto each other.\n\nThere are problems with assuming that OldestXmin and OldestMxact\n"align" -- no question. But at least it's approximately true -- which\nis a start. They are at least not arbitrarily, unpredictably\ndifferent, like FreezeLimit and MultiXactCutoff are, and always will\nbe. 
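To make the "magic numbers" point concrete, here is a deliberately simplified C sketch (the helper names are invented for illustration; the real logic lives in vacuum_set_xid_limits() and handles wraparound arithmetic and clamping that are ignored here). Each freeze cutoff is derived by subtracting an independent GUC from an independently advancing counter, so nothing ties FreezeLimit to MultiXactCutoff -- unlike OldestXmin and OldestMxact, which are read off at the same instant:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;
typedef uint32_t MultiXactId;

/*
 * Toy model only: each cutoff comes from its own counter and its own GUC.
 * FreezeLimit is based on the XID counter and vacuum_freeze_min_age, while
 * MultiXactCutoff is based on the MultiXactId counter and
 * vacuum_multixact_freeze_min_age -- two values that advance at completely
 * unrelated rates.
 */
static TransactionId
toy_freeze_limit(TransactionId oldest_xmin, uint32_t freeze_min_age)
{
    /* Real code uses wraparound-aware arithmetic; we just clamp at 0 */
    return (oldest_xmin > freeze_min_age) ? oldest_xmin - freeze_min_age : 0;
}

static MultiXactId
toy_multixact_cutoff(MultiXactId oldest_mxact, uint32_t mxid_freeze_min_age)
{
    return (oldest_mxact > mxid_freeze_min_age) ? oldest_mxact - mxid_freeze_min_age : 0;
}
```

With the stock defaults (vacuum_freeze_min_age = 50 million, vacuum_multixact_freeze_min_age = 5 million) the two cutoffs move completely independently of each other.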
I think that that's a meaningful and useful distinction.\n\nI am okay with making the most pessimistic possible assumptions about\nhow any changes to how we freeze might cause FreezeMultiXactId() to\nallocate more MultiXacts than before. And I accept that the patch\nseries shouldn't "get credit" for "offsetting" any problem like that\nby making relminmxid advancement occur much more frequently (even\nthough that does seem very valuable). All I'm really saying is this:\nin general, there are probably quite a few opportunities for\nFreezeMultiXactId() to avoid allocating new MXIDs (just to freeze\nXIDs) by having the full context. And maybe by making the dialog\nbetween lazy_scan_prune and heap_prepare_freeze_tuple a bit more\nnuanced.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 25 Feb 2022 16:09:31 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 25, 2022 at 3:26 PM Andres Freund <andres@anarazel.de> wrote:\n> freeze_required_limit, freeze_desired_limit? Or s/limit/cutoff/? Or\n> s/limit/below/? I kind of like below because that answers < vs <= which I find\n> hard to remember around freezing.\n\nI like freeze_required_limit the most.\n\n> That may be true, but I think working more incrementally is better in this\n> area. I'd rather have a smaller improvement for a release, collect some data,\n> get another improvement in the next, than see a bunch of reports of large\n> wins and large regressions.\n\nI agree.\n\nThere is an important practical way in which it makes sense to treat\n0001 as separate to 0002. It is true that 0001 is independently quite\nuseful. In practical terms, I'd be quite happy to just get 0001 into\nPostgres 15, without 0002. 
I think that that's what you meant here, in\nconcrete terms, and we can agree on that now.\n\nHowever, it is *also* true that there is an important practical sense\nin which they *are* related. I don't want to ignore that either -- it\ndoes matter. Most of the value to be had here comes from the synergy\nbetween 0001 and 0002 -- or what I've been calling a \"virtuous cycle\",\nthe thing that makes it possible to advance relfrozenxid/relminmxid in\nalmost every VACUUM. Having both 0001 and 0002 together (or something\nalong the same lines) is way more valuable than having just one.\n\nPerhaps we can even agree on this second point. I am encouraged by the\nfact that you at least recognize the general validity of the key ideas\nfrom 0002. If I am going to commit 0001 (and not 0002) ahead of\nfeature freeze for 15, I better be pretty sure that I have at least\nroughly the right idea with 0002, too -- since that's the direction\nthat 0001 is going in. It almost seems dishonest to pretend that I\nwasn't thinking of 0002 when I wrote 0001.\n\nI'm glad that you seem to agree that this business of accumulating\nfreezing debt without any natural limit is just not okay. That is\nreally fundamental to me. I mean, vacuum_freeze_min_age kind of\ndoesn't work as designed. This is a huge problem for us.\n\n> > Under these conditions, we will have many more opportunities to\n> > advance relminmxid for most of the tables (including the larger\n> > tables) all the way up to current-oldestMxact with the patch series.\n> > Without needing to freeze *any* MultiXacts early (just freezing some\n> > XIDs early) to get that benefit. 
The patch series is not just about\n> > spreading the burden of freezing, so that non-aggressive VACUUMs\n> > freeze more -- it's also making relfrozenxid and relminmxid more\n> > recent and therefore *reliable* indicators of which tables any\n> > wraparound problems *really* are.\n>\n> My concern was explicitly about the case where we have to create new\n> multixacts...\n\nIt was a mistake on my part to counter your point about that with this\nother point about eager relminmxid advancement. As I said in the last\nemail, while that is very valuable, it's not something that needs to\nbe brought into this.\n\n> > Does that make sense to you?\n>\n> Yes.\n\nOkay, great. The fact that you recognize the value in that comes as a relief.\n\n> > You mean to change the signature of heap_tuple_needs_freeze, so it\n> > doesn't return a bool anymore? It just has two bool pointers as\n> > arguments, can_freeze and need_freeze?\n>\n> Something like that. Or return true if there's anything to do, and then rely\n> on can_freeze and need_freeze for finer details. But it doesn't matter that much.\n\nGot it.\n\n> > The problem that all of these heuristics have is that they will tend\n> > to make it impossible for future non-aggressive VACUUMs to be able to\n> > advance relfrozenxid. All that it takes is one single all-visible page\n> > to make that impossible. As I said upthread, I think that being able\n> > to advance relfrozenxid (and especially relminmxid) by *some* amount\n> > in every VACUUM has non-obvious value.\n>\n> I think that's a laudable goal. But I don't think we should go there unless we\n> are quite confident we've mitigated the potential downsides.\n\nTrue. But that works both ways. We also shouldn't err in the direction\nof adding these kinds of heuristics (which have real downsides) until\nthe idea of mostly swallowing the cost of freezing whole pages (while\nmaking it possible to disable) has had a fair chance and lost. 
Overall, it looks\nlike the cost is acceptable in most cases.\n\nI think that users will find it very reassuring to regularly and\nreliably see confirmation that wraparound is being kept at bay, by\nevery VACUUM operation, with details that they can relate to their\nworkload. That has real value IMV -- even when it's theoretically\nunnecessary for us to be so eager with advancing relfrozenxid.\n\nI really don't like the idea of falling behind on freezing\nsystematically. You always run the \"risk\" of freezing being wasted.\nBut that way of looking at it can be penny wise, pound foolish --\nmaybe we should just accept that trying to predict what will happen in\nthe future (whether or not freezing will be worth it) is mostly not\nhelpful. Our users mostly complain about performance stability these\ndays. Big shocks are really something we ought to avoid. That does\nhave a cost. Why wouldn't it?\n\n> > Maybe you can address that by changing the behavior of non-aggressive\n> > VACUUMs, so that they are directly sensitive to this. Maybe they don't\n> > skip any all-visible pages when there aren't too many, that kind of\n> > thing. That needs to be in scope IMV.\n>\n> Yea. I still like my idea to have vacuum process a some all-visible pages\n> every time and to increase that percentage based on how old the relfrozenxid\n> is.\n\nYou can quite easily construct cases where the patch does much better\nthan that, though -- very believable cases. Any table like\npgbench_history. And so I lean towards quantifying the cost of\npage-level freezing carefully, making sure there is nothing\npathological, and then just accepting it (with a GUC to disable). The\nreality is that freezing is really a cost of storing data in Postgres,\nand will be for the foreseeable future.\n\n> > Can you think of an adversarial workload, to get a sense of the extent\n> > of the problem?\n>\n> I'll try to come up with something.\n\nThat would be very helpful. 
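In the meantime, the allocation hazard under discussion can be sketched in C (a toy model with invented names: plain integer XIDs, no wraparound, and "live" approximated as >= OldestXmin). A brand new MultiXact only has to be allocated during freezing when an existing one still has two or more live members *and* at least one removable member:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/*
 * Toy model of the FreezeMultiXactId() decision discussed upthread:
 *   0 live members  -> the mxact can simply be removed
 *   1 live member   -> it can be replaced with that plain xid
 *   all members live -> keep the existing mxact as-is
 *   otherwise        -> freezing is forced to allocate a new mxact that
 *                       carries just the live members
 */
static bool
toy_freeze_needs_new_mxact(const TransactionId *members, size_t nmembers,
                           TransactionId oldest_xmin)
{
    size_t nlive = 0;

    for (size_t i = 0; i < nmembers; i++)
    {
        if (members[i] >= oldest_xmin)
            nlive++;
    }
    return nlive >= 2 && nlive < nmembers;
}
```

An adversarial workload, then, would be one that keeps many multixacts in this mixed state: some members old enough to remove, two or more still running.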
Thanks!\n\n> It might make sense to separate the purposes of SKIP_PAGES_THRESHOLD. The\n> relfrozenxid advancement doesn't benefit from visiting all-frozen pages, just\n> because there are only 30 of them in a row.\n\nRight. I imagine that SKIP_PAGES_THRESHOLD actually does help with\nthis, but if we actually tried we'd find a much better way.\n\n> I wish somebody would tackle merging heap_page_prune() with\n> vacuuming. Primarily so we only do a single WAL record. But also because the\n> separation has caused a *lot* of complexity. I've already more projects than\n> I should, otherwise I'd start on it...\n\nThat has value, but it doesn't feel as urgent.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 25 Feb 2022 17:52:48 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Sun, Feb 20, 2022 at 3:27 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I think that the idea has potential, but I don't think that I\n> > understand yet what the *exact* algorithm is.\n>\n> The algorithm seems to exploit a natural tendency that Andres once\n> described in a blog post about his snapshot scalability work [1]. To a\n> surprising extent, we can usefully bucket all tuples/pages into two\n> simple categories:\n>\n> 1. Very, very old (\"infinitely old\" for all practical purposes).\n>\n> 2. Very very new.\n>\n> There doesn't seem to be much need for a third \"in-between\" category\n> in practice. This seems to be at least approximately true all of the\n> time.\n>\n> Perhaps Andres wouldn't agree with this very general statement -- he\n> actually said something more specific. I for one believe that the\n> point he made generalizes surprisingly well, though. I have my own\n> theories about why this appears to be true. 
(Executive summary: power\n> laws are weird, and it seems as if the sparsity-of-effects principle\n> makes it easy to bucket things at the highest level, in a way that\n> generalizes well across disparate workloads.)\n\nI think that this is not really a description of an algorithm -- and I\nthink that it is far from clear that the third \"in-between\" category\ndoes not need to exist.\n\n> Remember when I got excited about how my big TPC-C benchmark run\n> showed a predictable, tick/tock style pattern across VACUUM operations\n> against the order and order lines table [2]? It seemed very\n> significant to me that the OldestXmin of VACUUM operation n\n> consistently went on to become the new relfrozenxid for the same table\n> in VACUUM operation n + 1. It wasn't exactly the same XID, but very\n> close to it (within the range of noise). This pattern was clearly\n> present, even though VACUUM operation n + 1 might happen as long as 4\n> or 5 hours after VACUUM operation n (this was a big table).\n\nI think findings like this are very unconvincing. TPC-C (or any\nbenchmark really) is so simple as to be a terrible proxy for what\nvacuuming is going to look like on real-world systems. Like, it's nice\nthat it works, and it shows that something's working, but it doesn't\ndemonstrate that the patch is making the right trade-offs overall.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 1 Mar 2022 16:46:40 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Tue, Mar 1, 2022 at 1:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think that this is not really a description of an algorithm -- and I\n> think that it is far from clear that the third \"in-between\" category\n> does not need to exist.\n\nBut I already described the algorithm. 
It is very simple\nmechanistically -- though that in itself means very little. As I have\nsaid multiple times now, the hard part is assessing what the\nimplications are. And the even harder part is making a judgement about\nwhether or not those implications are what we generally want.\n\n> I think findings like this are very unconvincing.\n\nTPC-C may be unrealistic in certain ways, but it is nevertheless\nvastly more realistic than pgbench. pgbench is really more of a stress\ntest than a benchmark.\n\nThe main reasons why TPC-C is interesting here are *very* simple, and\nwould likely be equally true with TPC-E (just for example) -- even\nthough TPC-E is a very different kind of OLTP workload\noverall. TPC-C (like TPC-E) features a diversity of transaction types,\nsome of which are more complicated than others -- which is strictly\nmore realistic than having only one highly synthetic OLTP transaction\ntype. Each transaction type doesn't necessarily modify the same tables\nin the same way. This leads to natural diversity among tables and\namong transactions, including:\n\n* The typical or average number of distinct XIDs per heap page varies\nsignificantly among each table. There are way fewer distinct XIDs per\n"order line" table heap page than there are per "order" table heap\npage, for the obvious reason.\n\n* Roughly speaking, there are various different ways that free space\nmanagement ought to work in a system like Postgres. For example it is\nnecessary to make a "fragmentation vs space utilization" trade-off\nwith the new orders table.\n\n* There are joins in some of the transactions!\n\nMaybe TPC-C is a crude approximation of reality, but it nevertheless\nexercises relevant parts of the system to a significant degree. What\nelse would you expect me to use, for a project like this? 
To a\nsignificant degree the relfrozenxid tracking stuff is interesting\nbecause tables tend to have natural differences like the ones I have\nhighlighted on this thread. How could that not be the case? Why\nwouldn't we want to take advantage of that?\n\nThere might be some danger in over-optimizing for this particular\nbenchmark, but right now that is so far from being the main problem\nthat the idea seems strange to me. pgbench doesn't need the FSM, at\nall. In fact pgbench doesn't even really need VACUUM (except for\nantiwraparound), once heap fillfactor is lowered to 95 or so. pgbench\nsimply isn't relevant, *at all*, except perhaps as a way of measuring\nregressions in certain synthetic cases that don't benefit.\n\n> TPC-C (or any\n> benchmark really) is so simple as to be a terrible proxy for what\n> vacuuming is going to look like on real-world systems.\n\nDoesn't that amount to \"no amount of any kind of testing or\nbenchmarking will convince me of anything, ever\"?\n\nThere is more than one type of real-world system. I think that TPC-C\nis representative of some real world systems in some regards. But even\nthat's not the important point for me. I find TPC-C generally\ninteresting for one reason: I can clearly see that Postgres does\nthings in a way that just doesn't make much sense, which isn't\nparticularly fundamental to how VACUUM works.\n\nMy only long term goal is to teach Postgres to *avoid* various\npathological cases exhibited by TPC-C (e.g., the B-Tree \"split after\nnew tuple\" mechanism from commit f21668f328 *avoids* a pathological\ncase from TPC-C). We don't necessarily have to agree on how important\neach individual case is \"in the real world\" (which is impossible to\nknow anyway). 
We only have to agree that what we see is a pathological\ncase (because some reasonable expectation is dramatically violated),\nand then work out a fix.\n\nI don't want to teach Postgres to be clever -- I want to teach it to\navoid being stupid in cases where it exhibits behavior that really\ncannot be described any other way. You seem to talk about some of this\nwork as if it was just as likely to have a detrimental effect\nelsewhere, for some equally plausible workload, which will have a\ndownside that is roughly as bad as the advertised upside. I consider\nthat very unlikely, though. Sure, regressions are quite possible, and\na real concern -- but regressions *like that* are unlikely. Avoiding\ndoing what is clearly the wrong thing just seems to work out that way,\nin general.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 1 Mar 2022 14:59:36 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Feb 25, 2022 at 5:52 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> There is an important practical way in which it makes sense to treat\n> 0001 as separate to 0002. It is true that 0001 is independently quite\n> useful. In practical terms, I'd be quite happy to just get 0001 into\n> Postgres 15, without 0002. I think that that's what you meant here, in\n> concrete terms, and we can agree on that now.\n\nAttached is v10. While this does still include the freezing patch,\nit's not in scope for Postgres 15. As I've said, I still think that it\nmakes sense to maintain the patch series with the freezing stuff,\nsince it's structurally related. So, to be clear, the first two\npatches from the patch series are in scope for Postgres 15. But not\nthe third.\n\nHighlights:\n\n* Changes to terminology and commit messages along the lines suggested\nby Andres.\n\n* Bug fixes to heap_tuple_needs_freeze()'s MultiXact handling. 
My\ntesting strategy here still needs work.\n\n* Expanded refactoring by v10-0002 patch.\n\nThe v10-0002 patch (which appeared for the first time in v9) was\noriginally all about fixing a case where non-aggressive VACUUMs were\nat a gratuitous disadvantage (relative to aggressive VACUUMs) around\nadvancing relfrozenxid -- very much like the lazy_scan_noprune work\nfrom commit 44fa8488. And that is still its main purpose. But the\nrefactoring now seems related to Andres' idea of making non-aggressive\nVACUUMs decide to scan a few extra all-visible pages in order to be\nable to advance relfrozenxid.\n\nThe code that sets up skipping the visibility map is made a lot\nclearer by v10-0002. That patch moves a significant amount of code\nfrom lazy_scan_heap() into a new helper routine (so it continues the\ntrend started by the Postgres 14 work that added lazy_scan_prune()).\nNow skipping a range of visibility map pages is fundamentally based on\nsetting up the range up front, and then using the same saved details\nabout the range thereafter -- we don't have any more ad-hoc\nVM_ALL_VISIBLE()/VM_ALL_FROZEN() calls for pages from a range that we\nalready decided to skip (so no calls to those routines from\nlazy_scan_heap(), at least not until after we finish processing in\nlazy_scan_prune()).\n\nThis is more or less what we were doing all along for one special\ncase: aggressive VACUUMs. We had to make sure to either increment\nfrozenskipped_pages or increment scanned_pages for every page from\nrel_pages -- this issue is described by lazy_scan_heap() comments on\nHEAD that begin with "Tricky, tricky." (these date back to the freeze\nmap work from 2016). Anyway, there is no reason to not go further with\nthat: we should make whole ranges the basic unit that we deal with\nwhen skipping. 
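As a rough illustration (not the patch's actual code -- the names are invented, and only the all-visible bit is modeled, not all-frozen or aggressiveness), a range-at-a-time helper might look like this in C, with the caller skipping the returned run only when it is long enough:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t BlockNumber;

/* vacuumlazy.c's SKIP_PAGES_THRESHOLD is 32 consecutive pages on HEAD */
#define TOY_SKIP_PAGES_THRESHOLD 32

/*
 * Toy version of the "decide the whole skippable range up front" idea.
 * Starting at next_block, measure the run of consecutive all-visible pages.
 * Return the run length when it is worth skipping, else 0 (meaning the
 * caller should just scan those pages normally).
 */
static BlockNumber
toy_skippable_run(const bool *all_visible, BlockNumber rel_pages,
                  BlockNumber next_block)
{
    BlockNumber run = 0;

    while (next_block + run < rel_pages && all_visible[next_block + run])
        run++;
    return (run >= TOY_SKIP_PAGES_THRESHOLD) ? run : 0;
}
```

The point is that the whole run is decided once, up front; the per-page visibility map rechecks inside the main loop go away.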
It's a lot simpler to think in terms of entire ranges\n(not individual pages) that are determined to be all-visible or\nall-frozen up-front, without needing to recheck anything (regardless\nof whether it's an aggressive VACUUM).\n\nWe don't need to track frozenskipped_pages this way. And it's much\nmore obvious that it's safe for more complicated cases, in particular\nfor aggressive VACUUMs.\n\nThis kind of approach seems necessary to make non-aggressive VACUUMs\ndo a little more work opportunistically, when they realize that they\ncan advance relfrozenxid relatively easily that way (which I believe\nAndres favors as part of overhauling freezing). That becomes a lot\nmore natural when you have a clear and unambiguous separation between\ndeciding what range of blocks to skip, and then actually skipping. I\ncan imagine the new helper function added by v10-0002 (which I've\ncalled lazy_scan_skip_range()) eventually being taught to do these\nkinds of tricks.\n\nIn general I think that all of the details of what to skip need to be\ndecided up front. The loop in lazy_scan_heap() should execute skipping\nbased on the instructions it receives from the new helper function, in\nthe simplest way possible. The helper function can become more\nintelligent about the costs and benefits of skipping in the future,\nwithout that impacting lazy_scan_heap().\n\n--\nPeter Geoghegan", "msg_date": "Sun, 13 Mar 2022 21:05:31 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Sun, Mar 13, 2022 at 9:05 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v10. While this does still include the freezing patch,\n> it's not in scope for Postgres 15. As I've said, I still think that it\n> makes sense to maintain the patch series with the freezing stuff,\n> since it's structurally related.\n\nAttached is v11. 
Changes:\n\n* No longer includes the patch that adds page-level freezing. It was\nmaking it harder to assess code coverage for the patches that I'm\ntargeting Postgres 15 with. And so including it with each new revision\nno longer seems useful. I'll pick it up for Postgres 16.\n\n* Extensive isolation tests added to v11-0001-*, exercising a lot of\nhard-to-hit code paths that are reached when VACUUM is unable to\nimmediately acquire a cleanup lock on some heap page. In particular,\nwe now have test coverage for the code in heapam.c that handles\ntracking the oldest extant XID and MXID in the presence of MultiXacts\n(on a no-cleanup-lock heap page).\n\n* v11-0002-* (which is the patch that avoids missing out on advancing\nrelfrozenxid in non-aggressive VACUUMs due to a race condition on\nHEAD) now moves even more of the logic for deciding how VACUUM will\nskip using the visibility map into its own helper routine. Now\nlazy_scan_heap just does what the state returned by the helper routine\ntells it about the current skippable range -- it doesn't make any\ndecisions itself anymore. This is far simpler than what we do\ncurrently, on HEAD.\n\nThere are no behavioral changes here, but this approach could be\npushed further to improve performance. We could easily determine\n*every* page that we're going to scan (not skip) up-front in even the\nlargest tables, very early, before we've even scanned one page. This\ncould enable things like I/O prefetching, or capping the size of the\ndead_items array based on our final scanned_pages (not on rel_pages).\n\n* A new patch (v11-0003-*) alters the behavior of VACUUM's\nDISABLE_PAGE_SKIPPING option. DISABLE_PAGE_SKIPPING no longer forces\naggressive VACUUM -- now it only forces the use of the visibility map,\nsince that behavior is totally independent of aggressiveness.\n\nI don't feel too strongly about the DISABLE_PAGE_SKIPPING change. 
It\njust seems logical to decouple no-vm-skipping from aggressiveness --\nit might actually be helpful in testing the work from the patch series\nin the future. Any page counted in scanned_pages has essentially been\nprocessed by VACUUM with this work in place -- that was the idea\nbehind the lazy_scan_noprune stuff from commit 44fa8488. Bear in mind\nthat the relfrozenxid tracking stuff from v11-0001-* makes it almost\ncertain that a DISABLE_PAGE_SKIPPING-without-aggressiveness VACUUM\nwill still manage to advance relfrozenxid -- usually by the same\namount as an equivalent aggressive VACUUM would anyway. (Failing to\nacquire a cleanup lock on some heap page might result in the final\nrelfrozenxid being appreciably older, but probably not, and we'd\nstill almost certainly manage to advance relfrozenxid by *some* small\namount.)\n\nOf course, anybody that wants both an aggressive VACUUM and a VACUUM\nthat never skips even all-frozen pages in the visibility map will\nstill be able to get that behavior quite easily. For example,\nVACUUM(DISABLE_PAGE_SKIPPING, FREEZE) will do that. 
Several of our\nexisting tests must already use both of these options together,\nbecause the tests require an effective vacuum_freeze_min_age of 0 (and\nvacuum_multixact_freeze_min_age of 0) -- DISABLE_PAGE_SKIPPING alone\nwon't do that on HEAD, which seems to confuse the issue (see commit\nb700f96c for an example of that).\n\nIn other words, since DISABLE_PAGE_SKIPPING doesn't *consistently*\nforce lazy_scan_noprune to refuse to process a page on HEAD (it all\ndepends on FreezeLimit/vacuum_freeze_min_age), it is logical for\nDISABLE_PAGE_SKIPPING to totally get out of the business of caring\nabout that -- better to limit it to caring only about the visibility\nmap (by no longer making it force aggressiveness).\n\n-- \nPeter Geoghegan", "msg_date": "Wed, 23 Mar 2022 12:59:01 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Wed, Mar 23, 2022 at 3:59 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> In other words, since DISABLE_PAGE_SKIPPING doesn't *consistently*\n> force lazy_scan_noprune to refuse to process a page on HEAD (it all\n> depends on FreezeLimit/vacuum_freeze_min_age), it is logical for\n> DISABLE_PAGE_SKIPPING to totally get out of the business of caring\n> about that -- better to limit it to caring only about the visibility\n> map (by no longer making it force aggressiveness).\n\nIt seems to me that if DISABLE_PAGE_SKIPPING doesn't completely\ndisable skipping pages, we have a problem.\n\nThe option isn't named CARE_ABOUT_VISIBILITY_MAP. 
It's named\nDISABLE_PAGE_SKIPPING.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Mar 2022 16:41:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Wed, Mar 23, 2022 at 1:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> It seems to me that if DISABLE_PAGE_SKIPPING doesn't completely\n> disable skipping pages, we have a problem.\n\nIt depends on how you define skipping. DISABLE_PAGE_SKIPPING was\ncreated at a time when a broader definition of skipping made a lot\nmore sense.\n\n> The option isn't named CARE_ABOUT_VISIBILITY_MAP. It's named\n> DISABLE_PAGE_SKIPPING.\n\nVACUUM(DISABLE_PAGE_SKIPPING, VERBOSE) will still consistently show\nthat 100% of all of the pages from rel_pages are scanned. A page that\nis \"skipped\" by lazy_scan_noprune isn't pruned, and won't have any of\nits tuples frozen. But every other aspect of processing the page\nhappens in just the same way as it would in the cleanup\nlock/lazy_scan_prune path.\n\nWe'll even still VACUUM the page if it happens to have some existing\nLP_DEAD items left behind by opportunistic pruning. We don't need a\ncleanup in either lazy_scan_noprune (a share lock is all we need), nor\ndo we even need one in lazy_vacuum_heap_page (a regular exclusive lock\nis all we need).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 23 Mar 2022 13:49:09 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Wed, Mar 23, 2022 at 4:49 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Wed, Mar 23, 2022 at 1:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > It seems to me that if DISABLE_PAGE_SKIPPING doesn't completely\n> > disable skipping pages, we have a problem.\n>\n> It depends on how you define skipping. 
DISABLE_PAGE_SKIPPING was\n> created at a time when a broader definition of skipping made a lot\n> more sense.\n>\n> > The option isn't named CARE_ABOUT_VISIBILITY_MAP. It's named\n> > DISABLE_PAGE_SKIPPING.\n>\n> VACUUM(DISABLE_PAGE_SKIPPING, VERBOSE) will still consistently show\n> that 100% of all of the pages from rel_pages are scanned. A page that\n> is \"skipped\" by lazy_scan_noprune isn't pruned, and won't have any of\n> its tuples frozen. But every other aspect of processing the page\n> happens in just the same way as it would in the cleanup\n> lock/lazy_scan_prune path.\n\nI see what you mean about it depending on how you define \"skipping\".\nBut I think that DISABLE_PAGE_SKIPPING is intended as a sort of\nemergency safeguard when you really, really don't want to leave\nanything out. And therefore I favor defining it to mean that we don't\nskip any work at all.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Mar 2022 16:53:13 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Wed, Mar 23, 2022 at 1:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I see what you mean about it depending on how you define \"skipping\".\n> But I think that DISABLE_PAGE_SKIPPING is intended as a sort of\n> emergency safeguard when you really, really don't want to leave\n> anything out.\n\nI agree.\n\n> And therefore I favor defining it to mean that we don't\n> skip any work at all.\n\nBut even today DISABLE_PAGE_SKIPPING won't do pruning when we cannot\nacquire a cleanup lock on a page, unless it happens to have XIDs from\nbefore FreezeLimit (which is probably 50 million XIDs behind\nOldestXmin, the vacuum_freeze_min_age default). I don't see much\ndifference.\n\nAnyway, this isn't important. 
I'll just drop the third patch.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 23 Mar 2022 13:58:31 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Mar 24, 2022 at 9:59 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Wed, Mar 23, 2022 at 1:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > And therefore I favor defining it to mean that we don't\n> > skip any work at all.\n>\n> But even today DISABLE_PAGE_SKIPPING won't do pruning when we cannot\n> acquire a cleanup lock on a page, unless it happens to have XIDs from\n> before FreezeLimit (which is probably 50 million XIDs behind\n> OldestXmin, the vacuum_freeze_min_age default). I don't see much\n> difference.\n\nYeah, I found it confusing that DISABLE_PAGE_SKIPPING doesn't disable\nall page skipping, so 3414099c turned out to be not enough.\n\n\n", "msg_date": "Thu, 24 Mar 2022 10:02:48 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Wed, Mar 23, 2022 at 2:03 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Yeah, I found it confusing that DISABLE_PAGE_SKIPPING doesn't disable\n> all page skipping, so 3414099c turned out to be not enough.\n\nThe proposed change to DISABLE_PAGE_SKIPPING is partly driven by that,\nand partly driven by a similar concern about aggressive VACUUM.\n\nIt seems worth emphasizing the idea that an aggressive VACUUM is now\njust the same as any other VACUUM except for one detail: we're\nguaranteed to advance relfrozenxid to a value >= FreezeLimit at the\nend. The non-aggressive case has the choice to do things that make\nthat impossible. But there are only two places where this can happen now:\n\n1. 
Non-aggressive VACUUMs might decide to skip some all-visible pages in\nthe new lazy_scan_skip() helper routine for skipping with the VM (see\nv11-0002-*).\n\n2. A non-aggressive VACUUM can *always* decide to ratchet back its\ntarget relfrozenxid in lazy_scan_noprune, to avoid waiting for a\ncleanup lock -- a final value from before FreezeLimit is usually still\npretty good.\n\nThe first scenario is the only one where it becomes impossible for\nnon-aggressive VACUUM to be able to advance relfrozenxid (with\nv11-0001-* in place) by any amount. Even that's a choice, made by\nweighing costs against benefits.\n\nThere is no behavioral change in v11-0002-* (we're still using the\nold SKIP_PAGES_THRESHOLD strategy), but the lazy_scan_skip()\nhelper routine could fairly easily be taught a lot more about the\ndownside of skipping all-visible pages (namely how that makes it\nimpossible to advance relfrozenxid).\n\nMaybe it's worth skipping all-visible pages (there are lots of them\nand age(relfrozenxid) is still low), and maybe it isn't worth it. We\nshould get to decide, without implementation details making\nrelfrozenxid advancement unsafe.\n\nIt would be great if you could take a look v11-0002-*, Robert. Does it\nmake sense to you?\n\nThanks\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 23 Mar 2022 15:28:05 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Wed, Mar 23, 2022 at 6:28 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> It would be great if you could take a look v11-0002-*, Robert. Does it\n> make sense to you?\n\nYou're probably not going to love hearing this, but I think you're\nstill explaining things here in ways that are too baroque and hard to\nfollow. I do think it's probably better. 
But, for example, in the\ncommit message for 0001, I think you could change the subject line to\n\"Allow non-aggressive vacuums to advance relfrozenxid\" and it would be\nclearer. And then I think you could eliminate about half of the first\nparagraph, starting with \"There is no fixed relationship\", and all of\nthe third paragraph (which starts with \"Later work...\"), and I think\nremoving all that material would make it strictly more clear than it\nis currently. I don't think it's the place of a commit message to\nspeculate too much on future directions or to wax eloquent on\ntheoretical points. If that belongs anywhere, it's in a mailing list\ndiscussion.\n\nIt seems to me that 0002 mixes code movement with functional changes.\nI'm completely on board with moving the code that decides how much to\nskip into a function. That seems like a great idea, and probably\noverdue. But it is not easy for me to see what has changed\nfunctionally between the old and new code organization, and I bet it\nwould be possible to split this into two patches, one of which creates\na function, and the other of which fixes the problem, and I think that\nwould be a useful service to future readers of the code. I have a hard\ntime believing that if someone in the future bisects a problem back to\nthis commit, they're going to have an easy time finding the behavior\nchange in here. In fact I can't see it myself. I think the actual\nfunctional change is to fix what is described in the second paragraph\nof the commit message, but I haven't been able to figure out where the\nlogic is actually changing to address that. 
Note that I would be happy\nwith the behavior change happening either before or after the code\nreorganization.\n\nI also think that the commit message for 0002 is probably longer and\nmore complex than is really helpful, and that the subject line is too\nvague, but since I don't yet understand exactly what's happening here,\nI cannot comment on how I think it should be revised at this point,\nexcept to say that the second paragraph of that commit message looks\nlike the most useful part.\n\nI would also like to mention a few things that I do like about 0002.\nOne is that it seems to collapse two different pieces of logic for\npage skipping into one. That seems good. As mentioned, it's especially\ngood because that logic is abstracted into a function. Also, it looks\nlike it is making a pretty localized change to one (1) aspect of what\nVACUUM does -- and I definitely prefer patches that change only one\nthing at a time.\n\nHope that's helpful.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 24 Mar 2022 13:20:50 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Mar 24, 2022 at 10:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> You're probably not going to love hearing this, but I think you're\n> still explaining things here in ways that are too baroque and hard to\n> follow. I do think it's probably better.\n\nThere are a lot of dimensions to this work. 
It's hard to know which to\nemphasize here.\n\n> But, for example, in the\n> commit message for 0001, I think you could change the subject line to\n> \"Allow non-aggressive vacuums to advance relfrozenxid\" and it would be\n> clearer.\n\nBut non-aggressive VACUUMs have always been able to do that.\n\nHow about: \"Set relfrozenxid to oldest extant XID seen by VACUUM\"\n\n> And then I think you could eliminate about half of the first\n> paragraph, starting with \"There is no fixed relationship\", and all of\n> the third paragraph (which starts with \"Later work...\"), and I think\n> removing all that material would make it strictly more clear than it\n> is currently. I don't think it's the place of a commit message to\n> speculate too much on future directions or to wax eloquent on\n> theoretical points. If that belongs anywhere, it's in a mailing list\n> discussion.\n\nOkay, I'll do that.\n\n> It seems to me that 0002 mixes code movement with functional changes.\n\nBelieve it or not, I avoided functional changes in 0002 -- at least in\none important sense. That's why you had difficulty spotting any. This\nmust sound peculiar, since the commit message very clearly says that\nthe commit avoids a problem seen only in the non-aggressive case. It's\nreally quite subtle.\n\nYou wrote this comment and code block (which I propose to remove in\n0002), so clearly you already understand the race condition that I'm\nconcerned with here:\n\n- if (skipping_blocks && blkno < rel_pages - 1)\n- {\n- /*\n- * Tricky, tricky. If this is in aggressive vacuum, the page\n- * must have been all-frozen at the time we checked whether it\n- * was skippable, but it might not be any more. We must be\n- * careful to count it as a skipped all-frozen page in that\n- * case, or else we'll think we can't update relfrozenxid and\n- * relminmxid. 
If it's not an aggressive vacuum, we don't\n- * know whether it was initially all-frozen, so we have to\n- * recheck.\n- */\n- if (vacrel->aggressive ||\n- VM_ALL_FROZEN(vacrel->rel, blkno, &vmbuffer))\n- vacrel->frozenskipped_pages++;\n- continue;\n- }\n\nWhat you're saying here boils down to this: it doesn't matter what the\nvisibility map would say right this microsecond (in the aggressive\ncase) were we to call VM_ALL_FROZEN(): we know for sure that the VM\nsaid that this page was all-frozen *in the recent past*. That's good\nenough; we will never fail to scan a page that might have an XID <\nOldestXmin (ditto for MXIDs) this way, which is all that really\nmatters.\n\nThis is absolutely mandatory in the aggressive case, because otherwise\nrelfrozenxid advancement might be seen as unsafe. My observation is:\nWhy should we accept the same race in the non-aggressive case? Why not\ndo essentially the same thing in every VACUUM?\n\nIn 0002 we now track if each range that we actually chose to skip had\nany all-visible (not all-frozen) pages -- if that happens then\nrelfrozenxid advancement becomes unsafe. The existing code uses\n\"vacrel->aggressive\" as a proxy for the same condition -- the existing\ncode reasons based on what the visibility map must have said about the\npage in the recent past. Which makes sense, but only works in the\naggressive case. The approach taken in 0002 also makes the code\nsimpler, which is what enabled putting the VM skipping code into its\nown helper function, but that was just a bonus.\n\nAnd so you could almost say that there is no behavioral change at\nall. We're skipping pages in the same way, based on the same\ninformation (from the visibility map) as before. We're just being a\nbit more careful than before about how that information is tracked, to\navoid this race. 
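[Editorial aside: the per-range tracking described here can be reduced to a toy model. This is a sketch only, with invented names -- the real 0002 patch works in terms of lazy_scan_skip() ranges and visibility map buffer reads, not an in-memory array.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy per-page visibility map state (illustrative only) */
typedef enum
{
    PAGE_NOT_ALL_VISIBLE,
    PAGE_ALL_VISIBLE,       /* all-visible, but may hold unfrozen XIDs */
    PAGE_ALL_FROZEN         /* no extant XIDs/MXIDs at all */
} VMState;

/*
 * Can relfrozenxid still be advanced after skipping the pages marked
 * in skipped[]?  Mirrors the idea described for 0002: advancement is
 * only safe if every skipped page was all-frozen, since a merely
 * all-visible page can still contain an XID < OldestXmin that this
 * VACUUM never examined.
 */
static bool
can_advance_relfrozenxid(const VMState *vm, const bool *skipped, size_t npages)
{
    for (size_t i = 0; i < npages; i++)
    {
        if (skipped[i] && vm[i] != PAGE_ALL_FROZEN)
            return false;
    }
    return true;            /* scanned pages never block advancement */
}
```

The point of recording this while choosing what to skip -- rather than consulting the VM again afterward -- is that the decision is then immune to concurrent VM bit changes.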
A race that we always avoided in the aggressive case\nis now consistently avoided.\n\n> I'm completely on board with moving the code that decides how much to\n> skip into a function. That seems like a great idea, and probably\n> overdue. But it is not easy for me to see what has changed\n> functionally between the old and new code organization, and I bet it\n> would be possible to split this into two patches, one of which creates\n> a function, and the other of which fixes the problem, and I think that\n> would be a useful service to future readers of the code.\n\nIt seems kinda tricky to split up 0002 like that. It's possible, but\nI'm not sure if it's possible to split it in a way that highlights the\nissue that I just described. Because we already avoided the race in\nthe aggressive case.\n\n> I also think that the commit message for 0002 is probably longer and\n> more complex than is really helpful, and that the subject line is too\n> vague, but since I don't yet understand exactly what's happening here,\n> I cannot comment on how I think it should be revised at this point,\n> except to say that the second paragraph of that commit message looks\n> like the most useful part.\n\nI'll work on that.\n\n> I would also like to mention a few things that I do like about 0002.\n> One is that it seems to collapse two different pieces of logic for\n> page skipping into one. That seems good. As mentioned, it's especially\n> good because that logic is abstracted into a function. Also, it looks\n> like it is making a pretty localized change to one (1) aspect of what\n> VACUUM does -- and I definitely prefer patches that change only one\n> thing at a time.\n\nTotally embracing the idea that we don't necessarily need very recent\ninformation from the visibility map (it just has to be after\nOldestXmin was established) has a lot of advantages, architecturally.\nIt could in principle be hours out of date in the longest VACUUM\noperations -- that should be fine. 
This is exactly the same principle\nthat makes it okay to stick with our original rel_pages, even when the\ntable has grown during a VACUUM operation (I documented this in commit\n73f6ec3d3c recently).\n\nWe could build on the approach taken by 0002 to create a totally\ncomprehensive picture of the ranges we're skipping up-front, before we\nactually scan any pages, even with very large tables. We could in\nprinciple cache a very large number of skippable ranges up-front,\nwithout ever going back to the visibility map again later (unless we\nneed to set a bit). It really doesn't matter if somebody else unsets a\npage's VM bit concurrently, at all.\n\nI see a lot of advantage to knowing our final scanned_pages almost\nimmediately. Things like prefetching, capping the size of the\ndead_items array more intelligently (use final scanned_pages instead\nof rel_pages in dead_items_max_items()), improvements to progress\nreporting...not to mention more intelligent choices about whether we\nshould try to advance relfrozenxid a bit earlier during non-aggressive\nVACUUMs.\n\n> Hope that's helpful.\n\nVery helpful -- thanks!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 24 Mar 2022 12:28:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Mar 24, 2022 at 3:28 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> But non-aggressive VACUUMs have always been able to do that.\n>\n> How about: \"Set relfrozenxid to oldest extant XID seen by VACUUM\"\n\nSure, that sounds nice.\n\n> Believe it or not, I avoided functional changes in 0002 -- at least in\n> one important sense. That's why you had difficulty spotting any. This\n> must sound peculiar, since the commit message very clearly says that\n> the commit avoids a problem seen only in the non-aggressive case. 
It's\n> really quite subtle.\n\nWell, I think the goal in revising the code is to be as un-subtle as\npossible. Commits that people can't easily understand breed future\nbugs.\n\n> What you're saying here boils down to this: it doesn't matter what the\n> visibility map would say right this microsecond (in the aggressive\n> case) were we to call VM_ALL_FROZEN(): we know for sure that the VM\n> said that this page was all-frozen *in the recent past*. That's good\n> enough; we will never fail to scan a page that might have an XID <\n> OldestXmin (ditto for MXIDs) this way, which is all that really\n> matters.\n\nMakes sense. So maybe the commit message should try to emphasize this\npoint e.g. \"If a page is all-frozen at the time we check whether it\ncan be skipped, don't allow it to affect the relfrozenxid and\nrelminmxid which we set for the relation. This was previously true for\naggressive vacuums, but not for non-aggressive vacuums, which was\ninconsistent. (The reason this is a safe thing to do is that any new\nXIDs or MXIDs that appear on the page after we initially observe it to\nbe frozen must be newer than any relfrozenxid or relminmxid the\ncurrent vacuum could possibly consider storing into pg_class.)\"\n\n> This is absolutely mandatory in the aggressive case, because otherwise\n> relfrozenxid advancement might be seen as unsafe. My observation is:\n> Why should we accept the same race in the non-aggressive case? Why not\n> do essentially the same thing in every VACUUM?\n\nSure, that seems like a good idea. I think I basically agree with the\ngoals of the patch. My concern is just about making the changes\nunderstandable to future readers. This area is notoriously subtle, and\npeople are going to introduce more bugs even if the comments and code\norganization are fantastic.\n\n> And so you could almost say that there is no behavioral change at\n> all.\n\nI vigorously object to this part, though. 
We should always err on the\nside of saying that commits *do* have behavioral changes. We should go\nout of our way to call out in the commit message any possible way that\nsomeone might notice the difference between the post-commit situation\nand the pre-commit situation. It is fine, even good, to also be clear\nabout how we're maintaining continuity and why we don't think it's a\nproblem, but the only commits that should be described as not having\nany behavioral change are ones that do mechanical code movement, or\nare just changing comments, or something like that.\n\n> It seems kinda tricky to split up 0002 like that. It's possible, but\n> I'm not sure if it's possible to split it in a way that highlights the\n> issue that I just described. Because we already avoided the race in\n> the aggressive case.\n\nI do see that there are some difficulties there. I'm not sure what to\ndo about that. I think a sufficiently clear commit message could\npossibly be enough, rather than trying to split the patch. But I also\nthink splitting the patch should be considered, if that can reasonably\nbe done.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 24 Mar 2022 16:21:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Mar 24, 2022 at 1:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > How about: \"Set relfrozenxid to oldest extant XID seen by VACUUM\"\n>\n> Sure, that sounds nice.\n\nCool.\n\n> > What you're saying here boils down to this: it doesn't matter what the\n> > visibility map would say right this microsecond (in the aggressive\n> > case) were we to call VM_ALL_FROZEN(): we know for sure that the VM\n> > said that this page was all-frozen *in the recent past*. 
That's good\n> > enough; we will never fail to scan a page that might have an XID <\n> > OldestXmin (ditto for MXIDs) this way, which is all that really\n> > matters.\n>\n> Makes sense. So maybe the commit message should try to emphasize this\n> point e.g. \"If a page is all-frozen at the time we check whether it\n> can be skipped, don't allow it to affect the relfrozenxid and\n> relminmxid which we set for the relation. This was previously true for\n> aggressive vacuums, but not for non-aggressive vacuums, which was\n> inconsistent. (The reason this is a safe thing to do is that any new\n> XIDs or MXIDs that appear on the page after we initially observe it to\n> be frozen must be newer than any relfrozenxid or relminmxid the\n> current vacuum could possibly consider storing into pg_class.)\"\n\nOkay, I'll add something more like that.\n\nAlmost every aspect of relfrozenxid advancement by VACUUM seems\nsimpler when thought about in these terms IMV. Every VACUUM now scans\nall pages that might have XIDs < OldestXmin, and so every VACUUM can\nadvance relfrozenxid to the oldest extant XID (barring non-aggressive\nVACUUMs that *choose* to skip some all-visible pages).\n\nThere are a lot more important details, of course. My \"Every\nVACUUM...\" statement works well as an axiom because all of those other\ndetails don't create any awkward exceptions.\n\n> > This is absolutely mandatory in the aggressive case, because otherwise\n> > relfrozenxid advancement might be seen as unsafe. My observation is:\n> > Why should we accept the same race in the non-aggressive case? Why not\n> > do essentially the same thing in every VACUUM?\n>\n> Sure, that seems like a good idea. I think I basically agree with the\n> goals of the patch.\n\nGreat.\n\n> My concern is just about making the changes\n> understandable to future readers. 
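[Editorial aside: the "oldest extant XID" framing can be sketched as a toy model. Illustrative C only, not patch code -- names are invented, MXID tracking is omitted, and XID wraparound is ignored. The result respects the invariant mentioned for v12: old relfrozenxid <= new relfrozenxid <= OldestXmin.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/*
 * Toy model: the new relfrozenxid is simply the oldest XID that
 * VACUUM leaves behind.  Tuples that were frozen no longer carry
 * their xmin, so they cannot hold the value back; OldestXmin is the
 * upper bound when nothing unfrozen remains.
 */
static TransactionId
oldest_extant_xid(const TransactionId *xmin, const bool *frozen,
                  size_t ntuples, TransactionId oldest_xmin)
{
    TransactionId new_relfrozenxid = oldest_xmin;

    for (size_t i = 0; i < ntuples; i++)
    {
        if (!frozen[i] && xmin[i] < new_relfrozenxid)
            new_relfrozenxid = xmin[i]; /* ratchet down to oldest seen */
    }
    return new_relfrozenxid;
}
```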
This area is notoriously subtle, and\n> people are going to introduce more bugs even if the comments and code\n> organization are fantastic.\n\nMakes sense.\n\n> > And so you could almost say that there is no behavioral change at\n> > all.\n>\n> I vigorously object to this part, though. We should always err on the\n> side of saying that commits *do* have behavioral changes.\n\nI think that you've taken my words too literally here. I would never\nconceal the intent of a piece of work like that. I thought that it\nwould clarify matters to point out that I could in theory \"get away\nwith it if I wanted to\" in this instance. This was only a means of\nconveying a subtle point about the behavioral changes from 0002 --\nsince you couldn't initially see them yourself (even with my commit\nmessage).\n\nKind of like Tom Lane's 2011 talk on the query planner. The one where\nhe lied to the audience several times.\n\n> > It seems kinda tricky to split up 0002 like that. It's possible, but\n> > I'm not sure if it's possible to split it in a way that highlights the\n> > issue that I just described. Because we already avoided the race in\n> > the aggressive case.\n>\n> I do see that there are some difficulties there. I'm not sure what to\n> do about that. I think a sufficiently clear commit message could\n> possibly be enough, rather than trying to split the patch. But I also\n> think splitting the patch should be considered, if that can reasonably\n> be done.\n\nI'll see if I can come up with something. 
It's hard to be sure about\nthat kind of thing when you're this close to the code.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 24 Mar 2022 14:40:28 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Mar 24, 2022 at 2:40 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > This is absolutely mandatory in the aggressive case, because otherwise\n> > > relfrozenxid advancement might be seen as unsafe. My observation is:\n> > > Why should we accept the same race in the non-aggressive case? Why not\n> > > do essentially the same thing in every VACUUM?\n> >\n> > Sure, that seems like a good idea. I think I basically agree with the\n> > goals of the patch.\n>\n> Great.\n\nAttached is v12. My current goal is to commit all 3 patches before\nfeature freeze. Note that this does not include the more complicated\npatch including with previous revisions of the patch series (the\npage-level freezing work that appeared in versions before v11).\n\nChanges that appear in this new revision, v12:\n\n* Reworking of the commit messages based on feedback from Robert.\n\n* General cleanup of the changes to heapam.c from 0001 (the changes to\nheap_prepare_freeze_tuple and related functions). New and existing\ncode now fits together a bit better. I also added a couple of new\ndocumenting assertions, to make the flow a bit easier to understand.\n\n* Added new assertions that document\nOldestXmin/FreezeLimit/relfrozenxid invariants, right at the point we\nupdate pg_class within vacuumlazy.c.\n\nThese assertions would have a decent chance of failing if there were\nany bugs in the code.\n\n* Removed patch that made DISABLE_PAGE_SKIPPING not force aggressive\nVACUUM, limiting the underlying mechanism to forcing scanning of all\npages in lazy_scan_heap (v11 was the first and last revision that\nincluded this patch).\n\n* Adds a new small patch 0003. 
This just moves the last piece of\nresource allocation that still took place at the top of\nlazy_scan_heap() back into its caller, heap_vacuum_rel().\n\nThe work in 0003 probably should have happened as part of the patch\nthat became commit 73f6ec3d -- same idea. It's totally mechanical\nstuff. With 0002 and 0003, there is hardly any lazy_scan_heap code\nbefore the main loop that iterates through blocks in rel_pages (and\nthe code that's still there is obviously related to the loop in a\ndirect and obvious way). This seems like a big overall improvement in\nmaintainability.\n\nDidn't see a way to split up 0002, per Robert's suggestion 3 days ago.\nAs I said at the time, it's possible to split it up, but not in a way\nthat highlights the underlying issue (since the issue 0002 fixes was\nalways limited to non-aggressive VACUUMs). The commit message may have\nto suffice.\n\n--\nPeter Geoghegan", "msg_date": "Sun, 27 Mar 2022 20:24:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Sun, Mar 27, 2022 at 11:24 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v12. My current goal is to commit all 3 patches before\n> feature freeze. Note that this does not include the more complicated\n> patch including with previous revisions of the patch series (the\n> page-level freezing work that appeared in versions before v11).\n\nReviewing 0001, focusing on the words in the patch file much more than the code:\n\nI can understand this version of the commit message. Woohoo! I like\nunderstanding things.\n\nI think the header comments for FreezeMultiXactId() focus way too much\non what the caller is supposed to do and not nearly enough on what\nFreezeMultiXactId() itself does. 
I think to some extent this also\napplies to the comments within the function body.\n\nOn the other hand, the header comments for heap_prepare_freeze_tuple()\nseem good to me. If I were thinking of calling this function, I would\nknow how to use the new arguments. If I were looking for bugs in it, I\ncould compare the logic in the function to what these comments say it\nshould be doing. Yay.\n\nI think I understand what the first paragraph of the header comment\nfor heap_tuple_needs_freeze() is trying to say, but the second one is\nquite confusing. I think this is again because it veers into talking\nabout what the caller should do rather than explaining what the\nfunction itself does.\n\nI don't like the statement-free else block in lazy_scan_noprune(). I\nthink you could delete the else{} and just put that same comment there\nwith one less level of indentation. There's a clear \"return false\"\njust above so it shouldn't be confusing what's happening.\n\nThe comment hunk at the end of lazy_scan_noprune() would probably be\nbetter if it said something more specific than \"caller can tolerate\nreduced processing.\" My guess is that it would be something like\n\"caller does not need to do something or other.\"\n\nI have my doubts about whether the overwrite-a-future-relfrozenxid\nbehavior is any good, but that's a topic for another day. I suggest\nkeeping the words \"it seems best to\", though, because they convey a\nlevel of tentativeness, which seems appropriate.\n\nI am surprised to see you write in maintenance.sgml that the VACUUM\nwhich most recently advanced relfrozenxid will typically be the most\nrecent aggressive VACUUM. 
I would have expected something like \"(often\nthe most recent VACUUM)\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Mar 2022 13:03:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Tue, Mar 29, 2022 at 10:03 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I can understand this version of the commit message. Woohoo! I like\n> understanding things.\n\nThat's good news.\n\n> I think the header comments for FreezeMultiXactId() focus way too much\n> on what the caller is supposed to do and not nearly enough on what\n> FreezeMultiXactId() itself does. I think to some extent this also\n> applies to the comments within the function body.\n\nTo some extent this is a legitimate difference in style. I myself\ndon't think that it's intrinsically good to have these sorts of\ncomments. I just think that it can be the least worst thing when a\nfunction is intrinsically written with one caller and one very\nspecific set of requirements in mind. That is pretty much a matter of\ntaste, though.\n\n> I think I understand what the first paragraph of the header comment\n> for heap_tuple_needs_freeze() is trying to say, but the second one is\n> quite confusing. I think this is again because it veers into talking\n> about what the caller should do rather than explaining what the\n> function itself does.\n\nI wouldn't have done it that way if the function wasn't called\nheap_tuple_needs_freeze().\n\nI would be okay with removing this paragraph if the function was\nrenamed to reflect the fact it now tells the caller something about\nthe tuple having an old XID/MXID relative to the caller's own XID/MXID\ncutoffs. 
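[Editorial aside: a minimal sketch of the contract described here, with invented names -- this is not the committed API, MXIDs are ignored, and so is wraparound. The point it demonstrates is that the tracker update and the boolean result are deliberately independent: the caller learns the oldest extant XID either way.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/*
 * Hypothetical contract sketch: report whether the tuple's xmin is
 * old relative to the caller's freeze cutoff, while independently
 * ratcheting the caller's relfrozenxid tracker as a side effect.
 */
static bool
tuple_is_old_vs_cutoff(TransactionId xmin, TransactionId freeze_cutoff,
                       TransactionId *relfrozenxid_tracker)
{
    if (xmin < *relfrozenxid_tracker)
        *relfrozenxid_tracker = xmin;   /* side effect: ratchet the tracker */

    return xmin < freeze_cutoff;        /* report: would freezing trigger? */
}

/*
 * Demo: run two unfrozen tuples through.  Since nothing is actually
 * frozen in this (no-cleanup-lock) path, both xmins are tracked.
 */
static TransactionId
demo_tracker(void)
{
    TransactionId tracker = 1000;       /* starts at OldestXmin */

    (void) tuple_is_old_vs_cutoff(700, 500, &tracker);
    (void) tuple_is_old_vs_cutoff(400, 500, &tracker);
    return tracker;
}
```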
Maybe the function name should be heap_tuple_would_freeze(),\nmaking it clear that the function merely tells caller what\nheap_prepare_freeze_tuple() *would* do, without presuming to tell the\nvacuumlazy.c caller what it *should* do about any of the information\nit is provided.\n\nThen it becomes natural to see the boolean return value and the\nchanges the function makes to caller's relfrozenxid/relminmxid tracker\nvariables as independent.\n\n> I don't like the statement-free else block in lazy_scan_noprune(). I\n> think you could delete the else{} and just put that same comment there\n> with one less level of indentation. There's a clear \"return false\"\n> just above so it shouldn't be confusing what's happening.\n\nOkay, will fix.\n\n> The comment hunk at the end of lazy_scan_noprune() would probably be\n> better if it said something more specific than \"caller can tolerate\n> reduced processing.\" My guess is that it would be something like\n> \"caller does not need to do something or other.\"\n\nI meant \"caller can tolerate not pruning or freezing this particular\npage\". Will fix.\n\n> I have my doubts about whether the overwrite-a-future-relfrozenxid\n> behavior is any good, but that's a topic for another day. I suggest\n> keeping the words \"it seems best to\", though, because they convey a\n> level of tentativeness, which seems appropriate.\n\nI agree that it's best to keep a tentative tone here. That code was\nwritten following a very specific bug in pg_upgrade several years\nback. There was a very recent bug fixed only last year, by commit\n74cf7d46.\n\nFWIW I tend to think that we'd have a much better chance of catching\nthat sort of thing if we'd had better relfrozenxid instrumentation\nbefore now. Now you'd see a negative value in the \"new relfrozenxid:\n%u, which is %d xids ahead of previous value\" part of the autovacuum\nlog message in the event of such a bug. 
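To spell out the arithmetic: the "%d xids ahead" figure is just a signed 32-bit difference under modulo-2^32 XID arithmetic, which is exactly why a relfrozenxid from "the future" would show up as a negative number. A minimal standalone sketch (xid_distance is an invented name here, not an actual PostgreSQL function; it relies on two's-complement wraparound, as PostgreSQL does):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/*
 * Hypothetical helper: the signed distance from xid "b" to xid "a"
 * under modulo-2^32 arithmetic.  Normal relfrozenxid advancement
 * yields a positive number; a stored relfrozenxid that is "in the
 * future" relative to the new value yields a negative one.
 */
static int32_t
xid_distance(TransactionId a, TransactionId b)
{
    return (int32_t) (a - b);
}
```

So a bogus on-disk relfrozenxid would make the "%d xids ahead of previous value" figure go negative rather than silently wrapping.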
That's weird enough that I bet\nsomebody would notice and report it.\n\n> I am surprised to see you write in maintenance.sgml that the VACUUM\n> which most recently advanced relfrozenxid will typically be the most\n> recent aggressive VACUUM. I would have expected something like \"(often\n> the most recent VACUUM)\".\n\nThat's always been true, and will only be slightly less true in\nPostgres 15 -- the fact is that we only need to skip one all-visible\npage to lose out, and that's not unlikely with tables that aren't\nquite small with all the patches from v12 applied (we're still much\ntoo naive). The work that I'll get into Postgres 15 on VACUUM is very\nvaluable as a basis for future improvements, but not all that valuable\nto users (improved instrumentation might be the biggest benefit in 15,\nor maybe relminmxid advancement for certain types of applications).\n\nI still think that we need to do more proactive page-level freezing to\nmake relfrozenxid advancement happen in almost every VACUUM, but even\nthat won't quite be enough. There are still cases where we need to\nmake a choice about giving up on relfrozenxid advancement in a\nnon-aggressive VACUUM -- all-visible pages won't completely go away\nwith page-level freezing. At a minimum we'll still have edge cases\nlike the case where heap_lock_tuple() unsets the all-frozen bit. And\npg_upgrade'd databases, too.\n\n0002 structures the logic for skipping using the VM in a way that will\nmake the choice to skip or not skip all-visible pages in\nnon-aggressive VACUUMs quite natural. I suspect that\nSKIP_PAGES_THRESHOLD was always mostly just about relfrozenxid\nadvancement in non-aggressive VACUUM, all along. We can do much better\nthan SKIP_PAGES_THRESHOLD, especially if we preprocess the entire\nvisibility map up-front -- we'll know the costs and benefits up-front,\nbefore committing to early relfrozenxid advancement.\n\nOverall, aggressive vs non-aggressive VACUUM seems like a false\ndichotomy to me. 
ISTM that it should be a totally dynamic set of\nbehaviors. There should probably be several different \"aggressive\ngradations\". Most VACUUMs start out completely non-aggressive\n(including even anti-wraparound autovacuums), but can escalate from\nthere. The non-cancellable autovacuum behavior (technically an\nanti-wraparound thing, but really an aggressiveness thing) should be\nsomething we escalate to, as with the failsafe.\n\nDynamic behavior works a lot better. And it makes scheduling of\nautovacuum workers a lot more straightforward -- the discontinuities\nseem to make that much harder, which is one more reason to avoid them\naltogether.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 29 Mar 2022 11:58:13 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Tue, Mar 29, 2022 at 11:58 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I think I understand what the first paragraph of the header comment\n> > for heap_tuple_needs_freeze() is trying to say, but the second one is\n> > quite confusing. I think this is again because it veers into talking\n> > about what the caller should do rather than explaining what the\n> > function itself does.\n>\n> I wouldn't have done it that way if the function wasn't called\n> heap_tuple_needs_freeze().\n>\n> I would be okay with removing this paragraph if the function was\n> renamed to reflect the fact it now tells the caller something about\n> the tuple having an old XID/MXID relative to the caller's own XID/MXID\n> cutoffs. Maybe the function name should be heap_tuple_would_freeze(),\n> making it clear that the function merely tells caller what\n> heap_prepare_freeze_tuple() *would* do, without presuming to tell the\n> vacuumlazy.c caller what it *should* do about any of the information\n> it is provided.\n\nAttached is v13, which does it that way. 
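To make the contract concrete, here is a deliberately simplified standalone sketch (xmin only, invented names; the real function also has to consider xmax and MultiXactIds). The point of the rename is that the boolean verdict and the maintenance of the caller's tracker are independent:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Wraparound-aware "a logically precedes b" test */
static bool
xid_precedes(TransactionId a, TransactionId b)
{
    return (int32_t) (a - b) < 0;
}

/*
 * Simplified model of the heap_tuple_would_freeze() idea: report
 * whether heap_prepare_freeze_tuple() *would* freeze this xmin given
 * the caller's cutoff, while independently ratcheting the caller's
 * relfrozenxid tracker back so it covers this xid if it is left in
 * place.  What the caller *should* do with either piece of
 * information is its own business.
 */
static bool
tuple_would_freeze(TransactionId xmin, TransactionId freeze_limit,
                   TransactionId *new_relfrozenxid)
{
    if (xid_precedes(xmin, *new_relfrozenxid))
        *new_relfrozenxid = xmin;       /* tracker maintenance */
    return xid_precedes(xmin, freeze_limit);    /* would-freeze verdict */
}
```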
This does seem like a real\nincrease in clarity, albeit one that comes at the cost of renaming\nheap_tuple_needs_freeze().\n\nv13 also addresses all of the other items from Robert's most recent\nround of feedback.\n\nI would like to commit something close to v13 on Friday or Saturday.\n\nThanks\n-- \nPeter Geoghegan", "msg_date": "Tue, 29 Mar 2022 20:08:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "+ diff = (int32) (vacrel->NewRelfrozenXid - vacrel->relfrozenxid);\n+ Assert(diff > 0);\n\nDid you see that this crashed on windows cfbot?\n\nhttps://api.cirrus-ci.com/v1/artifact/task/4592929254670336/log/tmp_check/postmaster.log\nTRAP: FailedAssertion(\"diff > 0\", File: \"c:\\cirrus\\src\\backend\\access\\heap\\vacuumlazy.c\", Line: 724, PID: 5984)\nabort() has been called2022-03-30 03:48:30.267 GMT [5316][client backend] [pg_regress/tablefunc][3/15389:0] ERROR: infinite recursion detected\n2022-03-30 03:48:38.031 GMT [5592][postmaster] LOG: server process (PID 5984) was terminated by exception 0xC0000354\n2022-03-30 03:48:38.031 GMT [5592][postmaster] DETAIL: Failed process was running: autovacuum: VACUUM ANALYZE pg_catalog.pg_database\n2022-03-30 03:48:38.031 GMT [5592][postmaster] HINT: See C include file \"ntstatus.h\" for a description of the hexadecimal value.\n\nhttps://cirrus-ci.com/task/4592929254670336\n\n00000000`007ff130 00000001`400b4ef8 postgres!ExceptionalCondition(\n\t\t\tchar * conditionName = 0x00000001`40a915d8 \"diff > 0\", \n\t\t\tchar * errorType = 0x00000001`40a915c8 \"FailedAssertion\", \n\t\t\tchar * fileName = 0x00000001`40a91598 \"c:\\cirrus\\src\\backend\\access\\heap\\vacuumlazy.c\", \n\t\t\tint lineNumber = 0n724)+0x8d [c:\\cirrus\\src\\backend\\utils\\error\\assert.c @ 70]\n00000000`007ff170 00000001`402a0914 postgres!heap_vacuum_rel(\n\t\t\tstruct RelationData * rel = 0x00000000`00a51088, 
\n\t\t\tstruct VacuumParams * params = 0x00000000`00a8420c, \n\t\t\tstruct BufferAccessStrategyData * bstrategy = 0x00000000`00a842a0)+0x1038 [c:\\cirrus\\src\\backend\\access\\heap\\vacuumlazy.c @ 724]\n00000000`007ff350 00000001`402a4686 postgres!table_relation_vacuum(\n\t\t\tstruct RelationData * rel = 0x00000000`00a51088, \n\t\t\tstruct VacuumParams * params = 0x00000000`00a8420c, \n\t\t\tstruct BufferAccessStrategyData * bstrategy = 0x00000000`00a842a0)+0x34 [c:\\cirrus\\src\\include\\access\\tableam.h @ 1681]\n00000000`007ff380 00000001`402a1a2d postgres!vacuum_rel(\n\t\t\tunsigned int relid = 0x4ee, \n\t\t\tstruct RangeVar * relation = 0x00000000`01799ae0, \n\t\t\tstruct VacuumParams * params = 0x00000000`00a8420c)+0x5a6 [c:\\cirrus\\src\\backend\\commands\\vacuum.c @ 2068]\n00000000`007ff400 00000001`4050f1ef postgres!vacuum(\n\t\t\tstruct List * relations = 0x00000000`0179df58, \n\t\t\tstruct VacuumParams * params = 0x00000000`00a8420c, \n\t\t\tstruct BufferAccessStrategyData * bstrategy = 0x00000000`00a842a0, \n\t\t\tbool isTopLevel = true)+0x69d [c:\\cirrus\\src\\backend\\commands\\vacuum.c @ 482]\n00000000`007ff5f0 00000001`4050dc95 postgres!autovacuum_do_vac_analyze(\n\t\t\tstruct autovac_table * tab = 0x00000000`00a84208, \n\t\t\tstruct BufferAccessStrategyData * bstrategy = 0x00000000`00a842a0)+0x8f [c:\\cirrus\\src\\backend\\postmaster\\autovacuum.c @ 3248]\n00000000`007ff640 00000001`4050b4e3 postgres!do_autovacuum(void)+0xef5 [c:\\cirrus\\src\\backend\\postmaster\\autovacuum.c @ 2503]\n\nIt seems like there should be even more logs, especially since it says:\n[03:48:43.119] Uploading 3 artifacts for c:\\cirrus\\**\\*.diffs\n[03:48:43.122] Uploaded c:\\cirrus\\contrib\\tsm_system_rows\\regression.diffs\n[03:48:43.125] Uploaded c:\\cirrus\\contrib\\tsm_system_time\\regression.diffs\n\n\n", "msg_date": "Wed, 30 Mar 2022 01:10:41 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Removing more 
vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Tue, Mar 29, 2022 at 11:10 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> + diff = (int32) (vacrel->NewRelfrozenXid - vacrel->relfrozenxid);\n> + Assert(diff > 0);\n>\n> Did you see that this crashed on windows cfbot?\n>\n> https://api.cirrus-ci.com/v1/artifact/task/4592929254670336/log/tmp_check/postmaster.log\n> TRAP: FailedAssertion(\"diff > 0\", File: \"c:\\cirrus\\src\\backend\\access\\heap\\vacuumlazy.c\", Line: 724, PID: 5984)\n\nThat's weird. There are very similar assertions a little earlier, that\nmust have *not* failed here, before the call to vac_update_relstats().\nI was actually thinking of removing this assertion for that reason --\nI thought that it was redundant.\n\nPerhaps something is amiss inside vac_update_relstats(), where the\nboolean flag that indicates that pg_class.relfrozenxid was advanced is\nset:\n\n if (frozenxid_updated)\n *frozenxid_updated = false;\n if (TransactionIdIsNormal(frozenxid) &&\n pgcform->relfrozenxid != frozenxid &&\n (TransactionIdPrecedes(pgcform->relfrozenxid, frozenxid) ||\n TransactionIdPrecedes(ReadNextTransactionId(),\n pgcform->relfrozenxid)))\n {\n if (frozenxid_updated)\n *frozenxid_updated = true;\n pgcform->relfrozenxid = frozenxid;\n dirty = true;\n }\n\nMaybe the \"existing relfrozenxid is in the future, silently update\nrelfrozenxid\" part of the condition (which involves\nReadNextTransactionId()) somehow does the wrong thing here. 
But how?\n\nThe other assertions take into account the fact that OldestXmin can\nitself \"go backwards\" across VACUUM operations against the same table:\n\n Assert(!aggressive || vacrel->NewRelfrozenXid == OldestXmin ||\n TransactionIdPrecedesOrEquals(FreezeLimit,\n vacrel->NewRelfrozenXid));\n\nNote the \"vacrel->NewRelfrozenXid == OldestXmin\", without which the\nassertion will fail pretty easily when the regression tests are run.\nPerhaps I need to do something like that with the other assertion as\nwell (or more likely just get rid of it). Will figure it out tomorrow.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 30 Mar 2022 00:01:06 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Wed, Mar 30, 2022 at 12:01 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Perhaps something is amiss inside vac_update_relstats(), where the\n> boolean flag that indicates that pg_class.relfrozenxid was advanced is\n> set:\n>\n> if (frozenxid_updated)\n> *frozenxid_updated = false;\n> if (TransactionIdIsNormal(frozenxid) &&\n> pgcform->relfrozenxid != frozenxid &&\n> (TransactionIdPrecedes(pgcform->relfrozenxid, frozenxid) ||\n> TransactionIdPrecedes(ReadNextTransactionId(),\n> pgcform->relfrozenxid)))\n> {\n> if (frozenxid_updated)\n> *frozenxid_updated = true;\n> pgcform->relfrozenxid = frozenxid;\n> dirty = true;\n> }\n>\n> Maybe the \"existing relfrozenxid is in the future, silently update\n> relfrozenxid\" part of the condition (which involves\n> ReadNextTransactionId()) somehow does the wrong thing here. But how?\n\nI tried several times to recreate this issue on CI. No luck with that,\nthough -- can't get it to fail again after 4 attempts.\n\nThis was a VACUUM of pg_database, run from an autovacuum worker. 
I am\nvaguely reminded of the two bugs fixed by Andres in commit a54e1f15.\nBoth were issues with the shared relcache init file affecting shared\nand nailed catalog relations. Those bugs had symptoms like \" ERROR:\nfound xmin ... from before relfrozenxid ...\" for various system\ncatalogs.\n\nWe know that this particular assertion did not fail during the same VACUUM:\n\n Assert(vacrel->NewRelfrozenXid == OldestXmin ||\n TransactionIdPrecedesOrEquals(vacrel->relfrozenxid,\n vacrel->NewRelfrozenXid));\n\nSo it's hard to see how this could be a bug in the patch -- the final\nnew relfrozenxid is presumably equal to VACUUM's OldestXmin in the\nproblem scenario seen on the CI Windows instance yesterday (that's why\nthis earlier assertion didn't fail). The assertion I'm showing here\nneeds the \"vacrel->NewRelfrozenXid == OldestXmin\" part of the\ncondition to account for the fact that\nOldestXmin/GetOldestNonRemovableTransactionId() is known to \"go\nbackwards\". Without that the regression tests will fail quite easily.\n\nThe surprising part of the CI failure must have taken place just after\nthis assertion, when VACUUM's call to vacuum_set_xid_limits() actually\nupdates pg_class.relfrozenxid with vacrel->NewRelfrozenXid --\npresumably because the existing relfrozenxid appeared to be \"in the\nfuture\" when we examine it in pg_class again. 
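For anyone following along, the update guard quoted earlier boils down to something like this self-contained model (names invented; the real logic lives in vac_update_relstats() and transam.c's comparators). The second disjunct is the "existing value is impossibly in the future, silently repair it" path that presumably fired here:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

#define FirstNormalTransactionId ((TransactionId) 3)

/* Wraparound-aware "a logically precedes b", as in transam.c */
static bool
xid_precedes(TransactionId a, TransactionId b)
{
    return (int32_t) (a - b) < 0;
}

/*
 * Model of the guard: overwrite the stored relfrozenxid either when
 * the new value is a genuine advance, or when the stored value sorts
 * after the next xid to be assigned -- i.e. it is "in the future"
 * and therefore must be corrupt.
 */
static bool
relfrozenxid_would_update(TransactionId stored, TransactionId newval,
                          TransactionId next_xid)
{
    return newval >= FirstNormalTransactionId &&
        stored != newval &&
        (xid_precedes(stored, newval) || xid_precedes(next_xid, stored));
}
```

Note that the repair path can move the stored value *backwards*, which is how frozenxid_updated can end up true even though the "new" value is not an advance.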
We see evidence that\nthis must have happened afterwards, when the closely related assertion\n(used only in instrumentation code) fails:\n\nFrom my patch:\n\n> if (frozenxid_updated)\n> {\n> - diff = (int32) (FreezeLimit - vacrel->relfrozenxid);\n> + diff = (int32) (vacrel->NewRelfrozenXid - vacrel->relfrozenxid);\n> + Assert(diff > 0);\n> appendStringInfo(&buf,\n> _(\"new relfrozenxid: %u, which is %d xids ahead of previous value\\n\"),\n> - FreezeLimit, diff);\n> + vacrel->NewRelfrozenXid, diff);\n> }\n\nDoes anybody have any ideas about what might be going on here?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 30 Mar 2022 17:50:42 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-03-30 17:50:42 -0700, Peter Geoghegan wrote:\n> I tried several times to recreate this issue on CI. No luck with that,\n> though -- can't get it to fail again after 4 attempts.\n\nIt's really annoying that we don't have Assert variants that show the compared\nvalues, that might make it easier to interpret what's going on.\n\nSomething vaguely like EXPECT_EQ_U32 in regress.c. Maybe\nAssertCmp(type, a, op, b),\n\nThen the assertion could have been something like\n AssertCmp(int32, diff, >, 0)\n\n\nDoes the line number in the failed run actually correspond to the xid, rather\nthan the mxid case? I didn't check.\n\n\nYou could try to increase the likelihood of reproducing the failure by\nduplicating the invocation that led to the crash a few times in the\n.cirrus.yml file in your dev branch. That might allow hitting the problem more\nquickly.\n\nMaybe reduce autovacuum_naptime in src/tools/ci/pg_ci_base.conf?\n\nOr locally - one thing that windows CI does different from the other platforms\nis that it runs isolation, contrib and a bunch of other tests using the same\ncluster. 
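An AssertCmp(type, a, op, b) along those lines could look roughly like this (purely a sketch -- nothing of the sort exists in c.h today, and the real Assert machinery reports through ExceptionalCondition()):

```c
#include <stdio.h>
#include <stdlib.h>

/*
 * Sketch of an Assert variant that reports the two compared values on
 * failure, instead of only the stringified condition.  Hypothetical:
 * this is not PostgreSQL's actual Assert() machinery.
 */
#define AssertCmp(type, a, op, b) \
    do { \
        type a_once_ = (a); \
        type b_once_ = (b); \
        if (!(a_once_ op b_once_)) \
        { \
            fprintf(stderr, \
                    "FailedAssertion: %s %s %s (lhs = %lld, rhs = %lld)\n", \
                    #a, #op, #b, \
                    (long long) a_once_, (long long) b_once_); \
            abort(); \
        } \
    } while (0)
```

With something like that, the CI failure would have shown the actual diff value rather than just the stringified "diff > 0".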
Which of course increases the likelihood of autovacuum having stuff\nto do, *particularly* on shared relations - normally there's probably not\nenough changes for that.\n\nYou can do something similar locally on linux with\n make -Otarget -C contrib/ -j48 -s USE_MODULE_DB=1 installcheck prove_installcheck=true\n(the prove_installcheck=true to prevent tap tests from running, we don't seem\nto have another way for that)\n\nI don't think windows uses USE_MODULE_DB=1, but it allows you to cause a lot\nmore load concurrently than running tests serially...\n\n\n> We know that this particular assertion did not fail during the same VACUUM:\n> \n> Assert(vacrel->NewRelfrozenXid == OldestXmin ||\n> TransactionIdPrecedesOrEquals(vacrel->relfrozenxid,\n> vacrel->NewRelfrozenXid));\n\nThe comment in your patch says \"is either older or newer than FreezeLimit\" - I\nassume that's some rephrasing damage?\n\n\n\n> So it's hard to see how this could be a bug in the patch -- the final\n> new relfrozenxid is presumably equal to VACUUM's OldestXmin in the\n> problem scenario seen on the CI Windows instance yesterday (that's why\n> this earlier assertion didn't fail).\n\nPerhaps it's worth committing improved assertions on master? If this is indeed\na pre-existing bug, and we're just missing it due to slightly less stringent\nasserts, we could rectify that separately.\n\n\n> The surprising part of the CI failure must have taken place just after\n> this assertion, when VACUUM's call to vacuum_set_xid_limits() actually\n> updates pg_class.relfrozenxid with vacrel->NewRelfrozenXid --\n> presumably because the existing relfrozenxid appeared to be \"in the\n> future\" when we examine it in pg_class again. We see evidence that\n> this must have happened afterwards, when the closely related assertion\n> (used only in instrumentation code) fails:\n\nHm. This triggers some vague memories. 
There's some oddities around shared\nrelations being vacuumed separately in all the databases and thus having\nseparate horizons.\n\n\nAfter \"remembering\" that, I looked in the cirrus log for the failed run, and\nthe worker was processing a shared relation last:\n\n2022-03-30 03:48:30.238 GMT [5984][autovacuum worker] LOG: automatic analyze of table \"contrib_regression.pg_catalog.pg_authid\"\n\nObviously that's not a guarantee that the next table processed also is a\nshared catalog, but ...\n\nOh, the relid is actually in the stack trace. 0x4ee = 1262 =\npg_database. Which makes sense, the test ends up with a high percentage of\ndead rows in pg_database, due to all the different contrib tests\ncreating/dropping a database.\n\n\n\n> From my patch:\n> \n> > if (frozenxid_updated)\n> > {\n> > - diff = (int32) (FreezeLimit - vacrel->relfrozenxid);\n> > + diff = (int32) (vacrel->NewRelfrozenXid - vacrel->relfrozenxid);\n> > + Assert(diff > 0);\n> > appendStringInfo(&buf,\n> > _(\"new relfrozenxid: %u, which is %d xids ahead of previous value\\n\"),\n> > - FreezeLimit, diff);\n> > + vacrel->NewRelfrozenXid, diff);\n> > }\n\nPerhaps this ought to be an elog() instead of an Assert()? Something has gone\npear shaped if we get here... It's a bit annoying though, because it'd have to\nbe a PANIC to be visible on the bf / CI :(.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 30 Mar 2022 19:00:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Wed, Mar 30, 2022 at 7:00 PM Andres Freund <andres@anarazel.de> wrote:\n> Something vaguely like EXPECT_EQ_U32 in regress.c. 
Maybe\n> AssertCmp(type, a, op, b),\n>\n> Then the assertion could have been something like\n> AssertCmp(int32, diff, >, 0)\n\nI'd definitely use them if they were there.\n\n> Does the line number in the failed run actually correspond to the xid, rather\n> than the mxid case? I didn't check.\n\nYes, I verified -- definitely relfrozenxid.\n\n> You can do something similar locally on linux with\n> make -Otarget -C contrib/ -j48 -s USE_MODULE_DB=1 installcheck prove_installcheck=true\n> (the prove_installcheck=true to prevent tap tests from running, we don't seem\n> to have another way for that)\n>\n> I don't think windows uses USE_MODULE_DB=1, but it allows to cause a lot more\n> load concurrently than running tests serially...\n\nCan't get it to fail locally with that recipe.\n\n> > Assert(vacrel->NewRelfrozenXid == OldestXmin ||\n> > TransactionIdPrecedesOrEquals(vacrel->relfrozenxid,\n> > vacrel->NewRelfrozenXid));\n>\n> The comment in your patch says \"is either older or newer than FreezeLimit\" - I\n> assume that's some rephrasing damage?\n\nBoth the comment and the assertion are correct. I see what you mean, though.\n\n> Perhaps it's worth commiting improved assertions on master? If this is indeed\n> a pre-existing bug, and we're just missing due to slightly less stringent\n> asserts, we could rectify that separately.\n\nI don't think there's much chance of the assertion actually hitting\nwithout the rest of the patch series. The new relfrozenxid value is\nalways going to be OldestXmin - vacuum_min_freeze_age on HEAD, while\nwith the patch it's sometimes close to OldestXmin. Especially when you\nhave lots of dead tuples that you churn through constantly (like\npgbench_tellers, or like these system catalogs on the CI test\nmachine).\n\n> Hm. This triggers some vague memories. 
There's some oddities around shared\n> relations being vacuumed separately in all the databases and thus having\n> separate horizons.\n\nThat's what I was thinking of, obviously.\n\n> After \"remembering\" that, I looked in the cirrus log for the failed run, and\n> the worker was processing a shared relation last:\n>\n> 2022-03-30 03:48:30.238 GMT [5984][autovacuum worker] LOG: automatic analyze of table \"contrib_regression.pg_catalog.pg_authid\"\n\nI noticed the same thing myself. Should have said sooner.\n\n> Perhaps this ought to be an elog() instead of an Assert()? Something has gone\n> pear shaped if we get here... It's a bit annoying though, because it'd have to\n> be a PANIC to be visible on the bf / CI :(.\n\nYeah, a WARNING would be good here. I can write a new version of my\npatch series with a separate patch for that this evening. Actually,\nbetter make it a PANIC for now...\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 30 Mar 2022 19:37:58 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Wed, Mar 30, 2022 at 7:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Yeah, a WARNING would be good here. I can write a new version of my\n> patch series with a separate patch for that this evening. 
Actually,\n> better make it a PANIC for now...\n\nAttached is v14, which includes a new patch that PANICs like that in\nvac_update_relstats() --- 0003.\n\nThis approach also covers manual VACUUMs, which isn't the case with\nthe failing assertion, which is in instrumentation code (actually\nVACUUM VERBOSE might hit it).\n\nI definitely think that something like this should be committed.\nSilently ignoring system catalog corruption isn't okay.\n\n-- \nPeter Geoghegan", "msg_date": "Wed, 30 Mar 2022 19:51:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nI was able to trigger the crash.\n\ncat ~/tmp/pgbench-createdb.sql\nCREATE DATABASE pgb_:client_id;\nDROP DATABASE pgb_:client_id;\n\npgbench -n -P1 -c 10 -j10 -T100 -f ~/tmp/pgbench-createdb.sql\n\nwhile I was also running\n\nfor i in $(seq 1 100); do echo iteration $i; make -Otarget -C contrib/ -s installcheck -j48 -s prove_installcheck=true USE_MODULE_DB=1 > /tmp/ci-$i.log 2>&1; done\n\nI triggered twice now, but it took a while longer the second time.\n\n(gdb) bt full\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:49\n set = {__val = {4194304, 0, 0, 0, 0, 0, 216172782113783808, 2, 2377909399344644096, 18446497967838863616, 0, 0, 0, 0, 0, 0}}\n pid = <optimized out>\n tid = <optimized out>\n ret = <optimized out>\n#1 0x00007fe49a2db546 in __GI_abort () at abort.c:79\n save_stage = 1\n act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction = 0x0}, sa_mask = {__val = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}},\n sa_flags = 0, sa_restorer = 0x107e0}\n sigs = {__val = {32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}}\n#2 0x00007fe49b9706f1 in ExceptionalCondition (conditionName=0x7fe49ba0618d \"diff > 0\", errorType=0x7fe49ba05bd1 \"FailedAssertion\",\n fileName=0x7fe49ba05b90 
\"/home/andres/src/postgresql/src/backend/access/heap/vacuumlazy.c\", lineNumber=724)\n at /home/andres/src/postgresql/src/backend/utils/error/assert.c:69\nNo locals.\n#3 0x00007fe49b2fc739 in heap_vacuum_rel (rel=0x7fe497a8d148, params=0x7fe49c130d7c, bstrategy=0x7fe49c130e10)\n at /home/andres/src/postgresql/src/backend/access/heap/vacuumlazy.c:724\n buf = {\n data = 0x7fe49c17e238 \"automatic vacuum of table \\\"contrib_regression_dict_int.pg_catalog.pg_database\\\": index scans: 1\\npages: 0 removed, 3 remain, 3 scanned (100.00% of total)\\ntuples: 49 removed, 53 remain, 9 are dead but no\"..., len = 279, maxlen = 1024, cursor = 0}\n msgfmt = 0x7fe49ba06038 \"automatic vacuum of table \\\"%s.%s.%s\\\": index scans: %d\\n\"\n diff = 0\n endtime = 702011687982080\n vacrel = 0x7fe49c19b5b8\n verbose = false\n instrument = true\n ru0 = {tv = {tv_sec = 1648696487, tv_usec = 975963}, ru = {ru_utime = {tv_sec = 0, tv_usec = 0}, ru_stime = {tv_sec = 0, tv_usec = 3086}, {\n--Type <RET> for more, q to quit, c to continue without paging--c\n ru_maxrss = 10824, __ru_maxrss_word = 10824}, {ru_ixrss = 0, __ru_ixrss_word = 0}, {ru_idrss = 0, __ru_idrss_word = 0}, {ru_isrss = 0, __ru_isrss_word = 0}, {ru_minflt = 449, __ru_minflt_word = 449}, {ru_majflt = 0, __ru_majflt_word = 0}, {ru_nswap = 0, __ru_nswap_word = 0}, {ru_inblock = 0, __ru_inblock_word = 0}, {ru_oublock = 0, __ru_oublock_word = 0}, {ru_msgsnd = 0, __ru_msgsnd_word = 0}, {ru_msgrcv = 0, __ru_msgrcv_word = 0}, {ru_nsignals = 0, __ru_nsignals_word = 0}, {ru_nvcsw = 2, __ru_nvcsw_word = 2}, {ru_nivcsw = 0, __ru_nivcsw_word = 0}}}\n starttime = 702011687975964\n walusage_start = {wal_records = 0, wal_fpi = 0, wal_bytes = 0}\n walusage = {wal_records = 11, wal_fpi = 7, wal_bytes = 30847}\n secs = 0\n usecs = 6116\n read_rate = 16.606033355134073\n write_rate = 7.6643230869849575\n aggressive = false\n skipwithvm = true\n frozenxid_updated = true\n minmulti_updated = true\n orig_rel_pages = 3\n new_rel_pages = 3\n 
new_rel_allvisible = 0\n indnames = 0x7fe49c19bb28\n errcallback = {previous = 0x0, callback = 0x7fe49b3012fd <vacuum_error_callback>, arg = 0x7fe49c19b5b8}\n startreadtime = 180\n startwritetime = 0\n OldestXmin = 67552\n FreezeLimit = 4245034848\n OldestMxact = 224\n MultiXactCutoff = 4289967520\n __func__ = \"heap_vacuum_rel\"\n#4 0x00007fe49b523d92 in table_relation_vacuum (rel=0x7fe497a8d148, params=0x7fe49c130d7c, bstrategy=0x7fe49c130e10) at /home/andres/src/postgresql/src/include/access/tableam.h:1680\nNo locals.\n#5 0x00007fe49b527032 in vacuum_rel (relid=1262, relation=0x7fe49c1ae360, params=0x7fe49c130d7c) at /home/andres/src/postgresql/src/backend/commands/vacuum.c:2065\n lmode = 4\n rel = 0x7fe497a8d148\n lockrelid = {relId = 1262, dbId = 0}\n toast_relid = 0\n save_userid = 10\n save_sec_context = 0\n save_nestlevel = 2\n __func__ = \"vacuum_rel\"\n#6 0x00007fe49b524c3b in vacuum (relations=0x7fe49c1b03a8, params=0x7fe49c130d7c, bstrategy=0x7fe49c130e10, isTopLevel=true) at /home/andres/src/postgresql/src/backend/commands/vacuum.c:482\n vrel = 0x7fe49c1ae3b8\n cur__state = {l = 0x7fe49c1b03a8, i = 0}\n cur = 0x7fe49c1b03c0\n _save_exception_stack = 0x7fff97e35a10\n _save_context_stack = 0x0\n _local_sigjmp_buf = {{__jmpbuf = {140735741652128, 6126579318940970843, 9223372036854775747, 0, 0, 0, 6126579318957748059, 6139499258682879835}, __mask_was_saved = 0, __saved_mask = {__val = {32, 140619848279000, 8590910454, 140619848278592, 32, 140619848278944, 7784, 140619848278592, 140619848278816, 140735741647200, 140619839915137, 8458711686435861857, 32, 4869, 140619848278592, 140619848279024}}}}\n _do_rethrow = false\n in_vacuum = true\n stmttype = 0x7fe49baff1a7 \"VACUUM\"\n in_outer_xact = false\n use_own_xacts = true\n __func__ = \"vacuum\"\n#7 0x00007fe49b6d483d in autovacuum_do_vac_analyze (tab=0x7fe49c130d78, bstrategy=0x7fe49c130e10) at /home/andres/src/postgresql/src/backend/postmaster/autovacuum.c:3247\n rangevar = 0x7fe49c1ae360\n rel = 
0x7fe49c1ae3b8\n rel_list = 0x7fe49c1ae3f0\n#8 0x00007fe49b6d34bc in do_autovacuum () at /home/andres/src/postgresql/src/backend/postmaster/autovacuum.c:2495\n _save_exception_stack = 0x7fff97e35d70\n _save_context_stack = 0x0\n _local_sigjmp_buf = {{__jmpbuf = {140735741652128, 6126579318779490139, 9223372036854775747, 0, 0, 0, 6126579319014371163, 6139499700101525339}, __mask_was_saved = 0, __saved_mask = {__val = {140619840139982, 140735741647712, 140619841923928, 957, 140619847223443, 140735741647656, 140619847312112, 140619847223451, 140619847223443, 140619847224399, 0, 139637976727552, 140619817480714, 140735741647616, 140619839856340, 1024}}}}\n _do_rethrow = false\n tab = 0x7fe49c130d78\n skipit = false\n stdVacuumCostDelay = 0\n stdVacuumCostLimit = 200\n iter = {cur = 0x7fe497668da0, end = 0x7fe497668da0}\n relid = 1262\n classTup = 0x7fe497a6c568\n isshared = true\n cell__state = {l = 0x7fe49c130d40, i = 0}\n classRel = 0x7fe497a5ae18\n tuple = 0x0\n relScan = 0x7fe49c130928\n dbForm = 0x7fe497a64fb8\n table_oids = 0x7fe49c130d40\n orphan_oids = 0x0\n ctl = {num_partitions = 0, ssize = 0, dsize = 1296236544, max_dsize = 140619847224424, keysize = 4, entrysize = 96, hash = 0x0, match = 0x0, keycopy = 0x0, alloc = 0x0, hcxt = 0x7fff97e35c50, hctl = 0x7fe49b9a787e <AllocSetFree+670>}\n table_toast_map = 0x7fe49c19d2f0\n cell = 0x7fe49c130d58\n shared = 0x7fe49c17c360\n dbentry = 0x7fe49c18d7a0\n bstrategy = 0x7fe49c130e10\n key = {sk_flags = 0, sk_attno = 17, sk_strategy = 3, sk_subtype = 0, sk_collation = 950, sk_func = {fn_addr = 0x7fe49b809a6a <chareq>, fn_oid = 61, fn_nargs = 2, fn_strict = true, fn_retset = false, fn_stats = 2 '\\002', fn_extra = 0x0, fn_mcxt = 0x7fe49c12f7f0, fn_expr = 0x0}, sk_argument = 116}\n pg_class_desc = 0x7fe49c12f910\n effective_multixact_freeze_max_age = 400000000\n did_vacuum = false\n found_concurrent_worker = false\n i = 32740\n __func__ = \"do_autovacuum\"\n#9 0x00007fe49b6d21c4 in AutoVacWorkerMain (argc=0, argv=0x0) at 
/home/andres/src/postgresql/src/backend/postmaster/autovacuum.c:1719\n dbname = \"contrib_regression_dict_int\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\"\n local_sigjmp_buf = {{__jmpbuf = {140735741652128, 6126579318890639195, 9223372036854775747, 0, 0, 0, 6126579318785781595, 6139499699353759579}, __mask_was_saved = 1, __saved_mask = {__val = {18446744066192964099, 8, 140735741648416, 140735741648352, 3156423108750738944, 0, 30, 140735741647888, 140619835812981, 140735741648080, 32666874400, 140735741648448, 140619836964693, 140735741652128, 2586778441, 140735741648448}}}}\n dbid = 205328\n __func__ = \"AutoVacWorkerMain\"\n#10 0x00007fe49b6d1d5b in StartAutoVacWorker () at /home/andres/src/postgresql/src/backend/postmaster/autovacuum.c:1504\n worker_pid = 0\n __func__ = \"StartAutoVacWorker\"\n#11 0x00007fe49b6e79af in StartAutovacuumWorker () at /home/andres/src/postgresql/src/backend/postmaster/postmaster.c:5635\n bn = 0x7fe49c0da920\n __func__ = \"StartAutovacuumWorker\"\n#12 0x00007fe49b6e745d in sigusr1_handler (postgres_signal_arg=10) at /home/andres/src/postgresql/src/backend/postmaster/postmaster.c:5340\n save_errno = 4\n __func__ = \"sigusr1_handler\"\n#13 <signal handler called>\nNo locals.\n#14 0x00007fe49a3a9fc4 in __GI___select (nfds=8, readfds=0x7fff97e36c20, writefds=0x0, exceptfds=0x0, timeout=0x7fff97e36ca0) at ../sysdeps/unix/sysv/linux/select.c:71\n sc_ret = -4\n sc_ret = <optimized out>\n s = <optimized out>\n us = <optimized out>\n ns = <optimized out>\n ts64 = {tv_sec = 59, tv_nsec = 765565741}\n pts64 = <optimized out>\n r = <optimized out>\n#15 0x00007fe49b6e26c7 in ServerLoop () at /home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1765\n timeout = {tv_sec = 60, tv_usec = 0}\n rmask = {fds_bits = {224, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}}\n selres = -1\n now = 1648696487\n readmask 
= {fds_bits = {224, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}}\n nSockets = 8\n last_lockfile_recheck_time = 1648696432\n last_touch_time = 1648696072\n __func__ = \"ServerLoop\"\n#16 0x00007fe49b6e2031 in PostmasterMain (argc=55, argv=0x7fe49c0aa2d0) at /home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1473\n opt = -1\n status = 0\n userDoption = 0x7fe49c0951d0 \"/srv/dev/pgdev-dev/\"\n listen_addr_saved = true\n i = 64\n output_config_variable = 0x0\n __func__ = \"PostmasterMain\"\n#17 0x00007fe49b5d2808 in main (argc=55, argv=0x7fe49c0aa2d0) at /home/andres/src/postgresql/src/backend/main/main.c:202\n do_check_root = true\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 30 Mar 2022 20:28:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Wed, Mar 30, 2022 at 8:28 PM Andres Freund <andres@anarazel.de> wrote:\n> I triggered twice now, but it took a while longer the second time.\n\nGreat.\n\nI wonder if you can get an RR recording...\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 30 Mar 2022 20:35:25 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-03-30 20:28:44 -0700, Andres Freund wrote:\n> I was able to trigger the crash.\n> \n> cat ~/tmp/pgbench-createdb.sql\n> CREATE DATABASE pgb_:client_id;\n> DROP DATABASE pgb_:client_id;\n> \n> pgbench -n -P1 -c 10 -j10 -T100 -f ~/tmp/pgbench-createdb.sql\n> \n> while I was also running\n> \n> for i in $(seq 1 100); do echo iteration $i; make -Otarget -C contrib/ -s installcheck -j48 -s prove_installcheck=true USE_MODULE_DB=1 > /tmp/ci-$i.log 2>&1; done\n> \n> I triggered twice now, but it took a while longer the second time.\n\nForgot to say how postgres was started. 
Via my usual devenv script, which\nresults in:\n\n+ /home/andres/build/postgres/dev-assert/vpath/src/backend/postgres -c hba_file=/home/andres/tmp/pgdev/pg_hba.conf -D /srv/dev/pgdev-dev/ -p 5440 -c shared_buffers=2GB -c wal_level=hot_standby -c max_wal_senders=10 -c track_io_timing=on -c restart_after_crash=false -c max_prepared_transactions=20 -c log_checkpoints=on -c min_wal_size=48MB -c max_wal_size=150GB -c 'cluster_name=dev assert' -c ssl_cert_file=/home/andres/tmp/pgdev/ssl-cert-snakeoil.pem -c ssl_key_file=/home/andres/tmp/pgdev/ssl-cert-snakeoil.key -c 'log_line_prefix=%m [%p][%b][%v:%x][%a] ' -c shared_buffers=16MB -c log_min_messages=debug1 -c log_connections=on -c allow_in_place_tablespaces=1 -c log_autovacuum_min_duration=0 -c log_lock_waits=true -c autovacuum_naptime=10s -c fsync=off\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 30 Mar 2022 20:35:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-03-30 20:35:25 -0700, Peter Geoghegan wrote:\n> On Wed, Mar 30, 2022 at 8:28 PM Andres Freund <andres@anarazel.de> wrote:\n> > I triggered twice now, but it took a while longer the second time.\n>\n> Great.\n>\n> I wonder if you can get an RR recording...\n\nStarted it, but looks like it's too slow.\n\n(gdb) p MyProcPid\n$1 = 2172500\n\n(gdb) p vacrel->NewRelfrozenXid\n$3 = 717\n(gdb) p vacrel->relfrozenxid\n$4 = 717\n(gdb) p OldestXmin\n$5 = 5112\n(gdb) p aggressive\n$6 = false\n\nThere was another autovacuum of pg_database 10s before:\n\n2022-03-30 20:35:17.622 PDT [2165344][autovacuum worker][5/3:0][] LOG: automatic vacuum of table \"postgres.pg_catalog.pg_database\": index scans: 1\n pages: 0 removed, 3 remain, 3 scanned (100.00% of total)\n tuples: 61 removed, 4 remain, 1 are dead but not yet removable\n removable cutoff: 1921, older by 3 xids when operation ended\n new relfrozenxid: 
717, which is 3 xids ahead of previous value\n index scan needed: 3 pages from table (100.00% of total) had 599 dead item identifiers removed\n index \"pg_database_datname_index\": pages: 2 in total, 0 newly deleted, 0 currently deleted, 0 reusable\n index \"pg_database_oid_index\": pages: 4 in total, 0 newly deleted, 0 currently deleted, 0 reusable\n I/O timings: read: 0.029 ms, write: 0.034 ms\n avg read rate: 134.120 MB/s, avg write rate: 89.413 MB/s\n buffer usage: 35 hits, 12 misses, 8 dirtied\n WAL usage: 12 records, 5 full page images, 27218 bytes\n system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n\nThe dying backend:\n2022-03-30 20:35:27.668 PDT [2172500][autovacuum worker][7/0:0][] DEBUG: autovacuum: processing database \"contrib_regression_hstore\"\n...\n2022-03-30 20:35:27.690 PDT [2172500][autovacuum worker][7/674:0][] CONTEXT: while cleaning up index \"pg_database_oid_index\" of relation \"pg_catalog.pg_database\"\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 30 Mar 2022 21:04:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Wed, Mar 30, 2022 at 9:04 PM Andres Freund <andres@anarazel.de> wrote:\n> (gdb) p vacrel->NewRelfrozenXid\n> $3 = 717\n> (gdb) p vacrel->relfrozenxid\n> $4 = 717\n> (gdb) p OldestXmin\n> $5 = 5112\n> (gdb) p aggressive\n> $6 = false\n\nDoes this OldestXmin seem reasonable at this point in execution, based\non context? Does it look too high? 
Something else?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 30 Mar 2022 21:11:48 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-03-30 21:04:07 -0700, Andres Freund wrote:\n> On 2022-03-30 20:35:25 -0700, Peter Geoghegan wrote:\n> > On Wed, Mar 30, 2022 at 8:28 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I triggered twice now, but it took a while longer the second time.\n> >\n> > Great.\n> >\n> > I wonder if you can get an RR recording...\n>\n> Started it, but looks like it's too slow.\n>\n> (gdb) p MyProcPid\n> $1 = 2172500\n>\n> (gdb) p vacrel->NewRelfrozenXid\n> $3 = 717\n> (gdb) p vacrel->relfrozenxid\n> $4 = 717\n> (gdb) p OldestXmin\n> $5 = 5112\n> (gdb) p aggressive\n> $6 = false\n\nI added a bunch of debug elogs to see what sets *frozenxid_updated to true.\n\n(gdb) p *vacrel\n$1 = {rel = 0x7fe24f3e0148, indrels = 0x7fe255c17ef8, nindexes = 2, aggressive = false, skipwithvm = true, failsafe_active = false,\n consider_bypass_optimization = true, do_index_vacuuming = true, do_index_cleanup = true, do_rel_truncate = true, bstrategy = 0x7fe255bb0e28, pvs = 0x0,\n relfrozenxid = 717, relminmxid = 6, old_live_tuples = 42, OldestXmin = 20751, vistest = 0x7fe255058970 <GlobalVisSharedRels>, FreezeLimit = 4244988047,\n MultiXactCutoff = 4289967302, NewRelfrozenXid = 717, NewRelminMxid = 6, skippedallvis = false, relnamespace = 0x7fe255c17bf8 \"pg_catalog\",\n relname = 0x7fe255c17cb8 \"pg_database\", indname = 0x0, blkno = 4294967295, offnum = 0, phase = VACUUM_ERRCB_PHASE_SCAN_HEAP, verbose = false,\n dead_items = 0x7fe255c131d0, rel_pages = 8, scanned_pages = 8, removed_pages = 0, lpdead_item_pages = 0, missed_dead_pages = 0, nonempty_pages = 8,\n new_rel_tuples = 124, new_live_tuples = 42, indstats = 0x7fe255c18320, num_index_scans = 0, tuples_deleted = 0, lpdead_items = 0, live_tuples = 42,\n 
recently_dead_tuples = 82, missed_dead_tuples = 0}\n\nBut the debug elog reports that\n\nrelfrozenxid updated 714 -> 717\nrelminmxid updated 1 -> 6\n\nThe problem is that the crashing backend reads the relfrozenxid/relminmxid\nfrom the shared relcache init file written by another backend:\n\n2022-03-30 21:10:47.626 PDT [2625038][autovacuum worker][6/433:0][] LOG: automatic vacuum of table \"contrib_regression_postgres_fdw.pg_catalog.pg_database\": index scans: 1\n pages: 0 removed, 8 remain, 8 scanned (100.00% of total)\n tuples: 4 removed, 114 remain, 72 are dead but not yet removable\n removable cutoff: 20751, older by 596 xids when operation ended\n new relfrozenxid: 717, which is 3 xids ahead of previous value\n new relminmxid: 6, which is 5 mxids ahead of previous value\n index scan needed: 3 pages from table (37.50% of total) had 8 dead item identifiers removed\n index \"pg_database_datname_index\": pages: 2 in total, 0 newly deleted, 0 currently deleted, 0 reusable\n index \"pg_database_oid_index\": pages: 6 in total, 0 newly deleted, 2 currently deleted, 2 reusable\n I/O timings: read: 0.050 ms, write: 0.102 ms\n avg read rate: 209.860 MB/s, avg write rate: 76.313 MB/s\n buffer usage: 42 hits, 22 misses, 8 dirtied\n WAL usage: 13 records, 5 full page images, 33950 bytes\n system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n...\n2022-03-30 21:10:47.772 PDT [2625043][autovacuum worker][:0][] DEBUG: InitPostgres\n2022-03-30 21:10:47.772 PDT [2625043][autovacuum worker][6/0:0][] DEBUG: my backend ID is 6\n2022-03-30 21:10:47.772 PDT [2625043][autovacuum worker][6/0:0][] LOG: reading shared init file\n2022-03-30 21:10:47.772 PDT [2625043][autovacuum worker][6/443:0][] DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/sub>\n2022-03-30 21:10:47.772 PDT [2625043][autovacuum worker][6/443:0][] LOG: reading non-shared init file\n\nThis is basically the inverse of a54e1f15 - we read a *newer* horizon. 
That's\nnormally fairly harmless - I think.\n\nPerhaps we should just fetch the horizons from the \"local\" catalog for shared\nrels?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 30 Mar 2022 21:20:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-03-30 21:29:16 -0700, Peter Geoghegan wrote:\n> On Wed, Mar 30, 2022 at 9:20 PM Andres Freund <andres@anarazel.de> wrote:\n> > But the debug elog reports that\n> >\n> > relfrozenxid updated 714 -> 717\n> > relminmxid updated 1 -> 6\n> >\n> > The problem is that the crashing backend reads the relfrozenxid/relminmxid\n> > from the shared relcache init file written by another backend:\n> \n> We should have added logging 
of relfrozenxid and relminmxid a long time ago.\n\n> This is basically the inverse of a54e1f15 - we read a *newer* horizon. That's\n> normally fairly harmless - I think.\n\nIs this one pretty old?\n\n> Perhaps we should just fetch the horizons from the \"local\" catalog for shared\n> rels?\n\nNot sure what you mean.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 30 Mar 2022 21:29:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Wed, Mar 30, 2022 at 9:29 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Perhaps we should just fetch the horizons from the \"local\" catalog for shared\n> > rels?\n>\n> Not sure what you mean.\n\nWait, you mean use vacrel->relfrozenxid directly? Seems kind of ugly...\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 30 Mar 2022 21:44:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-03-30 21:29:16 -0700, Peter Geoghegan wrote:\n> On Wed, Mar 30, 2022 at 9:20 PM Andres Freund <andres@anarazel.de> wrote:\n> > But the debug elog reports that\n> >\n> > relfrozenxid updated 714 -> 717\n> > relminmxid updated 1 -> 6\n> >\n> > Tthe problem is that the crashing backend reads the relfrozenxid/relminmxid\n> > from the shared relcache init file written by another backend:\n> \n> We should have added logging of relfrozenxid and relminmxid a long time ago.\n\nAt least at DEBUG1 or such.\n\n\n> > This is basically the inverse of a54e1f15 - we read a *newer* horizon. That's\n> > normally fairly harmless - I think.\n> \n> Is this one pretty old?\n\nWhat do you mean with \"this one\"? The cause for the assert failure?\n\nI'm not sure there's a proper bug on HEAD here. 
I think at worst it can delay\nthe horizon increasing a bunch, by falsely not using an aggressive vacuum when\nwe should have - might even be limited to a single autovacuum cycle.\n\n\n\n> > Perhaps we should just fetch the horizons from the \"local\" catalog for shared\n> > rels?\n> \n> Not sure what you mean.\n\nBasically, instead of relying on the relcache, which for shared relation is\nvulnerable to seeing \"too new\" horizons due to the shared relcache init file,\nexplicitly load relfrozenxid / relminmxid from the the catalog / syscache.\n\nI.e. fetch the relevant pg_class row in heap_vacuum_rel() (using\nSearchSysCache[Copy1](RELID)). And use that to set vacrel->relfrozenxid\netc. Whereas right now we only fetch the pg_class row in\nvac_update_relstats(), but use the relcache before.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 30 Mar 2022 21:59:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-03-30 21:59:15 -0700, Andres Freund wrote:\n> On 2022-03-30 21:29:16 -0700, Peter Geoghegan wrote:\n> > On Wed, Mar 30, 2022 at 9:20 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Perhaps we should just fetch the horizons from the \"local\" catalog for shared\n> > > rels?\n> > \n> > Not sure what you mean.\n> \n> Basically, instead of relying on the relcache, which for shared relation is\n> vulnerable to seeing \"too new\" horizons due to the shared relcache init file,\n> explicitly load relfrozenxid / relminmxid from the the catalog / syscache.\n> \n> I.e. fetch the relevant pg_class row in heap_vacuum_rel() (using\n> SearchSysCache[Copy1](RELID)). And use that to set vacrel->relfrozenxid\n> etc. 
Whereas right now we only fetch the pg_class row in\n> vac_update_relstats(), but use the relcache before.\n\nPerhaps we should explicitly mask out parts of relcache entries in the shared\ninit file that we know to be unreliable. I.e. set relfrozenxid, relminmxid to\nInvalid* or such.\n\nI even wonder if we should just generally move those out of the fields we have\nin the relcache, not just for shared rels loaded from the init\nfork. Presumably by just moving them into the CATALOG_VARLEN ifdef.\n\nThe only place that appears to access rd_rel->relfrozenxid outside of DDL is\nheap_abort_speculative().\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 31 Mar 2022 09:37:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Thu, Mar 31, 2022 at 9:37 AM Andres Freund <andres@anarazel.de> wrote:\n> Perhaps we should explicitly mask out parts of relcache entries in the shared\n> init file that we know to be unreliable. I.e. set relfrozenxid, relminmxid to\n> Invalid* or such.\n\nThat has the advantage of being more honest. If you're going to break\nthe abstraction, then it seems best to break it in an obvious way,\nthat leaves no doubts about what you're supposed to be relying on.\n\nThis bug doesn't seem like the kind of thing that should be left\nas-is. If only because it makes it hard to add something like a\nWARNING when we make relfrozenxid go backwards (on the basis of the\nexisting value apparently being in the future), which we really should\nhave been doing all along.\n\nThe whole reason why we overwrite pg_class.relfrozenxid values from\nthe future is to ameliorate the effects of more serious bugs like the\npg_upgrade/pg_resetwal one fixed in commit 74cf7d46 not so long ago\n(mid last year). 
We had essentially the same pg_upgrade \"from the\nfuture\" bug twice (once for relminmxid in the MultiXact bug era,\nanother more recent version affecting relfrozenxid).\n\n> The only place that appears to access rd_rel->relfrozenxid outside of DDL is\n> heap_abort_speculative().\n\nI wonder how necessary that really is. Even if the XID is before\nrelfrozenxid, does that in itself really make it \"in the future\"?\nObviously it's often necessary to make the assumption that allowing\nwraparound amounts to allowing XIDs \"from the future\" to exist, which\nis dangerous. But why here? Won't pruning by VACUUM eventually correct\nthe issue anyway?\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 31 Mar 2022 09:58:18 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-03-31 09:58:18 -0700, Peter Geoghegan wrote:\n> On Thu, Mar 31, 2022 at 9:37 AM Andres Freund <andres@anarazel.de> wrote:\n> > The only place that appears to access rd_rel->relfrozenxid outside of DDL is\n> > heap_abort_speculative().\n> \n> I wonder how necessary that really is. Even if the XID is before\n> relfrozenxid, does that in itself really make it \"in the future\"?\n> Obviously it's often necessary to make the assumption that allowing\n> wraparound amounts to allowing XIDs \"from the future\" to exist, which\n> is dangerous. But why here? Won't pruning by VACUUM eventually correct\n> the issue anyway?\n\nI don't think we should weaken defenses against xids from before relfrozenxid\nin vacuum / amcheck / .... If anything we should strengthen them.\n\nIsn't it also just plainly required for correctness? We'd not necessarily\ntrigger a vacuum in time to remove the xid before approaching wraparound if we\nput in an xid before relfrozenxid? 
That happening in prune_xid is obviously\nless bad than on actual data, but still.\n\n\nISTM we should just use our own xid. Yes, it might delay cleanup a bit\nlonger. But unless there's already crud on the page (with prune_xid already\nset), the abort of the speculative insertion isn't likely to make the\ndifference?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 31 Mar 2022 10:11:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Wed, Mar 30, 2022 at 9:59 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm not sure there's a proper bug on HEAD here. I think at worst it can delay\n> the horizon increasing a bunch, by falsely not using an aggressive vacuum when\n> we should have - might even be limited to a single autovacuum cycle.\n\nSo, to be clear: vac_update_relstats() never actually considered the\nnew relfrozenxid value from its vacuumlazy.c caller to be \"in the\nfuture\"? It just looked that way to the failing assertion in\nvacuumlazy.c, because its own version of the original relfrozenxid was\nstale from the beginning? And so the worst problem is probably just\nthat we don't use aggressive VACUUM when we really should in rare\ncases?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 31 Mar 2022 10:12:49 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Mar 31, 2022 at 10:11 AM Andres Freund <andres@anarazel.de> wrote:\n> I don't think we should weaken defenses against xids from before relfrozenxid\n> in vacuum / amcheck / .... If anything we should strengthen them.\n>\n> Isn't it also just plainly required for correctness? 
We'd not necessarily\n> trigger a vacuum in time to remove the xid before approaching wraparound if we\n> put in an xid before relfrozenxid? That happening in prune_xid is obviously\n> les bad than on actual data, but still.\n\nYeah, you're right. Ambiguity about stuff like this should be avoided\non general principle.\n\n> ISTM we should just use our own xid. Yes, it might delay cleanup a bit\n> longer. But unless there's already crud on the page (with prune_xid already\n> set, the abort of the speculative insertion isn't likely to make the\n> difference?\n\nSpeculative insertion abort is pretty rare in the real world, I bet.\nThe speculative insertion precheck is very likely to work almost\nalways with real workloads.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 31 Mar 2022 10:16:59 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-03-31 10:12:49 -0700, Peter Geoghegan wrote:\n> On Wed, Mar 30, 2022 at 9:59 PM Andres Freund <andres@anarazel.de> wrote:\n> > I'm not sure there's a proper bug on HEAD here. I think at worst it can delay\n> > the horizon increasing a bunch, by falsely not using an aggressive vacuum when\n> > we should have - might even be limited to a single autovacuum cycle.\n> \n> So, to be clear: vac_update_relstats() never actually considered the\n> new relfrozenxid value from its vacuumlazy.c caller to be \"in the\n> future\"?\n\nNo, I added separate debug messages for those, and also applied your patch,\nand it didn't trigger.\n\nI don't immediately see how we could end up computing a frozenxid value that\nwould be problematic? The pgcform->relfrozenxid value will always be the\n\"local\" value, which afaics can be behind the other database's value (and thus\nbehind the value from the relcache init file). 
But it can't be ahead, we have\nthe proper invalidations for that (I think).\n\n\nI do think we should apply a version of the warnings you have (with a WARNING\ninstead of PANIC obviously). I think it's bordering on insanity that we have\nso many paths to just silently fix stuff up around vacuum. It's like we want\nthings to be undebuggable, and to give users no warnings about something being\nup.\n\n\n> It just looked that way to the failing assertion in\n> vacuumlazy.c, because its own version of the original relfrozenxid was\n> stale from the beginning? And so the worst problem is probably just\n> that we don't use aggressive VACUUM when we really should in rare\n> cases?\n\nYes, I think that's right.\n\nCan you repro the issue with my recipe? FWIW, adding log_min_messages=debug5\nand fsync=off made the crash trigger more quickly.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 31 Mar 2022 10:50:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Thu, Mar 31, 2022 at 10:50 AM Andres Freund <andres@anarazel.de> wrote:\n> > So, to be clear: vac_update_relstats() never actually considered the\n> > new relfrozenxid value from its vacuumlazy.c caller to be \"in the\n> > future\"?\n>\n> No, I added separate debug messages for those, and also applied your patch,\n> and it didn't trigger.\n\nThe assert is \"Assert(diff > 0)\", and not \"Assert(diff >= 0)\". Plus\nthe other related assert I mentioned did not trigger. So when this\n\"diff\" assert did trigger, the value of \"diff\" must have been 0 (not a\nnegative value). 
While this state does technically indicate that the\n\"existing\" relfrozenxid value (actually a stale version) appears to be\n\"in the future\" (because the OldestXmin XID might still never have\nbeen allocated), it won't ever be in the future according to\nvac_update_relstats() (even if it used that version).\n\nI suppose that I might be wrong about that, somehow -- anything is\npossible. The important point is that there is currently no evidence\nthat this bug (or any very recent bug) could ever allow\nvac_update_relstats() to actually believe that it needs to update\nrelfrozenxid/relminmxid, purely because the existing value is in the\nfuture.\n\nThe fact that vac_update_relstats() doesn't log/warn when this happens\nis very unfortunate, but there is nevertheless no evidence that that\nwould have informed us of any bug on HEAD, even including the actual\nbug here, which is a bug in vacuumlazy.c (not in vac_update_relstats).\n\n> I do think we should apply a version of the warnings you have (with a WARNING\n> instead of PANIC obviously). I think it's bordering on insanity that we have\n> so many paths to just silently fix stuff up around vacuum. It's like we want\n> things to be undebuggable, and to give users no warnings about something being\n> up.\n\nYeah, it's just totally self defeating to not at least log it. I mean\nthis is a code path that is only hit once per VACUUM, so there is\npractically no risk of that causing any new problems.\n\n> Can you repro the issue with my recipe? FWIW, adding log_min_messages=debug5\n> and fsync=off made the crash trigger more quickly.\n\nI'll try to do that today. 
I'm not feeling the most energetic right\nnow, to be honest.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 31 Mar 2022 11:19:53 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Mar 31, 2022 at 11:19 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> The assert is \"Assert(diff > 0)\", and not \"Assert(diff >= 0)\".\n\nAttached is v15. I plan to commit the first two patches (the most\nsubstantial two patches by far) in the next couple of days, barring\nobjections.\n\nv15 removes this \"Assert(diff > 0)\" assertion from 0001. It's not\nadding any value, now that the underlying issue that it accidentally\nbrought to light is well understood (there are still more robust\nassertions to the relfrozenxid/relminmxid invariants). \"Assert(diff >\n0)\" is liable to fail until the underlying bug on HEAD is fixed, which\ncan be treated as separate work.\n\nI also refined the WARNING patch in v15. It now actually issues\nWARNINGs (rather than PANICs, which were just a temporary debugging\nmeasure in v14). Also fixed a compiler warning in this patch, based on\na complaint from CFBot's CompilerWarnings task. I can delay commiting\nthis WARNING patch until right before feature freeze. Seems best to\ngive others more opportunity for comments.\n\n-- \nPeter Geoghegan", "msg_date": "Fri, 1 Apr 2022 10:54:14 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-04-01 10:54:14 -0700, Peter Geoghegan wrote:\n> On Thu, Mar 31, 2022 at 11:19 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > The assert is \"Assert(diff > 0)\", and not \"Assert(diff >= 0)\".\n>\n> Attached is v15. 
I plan to commit the first two patches (the most\nsubstantial two patches by far) in the next couple of days, barring\nobjections.\n\nv15 removes this \"Assert(diff > 0)\" assertion from 0001. It's not\nadding any value, now that the underlying issue that it accidentally\nbrought to light is well understood (there are still more robust\nassertions to the relfrozenxid/relminmxid invariants). \"Assert(diff >\n0)\" is liable to fail until the underlying bug on HEAD is fixed, which\ncan be treated as separate work.\n\nI also refined the WARNING patch in v15. It now actually issues\nWARNINGs (rather than PANICs, which were just a temporary debugging\nmeasure in v14). Also fixed a compiler warning in this patch, based on\na complaint from CFBot's CompilerWarnings task. I can delay committing\nthis WARNING patch until right before feature freeze. Seems best to\ngive others more opportunity for comments.\n\n-- \nPeter Geoghegan", "msg_date": "Fri, 1 Apr 2022 10:54:14 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-04-01 10:54:14 -0700, Peter Geoghegan wrote:\n> On Thu, Mar 31, 2022 at 11:19 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > The assert is \"Assert(diff > 0)\", and not \"Assert(diff >= 0)\".\n>\n> Attached is v15. I plan to commit the first two patches (the most\n> substantial two patches by far) in the next couple of days, barring\n> objections.\n\nJust saw that you committed: Wee! I think this will be a substantial\nimprovement for our users.\n\n\nWhile I was writing the above I, again, realized that it'd be awfully nice to\nhave some accumulated stats about (auto-)vacuum's effectiveness. For us to get\nfeedback about improvements more easily and for users to know what aspects\nthey need to tune.\n\nKnowing how many times a table was vacuumed doesn't really tell that much, and\nrequiring to enable log_autovacuum_min_duration and then aggregating those\nresults is pretty painful (and version dependent).\n\nIf we just collected something like:\n- number of heap passes\n- time spent heap vacuuming\n- number of index scans\n- time spent index vacuuming\n- time spent delaying\n- percentage of non-yet-removable vs removable tuples\n\nit'd start to be a heck of a lot easier to judge how well autovacuum is\ncoping.\n\nIf we tracked the related pieces above in the index stats (or perhaps\nadditionally there), it'd also make it easier to judge the cost of different\nindexes.\n\n- Andres", "msg_date": "Sun, 3 Apr 2022 12:05:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Sun, Apr 3, 2022 at 12:05 PM Andres Freund <andres@anarazel.de> wrote:\n> Just saw that you committed: Wee! I think this will be a substantial\n> improvement for our users.\n\nI hope so! I think that it's much more useful as the basis for future\nwork than as a standalone thing. Users of Postgres 15 might not notice\na huge difference. But it opens up a lot of new directions to take\nVACUUM in.\n\nI would like to get rid of anti-wraparound VACUUMs and aggressive\nVACUUMs in Postgres 16. This isn't as radical as it sounds. 
It seems\nquite possible to find a way for *every* VACUUM to become aggressive\nprogressively and dynamically. We'll still need to have autovacuum.c\nknow about wraparound, but it should just be just another threshold,\nnot fundamentally different to the other thresholds (except that it's\nstill used when autovacuum is nominally disabled).\n\nThe behavior around autovacuum cancellations is probably still going\nto be necessary when age(relfrozenxid) gets too high, but it shouldn't\nbe conditioned on what age(relfrozenxid) *used to be*, when the\nautovacuum started. That could have been a long time ago. It should be\nbased on what's happening *right now*.\n\n> While I was writing the above I, again, realized that it'd be awfully nice to\n> have some accumulated stats about (auto-)vacuum's effectiveness. For us to get\n> feedback about improvements more easily and for users to know what aspects\n> they need to tune.\n\nStrongly agree. And I'm excited about the potential of the shared\nmemory stats patch to enable more thorough instrumentation, which\nallows us to improve things with feedback that we just can't get right\nnow.\n\nVACUUM is still too complicated -- that makes this kind of analysis\nmuch harder, even for experts. You need more continuous behavior to\nget value from this kind of analysis. There are too many things that\nmight end up mattering, that really shouldn't ever matter. Too much\npotential for strange illogical discontinuities in performance over\ntime.\n\nHaving only one type of VACUUM (excluding VACUUM FULL) will be much\neasier for users to reason about. But I also think that it'll be much\neasier for us to reason about. For example, better autovacuum\nscheduling will be made much easier if autovacuum.c can just assume\nthat every VACUUM operation will do the same amount of work. 
(Another\nproblem with the scheduling is that it uses ANALYZE statistics\n(sampling) in a way that just doesn't make any sense for something\nlike VACUUM, which is an inherently dynamic and cyclic process.)\n\nNone of this stuff has to rely on my patch for freezing. We don't\nnecessarily have to make every VACUUM advance relfrozenxid to do all\nthis. The important point is that we definitely shouldn't be putting\noff *all* freezing of all-visible pages in non-aggressive VACUUMs (or\nin VACUUMs that are not expected to advance relfrozenxid). Even a very\nconservative implementation could achieve all this; we need only\nspread out the burden of freezing all-visible pages over time, across\nmultiple VACUUM operations. Make the behavior continuous.\n\n> Knowing how many times a table was vacuumed doesn't really tell that much, and\n> requiring to enable log_autovacuum_min_duration and then aggregating those\n> results is pretty painful (and version dependent).\n\nYeah. Ideally we could avoid making the output of\nlog_autovacuum_min_duration into an API, by having a real API instead.\nThe output probably needs to evolve some more. A lot of very basic\ninformation wasn't there until recently.\n\n> If we just collected something like:\n> - number of heap passes\n> - time spent heap vacuuming\n> - number of index scans\n> - time spent index vacuuming\n> - time spent delaying\n\nYou forgot FPIs.\n\n> - percentage of non-yet-removable vs removable tuples\n\nI think that we should address this directly too. By \"taking a\nsnapshot of the visibility map\", so we at least don't scan/vacuum heap\npages that don't really need it. This is also valuable because it\nmakes slowing down VACUUM (maybe slowing it down a lot) have fewer\ndownsides. 
At least we'll have \"locked in\" our scanned_pages, which we\ncan figure out in full before we really scan even one page.\n\n> it'd start to be a heck of a lot easier to judge how well autovacuum is\n> coping.\n\nWhat about the potential of the shared memory stats stuff to totally\nreplace the use of ANALYZE stats in autovacuum.c? Possibly with help\nfrom vacuumlazy.c, and the visibility map?\n\nI see a lot of potential for exploiting the visibility map more, both\nwithin vacuumlazy.c itself, and for autovacuum.c scheduling [1]. I'd\nprobably start with the scheduling stuff, and only then work out how\nto show users more actionable information.\n\n[1] https://postgr.es/m/CAH2-Wzkt9Ey9NNm7q9nSaw5jdBjVsAq3yvb4UT4M93UaJVd_xg@mail.gmail.com\n--\nPeter Geoghegan\n\n\n", "msg_date": "Sun, 3 Apr 2022 18:52:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Fri, Apr 1, 2022 at 10:54 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I also refined the WARNING patch in v15. It now actually issues\n> WARNINGs (rather than PANICs, which were just a temporary debugging\n> measure in v14).\n\nGoing to commit this remaining patch tomorrow, barring objections.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 4 Apr 2022 19:32:13 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "Hi,\n\nOn 2022-04-04 19:32:13 -0700, Peter Geoghegan wrote:\n> On Fri, Apr 1, 2022 at 10:54 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I also refined the WARNING patch in v15. It now actually issues\n> > WARNINGs (rather than PANICs, which were just a temporary debugging\n> > measure in v14).\n> \n> Going to commit this remaining patch tomorrow, barring objections.\n\nThe remaining patch are the warnings in vac_update_relstats(), correct? 
I\nguess one could argue they should be LOG rather than WARNING, but I find the\nproject stance on that pretty impractical. So warning's ok with me.\n\nNot sure why you used errmsg_internal()?\n\nOtherwise LGTM.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 4 Apr 2022 20:18:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases, relfrozenxid\n optimizations" }, { "msg_contents": "On Mon, Apr 4, 2022 at 8:18 PM Andres Freund <andres@anarazel.de> wrote:\n> The remaining patch are the warnings in vac_update_relstats(), correct? I\n> guess one could argue they should be LOG rather than WARNING, but I find the\n> project stance on that pretty impractical. So warning's ok with me.\n\nRight. The reason I used WARNINGs was because it matches vaguely\nrelated WARNINGs in vac_update_relstats()'s sibling function,\nvacuum_set_xid_limits().\n\n> Not sure why you used errmsg_internal()?\n\nThe usual reason for using errmsg_internal(), I suppose. I tend to do\nthat with corruption related messages on the grounds that they're\nusually highly obscure issues that are (by definition) never supposed\nto happen. The only thing that a user can be expected to do with the\ninformation from the message is to report it to -bugs, or find some\nother similar report.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 4 Apr 2022 20:25:44 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Mon, Apr 4, 2022 at 8:25 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Right. 
The reason I used WARNINGs was because it matches vaguely\n> related WARNINGs in vac_update_relstats()'s sibling function,\n> vacuum_set_xid_limits().\n\nOkay, pushed the relfrozenxid warning patch.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 5 Apr 2022 09:45:30 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "\nOn 4/3/22 12:05 PM, Andres Freund wrote:\n> While I was writing the above I, again, realized that it'd be awfully nice to\n> have some accumulated stats about (auto-)vacuum's effectiveness. For us to get\n> feedback about improvements more easily and for users to know what aspects\n> they need to tune.\n>\n> Knowing how many times a table was vacuumed doesn't really tell that much, and\n> requiring to enable log_autovacuum_min_duration and then aggregating those\n> results is pretty painful (and version dependent).\n>\n> If we just collected something like:\n> - number of heap passes\n> - time spent heap vacuuming\n> - number of index scans\n> - time spent index vacuuming\n> - time spent delaying\nThe number of passes would let you know if maintenance_work_mem is too \nsmall (or to stop killing 187M+ tuples in one go). The timing info would \ngive you an idea of the impact of throttling.\n> - percentage of non-yet-removable vs removable tuples\n\nThis'd give you an idea how bad your long-running-transaction problem is.\n\nAnother metric I think would be useful is the average utilization of \nyour autovac workers. No spare workers means you almost certainly have \ntables that need vacuuming but have to wait. As a single number, it'd \nalso be much easier for users to understand. 
I'm no stats expert, but \none way to handle that cheaply would be to maintain an \nengineering-weighted-mean of the percentage of autovac workers that are \nin use at the end of each autovac launcher cycle (though that would \nprobably not work great for people that have extreme values for launcher \ndelay, or constantly muck with launcher_delay).\n\n>\n> it'd start to be a heck of a lot easier to judge how well autovacuum is\n> coping.\n>\n> If we tracked the related pieces above in the index stats (or perhaps\n> additionally there), it'd also make it easier to judge the cost of different\n> indexes.\n>\n> - Andres\n>\n>\n\n\n", "msg_date": "Thu, 14 Apr 2022 18:19:24 -0500", "msg_from": "Jim Nasby <nasbyj@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" }, { "msg_contents": "On Thu, Apr 14, 2022 at 4:19 PM Jim Nasby <nasbyj@amazon.com> wrote:\n> > - percentage of non-yet-removable vs removable tuples\n>\n> This'd give you an idea how bad your long-running-transaction problem is.\n\nVACUUM fundamentally works by removing those tuples that are\nconsidered dead according to an XID-based cutoff established when the\noperation begins. And so many very long running VACUUM operations will\nsee dead-but-not-removable tuples even when there are absolutely no\nlong running transactions (nor any other VACUUM operations). The only\nlong running thing involved might be our own long running VACUUM\noperation.\n\nI would like to reduce the number of non-removal dead tuples\nencountered by VACUUM by \"locking in\" heap pages that we'd like to\nscan up front. This would work by having VACUUM create its own local\nin-memory copy of the visibility map before it even starts scanning\nheap pages. That way VACUUM won't end up visiting heap pages just\nbecause they were concurrently modified half way through our VACUUM\n(by some other transactions). 
We don't really need to scan these pages\nat all -- they have dead tuples, but not tuples that are \"dead to\nVACUUM\".\n\nThe key idea here is to remove a big unnatural downside to slowing\nVACUUM down. The cutoff would almost work like an MVCC snapshot, that\ndescribed precisely the work that VACUUM needs to do (which pages to\nscan) up-front. Once that's locked in, the amount of work we're\nrequired to do cannot go up as we're doing it (or it'll be less of an\nissue, at least).\n\nIt would also help if VACUUM didn't scan pages that it already knows\ndon't have any dead tuples. The current SKIP_PAGES_THRESHOLD rule\ncould easily be improved. That's almost the same problem.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 14 Apr 2022 17:02:45 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Removing more vacuumlazy.c special cases,\n relfrozenxid optimizations" } ]
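The "lock in scanned_pages up front" idea in the thread above can be sketched as a toy model. This is not PostgreSQL code (the real visibility map is a bitmap in shared buffers, and the names below are invented for illustration); it only shows the invariant being argued for: a snapshot of the all-visible bits taken before the scan keeps VACUUM's work set from growing when concurrent transactions clear bits mid-scan.

```python
def pages_to_scan(vm):
    # Only pages that are not marked all-visible need scanning.
    return [page for page, all_visible in enumerate(vm) if not all_visible]

# A hypothetical 16-page heap: the first 8 pages are all-visible.
vm = [True] * 8 + [False] * 8

# "Lock in" scanned_pages before scanning even one page.
snapshot = list(vm)

# A concurrent transaction modifies page 2 mid-VACUUM, clearing its bit.
vm[2] = False

print(len(pages_to_scan(vm)))        # 9: the live map's work set grew
print(len(pages_to_scan(snapshot)))  # 8: the locked-in work set did not
```

Page 2 still gets cleaned up eventually; under this scheme its dead tuples are simply not "dead to VACUUM" for the current run, and the next VACUUM picks the page up.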
[ { "msg_contents": "Hello.\n\nI noticed obsolete lines in the function comment of pgss_store().\n\n * If queryId is 0 then this is a utility statement for which we couldn't\n * compute a queryId during parse analysis, and we should compute a suitable\n * queryId internally.\n\nPreviously the function actually calculated queryId using\npgss_hash_string when the given queryId was 0, but since 14 the\nfunction simply refuses to work. We can just drop the paragraph. Or\nwe can emphasize the change in behavior by describing the current\nbehavior for the value.\n\nThe attached patch is doing the latter.\n\n * queryId is supposed to be a valid value, otherwise this function dosen't\n * calucate it by its own as before then returns immediately.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 22 Nov 2021 15:38:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "An obsolete comment of pg_stat_statements" }, { "msg_contents": "At Mon, 22 Nov 2021 15:38:23 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> * queryId is supposed to be a valid value, otherwise this function dosen't\n> * calucate it by its own as before then returns immediately.\n\nMmm. That's bad. 
This is the corrected version.\n\n * queryId is supposed to be a valid value, otherwise this function doesn't\n * calculate it by its own as before then returns immediately.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 22 Nov 2021 15:48:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: An obsolete comment of pg_stat_statements" }, { "msg_contents": "On Mon, Nov 22, 2021 at 2:48 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 22 Nov 2021 15:38:23 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > * queryId is supposed to be a valid value, otherwise this function dosen't\n> > * calucate it by its own as before then returns immediately.\n>\n> Mmm. That's bad. This is the corrected version.\n>\n> * queryId is supposed to be a valid value, otherwise this function doesn't\n> * calculate it by its own as before then returns immediately.\n\nAh good catch! Indeed the semantics changed and I missed that comment.\n\nI think that the new comment should be a bit more precise about what\nis a valid value and should probably not refer to a previous version\nof the code. 
How about something like:\n\n- * If queryId is 0 then this is a utility statement for which we couldn't\n- * compute a queryId during parse analysis, and we should compute a suitable\n- * queryId internally.\n+ * If queryId is 0 then no query fingerprinting source has been enabled, so we\n+ * act as if the extension was disabled: silently exit without doing any work.\n *\n\n\n", "msg_date": "Mon, 22 Nov 2021 22:50:04 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: An obsolete comment of pg_stat_statements" }, { "msg_contents": "At Mon, 22 Nov 2021 22:50:04 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Mon, Nov 22, 2021 at 2:48 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Mon, 22 Nov 2021 15:38:23 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > > * queryId is supposed to be a valid value, otherwise this function dosen't\n> > > * calucate it by its own as before then returns immediately.\n> >\n> > Mmm. That's bad. This is the correted version.\n> >\n> > * queryId is supposed to be a valid value, otherwise this function doesn't\n> > * calculate it by its own as before then returns immediately.\n> \n> Ah good catch! Indeed the semantics changed and I missed that comment.\n> \n> I think that the new comment should be a bit more precise about what\n> is a valid value and should probably not refer to a previous version\n> of the code. How about something like:\n> \n> - * If queryId is 0 then this is a utility statement for which we couldn't\n> - * compute a queryId during parse analysis, and we should compute a suitable\n> - * queryId internally.\n> + * If queryId is 0 then no query fingerprinting source has been enabled, so we\n> + * act as if the extension was disabled: silently exit without doing any work.\n> *\n\nThanks! Looks better. 
It is used as-is in the attached.\n\nAnd I will register this to the next CF.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 24 Dec 2021 15:32:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: An obsolete comment of pg_stat_statements" }, { "msg_contents": "On Fri, Dec 24, 2021 at 03:32:10PM +0900, Kyotaro Horiguchi wrote:\n> Thanks! Looks better. It is used as-is in the attached.\n> \n> And I will register this to the next CF.\n\nDo we really need to have this comment in the function header? The\nsame is explained a couple of lines down so this feels like a\nduplicate, and it is hard to miss it with the code shaped as-is (aka\nthe relationship between compute_query_id and queryId and the\nconsequences on what's stored in this case).\n--\nMichael", "msg_date": "Fri, 24 Dec 2021 21:02:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: An obsolete comment of pg_stat_statements" }, { "msg_contents": "On Fri, Dec 24, 2021 at 09:02:10PM +0900, Michael Paquier wrote:\n> Do we really need to have this comment in the function header? The\n> same is explained a couple of lines down so this feels like a\n> duplicate, and it is hard to miss it with the code shaped as-is (aka\n> the relationship between compute_query_id and queryId and the\n> consequences on what's stored in this case).\n\nThe simpler the better here. 
So, I have just removed this comment\nafter thinking more about this.\n--\nMichael", "msg_date": "Mon, 3 Jan 2022 17:36:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: An obsolete comment of pg_stat_statements" }, { "msg_contents": "At Mon, 3 Jan 2022 17:36:25 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Dec 24, 2021 at 09:02:10PM +0900, Michael Paquier wrote:\n> > Do we really need to have this comment in the function header? The\n> > same is explained a couple of lines down so this feels like a\n> > duplicate, and it is hard to miss it with the code shaped as-is (aka\n> > the relationship between compute_query_id and queryId and the\n> > consequences on what's stored in this case).\n> \n> The simpler the better here. So, I have just removed this comment\n> after thinking more about this.\n\nI'm fine with it. Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 04 Jan 2022 09:54:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: An obsolete comment of pg_stat_statements" } ]
[ { "msg_contents": "Hi:\n\nShould we guarantee the sequence's nextval should never be rolled back\neven in a crashed recovery case?\nI can produce the rollback in the following case:\n\nSession 1:\nCREATE SEQUENCE s;\nBEGIN;\nSELECT nextval('s'); \\watch 0.01\n\nSession 2:\nkill -9 {sess1.pid}\n\nAfter the restart, the nextval('s') may be rolled back (less than the\nlast value from session 1).\n\nThe reason is because we never flush the xlog for the nextval_internal\nfor the above case. So if\nthe system crashes, there is nothing to redo from. It can be fixed\nwith the following online change\ncode.\n\n@@ -810,6 +810,8 @@ nextval_internal(Oid relid, bool check_permissions)\n recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);\n\n PageSetLSN(page, recptr);\n+\n+ XLogFlush(recptr);\n }\n\n\nIf a user uses sequence value for some external systems, the\nrollbacked value may surprise them.\n[I didn't run into this issue in any real case, I just studied xlog /\nsequence stuff today and found this case].\n\n-- \nBest Regards\nAndy Fan\n\n\n", "msg_date": "Mon, 22 Nov 2021 14:57:00 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Sequence's value can be rollback after a crashed recovery." 
}, { "msg_contents": "On Sunday, November 21, 2021, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> Should we guarantee the sequence's nextval should never be rolled back\n> even in a crashed recovery case?\n> I can produce the rollback in the following case:\n>\n\nThis seems to be the same observation that was made a little over a year\nago.\n\nhttps://www.postgresql.org/message-id/flat/ea6485e3-98d0-24a7-094c-87f9d5f9b18f%40amazon.com#4cfe7217c829419b769339465e8c2915\n\nI don’t think the suggested documentation ever got written but haven’t\nlooked for it either.\n\nDavid J.\n", "msg_date": "Mon, 22 Nov 2021 00:07:15 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." }, { "msg_contents": "On Mon, 2021-11-22 at 14:57 +0800, Andy Fan wrote:\n> Should we guarantee the sequence's nextval should never be rolled back\n> even in a crashed recovery case?\n> I can produce the rollback in the following case:\n> \n> Session 1:\n> CREATE SEQUENCE s;\n> BEGIN;\n> SELECT nextval('s'); \\watch 0.01\n> \n> Session 2:\n> kill -9 {sess1.pid}\n> \n> After the restart, the  nextval('s') may be rolled back (less than the\n> last value  from session 1).\n> \n> The reason is because we never flush the xlog for the nextval_internal\n> for the above case.  So if\n> the system crashes, there is nothing to redo from. 
It can be fixed\n> with the following online change\n> code.\n> \n> @@ -810,6 +810,8 @@ nextval_internal(Oid relid, bool check_permissions)\n>                 recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);\n> \n>                 PageSetLSN(page, recptr);\n> +\n> +               XLogFlush(recptr);\n>         }\n> \n> \n> If a user uses sequence value for some external systems,  the\n> rollbacked value may surprise them.\n> [I didn't run into this issue in any real case,  I just studied xlog /\n> sequence stuff today and found this case].\n\nI think that is a bad idea.\nIt will have an intolerable performance impact on OLTP queries, doubling\nthe number of I/O requests for many cases.\n\nPerhaps it would make sense to document that you should never rely on\nsequence values from an uncommitted transaction.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 22 Nov 2021 08:22:24 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." }, { "msg_contents": "> > The reason is because we never flush the xlog for the nextval_internal\n> > for the above case. So if\n> > the system crashes, there is nothing to redo from. It can be fixed\n> > with the following online change\n> > code.\n> >\n> > @@ -810,6 +810,8 @@ nextval_internal(Oid relid, bool check_permissions)\n> > recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);\n> >\n> > PageSetLSN(page, recptr);\n> > +\n> > + XLogFlush(recptr);\n> > }\n> >\n> >\n> > If a user uses sequence value for some external systems, the\n> > rollbacked value may surprise them.\n> > [I didn't run into this issue in any real case, I just studied xlog /\n> > sequence stuff today and found this case].\n>\n> I think that is a bad idea.\n> It will have an intolerable performance impact on OLTP queries, doubling\n> the number of I/O requests for many cases.\n>\n\nThe performance argument was expected before this writing. 
If we look at the\nnextval_interval more carefully, we can find it would not flush the xlog every\ntime even the sequence's cachesize is 1. Currently It happens every 32 times\non the nextval_internal at the worst case.\n\n> Perhaps it would make sense to document that you should never rely on\n> sequence values from an uncommitted transaction.\n\nI am OK with this if more people think this is the solution.\n\n\n-- \nBest Regards\nAndy Fan\n\n\n", "msg_date": "Mon, 22 Nov 2021 15:43:23 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." }, { "msg_contents": "On Mon, 2021-11-22 at 15:43 +0800, Andy Fan wrote:\n> > > The reason is because we never flush the xlog for the nextval_internal\n> > > for the above case.  So if\n> > > the system crashes, there is nothing to redo from. It can be fixed\n> > > with the following online change\n> > > code.\n> > > \n> > > @@ -810,6 +810,8 @@ nextval_internal(Oid relid, bool check_permissions)\n> > >                  recptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);\n> > > \n> > >                  PageSetLSN(page, recptr);\n> > > +\n> > > +               XLogFlush(recptr);\n> > >          }\n> > > \n> > > \n> > > If a user uses sequence value for some external systems,  the\n> > > rollbacked value may surprise them.\n> > > [I didn't run into this issue in any real case,  I just studied xlog /\n> > > sequence stuff today and found this case].\n> > \n> > I think that is a bad idea.\n> > It will have an intolerable performance impact on OLTP queries, doubling\n> > the number of I/O requests for many cases.\n> \n> The performance argument was expected before this writing. If we look at the\n> nextval_interval more carefully, we can find it would not flush the xlog every\n> time even the sequence's cachesize is 1. Currently It happens every 32 times\n> on the nextval_internal at the worst case.\n\nRight, I didn't think of that. 
Still, I'm -1 on this performance regression.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 22 Nov 2021 14:09:52 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." }, { "msg_contents": "On 11/22/21, 5:10 AM, \"Laurenz Albe\" <laurenz.albe@cybertec.at> wrote:\r\n> On Mon, 2021-11-22 at 15:43 +0800, Andy Fan wrote:\r\n>> The performance argument was expected before this writing. If we look at the\r\n>> nextval_interval more carefully, we can find it would not flush the xlog every\r\n>> time even the sequence's cachesize is 1. Currently It happens every 32 times\r\n>> on the nextval_internal at the worst case.\r\n>\r\n> Right, I didn't think of that. Still, I'm -1 on this performance regression.\r\n\r\nI periodically hear rumblings about this behavior as well. At the\r\nvery least, it certainly ought to be documented if it isn't yet. I\r\nwouldn't mind trying my hand at that. Perhaps we could also add a new\r\nconfiguration parameter if users really want to take the performance\r\nhit.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 22 Nov 2021 19:50:42 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." }, { "msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> I periodically hear rumblings about this behavior as well. At the\n> very least, it certainly ought to be documented if it isn't yet. I\n> wouldn't mind trying my hand at that. Perhaps we could also add a new\n> configuration parameter if users really want to take the performance\n> hit.\n\nA sequence's cache length is already configurable, no?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Nov 2021 15:31:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." 
}, { "msg_contents": "On 11/22/21 12:31, Tom Lane wrote:\n> \"Bossart, Nathan\" <bossartn@amazon.com> writes:\n>> I periodically hear rumblings about this behavior as well. At the\n>> very least, it certainly ought to be documented if it isn't yet. I\n>> wouldn't mind trying my hand at that. Perhaps we could also add a new\n>> configuration parameter if users really want to take the performance\n>> hit.\n> \n> A sequence's cache length is already configurable, no?\n> \n\nCache length isn't related to the problem here.\n\nThe problem is that PostgreSQL sequences are entirely unsafe to use from\na durability perspective, unless there's DML in the same transaction.\n\nUsers might normally think that \"commit\" makes things durable.\nUnfortunately, IIUC, that's not true for sequences in PostgreSQL.\n\n-Jeremy\n\n\nPS. my bad on the documentation thing... I just noticed that I said a\nyear ago I'd take a swing at a doc update, and I never did that!!\nBetween Nate and I we'll get something proposed.\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n", "msg_date": "Mon, 22 Nov 2021 12:42:12 -0800", "msg_from": "Jeremy Schneider <schneider@ardentperf.com>", "msg_from_op": false, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." }, { "msg_contents": "On Tue, Nov 23, 2021 at 4:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> > I periodically hear rumblings about this behavior as well. At the\n> > very least, it certainly ought to be documented if it isn't yet. I\n> > wouldn't mind trying my hand at that. Perhaps we could also add a new\n> > configuration parameter if users really want to take the performance\n> > hit.\n>\n> A sequence's cache length is already configurable, no?\n>\n>\nWe can hit this issue even cache=1. 
And even if we added the XLogFlush,\nwith _cachesize=1_,  the Xlog is still recorded/flushed every 32 values.\n\nI know your opinion about this at [1], IIUC you probably miss the\nSEQ_LOG_VALS\ndesign, it was designed for the  performance reason to avoid frequent xlog\nupdates already.\nBut after that,  the XLogSync is still not called which caused this issue.\n\n[1] https://www.postgresql.org/message-id/19521.1588183354%40sss.pgh.pa.us\n\n-- \nBest Regards\nAndy Fan\n", "msg_date": "Tue, 23 Nov 2021 09:27:11 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." }, { "msg_contents": "\n\nOn 11/22/21 9:42 PM, Jeremy Schneider wrote:\n> On 11/22/21 12:31, Tom Lane wrote:\n>> \"Bossart, Nathan\" <bossartn@amazon.com> writes:\n>>> I periodically hear rumblings about this behavior as well. At the\n>>> very least, it certainly ought to be documented if it isn't yet. I\n>>> wouldn't mind trying my hand at that. Perhaps we could also add a new\n>>> configuration parameter if users really want to take the performance\n>>> hit.\n>>\n>> A sequence's cache length is already configurable, no?\n>>\n> \n> Cache length isn't related to the problem here.\n> \n> The problem is that PostgreSQL sequences are entirely unsafe to use from\n> a durability perspective, unless there's DML in the same transaction.\n> \n> Users might normally think that \"commit\" makes things durable.\n> Unfortunately, IIUC, that's not true for sequences in PostgreSQL.\n> \n\nThat's not what the example in this thread demonstrates, though. There's\nno COMMIT in that example, so it shows that we may discard the nextval()\nin uncommitted transactions. I fail to see how that's less durable than\nany other DML (e.g. we don't complain about INSERT not being durable if\nyou don't commit the change).\n\nIf you can show that the sequence goes back after a commit, that'd be an\nactual durability issue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 23 Nov 2021 05:00:34 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." 
Perhaps we could also add a new\n>>> configuration parameter if users really want to take the performance\n>>> hit.\n>>\n>> A sequence's cache length is already configurable, no?\n>>\n> \n> Cache length isn't related to the problem here.\n> \n> The problem is that PostgreSQL sequences are entirely unsafe to use from\n> a durability perspective, unless there's DML in the same transaction.\n> \n> Users might normally think that \"commit\" makes things durable.\n> Unfortunately, IIUC, that's not true for sequences in PostgreSQL.\n> \n\nThat's not what the example in this thread demonstrates, though. There's\nno COMMIT in that example, so it shows that we may discard the nextval()\nin uncommitted transactions. I fail to see how that's less durable than\nany other DML (e.g. we don't complain about INSERT not being durable if\nyou don't commit the change).\n\nIf you can show that the sequence goes back after a commit, that'd be an\nactual durability issue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 23 Nov 2021 05:00:34 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." }, { "msg_contents": "On Tue, Nov 23, 2021 at 9:30 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n\n> On 11/22/21 9:42 PM, Jeremy Schneider wrote:\n> > On 11/22/21 12:31, Tom Lane wrote:\n> >> \"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> >>> I periodically hear rumblings about this behavior as well. At the\n> >>> very least, it certainly ought to be documented if it isn't yet. I\n> >>> wouldn't mind trying my hand at that. 
Perhaps we could also add a new\n> >>> configuration parameter if users really want to take the performance\n> >>> hit.\n> >>\n> >> A sequence's cache length is already configurable, no?\n> >>\n> >\n> > Cache length isn't related to the problem here.\n> >\n> > The problem is that PostgreSQL sequences are entirely unsafe to use from\n> > a durability perspective, unless there's DML in the same transaction.\n> >\n> > Users might normally think that \"commit\" makes things durable.\n> > Unfortunately, IIUC, that's not true for sequences in PostgreSQL.\n> >\n>\n> That's not what the example in this thread demonstrates, though. There's\n> no COMMIT in that example, so it shows that we may discard the nextval()\n> in uncommitted transactions. I fail to see how that's less durable than\n> any other DML (e.g. we don't complain about INSERT not being durable if\n> you don't commit the change).\n>\n> If you can show that the sequence goes back after a commit, that'd be an\n> actual durability issue.\n\nI think at this thread[1], which claimed to get this issue even after\ncommit, I haven't tried it myself though.\n\n[1] https://www.postgresql.org/message-id/flat/ea6485e3-98d0-24a7-094c-87f9d5f9b18f%40amazon.com#4cfe7217c829419b769339465e8c2915\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Nov 2021 09:52:25 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." }, { "msg_contents": "On 11/23/21 5:22 AM, Dilip Kumar wrote:\n> On Tue, Nov 23, 2021 at 9:30 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> \n>> On 11/22/21 9:42 PM, Jeremy Schneider wrote:\n>>> On 11/22/21 12:31, Tom Lane wrote:\n>>>> \"Bossart, Nathan\" <bossartn@amazon.com> writes:\n>>>>> I periodically hear rumblings about this behavior as well. At the\n>>>>> very least, it certainly ought to be documented if it isn't yet. 
I\n>>>>> wouldn't mind trying my hand at that. Perhaps we could also add a new\n>>>>> configuration parameter if users really want to take the performance\n>>>>> hit.\n>>>>\n>>>> A sequence's cache length is already configurable, no?\n>>>>\n>>>\n>>> Cache length isn't related to the problem here.\n>>>\n>>> The problem is that PostgreSQL sequences are entirely unsafe to use from\n>>> a durability perspective, unless there's DML in the same transaction.\n>>>\n>>> Users might normally think that \"commit\" makes things durable.\n>>> Unfortunately, IIUC, that's not true for sequences in PostgreSQL.\n>>>\n>>\n>> That's not what the example in this thread demonstrates, though. There's\n>> no COMMIT in that example, so it shows that we may discard the nextval()\n>> in uncommitted transactions. I fail to see how that's less durable than\n>> any other DML (e.g. we don't complain about INSERT not being durable if\n>> you don't commit the change).\n>>\n>> If you can show that the sequence goes back after a commit, that'd be an\n>> actual durability issue.\n> \n> I think at this thread[1], which claimed to get this issue even after\n> commit, I haven't tried it myself though.\n> \n> [1] https://www.postgresql.org/message-id/flat/ea6485e3-98d0-24a7-094c-87f9d5f9b18f%40amazon.com#4cfe7217c829419b769339465e8c2915\n> \n\nI did try, and I haven't been able to reproduce that behavior (on\nmaster, at least).\n\nI see Tom speculated we may not flush WAL if a transaction only does\nnextval() in that other thread, but I don't think that's true. AFAICS if\nthe nextval() call writes stuff to WAL, the RecordTransactionCommit will\nhave wrote_xlog=true and valid XID. 
And so it'll do the usual usual\nXLogFlush() etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 23 Nov 2021 05:55:46 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." }, { "msg_contents": ">\n>\n> > I think at this thread[1], which claimed to get this issue even after\n> > commit, I haven't tried it myself though.\n> >\n> > [1]\n> https://www.postgresql.org/message-id/flat/ea6485e3-98d0-24a7-094c-87f9d5f9b18f%40amazon.com#4cfe7217c829419b769339465e8c2915\n> >\n>\n> I did try, and I haven't been able to reproduce that behavior (on\n> master, at least).\n>\n>\nI agree with this, the commit would flush the xlog and persist the change.\n\n\n> I see Tom speculated we may not flush WAL if a transaction only does\n> nextval() in that other thread, but I don't think that's true. AFAICS if\n> the nextval() call writes stuff to WAL, the RecordTransactionCommit will\n> have wrote_xlog=true and valid XID. And so it'll do the usual usual\n> XLogFlush() etc.\n>\n>\nI agree with this as well. Or else, how can we replicate it to standby if\nuser only runs the SELECT nextval('s') in a transaction.\n\n> I fail to see how that's less durable than any other DML (e.g. we don't\n> complain about INSERT not being durable if you don't commit the change).\n> If you can show that the sequence goes back after a commit, that'd be an\nactual durability issue.\n\nThis can't be called a transaction's durability issue, but people usually\nthink\nthe value of sequence will not rollback. 
so it may surprise people if that\nhappens.\n\n-- \nBest Regards\nAndy Fan\n", "msg_date": "Tue, 23 Nov 2021 21:49:14 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." 
}, { "msg_contents": "On 11/23/21 05:49, Andy Fan wrote:\n> \n> > I think at this thread[1], which claimed to get this issue even after\n> > commit, I haven't tried it myself though.\n> >\n> > [1]\n> https://www.postgresql.org/message-id/flat/ea6485e3-98d0-24a7-094c-87f9d5f9b18f%40amazon.com#4cfe7217c829419b769339465e8c2915\n> <https://www.postgresql.org/message-id/flat/ea6485e3-98d0-24a7-094c-87f9d5f9b18f%40amazon.com#4cfe7217c829419b769339465e8c2915>\n> >\n> \n> I did try, and I haven't been able to reproduce that behavior (on\n> master, at least).\n> \n> \n> I agree with this,  the commit would flush the xlog and persist the change. \n\nOn that older thread, there were exact reproductions in the first email\nfrom Vini - two of them - available here:\n\nhttps://gist.github.com/vinnix/2fe148e3c42e11269bac5fcc5c78a8d1\n\nNathan helped me realize a mistake I've made here.\n\nThe second reproduction involved having psql run nextval() inside of an\nexplicit transaction. I had assumed that the transaction would be\ncommitted when psql closed the session without error. This is because in\nOracle SQLPlus (my original RDBMS background), the \"exitcommit\" setting\nhas a default value giving this behavior.\n\nThis was a silly mistake on my part. When PostgreSQL psql closes the\nconnection with an open transaction, it turns out that the PostgreSQL\nserver will abort the transaction rather than committing it. 
(Oracle\nDBAs be aware!)\n\nNonetheless, Vini's first reproduction did not make this same mistake.\nIt involved 10 psql sessions in parallel using implicit transactions,\nsuspending I/O (via the linux device mapper), and killing PG while the\nI/O is suspended.\n\nGiven my mistake on the second repro, I want to look a little closer at\nthis first reproduction and revisit whether it's actually demonstrating\na corner case where one could claim that durability isn't being handled\ncorrectly - that \"COMMIT\" is returning successfully to the application,\nand yet the sequence numbers are being repeated. Maybe there's something\ninvolving the linux I/O path coming into play here.\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n", "msg_date": "Tue, 23 Nov 2021 13:08:40 -0800", "msg_from": "Jeremy Schneider <schneider@ardentperf.com>", "msg_from_op": false, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> I see Tom speculated we may not flush WAL if a transaction only does\n> nextval() in that other thread, but I don't think that's true. AFAICS if\n> the nextval() call writes stuff to WAL, the RecordTransactionCommit will\n> have wrote_xlog=true and valid XID. And so it'll do the usual usual\n> XLogFlush() etc.\n\nYeah. 
I didn't look at the code during last year's discussion, but now\nI have, and I see that if nextval_internal() decides it needs to make\na WAL entry, it is careful to ensure the xact has an XID:\n\n /*\n * If something needs to be WAL logged, acquire an xid, so this\n * transaction's commit will trigger a WAL flush and wait for syncrep.\n * It's sufficient to ensure the toplevel transaction has an xid, no need\n * to assign xids subxacts, that'll already trigger an appropriate wait.\n * (Have to do that here, so we're outside the critical section)\n */\n if (logit && RelationNeedsWAL(seqrel))\n GetTopTransactionId();\n\nSo that eliminates my worry that RecordTransactionCommit would decide\nit doesn't need to do anything. If there was a WAL record, we will\nflush it at commit (and not before).\n\nAs you say, this is exactly as durable as any other DML operation.\nSo I don't feel a need to change the code behavior.\n\nThe problematic situation seems to be where an application gets\na nextval() result and uses it for some persistent outside-the-DB\nstate, without having made sure that the nextval() was committed.\nYou could say that that's the same rookie error as relying on the\npersistence of any other uncommitted DML ... except that at [1]\nwe say\n\n To avoid blocking concurrent transactions that obtain numbers from the\n same sequence, a nextval operation is never rolled back; that is, once\n a value has been fetched it is considered used and will not be\n returned again. This is true even if the surrounding transaction later\n aborts, or if the calling query ends up not using the value.\n\nIt's not so unreasonable to read that as promising persistence over\ncrashes as well as xact aborts. So I think we need to improve the docs\nhere. 
A minimal fix would be to leave the existing text alone and add a\nseparate para to the <caution> block, along the lines of\n\n However, the above statements do not apply if the database cluster\n crashes before committing the transaction containing the nextval\n operation. In that case the sequence advance might not have made its\n way to persistent storage, so that it is uncertain whether the same\n value can be returned again after the cluster restarts. If you wish\n to use a nextval result for persistent outside-the-database purposes,\n make sure that the nextval has been committed before doing so.\n\nI wonder though if we shouldn't try to improve the existing text.\nThe phrasing \"never rolled back\" seems like it's too easily\nmisinterpreted. Maybe rewrite the <caution> block like\n\n To avoid blocking concurrent transactions that obtain numbers from the\n same sequence, the value obtained by nextval is not reclaimed for\n re-use if the calling transaction later aborts. This means that\n transaction aborts or database crashes can result in gaps in the\n sequence of assigned values. That can happen without a transaction\n abort, too.\n -- this text is unchanged: --\n For example an INSERT with an ON CONFLICT clause will compute the\n to-be-inserted tuple, including doing any required nextval calls,\n before detecting any conflict that would cause it to follow the ON\n CONFLICT rule instead. Such cases will leave unused “holes” in the\n sequence of assigned values. Thus, PostgreSQL sequence objects cannot\n be used to obtain “gapless” sequences.\n\n Likewise, any sequence state changes made by setval are not undone if\n the transaction rolls back.\n -- end unchanged text --\n\n If the database cluster crashes before committing the transaction\n containing a nextval operation, the sequence advance might not yet\n have made its way to persistent storage, so that it is uncertain\n whether the same value can be returned again after the cluster\n restarts. 
This is harmless for usage of the nextval value within\n that transaction, since its other effects will not be visible either.\n However, if you wish to use a nextval result for persistent\n outside-the-database purposes, make sure that the nextval operation\n has been committed before doing so.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/current/functions-sequence.html\n\n\n", "msg_date": "Tue, 23 Nov 2021 16:12:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." }, { "msg_contents": "I wrote:\n> I wonder though if we shouldn't try to improve the existing text.\n> The phrasing \"never rolled back\" seems like it's too easily\n> misinterpreted. Maybe rewrite the <caution> block like\n> ...\n\nA bit of polishing later, maybe like the attached.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 23 Nov 2021 16:41:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." }, { "msg_contents": "On 11/23/21, 1:41 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> I wrote:\r\n>> I wonder though if we shouldn't try to improve the existing text.\r\n>> The phrasing \"never rolled back\" seems like it's too easily\r\n>> misinterpreted. Maybe rewrite the <caution> block like\r\n>> ...\r\n>\r\n> A bit of polishing later, maybe like the attached.\r\n\r\nThe doc updates look good to me. Yesterday I suggested possibly\r\nadding a way to ensure that nextval() called in an uncommitted\r\ntransaction was persistent, but I think we'd have to also ensure that\r\nsynchronous replication waits for those records, too. 
Anyway, I don't\r\nthink it is unreasonable to require the transaction to be committed to\r\navoid duplicates from nextval().\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 23 Nov 2021 22:13:14 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." }, { "msg_contents": ">\n>\n> You could say that that's the same rookie error as relying on the\n> persistence of any other uncommitted DML ...\n>\n\nIIUC, This is not the same as uncommitted DML exactly. For any\nuncommitted\nDML, it is a rollback for sure. But as for sequence, The xmax is not\nchanged\nduring sequence's value update by design and we didn't maintain the multi\nversions\nfor sequence, so sequence can't be rolled back clearly. The fact is a\ndirty data page flush\ncan persist the change no matter the txn is committed or aborted. The\nbelow example\ncan show the difference:\n\nSELECT nextval('s'); -- 1\nbegin;\nSELECT nextval('s'); \\watch 0.1 for a while, many checkpointer or data\nflush happened.\n-- crashed.\n\nIf we run nextval('s') from the recovered system, we probably will _not_ get\nthe 2 (assume cachesize=1) like uncommitted DML.\n\n\n-- \nBest Regards\nAndy Fan\n", "msg_date": "Wed, 24 Nov 2021 08:30:37 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." 
}, { "msg_contents": "On Tue, 2021-11-23 at 16:41 -0500, Tom Lane wrote:\n> I wrote:\n> > I wonder though if we shouldn't try to improve the existing text.\n> > The phrasing \"never rolled back\" seems like it's too easily\n> > misinterpreted.  Maybe rewrite the <caution> block like\n> > ...\n> \n> A bit of polishing later, maybe like the attached.\n\nThat looks good to me.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 24 Nov 2021 08:17:14 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Tue, 2021-11-23 at 16:41 -0500, Tom Lane wrote:\n>> A bit of polishing later, maybe like the attached.\n\n> That looks good to me.\n\nPushed, thanks.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Nov 2021 13:38:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sequence's value can be rollback after a crashed recovery." } ]
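The durability rule this thread settles on — a nextval() advance is durable once its transaction commits (the commit flushes the WAL record), while an advance made only by an uncommitted transaction can be lost in a crash and the same value handed out again — can be sketched as a toy model. This is an illustrative simulation only, not PostgreSQL code: the class and method names are invented, and real sequences batch their WAL logging rather than writing a record per value.

```python
class ToySequence:
    """Toy model of a WAL-logged sequence counter.

    Crash recovery replays only WAL that was flushed to disk; in this
    model a flush happens only at commit, mirroring the XLogFlush()
    call at transaction commit discussed in the thread.
    """

    def __init__(self):
        self.value = 0        # in-memory sequence state
        self.pending_wal = 0  # WAL written but not yet flushed
        self.flushed_wal = 0  # highest advance made durable

    def nextval(self):
        self.value += 1
        self.pending_wal = self.value  # WAL record written, not flushed
        return self.value

    def commit(self):
        # commit flushes WAL, making the advance durable
        self.flushed_wal = self.pending_wal

    def crash_and_recover(self):
        # unflushed WAL is lost; recovery restores the last durable state
        self.value = self.flushed_wal
        self.pending_wal = self.flushed_wal


seq = ToySequence()
v1 = seq.nextval()       # 1
seq.commit()             # durable: 1 can never be handed out again
v2 = seq.nextval()       # 2, but this transaction never commits
seq.crash_and_recover()  # the uncommitted advance is lost
v3 = seq.nextval()       # 2 again -- a duplicate, but not a durability bug
```

The model matches the thread's conclusion: the committed value 1 survives the crash, while the uncommitted value 2 is reissued after recovery — the same exposure as any other uncommitted DML.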
[ { "msg_contents": "Currently there's a test [1] in the regression suite that ensures that\na SAVEPOINT cannot be initialized outside a transaction.\n\nInstead of throwing an error, if we allowed it such that a SAVEPOINT\noutside a transaction implicitly started a transaction, and the\ncorresponding ROLLBACK TO or RELEASE command finished that\ntransaction, I believe it will provide a uniform behavior that will\nlet SAVEPOINT be used on its own, to the exclusion of BEGIN, and I\nbelieve the users will find it very useful as well.\n\nFor example, I was looking at a SQL script that is self-contained\n(creates objects that it later operates on), but did not leverage\nPostgres' ability to perform DDL and other non-DML operations inside a\ntransaction. My first instinct was to enclose it in a BEGIN-COMMIT\npair. But doing that would not play well with the other SQL scripts\nthat include/wrap it (using, say, \\include or \\ir). So the next\nthought that crossed my mind was to wrap the script in a\nSAVEPOINT-RELEASE pair, but that would obviously fail when the script\nis sourced on its own, because SAVEPOINT and RELEASE are not allowed\noutside a transaction.\n\nAnother possibility is as follows, but clearly not acceptable because\nof uncertainty of outcome.\n\nBEGIN TRANSACTION; -- Cmd1. issues a WARNING if already in txn, not otherwise\nSAVEPOINT AA;\n-- Do work\nRELEASE SAVEPOINT AA;\nCOMMIT; -- This will commit the transaction started before Cmd1, if any.\n\nIs there any desire to implement the behavior described in $SUBJECT?\nArguably, Postgres would be straying slightly further away from the\nSQL compatibility of this command, but not by much.\n\nHere's a sample session describing what the behavior would look like.\n\nSAVEPOINT AA ; -- currently an error if outside a transaction;\n-- but starts a transaction after implementation\n\n-- Do work with other SQL commands\n\nCOMMIT ; -- Commits transaction AA started with savepoint. 
Transaction started\n-- before that, if any, is not affected until its corresponding COMMIT/ROLLBACK.\n-- Other commands that end this transaction:\n-- -- ROLLBACK TO AA (rolls back txn; usual behavior)\n-- -- RELEASE SAVEPOINT AA (commit/rollback depending on state of txn;\nusual behavior)\n-- -- ROLLBACK (rolls back the top-level transaction AA)\n\nLooking at this example, we will also get the \"named transactions\"\nfeature for free! I don't know what the use of a named transaction\nwould be, though; identify it and use it in WAL and related features\nsomehow?!!\n\n[1]:\ncommit cc813fc2b8d9293bbd4d0e0d6a6f3b9cf02fe32f\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Tue Jul 27 05:11:48 2004 +0000\n\n Replace nested-BEGIN syntax for subtransactions with spec-compliant\n SAVEPOINT/RELEASE/ROLLBACK-TO syntax. (Alvaro)\n Cause COMMIT of a failed transaction to report ROLLBACK instead of\n COMMIT in its command tag. (Tom)\n Fix a few loose ends in the nested-transactions stuff.\n....\n-- only in a transaction block:\nSAVEPOINT one;\nERROR: SAVEPOINT can only be used in transaction blocks\nROLLBACK TO SAVEPOINT one;\nERROR: ROLLBACK TO SAVEPOINT can only be used in transaction blocks\nRELEASE SAVEPOINT one;\nERROR: RELEASE SAVEPOINT can only be used in transaction blocks\n\nBest regards,\n--\nGurjeet Singh http://gurjeet.singh.im/\n\n\n", "msg_date": "Mon, 22 Nov 2021 01:50:07 -0800", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Begin a transaction on a SAVEPOINT that is outside any transaction" }, { "msg_contents": "On Mon, Nov 22, 2021 at 4:50 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n> Instead of throwing an error, if we allowed it such that a SAVEPOINT\n> outside a transaction implicitly started a transaction, and the\n> corresponding ROLLBACK TO or RELEASE command finished that\n> transaction, I believe it will provide a uniform behavior that will\n> let SAVEPOINT be used on its own, to the exclusion of BEGIN, and I\n> 
believe the users will find it very useful as well.\n\nI think I would find this behavior confusing.\n\n> For example, I was looking at a SQL script that is self-contained\n> (creates objects that it later operates on), but did not leverage\n> Postgres' ability to perform DDL and other non-DML operations inside a\n> transaction. My first instinct was to enclose it in a BEGIN-COMMIT\n> pair. But doing that would not play well with the other SQL scripts\n> that include/wrap it (using, say, \\include or \\ir). So the next\n> thought that crossed my mind was to wrap the script in a\n> SAVEPOINT-RELEASE pair, but that would obviously fail when the script\n> is sourced on its own, because SAVEPOINT and RELEASE are not allowed\n> outside a transaction.\n\nI don't find this a compelling argument, because it's an extremely\nspecific scenario that could also be handled in other ways, like\nhaving the part that's intended to run in its own subtransaction in\none file for the times when you want to run it that way, and having a\nwrapper script file that does BEGIN \\ir END when you want to run it\nthat way. Alternatively, I imagine you could also find a way to use\npsql's \\if.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Nov 2021 07:59:20 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Begin a transaction on a SAVEPOINT that is outside any\n transaction" } ]
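The disagreement above is easier to see with a toy state machine contrasting the current rule (SAVEPOINT outside a transaction block is an error) with the proposed rule (SAVEPOINT implicitly begins a transaction, and releasing the outermost savepoint ends it). This is a hypothetical sketch only — the class name and the `implicit_begin` flag are invented for illustration, and it is not how PostgreSQL's transaction system is implemented.

```python
class ToySession:
    """Toy model of per-session transaction state.

    implicit_begin=False models today's behavior; True models the
    proposal where SAVEPOINT outside a transaction starts one and
    releasing the outermost savepoint finishes it.
    """

    def __init__(self, implicit_begin=False):
        self.implicit_begin = implicit_begin
        self.in_txn = False
        self.savepoints = []

    def savepoint(self, name):
        if not self.in_txn:
            if not self.implicit_begin:
                raise RuntimeError(
                    "SAVEPOINT can only be used in transaction blocks")
            self.in_txn = True  # proposed: implicitly start a transaction
        self.savepoints.append(name)

    def release(self, name):
        # pop savepoints up to and including `name`
        while self.savepoints and self.savepoints.pop() != name:
            pass
        if self.implicit_begin and not self.savepoints:
            self.in_txn = False  # proposed: outermost release ends the txn


# Current behavior: SAVEPOINT outside a transaction block is rejected.
current = ToySession()
try:
    current.savepoint("AA")
    rejected = False
except RuntimeError:
    rejected = True

# Proposed behavior: SAVEPOINT AA ... RELEASE SAVEPOINT AA brackets a txn.
proposed = ToySession(implicit_begin=True)
proposed.savepoint("AA")   # implicitly begins a transaction
proposed.release("AA")     # finishes it again
```

Under the proposal the SAVEPOINT/RELEASE pair behaves like the BEGIN/COMMIT pair the script example wanted, which is exactly the uniformity Robert finds confusing when the same commands mean different things depending on ambient transaction state.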
[ { "msg_contents": "Hi,\n\nI cannot compile postgresql for armv7. I tested a bunch of versions of gcc\nand all have the same issue. Sometimes disabling optimization allows me to\ncompile failed files but it is not a rule. The same error will happen in\nanother file and changing optimization parameters or lto options does not\nmake a difference.\n\nProblem is related to dtrace probes code. Compiling without them is\nsuccessful.\n\nMaybe someone encountered something similar. I'm using Fedora Rawhide. Here\nis the error:\n\n<mock-chroot> sh-5.1# gcc -I../../../../src/include -I/usr/include/libxml2\n -c -o tuplesort.o tuplesort.c\nduring RTL pass: mach\ntuplesort.c: In function ‘tuplesort_begin_heap’:\ntuplesort.c:949:1: internal compiler error: in create_fix_barrier, at\nconfig/arm/arm.c:17845\n 949 | }\n | ^\n\n...\n\n<mock-chroot> sh-5.1# gcc -I../../../../src/include -I/usr/include/libxml2\n -c -o md.o md.c\nduring RTL pass: mach\nmd.c: In function ‘mdwrite’:\nmd.c:731:1: internal compiler error: in create_fix_barrier, at\nconfig/arm/arm.c:17845\n 731 | }\n | ^\n", "msg_date": "Mon, 22 Nov 2021 11:57:46 +0100", "msg_from": "Marek Kulik <mkulik@redhat.com>", "msg_from_op": true, "msg_subject": "Building postgresql armv7 on emulated x86_64" } ]
[ { "msg_contents": "Hi,\n\nI'm seeing the following annoying build warnings on Windows (without\nasserts, latest Postgres source):\n\npruneheap.c(858): warning C4101: 'htup': unreferenced local variable\npruneheap.c(870): warning C4101: 'tolp': unreferenced local variable\n\nI notice that these are also reported here: [1]\n\nI've attached a patch to fix these warnings.\n(Note that currently PG_USED_FOR_ASSERTS_ONLY is defined as the unused\nattribute, which is only supported by GCC)\n\n[1]: https://www.postgresql.org/message-id/CAH2-WznwWU+9on9nZCnZtk7uA238MCTgPxYr1Ty7U_Msn5ZGwQ@mail.gmail.com\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Mon, 22 Nov 2021 22:10:10 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Windows build warnings" }, { "msg_contents": "> On 22 Nov 2021, at 12:10, Greg Nancarrow <gregn4422@gmail.com> wrote:\n\n> I've attached a patch to fix these warnings.\n\nLGTM.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 22 Nov 2021 12:17:30 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "On 2021-Nov-22, Daniel Gustafsson wrote:\n\n> > On 22 Nov 2021, at 12:10, Greg Nancarrow <gregn4422@gmail.com> wrote:\n> \n> > I've attached a patch to fix these warnings.\n> \n> LGTM.\n\n.. but see\nhttps://postgr.es/m/CAH2-WznwWU+9on9nZCnZtk7uA238MCTgPxYr1Ty7U_Msn5ZGwQ@mail.gmail.com\nwhere this was already discussed. I think if we're going to workaround\nPG_USED_FOR_ASSERTS_ONLY not actually working, we may as well get rid of\nit entirely. 
My preference would be to fix it so that it works on more\nplatforms (at least Windows in addition to GCC).\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"In Europe they call me Niklaus Wirth; in the US they call me Nickel's worth.\n That's because in Europe they call me by name, and in the US by value!\"\n\n\n", "msg_date": "Mon, 22 Nov 2021 11:21:15 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> .. but see\n> https://postgr.es/m/CAH2-WznwWU+9on9nZCnZtk7uA238MCTgPxYr1Ty7U_Msn5ZGwQ@mail.gmail.com\n> where this was already discussed. I think if we're going to workaround\n> PG_USED_FOR_ASSERTS_ONLY not actually working, we may as well get rid of\n> it entirely. My preference would be to fix it so that it works on more\n> platforms (at least Windows in addition to GCC).\n\nYeah, I do not think there is a reason to change the code if it's using\nPG_USED_FOR_ASSERTS_ONLY properly. We should either make that macro\nwork on $compiler, or ignore such warnings from $compiler.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Nov 2021 10:06:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "> On 22 Nov 2021, at 16:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> .. but see\n>> https://postgr.es/m/CAH2-WznwWU+9on9nZCnZtk7uA238MCTgPxYr1Ty7U_Msn5ZGwQ@mail.gmail.com\n>> where this was already discussed. I think if we're going to workaround\n>> PG_USED_FOR_ASSERTS_ONLY not actually working, we may as well get rid of\n>> it entirely. 
My preference would be to fix it so that it works on more\n>> platforms (at least Windows in addition to GCC).\n> \n> Yeah, I do not think there is a reason to change the code if it's using\n> PG_USED_FOR_ASSERTS_ONLY properly. We should either make that macro\n> work on $compiler, or ignore such warnings from $compiler.\n\nFair enough. Looking at where we use PG_USED_FOR_ASSERTS_ONLY (and where it\nworks), these two warnings are the only places where we apply it to a pointer\ntypedef (apart from one place where the variable is indeed used outside of\nasserts). Since it clearly works in all other cases, I wonder if something\nlike the below sketch could make MSVC handle the attribute?\n\ndiff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c\nindex 5c0b60319d..9701c9eba0 100644\n--- a/src/backend/access/heap/pruneheap.c\n+++ b/src/backend/access/heap/pruneheap.c\n@@ -855,7 +855,7 @@ heap_page_prune_execute(Buffer buffer,\n {\n Page page = (Page) BufferGetPage(buffer);\n OffsetNumber *offnum;\n- HeapTupleHeader htup PG_USED_FOR_ASSERTS_ONLY;\n+ HeapTupleHeaderData *htup PG_USED_FOR_ASSERTS_ONLY;\n\n /* Shouldn't be called unless there's something to do */\n Assert(nredirected > 0 || ndead > 0 || nunused > 0);\n@@ -867,7 +867,7 @@ heap_page_prune_execute(Buffer buffer,\n OffsetNumber fromoff = *offnum++;\n OffsetNumber tooff = *offnum++;\n ItemId fromlp = PageGetItemId(page, fromoff);\n- ItemId tolp PG_USED_FOR_ASSERTS_ONLY;\n+ ItemIdData *tolp PG_USED_FOR_ASSERTS_ONLY;\n\n #ifdef USE_ASSERT_CHECKING\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 22 Nov 2021 16:22:20 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Fair enough. 
Looking at where we use PG_USED_FOR_ASSERTS_ONLY (and where it\n> works), these two warnings are the only places where we apply it to a pointer\n> typedef (apart from one place where the variable is indeed used outside of\n> asserts). Since it clearly works in all other cases, I wonder if something\n> like the below sketch could make MSVC handle the attribute?\n\nUgh. If we're changing the code anyway, I think I prefer Greg's original\npatch; it's at least not randomly inconsistent with everything else.\n\nHowever ... I question your assumption that it works everywhere else.\nI can't find anything that is providing a non-empty definition of\nPG_USED_FOR_ASSERTS_ONLY (a/k/a pg_attribute_unused) for anything\nexcept GCC. It seems likely to me that MSVC simply fails to produce\nsuch warnings in most places, but it's woken up and done so here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Nov 2021 10:40:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "> On 22 Nov 2021, at 16:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> Fair enough. Looking at where we use PG_USED_FOR_ASSERTS_ONLY (and where it\n>> works), these two warnings are the only places where we apply it to a pointer\n>> typedef (apart from one place where the variable is indeed used outside of\n>> asserts). Since it clearly works in all other cases, I wonder if something\n>> like the below sketch could make MSVC handle the attribute?\n> \n> Ugh. If we're changing the code anyway, I think I prefer Greg's original\n> patch; it's at least not randomly inconsistent with everything else.\n\nTotally agree.\n\n> However ... 
I question your assumption that it works everywhere else.\n\nRight, I wasn't expressing myself well uncaffeinated; I meant that it doesn't\nemit a warning for the other instances, which I agree isn't the same thing as\nthe warning avoidance working.\n\n> I can't find anything that is providing a non-empty definition of\n> PG_USED_FOR_ASSERTS_ONLY (a/k/a pg_attribute_unused) for anything\n> except GCC. \n\nIt's supported in clang as well per the documentation [0] in at least some\nconfigurations or distributions:\n\n\t\"The [[maybe_unused]] (or __attribute__((unused))) attribute can be\n\tused to silence such diagnostics when the entity cannot be removed.\n\tFor instance, a local variable may exist solely for use in an assert()\n\tstatement, which makes the local variable unused when NDEBUG is\n\tdefined.\"\n\n> It seems likely to me that MSVC simply fails to produce\n> such warnings in most places, but it's woken up and done so here.\n\nLooking at the instances of PG_USED_FOR_ASSERTS_ONLY in the tree, every such\nvariable is set (although not read) either at instantiation or later in the\ncodepath *outside* of the Assertion. The documentation for C4101 [1] isn't\nexactly verbose, but the examples therein show it for variables not set at all\nso it might be that it simply only warns for pruneheap.c since all other cases\nare at least set. Setting the variables in question to NULL as a test, instead\nof marking them unused, the build pass through MSVC without any warnings. We\nmight still want to use the original patch in this thread, but it seems that\nthis might at least hint at explaining why MSVC only emitted warnings for those\ntwo. \n\nWhile skimming the variables marked as unused, I noticed that two of them seems\nto be marked as such while being used in non-assert codepaths, see the attached\ndiff. 
The one in postgres_fdw.c was added in 8998e3cafa2 with 1ec7fca8592\nusing the variable; in lsyscache.c it was introduced in 2a6368343ff and then\npromptly used in the fix commit 7e041603904 shortly thereafter. Any objections\nto applying that?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://clang.llvm.org/docs/AttributeReference.html#maybe-unused-unused\n[1] https://docs.microsoft.com/en-us/cpp/error-messages/compiler-warnings/compiler-warning-level-3-c4101", "msg_date": "Tue, 23 Nov 2021 14:10:48 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "On Tue, Nov 23, 2021 at 2:11 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 22 Nov 2021, at 16:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > I can't find anything that is providing a non-empty definition of\n> > PG_USED_FOR_ASSERTS_ONLY (a/k/a pg_attribute_unused) for anything\n> > except GCC.\n>\n> It's supported in clang as well per the documentation [0] in at least some\n> configurations or distributions:\n>\n>         \"The [[maybe_unused]] (or __attribute__((unused))) attribute can be\n>         used to silence such diagnostics when the entity cannot be removed.\n>         For instance, a local variable may exist solely for use in an\n> assert()\n>         statement, which makes the local variable unused when NDEBUG is\n>         defined.\"\n>\n> [[maybe_unused]] is also recognized from Visual Studio 2017 onwards [1].\n\n[1] https://docs.microsoft.com/en-us/cpp/cpp/attributes?view=msvc-170\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Tue, 23 Nov 2021 14:58:33 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "On 2021-Nov-23, Juan José Santamaría Flecha wrote:\n\n> On Tue, Nov 23, 2021 at 2:11 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > It's supported in clang as well per the documentation [0] in at least some\n> > configurations or distributions:\n\n> [[maybe_unused]] is also recognized from Visual Studio 2017 onwards [1].\n> \n> [1] https://docs.microsoft.com/en-us/cpp/cpp/attributes?view=msvc-170\n\nRight ... the problem, as I understand, is that the syntax for\n[[maybe_unused]] is different from what we can do with the current\npg_attribute_unused -- [[maybe_unused]] goes before the variable name.\nWe would need to define pg_attribute_unused macro (maybe have it take\nthe variable name and initializator value as arguments?), and also\ndefine PG_USED_FOR_ASSERTS_ONLY in the same style.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 23 Nov 2021 11:41:23 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Right ... 
the problem, as I understand, is that the syntax for\n> [[maybe_unused]] is different from what we can do with the current\n> pg_attribute_unused -- [[maybe_unused]] goes before the variable name.\n> We would need to define pg_attribute_unused macro (maybe have it take\n> the variable name and initializator value as arguments?), and also\n> define PG_USED_FOR_ASSERTS_ONLY in the same style.\n\nI've thought all along that PG_USED_FOR_ASSERTS_ONLY was making\nunwarranted assumptions about what the underlying syntax would be,\nand it seems I was right. Anyone want to look into what it'd take\nto change this?\n\n(It might be an idea to introduce a new macro with a slightly\ndifferent name, so we don't have to touch every usage site\nright away.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Nov 2021 10:03:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "On Wed, Nov 24, 2021 at 1:41 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Nov-23, Juan José Santamaría Flecha wrote:\n>\n> > On Tue, Nov 23, 2021 at 2:11 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > > It's supported in clang as well per the documentation [0] in at least some\n> > > configurations or distributions:\n>\n> > [[maybe_unused]] is also recognized from Visual Studio 2017 onwards [1].\n> >\n> > [1] https://docs.microsoft.com/en-us/cpp/cpp/attributes?view=msvc-170\n>\n> Right ... 
the problem, as I understand, is that the syntax for\n> [[maybe_unused]] is different from what we can do with the current\n> pg_attribute_unused -- [[maybe_unused]] goes before the variable name.\n> We would need to define pg_attribute_unused macro (maybe have it take\n> the variable name and initializator value as arguments?), and also\n> define PG_USED_FOR_ASSERTS_ONLY in the same style.\n>\n\nIsn't \"[[maybe_unused]]\" only supported for MS C++ (not C)?\nI'm using Visual Studio 17, and I get nothing but a syntax error if\ntrying to use it in C code, whereas it works if I rename the same\nsource file to have a \".cpp\" extension (but even then I need to use\nthe \"/std:c++17\" compiler flag)\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Wed, 24 Nov 2021 11:36:30 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n\n> On 22 Nov 2021, at 16:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> I can't find anything that is providing a non-empty definition of\n>> PG_USED_FOR_ASSERTS_ONLY (a/k/a pg_attribute_unused) for anything\n>> except GCC. 
\n>\n> It's supported in clang as well per the documentation [0] in at least some\n> configurations or distributions:\n>\n> \t\"The [[maybe_unused]] (or __attribute__((unused))) attribute can be\n> \tused to silence such diagnostics when the entity cannot be removed.\n> \tFor instance, a local variable may exist solely for use in an assert()\n> \tstatement, which makes the local variable unused when NDEBUG is\n> \tdefined.\"\n\nShould we change the compiler checks for attributes in c.h to include\n`|| __has_attribute(…)`, so that we automatically get them on compilers\nthat support that (particularly clang)?\n\n- ilmari\n\n\n", "msg_date": "Wed, 24 Nov 2021 10:20:00 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Should we change the compiler checks for attributes in c.h to include\n> `|| __has_attribute(…)`, so that we automatically get them on compilers\n> that support that (particularly clang)?\n\nclang already #defines GCC, no?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Nov 2021 10:07:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> =?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n>> Should we change the compiler checks for attributes in c.h to include\n>> `|| __has_attribute(…)`, so that we automatically get them on compilers\n>> that support that (particularly clang)?\n>\n> clang already #defines GCC, no?\n\n\n__GNUC__, but yes, I didn't realise that. 
Clang 11 seems to claim to be\nGCC 4.2 by default, but that can be overridden using the -fgnuc-version\n(and turned off by setting it to zero).\n\nDo any other compilers support __has_attribute()?\n\n> \t\t\tregards, tom lane\n\n- ilmari\n\n\n", "msg_date": "Wed, 24 Nov 2021 15:26:01 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "> On 22 Nov 2021, at 16:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> .. but see\n>> https://postgr.es/m/CAH2-WznwWU+9on9nZCnZtk7uA238MCTgPxYr1Ty7U_Msn5ZGwQ@mail.gmail.com\n>> where this was already discussed. I think if we're going to workaround\n>> PG_USED_FOR_ASSERTS_ONLY not actually working, we may as well get rid of\n>> it entirely. My preference would be to fix it so that it works on more\n>> platforms (at least Windows in addition to GCC).\n> \n> Yeah, I do not think there is a reason to change the code if it's using\n> PG_USED_FOR_ASSERTS_ONLY properly. We should either make that macro\n> work on $compiler, or ignore such warnings from $compiler.\n\nSo, to reach some conclusion on this thread; it seems the code is using\nPG_USED_FOR_ASSERTS_ONLY - as it's currently implemented - properly, but it\ndoesn't work on MSVC and isn't likely to work in the shape it is today.\nReworking to support a wider range of compilers will also likely mean net new\ncode.\n\nTo silence the warnings in the meantime (if the rework at all happens) we\nshould either apply the patch from Greg or add C4101 to disablewarnings in\nsrc/tools/msvc/Project.pm as mentioned above. On top of that, we should apply\nthe patch I proposed downthread to remove PG_USED_FOR_ASSERTS_ONLY where it's\nno longer applicable. 
Personally I'm fine with either, and am happy to make it\nhappen, once we agree on what it should be.\n\nThoughts?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 25 Nov 2021 13:03:41 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "On Thu, Nov 25, 2021 at 11:03 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> To silence the warnings in the meantime (if the rework at all happens) we\n> should either apply the patch from Greg or add C4101 to disablewarnings in\n> src/tools/msvc/Project.pm as mentioned above. On top of that, we should apply\n> the patch I proposed downthread to remove PG_USED_FOR_ASSERTS_ONLY where it's\n> no longer applicable. Personally I'm fine with either, and am happy to make it\n> happen, once we agree on what it should be.\n>\n\nAFAICS, the fundamental difference here seems to be that the GCC\ncompiler still regards a variable as \"unused\" if it is never read,\nwhereas if the variable is set (but not necessarily read) that's\nenough for the Windows C compiler to regard it as \"used\".\nThis is, at least for the majority of cases, why we're not seeing\nthe C4101 warnings on Windows where PG_USED_FOR_ASSERTS_ONLY has been\nused in the Postgres source, because in those cases the variable has\nbeen set prior to its use in an Assert or \"#ifdef USE_ASSERT_CHECKING\"\nblock.\nIMO, for the case in point, it's best to fix it by either setting the\nvariables to NULL, prior to their use in the \"#ifdef\nUSE_ASSERT_CHECKING\" block, or by applying my patch.\nOf course, this doesn't address fixing the PG_USED_FOR_ASSERTS_ONLY\nmacro to work on Windows, but I don't see an easy way forward on that\nif it's to remain in its \"variable attribute\" form, and in any case\nthe Windows C compiler doesn't seem to support any annotation to mark\na variable as potentially unused.\nPersonally I'm not really in favour of outright disabling 
the C4101\nwarning on Windows, because I think it is a useful warning for\nPostgres developers on Windows for cases unrelated to the use of\nPG_USED_FOR_ASSERTS_ONLY.\nI agree with your proposal to apply your patch to remove\nPG_USED_FOR_ASSERTS_ONLY where it's no longer applicable.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Fri, 26 Nov 2021 15:34:33 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "Greg Nancarrow <gregn4422@gmail.com> writes:\n> AFAICS, the fundamental difference here seems to be that the GCC\n> compiler still regards a variable as \"unused\" if it is never read,\n> whereas if the variable is set (but not necessarily read) that's\n> enough for the Windows C compiler to regard it as \"used\".\n\nIt depends. Older gcc versions don't complain about set-but-not-read\nvariables, but clang has done so for awhile (with a specific warning\nmessage about the case), and I think recent gcc follows suit.\n\n> Personally I'm not really in favour of outright disabling the C4101\n> warning on Windows, because I think it is a useful warning for\n> Postgres developers on Windows for cases unrelated to the use of\n> PG_USED_FOR_ASSERTS_ONLY.\n\nIMO we should either do that or do whatever's necessary to make the\nmacro work properly on MSVC. 
I'm not very much in favor of jumping\nthrough hoops to satisfy a compiler that has a randomly-different-\nbut-still-demonstrably-inadequate version of this warning.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Nov 2021 23:45:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "> On 26 Nov 2021, at 05:45, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>> Personally I'm not really in favour of outright disabling the C4101\n>> warning on Windows, because I think it is a useful warning for\n>> Postgres developers on Windows for cases unrelated to the use of\n>> PG_USED_FOR_ASSERTS_ONLY.\n\nI'm not sure I find it useful, as the only reason I *think* I know what it's\ndoing is through trial and error. The only warnings we get from a tree where\nPG_USED_FOR_ASSERTS_ONLY clearly does nothing, are quite uninteresting and\nfixing them only amounts to silencing the compiler and not improving the code.\n\n> IMO we should either do that or do whatever's necessary to make the\n> macro work properly on MSVC. I'm not very much in favor of jumping\n> through hoops to satisfy a compiler that has a randomly-different-\n> but-still-demonstrably-inadequate version of this warning.\n\nSince there is no equivalent attribute in MSVC ([[maybe_unused]] being a C++17\nfeature) I propose that we silence the warning. 
If someone comes along with an\nimplementation of PG_USED_FOR_ASSERTS_ONLY we can always revert that then.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 26 Nov 2021 10:12:30 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "\nOn 11/26/21 04:12, Daniel Gustafsson wrote:\n>> On 26 Nov 2021, at 05:45, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Personally I'm not really in favour of outright disabling the C4101\n>>> warning on Windows, because I think it is a useful warning for\n>>> Postgres developers on Windows for cases unrelated to the use of\n>>> PG_USED_FOR_ASSERTS_ONLY.\n> I'm not sure I find it useful, as the only reason I *think* I know what it's\n> doing is through trial and error. The only warnings we get from a tree where\n> PG_USED_FOR_ASSERTS_ONLY clearly does nothing, are quite uninteresting and\n> fixing them only amounts to silencing the compiler and not improving the code.\n>\n\nI agree with Tom. I don't think we should disable the warning. If we\ncan't come up with a reasonable implementation of\nPG_USED_FOR_ASSERTS_ONLY that works with MSVC we should just live with\nthe warnings. It's not like we get flooded with them.\n\n\ncheers\n\n\nandrew\n\n\n\n", "msg_date": "Fri, 26 Nov 2021 13:23:09 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 11/26/21 04:12, Daniel Gustafsson wrote:\n>> On 26 Nov 2021, at 05:45, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> Personally I'm not really in favour of outright disabling the C4101\n>>>> warning on Windows, because I think it is a useful warning for\n>>>> Postgres developers on Windows for cases unrelated to the use of\n>>>> PG_USED_FOR_ASSERTS_ONLY.\n\n[ FTR, that text is not mine; somebody messed up the attribution ]\n\n> I agree with Tom. 
I don't think we should disable the warning. If we\n> can't come up with a reasonable implementation of\n> PG_USED_FOR_ASSERTS_ONLY that works with MSVC we should just live with\n> the warnings. It's not like we get flooded with them.\n\nI think our policy is to suppress unused-variable warnings if they\nappear on current mainstream compilers; and it feels a little churlish\nto deem MSVC non-mainstream. So I stick with my previous suggestion,\nwhich basically was to disable C4101 until such time as somebody can\nmake PG_USED_FOR_ASSERTS_ONLY work correctly on MSVC. In the worst\ncase, that might lead a Windows-based developer to submit a patch that\ndraws warnings elsewhere ... but the cfbot, other developers, or the\nbuildfarm will find such problems soon enough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Nov 2021 14:33:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "> On 26 Nov 2021, at 20:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 11/26/21 04:12, Daniel Gustafsson wrote:\n>>> On 26 Nov 2021, at 05:45, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>>> Personally I'm not really in favour of outright disabling the C4101\n>>>>> warning on Windows, because I think it is a useful warning for\n>>>>> Postgres developers on Windows for cases unrelated to the use of\n>>>>> PG_USED_FOR_ASSERTS_ONLY.\n> \n> [ FTR, that text is not mine; somebody messed up the attribution ]\n\nThat was probably me fat-fingering it, sorry.\n\n>> I agree with Tom. I don't think we should disable the warning. If we\n>> can't come up with a reasonable implementation of\n>> PG_USED_FOR_ASSERTS_ONLY that works with MSVC we should just live with\n>> the warnings. 
It's not like we get flooded with them.\n> \n> I think our policy is to suppress unused-variable warnings if they\n> appear on current mainstream compilers; and it feels a little churlish\n> to deem MSVC non-mainstream. So I stick with my previous suggestion,\n> which basically was to disable C4101 until such time as somebody can\n> make PG_USED_FOR_ASSERTS_ONLY work correctly on MSVC. In the worst\n> case, that might lead a Windows-based developer to submit a patch that\n> draws warnings elsewhere ... but the cfbot, other developers, or the\n> buildfarm will find such problems soon enough.\n\nI agree with that, and can go make that happen.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 26 Nov 2021 21:14:05 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "\nOn 11/26/21 15:14, Daniel Gustafsson wrote:\n>> On 26 Nov 2021, at 20:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> I think our policy is to suppress unused-variable warnings if they\n>> appear on current mainstream compilers; and it feels a little churlish\n>> to deem MSVC non-mainstream. So I stick with my previous suggestion,\n>> which basically was to disable C4101 until such time as somebody can\n>> make PG_USED_FOR_ASSERTS_ONLY work correctly on MSVC. In the worst\n>> case, that might lead a Windows-based developer to submit a patch that\n>> draws warnings elsewhere ... but the cfbot, other developers, or the\n>> buildfarm will find such problems soon enough.\n> I agree with that, and can go make that happen.\n>\n\n[trust I have attributions right]\n\n\nISTM the worst case is that there will be undetected unused variables in\nWindows-only code. 
I guess that would mostly be detected by Msys systems\nrunning gcc.\n\n\nAnyway I don't think it's worth arguing a lot about.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 27 Nov 2021 08:55:29 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" }, { "msg_contents": "> On 27 Nov 2021, at 14:55, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> ISTM the worst case is that there will be undetected unused variables in\n> Windows-only code. I guess that would mostly be detected by Msys systems\n> running gcc.\n\nYes, that should be caught there. I've applied this now together with the\nremoval of PG_USED_FOR_ASSERTS_ONLY on those variables where it was set on\nvariables in general use.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 30 Nov 2021 14:13:43 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Windows build warnings" } ]
[ { "msg_contents": "Hi,\n\nI cannot compile postgresql for armv7. I tested a bunch of versions of gcc\nand all have the same issue. Sometimes disabling optimization allows me to\ncompile failed files but it is not a rule. The same error will happen in\nanother file and changing optimization parameters or lto options does not\nmake a difference.\n\nProblem is related to dtrace probes code. Compiling without them is\nsuccessful.\n\nMaybe someone encountered something similar. I'm using Fedora Rawhide. Here\nis the error:\n\n<mock-chroot> sh-5.1# gcc -I../../../../src/include -I/usr/include/libxml2\n -c -o tuplesort.o tuplesort.c\nduring RTL pass: mach\ntuplesort.c: In function ‘tuplesort_begin_heap’:\ntuplesort.c:949:1: internal compiler error: in create_fix_barrier, at\nconfig/arm/arm.c:17845\n 949 | }\n | ^\n\n...\n\n<mock-chroot> sh-5.1# gcc -I../../../../src/include -I/usr/include/libxml2\n -c -o md.o md.c\nduring RTL pass: mach\nmd.c: In function ‘mdwrite’:\nmd.c:731:1: internal compiler error: in create_fix_barrier, at\nconfig/arm/arm.c:17845\n 731 | }\n | ^\n\n", "msg_date": "Mon, 22 Nov 2021 15:23:59 +0100", "msg_from": "Marek Kulik <mkulik@redhat.com>", "msg_from_op": true, "msg_subject": "Building postgresql armv7 on emulated x86_64" }, { "msg_contents": "Marek Kulik <mkulik@redhat.com> writes:\n> I cannot compile postgresql for armv7. I tested a bunch of versions of gcc\n> and all have the same issue.\n\n> Maybe someone encountered something similar. I'm using Fedora Rawhide. Here\n> is the error:\n\n> <mock-chroot> sh-5.1# gcc -I../../../../src/include -I/usr/include/libxml2\n> -c -o tuplesort.o tuplesort.c\n> during RTL pass: mach\n> tuplesort.c: In function ‘tuplesort_begin_heap’:\n> tuplesort.c:949:1: internal compiler error: in create_fix_barrier, at\n> config/arm/arm.c:17845\n> 949 | }\n> | ^\n\nThat seems like pretty obviously a compiler bug. Shouldn't you be filing\nthis complaint with gcc not us?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Nov 2021 10:08:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Building postgresql armv7 on emulated x86_64" }, { "msg_contents": "> Marek Kulik <mkulik@redhat.com> writes:\n>> I cannot compile postgresql for armv7. I tested a bunch of versions of gcc\n>> and all have the same issue.\n\nFWIW, I just tried a build with --enable-dtrace on up-to-date Fedora 35\naarch64, and that worked fine. 
So this definitely seems like a problem\nin the toolchain not in Postgres.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Nov 2021 11:53:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Building postgresql armv7 on emulated x86_64" }, { "msg_contents": "What is weird is that this issue is only present in Fedora Rawhide, older\nversions of fedora are not affected. I couldn't pinpoint what package\nupdate caused that issue. I made a regression for gcc and packages related\nto it with no luck.\n\nIt seems to be an issue related to a bug in gcc. Here is related topic:\nhttps://www.spinics.net/lists/fedora-devel/msg294295.html\n\nOn Mon, Nov 22, 2021 at 5:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > Marek Kulik <mkulik@redhat.com> writes:\n> >> I cannot compile postgresql for armv7. I tested a bunch of versions of\n> gcc\n> >> and all have the same issue.\n>\n> FWIW, I just tried a build with --enable-dtrace on up-to-date Fedora 35\n> aarch64, and that worked fine. So this definitely seems like a problem\n> in the toolchain not in Postgres.\n>\n> regards, tom lane\n>\n>\n", "msg_date": "Wed, 24 Nov 2021 00:01:04 +0100", "msg_from": "Marek Kulik <mkulik@redhat.com>", "msg_from_op": true, "msg_subject": "Re: Building postgresql armv7 on emulated x86_64" } ]
[ { "msg_contents": "Hi,\n\nThis example in the docs declares a function returning an anonymous\n2-component record:\n\nCREATE FUNCTION dup(in int, out f1 int, out f2 text)\n AS $$ SELECT $1, CAST($1 AS text) || ' is text' $$\n LANGUAGE SQL;\n\nThe same declaration can be changed to have just one OUT parameter:\n\nCREATE FUNCTION dup(in int, out f text)\n AS $$ SELECT CAST($1 AS text) || ' is text' $$\n LANGUAGE SQL;\n\nBut it then behaves as a function returning a text scalar, not a record.\nIt is distinguishable in the catalog though; it has prorettype text,\nproallargtypes {int,text}, proargmodes {i,o}, proargnames {\"\",f}.\n\nThe first declaration can have RETURNS RECORD explicitly added (which\ndoesn't change its meaning any).\n\nIf RETURNS RECORD is added to the second, this error results:\n\nERROR: function result type must be text because of OUT parameters\n\nIs that a better outcome than saying \"ah, the human has said what he means,\nand intends a record type here\"? It seems the case could easily be\ndistinguished in the catalog by storing record as prorettype.\n\nPerhaps more surprisingly, the RETURNS TABLE syntax for the set-returning\ncase has the same quirk; RETURNS TABLE (f text) behaves as setof text\nrather than setof record. Again it's distinguishable in the catalog,\nthis time with t in place of o in proargmodes.\n\nIn this case, clearly the meaning of RETURNS TABLE with one component\ncan't be changed, as it's already established the way it is, but the\nequivalent syntax with one OUT parameter and RETURNS RECORD is currently\nrejected with an error just as in the non-SETOF case, so would it not\nbe equally feasible to just allow that syntax and let it mean what it says?\n\nRegards,\n-Chap\n\n\nIn passing, I also noticed RETURNS TABLE () is a syntax error. 
I have\nno use case in mind for a function returning an empty composite result,\nbut I note that we do allow zero-column tables and empty composite types.\nAnd it still has 1 bit of entropy: you can tell an empty composite value\nfrom a null.\n\n\n", "msg_date": "Mon, 22 Nov 2021 10:59:52 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Is a function to a 1-component record type undeclarable?" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> The same declaration can be changed to have just one OUT parameter:\n\n> CREATE FUNCTION dup(in int, out f text)\n> AS $$ SELECT CAST($1 AS text) || ' is text' $$\n> LANGUAGE SQL;\n\n> But it then behaves as a function returning a text scalar, not a record.\n\nYup, that's intentional, and documented. It seems more useful to allow\nyou to declare a scalar-returning function in this style, if you wish,\nthan to make it mean a one-component record. If you really want a\none-component composite type, you can declare such a type explicitly\nand return that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Nov 2021 11:15:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is a function to a 1-component record type undeclarable?" 
}, { "msg_contents": "On 11/22/21 11:15, Tom Lane wrote:\n> Yup, that's intentional, and documented.\n\nI think I found where it's documented; nothing under argmode/column_type\n/column_name, but just enough under rettype to entail the current behavior.\n\n> It seems more useful to allow you to declare a scalar-returning function\n> in this style, if you wish, than to make it mean a one-component record.\n\nWould that usefulness be diminished any by allowing the currently-rejected\nexplicit RECORD syntax to be accepted and explicitly mean record?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 22 Nov 2021 11:37:53 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: Is a function to a 1-component record type undeclarable?" }, { "msg_contents": "Hi\n\npo 22. 11. 2021 v 17:00 odesílatel Chapman Flack <chap@anastigmatix.net>\nnapsal:\n\n> Hi,\n>\n> This example in the docs declares a function returning an anonymous\n> 2-component record:\n>\n> CREATE FUNCTION dup(in int, out f1 int, out f2 text)\n> AS $$ SELECT $1, CAST($1 AS text) || ' is text' $$\n> LANGUAGE SQL;\n>\n> The same declaration can be changed to have just one OUT parameter:\n>\n> CREATE FUNCTION dup(in int, out f text)\n> AS $$ SELECT CAST($1 AS text) || ' is text' $$\n> LANGUAGE SQL;\n>\n> But it then behaves as a function returning a text scalar, not a record.\n> It is distinguishable in the catalog though; it has prorettype text,\n> proallargtypes {int,text}, proargmodes {i,o}, proargnames {\"\",f}.\n>\n> The first declaration can have RETURNS RECORD explicitly added (which\n> doesn't change its meaning any).\n>\n> If RETURNS RECORD is added to the second, this error results:\n>\n> ERROR: function result type must be text because of OUT parameters\n>\n> Is that a better outcome than saying \"ah, the human has said what he means,\n> and intends a record type here\"? 
It seems the case could easily be\n> distinguished in the catalog by storing record as prorettype.\n>\n> Perhaps more surprisingly, the RETURNS TABLE syntax for the set-returning\n> case has the same quirk; RETURNS TABLE (f text) behaves as setof text\n> rather than setof record. Again it's distinguishable in the catalog,\n> this time with t in place of o in proargmodes.\n>\n> In this case, clearly the meaning of RETURNS TABLE with one component\n> can't be changed, as it's already established the way it is, but the\n> equivalent syntax with one OUT parameter and RETURNS RECORD is currently\n> rejected with an error just as in the non-SETOF case, so would it not\n> be equally feasible to just allow that syntax and let it mean what it says?\n>\n\nI agree that this is not consistent, but changing the semantics of RETURNS TABLE\nafter 12 years is not good (or even possible), and the implemented behaviour has\nsome logic based on the possibility to work with scalars in scalar and\ntabular contexts in Postgres. At that time lateral joins were not implemented,\nand sometimes the tabular functions had to be used in scalar context.\nMoreover, the overhead of working with composite types had much more impact\nthan it does now, and there was a strong preference to use composite types only\nwhen really necessary.\n\nMy opinion about allowing RETURNS RECORD for a one-column table is neutral.\nIt can be a relatively natural syntax for enforcing an output composite type. On\nthe other hand, in usual usage there is not a big difference if\nfunctions return a scalar or a one-column table (after implicit expansion),\nand the composite type should be slower. 
But if you need to make a\ncomposite, you can use just the ROW constructor.\n\npostgres=# select row(g) from generate_series(1, 3) g(v);\n┌─────┐\n│ row │\n╞═════╡\n│ (1) │\n│ (2) │\n│ (3) │\n└─────┘\n(3 rows)\n\npostgres=# select (row(g)).* from generate_series(1, 3) g(v);\n┌────┐\n│ f1 │\n╞════╡\n│ 1 │\n│ 2 │\n│ 3 │\n└────┘\n(3 rows)\n\nAnd if we allow RETURNS RECORD, then there will be new inconsistency\nbetween OUT variables and RETURNS TABLE, so at the end I don't see stronger\nbenefits than negatives.\n\nDo you have some real use cases, where proposed functionality will carry\nsome benefit?\n\nI remember, when I started with Postgres, and when I started hacking\nPostgres I had a lot of problems with implicit unpacking of composite types\nsomewhere and with necessity to support scalar and composite values as\nseparate classes. I agree, so this is a real issue, but the beginnings of\nthis issue are 20 years ago, maybe 40 years ago, when composite types were\nintroduced, and when composite types were mapped to SQL tables (when QEL\nwas replaced by SQL). I don't think so now it is possible to fix it.\n\n\n\n> Regards,\n> -Chap\n>\n>\n> In passing, I also noticed RETURNS TABLE () is a syntax error. I have\n> no use case in mind for a function returning an empty composite result,\n> but I note that we do allow zero-column tables and empty composite types.\n> And it still has 1 bit of entropy: you can tell an empty composite value\n> from a null.\n>\n\nIf RETURNS TABLE (1col) returns scalar, then RETURNS TABLE () does not make\nany sense. The sequence 0 .. empty record, 1 scalar, 2+ composite looks\nscary too.\n\nRegards\n\nPavel\n", "msg_date": "Mon, 22 Nov 2021 17:59:36 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is a function to a 1-component record type undeclarable?" 
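
To make the behavior under discussion concrete, here is a sketch of the two declarations; the set-returning function name is invented, and the behavior noted in the comments is as described earlier in the thread:

```sql
-- One OUT parameter: the declared result type is text, not RECORD.
CREATE FUNCTION dup(in int, out f text)
    AS $$ SELECT CAST($1 AS text) || ' is text' $$
    LANGUAGE SQL;

-- Single-column RETURNS TABLE: behaves as SETOF text, not SETOF record.
CREATE FUNCTION dup_set(in int) RETURNS TABLE (f text)
    AS $$ SELECT CAST($1 AS text) || ' is text' $$
    LANGUAGE SQL;
```

In both cases the catalog still records the output parameter (proargmodes o or t), which is how the declarations remain distinguishable.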
}, { "msg_contents": "On 11/22/21 11:59, Pavel Stehule wrote:\n> And if we allow RETURNS RECORD, then there will be new inconsistency\n> between OUT variables and RETURNS TABLE\n\nI don't really see it as a new inconsistency, so much as the same old\ninconsistency but with an escape hatch if you really mean the other thing.\n\nI take the consistency you speak of here to be \"anything you can say with\nOUT parameters is equivalent to something you can say with RETURNS TABLE.\"\nThat is tidy, but I don't think it suffers much if it becomes \"everything\nyou can say with RETURNS TABLE is something you can equivalently say with\nOUT parameters, but there is one thing for historical reasons you can't say\nwith RETURNS TABLE, and if you need to say that, with OUT params you can.\"\n\n> Do you have some real use cases, where proposed functionality will carry\n> some benefit?\n\nThe most general might be a refactoring situation where you start with\nsomething producing a two-component record and one of those goes away, and\nyou want to make the minimally invasive changes. Going through containing\nqueries to add or remove row() or .foo would be more invasive.\n\nI often am coming from the position of a PL maintainer, where my aim\nis to present an accurate picture of what is going on in PostgreSQL\nto people who are thinking in Java, and to support them with language\nconstructs that will do what they expect. I happened to notice today\nthat I am generating SQL that won't succeed if a Java function declares\na one-component record result. Ok, so that's a bug I have to fix, and\ndocument what the real behavior is. Beyond that, if I could also say\n\"if a one component record is really what you want, then write *this*\",\nI think that would be good.\n\n\nIt seems like something that would entail a very easy change in the docs.\nThe paragraph that now says\n\n When there are OUT or INOUT parameters, the RETURNS clause can be\n omitted. 
If present, it must agree with the result type implied by\n the output parameters: RECORD if there are multiple output parameters,\n or the same type as the single output parameter.\n\nit could simply say\n\n When there are OUT or INOUT parameters, the RETURNS clause can be\n omitted. If present, it must agree with the result type implied by\n the output parameters: always RECORD if there are multiple output\n parameters. For exactly one output parameter, there is a choice:\n the same type as the single output parameter (which is the default\n if the clause is omitted), or RECORD if the function should really\n return a composite type with one component.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 22 Nov 2021 12:43:34 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: Is a function to a 1-component record type undeclarable?" }, { "msg_contents": "po 22. 11. 2021 v 18:43 odesílatel Chapman Flack <chap@anastigmatix.net>\nnapsal:\n\n> On 11/22/21 11:59, Pavel Stehule wrote:\n> > And if we allow RETURNS RECORD, then there will be new inconsistency\n> > between OUT variables and RETURNS TABLE\n>\n> I don't really see it as a new inconsistency, so much as the same old\n> inconsistency but with an escape hatch if you really mean the other thing.\n>\n> I take the consistency you speak of here to be \"anything you can say with\n> OUT parameters is equivalent to something you can say with RETURNS TABLE.\"\n> That is tidy, but I don't think it suffers much if it becomes \"everything\n> you can say with RETURNS TABLE is something you can equivalently say with\n> OUT parameters, but there is one thing for historical reasons you can't say\n> with RETURNS TABLE, and if you need to say that, with OUT params you can.\"\n>\n> > Do you have some real use cases, where proposed functionality will carry\n> > some benefit?\n>\n> The most general might be a refactoring situation where you start with\n> something producing a two-component 
record and one of those goes away, and\n> you want to make the minimally invasive changes. Going through containing\n> queries to add or remove row() or .foo would be more invasive.\n>\n\nI think that at the SQL level the difference should be minimal.\n\n\n> I often am coming from the position of a PL maintainer, where my aim\n> is to present an accurate picture of what is going on in PostgreSQL\n> to people who are thinking in Java, and to support them with language\n> constructs that will do what they expect. I happened to notice today\n> that I am generating SQL that won't succeed if a Java function declares\n> a one-component record result. Ok, so that's a bug I have to fix, and\n> document what the real behavior is. Beyond that, if I could also say\n> \"if a one component record is really what you want, then write *this*\",\n> I think that would be good.\n>\n\nI understand. PLpgSQL does this magic implicitly, so users don't need to\nsolve it, but the complexity of PLpgSQL (some routines) is significantly\nhigher and some features cannot be implemented, because some semantics are\nambiguous. Unfortunately, there can be performance differences although the\nbehaviour at the SQL level will be the same. And it is still an open question\nhow much slower it is to work with a one-column composite than with a scalar.\n\n\n>\n> It seems like something that would entail a very easy change in the docs.\n> The paragraph that now says\n>\n> When there are OUT or INOUT parameters, the RETURNS clause can be\n> omitted. If present, it must agree with the result type implied by\n> the output parameters: RECORD if there are multiple output parameters,\n> or the same type as the single output parameter.\n>\n> it could simply say\n>\n> When there are OUT or INOUT parameters, the RETURNS clause can be\n> omitted. If present, it must agree with the result type implied by\n> the output parameters: always RECORD if there are multiple output\n> parameters. 
For exactly one output parameter, there is a choice:\n> the same type as the single output parameter (which is the default\n> if the clause is omitted), or RECORD if the function should really\n> return a composite type with one component.\n>\n\n+1\n\nRegards\n\nPavel\n\n>\n> Regards,\n> -Chap\n>\n", "msg_date": "Mon, 22 Nov 2021 20:20:10 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is a function to a 1-component record type undeclarable?" } ]
[ { "msg_contents": "Hi,\n\nThere are two related, but somewhat different aspects to $subject.\n\nTL;DR: we should use -fvisibility=hidden + explicit symbol visibility,\n-Wl,-Bsymbolic, -fno-plt\n\n\n1) Cross-translation-unit calls in extension library\n\nA while ago I was looking at a profile of a workload that spent a good chunk\nof time in an extension. Looking at the instruction level profile it showed\nthat some of that time was spent doing more-complicated-than-necessary\nfunction calls to other functions within the extension.\n\nBasically the way we currently build our extensions, the compiler & linker\nassume every symbol inside the extension libraries needs to be interceptable\nby the main binary. Which means that all function calls to symbols visible\noutside the current translation unit need to be made indirectly via the PLT.\n\nAn example of this (picked from plpgsql, for simplicity):\n\n0000000000024a40 <plpgsql_inline_handler>:\n{\n...\n func = plpgsql_compile_inline(codeblock->source_text);\n 24a80: 48 8b 85 a8 fe ff ff mov -0x158(%rbp),%rax\n 24a87: 48 8b 78 08 mov 0x8(%rax),%rdi\n 24a8b: e8 20 41 fe ff call 8bb0 <plpgsql_compile_inline@plt>\n...\n\n0000000000008bb0 <plpgsql_compile_inline@plt>:\n 8bb0: ff 25 da ac 02 00 jmp *0x2acda(%rip) # 33890 <plpgsql_compile_inline@@Base+0x24de0>\n 8bb6: 68 12 01 00 00 push $0x112\n 8bbb: e9 c0 ee ff ff jmp 7a80 <_init+0x18>\n\nI.e. plpgsql_inline_handler doesn't call plpgsql_compile_inline() directly; it\ncalls plpgsql_compile_inline@plt(), which then loads the target address for\nplpgsql_compile_inline() from the global offset table. 
Depending on the linker\nsettings / flags passed to dlopen() that'll point to yet another wrapper\nfunction (doing a dynamic symbol lookup on the first call, putting the\nreal address in the GOT).\n\nThis can be addressed to some degree by using explicit symbol visibility\nmarkers, as I propose in [1].\n\nWith that patch applied compiler / linker know that plpgsql_compile_inline()\nis not an external symbol, and therefore doesn't need to go through the\nPLT/GOT. That changes the above to:\n\n func = plpgsql_compile_inline(codeblock->source_text);\n 23000: 48 8b 85 a8 fe ff ff mov -0x158(%rbp),%rax\n 23007: 48 8b 78 08 mov 0x8(%rax),%rdi\n 2300b: e8 00 a1 fe ff call d110 <plpgsql_compile_inline>\n\nwhich unsurprisingly is cheaper.\n\n\n2) Calls to exported functions in extension library\n\nHowever, this does *not* address the issue fully. When an extension calls a\nfunction that has to be exported, the symbol will continue to be loaded from\nthe PLT.\n\nE.g. hstorePairs() has to be exported, because it's called from transform\nmodules. That results in calls to hstorePairs() from within hstore.so to go\nthrough the PLT. e.g.\n\n000000000000e380 <hstore_subscript_assign>:\n{\n...\n e427: e8 e4 59 ff ff call 3e10 <hstorePairs@plt>\n\n\nIn theory we could mark such symbols as \"protected\" while compiling hstore.so\nand as \"default\" otherwise, but that's pretty complicated. And there are some\ntoolchain issues with protected visibility.\n\nThe easier approach for this class of issues is to use the linker option\n-Bsymbolic. That turns the above into a plain function call\n\n000000000000e250 <hstore_subscript_assign>:\n{\n...\n e2f7: e8 f4 a2 ff ff call 85f0 <hstorePairs>\n\n\nAs it turns out we already use -Bsymbolic on some platforms (solaris,\nhpux). But not elsewhere.\n\n\n3) Function calls from extension library to main binary\n4) C library function calls\n\nHowever, even with the above done, calls into shared libraries still\ngo through the PLT. 
This is particularly annoying for functions like palloc()\nthat are quite performance sensitive and where there's no potential use of\nintercepting the function call with a different shared library.\n\nE.g. the optimized disassembly add_dummy_return() looks like\n\n000000000000bc30 <add_dummy_return>:\n{\n...\n new = palloc0(sizeof(PLpgSQL_stmt_block));\n bc4d: bf 38 00 00 00 mov $0x38,%edi\n bc52: e8 d9 a7 ff ff call 6430 <palloc0@plt>\n...\n0000000000006430 <palloc0@plt>:\n 6430: ff 25 d2 bb 02 00 jmp *0x2bbd2(%rip) # 32008 <palloc0>\n 6436: 68 01 00 00 00 push $0x1\n 643b: e9 d0 ff ff ff jmp 6410 <_init+0x20>\n\n\nObviously we cannot easily avoid indirection entirely in this case. The offset\nto call palloc0 is not known when plpgsql.so is built. But we don't actually\nneed a two-level indirection.\n\n\nBy compiling with -fno-plt, the above becomes:\n\n000000000000b130 <add_dummy_return>:\n{\n...\n new = palloc0(sizeof(PLpgSQL_stmt_block));\n b14d: bf 38 00 00 00 mov $0x38,%edi\n b152: ff 15 80 66 02 00 call *0x26680(%rip) # 317d8 <palloc0>\n\nI.e. a single level of indirection. This has more benefits than just removing\none layer of indirection. Here's what gcc's man page says:\n\n -fno-plt\n Do not use the PLT for external function calls in position-independent code. Instead, load the callee address\n at call sites from the GOT and branch to it. This leads to more efficient code by eliminating PLT stubs and\n exposing GOT loads to optimizations.\n\nIn some cases this allows functions to use the sibling-call optimization where\nthat previously was not possible (i.e. for x86 use \"jmp\" instead of \"call\" to\ncall another function when that function call is the last thing done in a\nfunction, thereby reusing the call frame and reducing the cost of returns).\n\n\nThis doesn't just matter for extension libraries. It's also relevant for the\nmain binary (i.e. 
the upsides are bigger / more widely applicable) - every\nfunction call to libc goes through PLT+GOT (well, with a dynamically linked\nlibc). This includes things that are often called in performance critical\nbits, like strlen. E.g. without -fno-plt raw_parser() calls strlen via the\nplt:\n\n cur_token_length = strlen(yyextra->core_yy_extra.scanbuf + *llocp);\n 2775a6: 49 63 55 00 movslq 0x0(%r13),%rdx\n 2775aa: 4c 8b 3b mov (%rbx),%r15\n 2775ad: 48 89 4d c0 mov %rcx,-0x40(%rbp)\n 2775b1: 49 8d 3c 17 lea (%r15,%rdx,1),%rdi\n 2775b5: 48 89 55 c8 mov %rdx,-0x38(%rbp)\n 2775b9: e8 82 03 e5 ff call c7940 <strlen@plt>\n\nbut not with -fno-plt:\n cur_token_length = strlen(yyextra->core_yy_extra.scanbuf + *llocp);\n 2838e6: 49 63 55 00 movslq 0x0(%r13),%rdx\n 2838ea: 4c 8b 3b mov (%rbx),%r15\n 2838ed: 48 89 4d c0 mov %rcx,-0x40(%rbp)\n 2838f1: 49 8d 3c 17 lea (%r15,%rdx,1),%rdi\n 2838f5: 48 89 55 c8 mov %rdx,-0x38(%rbp)\n 2838f9: ff 15 09 45 66 00 call *0x664509(%rip) # 8e7e08 <strlen@GLIBC_2.2.5>\n\n\nI haven't run detailed benchmarks in isolation, but have seen some good\nresults. It obviously is heavily workload dependent.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20211101020311.av6hphdl6xbjbuif%40alap3.anarazel.de\n\n\n", "msg_date": "Mon, 22 Nov 2021 13:50:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Reduce function call costs on ELF platforms" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Basically they way we currently build our extensions, the compiler & linker\n> assume every symbol inside the extension libraries needs to be interceptable\n> by the main binary. 
Which means that all function calls to symbols visible\n> outside the current translation unit need to be made indirectly via the PLT.\n\nYeah, that would be nice to improve.\n\n> The easier approach for this class of issues is to use the linker option\n> -Bsymbolic.\n\nI don't recall details, but we've previously rejected the idea of\ntrying to use -Bsymbolic widely; apparently it has undesirable\nside-effects on some platforms. See commit message for e3d77ea6b\n(hopefully there's some detail in the email thread [1]). It sounds\nlike you're not actually proposing that, but I thought it would be\na good idea to note the hazard here.\n\n> By compiling with -fno-plt, the above becomes:\n\nDoes -fno-plt amount to an ABI change? If so, I'm worried that it'd\nbreak the ability to compile extensions with a different compiler.\n\nAlso, we have at least some places where there actually are cross-calls\nbetween extensions, eg hstore_perl -> plperl. Do we need to worry about\nbreaking those?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/153626613985.23143.4743626885618266803%40wrigleys.postgresql.org\n\n\n", "msg_date": "Mon, 22 Nov 2021 17:32:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Reduce function call costs on ELF platforms" }, { "msg_contents": "Hi,\n\nOn 2021-11-22 17:32:21 -0500, Tom Lane wrote:\n> > The easier approach for this class of issues is to use the linker option\n> > -Bsymbolic.\n> \n> I don't recall details, but we've previously rejected the idea of\n> trying to use -Bsymbolic widely; apparently it has undesirable\n> side-effects on some platforms. See commit message for e3d77ea6b\n> (hopefully there's some detail in the email thread [1]).\n\nHm. 
I didn't really see reasons to not use -Bsymbolic in that thread.\n\n\n> It sounds like you're not actually proposing that, but I thought it would be\n> a good idea to note the hazard here.\n\nI think it'd be good to use, but it'll be much less important once we use\nsymbol visibility.\n\n\n> > By compiling with -fno-plt, the above becomes:\n> \n> Does -fno-plt amount to an ABI change? If so, I'm worried that it'd\n> break the ability to compile extensions with a different compiler.\n\nNo, it is a change at the function callsite that's transparent to the function\nitself (unless the called function goes out of its way to do stuff like\ninspecting frame pointers or such, but they're not available by default on\nmost platforms anyway).\n\nIt does however change symbol binding, basically making all symbols bound\neagerly. Which I guess theoretically could be considered an ABI change,\nbecause it removes the ability to intercept symbols referenced in a previously\nloaded shared library with a function from a subsequently loaded library (e.g. loaded with\nRTLD_DEEPBIND) before the symbol is used. But that seems like a\nstretch. And I think most ELF platforms/linux distributions have/are moving\ntowards using -Wl,-z,now -Wl,-z,relro, which also makes symbols bound eagerly.\n\n\n> Also, we have at least some places where there actually are cross-calls\n> between extensions, eg hstore_perl -> plperl. Do we need to worry about\n> breaking those?\n\nIt does require a bit of care in the symbol visibility patch, basically\nmarking all such symbols as exported (which may happen implicitly via\nPG_FUNCTION_INFO_V1). 
For extensions that are referenced that way that\nactually seems like a good thing, because it makes it clearer what could be\nreferenced that way.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Nov 2021 15:57:45 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Reduce function call costs on ELF platforms" }, { "msg_contents": "On Mon, Nov 22, 2021 at 03:57:45PM -0800, Andres Freund wrote:\n> It does however change symbol binding, basically making all symbols bound\n> eagerly. Which I guess theoretically could be considered an ABI change,\n> because it removes the ability to intercept symbols referenced in a previously\n> loaded shared library, with a subsequently loaded library (e.g. loaded with\n> RTLD_DEEPBIND) function before the symbol is used. But that seems like a\n> stretch. And I think most ELF platforms/linux distributions have/are moving\n> towards using -Wl,-z,now -Wl,-z,relro also makes symbols bound eagerly.\n\nI found this really interesting, and I am surprised how things got so\nsuboptimal. Has it always been this way? Is it the use of C++ that is\ncausing this by default?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 22 Nov 2021 20:13:28 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Reduce function call costs on ELF platforms" }, { "msg_contents": "Hi,\n\nOn 2021-11-22 20:13:28 -0500, Bruce Momjian wrote:\n> On Mon, Nov 22, 2021 at 03:57:45PM -0800, Andres Freund wrote:\n> > It does however change symbol binding, basically making all symbols bound\n> > eagerly. Which I guess theoretically could be considered an ABI change,\n> > because it removes the ability to intercept symbols referenced in a previously\n> > loaded shared library, with a subsequently loaded library (e.g. 
loaded with\n> > RTLD_DEEPBIND) function before the symbol is used. But that seems like a\n> > stretch. And I think most ELF platforms/linux distributions have/are moving\n> > towards using -Wl,-z,now -Wl,-z,relro also makes symbols bound eagerly.\n> \n> I found this really interesting, and I am surprised how things got so\n> suboptimal.\n\nIt's always been this way on ELF afaict (*).\n\nI don't think anybody would design ELF the same way today. I think it's pretty\nclear that defaulting to making symbols interceptable was a bad idea. But it's\nwhere we are...\n\n\n> Has it always been this way? Is it the use of C++ that is causing this by\n> default?\n\nNope, this is with plain C.\n\nGreetings,\n\nAndres Freund\n\n(*) I guess you can argue it got a tad worse with the increasing use of\nposition independent executables, but that's a relatively small difference in\nthe scope of the issue.\n\n\n", "msg_date": "Mon, 22 Nov 2021 17:18:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Reduce function call costs on ELF platforms" }, { "msg_contents": "On 22.11.21 23:32, Tom Lane wrote:\n>> The easier approach for this class of issues is to use the linker option\n>> -Bsymbolic.\n> I don't recall details, but we've previously rejected the idea of\n> trying to use -Bsymbolic widely; apparently it has undesirable\n> side-effects on some platforms. See commit message for e3d77ea6b\n> (hopefully there's some detail in the email thread [1]). It sounds\n> like you're not actually proposing that, but I thought it would be\n> a good idea to note the hazard here.\n\nAlso, IIRC, -Bsymbolic was once frowned upon by packaging policies, \nsince it prevents use of LD_PRELOAD. 
I'm not sure what the current \nthinking there is, however.\n\n\n", "msg_date": "Tue, 23 Nov 2021 17:28:08 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Reduce function call costs on ELF platforms" }, { "msg_contents": "Hi,\n\nOn 2021-11-23 17:28:08 +0100, Peter Eisentraut wrote:\n> On 22.11.21 23:32, Tom Lane wrote:\n> > > The easier approach for this class of issues is to use the linker option\n> > > -Bsymbolic.\n> > I don't recall details, but we've previously rejected the idea of\n> > trying to use -Bsymbolic widely; apparently it has undesirable\n> > side-effects on some platforms. See commit message for e3d77ea6b\n> > (hopefully there's some detail in the email thread [1]). It sounds\n> > like you're not actually proposing that, but I thought it would be\n> > a good idea to note the hazard here.\n> \n> Also, IIRC, -Bsymbolic was once frowned upon by packaging policies, since it\n> prevents use of LD_PRELOAD. I'm not sure what the current thinking there\n> is, however.\n\nIt doesn't break some (most?) of the uses of LD_PRELOAD. In particular, it\ndoesn't break things like replacing the malloc implementation. When do you\nhave a symbol that you want to override *inside* your library (executables\nalready bind to their own symbols at compile time)? 
I've seen that for\nreplacing buggy functions in closed source things, but that's about it?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 Nov 2021 10:55:59 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Reduce function call costs on ELF platforms" }, { "msg_contents": "\nOn 11/24/21 13:55, Andres Freund wrote:\n> Hi,\n>\n> On 2021-11-23 17:28:08 +0100, Peter Eisentraut wrote:\n>> On 22.11.21 23:32, Tom Lane wrote:\n>>>> The easier approach for this class of issues is to use the linker option\n>>>> -Bsymbolic.\n>>> I don't recall details, but we've previously rejected the idea of\n>>> trying to use -Bsymbolic widely; apparently it has undesirable\n>>> side-effects on some platforms. See commit message for e3d77ea6b\n>>> (hopefully there's some detail in the email thread [1]). It sounds\n>>> like you're not actually proposing that, but I thought it would be\n>>> a good idea to note the hazard here.\n>> Also, IIRC, -Bsymbolic was once frowned upon by packaging policies, since it\n>> prevents use of LD_PRELOAD. I'm not sure what the current thinking there\n>> is, however.\n> It doesn't break some (most?) of the uses of LD_PRELOAD. In particular, it\n> doesn't break things like replacing the malloc implementation. When do you\n> have a symbol that you want to override *inside* your library (executables\n> already bind to their own symbols at compile time)? I've seen that for\n> replacing buggy functions in closed source things, but that's about it?\n>\n\nWhich things does it break exactly? I have a case where a library that\nis LD_PRELOADed calls PQsetSSLKeyPassHook_OpenSSL() in its constructor\nfunction. 
I'd be very unhappy if that stopped working (and so would our\nclient).\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 24 Nov 2021 17:55:03 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Reduce function call costs on ELF platforms" }, { "msg_contents": "Hi,\n\nOn 2021-11-24 17:55:03 -0500, Andrew Dunstan wrote:\n> On 11/24/21 13:55, Andres Freund wrote:\n> > On 2021-11-23 17:28:08 +0100, Peter Eisentraut wrote:\n> >> On 22.11.21 23:32, Tom Lane wrote:\n> >>>> The easier approach for this class of issues is to use the linker option\n> >>>> -Bsymbolic.\n> >>> I don't recall details, but we've previously rejected the idea of\n> >>> trying to use -Bsymbolic widely; apparently it has undesirable\n> >>> side-effects on some platforms. See commit message for e3d77ea6b\n> >>> (hopefully there's some detail in the email thread [1]). It sounds\n> >>> like you're not actually proposing that, but I thought it would be\n> >>> a good idea to note the hazard here.\n> >> Also, IIRC, -Bsymbolic was once frowned upon by packaging policies, since it\n> >> prevents use of LD_PRELOAD. I'm not sure what the current thinking there\n> >> is, however.\n> > It doesn't break some (most?) of the uses of LD_PRELOAD. In particular, it\n> > doesn't break things like replacing the malloc implementation. When do you\n> > have a symbol that you want to override *inside* your library (executables\n> > already bind to their own symbols at compile time)? I've seen that for\n> > replacing buggy functions in closed source things, but that's about it?\n> >\n> \n> Which things does it break exactly?\n\n-Bsymbolic causes symbols that are defined and referenced within one shared\nlibrary to use that definition. E.g. 
if a shared lib has a function\n\"do_something()\" and some of its code calls do_something(), you cannot use\nLD_PRELOAD (or a definition in the main binary) to redirect the call to\ndo_something() inside the shared library to something else.\n\nI.e. if a shared library calls a function that's *not* defined within that\nshared library, -Bsymbolic doesn't have an effect for that symbol.\n\n\n> I have a case where a library that\n> is LD_PRELOADed calls PQsetSSLKeyPassHook_OpenSSL() in its constructor\n> function. I'd be very unhappy if that stopped working (and so would our\n> client).\n\nBsymbolic shouldn't affect that at all.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 Nov 2021 19:57:46 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Reduce function call costs on ELF platforms" }, { "msg_contents": "\nOn 11/24/21 22:57, Andres Freund wrote:\n>\n>> Which things does it break exactly?\n> -Bsymbolic causes symbols that are defined and referenced within one shared\n> library to use that definition. E.g. if a shared lib has a function\n> \"do_something()\" and some of its code calls do_something(), you cannot use\n> LD_PRELOAD (or a definition in the main binary) to redirect the call to\n> do_something() inside the shared library to something else.\n>\n> I.e. if a shared library calls a function that's *not* defined within that\n> shared library, -Bsymbolic doesn't have an effect for that symbol.\n>\n>\n>> I have a case where a library that\n>> is LD_PRELOADed calls PQsetSSLKeyPassHook_OpenSSL() in its constructor\n>> function. 
I'd be very unhappy if that stopped working (and so would our\n>> client).\n> Bsymbolic shouldn't affect that at all.\n>\n\nThanks for the explanation.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 25 Nov 2021 11:06:27 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Reduce function call costs on ELF platforms" } ]
[ { "msg_contents": "The PG docs are not quite as clear as they might be regarding the\ndefault value of the \"publish_via_partition_root\" option.\n\nThis small change originated from the row-filter thread [RF], but then\nit was decided it should really be a separate patch (hence this post).\n\n[Tomas 11/7] comment 2: \"Why is the patch changing\npublish_via_partition_root docs? That seems like a rather unrelated\nbit.\"\n[Peter 13/7] comment 4: \"IMO you need to propose this one in another thread\"\n\nPSA.\n\n-------\n[RF] https://www.postgresql.org/message-id/flat/CAHE3wggb715X%2BmK_DitLXF25B%3DjE6xyNCH4YOwM860JR7HarGQ%40mail.gmail.com\n[Tomas 11/7] https://www.postgresql.org/message-id/849ee491-bba3-c0ae-cc25-4fce1c03f105%40enterprisedb.com\n[Peter 13/7] https://www.postgresql.org/message-id/CAHut%2BPsjWbBHTT1dS%3Dji%3DPcziQU1%2BmYqh6G0gFG6AGTCMhTN_g%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 23 Nov 2021 12:06:01 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "PG docs - CREATE PUBLICATION publish_via_partition_root" } ]
[ { "msg_contents": "Hi,\n\nThe replication slots data is stored in binary format on the disk under the\npg_replslot/<<slot_name>> directory which isn't human readable. If the\nserver is crashed/down (for whatever reasons) and unable to come up,\ncurrently there's no way for the user/admin/developer to know what were all\nthe replication slots available at the time of server crash/down to figure\nout what's the restart lsn, xid, two phase info or types of slots etc.\n\npg_replslotdata is a tool that interprets the replication slots information\nand displays it onto the stdout even if the server is crashed/down. The\ndesign of this tool is similar to other tools available in the core today\ni.e. pg_controldata, pg_waldump.\n\nAttaching initial patch herewith. I will improve it with documentation and\nother stuff a bit later.\n\nPlease see the attached picture for the sample output.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Tue, 23 Nov 2021 10:39:26 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "pg_replslotdata - a tool for displaying replication slot information" }, { "msg_contents": "On Tue, Nov 23, 2021 at 10:39 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> The replication slots data is stored in binary format on the disk under the pg_replslot/<<slot_name>> directory which isn't human readable. If the server is crashed/down (for whatever reasons) and unable to come up, currently there's no way for the user/admin/developer to know what were all the replication slots available at the time of server crash/down to figure out what's the restart lsn, xid, two phase info or types of slots etc.\n>\n> pg_replslotdata is a tool that interprets the replication slots information and displays it onto the stdout even if the server is crashed/down. The design of this tool is similar to other tools available in the core today i.e. 
pg_controldata, pg_waldump.\n>\n> Attaching initial patch herewith. I will improve it with documentation and other stuff a bit later.\n>\n> Please see the attached picture for the sample output.\n>\n> Thoughts?\n\nAttaching the rebased v2 patch.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Wed, 24 Nov 2021 21:09:48 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_replslotdata - a tool for displaying replication slot\n information" }, { "msg_contents": "On Wed, Nov 24, 2021 at 9:09 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Nov 23, 2021 at 10:39 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > The replication slots data is stored in binary format on the disk under the pg_replslot/<<slot_name>> directory which isn't human readable. If the server is crashed/down (for whatever reasons) and unable to come up, currently there's no way for the user/admin/developer to know what were all the replication slots available at the time of server crash/down to figure out what's the restart lsn, xid, two phase info or types of slots etc.\n> >\n> > pg_replslotdata is a tool that interprets the replication slots information and displays it onto the stdout even if the server is crashed/down. The design of this tool is similar to other tools available in the core today i.e. pg_controldata, pg_waldump.\n> >\n> > Attaching initial patch herewith. I will improve it with documentation and other stuff a bit later.\n> >\n> > Please see the attached picture for the sample output.\n> >\n> > Thoughts?\n>\n> Attaching the rebased v2 patch.\n\nOn windows the previous patches were failing, fixed that in the v3\npatch. 
I'm really sorry for the noise.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Wed, 24 Nov 2021 21:29:51 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_replslotdata - a tool for displaying replication slot\n information" }, { "msg_contents": "\nOn Wed, 24 Nov 2021 at 23:59, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Wed, Nov 24, 2021 at 9:09 PM Bharath Rupireddy\n>> > Thoughts?\n>>\n>> Attaching the rebased v2 patch.\n>\n> On windows the previous patches were failing, fixed that in the v3\n> patch. I'm really sorry for the noise.\n>\n\nCool! When I try to use it, there is an error for -v, --verbose option.\n\npx@ubuntu:~/Codes/postgres/Debug$ pg_replslotdata -v\npg_replslotdata: invalid option -- 'v'\nTry \"pg_replslotdata --help\" for more information.\n\nThis is because the getopt_long() call is missing 'v' in the third parameter.\n\n while ((c = getopt_long(argc, argv, \"D:v\", long_options, NULL)) != -1)\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Thu, 25 Nov 2021 00:10:51 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_replslotdata - a tool for displaying replication slot\n information" }, { "msg_contents": "On Wed, Nov 24, 2021 at 9:40 PM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> On Wed, 24 Nov 2021 at 23:59, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Wed, Nov 24, 2021 at 9:09 PM Bharath Rupireddy\n> >> > Thoughts?\n> >>\n> >> Attaching the rebased v2 patch.\n> >\n> > On windows the previous patches were failing, fixed that in the v3\n> > patch. I'm really sorry for the noise.\n> >\n>\n> Cool! 
When I try to use it, there is an error for -v, --verbose option.\n>\n> px@ubuntu:~/Codes/postgres/Debug$ pg_replslotdata -v\n> pg_replslotdata: invalid option -- 'v'\n> Try \"pg_replslotdata --help\" for more information.\n>\n> This is because the getopt_long() missing 'v' in the third parameter.\n>\n> while ((c = getopt_long(argc, argv, \"D:v\", long_options, NULL)) != -1)\n\nThanks for taking a look at the patch, attaching v4.\n\nThere are many things that I could do in the patch, for instance, more\ncomments, documentation, code improvements etc. I would like to first\nknow what hackers think about this tool, and then start spending more\ntime on it.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 25 Nov 2021 10:22:15 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_replslotdata - a tool for displaying replication slot\n information" }, { "msg_contents": "On 23.11.21 06:09, Bharath Rupireddy wrote:\n> The replication slots data is stored in binary format on the disk under \n> the pg_replslot/<<slot_name>> directory which isn't human readable. If \n> the server is crashed/down (for whatever reasons) and unable to come up, \n> currently there's no way for the user/admin/developer to know what were \n> all the replication slots available at the time of server crash/down to \n> figure out what's the restart lsn, xid, two phase info or types of slots \n> etc.\n\nWhat do you need that for? 
You can't do anything with a replication \nslot while the server is down.\n\n\n", "msg_date": "Tue, 30 Nov 2021 15:14:04 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_replslotdata - a tool for displaying replication slot\n information" }, { "msg_contents": "On 11/30/21, 6:14 AM, \"Peter Eisentraut\" <peter.eisentraut@enterprisedb.com> wrote:\r\n> On 23.11.21 06:09, Bharath Rupireddy wrote:\r\n>> The replication slots data is stored in binary format on the disk under\r\n>> the pg_replslot/<<slot_name>> directory which isn't human readable. If\r\n>> the server is crashed/down (for whatever reasons) and unable to come up,\r\n>> currently there's no way for the user/admin/developer to know what were\r\n>> all the replication slots available at the time of server crash/down to\r\n>> figure out what's the restart lsn, xid, two phase info or types of slots\r\n>> etc.\r\n>\r\n> What do you need that for? You can't do anything with a replication\r\n> slot while the server is down.\r\n\r\nOne use-case might be to discover the value you need to set for\r\nmax_replication_slots, although it's pretty trivial to discover the\r\nnumber of replication slots by looking at the folder directly.\r\nHowever, you also need to know how many replication origins there are,\r\nand AFAIK there isn't an easy way to read the replorigin_checkpoint\r\nfile at the moment. IMO a utility like this should also show details\r\nfor the replication origins. 
I don't have any other compelling use-\r\ncases at the moment, but I will say that it is typically nice from an\r\nadministrative standpoint to be able to inspect things like this\r\nwithout logging into a running server.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 30 Nov 2021 18:43:23 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: pg_replslotdata - a tool for displaying replication slot\n information" }, { "msg_contents": "On Wed, Dec 1, 2021 at 12:13 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 11/30/21, 6:14 AM, \"Peter Eisentraut\" <peter.eisentraut@enterprisedb.com> wrote:\n> > On 23.11.21 06:09, Bharath Rupireddy wrote:\n> >> The replication slots data is stored in binary format on the disk under\n> >> the pg_replslot/<<slot_name>> directory which isn't human readable. If\n> >> the server is crashed/down (for whatever reasons) and unable to come up,\n> >> currently there's no way for the user/admin/developer to know what were\n> >> all the replication slots available at the time of server crash/down to\n> >> figure out what's the restart lsn, xid, two phase info or types of slots\n> >> etc.\n> >\n> > What do you need that for? 
You can't do anything with a replication\n> > slot while the server is down.\n>\n> One use-case might be to discover the value you need to set for\n> max_replication_slots, although it's pretty trivial to discover the\n> number of replication slots by looking at the folder directly.\n\nApart from the above use-case, one can do some exploratory analysis on\nthe replication slot information after the server crash, this may be\nuseful for RCA or debugging purposes, for instance:\n1) to look at the restart_lsn of the slots to get to know why there\nwere many WAL files filled up on the disk (because of the restart_lsn\nbeing low)\n2) to know how many replication slots available at the time of crash,\nif required, one can choose to drop selective replication slots or the\nones that were falling behind to make the server up\n3) if we persist active_pid info of the replication slot to the\ndisk(currently we don't have this info in the disk), one can get to\nknow the inactive replication slots at the time of crash\n4) if the primary server is down and failover were to happen on to the\nstandby, by looking at the replication slot information on the\nprimary, one can easily recreate the slots on the standby\n\n> However, you also need to know how many replication origins there are,\n> and AFAIK there isn't an easy way to read the replorigin_checkpoint\n> file at the moment. IMO a utility like this should also show details\n> for the replication origins. I don't have any other compelling use-\n> cases at the moment, but I will say that it is typically nice from an\n> administrative standpoint to be able to inspect things like this\n> without logging into a running server.\n\nYeah, this can be added too, probably as an extra option to the\nproposed pg_replslotdata tool. 
But for now, let's deal with the\nreplication slot information alone and once this gets committed, we\ncan extend it further for replication origin info.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 1 Dec 2021 11:16:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_replslotdata - a tool for displaying replication slot\n information" }, { "msg_contents": "On Tue, Nov 30, 2021 at 9:47 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Wed, Dec 1, 2021 at 12:13 AM Bossart, Nathan <bossartn@amazon.com>\n> wrote:\n> >\n> > On 11/30/21, 6:14 AM, \"Peter Eisentraut\" <\n> peter.eisentraut@enterprisedb.com> wrote:\n> > > On 23.11.21 06:09, Bharath Rupireddy wrote:\n> > >> The replication slots data is stored in binary format on the disk\n> under\n> > >> the pg_replslot/<<slot_name>> directory which isn't human readable. If\n> > >> the server is crashed/down (for whatever reasons) and unable to come\n> up,\n> > >> currently there's no way for the user/admin/developer to know what\n> were\n> > >> all the replication slots available at the time of server crash/down\n> to\n> > >> figure out what's the restart lsn, xid, two phase info or types of\n> slots\n> > >> etc.\n> > >\n> > > What do you need that for? 
You can't do anything with a replication\n> > > slot while the server is down.\n> >\n> > One use-case might be to discover the value you need to set for\n> > max_replication_slots, although it's pretty trivial to discover the\n> > number of replication slots by looking at the folder directly.\n>\n> Apart from the above use-case, one can do some exploratory analysis on\n> the replication slot information after the server crash, this may be\n> useful for RCA or debugging purposes, for instance:\n> 1) to look at the restart_lsn of the slots to get to know why there\n> were many WAL files filled up on the disk (because of the restart_lsn\n> being low)\n>\n\nIn a disk-full scenario because of WAL, this tool comes in handy for\nidentifying which WAL files to delete to free up the space, and also helps\nassess accidental deletion of WAL files. I am not sure if there is a tool to\nhelp clean up the WAL (maybe by invoking the archive_command too?) without\nimpacting physical / logical slots, and respecting the last checkpoint\nlocation, but if one exists that will be handy\n\n", "msg_date": "Tue, 30 Nov 2021 23:08:21 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_replslotdata - a tool for displaying replication slot\n information" }, { "msg_contents": "Hi,\n\nOn 2021-11-30 18:43:23 +0000, Bossart, Nathan wrote:\n> On 11/30/21, 6:14 AM, \"Peter Eisentraut\" <peter.eisentraut@enterprisedb.com> wrote:\n> > On 23.11.21 06:09, Bharath Rupireddy wrote:\n> >> The replication slots data is stored in binary format on the disk under\n> >> the pg_replslot/<<slot_name>> directory which isn't human readable. 
If\n> >> the server is crashed/down (for whatever reasons) and unable to come up,\n> >> currently there's no way for the user/admin/developer to know what were\n> >> all the replication slots available at the time of server crash/down to\n> >> figure out what's the restart lsn, xid, two phase info or types of slots\n> >> etc.\n> >\n> > What do you need that for? You can't do anything with a replication\n> > slot while the server is down.\n\nYes, I don't think there's sufficient need for this.\n\n\n> I don't have any other compelling use- cases at the moment, but I will say\n> that it is typically nice from an administrative standpoint to be able to\n> inspect things like this without logging into a running server.\n\nWeighed against the cost of maintaining (including documentation) a separate\ntool this doesn't seem sufficient reason.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Dec 2021 14:52:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_replslotdata - a tool for displaying replication slot\n information" }, { "msg_contents": "On Thu, Dec 2, 2021 at 4:22 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-11-30 18:43:23 +0000, Bossart, Nathan wrote:\n> > On 11/30/21, 6:14 AM, \"Peter Eisentraut\" <peter.eisentraut@enterprisedb.com> wrote:\n> > > On 23.11.21 06:09, Bharath Rupireddy wrote:\n> > >> The replication slots data is stored in binary format on the disk under\n> > >> the pg_replslot/<<slot_name>> directory which isn't human readable. If\n> > >> the server is crashed/down (for whatever reasons) and unable to come up,\n> > >> currently there's no way for the user/admin/developer to know what were\n> > >> all the replication slots available at the time of server crash/down to\n> > >> figure out what's the restart lsn, xid, two phase info or types of slots\n> > >> etc.\n> > >\n> > > What do you need that for? 
You can't do anything with a replication\n> > > slot while the server is down.\n>\n> Yes, I don't think there's sufficient need for this.\n\nThanks. The idea of the pg_replslotdata emanated from real-time\nworking experience with customer issues and answering some of\ntheir questions. Given the fact that replication slots are used in\nalmost every major production server, and they are likely to cause\nproblems, postgres having a core tool like pg_replslotdata to\ninterpret the replication slot info without contacting the server\nwill definitely be useful for all the other postgres vendors out\nthere. Having some important tool in the core can avoid duplicate\nefforts.\n\n> > I don't have any other compelling use-cases at the moment, but I will say\n> > that it is typically nice from an administrative standpoint to be able to\n> > inspect things like this without logging into a running server.\n>\n> Weighed against the cost of maintaining (including documentation) a separate\n> tool this doesn't seem sufficient reason.\n\nIMHO, this shouldn't be a reason to not get something useful (useful\nIMO and few others in this thread) into the core. 
The maintenance of\nthe tools generally is low compared to the core server features once\nthey get reviewed and committed.\n\nHaving said that, other hackers may have better thoughts.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 2 Dec 2021 08:32:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_replslotdata - a tool for displaying replication slot\n information" }, { "msg_contents": "On Thu, Dec 02, 2021 at 08:32:08AM +0530, Bharath Rupireddy wrote:\n> On Thu, Dec 2, 2021 at 4:22 AM Andres Freund <andres@anarazel.de> wrote:\n>>> I don't have any other compelling use- cases at the moment, but I will say\n>>> that it is typically nice from an administrative standpoint to be able to\n>>> inspect things like this without logging into a running server.\n>>\n>> Weighed against the cost of maintaining (including documentation) a separate\n>> tool this doesn't seem sufficient reason.\n> \n> IMHO, this shouldn't be a reason to not get something useful (useful\n> IMO and few others in this thread) into the core. The maintenance of\n> the tools generally is low compared to the core server features once\n> they get reviewed and committed.\n\nWell, a bit less maintenance is always better than more maintenance.\nAn extra cost that you may be missing is related to the translation of\nthe documentation, as well as the translation of any new strings this\nwould require. 
FWIW, I don't directly see a use for this tool that\ncould not be solved with an online server.\n--\nMichael", "msg_date": "Mon, 6 Dec 2021 16:10:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_replslotdata - a tool for displaying replication slot\n information" }, { "msg_contents": "On 12/5/21, 11:10 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Thu, Dec 02, 2021 at 08:32:08AM +0530, Bharath Rupireddy wrote:\r\n>> On Thu, Dec 2, 2021 at 4:22 AM Andres Freund <andres@anarazel.de> wrote:\r\n>>>> I don't have any other compelling use- cases at the moment, but I will say\r\n>>>> that it is typically nice from an administrative standpoint to be able to\r\n>>>> inspect things like this without logging into a running server.\r\n>>>\r\n>>> Weighed against the cost of maintaining (including documentation) a separate\r\n>>> tool this doesn't seem sufficient reason.\r\n>> \r\n>> IMHO, this shouldn't be a reason to not get something useful (useful\r\n>> IMO and few others in this thread) into the core. The maintenance of\r\n>> the tools generally is low compared to the core server features once\r\n>> they get reviewed and committed.\r\n>\r\n> Well, a bit less maintenance is always better than more maintenance.\r\n> An extra cost that you may be missing is related to the translation of\r\n> the documentation, as well as the translation of any new strings this\r\n> would require. FWIW, I don't directly see a use for this tool that\r\n> could not be solved with an online server.\r\n\r\nBharath, perhaps you should maintain this outside of core PostgreSQL\r\nfor now. 
If some compelling use-cases ever surface that make it seem\r\nworth the added maintenance burden, this thread could probably be\r\nrevisited.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 6 Dec 2021 19:16:12 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: pg_replslotdata - a tool for displaying replication slot\n information" }, { "msg_contents": "Hi,\n\nOn Mon, Dec 06, 2021 at 07:16:12PM +0000, Bossart, Nathan wrote:\n> On 12/5/21, 11:10 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n> > On Thu, Dec 02, 2021 at 08:32:08AM +0530, Bharath Rupireddy wrote:\n> >> On Thu, Dec 2, 2021 at 4:22 AM Andres Freund <andres@anarazel.de> wrote:\n> >>>> I don't have any other compelling use- cases at the moment, but I will say\n> >>>> that it is typically nice from an administrative standpoint to be able to\n> >>>> inspect things like this without logging into a running server.\n> >>>\n> >>> Weighed against the cost of maintaining (including documentation) a separate\n> >>> tool this doesn't seem sufficient reason.\n> >> \n> >> IMHO, this shouldn't be a reason to not get something useful (useful\n> >> IMO and few others in this thread) into the core. The maintenance of\n> >> the tools generally is low compared to the core server features once\n> >> they get reviewed and committed.\n> >\n> > Well, a bit less maintenance is always better than more maintenance.\n> > An extra cost that you may be missing is related to the translation of\n> > the documentation, as well as the translation of any new strings this\n> > would require. FWIW, I don't directly see a use for this tool that\n> > could not be solved with an online server.\n> \n> Bharath, perhaps you should maintain this outside of core PostgreSQL\n> for now. 
If some compelling use-cases ever surface that make it seem\n> worth the added maintenance burden, this thread could probably be\n> revisited.\n\nIronically, the patch is currently failing due to a translation problem:\n\nhttps://cirrus-ci.com/task/5467034313031680\n[19:12:28.179] su postgres -c \"make -s -j${BUILD_JOBS} world-bin\"\n[19:12:44.270] make[3]: *** No rule to make target 'po/cs.po', needed by 'po/cs.mo'. Stop.\n[19:12:44.270] make[2]: *** [Makefile:44: all-pg_replslotdata-recurse] Error 2\n[19:12:44.270] make[2]: *** Waiting for unfinished jobs....\n[19:12:44.499] make[1]: *** [Makefile:42: all-bin-recurse] Error 2\n[19:12:44.499] make: *** [GNUmakefile:21: world-bin-src-recurse] Error 2\n\nLooking at the thread, I see support from 3 people:\n\n- Bharath\n- Japin\n- Satyanarayana\n\nwhile 3 committers think that the extra maintenance effort isn't worth the\nusage:\n\n- Peter E.\n- Andres\n- Michael\n\nand a +0.5 from Nathan IIUC.\n\nI also personally don't think that this is worth the maintenance effort. This\ntool being entirely client side, there's no problem with maintaining it on a\nseparate repository, as mentioned by Nathan, including using it on the cloud\nproviders that provide access to at least the data file. Another pro of the\nexternal repo is that the tool can be made available immediately and for older\nreleases.\n\nSince 3 committers voted against it I think that the patch should be closed\nas \"Rejected\". I will do that in a few days unless there's some compelling\nobjection by then.\n\n\n", "msg_date": "Sat, 15 Jan 2022 16:50:12 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_replslotdata - a tool for displaying replication slot\n information" }, { "msg_contents": "On Sat, Jan 15, 2022 at 2:20 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > Bharath, perhaps you should maintain this outside of core PostgreSQL\n> > for now. 
If some compelling use-cases ever surface that make it seem\n> > worth the added maintenance burden, this thread could probably be\n> > revisited.\n>\n> Ironically, the patch is currently failing due to a translation problem:\n>\n> https://cirrus-ci.com/task/5467034313031680\n> [19:12:28.179] su postgres -c \"make -s -j${BUILD_JOBS} world-bin\"\n> [19:12:44.270] make[3]: *** No rule to make target 'po/cs.po', needed by 'po/cs.mo'. Stop.\n> [19:12:44.270] make[2]: *** [Makefile:44: all-pg_replslotdata-recurse] Error 2\n> [19:12:44.270] make[2]: *** Waiting for unfinished jobs....\n> [19:12:44.499] make[1]: *** [Makefile:42: all-bin-recurse] Error 2\n> [19:12:44.499] make: *** [GNUmakefile:21: world-bin-src-recurse] Error 2\n\nThanks Julien. I'm okay if the patch gets dropped. But, I'm curious to\nknow why the above error occurred. Is it because I included the nls.mk\nfile in the patch which I'm not supposed to? Are these nls.mk files\ngenerated as part of the commit that does translation changes?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 17 Jan 2022 16:10:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_replslotdata - a tool for displaying replication slot\n information" }, { "msg_contents": "Hi,\n\nOn Mon, Jan 17, 2022 at 04:10:13PM +0530, Bharath Rupireddy wrote:\n> \n> Thanks Julien. I'm okay if the patch gets dropped.\n\nOk, I will take care of that soon.\n\n> But, I'm curious to\n> know why the above error occurred. Is it because I included the nls.mk\n> file in the patch which I'm not supposed to? Are these nls.mk files\n> generated as part of the commit that does translation changes?\n\nNot exactly. I think it's a good thing to take care of the translatability in\nthe initial submission, at least for the infrastructure part. So the nls.mk\nand the _() function are fine, but you should have added an empty\nAVAIL_LANGUAGES in your nls.mk to avoid those errors. 
The translation is being\ndone at a later stage by the various teams on babel ([1]) and then synced\nperiodically (usually by PeterE, thanks a lot to him!) in the tree.\n\n[1] https://babel.postgresql.org/\n\n\n", "msg_date": "Mon, 17 Jan 2022 21:11:36 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_replslotdata - a tool for displaying replication slot\n information" }, { "msg_contents": "On Mon Jan 17, 2022 at 5:11 AM PST, Julien Rouhaud wrote:\n> On Mon, Jan 17, 2022 at 04:10:13PM +0530, Bharath Rupireddy wrote:\n> > \n> > Thanks Juilen. I'm okay if the patch gets dropped.\n>\n> Ok, I will take care of that soon.\n\nI find this utility interesting and useful, especially for the reason\nthat it can provide information about the replication slots without\nconsuming a connection. I would be willing to continue the work on it.\n\nJust checking here if, a year later, anyone has seen any more, or\ninteresting use-cases that would make it a candidate for its inclusion\nin Postgres.\n\nBest regards,\nGurjeet, http://Gurje.et\nPostgres Contributors Team, https://aws.amazon.com/opensource\n\n\n", "msg_date": "Thu, 13 Apr 2023 10:41:29 -0700", "msg_from": "\"Gurjeet\" <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: pg_replslotdata - a tool for displaying replication slot\n information" } ]
[ { "msg_contents": "Hi,\n\nThe attached patch covers a case where the TLI in the filename for a\nrecord being read is different from the one it belongs to. In other\nwords, it covers the following case noted in StartupXLOG():\n\n/*\n * EndOfLogTLI is the TLI in the filename of the XLOG segment containing\n * the end-of-log. It could be different from the timeline that EndOfLog\n * nominally belongs to, if there was a timeline switch in that segment,\n * and we were reading the old WAL from a segment belonging to a higher\n * timeline.\n */\nEndOfLogTLI = xlogreader->seg.ws_tli;\n\nThe test tries to set the recovery target just before the end-of-recovery WAL\nrecord. Unfortunately, the test couldn't directly verify the EndOfLogTLI\n!= replayTLI case; I am not sure how to do that -- any suggestions\nwill be greatly appreciated. Perhaps having this test is good to\nimprove xlog.c test coverage. Also, seen from another angle, the test\ncovers a case where recovery_target_lsn is used with\nrecovery_target_inclusive=off, which doesn't exist as of now, and that\nis the reason I have added this test to the 003_recovery_targets.pl file.\n\nThoughts? Suggestions?\n\n--\nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 23 Nov 2021 11:43:21 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "TAP test to cover \"EndOfLogTLI != replayTLI\" case" }, { "msg_contents": "Hi,\n\nOn 2021-11-23 11:43:21 +0530, Amul Sul wrote:\n> The attached patch covers a case where the TLI in the filename for a\n> record being read is different from the one it belongs to. In other\n> words, it covers the following case noted in StartupXLOG():\n\n> Thoughts? Suggestions?\n\nIt seems the test isn't quite reliable. 
It occasionally fails on freebsd,\nmacos, linux and always on windows (starting with the new CI stuff; before\nthat, the test wasn't run).\n\nSee https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/3427\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 9 Jan 2022 18:55:02 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: TAP test to cover \"EndOfLogTLI != replayTLI\" case" }, { "msg_contents": "On Mon, Jan 10, 2022 at 8:25 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-11-23 11:43:21 +0530, Amul Sul wrote:\n> > The attached patch covers a case where the TLI in the filename for a\n> > record being read is different from the one it belongs to. In other\n> > words, it covers the following case noted in StartupXLOG():\n>\n> > Thoughts? Suggestions?\n>\n> It seems the test isn't quite reliable. It occasionally fails on freebsd,\n> macos, linux and always on windows (starting with the new CI stuff; before\n> that, the test wasn't run).\n>\n> See https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/3427\n>\n\nThanks for the note, I can see the same test is failing on my centos\nvm too with the latest master head(376ce3e404b). The failing reason is\nthe \"recovery_target_inclusive = off\" setting, which is unnecessary for\nthis test; the attached patch removes the same.\n\nRegards,\nAmul", "msg_date": "Mon, 10 Jan 2022 09:46:23 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: TAP test to cover \"EndOfLogTLI != replayTLI\" case" }, { "msg_contents": "Hi,\n\nOn Mon, Jan 10, 2022 at 09:46:23AM +0530, Amul Sul wrote:\n> \n> Thanks for the note, I can see the same test is failing on my centos\n> vm too with the latest master head(376ce3e404b). 
The failing reason is\n> the \"recovery_target_inclusive = off\" setting, which is unnecessary for\n> this test; the attached patch removes the same.\n\nThis version still fails on Windows according to the cfbot:\n\nhttps://cirrus-ci.com/task/5882621321281536\n\n[18:14:02.639] c:\\cirrus>call perl src/tools/msvc/vcregress.pl recoverycheck\n[18:14:56.114]\n[18:14:56.122] # Failed test 'check standby content before timeline switch 0/500FB30'\n[18:14:56.122] # at t/003_recovery_targets.pl line 234.\n[18:14:56.122] # got: '6000'\n[18:14:56.122] # expected: '7000'\n\nI'm switching the CF entry to Waiting on Author.\n\n\n", "msg_date": "Sat, 15 Jan 2022 14:05:49 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TAP test to cover \"EndOfLogTLI != replayTLI\" case" }, { "msg_contents": "On Sat, Jan 15, 2022 at 11:35 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Mon, Jan 10, 2022 at 09:46:23AM +0530, Amul Sul wrote:\n> >\n> > Thanks for the note, I can see the same test is failing on my centos\n> > vm too with the latest master head(376ce3e404b). The failing reason is\n> > the \"recovery_target_inclusive = off\" setting, which is unnecessary for\n> > this test; the attached patch removes the same.\n>\n> This version still fails on Windows according to the cfbot:\n>\n> https://cirrus-ci.com/task/5882621321281536\n>\n> [18:14:02.639] c:\\cirrus>call perl src/tools/msvc/vcregress.pl recoverycheck\n> [18:14:56.114]\n> [18:14:56.122] # Failed test 'check standby content before timeline switch 0/500FB30'\n> [18:14:56.122] # at t/003_recovery_targets.pl line 234.\n> [18:14:56.122] # got: '6000'\n> [18:14:56.122] # expected: '7000'\n> [18:14:56.122] # Looks like you failed 1 test of 10.\n> \n> I'm switching the CF entry to Waiting on Author.\n\nThanks for the note.\n\nI am not sure what really went wrong but I think the 'standby_9'\nserver shut down too early before it gets a chance to archive a\nrequired WAL file. 
The attached patch tries to shut down that server\nafter the required WAL files are archived; unfortunately, I couldn't test\nthat on Windows, so let's see how the cfbot reacts to this version.\n\nRegards,\nAmul", "msg_date": "Mon, 17 Jan 2022 17:07:48 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: TAP test to cover \"EndOfLogTLI != replayTLI\" case" }, { "msg_contents": "Hi,\n\nOn Mon, Jan 17, 2022 at 05:07:48PM +0530, Amul Sul wrote:\n> \n> I am not sure what really went wrong but I think the 'standby_9'\n> server shut down too early before it gets a chance to archive a\n> required WAL file. The attached patch tries to shut down that server\n> after the required WAL files are archived; unfortunately, I couldn't test\n> that on Windows, so let's see how the cfbot reacts to this version.\n\nThanks for the updated patch! Note that thanks to Andres and Thomas work, you\ncan now easily rely on the exact same CI than the cfbot on your own github\nrepository, if you need to debug something on a platform you don't have access\nto. 
You can check the documentation at [1] for more detail on how to setup the\n> CI.\n\nThe cfbot reports that the patch still fails on Windows but also failed on\nmacos with the same error: https://cirrus-ci.com/task/5655777858813952:\n\n[14:20:43.950] # Failed test 'check standby content before timeline switch 0/500FAF8'\n[14:20:43.950] # at t/003_recovery_targets.pl line 239.\n[14:20:43.950] # got: '6000'\n[14:20:43.950] # expected: '7000'\n[14:20:43.950] # Looks like you failed 1 test of 10.\n\nI'm switching the CF entry to Waiting on Author.\n\n\n", "msg_date": "Tue, 18 Jan 2022 12:25:15 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TAP test to cover \"EndOfLogTLI != replayTLI\" case" }, { "msg_contents": "At Tue, 18 Jan 2022 12:25:15 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> Hi,\n> \n> On Mon, Jan 17, 2022 at 09:33:57PM +0800, Julien Rouhaud wrote:\n> > \n> > Thanks for the updated patch! Note that thanks to Andres and Thomas work, you\n> > can now easily rely on the exact same CI than the cfbot on your own github\n> > repository, if you need to debug something on a platform you don't have access\n> > to. You can check the documentation at [1] for more detail on how to setup the\n> > CI.\n> \n> The cfbot reports that the patch still fails on Windows but also failed on\n> macos with the same error: https://cirrus-ci.com/task/5655777858813952:\n> \n> [14:20:43.950] # Failed test 'check standby content before timeline switch 0/500FAF8'\n> [14:20:43.950] # at t/003_recovery_targets.pl line 239.\n> [14:20:43.950] # got: '6000'\n> [14:20:43.950] # expected: '7000'\n> [14:20:43.950] # Looks like you failed 1 test of 10.\n> \n> I'm switching the CF entry to Waiting on Author.\n\nThe most significant advantages of the local CI setup are\n\n - CI immediately responds to push.\n\n - You can dirty the code with additional logging aid as much as you\n like to see closely what is going on. 
It makes us hesitant to do\n the same on this ML:p\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 18 Jan 2022 14:01:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TAP test to cover \"EndOfLogTLI != replayTLI\" case" }, { "msg_contents": "On Tue, Jan 18, 2022 at 10:31 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 18 Jan 2022 12:25:15 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in\n> > Hi,\n> >\n> > On Mon, Jan 17, 2022 at 09:33:57PM +0800, Julien Rouhaud wrote:\n> > >\n> > > Thanks for the updated patch! Note that thanks to Andres and Thomas work, you\n> > > can now easily rely on the exact same CI than the cfbot on your own github\n> > > repository, if you need to debug something on a platform you don't have access\n> > > to. You can check the documentation at [1] for more detail on how to setup the\n> > > CI.\n> >\n> > The cfbot reports that the patch still fails on Windows but also failed on\n> > macos with the same error: https://cirrus-ci.com/task/5655777858813952:\n> >\n> > [14:20:43.950] # Failed test 'check standby content before timeline switch 0/500FAF8'\n> > [14:20:43.950] # at t/003_recovery_targets.pl line 239.\n> > [14:20:43.950] # got: '6000'\n> > [14:20:43.950] # expected: '7000'\n> > [14:20:43.950] # Looks like you failed 1 test of 10.\n> >\n> > I'm switching the CF entry to Waiting on Author.\n>\n> The most significant advantages of the local CI setup are\n>\n> - CI immediately responds to push.\n>\n> - You can dirty the code with additional logging aid as much as you\n> like to see closely what is going on. 
It makes us hesitant to do\n> the same on this ML:p\n>\n\nIndeed :)\n\nI found the cause for the test failing on Windows -- it is due to the\ncustom archive command setting which wasn't setting the correct Windows\narchive directory path.\n\nThere is no option to choose a custom WAL archive and restore path in\nthe TAP test. The attached 0001 patch proposes the same, which enables\nyou to choose a custom WAL archive and restore directory. The only\nconcern I have is that $node->info() will print the wrong archive\ndirectory path in that case; do we need to fix that? We might need to\nstore archive_dir like base_dir; I wasn't sure about that. Thoughts?\nComments?\n\nRegards,\nAmul", "msg_date": "Thu, 20 Jan 2022 12:13:08 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: TAP test to cover \"EndOfLogTLI != replayTLI\" case" }, { "msg_contents": "On Thu, Jan 20, 2022 at 12:13:08PM +0530, Amul Sul wrote:\n> I found the cause for the test failing on Windows -- it is due to the\n> custom archive command setting which wasn't setting the correct Windows\n> archive directory path.\n\nAfter reading this patch and this thread, I have noticed that you are\ntesting the same thing as Heikki here, patch 0001:\nhttps://www.postgresql.org/message-id/52bc9ccd-8591-431b-0086-15d9acf25a3f@iki.fi\n\nThe patch sent on the other thread has a better description and shape,\nso perhaps we'd better drop what is posted here in favor of the other\nversion. 
Thoughts?\n--\nMichael", "msg_date": "Thu, 27 Jan 2022 15:31:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: TAP test to cover \"EndOfLogTLI != replayTLI\" case" }, { "msg_contents": "At Thu, 27 Jan 2022 15:31:37 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Jan 20, 2022 at 12:13:08PM +0530, Amul Sul wrote:\n> > I found the cause for the test failing on Windows -- it is due to the\n> > custom archive command setting which wasn't setting the correct Windows\n> > archive directory path.\n> \n> After reading this patch and this thread, I have noticed that you are\n> testing the same thing as Heikki here, patch 0001:\n> https://www.postgresql.org/message-id/52bc9ccd-8591-431b-0086-15d9acf25a3f@iki.fi\n> \n> The patch sent on the other thread has a better description and shape,\n> so perhaps we'd better drop what is posted here in favor of the other\n> version. Thoughts?\n\npg_switch_wal() is preferable to filling in a large number of\nrecords as the means to advance segments because it is stable against\nchanges of wal_segment_size.\n\nAs you said, using has_restoring on initializing node_ptr is simpler\nthan explicitly setting archive_dir from that of another node.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 27 Jan 2022 16:24:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TAP test to cover \"EndOfLogTLI != replayTLI\" case" }, { "msg_contents": "On Thu, Jan 27, 2022 at 12:01 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jan 20, 2022 at 12:13:08PM +0530, Amul Sul wrote:\n> > I found the cause for the test failing on Windows -- it is due to the\n> > custom archive command setting which wasn't setting the correct Windows\n> > archive directory path.\n>\n> After reading this patch and this thread, I have noticed that you are\n> testing the same thing as Heikki here, patch 0001:
\n> https://www.postgresql.org/message-id/52bc9ccd-8591-431b-0086-15d9acf25a3f@iki.fi\n>\n> The patch sent on the other thread has a better description and shape,\n> so perhaps we'd better drop what is posted here in favor of the other\n> version. Thoughts?\n\nYes, I do agree with you. Thanks for the comparison.\n\nRegards,\nAmul\n\n\n", "msg_date": "Fri, 28 Jan 2022 15:40:24 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: TAP test to cover \"EndOfLogTLI != replayTLI\" case" }, { "msg_contents": "On 28/01/2022 12:10, Amul Sul wrote:\n> On Thu, Jan 27, 2022 at 12:01 PM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> After reading this patch and this thread, I have noticed that you are\n>> testing the same thing as Heikki here, patch 0001:\n>> https://www.postgresql.org/message-id/52bc9ccd-8591-431b-0086-15d9acf25a3f@iki.fi\n>>\n>> The patch sent on the other thread has a better description and shape,\n>> so perhaps we'd better drop what is posted here in favor of the other\n>> version. Thoughts?\n> \n> Yes, I do agree with you. Thanks for the comparison.\n\nPushed the test from that other thread now. 
Thanks!\n\n- Heikki\n\n\n", "msg_date": "Mon, 14 Feb 2022 11:37:57 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: TAP test to cover \"EndOfLogTLI != replayTLI\" case" }, { "msg_contents": "On Mon, Feb 14, 2022 at 3:07 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 28/01/2022 12:10, Amul Sul wrote:\n> > On Thu, Jan 27, 2022 at 12:01 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >>\n> >> After reading this patch and this thread, I have noticed that you are\n> >> testing the same thing as Heikki here, patch 0001:\n> >> https://www.postgresql.org/message-id/52bc9ccd-8591-431b-0086-15d9acf25a3f@iki.fi\n> >>\n> >> The patch sent on the other thread has a better description and shape,\n> >> so perhaps we'd better drop what is posted here in favor of the other\n> >> version. Thoughts?\n> >\n> > Yes, I do agree with you. Thanks for the comparison.\n>\n> Pushed the test from that other thread now. Thanks!\n>\n\nThank you !\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 14 Feb 2022 15:12:18 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: TAP test to cover \"EndOfLogTLI != replayTLI\" case" } ]
[ { "msg_contents": "[ moving thread to -hackers for a bit more visibility ]\n\nAttached are a couple of patches I propose in the wake of commit\n405f32fc4 (Require version 0.98 of Test::More for TAP tests).\n\n0001 responds to the failure we saw on buildfarm member wrasse [1]\nwhere, despite configure having carefully checked for Test::More\nbeing >= 0.98, actual tests failed with\nTest::More version 0.98 required--this is only version 0.92 at /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/../pgsql/src/test/perl/PostgreSQL/Test/Utils.pm line 63.\nThe reason is that wrasse was choosing \"prove\" from a different\nPerl installation than \"perl\", as a result of its configuration\nhaving set PERL to a nondefault place but doing nothing about PROVE.\n\nWe already installed a couple of mitigations for that:\n(a) as of c4fe3199a, configure checks \"prove\" not \"perl\" for\nappropriate module versions;\n(b) Noah has modified wrasse's configuration to set PROVE.\nBut I'm of the opinion that (b) should not be necessary.\nIf you set PERL then it's highly likely that you want to use\n\"prove\" from the same installation. So 0001 modifies configure\nto first check for an executable \"prove\" in the same directory\nas $PERL. If that's not what you want then you should override\nit by setting PROVE explicitly.\n\nSince this is mainly meant to prevent an easy-to-make error in\nsetting up buildfarm configurations, we should back-patch it.\n\n0002 is written to apply to v14 and earlier, and what it wants\nto do is back-patch the effects of 405f32fc4, so that the\nminimum Test::More version is 0.98 in all branches. 
The thought\nhere is that (1) somebody is likely to want to back-patch a\ntest involving Test::More::subtest before too long; (2) we have\nzero coverage for older Test::More versions anyway, now that\nall buildfarm members have been updated to work with HEAD.\n\nAny objections?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2021-11-20%2022%3A58%3A17", "msg_date": "Tue, 23 Nov 2021 12:03:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Mop-up from Test::More version change patch" }, { "msg_contents": "\nOn 11/23/21 12:03, Tom Lane wrote:\n> [ moving thread to -hackers for a bit more visibility ]\n>\n> Attached are a couple of patches I propose in the wake of commit\n> 405f32fc4 (Require version 0.98 of Test::More for TAP tests).\n>\n> 0001 responds to the failure we saw on buildfarm member wrasse [1]\n> where, despite configure having carefully checked for Test::More\n> being >= 0.98, actual tests failed with\n> Test::More version 0.98 required--this is only version 0.92 at /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/../pgsql/src/test/perl/PostgreSQL/Test/Utils.pm line 63.\n> The reason is that wrasse was choosing \"prove\" from a different\n> Perl installation than \"perl\", as a result of its configuration\n> having set PERL to a nondefault place but doing nothing about PROVE.\n>\n> We already installed a couple of mitigations for that:\n> (a) as of c4fe3199a, configure checks \"prove\" not \"perl\" for\n> appropriate module versions;\n> (b) Noah has modified wrasse's configuration to set PROVE.\n> But I'm of the opinion that (b) should not be necessary.\n> If you set PERL then it's highly likely that you want to use\n> \"prove\" from the same installation. So 0001 modifies configure\n> to first check for an executable \"prove\" in the same directory\n> as $PERL. 
If that's not what you want then you should override\n> it by setting PROVE explicitly.\n>\n> Since this is mainly meant to prevent an easy-to-make error in\n> setting up buildfarm configurations, we should back-patch it.\n\n\nDo we really have much of an issue left to solve given c4fe3199a? It\nfeels a bit like a solution in search of a problem.\n\n\n\n>\n> 0002 is written to apply to v14 and earlier, and what it wants\n> to do is back-patch the effects of 405f32fc4, so that the\n> minimum Test::More version is 0.98 in all branches. The thought\n> here is that (1) somebody is likely to want to back-patch a\n> test involving Test::More::subtest before too long; (2) we have\n> zero coverage for older Test::More versions anyway, now that\n> all buildfarm members have been updated to work with HEAD.\n>\n\nThis one seems like a good idea.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 23 Nov 2021 21:00:53 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Mop-up from Test::More version change patch" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 11/23/21 12:03, Tom Lane wrote:\n>> If you set PERL then it's highly likely that you want to use\n>> \"prove\" from the same installation. So 0001 modifies configure\n>> to first check for an executable \"prove\" in the same directory\n>> as $PERL. If that's not what you want then you should override\n>> it by setting PROVE explicitly.\n\n> Do we really have much of an issue left to solve given c4fe3199a? It\n> feels a bit like a solution in search of a problem.\n\nWe don't absolutely have to do this, agreed. But I think the\ncurrent behavior fails to satisfy the POLA. 
As I remarked in\nthe other thread, I'm worried about somebody wasting time trying\nto identify why their TAP test isn't behaving the way it does\nwhen they invoke the code under the perl they think they're using.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Nov 2021 21:17:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Mop-up from Test::More version change patch" }, { "msg_contents": "On Tue, Nov 23, 2021 at 12:03:05PM -0500, Tom Lane wrote:\n> Attached are a couple of patches I propose in the wake of commit\n> 405f32fc4 (Require version 0.98 of Test::More for TAP tests).\n> \n> 0001 responds to the failure we saw on buildfarm member wrasse [1]\n> where, despite configure having carefully checked for Test::More\n> being >= 0.98, actual tests failed with\n> Test::More version 0.98 required--this is only version 0.92 at /export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/../pgsql/src/test/perl/PostgreSQL/Test/Utils.pm line 63.\n> The reason is that wrasse was choosing \"prove\" from a different\n> Perl installation than \"perl\", as a result of its configuration\n> having set PERL to a nondefault place but doing nothing about PROVE.\n> \n> We already installed a couple of mitigations for that:\n> (a) as of c4fe3199a, configure checks \"prove\" not \"perl\" for\n> appropriate module versions;\n> (b) Noah has modified wrasse's configuration to set PROVE.\n> But I'm of the opinion that (b) should not be necessary.\n> If you set PERL then it's highly likely that you want to use\n> \"prove\" from the same installation.\n\nMy regular development system is a counterexample. It uses system Perl, but\nit has a newer \"prove\" from CPAN:\n\n$ grep -E '(PERL|PROVE)' config.status\nS[\"PROVE\"]=\"/home/nm/sw/cpan/bin/prove\"\nS[\"PERL\"]=\"/usr/bin/perl\"\n\nThe patch sends it back to using the system \"prove\":\n\nS[\"PROVE\"]=\"/usr/bin/prove\"\nS[\"PERL\"]=\"/usr/bin/perl\"\n\nI could, of course, override that. 
But with so little evidence about systems\nhelped by the proposed change, I'm now -1.0 on it.\n\n> 0002 is written to apply to v14 and earlier, and what it wants\n> to do is back-patch the effects of 405f32fc4, so that the\n> minimum Test::More version is 0.98 in all branches. The thought\n> here is that (1) somebody is likely to want to back-patch a\n> test involving Test::More::subtest before too long; (2) we have\n> zero coverage for older Test::More versions anyway, now that\n> all buildfarm members have been updated to work with HEAD.\n\nwrasse v10..v14 is testing older Test::More, so coverage persists. However, I\nam okay with this change.\n\n\n", "msg_date": "Tue, 23 Nov 2021 20:37:45 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Mop-up from Test::More version change patch" }, { "msg_contents": "On 23.11.21 18:03, Tom Lane wrote:\n> 0002 is written to apply to v14 and earlier, and what it wants\n> to do is back-patch the effects of 405f32fc4, so that the\n> minimum Test::More version is 0.98 in all branches. The thought\n> here is that (1) somebody is likely to want to back-patch a\n> test involving Test::More::subtest before too long; (2) we have\n> zero coverage for older Test::More versions anyway, now that\n> all buildfarm members have been updated to work with HEAD.\n\nThe backpatching argument is a little off-target, I think. The purpose \nof subtests is so that wrappers like command_fails_like() count only as \none test on the outside, instead of however many checks it runs \ninternally. There is no use in using the subtest feature in a top-level \ntest one might add, say in response to a bugfix. So the only reason \nthis might become relevant is if someone were to backpatch new test \ninfrastructure, which seems rare and in most cases would probably \nrequire additional portability and backpatching care. 
And even then, \nyou still need to adjust the test count at the top of the file \nindividually in each branch, because the total number of tests will \nprobably be different.\n\nIn my mind, the subtests feature is useful during new development, where \nyou write a bunch of new tests and want to set the total test count by \neyeballing what you just wrote instead of having to run test cycles and \nreact to the complaints. But it won't be useful for patching tests into \nexisting files.\n\nIn summary, I would leave it alone.\n\n\n", "msg_date": "Wed, 24 Nov 2021 08:40:56 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Mop-up from Test::More version change patch" } ]
[ { "msg_contents": "Hello all,\r\n\r\nNow that the MITM CVEs are published [1], I wanted to share my wishlist\r\nof things that would have made those attacks difficult/impossible to\r\npull off.\r\n\r\n= Implicit TLS =\r\n\r\nThe frontend/backend protocol uses a STARTTLS-style negotiation, which\r\nhas had a fair number of implementation vulnerabilities in the SMTP\r\nspace [2] and is where I got the idea for the attacks to begin with.\r\nThere is a large amount of plaintext traffic on the wire that happens\r\nbefore either party trusts the other, and there is plenty of code that\r\ntouches that untrusted traffic.\r\n\r\nAn implicit TLS flow would put the TLS handshake at the very beginning\r\nof the connection, before any of the frontend/backend protocol is\r\nspoken on the wire. (You all have daily experience with this flow, by\r\nway of HTTPS.) So a verify-full client would know that the server is\r\nthe expected recipient, and that the connection is encrypted, before\r\never sending a single byte of Postgres-specific communication, and\r\nbefore receiving potential error messages. Similarly, a\r\nclientcert=verify-* server would have already partially authenticated\r\nthe client before ever receiving a startup packet, and it can trust\r\nthat any messages sent during the handshake are received without\r\ntampering.\r\n\r\nThis has downsides. Backwards compatibility tends to reintroduce\r\ndowngrade attacks, which defeats the goal of implicit TLS. So DBAs\r\nwould probably have to either perform a hard cutover to the new system\r\n(with all-new clients ready to go), or else temporarily introduce a new\r\nport for the implicit TLS mode (similar to HTTP/HTTPS separation) while\r\nolder clients are migrated.\r\n\r\nAnd obviously this doesn't help you if you're already using a different\r\nform of encryption (GSSAPI). I'm ignorant of the state of the art for\r\nimplicit encryption using that method. 
But for DBAs who are already\r\ndeploying TLS as their encryption layer and want to force its use for\r\nevery client, I think this would be a big step forward.\r\n\r\n= Client-Side Auth Selection =\r\n\r\nThe second request is for the client to stop fully trusting the server\r\nduring the authentication phase. If I tell libpq to use a client\r\ncertificate, for example, I don't think the server should be allowed to\r\nextract a plaintext password from my environment (at least not without\r\nmy explicit opt-in).\r\n\r\nAuth negotiation has been proposed a couple of times on the list (see\r\nfor example [3]), and it's already been narrowly applied for SCRAM's\r\nchannel binding, since allowing a server to sidestep the client\r\nauthentication defeats the purpose. But I'd like to see this applied\r\ngenerally, so that if the server sends an authentication request that\r\nmy client hasn't been explicitly configured to use, the connection\r\nfails. Implementations could range from a simple \"does the server's\r\nauth method match the single one I expect\" to a full SASL mechanism\r\nnegotiation.\r\n\r\nWDYT? Are these worth pursuing in the near future?\r\n\r\n--Jacob\r\n\r\n[1] https://www.postgresql.org/about/news/postgresql-141-135-129-1114-1019-and-9624-released-2349/\r\n[2] https://www.feistyduck.com/bulletproof-tls-newsletter/issue_80_vulnerabilities_show_fragility_of_starttls\r\n[3] https://www.postgresql.org/message-id/flat/CAB7nPqS-aFg0iM3AQOJwKDv_0WkAedRjs1W2X8EixSz%2BsKBXCQ%40mail.gmail.com\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Tue, 23 Nov 2021 18:27:38 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Post-CVE Wishlist" }, { "msg_contents": "Jacob Champion <pchampion@vmware.com> writes:\n> = Implicit TLS =\n\nI think this idea is a nonstarter. It breaks backwards compatibility\nat the protocol level in order to fix entirely-hypothetical bugs.\nNobody is going to like that tradeoff. 
Moreover, how shall the\nserver know whether an incoming connection is expected to use TLS\nor not, if no prior communication is allowed? \"TLS is the only\nsupported case\" is even more of a nonstarter than a protocol change.\n\n> = Client-Side Auth Selection =\n\n> The second request is for the client to stop fully trusting the server\n> during the authentication phase. If I tell libpq to use a client\n> certificate, for example, I don't think the server should be allowed to\n> extract a plaintext password from my environment (at least not without\n> my explicit opt-in).\n\nYeah. I don't recall whether it's been discussed in public or not,\nbut it certainly seems like libpq should be able to be configured so\nthat (for example) it will never send a cleartext password. It's not\nclear to me what extent of configurability would be useful, and I\ndon't want to overdesign it --- but that much at least would be a\ngood thing.\n\n> ... Implementations could range from a simple \"does the server's\n> auth method match the single one I expect\" to a full SASL mechanism\n> negotiation.\n\nAgain, any sort of protocol change seems like a nonstarter from a\ncost/benefit standpoint, even before you get to the question of\nwhether a downgrade attack could defeat it. I'm envisioning just\nhaving local configuration (probably in the connection string) that\ntells libpq to fail the connection upon seeing certain auth requests.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Nov 2021 14:18:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "On Tue, Nov 23, 2021 at 2:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Jacob Champion <pchampion@vmware.com> writes:\n> > = Implicit TLS =\n>\n> I think this idea is a nonstarter. It breaks backwards compatibility\n> at the protocol level in order to fix entirely-hypothetical bugs.\n> Nobody is going to like that tradeoff. 
Moreover, how shall the\n> server know whether an incoming connection is expected to use TLS\n> or not, if no prior communication is allowed? \"TLS is the only\n> supported case\" is even more of a nonstarter than a protocol change.\n\nI am not persuaded by this argument. Suppose we added a server option\nlike ssl_port which causes us to listen on an additional port and, on\nthat port, everything, from the first byte on this connection, is\nencrypted using SSL. Then we have to also add a matching libpq option\nwhich the client must set to tell the server that this is the\nexpectation. For people using URL connection strings, that might be as\nsimple as specifying postgresqls://whatever rather than\npostgresql://whatever. With such a design, the feature is entirely\nignorable for those who don't wish to use it, but anyone who wants it\ncan get it relatively easily. I don't know how we'd ever deprecate the\ncurrent system, but we don't necessarily need to do so. LDAP allows\neither kind of scheme -- you can either connect to a regular LDAP\nport, usually port 389, and negotiate security, or you can connect to\na different port, usually 636, and use SSL from the start. I don't see\nwhy that couldn't be a model for us, and I suspect that it would get\ndecent uptake.\n\nNow that being said, https://www.openldap.org/faq/data/cache/605.html\nclaims that ldaps (encrypt from the first byte) is deprecated in favor\nof STARTTLS (encrypt by negotiation). It's interesting that Jacob is\nproposing to introduce as a new and better option the thing they've\ndecided they don't like. I guess my question is - is either one truly\nbetter, or is this just a vi vs. emacs type debate where different\npeople have different preferences? 
I'm really not sure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Nov 2021 16:44:43 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I am not persuaded by this argument. Suppose we added a server option\n> like ssl_port which causes us to listen on an additional port and, on\n> that port, everything, from the first byte on this connection, is\n> encrypted using SSL.\n\nRight, a separate port number (much akin to http 80 vs https 443) is\npretty much the only way this could be managed. That's messy enough\nthat I don't see anyone wanting to do it for purely-hypothetical\nbenefits. If we'd done it that way from the start, it'd be fine;\nbut there's way too much established practice now.\n\n> Now that being said, https://www.openldap.org/faq/data/cache/605.html\n> claims that ldaps (encrpyt from the first byte) is deprecated in favor\n> of STARTTLS (encrypt by negotiation). It's interesting that Jacob is\n> proposing to introduce as a new and better option the thing they've\n> decided they don't like.\n\nIndeed, that is interesting. I wonder if we can find the discussions\nthat led to that decision.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Nov 2021 17:02:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "On 23/11/2021 23:44, Robert Haas wrote:\n> On Tue, Nov 23, 2021 at 2:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Jacob Champion <pchampion@vmware.com> writes:\n>>> = Implicit TLS =\n\nAside from security, one small benefit of skipping the Starttls-style \nnegotiation is that you avoid one round-trip to the server.\n\n>> I think this idea is a nonstarter. 
It breaks backwards compatibility\n>> at the protocol level in order to fix entirely-hypothetical bugs.\n>> Nobody is going to like that tradeoff. Moreover, how shall the\n>> server know whether an incoming connection is expected to use TLS\n>> or not, if no prior communication is allowed? \"TLS is the only\n>> supported case\" is even more of a nonstarter than a protocol change.\n> \n> I am not persuaded by this argument. Suppose we added a server option\n> like ssl_port which causes us to listen on an additional port and, on\n> that port, everything, from the first byte on this connection, is\n> encrypted using SSL. Then we have to also add a matching libpq option\n> which the client must set to tell the server that this is the\n> expectation. For people using URL connection strings, that might be as\n> simple as specifying postgresqls://whatever rather than\n> postgresql://whatever. With such a design, the feature is entirely\n> ignorable for those who don't wish to use it, but anyone who wants it\n> can get it relatively easily. I don't know how we'd ever deprecate the\n> current system, but we don't necessarily need to do so. LDAP allows\n> either kind of scheme -- you can either connect to a regular LDAP\n> port, usually port 389, and negotiate security, or you can connect to\n> a different port, usually 636, and use SSL from the start. I don't see\n> why that couldn't be a model for us, and I suspect that it would get\n> decent uptake.\nOne intriguing option is to allow starting the TLS handshake without the \nSSLRequest packet. On the same port. A TLS handshake and PostgreSQL \nSSLRequest/StartupMessage/CancelRequest can be distinguished by looking \nat the first few bytes. The server can peek at them, and if it looks \nlike a TLS handshake, start TLS (or fail if it's not supported), \notherwise proceed with the libpq protocol.\n\nThis would make the transition smooth: we can add server support for \nthis now, with an option in libpq to start using it. 
After some years, \nwe can make it the default in libpq, but still fall back to the old \nmethod if it fails. After a long enough transition period, we can drop \nthe old mechanism.\n\nAll that said, I'm not sure how serious I am about this. I think it \nwould work, and it wouldn't even be very complicated, but it feels \nhacky, and that's not a good thing with anything security related. And \nthe starttls-style negotiation isn't that bad, really. I'm inclined to \ndo nothing I guess. Thoughts?\n\n> Now that being said, https://www.openldap.org/faq/data/cache/605.html\n> claims that ldaps (encrypt from the first byte) is deprecated in favor\n> of STARTTLS (encrypt by negotiation). It's interesting that Jacob is\n> proposing to introduce as a new and better option the thing they've\n> decided they don't like. I guess my question is - is either one truly\n> better, or is this just a vi vs. emacs type debate where different\n> people have different preferences? I'm really not sure.\n\nHuh, that's surprising, I thought the trend is towards implicit TLS. As \nanother data point, RFC 8314 recommends implicit TLS for POP, IMAP and SMTP.\n\n- Heikki\n\n\n", "msg_date": "Wed, 24 Nov 2021 00:41:25 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "On Tue, Nov 23, 2021 at 02:18:30PM -0500, Tom Lane wrote:\n> Jacob Champion <pchampion@vmware.com> writes:\n>> = Client-Side Auth Selection = \n>> The second request is for the client to stop fully trusting the server\n>> during the authentication phase. If I tell libpq to use a client\n>> certificate, for example, I don't think the server should be allowed to\n>> extract a plaintext password from my environment (at least not without\n>> my explicit opt-in).\n> \n> Yeah. 
I don't recall whether it's been discussed in public or not,\n> but it certainly seems like libpq should be able to be configured so\n> that (for example) it will never send a cleartext password. It's not\n> clear to me what extent of configurability would be useful, and I\n> don't want to overdesign it --- but that much at least would be a\n> good thing.\n\nI recall this part being discussed in public, but I cannot put my\nfinger on the exact thread. I think that this was around when we\ndiscussed the open items of 10 or 11 for things around channel binding\nand how libpq was sensitive to downgrade attacks, which would mean\naround 2016 or 2017. I also recall reading (writing?) a patch that\nintroduced a new connection parameter that takes in input a\ncomma-separated list of keywords to allow the user to choose a set of\nauth methods accepted, failing if the server is willing to use a\nmethod that does not match with what the user has put in his list.\nPerhaps this last part has never reached -hackers though :)\n\nAnyway, the closest thing I can put my finger on now is that:\nhttps://www.postgresql.org/message-id/c5cb08f4cce46ff661ad287fadaa1b2a@postgrespro.ru\n--\nMichael", "msg_date": "Wed, 24 Nov 2021 14:09:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "On 23.11.21 23:41, Heikki Linnakangas wrote:\n> On 23/11/2021 23:44, Robert Haas wrote:\n>> On Tue, Nov 23, 2021 at 2:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Jacob Champion <pchampion@vmware.com> writes:\n>>>> = Implicit TLS =\n> \n> Aside from security, one small benefit of skipping the Starttls-style \n> negotiation is that you avoid one round-trip to the server.\n\nAlso, you could make use of existing TLS-aware proxy infrastructure \nwithout having to hack in PostgreSQL protocol support. 
There is \ndefinitely demand for that.\n\n\n\n", "msg_date": "Wed, 24 Nov 2021 08:48:27 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "On 24/11/2021 07:09, Michael Paquier wrote:\n> On Tue, Nov 23, 2021 at 02:18:30PM -0500, Tom Lane wrote:\n>> Jacob Champion <pchampion@vmware.com> writes:\n>>> = Client-Side Auth Selection =\n>>> The second request is for the client to stop fully trusting the server\n>>> during the authentication phase. If I tell libpq to use a client\n>>> certificate, for example, I don't think the server should be allowed to\n>>> extract a plaintext password from my environment (at least not without\n>>> my explicit opt-in).\n>>\n>> Yeah. I don't recall whether it's been discussed in public or not,\n>> but it certainly seems like libpq should be able to be configured so\n>> that (for example) it will never send a cleartext password. It's not\n>> clear to me what extent of configurability would be useful, and I\n>> don't want to overdesign it --- but that much at least would be a\n>> good thing.\n> \n> I recall this part being discussed in public, but I cannot put my\n> finger on the exact thread. I think that this was around when we\n> discussed the open items of 10 or 11 for things around channel binding\n> and how libpq was sensitive to downgrade attacks, which would mean\n> around 2016 or 2017. I also recall reading (writing?) 
a patch that\n> introduced a new connection parameter that takes in input a\n> comma-separated list of keywords to allow the user to choose a set of\n> auth methods accepted, failing if the server is willing to use a\n> method that does not match with what the user has put in his list.\n> Perhaps this last part has never reached -hackers though :)\n> \n> Anyway, the closest thing I can put my finger on now is that:\n> https://www.postgresql.org/message-id/c5cb08f4cce46ff661ad287fadaa1b2a@postgrespro.ru\n\nHere's a thread:\n\nhttps://www.postgresql.org/message-id/227015d8417f2b4fef03f8966dbfa5cbcc4f44da.camel%40j-davis.com\n\nThe result of that thread was that we added the \nchannel_binding=require/prefer/disable option.\n\n- Heikki\n\n\n", "msg_date": "Wed, 24 Nov 2021 10:26:55 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "On Tue, Nov 23, 2021 at 5:41 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> All that said, I'm not sure how serious I am about this. I think it\n> would work, and it wouldn't even be very complicated, but it feels\n> hacky, and that's not a good thing with anything security related. And\n> the starttls-style negotiation isn't that bad, really. I'm inclined to\n> do nothing I guess. Thoughts?\n\nI am not really persuaded by Jacob's argument that, had this only\nworked the other way from the start, this bug wouldn't have occurred.\nThat's just a tautology, because we can only have bugs in the code we\nwrite, not the code we didn't write. So perhaps we would have just had\nsome other bug, which might have been more or less serious than the\none we actually had. It's hard to say, really, because the situation\nis hypothetical.\n\nBut on reflection, one thing that isn't very nice about the current\napproach is that it won't work with anything that doesn't support the\nPostgreSQL wire protocol specifically. 
Imagine that you have a driver\nfor PostgreSQL that for some reason does not support SSL, but you want\nto use SSL to talk to the server. You cannot stick a generic proxy\nthat speaks plaintext on one side and SSL on the other side between\nthat driver and the server and have it work. You will need something\nthat knows how to proxy the PostgreSQL protocol specifically, and that\nwill probably end up being higher-overhead than a generic proxy. There\nare all sorts of other variants of this scenario, and one of them is\nprobably the motivation behind the request for proxy protocol support.\nI don't use these kinds of software myself, but I think a lot of\npeople do, and it wouldn't be a bad thing if we could be\n\"plug-compatible\" with things that people on the Internet want to do,\nwithout needing a PostgreSQL-specific adapter. SSL is certainly one of\nthose things.\n\nThis argument doesn't answer the question of whether speaking pure SSL\non a separate port is better or worse than having a single port that\ndoes either. If I had to guess, the latter is more convenient for\nusers but less convenient to code. I don't even see a compelling\nreason why we can't support multiple models here, supposing someone is\nwilling to do the work and fix the bugs that result.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Nov 2021 09:40:47 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "On Wed, 2021-11-24 at 09:40 -0500, Robert Haas wrote:\r\n> I am not really persuaded by Jacob's argument that, had this only\r\n> worked the other way from the start, this bug wouldn't have occurred.\r\n> That's just a tautology, because we can only have bugs in the code we\r\n> write, not the code we didn't write. So perhaps we would have just had\r\n> some other bug, which might have been more or less serious than the\r\n> one we actually had. 
It's hard to say, really, because the situation\r\n> is hypothetical.\r\n\r\nI'm not trying to convince you that there wouldn't have been bugs.\r\nThere will always be bugs.\r\n\r\nWhat I'm trying to convince you of is that this pattern of beginning a\r\nTLS conversation is known to be particularly error-prone, across\r\nmultiple protocols and implementations. I think this is supported by\r\nthe fact that at least three independent client libraries made changes\r\nin response to this Postgres CVE, a decade after the first writeup of\r\nthis exact vulnerability.\r\n\r\n- https://github.com/postgres/postgres/commit/160c0258802\r\n- https://github.com/pgbouncer/pgbouncer/commit/e4453c9151a\r\n- https://github.com/yandex/odyssey/commit/4e00bf797a\r\n\r\nI don't buy the idea that, because we have fixed that particular\r\nvulnerability, we've rendered this entire class of bugs \"hypothetical\".\r\nThere will be more code and more clients. There will always be bugs.\r\nI'd rather the bugs that people write be in places that are less\r\nsecurity-critical.\r\n\r\n> This argument doesn't answer the question of whether speaking pure SSL\r\n> on a separate port is better or worse than having a single port that\r\n> does either. If I had to guess, the latter is more convenient for\r\n> users but less convenient to code. I don't even see a compelling\r\n> reason why we can't support multiple models here, supposing someone is\r\n> willing to do the work and fix the bugs that result.\r\n\r\nI only have experience in the area of HTTP(S), which supports three\r\nmodels of plaintext-only, plaintext-upgrade-to-TLS (which is rare in\r\npractice), and implicit-TLS. 
I'm not aware of mainstream efforts to mix\r\nplaintext and implicit-TLS traffic on the same HTTP port -- but there\r\nare projects that fill that niche [1] -- so I don't know what security\r\nissues might arise from that approach.\r\n\r\n--Jacob\r\n\r\n[1] https://github.com/mscdex/httpolyglot\r\n", "msg_date": "Wed, 24 Nov 2021 18:29:44 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "On Wed, Nov 24, 2021 at 1:29 PM Jacob Champion <pchampion@vmware.com> wrote:\n> What I'm trying to convince you of is that this pattern of beginning a\n> TLS conversation is known to be particularly error-prone, across\n> multiple protocols and implementations. I think this is supported by\n> the fact that at least three independent client libraries made changes\n> in response to this Postgres CVE, a decade after the first writeup of\n> this exact vulnerability.\n>\n> - https://github.com/postgres/postgres/commit/160c0258802\n> - https://github.com/pgbouncer/pgbouncer/commit/e4453c9151a\n> - https://github.com/yandex/odyssey/commit/4e00bf797a\n\nSure, that's certainly true, but there are many programming patterns\nthat have well-known gotchas, and people still write programs that do\nthose things, debug them, and are satisfied with the results.\nBeginning programming classes often cover the abuse of recursion using\nfact() and fib() as examples. It's extremely easy to write a program\nthat concatenates a large number of strings with O(n^2) runtime, and\ntons of people must have made that mistake. I've personally written fork()\nbombs multiple times, sometimes unintentionally. None of that proves\nthat computing factorials or fibonacci numbers, concatenating strings,\nor calling fork() are things that you just should not do. However, if\nyou do those things and make the classic errors, somebody's probably\ngoing to think that you're kind of stupid. 
So here.\n\nI think it would take an overwhelming amount of evidence to convince\nthe project to remove support for the current method. One or even two\nor three high-severity bugs will probably not convince the project to\ndo more than spend more time studying that code and trying to tighten\nthings up in a systematic way. Even if we did agree to move away from\nit, we would most likely add support for the replacement method now\nwith a view to deprecating the old way in 6-10 years when existing\nreleases are out of support, which means we'd still need to fix all of\nthe bugs in the existing implementation, or at least all of the ones\ndiscovered between now and then. The bar for actually ripping it out\non an expedited time scale would be proving not only that it's broken\nin multiple ways, but that it's so badly broken that it can't be fixed\nwith any reasonable amount of effort. And I just don't see one bug\nthat had a pretty localized fix is really moving the needle as far as\nthat burden of proof is concerned.\n\nAnd if the existing method is not going away, then adding a new method\njust means that we have two things that can have bugs instead of one.\nThat might or might not be an advancement in usability or convenience,\nbut it certainly can't be less buggy.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Nov 2021 14:01:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "Jacob Champion <pchampion@vmware.com> writes:\n> What I'm trying to convince you of is that this pattern of beginning a\n> TLS conversation is known to be particularly error-prone, across\n> multiple protocols and implementations. 
I think this is supported by\n> the fact that at least three independent client libraries made changes\n> in response to this Postgres CVE, a decade after the first writeup of\n> this exact vulnerability.\n\nWell, it's not clear that they didn't just copy libpq's buggy logic,\nso I'm not sure how \"independent\" these bugs are. I actually took\nconsiderable comfort from the number of clients that *weren't*\nvulnerable.\n\n> I don't buy the idea that, because we have fixed that particular\n> vulnerability, we've rendered this entire class of bugs \"hypothetical\".\n> There will be more code and more clients. There will always be bugs.\n> I'd rather the bugs that people write be in places that are less\n> security-critical.\n\nUnless we actively remove the existing way of starting SSL encryption\n--- and GSS encryption, and anything else somebody proposes in future ---\nwe are not going to be able to design out this class of bugs. Maybe\nwe could start the process now in the hopes of making such a breaking\nchange ten years down the road; but whether anyone will remember to\npull the trigger then is doubtful, and even if we do remember, you can\nbe dead certain it will still break some people's clients. So I don't\nput much stock in the argument that this will make things more secure.\n(Ten years from now, SSL may be dead and replaced by something more\nsecure against quantum computers.)\n\nThe suggestion that we could remove one network roundtrip is worth\nsomething. And perhaps the argument about improving compatibility\nwith tools that know SSL but not the PG wire protocol (although\nthat one seems pretty unbacked by concrete facts). Whether these\nthings make it worth the effort is dubious in my mind, but others\nmay evaluate that differently. Note, however, that neither argument\nimpels us to break compatibility with existing clients. 
That's a\nfar heavier price to pay, and basically I don't believe that we\nare willing to pay it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Nov 2021 14:03:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think it would take an overwhelming amount of evidence to convince\n> the project to remove support for the current method. One or even two\n> or three high-severity bugs will probably not convince the project to\n> do more than spend more studying that code and trying to tighten\n> things up in a systematic way.\n\nOne other point to be made here is that it seems like a stretch to call\nthese particular bugs \"high-severity\". Given what we learned about\nthe difficulty of exploiting the libpq bug, and the certainty that any\nother clients sharing the issue would have their own idiosyncrasies\nnecessitating a custom-designed attack, I rather doubt that we're going\nto hear of anybody trying to exploit the issue in the field.\n\n(By no means do I suggest that these bugs aren't worth fixing when we\nfind them. But so far they seem very easy to fix. So moving mountains\nto design out just this one type of bug doesn't seem like a great use\nof our finite earth-moving capacity.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Nov 2021 14:53:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "On Wed, 2021-11-24 at 14:03 -0500, Tom Lane wrote:\r\n> > I don't buy the idea that, because we have fixed that particular\r\n> > vulnerability, we've rendered this entire class of bugs \"hypothetical\".\r\n> > There will be more code and more clients. 
There will always be bugs.\r\n> > I'd rather the bugs that people write be in places that are less\r\n> > security-critical.\r\n> \r\n> Unless we actively remove the existing way of starting SSL encryption\r\n> --- and GSS encryption, and anything else somebody proposes in future ---\r\n> we are not going to be able to design out this class of bugs.\r\n\r\n_We_ can't. I get that. But if this feature is introduced, new clients\r\nwill begin to have the option of designing it out of their code. And\r\nDBAs will have the option of locking down their servers so that any new\r\nbugs we introduce in the TLS-upgrade codepath will simply not affect\r\nthem.\r\n\r\nThe ecosystem has the option of transitioning faster than we can. And\r\nthen, some number of releases later, an entirely new conversation might\r\nhappen. (Or it might not.)\r\n\r\n> Maybe\r\n> we could start the process now in the hopes of making such a breaking\r\n> change ten years down the road; but whether anyone will remember to\r\n> pull the trigger then is doubtful, and even if we do remember, you can\r\n> be dead certain it will still break some people's clients.\r\n\r\nI am familiar with the \"we didn't plant a tree 20 years ago, so we\r\nshouldn't plant one now\" line of argument. :D I hope it's not as\r\npersuasive as it used to be.\r\n\r\n> So I don't\r\n> put much stock in the argument that this will make things more secure.\r\n> (Ten years from now, SSL may be dead and replaced by something more\r\n> secure against quantum computers.)\r\n\r\nThat would be great! But I suspect that if that happens, the new\r\nargument will be \"we can't upgrade our server to XQuantum-only! 
Look at\r\nall these legacy SSL clients.\"\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 24 Nov 2021 19:56:54 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "On Wed, Nov 24, 2021 at 2:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> One other point to be made here is that it seems like a stretch to call\n> these particular bugs \"high-severity\".\n\nWell, I was referring to the CVSS score, which was in the \"high\" range.\n\n> Given what we learned about\n> the difficulty of exploiting the libpq bug, and the certainty that any\n> other clients sharing the issue would have their own idiosyncrasies\n> necessitating a custom-designed attack, I rather doubt that we're going\n> to hear of anybody trying to exploit the issue in the field.\n\nI don't know. The main thing that I find consoling is the fact that\nmost people probably have the libpq connection behind a firewall where\nnasty people can't even connect to the port. But there are probably\nexceptions.\n\n> (By no means do I suggest that these bugs aren't worth fixing when we\n> find them. But so far they seem very easy to fix. So moving mountains\n> to design out just this one type of bug doesn't seem like a great use\n> of our finite earth-moving capacity.)\n\nI have enough trouble just moving the couch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Nov 2021 14:57:15 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "On Wed, 2021-11-24 at 14:01 -0500, Robert Haas wrote:\r\n> The bar for actually ripping it out\r\n> on an expedited time scale would be proving not only that it's broken\r\n> in multiple ways, but that it's so badly broken that it can't be fixed\r\n> with any reasonable amount of effort. 
And I just don't see one bug\r\n> that had a pretty localized fix is really moving the needle as far as\r\n> that burden of proof is concerned.\r\n\r\nI'm not suggesting that we rip it out today, for the record. I agree\r\nwith you on where the bar is for that.\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 24 Nov 2021 20:00:13 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Nov 24, 2021 at 2:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> One other point to be made here is that it seems like a stretch to call\n>> these particular bugs \"high-severity\".\n\n> Well, I was referring to the CVSS score, which was in the \"high\" range.\n\nThe server CVE is; the client CVE, not so much, precisely because we\ncouldn't find exciting exploits.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Nov 2021 15:14:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "On Wed, Nov 24, 2021 at 3:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Wed, Nov 24, 2021 at 2:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> One other point to be made here is that it seems like a stretch to call\n> >> these particular bugs \"high-severity\".\n>\n> > Well, I was referring to the CVSS score, which was in the \"high\" range.\n>\n> The server CVE is; the client CVE, not so much, precisely because we\n> couldn't find exciting exploits.\n\nRight, I understand that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Nov 2021 15:19:33 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "On Tue, 2021-11-23 at 18:27 +0000, Jacob Champion wrote:\r\n> Now that the MITM CVEs are published [1], I wanted to share my 
wishlist\r\n> of things that would have made those attacks difficult/impossible to\r\n> pull off.\r\n\r\nNow that we're post-commitfest, here's my summary of the responses so\r\nfar:\r\n\r\n> = Client-Side Auth Selection =\r\n\r\nThere is interest in letting libpq reject certain auth methods coming\r\nback from the server, perhaps using a simple connection option, and\r\nthere are some prior conversations on the list to look into.\r\n\r\n> = Implicit TLS =\r\n\r\nReactions to implicit TLS were mixed, from \"we should not do this\" to\r\n\"it might be nice to have the option, from a technical standpoint\".\r\nBoth a separate-port model and a shared-port model were tentatively\r\nproposed. The general consensus seems to be that the StartTLS-style\r\nflow is currently sufficient from a security standpoint.\r\n\r\nI didn't see any responses that were outright in favor, so I think my\r\nremaining question is: are there any committers who think a prototype\r\nwould be worth the time for a motivated implementer?\r\n\r\nThanks for the discussion!\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 7 Dec 2021 18:49:54 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "On 07.12.21 19:49, Jacob Champion wrote:\n>> = Implicit TLS =\n> Reactions to implicit TLS were mixed, from \"we should not do this\" to\n> \"it might be nice to have the option, from a technical standpoint\".\n> Both a separate-port model and a shared-port model were tentatively\n> proposed. The general consensus seems to be that the StartTLS-style\n> flow is currently sufficient from a security standpoint.\n> \n> I didn't see any responses that were outright in favor, so I think my\n> remaining question is: are there any committers who think a prototype\n> would be worth the time for a motivated implementer?\n\nI'm quite interested in this. My next question would be how complicated \nit would be. 
Is it just a small block of code that peeks at a few bytes \nand decides it's a TLS handshake? Or would it require a major \nrestructuring of all the TLS support code? Possibly something in the \nmiddle.\n\n\n", "msg_date": "Thu, 9 Dec 2021 16:24:23 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "On 09/12/2021 17:24, Peter Eisentraut wrote:\n> On 07.12.21 19:49, Jacob Champion wrote:\n>>> = Implicit TLS =\n>> Reactions to implicit TLS were mixed, from \"we should not do this\" to\n>> \"it might be nice to have the option, from a technical standpoint\".\n>> Both a separate-port model and a shared-port model were tentatively\n>> proposed. The general consensus seems to be that the StartTLS-style\n>> flow is currently sufficient from a security standpoint.\n>>\n>> I didn't see any responses that were outright in favor, so I think my\n>> remaining question is: are there any committers who think a prototype\n>> would be worth the time for a motivated implementer?\n> \n> I'm quite interested in this. My next question would be how complicated\n> it would be. Is it just a small block of code that peeks at a few bytes\n> and decides it's a TLS handshake? Or would it require a major\n> restructuring of all the TLS support code? Possibly something in the\n> middle.\n\nProcessStartupPacket() currently reads the first 4 bytes coming from the \nclient to decide what kind of a connection it is, and I believe a TLS \nClientHello message always begins with the same sequence of bytes, so it \nwould be easy to check for.\n\nYou could use recv(.., MSG_PEEK | MSG_WAITALL) flags to leave the bytes \nin the OS buffer. Not sure how portable that is, though. Alternatively, \nyou could stash them e.g. 
in a global variable and modify \nsecure_raw_read() to return those bytes first.\n\nOverall, doesn't seem very hard to me.\n\n- Heikki\n\n\n", "msg_date": "Fri, 10 Dec 2021 15:43:08 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Post-CVE Wishlist" }, { "msg_contents": "On Fri, 2021-12-10 at 15:43 +0200, Heikki Linnakangas wrote:\r\n> ProcessStartupPacket() currently reads the first 4 bytes coming from the \r\n> client to decide what kind of a connection it is, and I believe a TLS \r\n> ClientHello message always begins with the same sequence of bytes, so it \r\n> would be easy to check for.\r\n> \r\n> You could use recv(.., MSG_PEEK | MSG_WAITALL) flags to leave the bytes \r\n> in the OS buffer. Not sure how portable that is, though. Alternatively, \r\n> you could stash them e.g. in a global variable and modify \r\n> secure_raw_read() to return those bytes first.\r\n> \r\n> Overall, doesn't seem very hard to me.\r\n\r\nAfter further thought... Seems like sharing a port between implicit and\r\nexplicit TLS will still allow a MITM to put bytes on the wire to try to\r\nattack the client-to-server communication, because they can craft the\r\nSSLRequest themselves and then hand it off to the real client.\r\n\r\nBut they shouldn't be able to attack the server-to-client communication\r\nif the client is using implicit TLS, so it's still an overall\r\nimprovement? I wonder if there are any other protocols out there doing this.\r\n\r\n--Jacob\r\n", "msg_date": "Fri, 17 Dec 2021 01:00:10 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Post-CVE Wishlist" } ]
[ { "msg_contents": "On 8/26/20, 2:13 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 8/26/20, 12:16 PM, \"Alvaro Herrera\" <alvherre@2ndquadrant.com> wrote:\r\n>> On 2020-Aug-20, Jeremy Schneider wrote:\r\n>>> Alternatively, if we don't want to take this approach, then I'd argue\r\n>>> that we should update README.tuplock to explicitly state that\r\n>>> XMAX_LOCK_ONLY and XMAX_COMMITTED are incompatible (just as it already\r\n>>> states for HEAP_XMAX_IS_MULTI and HEAP_XMAX_COMMITTED) and clean up the\r\n>>> code in heapam_visibility.c for consistency.\r\n>>\r\n>> Yeah, I like this approach better for the master branch; not just clean\r\n>> up as in remove the cases that handle it, but also actively elog(ERROR)\r\n>> if the condition ever occurs (hopefully with some known way to fix the\r\n>> problem; maybe by \"WITH tup AS (DELETE FROM tab WHERE .. RETURNING *)\r\n>> INSERT * INTO tab FROM tup\" or similar.)\r\n>\r\n> +1. I wouldn't mind picking this up, but it might be some time before\r\n> I can get to it.\r\n\r\nI've finally gotten started on this, and I've attached a work-in-\r\nprogress patch to gather some early feedback. I'm specifically\r\nwondering if there are other places it'd be good to check for these\r\nunsupported combinations and whether we should use the\r\nHEAP_XMAX_IS_LOCKED_ONLY macro or just look for the\r\nHEAP_XMAX_LOCK_ONLY bit. IIUC HEAP_XMAX_IS_LOCKED_ONLY is intended to\r\nhandle some edge cases after pg_upgrade, but AFAICT\r\nHEAP_XMAX_COMMITTED was not used for those previous bit combinations,\r\neither. 
Therefore, I've used the HEAP_XMAX_IS_LOCKED_ONLY macro in\r\nthe attached patch, but I would not be surprised to learn that this is\r\nwrong for some reason.\r\n\r\nNathan", "msg_date": "Wed, 24 Nov 2021 00:26:39 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "The archives seem unhappy with my attempt to revive this old thread,\r\nso here is a link to it in case anyone is looking for more context:\r\n\r\n https://www.postgresql.org/message-id/flat/3476708e-7919-20cb-ca45-6603470565f7%40amazon.com\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 24 Nov 2021 00:32:00 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "\n\n> On Nov 23, 2021, at 4:26 PM, Bossart, Nathan <bossartn@amazon.com> wrote:\n> \n> I've finally gotten started on this, and I've attached a work-in-\n> progress patch to gather some early feedback. I'm specifically\n> wondering if there are other places it'd be good to check for these\n> unsupported combinations and whether we should use the\n> HEAP_XMAX_IS_LOCKED_ONLY macro or just look for the\n> HEAP_XMAX_LOCK_ONLY bit. \n\nI have to wonder if, when corruption is reported for conditions like this:\n\n+\tif ((ctx->tuphdr->t_infomask & HEAP_XMAX_COMMITTED) &&\n+\t\tHEAP_XMAX_IS_LOCKED_ONLY(ctx->tuphdr->t_infomask))\n\nif the first thing we're going to want to know is which branch of the HEAP_XMAX_IS_LOCKED_ONLY macro evaluated true? Should we split this check to do each branch of the macro separately, such as:\n\nif (ctx->tuphdr->t_infomask & HEAP_XMAX_COMMITTED)\n{ \n if (ctx->tuphdr->t_infomask & HEAP_XMAX_LOCK_ONLY)\n ... report something ...\n else if ((ctx->tuphdr->t_infomask & (HEAP_XMAX_IS_MULTI | HEAP_LOCK_MASK)) == HEAP_XMAX_EXCL_LOCK)\n ... 
report a different thing ...\n}\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 23 Nov 2021 16:35:56 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "On 11/23/21, 4:36 PM, \"Mark Dilger\" <mark.dilger@enterprisedb.com> wrote:\r\n> I have to wonder if, when corruption is reported for conditions like this:\r\n>\r\n> + if ((ctx->tuphdr->t_infomask & HEAP_XMAX_COMMITTED) &&\r\n> + HEAP_XMAX_IS_LOCKED_ONLY(ctx->tuphdr->t_infomask))\r\n>\r\n> if the first thing we're going to want to know is which branch of the HEAP_XMAX_IS_LOCKED_ONLY macro evaluated true? Should we split this check to do each branch of the macro separately, such as:\r\n>\r\n> if (ctx->tuphdr->t_infomask & HEAP_XMAX_COMMITTED)\r\n> {\r\n> if (ctx->tuphdr->t_infomask & HEAP_XMAX_LOCK_ONLY)\r\n> ... report something ...\r\n> else if ((ctx->tuphdr->t_infomask & (HEAP_XMAX_IS_MULTI | HEAP_LOCK_MASK)) == HEAP_XMAX_EXCL_LOCK)\r\n> ... report a different thing ...\r\n> }\r\n\r\nThis is a good point. Right now, you'd have to manually inspect the\r\ninfomask field to determine the exact form of the invalid combination.\r\nMy only worry with this is that we'd want to make sure these checks\r\nstayed consistent with the definition of HEAP_XMAX_IS_LOCKED_ONLY in\r\nhtup_details.h. I'm guessing HEAP_XMAX_IS_LOCKED_ONLY is unlikely to\r\nchange all that often, though.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 24 Nov 2021 00:51:15 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "\n\n> On Nov 23, 2021, at 4:51 PM, Bossart, Nathan <bossartn@amazon.com> wrote:\n> \n> This is a good point. 
Right now, you'd have to manually inspect the\n> infomask field to determine the exact form of the invalid combination.\n> My only worry with this is that we'd want to make sure these checks\n> stayed consistent with the definition of HEAP_XMAX_IS_LOCKED_ONLY in\n> htup_details.h. I'm guessing HEAP_XMAX_IS_LOCKED_ONLY is unlikely to\n> change all that often, though.\n\nI expect that your patch will contain some addition to the amcheck (or pg_amcheck) tests, so if we ever allow some currently disallowed bit combination, we'd be reminded of the need to update amcheck. So I'm not too worried about that aspect of this.\n\nI prefer not to have to get a page (or full file) from a customer when the check reports corruption. Even assuming they are comfortable giving that, which they may not be, it is an extra round trip to the customer asking for stuff.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 23 Nov 2021 16:58:29 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "On 11/23/21, 4:59 PM, \"Mark Dilger\" <mark.dilger@enterprisedb.com> wrote:\r\n>> On Nov 23, 2021, at 4:51 PM, Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>>\r\n>> This is a good point. Right now, you'd have to manually inspect the\r\n>> infomask field to determine the exact form of the invalid combination.\r\n>> My only worry with this is that we'd want to make sure these checks\r\n>> stayed consistent with the definition of HEAP_XMAX_IS_LOCKED_ONLY in\r\n>> htup_details.h. I'm guessing HEAP_XMAX_IS_LOCKED_ONLY is unlikely to\r\n>> change all that often, though.\r\n>\r\n> I expect that your patch will contain some addition to the amcheck (or pg_amcheck) tests, so if we ever allow some currently disallowed bit combination, we'd be reminded of the need to update amcheck. 
So I'm not too worried about that aspect of this.\r\n>\r\n> I prefer not to have to get a page (or full file) from a customer when the check reports corruption. Even assuming they are comfortable giving that, which they may not be, it is an extra round trip to the customer asking for stuff.\r\n\r\nAnother option we might consider is only checking for the\r\nHEAP_XMAX_LOCK_ONLY bit instead of everything in\r\nHEAP_XMAX_IS_LOCKED_ONLY. IIUC everything else is only expected to\r\nhappen for upgrades from v9.2 or earlier, so it might be pretty rare\r\nat this point. Otherwise, I'll extract the exact bit pattern for the\r\nerror message as you suggest.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 24 Nov 2021 20:53:08 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "\n\n> On Nov 24, 2021, at 12:53 PM, Bossart, Nathan <bossartn@amazon.com> wrote:\n> \n> Another option we might consider is only checking for the\n> HEAP_XMAX_LOCK_ONLY bit instead of everything in\n> HEAP_XMAX_IS_LOCKED_ONLY. IIUC everything else is only expected to\n> happen for upgrades from v9.2 or earlier, so it might be pretty rare\n> at this point. Otherwise, I'll extract the exact bit pattern for the\n> error message as you suggest.\n\nI would prefer to detect and report any \"can't happen\" bit patterns without regard for how likely the pattern may be. The difficulty is in proving that a bit pattern is disallowed. Just because you can't find a code path in the current code base that would create a pattern doesn't mean it won't have legitimately been created by some past release or upgrade path. As such, any prohibitions explicitly in the backend, such as Asserts around a condition, are really valuable. 
You can know that the pattern is disallowed, since the server would Assert on it if encountered.\n\nAside from that, I don't really buy the argument that databases upgraded from v9.2 or earlier are rare. Even if servers *running* v9.2 or earlier are (or become) rare, servers initialized that far back which have been upgraded one or more times since then may be common.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 25 Nov 2021 09:15:19 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "On 11/25/21, 9:16 AM, \"Mark Dilger\" <mark.dilger@enterprisedb.com> wrote:\r\n>> On Nov 24, 2021, at 12:53 PM, Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>>\r\n>> Another option we might consider is only checking for the\r\n>> HEAP_XMAX_LOCK_ONLY bit instead of everything in\r\n>> HEAP_XMAX_IS_LOCKED_ONLY. IIUC everything else is only expected to\r\n>> happen for upgrades from v9.2 or earlier, so it might be pretty rare\r\n>> at this point. Otherwise, I'll extract the exact bit pattern for the\r\n>> error message as you suggest.\r\n>\r\n>I would prefer to detect and report any \"can't happen\" bit patterns without regard for how likely the pattern may be. The difficulty is in proving that a bit pattern is disallowed. Just because you can't find a code path in the current code base that would create a pattern doesn't mean it won't have legitimately been created by some past release or upgrade path. As such, any prohibitions explicitly in the backend, such as Asserts around a condition, are really valuable. You can know that the pattern is disallowed, since the server would Assert on it if encountered.\r\n>\r\n> Aside from that, I don't really buy the argument that databases upgraded from v9.2 or earlier are rare. 
Even if servers *running* v9.2 or earlier are (or become) rare, servers initialized that far back which have been upgraded one or more times since then may be common.\r\n\r\nOkay, I'll do it that way in the next revision.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 29 Nov 2021 18:06:22 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "On 11/29/21, 10:10 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> Okay, I'll do it that way in the next revision.\r\n\r\nv2 attached.\r\n\r\nNathan", "msg_date": "Wed, 1 Dec 2021 00:49:43 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "On 11/30/21, 4:54 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> v2 attached.\r\n\r\nI accidentally left a redundant check in v2, so here is a v3 without\r\nit.\r\n\r\nMy proposed patch adds a few checks for the unsupported bit patterns\r\nin the visibility code, but it is far from exhaustive. I'm wondering\r\nif it might be better just to add a function or macro that everything\r\nexported from heapam_visibility.c is expected to call. 
My guess is\r\nthe main argument against that would be the possible performance\r\nimpact.\r\n\r\nNathan", "msg_date": "Wed, 1 Dec 2021 18:59:25 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "\n\n> On Dec 1, 2021, at 10:59 AM, Bossart, Nathan <bossartn@amazon.com> wrote:\n> \n> here is a v3\n\nIt took a while for me to get to this....\n\n@@ -1304,33 +1370,46 @@ HeapTupleSatisfiesVacuumHorizon(HeapTuple htup, Buffer buffer, TransactionId *de\n\n if (HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask))\n {\n+ if (unlikely(tuple->t_infomask & HEAP_XMAX_COMMITTED))\n+ {\n+ if (tuple->t_infomask & HEAP_XMAX_LOCK_ONLY)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DATA_CORRUPTED),\n+ errmsg_internal(\"found tuple with HEAP_XMAX_COMMITTED \"\n+ \"and HEAP_XMAX_LOCK_ONLY\")));\n+\n+ /* pre-v9.3 lock-only bit pattern */\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DATA_CORRUPTED),\n+ errmsg_internal(\"found tuple with HEAP_XMAX_COMMITTED and\"\n+ \"HEAP_XMAX_EXCL_LOCK set and \"\n+ \"HEAP_XMAX_IS_MULTI unset\")));\n+ }\n+\n\nI find this bit hard to understand. Does the comment mean to suggest that the *upgrade* process should have eliminated all pre-v9.3 bit patterns, and therefore any such existing patterns are certainly corruption, or does it mean that data written by pre-v9.3 servers (and not subsequently updated) is defined as corrupt, or .... 
?\n\nI am not complaining that the logic is wrong, just trying to wrap my head around what the comment means.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 21 Dec 2021 11:41:57 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "On 12/21/21, 11:42 AM, \"Mark Dilger\" <mark.dilger@enterprisedb.com> wrote:\r\n> + /* pre-v9.3 lock-only bit pattern */\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_DATA_CORRUPTED),\r\n> + errmsg_internal(\"found tuple with HEAP_XMAX_COMMITTED and\"\r\n> + \"HEAP_XMAX_EXCL_LOCK set and \"\r\n> + \"HEAP_XMAX_IS_MULTI unset\")));\r\n> + }\r\n> +\r\n>\r\n> I find this bit hard to understand. Does the comment mean to suggest that the *upgrade* process should have eliminated all pre-v9.3 bit patterns, and therefore any such existing patterns are certainly corruption, or does it mean that data written by pre-v9.3 servers (and not subsequently updated) is defined as corrupt, or .... ?\r\n>\r\n> I am not complaining that the logic is wrong, just trying to wrap my head around what the comment means.\r\n\r\nThis is just another way that a tuple may be marked locked-only, and\r\nwe want to explicitly disallow locked-only + xmax-committed. This bit\r\npattern may be present on servers that were pg_upgraded from pre-v9.3\r\nversions. 
See commits 0ac5ad5 and 74ebba8 for more detail.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 6 Jan 2022 21:55:25 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "I think this one requires some more work, and it needn't be a priority for\nv15, so I've adjusted the commitfest entry to v16 and moved it to the next\ncommitfest.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 17 Mar 2022 16:45:28 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "On Thu, Mar 17, 2022 at 04:45:28PM -0700, Nathan Bossart wrote:\n> I think this one requires some more work, and it needn't be a priority for\n> v15, so I've adjusted the commitfest entry to v16 and moved it to the next\n> commitfest.\n\nHere is a new patch. The main differences from v3 are in\nheapam_visibility.c. Specifically, instead of trying to work the infomask\nchecks into the visibility logic, I added a new function that does a couple\nof assertions. This function is called at the beginning of each visibility\nfunction.\n\nWhat do folks think? The options I've considered are 1) not adding any\nsuch checks to heapam_visibility.c, 2) only adding assertions like the\nattached patch, or 3) actually using elog(ERROR, ...) when the invalid bit\npatterns are detected. AFAICT (1) is more in line with existing invalid\nbit patterns (e.g., XMAX_COMMITTED + XMAX_IS_MULTI). There are a couple of\nscattered assertions, but most code paths don't check for it. (2) adds\nadditional checks, but only for --enable-cassert builds. 
(3) would add\nchecks even for non-assert builds, but there would presumably be some\nperformance cost involved.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 22 Mar 2022 16:06:40 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "Hi,\n\nOn 2022-03-22 16:06:40 -0700, Nathan Bossart wrote:\n> On Thu, Mar 17, 2022 at 04:45:28PM -0700, Nathan Bossart wrote:\n> > I think this one requires some more work, and it needn't be a priority for\n> > v15, so I've adjusted the commitfest entry to v16 and moved it to the next\n> > commitfest.\n> \n> Here is a new patch. The main differences from v3 are in\n> heapam_visibility.c. Specifically, instead of trying to work the infomask\n> checks into the visibility logic, I added a new function that does a couple\n> of assertions. This function is called at the beginning of each visibility\n> function.\n> \n> What do folks think? The options I've considered are 1) not adding any\n> such checks to heapam_visibility.c, 2) only adding assertions like the\n> attached patch, or 3) actually using elog(ERROR, ...) when the invalid bit\n> patterns are detected. AFAICT (1) is more in line with existing invalid\n> bit patterns (e.g., XMAX_COMMITTED + XMAX_IS_MULTI). There are a couple of\n> scattered assertions, but most code paths don't check for it. (2) adds\n> additional checks, but only for --enable-cassert builds. 
(3) would add\n> checks even for non-assert builds, but there would presumably be some\n> performance cost involved.\n\n> From 2d6b372cf61782e0fd52590b57b1c914b0ed7a4c Mon Sep 17 00:00:00 2001\n> From: Nathan Bossart <nathandbossart@gmail.com>\n> Date: Tue, 22 Mar 2022 15:35:34 -0700\n> Subject: [PATCH v4 1/1] disallow XMAX_COMMITTED + XMAX_LOCK_ONLY\n\nJust skimming this thread quickly, I really have no idea what this is trying\nto achieve and the commit message doesn't help either... I didn't read the\nreferenced thread, but I shouldn't have to, to get a basic idea.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 22 Mar 2022 16:13:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "On Tue, Mar 22, 2022 at 04:13:47PM -0700, Andres Freund wrote:\n> Just skimming this thread quickly, I really have no idea what this is trying\n> to achieve and the commit message doesn't help either... I didn't read the\n> referenced thread, but I shouldn't have to, to get a basic idea.\n\nAh, my bad. I should've made sure the context was carried over better. 
I\nupdated the commit message with some basic information about the intent.\nPlease let me know if there is anything else that needs to be cleared up.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 22 Mar 2022 17:26:37 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "Here is a rebased patch for cfbot.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 20 Sep 2022 11:32:02 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "On Tue, Sep 20, 2022 at 2:32 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> Here is a rebased patch for cfbot.\n>\n>\n>\nApplies, passes make check world.\n\nPatch is straightforward, but the previous code is less so. It purported to\nset XMAX_COMMITTED _or_ XMAX_INVALID, but never seemed to un-set\nXMAX_COMMITTED, was that the source of the double-setting?\n\nOn Tue, Sep 20, 2022 at 2:32 PM Nathan Bossart <nathandbossart@gmail.com> wrote:Here is a rebased patch for cfbot.\nApplies, passes make check world.Patch is straightforward, but the previous code is less so. It purported to set XMAX_COMMITTED _or_ XMAX_INVALID, but never seemed to un-set XMAX_COMMITTED, was that the source of the double-setting?", "msg_date": "Thu, 20 Oct 2022 20:16:56 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "Hi,\n\nOn 2022-09-20 11:32:02 -0700, Nathan Bossart wrote:\n> Note that this change also disallows XMAX_COMMITTED together with\n> the special pre-v9.3 locked-only bit pattern that\n> HEAP_XMAX_IS_LOCKED_ONLY checks for. 
This locked-only bit pattern\n> may still be present on servers pg_upgraded from pre-v9.3 versions.\n\nGiven that fact, that aspect at least seems to be not viable?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Feb 2023 06:59:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" }, { "msg_contents": "On Thu, Feb 02, 2023 at 06:59:51AM -0800, Andres Freund wrote:\n> On 2022-09-20 11:32:02 -0700, Nathan Bossart wrote:\n>> Note that this change also disallows XMAX_COMMITTED together with\n>> the special pre-v9.3 locked-only bit pattern that\n>> HEAP_XMAX_IS_LOCKED_ONLY checks for. This locked-only bit pattern\n>> may still be present on servers pg_upgraded from pre-v9.3 versions.\n> \n> Given that fact, that aspect at least seems to be not viable?\n\nAFAICT from looking at the v9.2 code, the same idea holds true for this\nspecial bit pattern. I only see HEAP_XMAX_INVALID set when one of the\ninfomask lock bits is set, and those bits now correspond to\nHEAP_XMAX_LOCK_ONLY and HEAP_XMAX_EXCL_LOCK (which are both covered by the\nHEAP_XMAX_IS_LOCKED_ONLY macro). Of course, I could be missing something.\nDo you think we should limit this to the v9.3+ bit pattern?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 2 Feb 2023 09:33:48 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)" } ]
[ { "msg_contents": "Hi,\n\nClang 13 on my machine and peripatus (but not Apple clang 13 on eg\nsifika, I'm still confused about Apple's versioning but I think that's\nreally llvm 12-based) warns:\n\ngeqo_main.c:86:8: warning: variable 'edge_failures' set but not used\n[-Wunused-but-set-variable]\n int edge_failures = 0;\n\nHere's one way to silence it.", "msg_date": "Wed, 24 Nov 2021 15:31:49 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Warning in geqo_main.c from clang 13" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Clang 13 on my machine and peripatus (but not Apple clang 13 on eg\n> sifika, I'm still confused about Apple's versioning but I think that's\n> really llvm 12-based) warns:\n> geqo_main.c:86:8: warning: variable 'edge_failures' set but not used\n> [-Wunused-but-set-variable]\n> int edge_failures = 0;\n\nYeah, I noticed that a week or two ago, but didn't see a simple fix.\n\n> Here's one way to silence it.\n\nI'm kind of inclined to just drop the edge_failures recording/logging\naltogether, rather than make that rats-nest of #ifdefs even worse.\nIt's not like anyone has cared about that number in the last decade\nor two.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Nov 2021 22:17:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Warning in geqo_main.c from clang 13" }, { "msg_contents": "I wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> Clang 13 on my machine and peripatus (but not Apple clang 13 on eg\n>> sifika, I'm still confused about Apple's versioning but I think that's\n>> really llvm 12-based) warns:\n>> geqo_main.c:86:8: warning: variable 'edge_failures' set but not used\n>> [-Wunused-but-set-variable]\n>> Here's one way to silence it.\n\n> I'm kind of inclined to just drop the edge_failures recording/logging\n> altogether, rather than make that rats-nest of #ifdefs even worse.\n> It's not like 
anyone has cared about that number in the last decade\n> or two.\n\nWe're starting to see more buildfarm animals producing this warning,\nso I took another look, and thought of a slightly less invasive way to\nsilence it. I confirmed this works with clang 13.0.0 on Fedora 35.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 22 Jan 2022 17:34:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Warning in geqo_main.c from clang 13" }, { "msg_contents": "On Sun, Jan 23, 2022 at 11:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> We're starting to see more buildfarm animals producing this warning,\n> so I took another look, and thought of a slightly less invasive way to\n> silence it. I confirmed this works with clang 13.0.0 on Fedora 35.\n\nLGTM. Tested on bleeding edge clang 14.\n\n\n", "msg_date": "Sun, 23 Jan 2022 13:47:35 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Warning in geqo_main.c from clang 13" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sun, Jan 23, 2022 at 11:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> We're starting to see more buildfarm animals producing this warning,\n>> so I took another look, and thought of a slightly less invasive way to\n>> silence it. I confirmed this works with clang 13.0.0 on Fedora 35.\n\n> LGTM. Tested on bleeding edge clang 14.\n\nPushed, thanks.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 23 Jan 2022 11:12:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Warning in geqo_main.c from clang 13" } ]
[ { "msg_contents": "According to [1], we need to stop including Python's <eval.h>.\nI've not checked whether this creates any backwards-compatibility\nissues.\n\n\t\t\tregards, tom lane\n\n[1] https://bugzilla.redhat.com/show_bug.cgi?id=2023272\n\n\n", "msg_date": "Tue, 23 Nov 2021 22:07:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Python 3.11 vs. Postgres" }, { "msg_contents": "On 24.11.21 04:07, Tom Lane wrote:\n> According to [1], we need to stop including Python's <eval.h>.\n> I've not checked whether this creates any backwards-compatibility\n> issues.\n> \n> \t\t\tregards, tom lane\n> \n> [1] https://bugzilla.redhat.com/show_bug.cgi?id=2023272\n\nSee attached patch. The minimum Python version for this change is 2.4, \nwhich is the oldest version supported by PG10, so we can backpatch this \nto all live branches.", "msg_date": "Wed, 24 Nov 2021 08:24:52 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Python 3.11 vs. Postgres" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 24.11.21 04:07, Tom Lane wrote:\n>> According to [1], we need to stop including Python's <eval.h>.\n\n> See attached patch. The minimum Python version for this change is 2.4, \n> which is the oldest version supported by PG10, so we can backpatch this \n> to all live branches.\n\nLGTM. Tested with v10 and prairiedog's Python 2.4.1.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Nov 2021 12:39:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Python 3.11 vs. Postgres" } ]
[ { "msg_contents": "Attached patch performs polishing within vacuumlazy.c, as follow-up\nwork to the refactoring work in Postgres 14. This mainly consists of\nchanging references of dead tuples to dead items, which reflects the\nfact that VACUUM no longer deals with TIDs that might point to\nremaining heap tuples with storage -- the TIDs in the array must now\nstrictly point to LP_DEAD stub line pointers that remain in the heap,\nfollowing pruning.\n\nI've also simplified header comments, and comments above the main\nentry point functions. These comments made much more sense back when\nlazy_scan_heap() was simpler, but wasn't yet broken up into smaller,\nbetter-scoped functions.\n\nIf there are no objections, I'll move on this soon. It's mostly just\nmechanical changes.\n\n-- \nPeter Geoghegan", "msg_date": "Tue, 23 Nov 2021 21:45:48 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Rename dead_tuples to dead_items in vacuumlazy.c" }, { "msg_contents": "On Wed, Nov 24, 2021 at 11:16 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> Attached patch performs polishing within vacuumlazy.c, as follow-up\n> work to the refactoring work in Postgres 14. This mainly consists of\n> changing references of dead tuples to dead items, which reflects the\n> fact that VACUUM no longer deals with TIDs that might point to\n> remaining heap tuples with storage -- the TIDs in the array must now\n> strictly point to LP_DEAD stub line pointers that remain in the heap,\n> following pruning.\n>\n> I've also simplified header comments, and comments above the main\n> entry point functions. These comments made much more sense back when\n> lazy_scan_heap() was simpler, but wasn't yet broken up into smaller,\n> better-scoped functions.\n>\n> If there are no objections, I'll move on this soon. 
It's mostly just\n> mechanical changes.\n\n-#define PROGRESS_VACUUM_NUM_DEAD_TUPLES 6\n+#define PROGRESS_VACUUM_MAX_DEAD_ITEMS 5\n+#define PROGRESS_VACUUM_NUM_DEAD_ITEMS 6\n\nWouldn't this be more logical to change to DEAD_TIDS instead of DEAD_ITEMS?\n\n+ /* Sorted list of TIDs to delete from indexes */\n+ ItemPointerData dead[FLEXIBLE_ARRAY_MEMBER];\n\nInstead of just dead, why not deadtid or deaditem?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Nov 2021 11:28:55 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rename dead_tuples to dead_items in vacuumlazy.c" }, { "msg_contents": "On Wed, Nov 24, 2021 at 2:46 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> Attached patch performs polishing within vacuumlazy.c, as follow-up\n> work to the refactoring work in Postgres 14. This mainly consists of\n> changing references of dead tuples to dead items, which reflects the\n> fact that VACUUM no longer deals with TIDs that might point to\n> remaining heap tuples with storage -- the TIDs in the array must now\n> strictly point to LP_DEAD stub line pointers that remain in the heap,\n> following pruning.\n\n+1\n\n> If there are no objections, I'll move on this soon. It's mostly just\n> mechanical changes.\n\nThe patch renames dead tuples to dead items at some places and to\ndead TIDs at some places. For instance, it renames dead tuples to dead\nTIDs here:\n\n- * Return the maximum number of dead tuples we can record.\n+ * Computes the number of dead TIDs that VACUUM will have to store in the\n+ * worst case, where all line pointers are allocated, and all are LP_DEAD\n\nwhereas renames to dead items here:\n\n- * extra cost of bsearch(), especially if dead tuples on the heap are\n+ * extra cost of bsearch(), especially if dead items on the heap are\n\nI think it's more consistent if we change it to one side. 
I prefer \"dead items\".\n\n---\nThere is one more place where we can rename \"dead tuples\":\n\n /*\n * Allocate the space for dead tuples. Note that this handles parallel\n * VACUUM initialization as part of allocating shared memory space used\n * for dead_items.\n */\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 24 Nov 2021 21:48:01 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rename dead_tuples to dead_items in vacuumlazy.c" }, { "msg_contents": "On Wed, Nov 24, 2021 at 7:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I think it's more consistent if we change it to one side. I prefer \"dead items\".\n\nI feel like \"items\" is quite a generic word, so I think I would prefer\nTIDs. But it's probably not a big deal.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Nov 2021 08:49:15 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rename dead_tuples to dead_items in vacuumlazy.c" }, { "msg_contents": "On 2021-Nov-24, Robert Haas wrote:\n\n> On Wed, Nov 24, 2021 at 7:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I think it's more consistent if we change it to one side. I prefer\n> > \"dead items\".\n> \n> I feel like \"items\" is quite a generic word, so I think I would prefer\n> TIDs. But it's probably not a big deal.\n\nIs there clarity on what each term means? \n\nSince this patch only changes things that are specific to heap\nvacuuming, it seems OK to rely the convention that \"item\" means \"heap\nitem\" (not just any generic item). However, I'm not sure that we fully\nagree exactly what a heap item is. 
Maybe if we agree to a single non\nambiguous definition for each of those terms we can agree what\nterminology to use.\n\nIt seems to me we have the following terms:\n\n- tuple\n- line pointer\n- [heap] item\n- TID\n\nMy mental model is that \"tuple\" (in the narrow context of heap vacuum)\nis the variable-size on-disk representation of a row in a page; \"line\npointer\" is the fixed-size struct at the bottom of each page that\ncontains location, size and flags of a tuple: struct ItemIdData. The\nTID is the address of a line pointer -- an ItemPointerData.\n\nWhat is an item? Is an item the same as a line pointer? That seems\nconfusing. I think \"item\" means the tuple as a whole. In that light,\nusing the term TID for some of the things that the patch renames to\n\"item\" seems more appropriate.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 24 Nov 2021 11:37:05 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Rename dead_tuples to dead_items in vacuumlazy.c" }, { "msg_contents": "On Wed, Nov 24, 2021 at 9:37 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> My mental model is that \"tuple\" (in the narrow context of heap vacuum)\n> is the variable-size on-disk representation of a row in a page; \"line\n> pointer\" is the fixed-size struct at the bottom of each page that\n> contains location, size and flags of a tuple: struct ItemIdData. The\n> TID is the address of a line pointer -- an ItemPointerData.\n>\n> What is an item? Is an item the same as a line pointer? That seems\n> confusing. I think \"item\" means the tuple as a whole. In that light,\n> using the term TID for some of the things that the patch renames to\n> \"item\" seems more appropriate.\n\nHmm. I think in my model an item and an item pointer and a line\npointer are all the same thing, but a TID is different. 
When I talk\nabout a TID, I mean the location of an item pointer, not its contents.\nSo a TID is what tells me that I want block 5 and the 4th slot in the\nitem pointer array. The item pointer tells me that the associate tuple\nis at a certain position in the page and has a certain length.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Nov 2021 09:45:56 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rename dead_tuples to dead_items in vacuumlazy.c" }, { "msg_contents": "On 2021-Nov-24, Robert Haas wrote:\n\n> Hmm. I think in my model an item and an item pointer and a line\n> pointer are all the same thing, but a TID is different. When I talk\n> about a TID, I mean the location of an item pointer, not its contents.\n> So a TID is what tells me that I want block 5 and the 4th slot in the\n> item pointer array. The item pointer tells me that the associate tuple\n> is at a certain position in the page and has a certain length.\n\nOK, but you can have item pointers that don't have any item.\nLP_REDIRECT, LP_DEAD, LP_UNUSED item pointers don't have items.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Nunca confiaré en un traidor. Ni siquiera si el traidor lo he creado yo\"\n(Barón Vladimir Harkonnen)\n\n\n", "msg_date": "Wed, 24 Nov 2021 11:51:12 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Rename dead_tuples to dead_items in vacuumlazy.c" }, { "msg_contents": "On 2021-Nov-24, Alvaro Herrera wrote:\n\n> On 2021-Nov-24, Robert Haas wrote:\n> \n> > Hmm. I think in my model an item and an item pointer and a line\n> > pointer are all the same thing, but a TID is different. When I talk\n> > about a TID, I mean the location of an item pointer, not its contents.\n> > So a TID is what tells me that I want block 5 and the 4th slot in the\n> > item pointer array. 
The item pointer tells me that the associate tuple\n> > is at a certain position in the page and has a certain length.\n> \n> OK, but you can have item pointers that don't have any item.\n> LP_REDIRECT, LP_DEAD, LP_UNUSED item pointers don't have items.\n\nSorry to reply to myself, but I realized that I forgot to return to the\nmain point of this thread. If we agree that \"an LP_DEAD item pointer\ndoes not point to any item\" (an assertion that gives a precise meaning\nto both those terms), then a patch that renames \"tuples\" to \"items\" is\nnot doing anything useful IMO, because those two terms are synonyms.\n\nNow maybe Peter doesn't agree with the definitions I suggest, in which \ncase I would like to know what his definitions are.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"How strange it is to find the words \"Perl\" and \"saner\" in such close\nproximity, with no apparent sense of irony. I doubt that Larry himself\ncould have managed it.\" (ncm, http://lwn.net/Articles/174769/)\n\n\n", "msg_date": "Wed, 24 Nov 2021 12:16:40 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Rename dead_tuples to dead_items in vacuumlazy.c" }, { "msg_contents": "On Wed, Nov 24, 2021 at 9:51 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-Nov-24, Robert Haas wrote:\n> > Hmm. I think in my model an item and an item pointer and a line\n> > pointer are all the same thing, but a TID is different. When I talk\n> > about a TID, I mean the location of an item pointer, not its contents.\n> > So a TID is what tells me that I want block 5 and the 4th slot in the\n> > item pointer array. The item pointer tells me that the associate tuple\n> > is at a certain position in the page and has a certain length.\n>\n> OK, but you can have item pointers that don't have any item.\n> LP_REDIRECT, LP_DEAD, LP_UNUSED item pointers don't have items.\n\nI guess so. 
I said before that I thought an item and an item pointer\nwere the same, but on reflection, that doesn't entirely make sense.\nBut I don't know that I like making item and tuple synonymous either.\nI think perhaps the term \"item\" by itself is not very clear.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Nov 2021 10:50:13 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rename dead_tuples to dead_items in vacuumlazy.c" }, { "msg_contents": "On Wed, Nov 24, 2021 at 7:16 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Sorry to reply to myself, but I realized that I forgot to return to the\n> main point of this thread. If we agree that \"an LP_DEAD item pointer\n> does not point to any item\" (an assertion that gives a precise meaning\n> to both those terms), then a patch that renames \"tuples\" to \"items\" is\n> not doing anything useful IMO, because those two terms are synonyms.\n\nTIDs (ItemPointerData structs) are of course not the same thing as\nline pointers (ItemIdData structs). There is a tendency to refer to\nthe latter as \"item pointers\" all the same, which was confusing. I\npersonally corrected/normalized this in commit ae7291ac in 2019. I\nthink that it's worth being careful about precisely because they're\nclosely related (but distinct) concepts. And so FWIW \"LP_DEAD item\npointer\" is not a thing. I agree that an LP_DEAD item pointer has no\ntuple storage, and so you could say that it points to nothing (though\nonly in heapam). 
I probably would just say that it has no tuple\nstorage, though.\n\n> Now maybe Peter doesn't agree with the definitions I suggest, in which\n> case I would like to know what his definitions are.\n\nI agree with others that the term \"item\" is vague, but I don't think\nthat that's necessarily a bad thing here -- I deliberately changed the\ncomments to say either \"TIDs\" or \"LP_DEAD items\", emphasizing whatever\nthe important aspect seemed to be in each context (they're LP_DEAD\nitems to the heap structure, TIDs to index structures).\n\nI'm not attached to the term \"item\". To me the truly important point\nis what these items are *not*: they're not tuples. The renaming is\nintended to enforce the concepts that I went into at the end of the\ncommit message for commit 8523492d. Now the pruning steps in\nlazy_scan_prune always avoiding keeping around a DEAD tuple with tuple\nstorage on return to lazy_scan_heap (only LP_DEAD items can remain),\nsince (as of that commit) lazy_scan_prune alone is responsible for\nthings involving the \"logical database\".\n\nThis means that index vacuuming and heap vacuuming can now be thought\nof as removing garbage items from physical data structures (they're\npurely \"physical database\" concepts), and nothing else. They don't\nneed recovery conflicts. How could they? Where are you supposed to get\nthe XIDs for that from, when you've only got LP_DEAD items?\n\nThis is also related to the idea that pruning by VACUUM isn't\nnecessarily all that special compared to earlier pruning or concurrent\nopportunistic pruning. 
As I go into on the other recent thread on\nremoving special cases in vacuumlazy.c, ISTM that we ought to do\neverything except pruning itself (and freezing tuples, which\neffectively depends on pruning) without even acquiring a cleanup lock.\nWhich is actually quite a lot of things.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 24 Nov 2021 09:06:58 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Rename dead_tuples to dead_items in vacuumlazy.c" }, { "msg_contents": "On 2021-Nov-24, Peter Geoghegan wrote:\n\n> TIDs (ItemPointerData structs) are of course not the same thing as\n> line pointers (ItemIdData structs). There is a tendency to refer to\n> the latter as \"item pointers\" all the same, which was confusing. I\n> personally corrected/normalized this in commit ae7291ac in 2019. I\n> think that it's worth being careful about precisely because they're\n> closely related (but distinct) concepts. And so FWIW \"LP_DEAD item\n> pointer\" is not a thing. I agree that an LP_DEAD item pointer has no\n> tuple storage, and so you could say that it points to nothing (though\n> only in heapam). I probably would just say that it has no tuple\n> storage, though.\n\nOK, this makes a lot more sense. I wasn't aware of ae7291ac (and I\nwasn't aware of the significance of 8523492d either, but that's not\nreally relevant here.)\n\n> I agree with others that the term \"item\" is vague, but I don't think\n> that that's necessarily a bad thing here -- I deliberately changed the\n> comments to say either \"TIDs\" or \"LP_DEAD items\", emphasizing whatever\n> the important aspect seemed to be in each context (they're LP_DEAD\n> items to the heap structure, TIDs to index structures).\n\nI think we could say \"LP_DEAD line pointer\" and that would be perfectly\nclear. 
Given how nuanced we have to be if we want to be clear about\nthis, I would rather not use \"LP_DEAD item\"; that seems slightly\ncontradictory, since the item is the storage and such a line pointer\ndoes not have storage. Perhaps change that define in progress.h to\nPROGRESS_VACUUM_NUM_DEAD_LPS, and, in the first comment in vacuumlazy.c,\nuse wording such as\n\n+ * The major space usage for LAZY VACUUM is storage for the array of TIDs\n+ * of dead line pointers that are to be removed from indexes.\n\nor\n\n+ * The major space usage for LAZY VACUUM is storage for the array of TIDs\n+ * of LP_DEAD line pointers that are to be removed from indexes.\n\n(The point being that TIDs are not dead themselves, only the line\npointers that they refer to.)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Most hackers will be perfectly comfortable conceptualizing users as entropy\n sources, so let's move on.\" (Nathaniel Smith)\n\n\n", "msg_date": "Wed, 24 Nov 2021 14:53:02 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Rename dead_tuples to dead_items in vacuumlazy.c" }, { "msg_contents": "On Wed, Nov 24, 2021 at 9:53 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> OK, this makes a lot more sense. I wasn't aware of ae7291ac (and I\n> wasn't aware of the significance of 8523492d either, but that's not\n> really relevant here.)\n\nThanks for hearing me out about the significance of 8523492d.\n\nHaving the right formalisms seems to really matter here, because they\nenable decoupling, which is generally very useful. This makes it easy\nto understand (just for example) that index vacuuming and heap\nvacuuming are just additive, optional steps (in principle) -- an idea\nthat will become even more important once we get Robert's pending TID\nconveyor belt design. 
I believe that that design goes one step further\nthan what we have today, by making index vacuuming and heap vacuuming\noccur in a distinct operation to VACUUM proper (VACUUM would only need\nto set up the LP_DEAD item list for index vacuuming and heap\nvacuuming, which may or may not happen immediately after).\n\nAn interesting question (at least to me) is: within a non-aggressive\nVACUUM, what remaining steps are *not* technically optional?\n\nI am pretty sure that they're all optional in principle (or will be\nsoon), because soon we will be able to independently advance\nrelfrozenxid without freezing all tuples with XIDs before our original\nFreezeLimit (FreezeLimit should only be used to decide which tuples to\nfreeze, not to decide on a new relfrozenxid). Soon almost everything\nwill be decoupled, without changing the basic invariants that we've\nhad for many years. This flexibility seems really important to me.\n\nThat just leaves avoiding pruning without necessarily avoiding\nostensibly related processing for indexes. We can already\nindependently prune without doing index/heap vacuuming (the bypass\nindexes optimization). We will also be able to do the opposite thing,\nwith my new patch: we can perform index/heap vacuuming *without*\npruning ourselves. This makes sense in the case where we cannot\nacquire a cleanup lock on a heap page with preexisting LP_DEAD items.\n\n> I think we could say \"LP_DEAD line pointer\" and that would be perfectly\n> clear. Given how nuanced we have to be if we want to be clear about\n> this, I would rather not use \"LP_DEAD item\"; that seems slightly\n> contradictory, since the item is the storage and such a line pointer\n> does not have storage. Perhaps change that define in progress.h to\n> PROGRESS_VACUUM_NUM_DEAD_LPS, and, in the first comment in vacuumlazy.c,\n> use wording such as\n\nI agree with all that, I think. 
But it's still not clear what the\nvariable dead_tuples should be renamed to within the structure that\nyou lay out (I imagine that you agree with me that dead_tuples is now\na bad name). This one detail affects more individual lines of code\nthan the restructuring of comments.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 24 Nov 2021 10:41:40 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Rename dead_tuples to dead_items in vacuumlazy.c" }, { "msg_contents": "On Wed, Nov 24, 2021 at 4:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> The patch renames dead tuples to dead items at some places and to\n> dead TIDs at some places.\n\n> I think it's more consistent if we change it to one side. I prefer \"dead items\".\n\nI just pushed a version of the patch that still uses both terms when\ntalking about dead_items. But the final commit actually makes it clear\nwhy, in comments above the LVDeadItems struct itself: LVDeadItems is\nused by both index vacuuming and heap vacuuming.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 29 Nov 2021 09:59:51 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Rename dead_tuples to dead_items in vacuumlazy.c" }, { "msg_contents": "On Tue, Nov 30, 2021 at 3:00 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Nov 24, 2021 at 4:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > The patch renames dead tuples to dead items at some places and to\n> > dead TIDs at some places.\n>\n> > I think it's more consistent if we change it to one side. I prefer \"dead items\".\n>\n> I just pushed a version of the patch that still uses both terms when\n> talking about dead_items.\n\nThanks! I'll change my parallel vacuum refactoring patch accordingly.\n\nRegarding the commit, I think that there still is one place in\nlazyvacuum.c where we can change \"dead tuples” to \"dead items”:\n\n /*\n * Allocate the space for dead tuples. 
Note that this handles parallel\n * VACUUM initialization as part of allocating shared memory space used\n * for dead_items.\n */\n dead_items_alloc(vacrel, params->nworkers);\n dead_items = vacrel->dead_items;\n\nAlso, the commit doesn't change both PROGRESS_VACUUM_MAX_DEAD_TUPLES\nand PROGRESS_VACUUM_NUM_DEAD_TUPLES. Did you leave them on purpose?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 30 Nov 2021 12:00:19 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rename dead_tuples to dead_items in vacuumlazy.c" }, { "msg_contents": "On Mon, Nov 29, 2021 at 7:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Thanks! I'll change my parallel vacuum refactoring patch accordingly.\n\nThanks again for working on that.\n\n> Regarding the commit, I think that there still is one place in\n> lazyvacuum.c where we can change \"dead tuples” to \"dead items”:\n>\n> /*\n> * Allocate the space for dead tuples. Note that this handles parallel\n> * VACUUM initialization as part of allocating shared memory space used\n> * for dead_items.\n> */\n> dead_items_alloc(vacrel, params->nworkers);\n> dead_items = vacrel->dead_items;\n\nOops. Pushed a fixup for that just now.\n\n> Also, the commit doesn't change both PROGRESS_VACUUM_MAX_DEAD_TUPLES\n> and PROGRESS_VACUUM_NUM_DEAD_TUPLES. Did you leave them on purpose?\n\nThat was deliberate.\n\nIt would be a bit strange to alter these constants without also\nupdating the corresponding column names for the\npg_stat_progress_vacuum system view. But if I kept the definition from\nsystem_views.sql in sync, then I would break user scripts -- for\nreasons that users don't care about. That didn't seem like the right\napproach.\n\nAlso, the system as a whole still assumes \"DEAD tuples and LP_DEAD\nitems are the same, and are just as much of a problem in the table as\nthey are in each index\". 
As you know, this is not really true, which\nis an important problem for us. Fixing it (perhaps as part of adding\nsomething like Robert's conveyor belt design) will likely require\nrevising this model quite fundamentally (e.g, the vacthresh\ncalculation in autovacuum.c:relation_needs_vacanalyze() would be\nreplaced). When this happens, we'll probably need to update system\nviews that have columns with names like \"dead_tuples\" -- because maybe\nwe no longer specifically count dead items/tuples at all. I strongly\nsuspect that the approach to statistics that we take for pg_statistic\noptimizer stats just doesn't work for dead items/tuples -- statistical\nsampling only produces useful statistics for the optimizer because\ncertain delicate assumptions are met (even these assumptions only\nreally work with a properly normalized database schema).\n\nMaybe revising the model used for autovacuum scheduling wouldn't\ninclude changing pg_stat_progress_vacuum, since that isn't technically\n\"part of the model\" --- I'm not sure. But it's not something that I am\nin a hurry to fix.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 30 Nov 2021 11:41:43 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Rename dead_tuples to dead_items in vacuumlazy.c" }, { "msg_contents": "On Wed, Dec 1, 2021 at 4:42 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Nov 29, 2021 at 7:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Thanks! I'll change my parallel vacuum refactoring patch accordingly.\n>\n> Thanks again for working on that.\n>\n> > Regarding the commit, I think that there still is one place in\n> > lazyvacuum.c where we can change \"dead tuples” to \"dead items”:\n> >\n> > /*\n> > * Allocate the space for dead tuples. 
Note that this handles parallel\n> > * VACUUM initialization as part of allocating shared memory space used\n> > * for dead_items.\n> > */\n> > dead_items_alloc(vacrel, params->nworkers);\n> > dead_items = vacrel->dead_items;\n>\n> Oops. Pushed a fixup for that just now.\n\nThanks!\n\n>\n> > Also, the commit doesn't change both PROGRESS_VACUUM_MAX_DEAD_TUPLES\n> > and PROGRESS_VACUUM_NUM_DEAD_TUPLES. Did you leave them on purpose?\n>\n> That was deliberate.\n>\n> It would be a bit strange to alter these constants without also\n> updating the corresponding column names for the\n> pg_stat_progress_vacuum system view. But if I kept the definition from\n> system_views.sql in sync, then I would break user scripts -- for\n> reasons that users don't care about. That didn't seem like the right\n> approach.\n\nAgreed.\n\n>\n> Also, the system as a whole still assumes \"DEAD tuples and LP_DEAD\n> items are the same, and are just as much of a problem in the table as\n> they are in each index\". As you know, this is not really true, which\n> is an important problem for us. Fixing it (perhaps as part of adding\n> something like Robert's conveyor belt design) will likely require\n> revising this model quite fundamentally (e.g, the vacthresh\n> calculation in autovacuum.c:relation_needs_vacanalyze() would be\n> replaced). When this happens, we'll probably need to update system\n> views that have columns with names like \"dead_tuples\" -- because maybe\n> we no longer specifically count dead items/tuples at all. 
I strongly\n> suspect that the approach to statistics that we take for pg_statistic\n> optimizer stats just doesn't work for dead items/tuples -- statistical\n> sampling only produces useful statistics for the optimizer because\n> certain delicate assumptions are met (even these assumptions only\n> really work with a properly normalized database schema).\n>\n> Maybe revising the model used for autovacuum scheduling wouldn't\n> include changing pg_stat_progress_vacuum, since that isn't technically\n> \"part of the model\" --- I'm not sure. But it's not something that I am\n> in a hurry to fix.\n\nUnderstood.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 1 Dec 2021 14:34:39 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rename dead_tuples to dead_items in vacuumlazy.c" } ]
[ { "msg_contents": "Hi, \n\nI think I found a problem related to replica identity. According to PG doc at [1], replica identity includes only columns marked NOT NULL. \nBut in fact users can accidentally break this rule as follows:\n\ncreate table tbl (a int not null unique);\nalter table tbl replica identity using INDEX tbl_a_key;\nalter table tbl alter column a drop not null;\ninsert into tbl values (null);\n\nAs a result, some operations on newly added null value will cause unexpected failure as below:\n\npostgres=# delete from tbl;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nIn the log, I can also see an assertion failure when deleting null value:\nTRAP: FailedAssertion(\"!nulls[i]\", File: \"heapam.c\", Line: 8439, PID: 274656)\n\nTo solve the above problem, I think it's better to add a check when executing ALTER COLUMN DROP NOT NULL,\nand report an error if this column is part of replica identity.\n\nAttaching a patch that disallow DROP NOT NULL on a column if it's in a REPLICA IDENTITY index. Also added a test in it.\nThanks Hou for helping me write/review this patch.\n\nBy the way, replica identity was introduced in PG9.4, so this problem exists in\nall supported versions.\n\n[1] https://www.postgresql.org/docs/current/sql-altertable.html\n\nRegards,\nTang", "msg_date": "Wed, 24 Nov 2021 07:04:51 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "[BUG]Missing REPLICA IDENTITY check when DROP NOT NULL" }, { "msg_contents": "On Wed, Nov 24, 2021 at 07:04:51AM +0000, tanghy.fnst@fujitsu.com wrote:\n> create table tbl (a int not null unique);\n> alter table tbl replica identity using INDEX tbl_a_key;\n> alter table tbl alter column a drop not null;\n> insert into tbl values (null);\n\nOops. 
Yes, that's obviously not good.\n\n> To solve the above problem, I think it's better to add a check when\n> executing ALTER COLUMN DROP NOT NULL,\n> and report an error if this column is part of replica identity.\n\nI'd say that you are right to block the operation. I'll try to play a\nbit with this stuff tomorrow.\n\n> Attaching a patch that disallow DROP NOT NULL on a column if it's in\n> a REPLICA IDENTITY index. Also added a test in it. \n\n if (indexStruct->indkey.values[i] == attnum)\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n- errmsg(\"column \\\"%s\\\" is in a primary key\",\n+ errmsg(ngettext(\"column \\\"%s\\\" is in a primary key\",\n+ \"column \\\"%s\\\" is in a REPLICA IDENTITY index\",\n+ indexStruct->indisprimary),\n colName)));\nUsing ngettext() looks incorrect to me here as it is used to get the\nplural form of a string, so you'd better make these completely\nseparated instead.\n--\nMichael", "msg_date": "Wed, 24 Nov 2021 20:28:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG]Missing REPLICA IDENTITY check when DROP NOT NULL" }, { "msg_contents": "On Wed, Nov 24, 2021 7:29 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Nov 24, 2021 at 07:04:51AM +0000, tanghy.fnst@fujitsu.com wrote:\n> > create table tbl (a int not null unique);\n> > alter table tbl replica identity using INDEX tbl_a_key;\n> > alter table tbl alter column a drop not null;\n> > insert into tbl values (null);\n> \n> Oops. Yes, that's obviously not good.\n> \n> > To solve the above problem, I think it's better to add a check when\n> > executing ALTER COLUMN DROP NOT NULL,\n> > and report an error if this column is part of replica identity.\n> \n> I'd say that you are right to block the operation. I'll try to play a\n> bit with this stuff tomorrow.\n> \n> > Attaching a patch that disallow DROP NOT NULL on a column if it's in\n> > a REPLICA IDENTITY index. 
Also added a test in it.\n> \n> if (indexStruct->indkey.values[i] == attnum)\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> - errmsg(\"column \\\"%s\\\" is in a primary key\",\n> + errmsg(ngettext(\"column \\\"%s\\\" is in a primary key\",\n> + \"column \\\"%s\\\" is in a REPLICA IDENTITY index\",\n> + indexStruct->indisprimary),\n> colName)));\n> Using ngettext() looks incorrect to me here as it is used to get the\n> plural form of a string, so you'd better make these completely\n> separated instead.\n\nThanks for your comment. I agree with you.\nI have fixed it and attached v2 patch.\n\nRegards,\nTang", "msg_date": "Thu, 25 Nov 2021 02:51:24 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [BUG]Missing REPLICA IDENTITY check when DROP NOT NULL" }, { "msg_contents": "On Thu, Nov 25, 2021 at 8:21 AM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Wed, Nov 24, 2021 7:29 PM, Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n> Thanks for your comment. I agree with you.\n> I have fixed it and attached v2 patch.\n\nGood catch, your patch looks fine to me and solves the reported problem.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Nov 2021 10:44:53 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG]Missing REPLICA IDENTITY check when DROP NOT NULL" }, { "msg_contents": "On Thu, Nov 25, 2021 at 10:44:53AM +0530, Dilip Kumar wrote:\n> Good catch, your patch looks fine to me and solves the reported problem.\n\nAnd applied down to 10. A couple of comments in the same area did not\nget the call, though.\n--\nMichael", "msg_date": "Thu, 25 Nov 2021 15:46:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG]Missing REPLICA IDENTITY check when DROP NOT NULL" } ]
[ { "msg_contents": "\nHi,\n\nWhen I read the documentation about Overview of PostgreSQL Internals - Executor [1],\nI find there is a missing space in the documentation.\n\ndiff --git a/doc/src/sgml/arch-dev.sgml b/doc/src/sgml/arch-dev.sgml\nindex 7aff059e82..c2be28fac8 100644\n--- a/doc/src/sgml/arch-dev.sgml\n+++ b/doc/src/sgml/arch-dev.sgml\n@@ -559,7 +559,7 @@\n A simple <command>INSERT ... VALUES</command> command creates a\n trivial plan tree consisting of a single <literal>Result</literal>\n node, which computes just one result row, feeding that up\n- to<literal>ModifyTable</literal> to perform the insertion.\n+ to <literal>ModifyTable</literal> to perform the insertion.\n </para>\n \n </sect1>\n\n\n[1] https://www.postgresql.org/docs/14/executor.html\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 24 Nov 2021 22:49:01 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Minor documentation fix - missing blank space" }, { "msg_contents": "On 24/11/2021 16:49, Japin Li wrote:\n> When I read the documentation about Overview of PostgreSQL Internals - Executor [1],\n> I find there is a missing space in the documentation.\n> \n> diff --git a/doc/src/sgml/arch-dev.sgml b/doc/src/sgml/arch-dev.sgml\n> index 7aff059e82..c2be28fac8 100644\n> --- a/doc/src/sgml/arch-dev.sgml\n> +++ b/doc/src/sgml/arch-dev.sgml\n> @@ -559,7 +559,7 @@\n> A simple <command>INSERT ... 
VALUES</command> command creates a\n> trivial plan tree consisting of a single <literal>Result</literal>\n> node, which computes just one result row, feeding that up\n> - to<literal>ModifyTable</literal> to perform the insertion.\n> + to <literal>ModifyTable</literal> to perform the insertion.\n> </para>\n> \n> </sect1>\n\nApplied, thanks!\n\n- Heikki\n\n\n\n", "msg_date": "Wed, 24 Nov 2021 18:38:53 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Minor documentation fix - missing blank space" } ]
[ { "msg_contents": "xlog.c: Remove global variables ReadRecPtr and EndRecPtr.\n\nIn most places, the variables necessarily store the same value as the\neponymous members of the XLogReaderState that we use during WAL\nreplay, because ReadRecord() assigns the values from the structure\nmembers to the global variables just after XLogReadRecord() returns.\nHowever, XLogBeginRead() adjusts the structure members but not the\nglobal variables, so after XLogBeginRead() and before the completion\nof XLogReadRecord() the values can differ. Otherwise, they must be\nidentical. According to my analysis, the only place where either\nvariable is referenced at a point where it might not have the same\nvalue as the structure member is the reference to EndRecPtr within\nXLogPageRead.\n\nTherefore, at every other place where we are using the global\nvariable, we can just switch to using the structure member instead,\nand remove the global variable. However, we can, and in fact should,\ndo this in XLogPageRead() as well, because at that point in the code,\nthe global variable will actually store the start of the record we\nwant to read - either because it's where the last WAL record ended, or\nbecause the read position has been changed using XLogBeginRead since\nthe last record was read. The structure member, on the other hand,\nwill already have been updated to point to the end of the record we\njust read. Elsewhere, the latter is what we use as an argument to\nemode_for_corrupt_record(), so we should do the same here.\n\nThis part of the patch is perhaps a bug fix, but I don't think it has\nany important consequences, so no back-patch. 
The point here is just\nto continue to whittle down the entirely excessive use of global\nvariables in xlog.c.\n\nDiscussion: http://postgr.es/m/CA+Tgmoao96EuNeSPd+hspRKcsCddu=b1h-QNRuKfY8VmfNQdfg@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/d2ddfa681db27a138acb63c8defa8cc6fa588922\n\nModified Files\n--------------\nsrc/backend/access/transam/xlog.c | 53 ++++++++++++++++++---------------------\n1 file changed, 24 insertions(+), 29 deletions(-)", "msg_date": "Wed, 24 Nov 2021 16:27:58 +0000", "msg_from": "Robert Haas <rhaas@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: xlog.c: Remove global variables ReadRecPtr and EndRecPtr." }, { "msg_contents": "On Wed, Nov 24, 2021 at 11:28 AM Robert Haas <rhaas@postgresql.org> wrote:\n> xlog.c: Remove global variables ReadRecPtr and EndRecPtr.\n> https://git.postgresql.org/pg/commitdiff/d2ddfa681db27a138acb63c8defa8cc6fa588922\n\nThere is a buildfarm failure on lapwing which looks likely to be\nattributable to this commit, but I don't understand what has gone\nwrong exactly. The error message that is reported is:\n\n[17:07:19] t/025_stuck_on_old_timeline.pl ....... ok 6048 ms\n# poll_query_until timed out executing this query:\n# SELECT '0/201E8E0'::pg_lsn <= pg_last_wal_replay_lsn()\n# expecting this output:\n# t\n# last actual query output:\n#\n# with stderr:\n# psql: error: connection to server on socket\n\"/tmp/SrZekgRh7K/.s.PGSQL.57959\" failed: No such file or directory\n# Is the server running locally and accepting connections on that socket?\n\nThis output confused me for a while because I thought the failure was\nin test 025, but I should have realized that the problem must be in\n026, since 025 finished with \"ok\". 
Anyway, in\n026_overwrite_contrecord_standby.log there's this:\n\n2021-11-24 17:07:41.388 UTC [1473:1] LOG: started streaming WAL from\nprimary at 0/2000000 on timeline 1\n2021-11-24 17:07:41.428 UTC [1449:6] FATAL: mismatching overwritten\nLSN 0/1FFE014 -> 0/1FFE000\n2021-11-24 17:07:41.428 UTC [1449:7] CONTEXT: WAL redo at 0/2000024\nfor XLOG/OVERWRITE_CONTRECORD: lsn 0/1FFE014; time 2021-11-24\n17:07:41.127049+00\n\nBeyond the fact that something bad has happened, I'm not sure what\nthat is trying to tell me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Nov 2021 13:16:37 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: xlog.c: Remove global variables ReadRecPtr and EndRecPtr." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> 2021-11-24 17:07:41.428 UTC [1449:6] FATAL: mismatching overwritten\n> LSN 0/1FFE014 -> 0/1FFE000\n> 2021-11-24 17:07:41.428 UTC [1449:7] CONTEXT: WAL redo at 0/2000024\n> for XLOG/OVERWRITE_CONTRECORD: lsn 0/1FFE014; time 2021-11-24\n> 17:07:41.127049+00\n\n> Beyond the fact that something bad has happened, I'm not sure what\n> that is trying to tell me.\n\nPre-existing issue:\n\nhttps://www.postgresql.org/message-id/45597.1637694259%40sss.pgh.pa.us\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Nov 2021 13:46:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: xlog.c: Remove global variables ReadRecPtr and EndRecPtr." }, { "msg_contents": "On 2021-Nov-24, Robert Haas wrote:\n\n> On Wed, Nov 24, 2021 at 11:28 AM Robert Haas <rhaas@postgresql.org> wrote:\n> > xlog.c: Remove global variables ReadRecPtr and EndRecPtr.\n> > https://git.postgresql.org/pg/commitdiff/d2ddfa681db27a138acb63c8defa8cc6fa588922\n> \n> There is a buildfarm failure on lapwing which looks likely to be\n> attributable to this commit, but I don't understand what has gone\n> wrong exactly. 
The error message that is reported is:\n\nNo, this is an unrelated problem; Tom reported this yesterday also:\nhttps://postgr.es/m/45597.1637694259@sss.pgh.pa.us\n\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"La espina, desde que nace, ya pincha\" (Proverbio africano)\n\n\n", "msg_date": "Wed, 24 Nov 2021 15:51:41 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: xlog.c: Remove global variables ReadRecPtr and EndRecPtr." }, { "msg_contents": "On Wed, Nov 24, 2021 at 1:51 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > There is a buildfarm failure on lapwing which looks likely to be\n> > attributable to this commit, but I don't understand what has gone\n> > wrong exactly. The error message that is reported is:\n>\n> No, this is an unrelated problem; Tom reported this yesterday also:\n> https://postgr.es/m/45597.1637694259@sss.pgh.pa.us\n\nOh, OK. I couldn't see exactly what that could have to do with my\ncommit, but it happened right afterwards so I was hesitant to call it\na coincidence.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Nov 2021 13:58:00 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: xlog.c: Remove global variables ReadRecPtr and EndRecPtr." }, { "msg_contents": "Hi,\n\nOn 2021-11-24 16:27:58 +0000, Robert Haas wrote:\n> xlog.c: Remove global variables ReadRecPtr and EndRecPtr.\n\nThis fails when building with WAL_DEBUG. There's remaining Read/EndRecPtr\nreferences in #ifdef WAL_DEBUG sections...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 Nov 2021 15:12:06 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: xlog.c: Remove global variables ReadRecPtr and EndRecPtr." 
}, { "msg_contents": "Hi,\n\nOn 2021-11-24 15:12:06 -0800, Andres Freund wrote:\n> This fails when building with WAL_DEBUG. There's remaining Read/EndRecPtr\n> references in #ifdef WAL_DEBUG sections...\n\nPushed the obvious fix for that. Somehow thought I'd seen more compile failure\nthan the one WAL_DEBUG...\n\n\n", "msg_date": "Wed, 24 Nov 2021 17:01:45 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: xlog.c: Remove global variables ReadRecPtr and EndRecPtr." }, { "msg_contents": "On Wed, Nov 24, 2021 at 8:01 PM Andres Freund <andres@anarazel.de> wrote:\n> Pushed the obvious fix for that. Somehow thought I'd seen more compile failure\n> than the one WAL_DEBUG...\n\nHmm, thanks. I guess i put too much trust in the compiler.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Nov 2021 20:30:48 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: xlog.c: Remove global variables ReadRecPtr and EndRecPtr." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Nov 24, 2021 at 8:01 PM Andres Freund <andres@anarazel.de> wrote:\n>> Pushed the obvious fix for that. Somehow thought I'd seen more compile failure\n>> than the one WAL_DEBUG...\n\n> Hmm, thanks. I guess i put too much trust in the compiler.\n\nMy approach to such patches is always \"in grep we trust, all others\npay cash\". Even without #ifdef issues, you are highly likely to\nmiss comments that need to be updated.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Nov 2021 23:02:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: xlog.c: Remove global variables ReadRecPtr and EndRecPtr." 
}, { "msg_contents": "On Thu, Nov 25, 2021 at 11:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Wed, Nov 24, 2021 at 8:01 PM Andres Freund <andres@anarazel.de> wrote:\n> >> Pushed the obvious fix for that. Somehow thought I'd seen more compile failure\n> >> than the one WAL_DEBUG...\n>\n> > Hmm, thanks. I guess i put too much trust in the compiler.\n>\n> My approach to such patches is always \"in grep we trust, all others\n> pay cash\". Even without #ifdef issues, you are highly likely to\n> miss comments that need to be updated.\n\nRight. I didn't rely completely on grep and did spend a lot of time\nlooking through the code. But sometimes my eyes glaze over and I get\nsloppy after too much time working on one thing, and this seems to\nhave been one of those times.\n\nFortunately, it seems to have been only a minor oversight.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Nov 2021 10:24:15 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: xlog.c: Remove global variables ReadRecPtr and EndRecPtr." }, { "msg_contents": "Hi, \n\nOn November 26, 2021 5:53:31 PM PST, Andres Freund <andres@anarazel.de> wrote:\n>Hi, \n>\n>On November 26, 2021 7:24:15 AM PST, Robert Haas <robertmhaas@gmail.com> wrote:\n>>Fortunately, it seems to have been only a minor oversight.\n>\nAgreed. \n\nI wonder if we should turn some of these ifdefs into something boiling down to if (0) { ifdef body}... No need to make it impossible for the compiler to help us in most of these cases...\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Fri, 26 Nov 2021 17:55:17 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: xlog.c: Remove global variables ReadRecPtr and EndRecPtr." } ]
[ { "msg_contents": "Hi,\n\nBy inspection of plperl and plpython, it looks like the canonical pattern\nfor a PL using internal subtransactions is:\n\nsave CurrentMemoryContext\nsave CurrentResourceOwner\nBeginInternalSubTransaction\nreimpose the saved memory context\n// but not the saved resource owner\n\n...\n(RollbackAnd)?ReleaseCurrentSubTransaction\nreimpose the saved memory context\nand the saved resource owner\n\n\nTherefore, during the subtransaction, its newly-established memory context\nis accessible as CurTransactionMemoryContext, but the caller can still use\nCurrentMemoryContext to refer to the same context it already expected.\n\nBy contrast, the newly established resource owner is both the\nCurTransactionResourceOwner and the CurrentResourceOwner within the scope\nof the subtransaction.\n\nIs there more explanation of this pattern written somewhere than I have\nmanaged to find, and in particular of the motivation for treating the memory\ncontext and the resource owner in these nearly-but-not-quite matching\nways?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 24 Nov 2021 15:28:29 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "internal subtransactions, memory contexts, resource owners" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> By inspection of plperl and plpython, it looks like the canonical pattern\n> for a PL using internal subtransactions is:\n\n> save CurrentMemoryContext\n> save CurrentResourceOwner\n> BeginInternalSubTransaction\n> reimpose the saved memory context\n> // but not the saved resource owner\n\n> ...\n> (RollbackAnd)?ReleaseCurrentSubTransaction\n> reimpose the saved memory context\n> and the saved resource owner\n\n> Therefore, during the subtransaction, its newly-established memory context\n> is accessible as CurTransactionMemoryContext, but the caller can still use\n> CurrentMemoryContext to refer to the same context it already expected.\n\n> By contrast, 
the newly established resource owner is both the\n> CurTransactionResourceOwner and the CurrentResourceOwner within the scope\n> of the subtransaction.\n\n> Is there more explanation of this pattern written somewhere than I have\n> managed to find, and in particular of the motivation for treating the memory\n> context and the resource owner in these nearly-but-not-quite matching\n> ways?\n\nYou normally want a separate resource owner for a subtransaction, since\nthe main point of a subtransaction is to be able to clean up after errors\nand release resources. What to do with CurrentMemoryContext is a lot more\nspecific to a particular PL. I don't want to speak to plperl and plpython\nin particular, but plpgsql does it this way because it uses the same\nfunction parsetree data structure and the same variable values throughout\nexecution of a function. You would not, say, want the function's local\nvariables to revert to previous values upon failure of a BEGIN block;\nso they have to be kept in the same memory context throughout.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Nov 2021 17:16:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: internal subtransactions, memory contexts, resource owners" } ]
[ { "msg_contents": "Hi Hackers,\n\nWhile an exclusive backup is in progress if Postgres restarts, postgres\nruns the recovery from the checkpoint identified by the label file instead\nof the control file. This can cause long recovery or even sometimes fail to\nrecover as the WAL records corresponding to that checkpoint location are\nremoved. I can write a layer in my control plane to remove the backup_label\nfile when I know the server is not in restore from the base backup but I\ndon't see a reason why everyone has to repeat this step. Am I missing\nsomething?\n\nIf there are no standby.signal or recovery.signal, what is the use case of\nhonoring backup_label file? Even when they exist, for a long running\nrecovery, should we honor the backup_label file as the majority of the WAL\nalready applied? It does slow down the recovery on restart right as it has\nto start all the way from the beginning?\n\nThanks,\nSatya", "msg_date": "Wed, 24 Nov 2021 14:12:19 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Postgres restart in the middle of exclusive backup and the presence\n of backup_label file" }, { "msg_contents": "On Wed, Nov 24, 2021 at 02:12:19PM -0800, SATYANARAYANA NARLAPURAM wrote:\n> While an exclusive backup is in progress if Postgres restarts, postgres\n> runs the recovery from the checkpoint identified by the label file instead\n> of the control file. This can cause long recovery or even sometimes fail to\n> recover as the WAL records corresponding to that checkpoint location are\n> removed. I can write a layer in my control plane to remove the backup_label\n> file when I know the server is not in restore from the base backup but I\n> don't see a reason why everyone has to repeat this step. Am I missing\n> something?\n\nThis is a known issue with exclusive backups, which is a reason why \nnon-exclusive backups have been implemented. pg_basebackup does that,\nand using \"false\" as the third argument of pg_start_backup() would\nhave the same effect. So I would recommend to switch to that.\n--\nMichael", "msg_date": "Thu, 25 Nov 2021 07:45:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "Thanks Michael!\n\nThis is a known issue with exclusive backups, which is a reason why\n> non-exclusive backups have been implemented. pg_basebackup does that,\n> and using \"false\" as the third argument of pg_start_backup() would\n> have the same effect. So I would recommend to switch to that.\n>\n\nIs there a plan in place to remove the exclusive backup option from the\ncore in PG 15/16? 
If we are keeping it then why not make it better?", "msg_date": "Thu, 25 Nov 2021 18:19:03 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Thu, Nov 25, 2021 at 06:19:03PM -0800, SATYANARAYANA NARLAPURAM wrote:\n> Is there a plan in place to remove the exclusive backup option from the\n> core in PG 15/16?\n\nThis was discussed, but removing it could also harm users relying on\nit. Perhaps it could be revisited, but I am not sure if this is worth\nworrying about either.\n\n> If we are keeping it then why not make it better?\n\nWell, non-exclusive backups are better by design in many aspects, so I\ndon't quite see the point in spending time on something that has more\nlimitations than what's already in place.\n--\nMichael", "msg_date": "Fri, 26 Nov 2021 14:57:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Nov 25, 2021 at 06:19:03PM -0800, SATYANARAYANA NARLAPURAM wrote:\n>> If we are keeping it then why not make it better?\n\n> Well, non-exclusive backups are better by design in many aspects, so I\n> don't quite see the point in spending time on something that has more\n> limitations than what's already in place.\n\nIMO the main reason 
for keeping it is backwards compatibility for users\nwho have a satisfactory backup arrangement using it. That same argument\nimplies that we shouldn't change how it works (at least, not very much).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Nov 2021 10:31:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 11/26/21, 7:33 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> Michael Paquier <michael@paquier.xyz> writes:\r\n>> On Thu, Nov 25, 2021 at 06:19:03PM -0800, SATYANARAYANA NARLAPURAM wrote:\r\n>>> If we are keeping it then why not make it better?\r\n>\r\n>> Well, non-exclusive backups are better by design in many aspects, so I\r\n>> don't quite see the point in spending time on something that has more\r\n>> limitations than what's already in place.\r\n>\r\n> IMO the main reason for keeping it is backwards compatibility for users\r\n> who have a satisfactory backup arrangement using it. 
That same argument\r\n> implies that we shouldn't change how it works (at least, not very much).\r\n\r\nThe issues with exclusive backups seem to be fairly well-documented\r\n(e.g., c900c15), but perhaps there should also be a note in the\r\n\"Backup Control Functions\" table [0].\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/docs/devel/functions-admin.html#FUNCTIONS-ADMIN-BACKUP\r\n\r\n", "msg_date": "Mon, 29 Nov 2021 18:25:24 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of\n backup_label file" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Thu, Nov 25, 2021 at 06:19:03PM -0800, SATYANARAYANA NARLAPURAM wrote:\n> >> If we are keeping it then why not make it better?\n> \n> > Well, non-exclusive backups are better by design in many aspects, so I\n> > don't quite see the point in spending time on something that has more\n> > limitations than what's already in place.\n> \n> IMO the main reason for keeping it is backwards compatibility for users\n> who have a satisfactory backup arrangement using it. That same argument\n> implies that we shouldn't change how it works (at least, not very much).\n\nThere isn't a satisfactory backup approach using it specifically because\nof this issue, hence why we should remove it to make it so users don't\nrun into this. Would also simplify the documentation around the low\nlevel backup API, which would be a very good thing. 
Right now, making\nimprovements in that area is very challenging even if all you want to do\nis improve the documentation around the non-exclusive API.\n\nWe dealt with this as best as one could in pgbackrest for PG versions\nprior to when non-exclusive backup was added- which is to remove the\nbackup_label file as soon as possible and then put it back right before\nyou call pg_stop_backup() (since it'll complain otherwise). Not a\nperfect answer though and a risk still exists there of a failed restart\nhappening. Of course, for versions which support non-exclusive backup,\nwe use that to avoid this issue.\n\nWe also extensively changed how restore works a couple releases ago and\nwhile there was some noise about it, it certainly wasn't all that bad.\nI don't find the reasons brought up to continue to support exclusive\nbackup to be at all compelling and the lack of huge issues with the new\nway restore works to make it abundently clear that we can, in fact,\nremove exclusive backup in a major version change without the world\ncoming down.\n\nThanks,\n\nStephen", "msg_date": "Tue, 30 Nov 2021 09:20:42 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Tue, 2021-11-30 at 09:20 -0500, Stephen Frost wrote:\n> Greetings,\n> \n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > Michael Paquier <michael@paquier.xyz> writes:\n> > > On Thu, Nov 25, 2021 at 06:19:03PM -0800, SATYANARAYANA NARLAPURAM wrote:\n> > > > If we are keeping it then why not make it better?\n> > \n> > > Well, non-exclusive backups are better by design in many aspects, so I\n> > > don't quite see the point in spending time on something that has more\n> > > limitations than what's already in place.\n> > \n> > IMO the main reason for keeping it is backwards compatibility for users\n> > who have a satisfactory backup arrangement using it.  
That same argument\n> > implies that we shouldn't change how it works (at least, not very much).\n> \n> There isn't a satisfactory backup approach using it specifically because\n> of this issue, hence why we should remove it to make it so users don't\n> run into this.\n\nThere is a satisfactory approach, as long as you are satisfied with\nmanually restarting the server if it crashed during a backup.\n\n> I don't find the reasons brought up to continue to support exclusive\n> backup to be at all compelling and the lack of huge issues with the new\n> way restore works to make it abundently clear that we can, in fact,\n> remove exclusive backup in a major version change without the world\n> coming down.\n\nI guess the lack of hue and cry was at least to a certain extent because\nthe exclusive backup API was deprecated, but not removed.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 30 Nov 2021 17:47:18 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "Greetings,\n\nOn Tue, Nov 30, 2021 at 11:47 Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n\n> On Tue, 2021-11-30 at 09:20 -0500, Stephen Frost wrote:\n> > Greetings,\n> >\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > > Michael Paquier <michael@paquier.xyz> writes:\n> > > > On Thu, Nov 25, 2021 at 06:19:03PM -0800, SATYANARAYANA NARLAPURAM\n> wrote:\n> > > > > If we are keeping it then why not make it better?\n> > >\n> > > > Well, non-exclusive backups are better by design in many aspects, so\n> I\n> > > > don't quite see the point in spending time on something that has more\n> > > > limitations than what's already in place.\n> > >\n> > > IMO the main reason for keeping it is backwards compatibility for users\n> > > who have a satisfactory backup arrangement using it. 
That same\n> argument\n> > > implies that we shouldn't change how it works (at least, not very\n> much).\n> >\n> > There isn't a satisfactory backup approach using it specifically because\n> > of this issue, hence why we should remove it to make it so users don't\n> > run into this.\n>\n> There is a satisfactory approach, as long as you are satisfied with\n> manually restarting the server if it crashed during a backup.\n\n\nI disagree that that’s a satisfactory approach. It certainly wasn’t\nintended or documented as part of the original feature and therefore to\ncall it satisfactory strikes me quite strongly as revisionist history.\n\n> I don't find the reasons brought up to continue to support exclusive\n> > backup to be at all compelling and the lack of huge issues with the new\n> > way restore works to make it abundently clear that we can, in fact,\n> > remove exclusive backup in a major version change without the world\n> > coming down.\n>\n> I guess the lack of hue and cry was at least to a certain extent because\n> the exclusive backup API was deprecated, but not removed.\n\n\nThese comments were in reference to the restore API, which was quite\nchanged (new special files that have to be touched, removing of\nrecovery.conf, options moved to postgresql.conf/.auto, etc). 
So, no.\n\nThanks,\n\nStephen", "msg_date": "Tue, 30 Nov 2021 12:48:15 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 11/30/21, 9:51 AM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\r\n> I disagree that that’s a satisfactory approach. It certainly wasn’t\r\n> intended or documented as part of the original feature and therefore\r\n> to call it satisfactory strikes me quite strongly as revisionist\r\n> history. \r\n\r\nIt looks like the exclusive way has been marked deprecated in all\r\nsupported versions along with a note that it will eventually be\r\nremoved. If it's not going to be removed out of fear of breaking\r\nbackward compatibility, I think the documentation should be updated to\r\nsay that. However, unless there is something that is preventing users
However, unless there is something that is preventing users\r\nfrom switching to the non-exclusive approach, I think it is reasonable\r\nto begin thinking about removing it.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 30 Nov 2021 21:56:36 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of\n backup_label file" }, { "msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> It looks like the exclusive way has been marked deprecated in all\n> supported versions along with a note that it will eventually be\n> removed. If it's not going to be removed out of fear of breaking\n> backward compatibility, I think the documentation should be updated to\n> say that. However, unless there is something that is preventing users\n> from switching to the non-exclusive approach, I think it is reasonable\n> to begin thinking about removing it.\n\nIf we're willing to outright remove it, I don't have any great objection.\nMy original two cents was that we shouldn't put effort into improving it;\nbut removing it isn't that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Nov 2021 17:26:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 11/30/21, 2:27 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> If we're willing to outright remove it, I don't have any great objection.\r\n> My original two cents was that we shouldn't put effort into improving it;\r\n> but removing it isn't that.\r\n\r\nI might try to put a patch together for the January commitfest, given\r\nthere is enough support.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 30 Nov 2021 22:50:11 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of 
exclusive backup and the\n presence of\n backup_label file" }, { "msg_contents": "On 11/30/21 17:26, Tom Lane wrote:\n> \"Bossart, Nathan\" <bossartn@amazon.com> writes:\n>> It looks like the exclusive way has been marked deprecated in all\n>> supported versions along with a note that it will eventually be\n>> removed. If it's not going to be removed out of fear of breaking\n>> backward compatibility, I think the documentation should be updated to\n>> say that. However, unless there is something that is preventing users\n>> from switching to the non-exclusive approach, I think it is reasonable\n>> to begin thinking about removing it.\n> \n> If we're willing to outright remove it, I don't have any great objection.\n> My original two cents was that we shouldn't put effort into improving it;\n> but removing it isn't that.\n\nThe main objections as I recall are that it is much harder for simple \nbackup scripts and commercial backup integrations to hold a connection \nto postgres open and write the backup label separately into the backup.\n\nAs Stephen noted, working in this area is much harder (even in the docs) \ndue to the need to keep both methods working. 
When I removed exclusive \nbackup it didn't break any tests, other than one that needed to generate \na corrupt backup, so we have virtually no coverage for that method.\n\nI did figure out how to keep the safe part of exclusive backup (not \nhaving to maintain a connection) while removing the dangerous part \n(writing backup_label into PGDATA), but it was a substantial amount of \nwork and I felt that it had little chance of being committed.\n\nAttaching the thread [1] that I started with a patch to remove exclusive \nbackup for reference.\n\n--\n\n[1] \nhttps://www.postgresql.org/message-id/flat/ac7339ca-3718-3c93-929f-99e725d1172c%40pgmasters.net\n\n\n", "msg_date": "Tue, 30 Nov 2021 17:58:15 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 11/30/21, 2:58 PM, \"David Steele\" <david@pgmasters.net> wrote:\r\n> I did figure out how to keep the safe part of exclusive backup (not\r\n> having to maintain a connection) while removing the dangerous part\r\n> (writing backup_label into PGDATA), but it was a substantial amount of\r\n> work and I felt that it had little chance of being committed.\r\n\r\nDo you think it's still worth trying to make it safe, or do you think\r\nwe should just remove exclusive mode completely?\r\n\r\n> Attaching the thread [1] that I started with a patch to remove exclusive\r\n> backup for reference.\r\n\r\nAh, good, some light reading. 
:)\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 30 Nov 2021 23:31:04 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of\n backup_label file" }, { "msg_contents": "On Tue, Nov 30, 2021 at 05:58:15PM -0500, David Steele wrote:\n> The main objections as I recall are that it is much harder for simple backup\n> scripts and commercial backup integrations to hold a connection to postgres\n> open and write the backup label separately into the backup.\n\nI don't quite understand why this argument would not hold even today,\neven if I'd like to think that more people are using pg_basebackup.\n\n> I did figure out how to keep the safe part of exclusive backup (not having\n> to maintain a connection) while removing the dangerous part (writing\n> backup_label into PGDATA), but it was a substantial amount of work and I\n> felt that it had little chance of being committed.\n\nWhich was, I guess, done by storing the backup_label contents within a\nfile different than backup_label, still maintained in the main data\nfolder to ensure that it gets included in the backup?\n--\nMichael", "msg_date": "Wed, 1 Dec 2021 09:54:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Tue, Nov 30, 2021 at 4:54 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Nov 30, 2021 at 05:58:15PM -0500, David Steele wrote:\n> > The main objections as I recall are that it is much harder for simple\n> backup\n> > scripts and commercial backup integrations to hold a connection to\n> postgres\n> > open and write the backup label separately into the backup.\n>\n> I don't quite understand why this argument would not hold even today,\n> even if I'd like to think that more people are using pg_basebackup.\n>\n> > I 
did figure out how to keep the safe part of exclusive backup (not\n> having\n> > to maintain a connection) while removing the dangerous part (writing\n> > backup_label into PGDATA), but it was a substantial amount of work and I\n> > felt that it had little chance of being committed.\n>\n> Which was, I guess, done by storing the backup_label contents within a\n> file different than backup_label, still maintained in the main data\n> folder to ensure that it gets included in the backup?\n>\n\nNon-exclusive backup has significant advantages over exclusive backups but\nwould like to add a few comments on the simplicity of exclusive backups -\n1/ It is not uncommon nowadays to take a snapshot based backup. Exclusive\nbackup simplifies this story as the backup label file is part of the\nsnapshot. Otherwise, one needs to store it somewhere outside as snapshot\nmetadata and copy this file over during restore (after creating a disk from\nthe snapshot) to the data directory. Typical steps included are 1/ start\npg_base_backup 2/ Take disk snapshot 3/ pg_stop_backup() 4/ Mark snapshot\nas consistent and add some create time metadata.\n2/ Control plane code responsible for taking backups is simpler with\nexclusive backups than non-exclusive as it doesn't maintain a connection to\nthe server, particularly when that orchestration is outside the machine the\nPostgres server is running on.\n\nIMHO, we should either remove the support for it or improve it but not\nleave it hanging there.\n\n", "msg_date": "Tue, 30 Nov 2021 17:56:39 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 11/30/21 19:54, Michael Paquier wrote:\n> On Tue, Nov 30, 2021 at 05:58:15PM -0500, David Steele wrote:\n>> I did figure out how to keep the safe part of exclusive backup (not having\n>> to maintain a connection) while removing the dangerous part 
(writing\n>> backup_label into PGDATA), but it was a substantial amount of work and I\n>> felt that it had little chance of being committed.\n> \n> Which was, I guess, done by storing the backup_label contents within a\n> file different than backup_label, still maintained in the main data\n> folder to ensure that it gets included in the backup?\n\nThat, or emit it from pg_start_backup() so the user can write it \nwherever they please. That would include writing it into PGDATA if they \nreally wanted to, but that would be on them and the default behavior \nwould be safe. The problem with this is if the user does not \nrename/supply backup_label on restore then they will get corruption and \nnot know it.\n\nHere's another idea. Since the contents of pg_wal are not supposed to be \ncopied, we could add a file there to indicate that the cluster should \nremove backup_label on restart. Our instructions also say to remove the \ncontents of pg_wal on restore if they were originally copied, so \nhopefully one of the two would happen. But, again, if they fail to \nfollow the directions it would lead to corruption.\n\nOrder would be important here. When starting the backup the proper order \nwould be to write pg_wal/backup_in_progress and then backup_label. When \nstopping the backup they would be removed in the reverse order.\n\nOn a restart if both are present then delete both in the correct order \nand start crash recovery using the info in pg_control. If only \nbackup_label is present then go into recovery using the info from \nbackup_label.\n\nIt's possible for pg_wal/backup_in_progress to be present by itself if \nthe server crashes after deleting backup_label but before deleting \npg_wal/backup_in_progress. In that case the server should simply remove \nit on start and go into crash recovery using the info from pg_control.\n\nThe advantage of this idea is that it does not change the current \ninstructions as far as I can see. 
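[Editor's note: the restart-time rules proposed above reduce to a small decision table. The sketch below is purely illustrative; the decide_startup() helper, the StartupAction type, and the pg_wal/backup_in_progress marker file are hypothetical names from this discussion, not code that exists in PostgreSQL.]

```c
#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical decision table for the proposal above.
 * Marker files: backup_label (in PGDATA) and pg_wal/backup_in_progress.
 * Creation order: the pg_wal marker first, then backup_label;
 * removal happens in the reverse order.
 */
typedef enum
{
	CRASH_RECOVERY,				/* recover from pg_control */
	BACKUP_RECOVERY				/* recover from backup_label */
} StartupAction;

static StartupAction
decide_startup(bool has_backup_label, bool has_pgwal_marker,
			   bool *remove_label, bool *remove_marker)
{
	*remove_label = false;
	*remove_marker = false;

	if (has_backup_label && has_pgwal_marker)
	{
		/*
		 * Server restarted while an exclusive backup was in progress:
		 * remove both files (backup_label first, then the marker) and
		 * crash-recover using the info in pg_control.
		 */
		*remove_label = true;
		*remove_marker = true;
		return CRASH_RECOVERY;
	}

	if (has_backup_label)
	{
		/*
		 * No marker in pg_wal, so this must be a restored backup
		 * (pg_wal contents are not copied into backups): go into
		 * recovery using the info from backup_label.
		 */
		return BACKUP_RECOVERY;
	}

	if (has_pgwal_marker)
	{
		/*
		 * Crash happened after backup_label was deleted but before the
		 * marker was: just remove the leftover marker and crash-recover.
		 */
		*remove_marker = true;
	}

	return CRASH_RECOVERY;
}
```

The point of the ordering is that no crash window ever leaves a state the table cannot classify safely.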
If the user is already following them, \nthey'll be fine. If they are not, then they'll need to start doing so.\n\nOf course, none of this affects users who are using non-exclusive \nbackup, which I do hope covers the majority by now.\n\nThoughts?\n\nRegards,\n-David\n\n\n", "msg_date": "Wed, 1 Dec 2021 11:00:20 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "\nOn 11/30/21 17:26, Tom Lane wrote:\n> \"Bossart, Nathan\" <bossartn@amazon.com> writes:\n>> It looks like the exclusive way has been marked deprecated in all\n>> supported versions along with a note that it will eventually be\n>> removed. If it's not going to be removed out of fear of breaking\n>> backward compatibility, I think the documentation should be updated to\n>> say that. However, unless there is something that is preventing users\n>> from switching to the non-exclusive approach, I think it is reasonable\n>> to begin thinking about removing it.\n> If we're willing to outright remove it, I don't have any great objection.\n> My original two cents was that we shouldn't put effort into improving it;\n> but removing it isn't that.\n>\n> \t\t\t\n\n\n+1\n\n\nLet's just remove it. 
We already know it's a footgun, and there's been\nplenty of warning.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 1 Dec 2021 11:14:54 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 11/30/21 18:31, Bossart, Nathan wrote:\n> On 11/30/21, 2:58 PM, \"David Steele\" <david@pgmasters.net> wrote:\n>> I did figure out how to keep the safe part of exclusive backup (not\n>> having to maintain a connection) while removing the dangerous part\n>> (writing backup_label into PGDATA), but it was a substantial amount of\n>> work and I felt that it had little chance of being committed.\n> \n> Do you think it's still worth trying to make it safe, or do you think\n> we should just remove exclusive mode completely?\n\nMy preference would be to remove it completely, but I haven't gotten a \nlot of traction so far.\n\n>> Attaching the thread [1] that I started with a patch to remove exclusive\n>> backup for reference.\n> \n> Ah, good, some light reading. 
:)\n\nSure, if you say so!\n\nRegards,\n-David\n\n\n", "msg_date": "Wed, 1 Dec 2021 11:27:42 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 12/1/21, 8:27 AM, \"David Steele\" <david@pgmasters.net> wrote:\r\n> On 11/30/21 18:31, Bossart, Nathan wrote:\r\n>> Do you think it's still worth trying to make it safe, or do you think\r\n>> we should just remove exclusive mode completely?\r\n>\r\n> My preference would be to remove it completely, but I haven't gotten a\r\n> lot of traction so far.\r\n\r\nIn this thread, I count 6 people who seem alright with removing it,\r\nand 2 who might be opposed, although I don't think anyone has\r\nexplicitly stated they are against it.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 1 Dec 2021 18:33:29 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of\n backup_label file" }, { "msg_contents": "On 12/1/21, 10:37 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 12/1/21, 8:27 AM, \"David Steele\" <david@pgmasters.net> wrote:\r\n>> On 11/30/21 18:31, Bossart, Nathan wrote:\r\n>>> Do you think it's still worth trying to make it safe, or do you think\r\n>>> we should just remove exclusive mode completely?\r\n>>\r\n>> My preference would be to remove it completely, but I haven't gotten a\r\n>> lot of traction so far.\r\n>\r\n> In this thread, I count 6 people who seem alright with removing it,\r\n> and 2 who might be opposed, although I don't think anyone has\r\n> explicitly stated they are against it.\r\n\r\nI hastily rebased the patch from 2018 and got it building and passing\r\nthe tests. 
I'm sure it will need additional changes, but I'll wait\r\nfor more feedback before I expend too much more effort on this.\r\n\r\nNathan", "msg_date": "Thu, 2 Dec 2021 00:30:25 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "\nOn 12/1/21 19:30, Bossart, Nathan wrote:\n> On 12/1/21, 10:37 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\n>> On 12/1/21, 8:27 AM, \"David Steele\" <david@pgmasters.net> wrote:\n>>> On 11/30/21 18:31, Bossart, Nathan wrote:\n>>>> Do you think it's still worth trying to make it safe, or do you think\n>>>> we should just remove exclusive mode completely?\n>>> My preference would be to remove it completely, but I haven't gotten a\n>>> lot of traction so far.\n>> In this thread, I count 6 people who seem alright with removing it,\n>> and 2 who might be opposed, although I don't think anyone has\n>> explicitly stated they are against it.\n> I hastily rebased the patch from 2018 and got it building and passing\n> the tests. 
I'm sure it will need additional changes, but I'll wait\n> for more feedback before I expend too much more effort on this.\n>\n\n\nShould we really be getting rid of\nPostgreSQL::Test::Cluster::backup_fs_hot() ?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 2 Dec 2021 11:00:32 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 12/2/21 11:00, Andrew Dunstan wrote:\n> \n> On 12/1/21 19:30, Bossart, Nathan wrote:\n>> On 12/1/21, 10:37 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\n>>> On 12/1/21, 8:27 AM, \"David Steele\" <david@pgmasters.net> wrote:\n>>>> On 11/30/21 18:31, Bossart, Nathan wrote:\n>>>>> Do you think it's still worth trying to make it safe, or do you think\n>>>>> we should just remove exclusive mode completely?\n>>>> My preference would be to remove it completely, but I haven't gotten a\n>>>> lot of traction so far.\n>>> In this thread, I count 6 people who seem alright with removing it,\n>>> and 2 who might be opposed, although I don't think anyone has\n>>> explicitly stated they are against it.\n>> I hastily rebased the patch from 2018 and got it building and passing\n>> the tests. 
I'm sure it will need additional changes, but I'll wait\n>> for more feedback before I expend too much more effort on this.\n>>\n> \n> Should we really be getting rid of\n> PostgreSQL::Test::Cluster::backup_fs_hot() ?\n\nAgreed, it would be better to update backup_fs_hot() to use exclusive \nmode and save out backup_label instead.\n\nRegards,\n-David\n\n\n", "msg_date": "Thu, 2 Dec 2021 12:38:50 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 12/2/21 12:38, David Steele wrote:\n> On 12/2/21 11:00, Andrew Dunstan wrote:\n>>\n>> Should we really be getting rid of\n>> PostgreSQL::Test::Cluster::backup_fs_hot() ?\n> \n> Agreed, it would be better to update backup_fs_hot() to use exclusive \n> mode and save out backup_label instead.\n\nOops, of course I meant non-exclusive mode.\n\nRegards,\n-David\n\n\n", "msg_date": "Thu, 2 Dec 2021 12:49:52 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 12/2/21, 9:50 AM, \"David Steele\" <david@pgmasters.net> wrote:\r\n> On 12/2/21 12:38, David Steele wrote:\r\n>> On 12/2/21 11:00, Andrew Dunstan wrote:\r\n>>>\r\n>>> Should we really be getting rid of\r\n>>> PostgreSQL::Test::Cluster::backup_fs_hot() ?\r\n>>\r\n>> Agreed, it would be better to update backup_fs_hot() to use exclusive\r\n>> mode and save out backup_label instead.\r\n>\r\n> Oops, of course I meant non-exclusive mode.\r\n\r\n+1. 
I'll fix that in the next revision.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 2 Dec 2021 21:31:53 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of\n backup_label file" }, { "msg_contents": "On 12/2/21, 1:34 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 12/2/21, 9:50 AM, \"David Steele\" <david@pgmasters.net> wrote:\r\n>> On 12/2/21 12:38, David Steele wrote:\r\n>>> On 12/2/21 11:00, Andrew Dunstan wrote:\r\n>>>>\r\n>>>> Should we really be getting rid of\r\n>>>> PostgreSQL::Test::Cluster::backup_fs_hot() ?\r\n>>>\r\n>>> Agreed, it would be better to update backup_fs_hot() to use exclusive\r\n>>> mode and save out backup_label instead.\r\n>>\r\n>> Oops, of course I meant non-exclusive mode.\r\n>\r\n> +1. I'll fix that in the next revision.\r\n\r\nI finally got around to looking into this, and I think I found why it\r\nwas done this way in 2018. backup_fs_hot() runs pg_start_backup(),\r\ncloses the session, copies the data, and then runs pg_stop_backup() in\r\na different session. This doesn't work with non-exclusive mode\r\nbecause the backup will be aborted when the session that runs\r\npg_start_backup() is closed. pg_stop_backup() will fail with a\r\n\"backup is not in progress\" error. Furthermore,\r\n010_logical_decoding_timelines.pl seems to be the only test that uses\r\nbackup_fs_hot().\r\n\r\nAfter a quick glance, I didn't see an easy way to hold a session open\r\nwhile the test does other things. 
If there isn't one, modifying\r\nbackup_fs_hot() to work with non-exclusive mode might be more trouble\r\nthan it is worth.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 7 Jan 2022 00:48:21 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Thu, Jan 6, 2022, at 9:48 PM, Bossart, Nathan wrote:\n> After a quick glance, I didn't see an easy way to hold a session open\n> while the test does other things. If there isn't one, modifying\n> backup_fs_hot() to work with non-exclusive mode might be more trouble\n> than it is worth.\nYou can use IPC::Run to start psql in background. See examples in\nsrc/test/recovery.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Thu, Jan 6, 2022, at 9:48 PM, Bossart, Nathan wrote:After a quick glance, I didn't see an easy way to hold a session openwhile the test does other things.  If there isn't one, modifyingbackup_fs_hot() to work with non-exclusive mode might be more troublethan it is worth.You can use IPC::Run to start psql in background. See examples insrc/test/recovery.--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Thu, 06 Jan 2022 22:20:55 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of\n backup_label file" }, { "msg_contents": "On 1/6/22 20:20, Euler Taveira wrote:\n> On Thu, Jan 6, 2022, at 9:48 PM, Bossart, Nathan wrote:\n>> After a quick glance, I didn't see an easy way to hold a session open\n>> while the test does other things.  If there isn't one, modifying\n>> backup_fs_hot() to work with non-exclusive mode might be more trouble\n>> than it is worth.\n >\n> You can use IPC::Run to start psql in background. 
See examples in\n> src/test/recovery.\n\nI don't think updating backup_fs_hot() is worth it here.\n\nbackup_fs_cold() works just fine for this case and if there is a need \nfor backup_fs_hot() in the future it can be implemented as needed.\n\nRegards,\n-David\n\n\n", "msg_date": "Fri, 7 Jan 2022 08:51:14 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 1/7/22, 5:52 AM, \"David Steele\" <david@pgmasters.net> wrote:\r\n> On 1/6/22 20:20, Euler Taveira wrote:\r\n>> On Thu, Jan 6, 2022, at 9:48 PM, Bossart, Nathan wrote:\r\n>>> After a quick glance, I didn't see an easy way to hold a session open\r\n>>> while the test does other things. If there isn't one, modifying\r\n>>> backup_fs_hot() to work with non-exclusive mode might be more trouble\r\n>>> than it is worth.\r\n>>\r\n>> You can use IPC::Run to start psql in background. See examples in\r\n>> src/test/recovery.\r\n>\r\n> I don't think updating backup_fs_hot() is worth it here.\r\n>\r\n> backup_fs_cold() works just fine for this case and if there is a need\r\n> for backup_fs_hot() in the future it can be implemented as needed.\r\n\r\nThanks for the pointer on IPC::Run. I had a feeling I was missing\r\nsomething obvious!\r\n\r\nI think I agree with David that it still isn't worth it for just this\r\none test. 
Of course, it would be great to test the non-exclusive\r\nbackup logic as much as possible, but I'm not sure that this\r\nparticular test will provide any sort of meaningful coverage.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 7 Jan 2022 18:17:29 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of\n backup_label file" }, { "msg_contents": "Here is a rebased patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 16 Feb 2022 17:10:53 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "Some review comments on the latest version:\n\n+ * runningBackups is a counter indicating the number of backups currently in\n+ * progress. forcePageWrites is set to true when either of these is\n+ * non-zero. lastBackupStart is the latest checkpoint redo location used as\n+ * a starting point for an online backup.\n */\n- ExclusiveBackupState exclusiveBackupState;\n- int nonExclusiveBackups;\n\nWhat do you mean by \"either of these is non-zero ''. Earlier we used\nto set forcePageWrites in case of both exclusive and non-exclusive\nbackups, but we have just one type of backup now.\n\n==\n\n- * OK to update backup counters, forcePageWrites and session-level lock.\n+ * OK to update backup counters and forcePageWrites.\n *\n\nWe still update the status of session-level lock so I don't think we\nshould update the above comment. 
See below code:\n\n if (XLogCtl->Insert.runningBackups == 0)\n {\n XLogCtl->Insert.forcePageWrites = false;\n }\n\n /*\n * Clean up session-level lock.\n *\n * You might think that WALInsertLockRelease() can be called before\n * cleaning up session-level lock because session-level lock doesn't need\n * to be protected with WAL insertion lock. But since\n * CHECK_FOR_INTERRUPTS() can occur in it, session-level lock must be\n * cleaned up before it.\n */\n sessionBackupState = SESSION_BACKUP_NONE;\n\n WALInsertLockRelease();\n\n==\n\n@@ -8993,18 +8686,16 @@ do_pg_abort_backup(int code, Datum arg)\n bool emit_warning = DatumGetBool(arg);\n\n /*\n- * Quick exit if session is not keeping around a non-exclusive backup\n- * already started.\n+ * Quick exit if session does not have a running backup.\n */\n- if (sessionBackupState != SESSION_BACKUP_NON_EXCLUSIVE)\n+ if (sessionBackupState != SESSION_BACKUP_RUNNING)\n return;\n\n WALInsertLockAcquireExclusive();\n- Assert(XLogCtl->Insert.nonExclusiveBackups > 0);\n- XLogCtl->Insert.nonExclusiveBackups--;\n+ Assert(XLogCtl->Insert.runningBackups > 0);\n+ XLogCtl->Insert.runningBackups--;\n\n- if (XLogCtl->Insert.exclusiveBackupState == EXCLUSIVE_BACKUP_NONE &&\n- XLogCtl->Insert.nonExclusiveBackups == 0)\n+ if (XLogCtl->Insert.runningBackups == 0)\n {\n XLogCtl->Insert.forcePageWrites = false;\n }\n\nI think we have a lot of common code in do_pg_abort_backup() and\npg_do_stop_backup(). So why not have a common function that can be\ncalled from both these functions.\n\n==\n\n+# Now delete the bogus backup_label file since it will interfere with startup\n+unlink(\"$pgdata/backup_label\")\n+ or BAIL_OUT(\"unable to unlink $pgdata/backup_label\");\n+\n\nWhy do we need this additional change? 
Earlier this was not required.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Thu, Feb 17, 2022 at 6:41 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> Here is a rebased patch.\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 18 Feb 2022 22:48:10 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Fri, Feb 18, 2022 at 10:48:10PM +0530, Ashutosh Sharma wrote:\n> Some review comments on the latest version:\n\nThanks for the feedback! Before I start spending more time on this one, I\nshould probably ask if this has any chance of making it into v15.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 18 Feb 2022 12:54:29 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Sat, Feb 19, 2022 at 2:24 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Fri, Feb 18, 2022 at 10:48:10PM +0530, Ashutosh Sharma wrote:\n> > Some review comments on the latest version:\n>\n> Thanks for the feedback! Before I start spending more time on this one, I\n> should probably ask if this has any chance of making it into v15.\n\nI don't see any reason why it can't make it to v15. However, it is not\nsuper urgent as the users facing this problem have a choice. 
They can\nuse non-exclusive mode.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Sat, 19 Feb 2022 09:37:40 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "I've attached an updated patch.\n\nOn Fri, Feb 18, 2022 at 10:48:10PM +0530, Ashutosh Sharma wrote:\n> + * runningBackups is a counter indicating the number of backups currently in\n> + * progress. forcePageWrites is set to true when either of these is\n> + * non-zero. lastBackupStart is the latest checkpoint redo location used as\n> + * a starting point for an online backup.\n> */\n> - ExclusiveBackupState exclusiveBackupState;\n> - int nonExclusiveBackups;\n> \n> What do you mean by \"either of these is non-zero ''. Earlier we used\n> to set forcePageWrites in case of both exclusive and non-exclusive\n> backups, but we have just one type of backup now.\n\nFixed this.\n\n> - * OK to update backup counters, forcePageWrites and session-level lock.\n> + * OK to update backup counters and forcePageWrites.\n> *\n> \n> We still update the status of session-level lock so I don't think we\n> should update the above comment. See below code:\n\nFixed this.\n\n> I think we have a lot of common code in do_pg_abort_backup() and\n> pg_do_stop_backup(). So why not have a common function that can be\n> called from both these functions.\n\nI didn't follow through with this change. I only saw a handful of lines\nthat looked similar, and AFAICT we'd need an extra branch for cleaning up\nthe session-level lock since do_pg_abort_backup() doesn't.\n\n> +# Now delete the bogus backup_label file since it will interfere with startup\n> +unlink(\"$pgdata/backup_label\")\n> + or BAIL_OUT(\"unable to unlink $pgdata/backup_label\");\n> +\n> \n> Why do we need this additional change? 
Earlier this was not required.\n\nIIUC this test relied on the following code to handle the bogus file:\n\n\t\t\t/*\n\t\t\t * Terminate exclusive backup mode to avoid recovery after a clean\n\t\t\t * fast shutdown. Since an exclusive backup can only be taken\n\t\t\t * during normal running (and not, for example, while running\n\t\t\t * under Hot Standby) it only makes sense to do this if we reached\n\t\t\t * normal running. If we're still in recovery, the backup file is\n\t\t\t * one we're recovering *from*, and we must keep it around so that\n\t\t\t * recovery restarts from the right place.\n\t\t\t */\n\t\t\tif (ReachedNormalRunning)\n\t\t\t\tCancelBackup();\n\nThe attached patch removes this code.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 21 Feb 2022 09:23:06 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nThis patch applies cleanly for me and passes installcheck-world.\r\nI have not yet studied all of the changes in detail.\r\n\r\nSome proofreading nits in the documentation: the pg_stop_backup\r\nwith arguments has lost the 'exclusive' argument, but still shows a comma\r\nbefore the 'wait_for_archive' argument. And the niladic pg_stop_backup\r\nis still documented, though it no longer exists.\r\n\r\nMy biggest concerns are the changes to the SQL-visible pg_start_backup\r\nand pg_stop_backup functions. 
When the non-exclusive API was implemented\r\n(in 7117685), that was done with care (with a new optional argument to\r\npg_start_backup, and a new overload of pg_stop_backup) to avoid immediate\r\nbreakage of working backup scripts.\r\n\r\nWith this patch, even scripts that were dutifully migrated to that new API and\r\nnow invoke pg_start_backup(label, false) or (label, exclusive => false) will\r\nimmediately and unnecessarily break. What I would suggest for this patch\r\nwould be to change the exclusive default from true to false, and have the\r\nfunction report an ERROR if true is passed.\r\n\r\nOtherwise, for sites using a third-party backup solution, there will be an\r\nunnecessary requirement to synchronize a PostgreSQL upgrade with an\r\nupgrade of the backup solution that won't be broken by the change. For\r\na site with their backup procedures scripted in-house, there will be an\r\nunnecessarily urgent need for the current admin team to study and patch\r\nthe currently-working scripts.\r\n\r\nThat can be avoided by just changing the default to false and rejecting calls\r\nwhere true is passed. That will break only scripts that never got the memo\r\nabout moving to non-exclusive backup, available for six years now.\r\n\r\nAssuming the value is false, so no error is thrown, is it practical to determine\r\nfrom flinfo->fn_expr whether the value was defaulted or supplied? If so, I would\r\nfurther suggest reporting a deprecation WARNING if it was explicitly supplied,\r\nwith a HINT that the argument can simply be removed at the call site, and will\r\nbecome unrecognized in some future release.\r\n\r\npg_stop_backup needs thought, because 7117685 added a new overload for that\r\nfunction, rather than just an optional argument. 
This patch removes the old\r\nniladic version that returned pg_lsn, leaving just one version, with an optional\r\nargument, that returns a record.\r\n\r\nHere again, the old niladic one was only suitable for exclusive backups, so there\r\ncan't be any script existing in 2022 that still calls that unless it has never been\r\nupdated in six years to nonexclusive backups, and that breakage can't be\r\nhelped.\r\n\r\nAny scripts that did get dutifully updated over the last six years will be calling the\r\nrecord-returning version, passing false, or exclusive => false. This patch as it\r\nstands will unnecessarily break those, but here again I think that can be avoided\r\njust by making the exclusive parameter optional with default false, and reporting\r\nan error if true is passed.\r\n\r\nHere again, I would consider also issuing a deprecation warning if the argument\r\nis explicitly supplied, if it is practical to determine that from fn_expr. (I haven't\r\nlooked yet to see how practical that is.)\r\n\r\nRegards,\r\n-Chap", "msg_date": "Sat, 26 Feb 2022 16:48:52 +0000", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence\n of backup_label file" }, { "msg_contents": "On 02/26/22 11:48, Chapman Flack wrote:\n> This patch applies cleanly for me and passes installcheck-world.\n> I have not yet studied all of the changes in detail.\n\nI've now looked through the rest, and the only further thing I noticed\nwas that xlog.c's do_pg_start_backup still has a tablespaces parameter\nto receive a List* of tablespaces if the caller wants, but this patch\nremoves the comment describing it:\n\n\n- * If \"tablespaces\" isn't NULL, it receives a list of tablespaceinfo structs\n- * describing the cluster's tablespaces.\n\n\nwhich seems like collateral damage.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sat, 26 Feb 2022 17:03:04 -0500", "msg_from": "Chapman Flack 
<chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Sat, Feb 26, 2022 at 04:48:52PM +0000, Chapman Flack wrote:\n> My biggest concerns are the changes to the SQL-visible pg_start_backup\n> and pg_stop_backup functions. When the non-exclusive API was implemented\n> (in 7117685), that was done with care (with a new optional argument to\n> pg_start_backup, and a new overload of pg_stop_backup) to avoid immediate\n> breakage of working backup scripts.\n> \n> With this patch, even scripts that were dutifully migrated to that new API and\n> now invoke pg_start_backup(label, false) or (label, exclusive => false) will\n> immediately and unnecessarily break. What I would suggest for this patch\n> would be to change the exclusive default from true to false, and have the\n> function report an ERROR if true is passed.\n> \n> Otherwise, for sites using a third-party backup solution, there will be an\n> unnecessary requirement to synchronize a PostgreSQL upgrade with an\n> upgrade of the backup solution that won't be broken by the change. For\n> a site with their backup procedures scripted in-house, there will be an\n> unnecessarily urgent need for the current admin team to study and patch\n> the currently-working scripts.\n> \n> That can be avoided by just changing the default to false and rejecting calls\n> where true is passed. That will break only scripts that never got the memo\n> about moving to non-exclusive backup, available for six years now.\n> \n> Assuming the value is false, so no error is thrown, is it practical to determine\n> from flinfo->fn_expr whether the value was defaulted or supplied? 
If so, I would\n> further suggest reporting a deprecation WARNING if it was explicitly supplied,\n> with a HINT that the argument can simply be removed at the call site, and will\n> become unrecognized in some future release.\n\nThis is a good point. I think I agree with your proposed changes. I\nbelieve it is possible to add a deprecation warning only when 'exclusive'\nis specified. If anything, we can create a separate function that accepts\nthe 'exclusive' parameter and that always emits a NOTICE or WARNING.\n\n> pg_stop_backup needs thought, because 7117685 added a new overload for that\n> function, rather than just an optional argument. This patch removes the old\n> niladic version that returned pg_lsn, leaving just one version, with an optional\n> argument, that returns a record.\n> \n> Here again, the old niladic one was only suitable for exclusive backups, so there\n> can't be any script existing in 2022 that still calls that unless it has never been\n> updated in six years to nonexclusive backups, and that breakage can't be\n> helped.\n> \n> Any scripts that did get dutifully updated over the last six years will be calling the\n> record-returning version, passing false, or exclusive => false. This patch as it\n> stands will unnecessarily break those, but here again I think that can be avoided\n> just by making the exclusive parameter optional with default false, and reporting\n> an error if true is passed.\n> \n> Here again, I would consider also issuing a deprecation warning if the argument\n> is explicitly supplied, if it is practical to determine that from fn_expr. (I haven't\n> looked yet to see how practical that is.)\n\nAgreed. I will look into updating this one, too. 
I think the 'exclusive'\nparameter should remain documented for now for both pg_start_backup() and\npg_stop_backup(), but this documentation will just note that it is there\nfor backward compatibility and must be set to false.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 26 Feb 2022 14:06:14 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Sat, Feb 26, 2022 at 05:03:04PM -0500, Chapman Flack wrote:\n> I've now looked through the rest, and the only further thing I noticed\n> was that xlog.c's do_pg_start_backup still has a tablespaces parameter\n> to receive a List* of tablespaces if the caller wants, but this patch\n> removes the comment describing it:\n> \n> \n> - * If \"tablespaces\" isn't NULL, it receives a list of tablespaceinfo structs\n> - * describing the cluster's tablespaces.\n> \n> \n> which seems like collateral damage.\n\nThanks. I will fix this and the proofreading nits you noted upthread in\nthe next revision.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 26 Feb 2022 14:08:32 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Sat, Feb 26, 2022 at 02:06:14PM -0800, Nathan Bossart wrote:\n> On Sat, Feb 26, 2022 at 04:48:52PM +0000, Chapman Flack wrote:\n>> Assuming the value is false, so no error is thrown, is it practical to determine\n>> from flinfo->fn_expr whether the value was defaulted or supplied? 
If so, I would\n>> further suggest reporting a deprecation WARNING if it was explicitly supplied,\n>> with a HINT that the argument can simply be removed at the call site, and will\n>> become unrecognized in some future release.\n> \n> This is a good point. I think I agree with your proposed changes. I\n> believe it is possible to add a deprecation warning only when 'exclusive'\n> is specified. If anything, we can create a separate function that accepts\n> the 'exclusive' parameter and that always emits a NOTICE or WARNING.\n\nI've spent some time looking into this, and I haven't found a clean way to\nemit a WARNING only if the \"exclusive\" parameter is supplied (and set to\nfalse). AFAICT flinfo->fn_expr doesn't tell us whether the parameter was\nsupplied or the default value was used. I was able to get it working by\nsplitting pg_start_backup() into 3 separate internal functions (i.e.,\npg_start_backup_1arg(), pg_start_backup_2arg(), and\npg_start_backup_3arg()), but this breaks calls such as\npg_start_backup('mylabel', exclusive => false), and it might complicate\nprivilege management for users.\n\nWithout a WARNING, I think it will be difficult to justify removing the\n\"exclusive\" parameter in the future. We would either need to leave it\naround forever, or we would have to risk unnecessarily breaking some\nworking backup scripts. 
I wonder if we should just remove it now and make\nsure that this change is well-documented in the release notes.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 28 Feb 2022 21:51:00 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 2/28/22 23:51, Nathan Bossart wrote:\n> On Sat, Feb 26, 2022 at 02:06:14PM -0800, Nathan Bossart wrote:\n>> On Sat, Feb 26, 2022 at 04:48:52PM +0000, Chapman Flack wrote:\n>>> Assuming the value is false, so no error is thrown, is it practical to determine\n>>> from flinfo->fn_expr whether the value was defaulted or supplied? If so, I would\n>>> further suggest reporting a deprecation WARNING if it was explicitly supplied,\n>>> with a HINT that the argument can simply be removed at the call site, and will\n>>> become unrecognized in some future release.\n>>\n>> This is a good point. I think I agree with your proposed changes. I\n>> believe it is possible to add a deprecation warning only when 'exclusive'\n>> is specified. If anything, we can create a separate function that accepts\n>> the 'exclusive' parameter and that always emits a NOTICE or WARNING.\n> \n> I've spent some time looking into this, and I haven't found a clean way to\n> emit a WARNING only if the \"exclusive\" parameter is supplied (and set to\n> false). AFAICT flinfo->fn_expr doesn't tell us whether the parameter was\n> supplied or the default value was used. 
I was able to get it working by\n> splitting pg_start_backup() into 3 separate internal functions (i.e.,\n> pg_start_backup_1arg(), pg_start_backup_2arg(), and\n> pg_start_backup_3arg()), but this breaks calls such as\n> pg_start_backup('mylabel', exclusive => false), and it might complicate\n> privilege management for users.\n> \n> Without a WARNING, I think it will be difficult to justify removing the\n> \"exclusive\" parameter in the future. We would either need to leave it\n> around forever, or we would have to risk unnecessarily breaking some\n> working backup scripts. I wonder if we should just remove it now and make\n> sure that this change is well-documented in the release notes.\n\nPersonally, I am in favor of removing it. We change/rename \nfunctions/tables/views when we need to, and this happens in almost every \nrelease.\n\nWhat we need to do is make sure that an older installation won't \nsilently work in a broken way, i.e. if we remove the exclusive flag \nsomebody expecting the pre-9.6 behavior might not receive an error and \nthink everything is OK. That would not be good.\n\nOne option might be to rename the functions. Something like \npg_backup_start/stop.\n\nRegards,\n-David\n\n\n", "msg_date": "Tue, 1 Mar 2022 08:44:51 -0600", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Tue, Mar 01, 2022 at 08:44:51AM -0600, David Steele wrote:\n> Personally, I am in favor of removing it. We change/rename\n> functions/tables/views when we need to, and this happens in almost every\n> release.\n> \n> What we need to do is make sure that an older installation won't silently\n> work in a broken way, i.e. if we remove the exclusive flag somebody\n> expecting the pre-9.6 behavior might not receive an error and think\n> everything is OK. 
That would not be good.\n> \n> One option might be to rename the functions. Something like\n> pg_backup_start/stop.\n\nI'm fine with this approach.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 1 Mar 2022 07:10:49 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 03/01/22 09:44, David Steele wrote:\n> Personally, I am in favor of removing it. We change/rename\n> functions/tables/views when we need to, and this happens in almost every\n> release.\n\nFor clarification, is that a suggestion to remove the 'exclusive' parameter\nin some later release, after using this release to default it to false and\nreject calls with true?\n\nI can get behind that proposal, even if we don't have a practical way\nto add the warning I suggested. I'd be happier with the warning, but can\nlive without it. Release notes can be the warning.\n\nThat way, at least, there would be a period of time where procedures\nthat currently work (by passing exclusive => false) would continue to work,\nand could be adapted as time permits by removing that argument, with no\nbehavioral change.\n\nThe later release removing the argument would then break only procedures\nthat had never done so. 
That's comparable to what's proposed for this\nrelease, which will only break procedures that have never migrated away\nfrom exclusive mode despite the time and notice to do so.\n\nThat seems ok to me.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 1 Mar 2022 11:09:13 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "Greetings,\n\n* Nathan Bossart (nathandbossart@gmail.com) wrote:\n> On Tue, Mar 01, 2022 at 08:44:51AM -0600, David Steele wrote:\n> > Personally, I am in favor of removing it. We change/rename\n> > functions/tables/views when we need to, and this happens in almost every\n> > release.\n> > \n> > What we need to do is make sure that an older installation won't silently\n> > work in a broken way, i.e. if we remove the exclusive flag somebody\n> > expecting the pre-9.6 behavior might not receive an error and think\n> > everything is OK. That would not be good.\n> > \n> > One option might be to rename the functions. Something like\n> > pg_backup_start/stop.\n> \n> I'm fine with this approach.\n\n+1.\n\nThanks,\n\nStephen", "msg_date": "Tue, 1 Mar 2022 11:39:55 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Tue, Mar 01, 2022 at 11:09:13AM -0500, Chapman Flack wrote:\n> On 03/01/22 09:44, David Steele wrote:\n>> Personally, I am in favor of removing it. We change/rename\n>> functions/tables/views when we need to, and this happens in almost every\n>> release.\n> \n> For clarification, is that a suggestion to remove the 'exclusive' parameter\n> in some later release, after using this release to default it to false and\n> reject calls with true?\n\nMy suggestion was to remove it in v15. 
My impression is that David and\nStephen agree, but I could be misinterpreting their responses.\n\n> That way, at least, there would be a period of time where procedures\n> that currently work (by passing exclusive => false) would continue to work,\n> and could be adapted as time permits by removing that argument, with no\n> behavioral change.\n\nI'm not sure if there's any advantage to kicking the can down the road. At\nsome point, we'll need to break existing backup scripts. Will we be more\nprepared to do that in v17 than we are now? We could maintain two sets of\nfunctions for a few releases and make it really clear in the documentation\nthat pg_start/stop_backup() are going to be removed soon (and always emit a\nWARNING when they are used). Would that address your concerns?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 1 Mar 2022 09:32:08 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 3/1/22 11:32, Nathan Bossart wrote:\n> On Tue, Mar 01, 2022 at 11:09:13AM -0500, Chapman Flack wrote:\n>> On 03/01/22 09:44, David Steele wrote:\n>>> Personally, I am in favor of removing it. We change/rename\n>>> functions/tables/views when we need to, and this happens in almost every\n>>> release.\n>>\n>> For clarification, is that a suggestion to remove the 'exclusive' parameter\n>> in some later release, after using this release to default it to false and\n>> reject calls with true?\n> \n> My suggestion was to remove it in v15. 
My impression is that David and\n> Stephen agree, but I could be misinterpreting their responses.\n\nI agree and I'm pretty sure Stephen does as well.\n\n>> That way, at least, there would be a period of time where procedures\n>> that currently work (by passing exclusive => false) would continue to work,\n>> and could be adapted as time permits by removing that argument, with no\n>> behavioral change.\n> \n> I'm not sure if there's any advantage to kicking the can down the road. At\n> some point, we'll need to break existing backup scripts. Will we be more\n> prepared to do that in v17 than we are now? We could maintain two sets of\n> functions for a few releases and make it really clear in the documentation\n> that pg_start/stop_backup() are going to be removed soon (and always emit a\n> WARNING when they are used). Would that address your concerns?\n\nI think people are going to complain no matter what. If scripts are \nbeing maintained changing the name is not a big deal (though moving from \nexclusive to non-exclusive may be). If they aren't being maintained then \nthey'll just blow up a few versions down the road when we remove the \ncompatibility functions.\n\nRegards,\n-David\n\n\n", "msg_date": "Tue, 1 Mar 2022 12:22:25 -0600", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 03/01/22 12:32, Nathan Bossart wrote:\n> On Tue, Mar 01, 2022 at 11:09:13AM -0500, Chapman Flack wrote:\n>> That way, at least, there would be a period of time where procedures\n>> that currently work (by passing exclusive => false) would continue to work,\n>> and could be adapted as time permits by removing that argument, with no\n>> behavioral change.\n> \n> I'm not sure if there's any advantage to kicking the can down the road. At\n> some point, we'll need to break existing backup scripts. 
Will we be more\n> prepared to do that in v17 than we are now?\n\nYes, if we have provided a transition period the way we did in 7117685.\nThe way the on-ramp to that transition worked:\n\n- Initially, your procedures used exclusive mode.\n- If you changed nothing, they continued to work, with no behavior change.\n- You then had ample time to adapt them to non-exclusive mode so now\n they work that way.\n- You knew a day would come (here it comes) where, if you've never\n gotten around to doing that, your unchanged exclusive-mode procedures\n are going to break.\n\nSo here we are, arrived at that day. If you're still using exclusive mode,\nyour stuff's going to break; serves you right. So now limit the cases we\ncare about to people who made use of the time they were given to change\ntheir procedures to use exclusive => false. So the off-ramp can look like:\n\n- Initially, your procedures pass exclusive => false.\n- If you change nothing, they should continue to work, with no\n behavior change.\n- You should then have ample time to change to a new spelling without\n exclusive => false, and have them work that way.\n- You know some later day is coming where, if you've never\n gotten around to doing that, they're going to break.\n\nThen, when that day comes, if you're still passing exclusive at all,\nyour stuff's going to break; serves you right. If you have made use of\nthe time you were given for the changes, you'll be fine. So yes, at that\npoint, I think we can do it with clear conscience. We'll have made the\noff-ramp as smooth and navigable as the on-ramp was.\n\n> We could maintain two sets of\n> functions for a few releases and make it really clear in the documentation\n> that pg_start/stop_backup() are going to be removed soon (and always emit a\n> WARNING when they are used). Would that address your concerns?\n\nThat would. I had been thinking of not changing the names, and just making\nthe parameter go away. 
But I wasn't thinking of your concern here:\n\n> What we need to do is make sure that an older installation won't silently\n> work in a broken way, i.e. if we remove the exclusive flag somebody\n> expecting the pre-9.6 behavior might not receive an error and think\n> everything is OK. That would not be good.\n\nSo I'm ok with changing the names. Then step 3 of the off-ramp\nwould just be to call the functions by the new names, as well as to drop\nthe exclusive => false.\n\nThe thing I'd want to avoid is just, after the trouble that was taken\nto make the on-ramp navigable, making the off-ramp be a cliff.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 1 Mar 2022 13:32:39 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 03/01/22 13:22, David Steele wrote:\n> I think people are going to complain no matter what. If scripts are being\n> maintained changing the name is not a big deal (though moving from exclusive\n> to non-exclusive may be). If they aren't being maintained then they'll just\n> blow up a few versions down the road when we remove the compatibility\n> functions.\n\nI might have already said enough in the message that crossed with this,\nbut I think what I'm saying is there's a less-binary distinction between\nscripts that are/aren't \"being maintained\".\n\nThere can't really be many teams out there thinking \"we'll just ignore\nthese scripts forever, and nothing bad will happen.\" They all know they'll\nhave to do stuff sometimes. But it matters how we allow them to schedule it.\n\nIn the on-ramp, at first there was only exclusive. Then there were both\nmodes, with exclusive being deprecated, so teams knew they'd need to do\nstuff, and most by now probably have. 
They were able to do separation of\nhazards and schedule that work; they did not have to pile it onto the\nwhole plate of \"upgrade PG from 9.5 to 9.6 and make sure everything works\".\n\nSo now we're dropping the other shoe: first there was one mode, then both,\nnow there's only the other one. Again there's some work for teams to do;\nlet's again allow them to separate hazards and schedule that work apart\nfrom the whole 14 to 15 upgrade project.\n\nWe can't help getting complaints in the off-ramp from anybody who ignored\nthe on-ramp. But we can avoid clobbering the teams who dutifully played\nalong before, and only want the same space to do so now.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 1 Mar 2022 13:55:42 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "Greetings,\n\n* David Steele (david@pgmasters.net) wrote:\n> On 3/1/22 11:32, Nathan Bossart wrote:\n> >On Tue, Mar 01, 2022 at 11:09:13AM -0500, Chapman Flack wrote:\n> >>On 03/01/22 09:44, David Steele wrote:\n> >>>Personally, I am in favor of removing it. We change/rename\n> >>>functions/tables/views when we need to, and this happens in almost every\n> >>>release.\n> >>\n> >>For clarification, is that a suggestion to remove the 'exclusive' parameter\n> >>in some later release, after using this release to default it to false and\n> >>reject calls with true?\n> >\n> >My suggestion was to remove it in v15. 
My impression is that David and\n> >Stephen agree, but I could be misinterpreting their responses.\n> \n> I agree and I'm pretty sure Stephen does as well.\n\nYes, +1 to removing it.\n\n> >>That way, at least, there would be a period of time where procedures\n> >>that currently work (by passing exclusive => false) would continue to work,\n> >>and could be adapted as time permits by removing that argument, with no\n> >>behavioral change.\n> >\n> >I'm not sure if there's any advantage to kicking the can down the road. At\n> >some point, we'll need to break existing backup scripts. Will we be more\n> >prepared to do that in v17 than we are now? We could maintain two sets of\n> >functions for a few releases and make it really clear in the documentation\n> >that pg_start/stop_backup() are going to be removed soon (and always emit a\n> >WARNING when they are used). Would that address your concerns?\n> \n> I think people are going to complain no matter what. If scripts are being\n> maintained changing the name is not a big deal (though moving from exclusive\n> to non-exclusive may be). If they aren't being maintained then they'll just\n> blow up a few versions down the road when we remove the compatibility\n> functions.\n\nI don't think \"maintained\" and \"still using the exclusive backup\nmethod\" can both be true at the same time.\n\nThanks,\n\nStephen", "msg_date": "Tue, 1 Mar 2022 14:12:59 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "Greetings,\n\n* Chapman Flack (chap@anastigmatix.net) wrote:\n> On 03/01/22 13:22, David Steele wrote:\n> > I think people are going to complain no matter what. If scripts are being\n> > maintained changing the name is not a big deal (though moving from exclusive\n> > to non-exclusive may be). 
If they aren't being maintained then they'll just\n> > blow up a few versions down the road when we remove the compatibility\n> > functions.\n> \n> I might have already said enough in the message that crossed with this,\n> but I think what I'm saying is there's a less-binary distinction between\n> scripts that are/aren't \"being maintained\".\n> \n> There can't really be many teams out there thinking \"we'll just ignore\n> these scripts forever, and nothing bad will happen.\" They all know they'll\n> have to do stuff sometimes. But it matters how we allow them to schedule it.\n\nWe only make these changes between major versions. That's as much as we\nshould be required to provide.\n\nFurther, we seriously changed around how restores work a few versions\nback and there was rather little complaining.\n\nThanks,\n\nStephen", "msg_date": "Tue, 1 Mar 2022 14:14:41 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 03/01/22 14:14, Stephen Frost wrote:\n>> There can't really be many teams out there thinking \"we'll just ignore\n>> these scripts forever, and nothing bad will happen.\" They all know they'll\n>> have to do stuff sometimes. But it matters how we allow them to schedule it.\n> \n> We only make these changes between major versions. 
That's as much as we\n> should be required to provide.\n\nIt's an OSS project, so I guess we're not required to provide anything.\n\nBut in the course of this multi-release exclusive to non-exclusive\ntransition, we already demonstrated, in 7117685, that we can avoid\ninflicting immediate breakage when there's nothing in our objective\nthat inherently requires it, and avoiding it is relatively easy.\n\nI can't bring myself to think that was a bad precedent.\n\nNow, granted, the difference between the adaptations being required then\nand the ones required now is that those required both: changes to some\nfunction calls, and corresponding changes to how the scripts handled\nlabel and tablespace files. Here, it's only a clerical update to some\nfunction calls.\n\nSo if I'm outvoted here and the reason is \"look, a lighter burden is\ninvolved this time than that time\", then ok. I would rather bow to that\nargument on the specific facts of one case than abandon the precedent\nfrom 7117685 generally.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 1 Mar 2022 14:56:12 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "Greetings,\n\n* Chapman Flack (chap@anastigmatix.net) wrote:\n> On 03/01/22 14:14, Stephen Frost wrote:\n> >> There can't really be many teams out there thinking \"we'll just ignore\n> >> these scripts forever, and nothing bad will happen.\" They all know they'll\n> >> have to do stuff sometimes. But it matters how we allow them to schedule it.\n> > \n> > We only make these changes between major versions. 
That's as much as we\n> > should be required to provide.\n> \n> It's an OSS project, so I guess we're not required to provide anything.\n> \n> But in the course of this multi-release exclusive to non-exclusive\n> transition, we already demonstrated, in 7117685, that we can avoid\n> inflicting immediate breakage when there's nothing in our objective\n> that inherently requires it, and avoiding it is relatively easy.\n> \n> I can't bring myself to think that was a bad precedent.\n\nIt's actively bad because we are ridiculously inconsistent when it comes\nto these things and we're terrible about ever removing anything once\nit's gotten into the tree as 'deprecated'. Witness that it's 8 years\nsince 7117685 and we still have these old and clearly broken APIs\naround. We absolutely need to move *away* from this approach, exactly\nhow 2dedf4d9, much more recently than 7117685, for all of its other\nflaws, did.\n\n> So if I'm outvoted here and the reason is \"look, a lighter burden is\n> involved this time than that time\", then ok. I would rather bow to that\n> argument on the specific facts of one case than abandon the precedent\n> from 7117685 generally.\n\nIt's far from precedent- if anything, it's quite the opposite from how\nmost changes around here are made, and much more recent commits in the\nsame area clearly tossed out entirely the idea of trying to maintain\nsome kind of backwards compatibility with existing scripts.\n\nThanks,\n\nStephen", "msg_date": "Tue, 1 Mar 2022 15:05:23 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "Here is a new version of the patch with the following changes:\n\n\t1. Addressed Chap's feedback from upthread.\n\t2. Renamed pg_start/stop_backup() to pg_backup_start/stop() as\n\t suggested by David.\n\t3. 
A couple of other small documentation adjustments.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 1 Mar 2022 17:03:02 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 03/01/22 20:03, Nathan Bossart wrote:\n> Here is a new version of the patch with the following changes:\n\nI did not notice this earlier (sorry), but there seems to remain in\nbackup.sgml a programlisting example that shows a psql invocation\nfor pg_backup_start, then a tar command, then another psql invocation\nfor pg_backup_stop.\n\nI think that was only workable for the exclusive mode, and now it is\nnecessary to issue pg_backup_start and pg_backup_stop in the same session.\n\n(The 'touch backup_in_progress' business seems a bit bogus now too,\nsuggesting an exclusivity remembered from bygone days.)\n\nI am not sure what a workable, simple example ought to look like.\nMaybe a single psql script issuing the pg_backup_start and the\npg_backup_stop, with a tar command in between with \\! 
?\n\nSeveral bricks shy of production-ready, but it would give the idea.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 2 Mar 2022 14:23:51 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Wed, Mar 02, 2022 at 02:23:51PM -0500, Chapman Flack wrote:\n> I did not notice this earlier (sorry), but there seems to remain in\n> backup.sgml a programlisting example that shows a psql invocation\n> for pg_backup_start, then a tar command, then another psql invocation\n> for pg_backup_stop.\n> \n> I think that was only workable for the exclusive mode, and now it is\n> necessary to issue pg_backup_start and pg_backup_stop in the same session.\n> \n> (The 'touch backup_in_progress' business seems a bit bogus now too,\n> suggesting an exclusivity remembered from bygone days.)\n> \n> I am not sure what a workable, simple example ought to look like.\n> Maybe a single psql script issuing the pg_backup_start and the\n> pg_backup_stop, with a tar command in between with \\! ?\n> \n> Several bricks shy of production-ready, but it would give the idea.\n\nAnother option might be to just remove this section. The top of the\nsection mentions that this is easily done using pg_basebackup with the -X\nparameter. 
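(For illustration, that's roughly an invocation like the one below — the target directory and the choice of options are placeholders, not something taken from the docs. `-X stream` includes the WAL needed for recovery in the backup itself, so the result is usable standalone:)

```shell
# Illustrative placeholder invocation; adjust directory and options to taste.
# -D  : where to write the backup
# -Ft : tar-format output
# -X stream : stream the required WAL into the backup while it runs
pg_basebackup -D /path/to/backupdir -Ft -X stream
```
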
The bottom part of the section includes more complicated steps\nfor when \"more flexibility in copying the backup files is needed...\"\nAFAICT the more complicated strategy was around before pg_basebackup, and\nthe pg_basebackup recommendation was added in 2012 as part of 920febd.\nThoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 8 Mar 2022 12:01:07 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 3/8/22 14:01, Nathan Bossart wrote:\n> On Wed, Mar 02, 2022 at 02:23:51PM -0500, Chapman Flack wrote:\n>> I did not notice this earlier (sorry), but there seems to remain in\n>> backup.sgml a programlisting example that shows a psql invocation\n>> for pg_backup_start, then a tar command, then another psql invocation\n>> for pg_backup_stop.\n>>\n>> I think that was only workable for the exclusive mode, and now it is\n>> necessary to issue pg_backup_start and pg_backup_stop in the same session.\n>>\n>> (The 'touch backup_in_progress' business seems a bit bogus now too,\n>> suggesting an exclusivity remembered from bygone days.)\n>>\n>> I am not sure what a workable, simple example ought to look like.\n>> Maybe a single psql script issuing the pg_backup_start and the\n>> pg_backup_stop, with a tar command in between with \\! ?\n>>\n>> Several bricks shy of production-ready, but it would give the idea.\n> \n> Another option might be to just remove this section. The top of the\n> section mentions that this is easily done using pg_basebackup with the -X\n> parameter. 
The bottom part of the section includes more complicated steps\n> for when \"more flexibility in copying the backup files is needed...\"\n> AFAICT the more complicated strategy was around before pg_basebackup, and\n> the pg_basebackup recommendation was added in 2012 as part of 920febd.\n> Thoughts?\n\nThis makes sense to me. I think pg_basebackup is far preferable to doing \nanything like what is described in this section. Unless you are planning \nto do something fancy (parallelization, snapshots, object stores, etc.) \nthen pg_basebackup is the way to go.\n\nRegards,\n-David\n\n\n", "msg_date": "Tue, 8 Mar 2022 15:09:50 -0600", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Tue, Mar 08, 2022 at 03:09:50PM -0600, David Steele wrote:\n> On 3/8/22 14:01, Nathan Bossart wrote:\n>> On Wed, Mar 02, 2022 at 02:23:51PM -0500, Chapman Flack wrote:\n>> > I did not notice this earlier (sorry), but there seems to remain in\n>> > backup.sgml a programlisting example that shows a psql invocation\n>> > for pg_backup_start, then a tar command, then another psql invocation\n>> > for pg_backup_stop.\n>> > \n>> > I think that was only workable for the exclusive mode, and now it is\n>> > necessary to issue pg_backup_start and pg_backup_stop in the same session.\n>> > \n>> > (The 'touch backup_in_progress' business seems a bit bogus now too,\n>> > suggesting an exclusivity remembered from bygone days.)\n>> > \n>> > I am not sure what a workable, simple example ought to look like.\n>> > Maybe a single psql script issuing the pg_backup_start and the\n>> > pg_backup_stop, with a tar command in between with \\! ?\n>> > \n>> > Several bricks shy of production-ready, but it would give the idea.\n>> \n>> Another option might be to just remove this section. 
The top of the\n>> section mentions that this is easily done using pg_basebackup with the -X\n>> parameter. The bottom part of the section includes more complicated steps\n>> for when \"more flexibility in copying the backup files is needed...\"\n>> AFAICT the more complicated strategy was around before pg_basebackup, and\n>> the pg_basebackup recommendation was added in 2012 as part of 920febd.\n>> Thoughts?\n> \n> This makes sense to me. I think pg_basebackup is far preferable to doing\n> anything like what is described in this section. Unless you are planning to\n> do something fancy (parallelization, snapshots, object stores, etc.) then\n> pg_basebackup is the way to go.\n\nI spent some time trying to come up with a workable script to replace the\nexisting one. I think the main problem is that you need to write out both\nthe backup label file and the tablespace map file, but I didn't find an\neasy way to write the different output columns of pg_backup_stop() to\nseparate files via psql. We'd probably need to write out the steps in\nprose like the 'Making a Base Backup Using the Low Level API' section does.\nUltimately, I just removed everything beyond the pg_basebackup\nrecommendation in the 'Standalone Hot Backups' section.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 8 Mar 2022 14:12:53 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 03/08/22 17:12, Nathan Bossart wrote:\n> I spent some time trying to come up with a workable script to replace the\n> existing one. 
I think the main problem is that you need to write out both\n> the backup label file and the tablespace map file, but I didn't find an\n> easy way to write the different output columns of pg_backup_stop() to\n> separate files via psql.\n\nSomething like this might work:\n\nSELECT * FROM pg_backup_stop(true) \\gset\n\n\\out /tmp/backup_label \\qecho :labelfile\n\\out /tmp/tablespace_map \\qecho :spcmapfile\n\\out\n\\! ... tar command adding /tmp/{backup_label,tablespace_map} to the tarball\n\nI notice the \\qecho adds a final newline (and so if :spcmapfile is empty,\na file containing a single newline is made). In a quick test with a bogus\nrestore_command, I did not see any error messages specific to the format\nof the backup_label or tablespace_map files, so maybe the final newline\nisn't a problem.\n\nAssuming the newline isn't a problem, that might be simple enough to\nuse in an example, and maybe it's not a bad thing that it highlights a few\npsql capabilities the reader might not have stumbled on before. 
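Spelled out end to end, an (untested) sketch might look like this — the label, the /tmp paths, and the tar commands are all placeholders, and everything must run in the one psql session, since the non-exclusive backup ends if the session that started it disconnects:

```sql
-- Untested sketch; run entirely within a single psql session.
SELECT pg_backup_start(label => 'example', fast => false);
\! tar -cf /tmp/backup.tar -C "$PGDATA" .
SELECT * FROM pg_backup_stop(wait_for_archive => true) \gset
\out /tmp/backup_label
\qecho :labelfile
\out /tmp/tablespace_map
\qecho :spcmapfile
\out
\! tar -rf /tmp/backup.tar -C /tmp backup_label tablespace_map
```

Still with no error handling at all, of course, and modulo the trailing-newline caveat.
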
Or, maybe\nit is just too confusing to bother.\n\nWhile agreeing that pg_basebackup is the production-ready thing that\ndoes it all for you (with tests for likely errors and so on), I think\nthere is also some value in a dead-simple example that concretely\nshows you what \"it\" is, what the basic steps are that happen beneath\npg_basebackup's chrome.\n\nIf the added newline is a problem, I haven't thought of a way to exclude\nit that doesn't take the example out of the realm of dead-simple.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 8 Mar 2022 20:24:13 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "Greetings,\n\n* Chapman Flack (chap@anastigmatix.net) wrote:\n> On 03/08/22 17:12, Nathan Bossart wrote:\n> > I spent some time trying to come up with a workable script to replace the\n> > existing one. I think the main problem is that you need to write out both\n> > the backup label file and the tablespace map file, but I didn't find an\n> > easy way to write the different output columns of pg_backup_stop() to\n> > separate files via psql.\n\nLet's not confuse ourselves here- the existing script *doesn't* work in\nany reasonable way when we're talking about everything that needs to be\ndone to perform a backup. That a lot of people are using it because\nit's in the documentation is an actively bad thing.\n\nThe same goes for the archive command example.\n\n> Something like this might work:\n> \n> SELECT * FROM pg_backup_stop(true) \\gset\n> \n> \\out /tmp/backup_label \\qecho :labelfile\n> \\out /tmp/tablespace_map \\qecho :spcmapfile\n> \\out\n> \\! ... tar command adding /tmp/{backup_label,tablespace_map} to the tarball\n\n... this doesn't do what's needed either. 
We could try to write down\nsome minimum set of things that are needed for a backup tool to do but\nit's not something that a 3 line script is going to cover. Indeed, it's\na lot more like pg_basebackup and if we want to give folks a script to\nuse, it should be \"run pg_basebackup\".\n\n> I notice the \\qecho adds a final newline (and so if :spcmapfile is empty,\n> a file containing a single newline is made). In a quick test with a bogus\n> restore_command, I did not see any error messages specific to the format\n> of the backup_label or tablespace_map files, so maybe the final newline\n> isn't a problem.\n> \n> Assuming the newline isn't a problem, that might be simple enough to\n> use in an example, and maybe it's not a bad thing that it highlights a few\n> psql capabilities the reader might not have stumbled on before. Or, maybe\n> it is just too confusing to bother.\n\nIt's more than just too confusing, it's actively bad because people will\nactually use it and then end up with backups that don't work.\n\n> While agreeing that pg_basebackup is the production-ready thing that\n> does it all for you (with tests for likely errors and so on), I think\n> there is also some value in a dead-simple example that concretely\n> shows you what \"it\" is, what the basic steps are that happen beneath\n> pg_basebackup's chrome.\n\nDocumenting everything that pg_basebackup does to make sure that the\nbackup is viable might be something to work on if someone is really\nexcited about this, but it's not 'dead-simple' and it's darn close to\nthe bare minimum, something that none of these simple scripts will come\nanywhere close to being and instead they'll be far less than the\nminimum.\n\n> If the added newline is a problem, I haven't thought of a way to exclude\n> it that doesn't take the example out of the realm of dead-simple.\n\nI disagree that there's really a way to provide 'dead-simple' backups\nwith what's built into core without using pg_basebackup. 
If we want a\n'dead-simple' solution in core then we'd need to write an appropriate\nbackup tool that does all the basic things and include and maintain\nthat. Writing a shell script isn't enough and we shouldn't encourage\nour users to do exactly that by having it in our documentation because\nthen they'll think it's enough.\n\nThanks,\n\nStephen", "msg_date": "Wed, 9 Mar 2022 10:42:10 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Wed, Mar 9, 2022 at 4:42 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Chapman Flack (chap@anastigmatix.net) wrote:\n> > On 03/08/22 17:12, Nathan Bossart wrote:\n> > > I spent some time trying to come up with a workable script to replace the\n> > > existing one. I think the main problem is that you need to write out both\n> > > the backup label file and the tablespace map file, but I didn't find an\n> > > easy way to write the different output columns of pg_backup_stop() to\n> > > separate files via psql.\n>\n> Let's not confuse ourselves here- the existing script *doesn't* work in\n> any reasonable way when we're talking about everything that needs to be\n> done to perform a backup. That a lot of people are using it because\n> it's in the documentation is an actively bad thing.\n>\n> The same goes for the archive command example.\n>\n> > Something like this might work:\n> >\n> > SELECT * FROM pg_backup_stop(true) \\gset\n> >\n> > \\out /tmp/backup_label \\qecho :labelfile\n> > \\out /tmp/tablespace_map \\qecho :spcmapfile\n> > \\out\n> > \\! ... tar command adding /tmp/{backup_label,tablespace_map} to the tarball\n>\n> ... this doesn't do what's needed either. We could try to write down\n> some minimum set of things that are needed for a backup tool to do but\n> it's not something that a 3 line script is going to cover. 
Indeed, it's\n> a lot more like pg_basebackup and if we want to give folks a script to\n> use, it should be \"run pg_basebackup\".\n>\n> > I notice the \\qecho adds a final newline (and so if :spcmapfile is empty,\n> > a file containing a single newline is made). In a quick test with a bogus\n> > restore_command, I did not see any error messages specific to the format\n> > of the backup_label or tablespace_map files, so maybe the final newline\n> > isn't a problem.\n> >\n> > Assuming the newline isn't a problem, that might be simple enough to\n> > use in an example, and maybe it's not a bad thing that it highlights a few\n> > psql capabilities the reader might not have stumbled on before. Or, maybe\n> > it is just too confusing to bother.\n>\n> It's more than just too confusing, it's actively bad because people will\n> actually use it and then end up with backups that don't work.\n\n+1.\n\nOr even worse, backups that sometimes work, but not reliably and not every time.\n\n> > While agreeing that pg_basebackup is the production-ready thing that\n> > does it all for you (with tests for likely errors and so on), I think\n> > there is also some value in a dead-simple example that concretely\n> > shows you what \"it\" is, what the basic steps are that happen beneath\n> > pg_basebackup's chrome.\n\nI agree that having a dead simple script would be good.\n\nThe *only* dead simple script that's going to be possible is one that\ncalls pg_basebackup.\n\nThe current APIs don't make it *possible* to drive them directly with\na dead simple script.\n\nPretending something is simple when it's not, is not doing anybody a favor.\n\n\n> Documenting everything that pg_basebackup does to make sure that the\n> backup is viable might be something to work on if someone is really\n> excited about this, but it's not 'dead-simple' and it's darn close to\n> the bare minimum, something that none of these simple scripts will come\n> anywhere close to being and instead they'll be far less than 
the\n> minimum.\n\nYeah, having the full set of steps required documented certainly\nwouldn't be a bad thing. But it's a very *different* thing.\n\n\n> > If the added newline is a problem, I haven't thought of a way to exclude\n> > it that doesn't take the example out of the realm of dead-simple.\n>\n> I disagree that there's really a way to provide 'dead-simple' backups\n> with what's built into core without using pg_basebackup. If we want a\n> 'dead-simple' solution in core then we'd need to write an appropriate\n> backup tool that does all the basic things and include and maintain\n> that. Writing a shell script isn't enough and we shouldn't encourage\n> our users to do exactly that by having it in our documentation because\n> then they'll think it's enough.\n\n+1.\n\n\nWe need to accept that the current APIs are far too low level to be\ndriven by a shell script. No amount of documentation we write is\ngoing to change that fact.\n\nFor the people who want to drive their backups from a shell script and\nfor some reason *don't* want to use pg_basebackup, we need to come up\nwith a different API or a different set of tools. That is not a\ndocumentation task. That is a \"start from a list of which things\npg_basebackup cannot do that are still simple, or that tools like\npgbackrest cannot do if they're complicated\". And then design an API\nthat's actually safe and easy to use *for that use case*.\n\nFor example, if the use case is \"I want to use filesystem or SAN\nsnapshots for my backups\", we shouldn't try to write workarounds using\nbash coprocs or whatever. Instead, we could write a tool that\ninteracts with the current API to start the backup, then launches a\nshell script that interacts with the snapshot system, and then stops\nthe backup after. 
With a well defined set of rules for how that shell\nscript should work and interact.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 9 Mar 2022 17:22:56 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 03/09/22 11:22, Magnus Hagander wrote:\n>> It's more than just too confusing, it's actively bad because people will\n>> actually use it and then end up with backups that don't work.\n> \n> +1.\n> \n> Or even worse, backups that sometimes work, but not reliably and not\n> every time.\n> ...\n> Pretending something is simple when it's not, is not doing anybody a favor.\n\nOkay, I bow to this reasoning, for the purpose of this patch. Let's\njust lose the example.\n\n>> Documenting everything that pg_basebackup does to make sure that the\n>> backup is viable might be something to work on if someone is really\n>> excited about this, but it's not 'dead-simple' and it's darn close to\n>> the bare minimum, something that none of these simple scripts will come\n>> anywhere close to being and instead they'll be far less than the\n>> minimum.\n> \n> Yeah, having the full set of steps required documented certainly\n> wouldn't be a bad thing.\n\nI'd say that qualifies as an understatement. While it certainly doesn't\nhave to be part of this patch, if the claim is that an admin who relies\non pg_basebackup is relying on essential things pg_basebackup does that\nhave not been enumerated in our documentation yet, I would argue they\nshould be.\n\n> with a different API or a different set of tools. That is not a\n> documentation task. That is a \"start from a list of which things\n> pg_basebackup cannot do that are still simple, or that tools like\n> pgbackrest cannot do if they're complicated\". 
And then design an API\n> that's actually safe and easy to use *for that usecase*.\n\nThat might also be a good thing, but I don't see it as a substitute\nfor documenting the present reality of what the irreducibly essential\nbehaviors of pg_basebackup (or of third-party tools like pgbackrest)\nare, and why they are so.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 9 Mar 2022 12:12:07 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "Greetings,\n\n* Chapman Flack (chap@anastigmatix.net) wrote:\n> On 03/09/22 11:22, Magnus Hagander wrote:\n> >> It's more than just too confusing, it's actively bad because people will\n> >> actually use it and then end up with backups that don't work.\n> > \n> > +1.\n> > \n> > Or even worse, backups that sometimes work, but not reliably and not\n> > every time.\n> > ...\n> > Pretending something is simple when it's not, is not doing anybody a favor.\n> \n> Okay, I bow to this reasoning, for the purpose of this patch. Let's\n> just lose the example.\n\nGreat.\n\n> >> Documenting everything that pg_basebackup does to make sure that the\n> >> backup is viable might be something to work on if someone is really\n> >> excited about this, but it's not 'dead-simple' and it's darn close to\n> >> the bare minimum, something that none of these simple scripts will come\n> >> anywhere close to being and instead they'll be far less than the\n> >> minimum.\n> > \n> > Yeah, having the full set of steps required documented certainly\n> > wouldn't be a bad thing.\n> \n> I'd say that qualifies as an understatement. 
While it certainly doesn't\n> have to be part of this patch, if the claim is that an admin who relies\n> on pg_basebackup is relying on essential things pg_basebackup does that\n> have not been enumerated in our documentation yet, I would argue they\n> should be.\n\nIt doesn't have to be part of this patch and we should move forward with\nthis patch. Let's avoid hijacking this thread, which is about this\npatch, for an independent debate about what our documentation should or\nshouldn't include.\n\n> > with a different API or a different set of tools. That is not a\n> > documentation task. That is a \"start from a list of which things\n> > pg_basebackup cannot do that are still simple, or that tools like\n> > pgbackrest cannot do if they're complicated\". And then design an API\n> > that's actually safe and easy to use *for that usecase*.\n> \n> That might also be a good thing, but I don't see it as a substitute\n> for documenting the present reality of what the irreducibly essential\n> behaviors of pg_basebackup (or of third-party tools like pgbackrest)\n> are, and why they are so.\n\nI disagree. If we provided a tool then we'd document that tool and how\nusers can use it, not every single step that it does (see also:\npg_basebackup).\n\nThanks,\n\nStephen", "msg_date": "Wed, 9 Mar 2022 12:19:23 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 03/09/22 12:19, Stephen Frost wrote:\n> Let's avoid hijacking this thread, which is about this\n> patch, for an independent debate about what our documentation should or\n> shouldn't include.\n\nAgreed. 
New thread here:\n\nhttps://www.postgresql.org/message-id/6228FFE4.3050309%40anastigmatix.net\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 9 Mar 2022 14:32:24 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Wed, Mar 09, 2022 at 02:32:24PM -0500, Chapman Flack wrote:\n> On 03/09/22 12:19, Stephen Frost wrote:\n>> Let's avoid hijacking this thread, which is about this\n>> patch, for an independent debate about what our documentation should or\n>> shouldn't include.\n> \n> Agreed. New thread here:\n> \n> https://www.postgresql.org/message-id/6228FFE4.3050309%40anastigmatix.net\n\nGreat. Is there any additional feedback on this patch? Should we add an\nexample of using pg_basebackup in the \"Standalone Hot Backups\" section, or\nshould we leave all documentation additions like this for Chap's new\nthread?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 9 Mar 2022 14:21:24 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 03/09/22 17:21, Nathan Bossart wrote:\n> Great. Is there any additional feedback on this patch? 
Should we add an\n> example of using pg_basebackup in the \"Standalone Hot Backups\" section, or\n> should we leave all documentation additions like this for Chap's new\n> thread?\n\nI'm composing something longer for the new thread, but on the way\nI noticed something we might fit into this one.\n\nI think the listitem\n\n In the same connection as before, issue the command:\n SELECT * FROM pg_backup_stop(true);\n\nwould be clearer if it used named-parameter form, wait_for_archive => true.\n\nThis is not strictly necessary, of course, for a function with a single\nIN parameter, but it's good documentation (and also could save us headaches\nlike these if there is ever another future need to give it more parameters).\n\nThat listitem doesn't say anything about what the parameter means, which\nis a little weird, but probably ok because the next listitem does go into\nit in some detail. I don't think a larger reorg is needed to bring that\nlanguage one listitem earlier. Just naming the parameter is probably\nenough to make it less puzzling (or adding in that listitem, at most,\n\"the effect of the wait_for_archive parameter is explained below\").\n\nFor consistency (and the same futureproofing benefit), I'd go to\nfast => false in the earlier pg_backup_start as well.\n\nI'm more ambivalent about label => 'label'. 
It would be consistent,\nbut should we just agree for conciseness that there will always be\na label and it will always be first?\n\nYou can pretty much tell in a call what's a label; it's those anonymous\ntrues and falses that are easier to read with named notation.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 9 Mar 2022 18:11:19 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Wed, Mar 09, 2022 at 06:11:19PM -0500, Chapman Flack wrote:\n> I think the listitem\n> \n> In the same connection as before, issue the command:\n> SELECT * FROM pg_backup_stop(true);\n> \n> would be clearer if it used named-parameter form, wait_for_archive => true.\n> \n> This is not strictly necessary, of course, for a function with a single\n> IN parameter, but it's good documentation (and also could save us headaches\n> like these if there is ever another future need to give it more parameters).\n> \n> That listitem doesn't say anything about what the parameter means, which\n> is a little weird, but probably ok because the next listitem does go into\n> it in some detail. I don't think a larger reorg is needed to bring that\n> language one listitem earlier. Just naming the parameter is probably\n> enough to make it less puzzling (or adding in that listitem, at most,\n> \"the effect of the wait_for_archive parameter is explained below\").\n> \n> For consistency (and the same futureproofing benefit), I'd go to\n> fast => false in the earlier pg_backup_start as well.\n> \n> I'm more ambivalent about label => 'label'. It would be consistent,\n> but should we just agree for conciseness that there will always be\n> a label and it will always be first?\n> \n> You can pretty much tell in a call what's a label; it's those anonymous\n> trues and falses that are easier to read with named notation.\n\nDone. 
I went ahead and added \"label => 'label'\" for consistency.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 9 Mar 2022 16:06:22 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 3/9/22 18:06, Nathan Bossart wrote:\n> On Wed, Mar 09, 2022 at 06:11:19PM -0500, Chapman Flack wrote:\n>> I think the listitem\n>>\n>> In the same connection as before, issue the command:\n>> SELECT * FROM pg_backup_stop(true);\n>>\n>> would be clearer if it used named-parameter form, wait_for_archive => true.\n>>\n>> This is not strictly necessary, of course, for a function with a single\n>> IN parameter, but it's good documentation (and also could save us headaches\n>> like these if there is ever another future need to give it more parameters).\n>>\n>> That listitem doesn't say anything about what the parameter means, which\n>> is a little weird, but probably ok because the next listitem does go into\n>> it in some detail. I don't think a larger reorg is needed to bring that\n>> language one listitem earlier. Just naming the parameter is probably\n>> enough to make it less puzzling (or adding in that listitem, at most,\n>> \"the effect of the wait_for_archive parameter is explained below\").\n>>\n>> For consistency (and the same futureproofing benefit), I'd go to\n>> fast => false in the earlier pg_backup_start as well.\n>>\n>> I'm more ambivalent about label => 'label'. It would be consistent,\n>> but should we just agree for conciseness that there will always be\n>> a label and it will always be first?\n>>\n>> You can pretty much tell in a call what's a label; it's those anonymous\n>> trues and falses that are easier to read with named notation.\n> \n> Done. 
I went ahead and added \"label => 'label'\" for consistency.\n\n+1 from me.\n\nRegards,\n-David\n\n\n\n", "msg_date": "Wed, 9 Mar 2022 18:41:46 -0600", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 03/09/22 19:06, Nathan Bossart wrote:\n> Done. I went ahead and added \"label => 'label'\" for consistency.\n\nLooks like this change to an example in func.sgml is not quite right:\n\n-postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());\n+postgres=# SELECT * FROM pg_walfile_name_offset(pg_backup_stop());\n\npg_backup_stop returns a record now, not just lsn. So this works for me:\n\n+postgres=# SELECT * FROM pg_walfile_name_offset((pg_backup_stop()).lsn);\n\n\nOtherwise, all looks to be in good order.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 10 Mar 2022 19:13:14 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Thu, Mar 10, 2022 at 07:13:14PM -0500, Chapman Flack wrote:\n> Looks like this change to an example in func.sgml is not quite right:\n> \n> -postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());\n> +postgres=# SELECT * FROM pg_walfile_name_offset(pg_backup_stop());\n> \n> pg_backup_stop returns a record now, not just lsn. So this works for me:\n> \n> +postgres=# SELECT * FROM pg_walfile_name_offset((pg_backup_stop()).lsn);\n\nAh, good catch. I made this change in v7. 
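For reference, the corrected func.sgml example — extracting the lsn field from the record that pg_backup_stop() now returns, since pg_walfile_name_offset() expects a pg_lsn — reads:

```sql
-- pg_backup_stop() now returns a record (lsn, labelfile, spcmapfile);
-- pull out just the lsn field before passing it along:
SELECT * FROM pg_walfile_name_offset((pg_backup_stop()).lsn);
```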
I considered doing something\nlike this\n\n\tSELECT w.* FROM pg_backup_stop() b, pg_walfile_name_offset(b.lsn) w;\n\nbut I think your suggestion is simpler.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 10 Mar 2022 16:38:34 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 03/10/22 19:38, Nathan Bossart wrote:\n> On Thu, Mar 10, 2022 at 07:13:14PM -0500, Chapman Flack wrote:\n>> +postgres=# SELECT * FROM pg_walfile_name_offset((pg_backup_stop()).lsn);\n> \n> Ah, good catch. I made this change in v7. I considered doing something\n\nv7 looks good to me. I have repeated the installcheck-world and given\nthe changed documentation sections one last read-through, and will\nchange the status to RfC.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 11 Mar 2022 14:00:37 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Fri, Mar 11, 2022 at 02:00:37PM -0500, Chapman Flack wrote:\n> v7 looks good to me. 
I have repeated the installcheck-world and given\n> the changed documentation sections one last read-through, and will\n> change the status to RfC.\n\nThanks for reviewing!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 11 Mar 2022 11:31:58 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "Greetings,\n\n* Nathan Bossart (nathandbossart@gmail.com) wrote:\n> On Thu, Mar 10, 2022 at 07:13:14PM -0500, Chapman Flack wrote:\n> > Looks like this change to an example in func.sgml is not quite right:\n> > \n> > -postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());\n> > +postgres=# SELECT * FROM pg_walfile_name_offset(pg_backup_stop());\n> > \n> > pg_backup_stop returns a record now, not just lsn. So this works for me:\n> > \n> > +postgres=# SELECT * FROM pg_walfile_name_offset((pg_backup_stop()).lsn);\n> \n> Ah, good catch. I made this change in v7. I considered doing something\n> like this\n> \n> \tSELECT w.* FROM pg_backup_stop() b, pg_walfile_name_offset(b.lsn) w;\n> \n> but I think your suggestion is simpler.\n\nI tend to agree. More below.\n\n> Subject: [PATCH v7 1/1] remove exclusive backup mode\n> diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml\n> index 0d69851bb1..c8b914c1aa 100644\n> --- a/doc/src/sgml/backup.sgml\n> +++ b/doc/src/sgml/backup.sgml\n> @@ -881,19 +873,19 @@ test ! 
-f /mnt/server/archivedir/00000001000000A900000065 &amp;&amp; cp pg_wal/0\n> <listitem>\n> <para>\n> Connect to the server (it does not matter which database) as a user with\n> - rights to run pg_start_backup (superuser, or a user who has been granted\n> + rights to run pg_backup_start (superuser, or a user who has been granted\n> EXECUTE on the function) and issue the command:\n> <programlisting>\n> -SELECT pg_start_backup('label', false, false);\n> +SELECT pg_backup_start(label => 'label', fast => false);\n> </programlisting>\n> where <literal>label</literal> is any string you want to use to uniquely\n> identify this backup operation. The connection\n> - calling <function>pg_start_backup</function> must be maintained until the end of\n> + calling <function>pg_backup_start</function> must be maintained until the end of\n> the backup, or the backup will be automatically aborted.\n> </para>\n> \n> <para>\n> - By default, <function>pg_start_backup</function> can take a long time to finish.\n> + By default, <function>pg_backup_start</function> can take a long time to finish.\n> This is because it performs a checkpoint, and the I/O\n> required for the checkpoint will be spread out over a significant\n> period of time, by default half your inter-checkpoint interval\n\nHrmpf. Not this patch's fault, but the default isn't 'half your\ninter-checkpoint interval' anymore (checkpoint_completion_target was\nchanged to 0.9 ... let's not discuss who did that and missed fixing\nthis). Overall though, maybe we should reword this to avoid having to\nremember to change it again if we ever change the completion target\nagain? Also it seems to imply that pg_backup_start is going to run its\nown independent checkpoint or something, which isn't the typical case.\nI generally explain what is going on here like this:\n\nBy default, <function>pg_backup_start</function> will wait for the next\ncheckpoint to complete (see ref: checkpoint_timeout ... 
maybe also\nwal-configuration.html).\n\n> @@ -937,7 +925,7 @@ SELECT * FROM pg_stop_backup(false, true);\n> ready to archive.\n> </para>\n> <para>\n> - The <function>pg_stop_backup</function> will return one row with three\n> + The <function>pg_backup_stop</function> will return one row with three\n> values. The second of these fields should be written to a file named\n> <filename>backup_label</filename> in the root directory of the backup. The\n> third field should be written to a file named\n\nI get that we have <function> and </function>, but those are just\nformatting and don't show up in the actual text, and so it ends up\nbeing:\n\nThe pg_backup_stop will return\n\nWhich doesn't sound quite right to me. I'd say we either drop 'The' or\nadd 'function' after. I realize that this patch is mostly doing a\ns/pg_stop_backup/pg_backup_stop/, but, still.\n\n> diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\n> index 8a802fb225..73096708cc 100644\n> --- a/doc/src/sgml/func.sgml\n> +++ b/doc/src/sgml/func.sgml\n> @@ -25732,24 +25715,19 @@ LOG: Grand total: 1651920 bytes in 201 blocks; 622360 free (88 chunks); 1029560\n> <parameter>spcmapfile</parameter> <type>text</type> )\n> </para>\n> <para>\n> - Finishes performing an exclusive or non-exclusive on-line backup.\n> - The <parameter>exclusive</parameter> parameter must match the\n> - previous <function>pg_start_backup</function> call.\n> - In an exclusive backup, <function>pg_stop_backup</function> removes\n> - the backup label file and, if it exists, the tablespace map file\n> - created by <function>pg_start_backup</function>. In a non-exclusive\n> - backup, the desired contents of these files are returned as part of\n> + Finishes performing an on-line backup. 
The desired contents of the\n> + backup label file and the tablespace map file are returned as part of\n> the result of the function, and should be written to files in the\n> backup area (not in the data directory).\n> </para>\n\nGiven that this is a crucial part, which the exclusive mode has wrong,\nI'd be a bit more forceful about it, eg:\n\nThe contents of the backup label file and the tablespace map file must\nbe stored as part of the backup and must NOT be stored in the live data\ndirectory (doing so will cause PostgreSQL to fail to restart in the\nevent of a crash).\n\n> @@ -25771,8 +25749,7 @@ LOG: Grand total: 1651920 bytes in 201 blocks; 622360 free (88 chunks); 1029560\n> The result of the function is a single record.\n> The <parameter>lsn</parameter> column holds the backup's ending\n> write-ahead log location (which again can be ignored). The second and\n> - third columns are <literal>NULL</literal> when ending an exclusive\n> - backup; after a non-exclusive backup they hold the desired contents of\n> + third columns hold the desired contents of\n> the label and tablespace map files.\n> </para>\n> <para>\n\nI dislike saying 'desired' here as if it's something that we're nicely\nasking but maybe isn't a big deal, and just saying 'label' isn't good\nsince we call it 'backup label' elsewhere and we should try hard to be\nconsistent and clear. How about:\n\nThe second column returns the contents of the backup label file, the\nthird column returns the contents of the tablespace map file. 
These\nmust be stored as part of the backup and are required as part of the\nrestore process.\n\n> diff --git a/src/bin/pg_ctl/pg_ctl.c b/src/bin/pg_ctl/pg_ctl.c\n> index 3c182c97d4..ee3fa148b6 100644\n> --- a/src/bin/pg_ctl/pg_ctl.c\n> +++ b/src/bin/pg_ctl/pg_ctl.c\n> @@ -1069,7 +1069,7 @@ do_stop(void)\n> \t\t\tget_control_dbstate() != DB_IN_ARCHIVE_RECOVERY)\n> \t\t{\n> \t\t\tprint_msg(_(\"WARNING: online backup mode is active\\n\"\n> -\t\t\t\t\t\t\"Shutdown will not complete until pg_stop_backup() is called.\\n\\n\"));\n> +\t\t\t\t\t\t\"Shutdown will not complete until pg_backup_stop() is called.\\n\\n\"));\n> \t\t}\n> \n> \t\tprint_msg(_(\"waiting for server to shut down...\"));\n> @@ -1145,7 +1145,7 @@ do_restart(void)\n> \t\t\tget_control_dbstate() != DB_IN_ARCHIVE_RECOVERY)\n> \t\t{\n> \t\t\tprint_msg(_(\"WARNING: online backup mode is active\\n\"\n> -\t\t\t\t\t\t\"Shutdown will not complete until pg_stop_backup() is called.\\n\\n\"));\n> +\t\t\t\t\t\t\"Shutdown will not complete until pg_backup_stop() is called.\\n\\n\"));\n> \t\t}\n> \n> \t\tprint_msg(_(\"waiting for server to shut down...\"));\n\nThis... can't actually happen anymore, right? 
Shouldn't we just pull\nall of this out?\n\nOn a once-over of the rest of the code, I definitely like how much we're\nable to simplify things in this area and remove various hacks in things\nlike pg_basebackup and pg_rewind where we previously had to worry about\nbackup_label and tablespace_map files being in a live data directory.\nI'm planning to spend more time on this and hopefully be able to get it\nin for v15.\n\nThanks!\n\nStephen", "msg_date": "Mon, 28 Mar 2022 16:30:27 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Mon, Mar 28, 2022 at 04:30:27PM -0400, Stephen Frost wrote:\n>> - By default, <function>pg_start_backup</function> can take a long time to finish.\n>> + By default, <function>pg_backup_start</function> can take a long time to finish.\n>> This is because it performs a checkpoint, and the I/O\n>> required for the checkpoint will be spread out over a significant\n>> period of time, by default half your inter-checkpoint interval\n> \n> Hrmpf. Not this patch's fault, but the default isn't 'half your\n> inter-checkpoint interval' anymore (checkpoint_completion_target was\n> changed to 0.9 ... let's not discuss who did that and missed fixing\n> this). Overall though, maybe we should reword this to avoid having to\n> remember to change it again if we ever change the completion target\n> again? Also it seems to imply that pg_backup_start is going to run its\n> own independent checkpoint or something, which isn't the typical case.\n> I generally explain what is going on here like this:\n> \n> By default, <function>pg_backup_start</function> will wait for the next\n> checkpoint to complete (see ref: checkpoint_timeout ... 
maybe also\n> wal-configuration.html).\n\ndone\n\n>> - The <function>pg_stop_backup</function> will return one row with three\n>> + The <function>pg_backup_stop</function> will return one row with three\n>> values. The second of these fields should be written to a file named\n>> <filename>backup_label</filename> in the root directory of the backup. The\n>> third field should be written to a file named\n> \n> I get that we have <function> and </function>, but those are just\n> formatting and don't show up in the actual text, and so it ends up\n> being:\n> \n> The pg_backup_stop will return\n> \n> Which doesn't sound quite right to me. I'd say we either drop 'The' or\n> add 'function' after. I realize that this patch is mostly doing a\n> s/pg_stop_backup/pg_backup_stop/, but, still.\n\ndone\n\n>> - Finishes performing an exclusive or non-exclusive on-line backup.\n>> - The <parameter>exclusive</parameter> parameter must match the\n>> - previous <function>pg_start_backup</function> call.\n>> - In an exclusive backup, <function>pg_stop_backup</function> removes\n>> - the backup label file and, if it exists, the tablespace map file\n>> - created by <function>pg_start_backup</function>. In a non-exclusive\n>> - backup, the desired contents of these files are returned as part of\n>> + Finishes performing an on-line backup. 
The desired contents of the\n>> + backup label file and the tablespace map file are returned as part of\n>> the result of the function, and should be written to files in the\n>> backup area (not in the data directory).\n>> </para>\n> \n> Given that this is a crucial part, which the exclusive mode has wrong,\n> I'd be a bit more forceful about it, eg:\n> \n> The contents of the backup label file and the tablespace map file must\n> be stored as part of the backup and must NOT be stored in the live data\n> directory (doing so will cause PostgreSQL to fail to restart in the\n> event of a crash).\n\ndone\n\n>> The result of the function is a single record.\n>> The <parameter>lsn</parameter> column holds the backup's ending\n>> write-ahead log location (which again can be ignored). The second and\n>> - third columns are <literal>NULL</literal> when ending an exclusive\n>> - backup; after a non-exclusive backup they hold the desired contents of\n>> + third columns hold the desired contents of\n>> the label and tablespace map files.\n>> </para>\n>> <para>\n> \n> I dislike saying 'desired' here as if it's something that we're nicely\n> asking but maybe isn't a big deal, and just saying 'label' isn't good\n> since we call it 'backup label' elsewhere and we should try hard to be\n> consistent and clear. How about:\n> \n> The second column returns the contents of the backup label file, the\n> third column returns the contents of the tablespace map file. 
These\n> must be stored as part of the backup and are required as part of the\n> restore process.\n\ndone\n\n>> \t\t{\n>> \t\t\tprint_msg(_(\"WARNING: online backup mode is active\\n\"\n>> -\t\t\t\t\t\t\"Shutdown will not complete until pg_stop_backup() is called.\\n\\n\"));\n>> +\t\t\t\t\t\t\"Shutdown will not complete until pg_backup_stop() is called.\\n\\n\"));\n>> \t\t}\n>> \n>> \t\tprint_msg(_(\"waiting for server to shut down...\"));\n>> @@ -1145,7 +1145,7 @@ do_restart(void)\n>> \t\t\tget_control_dbstate() != DB_IN_ARCHIVE_RECOVERY)\n>> \t\t{\n>> \t\t\tprint_msg(_(\"WARNING: online backup mode is active\\n\"\n>> -\t\t\t\t\t\t\"Shutdown will not complete until pg_stop_backup() is called.\\n\\n\"));\n>> +\t\t\t\t\t\t\"Shutdown will not complete until pg_backup_stop() is called.\\n\\n\"));\n>> \t\t}\n>> \n>> \t\tprint_msg(_(\"waiting for server to shut down...\"));\n> \n> This... can't actually happen anymore, right? Shouldn't we just pull\n> all of this out?\n\ndone\n\n> On a once-over of the rest of the code, I definitely like how much we're\n> able to simplify things in this area and remove various hacks in things\n> like pg_basebackup and pg_rewind where we previously had to worry about\n> backup_label and tablespace_map files being in a live data directory.\n> I'm planning to spend more time on this and hopefully be able to get it\n> in for v15.\n\nGreat! 
Much appreciated.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 28 Mar 2022 19:09:48 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 3/28/22 10:09 PM, Nathan Bossart wrote:\n> On Mon, Mar 28, 2022 at 04:30:27PM -0400, Stephen Frost wrote:\n> \n>> On a once-over of the rest of the code, I definitely like how much we're\n>> able to simplify things in this area and remove various hacks in things\n>> like pg_basebackup and pg_rewind where we previously had to worry about\n>> backup_label and tablespace_map files being in a live data directory.\n>> I'm planning to spend more time on this and hopefully be able to get it\n>> in for v15.\n> \n> Great! Much appreciated.\n\nMinor typo in the docs:\n\n+\t * capable of doing an online backup, but exclude then just in case.\n\nShould be:\n\ncapable of doing an online backup, but exclude them just in case.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Mon, 4 Apr 2022 09:56:26 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Mon, Apr 04, 2022 at 09:56:26AM -0400, David Steele wrote:\n> Minor typo in the docs:\n> \n> +\t * capable of doing an online backup, but exclude then just in case.\n> \n> Should be:\n> \n> capable of doing an online backup, but exclude them just in case.\n\nfixed\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 4 Apr 2022 07:22:47 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "I noticed a couple of 
other things that can be removed. Since we no longer\nwait on exclusive backup mode during smart shutdown, we can change\nconnsAllowed (in postmaster.c) to a boolean and remove CAC_SUPERUSER. We\ncan also remove a couple of related notes in the documentation. I've done\nall this in the attached patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 4 Apr 2022 08:42:18 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 4/4/22 11:42 AM, Nathan Bossart wrote:\n> I noticed a couple of other things that can be removed. Since we no longer\n> wait on exclusive backup mode during smart shutdown, we can change\n> connsAllowed (in postmaster.c) to a boolean and remove CAC_SUPERUSER. We\n> can also remove a couple of related notes in the documentation. I've done\n> all this in the attached patch.\n\nThese changes look good to me. IMV it is a real bonus how much the state \nmachine has been simplified.\n\nI've also run this patch through the pgbackrest regression tests without \nany problems.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Mon, 4 Apr 2022 13:11:33 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "Greetings,\n\n* David Steele (david@pgmasters.net) wrote:\n> On 4/4/22 11:42 AM, Nathan Bossart wrote:\n> >I noticed a couple of other things that can be removed. Since we no longer\n> >wait on exclusive backup mode during smart shutdown, we can change\n> >connsAllowed (in postmaster.c) to a boolean and remove CAC_SUPERUSER. We\n> >can also remove a couple of related notes in the documentation. I've done\n> >all this in the attached patch.\n> \n> These changes look good to me. 
IMV it is a real bonus how much the state\n> machine has been simplified.\n\nYeah, agreed.\n\n> I've also run this patch through the pgbackrest regression tests without any\n> problems.\n\nFantastic.\n\nPlease find attached an updated patch + commit message. Mostly, I just\nwent through and did a bit more in terms of updating the documentation\nand improving the comments (there were some places that were still\nworrying about the chance of a 'stray' backup_label file existing, which\nisn't possible any longer), along with some additional testing and\nreview. This is looking pretty good to me, but other thoughts are\ncertainly welcome. Otherwise, I'm hoping to commit this tomorrow.\n\nThanks!\n\nStephen", "msg_date": "Tue, 5 Apr 2022 11:25:36 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On 4/5/22 11:25 AM, Stephen Frost wrote:\n> \n> Please find attached an updated patch + commit message. Mostly, I just\n> went through and did a bit more in terms of updating the documentation\n> and improving the comments (there were some places that were still\n> worrying about the chance of a 'stray' backup_label file existing, which\n> isn't possible any longer), along with some additional testing and\n> review. This is looking pretty good to me, but other thoughts are\n> certainly welcome. Otherwise, I'm hoping to commit this tomorrow.\n\nI have reviewed the changes and they look good. 
I also ran the new patch \nthrough pgbackrest regression with no issues.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 5 Apr 2022 12:06:08 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Tue, Apr 5, 2022 at 5:25 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * David Steele (david@pgmasters.net) wrote:\n> > On 4/4/22 11:42 AM, Nathan Bossart wrote:\n> > >I noticed a couple of other things that can be removed. Since we no\n> longer\n> > >wait on exclusive backup mode during smart shutdown, we can change\n> > >connsAllowed (in postmaster.c) to a boolean and remove CAC_SUPERUSER.\n> We\n> > >can also remove a couple of related notes in the documentation. I've\n> done\n> > >all this in the attached patch.\n> >\n> > These changes look good to me. IMV it is a real bonus how much the state\n> > machine has been simplified.\n>\n> Yeah, agreed.\n>\n\nDefinitely.\n\n\n\n> > I've also run this patch through the pgbackrest regression tests without\n> any\n> > problems.\n>\n> Fantastic.\n>\n> Please find attached an updated patch + commit message. Mostly, I just\n> went through and did a bit more in terms of updating the documentation\n> and improving the comments (there were some places that were still\n> worrying about the chance of a 'stray' backup_label file existing, which\n> isn't possible any longer), along with some additional testing and\n> review. This is looking pretty good to me, but other thoughts are\n> certainly welcome. Otherwise, I'm hoping to commit this tomorrow.\n>\n\n+1. 
LGTM.\n\nI'm not sure I love the renaming of the functions, but I have also yet to\ncome up with a better idea for how to avoid silent breakage, so go with it.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\n", "msg_date": "Tue, 5 Apr 2022 18:42:47 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Tue, Apr 05, 2022 at 11:25:36AM -0400, Stephen Frost wrote:\n> Please find attached an updated patch + commit message.  Mostly, I just\n> went through and did a bit more in terms of updating the documentation\n> and improving the comments (there were some places that were still\n> worrying about the chance of a 'stray' backup_label file existing, which\n> isn't possible any longer), along with some additional testing and\n> review.  This is looking pretty good to me, but other thoughts are\n> certainly welcome.  Otherwise, I'm hoping to commit this tomorrow.\n\nLGTM!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 5 Apr 2022 13:06:17 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Tue, 2022-04-05 at 13:06 -0700, Nathan Bossart wrote:\n> On Tue, Apr 05, 2022 at 11:25:36AM -0400, Stephen Frost wrote:\n> > Please find attached an updated patch + commit message.  Mostly, I just\n> > went through and did a bit more in terms of updating the documentation\n> > and improving the comments (there were some places that were still\n> > worrying about the chance of a 'stray' backup_label file existing, which\n> > isn't possible any longer), along with some additional testing and\n> > review.  
This is looking pretty good to me, but other thoughts are\n> > certainly welcome.  Otherwise, I'm hoping to commit this tomorrow.\n> \n> LGTM!\n\nCassandra (not the software) from the sidelines predicts that we will\nget some fire from users for this, although I concede the theoretical\nsanity of the change.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 06 Apr 2022 12:59:18 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "Greetings,\n\n* Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> On Tue, 2022-04-05 at 13:06 -0700, Nathan Bossart wrote:\n> > On Tue, Apr 05, 2022 at 11:25:36AM -0400, Stephen Frost wrote:\n> > > Please find attached an updated patch + commit message.  Mostly, I just\n> > > went through and did a bit more in terms of updating the documentation\n> > > and improving the comments (there were some places that were still\n> > > worrying about the chance of a 'stray' backup_label file existing, which\n> > > isn't possible any longer), along with some additional testing and\n> > > review.  This is looking pretty good to me, but other thoughts are\n> > > certainly welcome.  
Otherwise, I'm hoping to commit this tomorrow.\n> > \n> > LGTM!\n> \n> Cassandra (not the software) from the sidelines predicts that we will\n> get some fire from users for this, although I concede the theoretical\n> sanity of the change.\n\nGreat, thanks for that.\n\nThis has now been committed- thanks again to everyone for their help!\n\nStephen", "msg_date": "Wed, 6 Apr 2022 15:29:15 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Wed, Apr 06, 2022 at 03:29:15PM -0400, Stephen Frost wrote:\n> This has now been committed- thanks again to everyone for their help!\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 6 Apr 2022 12:40:39 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "Hi,\n\nI was looking at the commit for this patch and noticed this small typo\nin the comment in `basebackup.c`:\n\n```\ndiff --git a/src/backend/replication/basebackup.c\nb/src/backend/replication/basebackup.c\nindex 6884cad2c00af1632eacec07903098e7e1874393..815681ada7dd0c135af584ad5da2dd13c9a12465\n100644 (file)\n--- a/src/backend/replication/basebackup.c\n+++ b/src/backend/replication/basebackup.c\n@@ -184,10 +184,8 @@ static const struct exclude_list_item excludeFiles[] =\n {RELCACHE_INIT_FILENAME, true},\n\n /*\n- * If there's a backup_label or tablespace_map file, it belongs to a\n- * backup started by the user with pg_start_backup(). It is *not* correct\n- * for this backup. 
Our backup_label/tablespace_map is injected into the\n- * tar separately.\n+ * backup_label and tablespace_map should not exist in in a running cluster\n+ * capable of doing an online backup, but exclude them just in case.\n */\n {BACKUP_LABEL_FILE, false},\n {TABLESPACE_MAP, false},\n```\n\nThe typo is in `exist in in a running cluster`. There's two `in` in a row.\n\nP.D.: I was looking at this just because I was looking at an issue\nwhere someone bumped their head with this \"problem\", so great that\nwe're in a better place now. Hopefully one day everyone will be\nrunning PG15 or better :)\n\nKind regards, Martín\n\nEl mié, 6 abr 2022 a las 16:29, Stephen Frost (<sfrost@snowman.net>) escribió:\n>\n> Greetings,\n>\n> * Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> > On Tue, 2022-04-05 at 13:06 -0700, Nathan Bossart wrote:\n> > > On Tue, Apr 05, 2022 at 11:25:36AM -0400, Stephen Frost wrote:\n> > > > Please find attached an updated patch + commit message. Mostly, I just\n> > > > went through and did a bit more in terms of updating the documentation\n> > > > and improving the comments (there were some places that were still\n> > > > worrying about the chance of a 'stray' backup_label file existing, which\n> > > > isn't possible any longer), along with some additional testing and\n> > > > review. This is looking pretty good to me, but other thoughts are\n> > > > certainly welcome. 
Otherwise, I'm hoping to commit this tomorrow.\n> > >\n> > > LGTM!\n> >\n> > Cassandra (not the software) from the sidelines predicts that we will\n> > get some fire from users for this, although I concede the theoretical\n> > sanity of the change.\n>\n> Great, thanks for that.\n>\n> This has now been committed- thanks again to everyone for their help!\n>\n> Stephen\n\n\n\n-- \nMartín Marqués\nIt’s not that I have something to hide,\nit’s that I have nothing I want you to see\n\n\n", "msg_date": "Tue, 19 Apr 2022 22:12:32 -0300", "msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <martin.marques@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "On Tue, Apr 19, 2022 at 10:12:32PM -0300, Martín Marqués wrote:\n> The typo is in `exist in in a running cluster`. There's two `in` in a row.\n\nThanks, fixed.\n--\nMichael", "msg_date": "Wed, 20 Apr 2022 11:06:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" }, { "msg_contents": "Greetings,\n\n* Martín Marqués (martin.marques@gmail.com) wrote:\n> The typo is in `exist in in a running cluster`. There's two `in` in a row.\n\nOops, thanks for catching (and thanks to Michael for committing the\nfix!).\n\n> P.D.: I was looking at this just because I was looking at an issue\n> where someone bumped their head with this \"problem\", so great that\n> we're in a better place now. Hopefully one day everyone will be\n> running PG15 or better :)\n\nAgreed! 
Does make me wonder just how often folks run into this issue..\nGlad that we were able to get the change in for v15.\n\nThanks again!\n\nStephen", "msg_date": "Wed, 20 Apr 2022 13:21:18 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Postgres restart in the middle of exclusive backup and the\n presence of backup_label file" } ]
[ { "msg_contents": "Hi,\n\nI noticed that the pg_dump and pg_basebackup are not erroring out when\n\"--fo\"/\"--for\"/\"--form\"/\"--forma\"/\" are specified(use cases 1 - 4, 7 -\n9) whereas it fails if a pattern that doesn't match with \"format\" is\nspecified (use case 5, 10). This seems to be true only for \"--format\"\noption, for other options it just errors out (use case 6). Why is the\nbehaviour for \"--format\" isn't consistent?\n\nIs it something expected? Is the option parsing logic here buggy?\n\nThoughts?\n\nUse cases:\n1) ./pg_dump --dbname=postgres --host=localhost --port=5432\n--username=bharath --no-password --fo=plain --file=mydump.sql\n2) ./pg_dump --dbname=postgres --host=localhost --port=5432\n--username=bharath --no-password --for=plain --file=mydump.sql\n3) ./pg_dump --dbname=postgres --host=localhost --port=5432\n--username=bharath --no-password --form=plain --file=mydump.sql\n4) ./pg_dump --dbname=postgres --host=localhost --port=5432\n--username=bharath --no-password --forma=plain --file=mydump.sql\n5) ./pg_dump --dbname=postgres --host=localhost --port=5432\n--username=bharath --no-password --foo=plain --file=mydump.sql\n6) ./pg_dump --dbname=postgres --host=localhost --port=5432\n--username=bharath --no-password --format=plain --fi=mydump.sql\n7) ./pg_basebackup -D bkupdata --fo=plain\n8) ./pg_basebackup -D bkupdata --for=plain\n9) ./pg_basebackup -D bkupdata --forma=plain\n10) ./pg_basebackup -D bkupdata --foo=plain\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 25 Nov 2021 10:44:03 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "pg_dump,\n pg_basebackup don't error out with wrong option for \"--format\"" }, { "msg_contents": "On Thu, Nov 25, 2021 at 10:44 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> I noticed that the pg_dump and pg_basebackup are not erroring out when\n> 
\"--fo\"/\"--for\"/\"--form\"/\"--forma\"/\" are specified(use cases 1 - 4, 7 -\n> 9) whereas it fails if a pattern that doesn't match with \"format\" is\n> specified (use case 5, 10). This seems to be true only for \"--format\"\n> option, for other options it just errors out (use case 6). Why is the\n> behaviour for \"--format\" isn't consistent?\n>\n> Is it something expected? Is the option parsing logic here buggy?\n\nI think for parsing we use getopt_long(), as per that if you use the\nprefix of the string and that is not conflicting with any other option\nthen that is allowed. So --fo, --for all are accepted, --f will not\nbe accepted because --file and --format will conflict, --foo is also\nnot allowed because it is not a valid prefix string of any valid\noption string.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Nov 2021 11:02:04 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump,\n pg_basebackup don't error out with wrong option for \"--format\"" }, { "msg_contents": "On Thu, Nov 25, 2021 at 11:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Nov 25, 2021 at 10:44 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I noticed that the pg_dump and pg_basebackup are not erroring out when\n> > \"--fo\"/\"--for\"/\"--form\"/\"--forma\"/\" are specified(use cases 1 - 4, 7 -\n> > 9) whereas it fails if a pattern that doesn't match with \"format\" is\n> > specified (use case 5, 10). This seems to be true only for \"--format\"\n> > option, for other options it just errors out (use case 6). Why is the\n> > behaviour for \"--format\" isn't consistent?\n> >\n> > Is it something expected? Is the option parsing logic here buggy?\n>\n> I think for parsing we use getopt_long(), as per that if you use the\n> prefix of the string and that is not conflicting with any other option\n> then that is allowed. 
So --fo, --for all are accepted, --f will not\n> be accepted because --file and --format will conflict, --foo is also\n> not allowed because it is not a valid prefix string of any valid\n> option string.\n\nYeah, that makes sense. Thanks.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 25 Nov 2021 11:22:11 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump,\n pg_basebackup don't error out with wrong option for \"--format\"" }, { "msg_contents": "On Thu, Nov 25, 2021 at 11:22:11AM +0530, Bharath Rupireddy wrote:\n> On Thu, Nov 25, 2021 at 11:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> I think for parsing we use getopt_long(), as per that if you use the\n>> prefix of the string and that is not conflicting with any other option\n>> then that is allowed. So --fo, --for all are accepted, --f will not\n>> be accepted because --file and --format will conflict, --foo is also\n>> not allowed because it is not a valid prefix string of any valid\n>> option string.\n> \n> Yeah, that makes sense. Thanks.\n\nIt is worth noting that getopt_long() is a GNU extension and not\ndirectly something defined in POSIX, but it is so popular that it\nexpanded everywhere. This option name handling is quite common across\neverything that uses getopt_long(), actually, and erroring on\nnon-exact option names would break a bunch of existing use cases as it\nis possible to save some characters if getopt_long() is sure of the \nuniqueness of the option found.\n--\nMichael", "msg_date": "Fri, 26 Nov 2021 15:55:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_dump, pg_basebackup don't error out with wrong option for\n \"--format\"" } ]
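The prefix-matching rule Dilip and Michael describe can be sketched without C: Python's standard getopt module implements the same unambiguous-abbreviation behavior for long options that GNU getopt_long() uses. The option names below mirror pg_dump's --file/--format pair purely for illustration; this is a sketch of the rule, not PostgreSQL's actual parsing code.

```python
import getopt

# Two long options sharing the prefix "f", mirroring pg_dump's
# --file/--format: "--f" alone is ambiguous, while "--fo", "--for",
# "--form" and "--forma" each match exactly one option.
LONGOPTS = ["file=", "format="]

def parse(args):
    """Return the parsed (option, value) pairs, or the error message text."""
    try:
        opts, _rest = getopt.getopt(args, "", LONGOPTS)
        return opts
    except getopt.GetoptError as err:
        return str(err)

print(parse(["--form=plain"]))   # unambiguous prefix resolves to --format
print(parse(["--f=plain"]))      # ambiguous between --file and --format
print(parse(["--foo=plain"]))    # not a prefix of any accepted option
```

As with getopt_long(), an exact option name wins first, and only then is a unique prefix accepted; an ambiguous or unknown prefix is rejected.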
[ { "msg_contents": "A publication for all tables was running fine, Master is a PostgreSQL\n11.11. Replica was running version 13 (don´t remember minor version).\n\nThen we tried to update only subscriber server, nothing was done on master\nside.\n\nThen we did ...\n- installed postgresql-14.\n- configured postgresql.conf to be similar to previous.\n- on version 13 disabled subscription - alter subscription disable.\n- changed both port to run pg_upgrade.\n- stop services for both 13 e 14.\n- */usr/lib/postgresql/14/bin/pg_upgrade -b /usr/lib/postgresql/13/bin -B\n/usr/lib/postgresql/14/bin -d /etc/postgresql/13/main/ -D\n/etc/postgresql/14/main/ -j 2 --link -p 9999 -P 9998 -U postgres -v*\n- when finished upgrade process, we removed version 13 and ran *vacuumdb -p\n9998 -U postgres --all --analyze-in-stages*\n- last step was to enable that subscription.\n- just wait for the subscriber to get the data changed, pg_upgrade ran for\n15 minutes, this should be synced in a few seconds ...\n- few seconds later we remembered that some other tables were created on\npublication server, so we did a refresh publication.\n\nThen, some minutes later we got lots of log entries \"duplicate key value\nviolates unique constraint pk...\" because it was trying to COPY that table\nfrom master.\n\nWe disable subscription again until we solve, as remains.\n\nSelecting from pg_subscription_rel all old tables are with srsubstate i for\ninitialize, not s for synchronized or r for ready, as they should. And\nall srsublsn of these records were null, so it lost synchronization\ncoordination for all tables which existed before this upgrade process.\n\nSo, my first question is, as our publication server continues running, lots\nof updates were processed, so how can I synchronize both sides without\nrecreating that publication ?\nAnd my second question is, is this problem documented ? 
Is this problem\nexpected to happen ?\n\nregards,\nMarcos\n\n
", "msg_date": "Thu, 25 Nov 2021 08:43:11 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "pg_upgrade and publication/subscription problem" }, { "msg_contents": "On Thu, Nov 25, 2021 at 5:13 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n>\n> A publication for all tables was running fine, Master is a PostgreSQL 11.11. Replica was running version 13 (don´t remember minor version).\n>\n> Then we tried to update only subscriber server, nothing was done on master side.\n>\n> Then we did ...\n> - installed postgresql-14.\n> - configured postgresql.conf to be similar to previous.\n> - on version 13 disabled subscription - alter subscription disable.\n> - changed both port to run pg_upgrade.\n> - stop services for both 13 e 14.\n> - /usr/lib/postgresql/14/bin/pg_upgrade -b /usr/lib/postgresql/13/bin -B /usr/lib/postgresql/14/bin -d /etc/postgresql/13/main/ -D /etc/postgresql/14/main/ -j 2 --link -p 9999 -P 9998 -U postgres -v\n> - when finished upgrade process, we removed version 13 and ran vacuumdb -p 9998 -U postgres --all --analyze-in-stages\n> - last step was to enable that subscription.\n> - just wait for the subscriber to get the data changed, pg_upgrade ran for 15 minutes, this should be synced in a few seconds ...\n> - few seconds later we remembered that some other tables were created on publication server, so we did a refresh publication.\n>\n> Then, some minutes later we got lots of log entries \"duplicate key value violates unique constraint pk...\" because it was trying to COPY that table from master.\n>\n> We disable subscription again until we solve, as remains.\n>\n> Selecting from pg_subscription_rel all old tables are with srsubstate i for initialize, not s for synchronized or r for ready, as they should. 
And all srsublsn of these records were null, so it lost synchronization coordination for all tables which existed before this upgrade process.\n>\n\nThe reason is after an upgrade, there won't be any data in\npg_subscription_rel, and only when you tried to refresh it is trying\nto sync again which leads to the \"duplicate key value ...\" problem you\nare seeing.\n\n> So, my first question is, as our publication server continues running, lots of updates were processed, so how can I synchronize both sides without recreating that publication ?\n>\n\nDon't you want to eventually upgrade the publisher node as well? You\ncan refer to blog [1] for the detailed steps.\n\n> And my second question is, is this problem documented ? Is this problem expected to happen ?\n>\n\nYes, the way you are doing I think it is bound to happen. There is\nsome discussion about why this is happening in email [2]. AFAIK, it is\nnot documented and if so, I think it will be a good idea to document\nit.\n\n[1] - https://elephanttamer.net/?p=58\n[2] - https://www.postgresql.org/message-id/CALDaNm2-SRGHK0rqJQu7rGiS4hDAb7Nib5HbojEN5ubaXGs2CA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 25 Nov 2021 19:25:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade and publication/subscription problem" }, { "msg_contents": ">\n> The reason is after an upgrade, there won't be any data in\n> pg_subscription_rel, and only when you tried to refresh it is trying\n> to sync again which leads to the \"duplicate key value ...\" problem you\n> are seeing.\n>\n> So, is pg_upgrade populating pg_subscription and not pg_subscription_rel ?\nIt is doing 50% of his job ?\n\n>\n> Don't you want to eventually upgrade the publisher node as well? 
You\n> can refer to blog [1] for the detailed steps.\n>\n> It is possible but I don´t think changing publisher will solve anything,\nwill ?\n\n>\n> Yes, the way you are doing I think it is bound to happen. There is\n> some discussion about why this is happening in email [2]. AFAIK, it is\n> not documented and if so, I think it will be a good idea to document\n>\n> And my problem remains the same, how to solve it ? All records on\npg_subscription_rel are initialize with srsubstate null. How can I replay\nonly updates since yesterday. This replication is a auditing database, so I\ncannot loose all things happened since that pg_upgrade. [1] points me how\nto upgrade but if I did the wrong way, how to solve that ?\n
", "msg_date": "Thu, 25 Nov 2021 11:30:45 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade and publication/subscription problem" }, { "msg_contents": "On Thu, Nov 25, 2021 at 8:00 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n>>\n>> Yes, the way you are doing I think it is bound to happen. There is\n>> some discussion about why this is happening in email [2]. AFAIK, it is\n>> not documented and if so, I think it will be a good idea to document\n>>\n> And my problem remains the same, how to solve it ? All records on pg_subscription_rel are initialize with srsubstate null. How can I replay only updates since yesterday. This replication is a auditing database, so I cannot loose all things happened since that pg_upgrade. [1] points me how to upgrade but if I did the wrong way, how to solve that ?\n>\n\nAFAIU the main problem in your case is that you didn't block the write\ntraffic on the publisher side. Let me try to understand the situation.\nAfter the upgrade is finished, there are some new tables with data on\nthe publisher, and did old tables have any additional data?\n\nAre the contents in pg_replication_origin intact after the upgrade?\n\nSo, in short, I think what we need to solve is to get the data from\nnew tables and newly performed writes on old tables. I could think of\nthe following two approaches:\n\nApproach-1:\n1. Drop subscription and Truncate all tables corresponding to subscription.\n2. Create a new subscription for the publication.\n\nI think this will be quite neat and there would be no risk of data\nloss but it could be time-consuming since all the data from previous\ntables needs to be synced again.\n\nApproach-2:\nHere, I am assuming pg_replication_origin is intact.\n1. Block new writes on the publisher-side.\n2. Disable the existing subscription (say the name of the subscription\nis old_sub).\n3. 
Drop the existing all tables publication.\n4. Create two new publications, one for old tables (old_pub), and one\nfor new tables (new_pub).\n5. Create a new subscription corresponding to new_pub.\n6. Remove the existing publication from old_sub and add the old_pub.\n7. Enable the subscription.\n8. Now, perform a refresh on old_sub.\n\nThe benefit of Approach-1 is that you don't need to change anything on\nthe publisher-side and it has very few steps. OTOH, in Approach-2, we\ncan save the effort/time to re-sync the initial data for old tables\nbut as there are a lot of things to be taken care there is always a\nchance of mistake and if that happens you might lose some data.\n\nIn any case, before following any of these, I suggest creating a dummy\nsetup that mimics your original setup, perform the above steps and\nensure everything is fine, then only try the same steps in your main\nsetup.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 26 Nov 2021 08:26:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade and publication/subscription problem" }, { "msg_contents": ">\n> AFAIU the main problem in your case is that you didn't block the write\n> traffic on the publisher side. Let me try to understand the situation.\n> After the upgrade is finished, there are some new tables with data on\n> the publisher, and did old tables have any additional data?\n>\nCorrect.\n\n>\n> Are the contents in pg_replication_origin intact after the upgrade?\n>\nYes\n\n>\n> So, in short, I think what we need to solve is to get the data from\n> new tables and newly performed writes on old tables. I could think of\n> the following two approaches:\n>\n> Approach-1:\n> 1. Drop subscription and Truncate all tables corresponding to subscription.\n\n2. 
Create a new subscription for the publication.\n>\nIf I drop subscription it will drop WAL ou publication side and I lost all\nchanged data between the starting of pg_upgrade process and now.\nMy problem is not related with new tables, they will be copied fine because\ndoesn´t exists any record on subscriber.\nBut old tables had records modified since that pg_upgrade process, that is\nmy problem, only that.\n\nMy question remains the same, why pg_subscription_rel was not copied from\nprevious version ?\n\nIf pg_upgrade would copy pg_replication_origin (it did) and these\npg_subscription_rel (it didn´t) records from version 13 to 14, when I\nenable subscription it would start copying data from that point on, correct\n?\n
Create a new subscription for the publication.If I drop subscription it will drop WAL ou publication side and I lost all changed data between the starting of pg_upgrade process and now.My problem is not related with new tables, they will be copied fine because doesn´t exists any record on subscriber.But old tables had records modified since that pg_upgrade process, that is my problem, only that.My question remains the same, why pg_subscription_rel was not copied from previous version ?If pg_upgrade would copy pg_replication_origin (it did) and these pg_subscription_rel (it didn´t) records from version 13 to 14, when I enable subscription it would start copying data from that point on, correct ?", "msg_date": "Fri, 26 Nov 2021 09:17:55 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade and publication/subscription problem" }, { "msg_contents": "On Fri, Nov 26, 2021 at 5:47 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n>>\n>> AFAIU the main problem in your case is that you didn't block the write\n>> traffic on the publisher side. Let me try to understand the situation.\n>> After the upgrade is finished, there are some new tables with data on\n>> the publisher, and did old tables have any additional data?\n>\n> Correct.\n>>\n>>\n>> Are the contents in pg_replication_origin intact after the upgrade?\n>\n> Yes\n>>\n>>\n>> So, in short, I think what we need to solve is to get the data from\n>> new tables and newly performed writes on old tables. I could think of\n>> the following two approaches:\n>>\n>> Approach-1:\n>> 1. Drop subscription and Truncate all tables corresponding to subscription.\n>>\n>> 2. 
Create a new subscription for the publication.\n>\n> If I drop subscription it will drop WAL ou publication side and I lost all changed data between the starting of pg_upgrade process and now.\n>\n\nI think you can disable the subscription as well or before dropping\ndisassociate the slot from subscription.\n\n> My problem is not related with new tables, they will be copied fine because doesn´t exists any record on subscriber.\n> But old tables had records modified since that pg_upgrade process, that is my problem, only that.\n>\n\nYeah, I understand that point. Here, the problem is that both old and\nnew tables belong to the same publication, and you can't refresh some\ntables from the publication.\n\n> My question remains the same, why pg_subscription_rel was not copied from previous version ?\n>\n> If pg_upgrade would copy pg_replication_origin (it did) and these pg_subscription_rel (it didn´t) records from version 13 to 14, when I enable subscription it would start copying data from that point on, correct ?\n>\n\nI think we don't want to make assumptions about the remote end being\nthe same after the upgrade and we let users reactivate the\nsubscriptions in a suitable way. 
See [1] (Start reading from \"..When\ndumping logical replication subscriptions..\") In your case, if you\nwouldn't have allowed new tables in the publication then a simple\nAlter Subscription <sub_name> Refresh Publication with (copy_data =\nfalse) would have served the purpose.\n\nBTW, just for records, this problem has nothing to do with any changes\nin PG-14, the same behavior exists in the previous versions as well.\n\n[1] - https://www.postgresql.org/docs/devel/app-pgdump.html\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 27 Nov 2021 17:21:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade and publication/subscription problem" }, { "msg_contents": ">\n> I think we don't want to make assumptions about the remote end being\n> the same after the upgrade and we let users reactivate the\n> subscriptions in a suitable way. See [1] (Start reading from \"..When\n> dumping logical replication subscriptions..\") In your case, if you\n> wouldn't have allowed new tables in the publication then a simple\n> Alter Subscription <sub_name> Refresh Publication with (copy_data =\n> false) would have served the purpose.\n\n\nI understand that this is not related with version 14, pg_upgrade would do\nthe same steps on previous versions too.\nAdditionally it would be interesting to document that pg_upgrade does not\nupgrade completely if the server is a subscriber of logical replication, so\nit´ll have pre and post steps to do if the server has this kind of\nreplication.\n\nI think we don't want to make assumptions about the remote end being\nthe same after the upgrade and we let users reactivate the\nsubscriptions in a suitable way. 
See [1] (Start reading from \"..When\ndumping logical replication subscriptions..\") In your case, if you\nwouldn't have allowed new tables in the publication then a simple\nAlter Subscription <sub_name> Refresh Publication with (copy_data =\nfalse) would have served the purpose. I understand that this is not related with version 14, pg_upgrade would do the same steps on previous versions too.Additionally it would be interesting to document that pg_upgrade does not upgrade completely if the server is a subscriber of logical replication, so it´ll have pre and post steps to do if the server has this kind of replication.", "msg_date": "Sat, 27 Nov 2021 09:44:36 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade and publication/subscription problem" }, { "msg_contents": "On Fri, Nov 26, 2021 at 5:47 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n>>\n>> So, in short, I think what we need to solve is to get the data from\n>> new tables and newly performed writes on old tables. I could think of\n>> the following two approaches:\n>>\n>> Approach-1:\n>> 1. Drop subscription and Truncate all tables corresponding to subscription.\n>>\n>> 2. Create a new subscription for the publication.\n>\n> If I drop subscription it will drop WAL ou publication side and I lost all changed data between the starting of pg_upgrade process and now.\n>\n\nOn thinking about this point again, it is not clear to me why that\nwould matter for this particular use case? Basically, when you create\na new subscription, it should copy the entire existing data from the\ntable directly and then will decode changes from WAL. 
So, I think in\nyour case all the changes between pg_upgrade and now should be\ndirectly copied from tables, so probably older WAL won't be required.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 29 Nov 2021 09:08:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade and publication/subscription problem" }, { "msg_contents": ">\n> On thinking about this point again, it is not clear to me why that\n> would matter for this particular use case? Basically, when you create\n> a new subscription, it should copy the entire existing data from the\n> table directly and then will decode changes from WAL. So, I think in\n> your case all the changes between pg_upgrade and now should be\n> directly copied from tables, so probably older WAL won't be required.\n>\n\nMaybe you did not understand\nProduction server cannot stop while I upgrade my subscriber server, so it\nwill be creating WAL continuously.\n\nSubscriber server has trigger functions for auditing on all tables,\nsomething like ...\ninsert into auditable(schemaname, tablename, primarykey, operation,\nolddata, newdata) values(tg_table_schema, tg_table_name, getpk(new), tg_op,\nrow_to_json(old), row_to_json(new))\n\nThen, all changes between pg_upgrade and now will not be inserted into\nauditable.\n\nOn thinking about this point again, it is not clear to me why that\nwould matter for this particular use case? Basically, when you create\na new subscription, it should copy the entire existing data from the\ntable directly and then will decode changes from WAL. 
So, I think in\nyour case all the changes between pg_upgrade and now should be\ndirectly copied from tables, so probably older WAL won't be required.Maybe you did not understandProduction server cannot stop while I  upgrade my subscriber server, so it will be creating WAL continuously.Subscriber server has trigger functions for auditing on all tables, something like ...insert into auditable(schemaname, tablename, primarykey, operation, olddata, newdata) values(tg_table_schema, tg_table_name, getpk(new), tg_op, row_to_json(old), row_to_json(new))Then, all changes between pg_upgrade and now will not be inserted into auditable.", "msg_date": "Mon, 29 Nov 2021 08:34:48 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade and publication/subscription problem" }, { "msg_contents": "On Mon, Nov 29, 2021 at 5:04 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n>>\n>> On thinking about this point again, it is not clear to me why that\n>> would matter for this particular use case? Basically, when you create\n>> a new subscription, it should copy the entire existing data from the\n>> table directly and then will decode changes from WAL. So, I think in\n>> your case all the changes between pg_upgrade and now should be\n>> directly copied from tables, so probably older WAL won't be required.\n>\n>\n> Maybe you did not understand\n>\n\nYeah, because some information like trigger functions was not there in\nyour previous emails.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 29 Nov 2021 17:50:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade and publication/subscription problem" }, { "msg_contents": "Sorry, I didn´t explain exactly what I was doing, I just wrote \"This\nreplication is a auditing database\" on my second email.\n\nAtenciosamente,\n\n\n\n\nEm seg., 29 de nov. 
de 2021 às 09:20, Amit Kapila <amit.kapila16@gmail.com>\nescreveu:\n\n> On Mon, Nov 29, 2021 at 5:04 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n> >>\n> >> On thinking about this point again, it is not clear to me why that\n> >> would matter for this particular use case? Basically, when you create\n> >> a new subscription, it should copy the entire existing data from the\n> >> table directly and then will decode changes from WAL. So, I think in\n> >> your case all the changes between pg_upgrade and now should be\n> >> directly copied from tables, so probably older WAL won't be required.\n> >\n> >\n> > Maybe you did not understand\n> >\n>\n> Yeah, because some information like trigger functions was not there in\n> your previous emails.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\nSorry, I didn't explain exactly what I was doing, I just wrote \"This replication is a auditing database\" on my second email.Atenciosamente, Em seg., 29 de nov. de 2021 às 09:20, Amit Kapila <amit.kapila16@gmail.com> escreveu:On Mon, Nov 29, 2021 at 5:04 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n>>\n>> On thinking about this point again, it is not clear to me why that\n>> would matter for this particular use case? Basically, when you create\n>> a new subscription, it should copy the entire existing data from the\n>> table directly and then will decode changes from WAL. So, I think in\n>> your case all the changes between pg_upgrade and now should be\n>> directly copied from tables, so probably older WAL won't be required.\n>\n>\n> Maybe you did not understand\n>\n\nYeah, because some information like trigger functions was not there in\nyour previous emails.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 29 Nov 2021 09:49:21 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade and publication/subscription problem" } ]
[ { "msg_contents": "Hi hackers,\n\nI noticed that there are some tab completions missing for the following \ncommands:\n-ALTER DEFAULT PRIVILEGES: missing FOR USER\n-ALTER FOREIGN DATA WRAPPER: missing NO HANDLER, NO VALIDATOR\n-ALTER SEQUENCE: missing AS\n-ALTER VIEW: no completion after ALTER COLUMN column_name\n-ALTER TRANSFORM: no doc for ALTER TRANSFORM, so I excluded TRANSFORM \nfrom ALTER tab completion\n\nI made a patch for this, so please have a look.\n\nBest wishes,\n\n-- \nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 26 Nov 2021 13:55:46 +0900", "msg_from": "Ken Kato <katouknl@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "[PATCH] ALTER tab completion" }, { "msg_contents": "On Fri, Nov 26, 2021 at 01:55:46PM +0900, Ken Kato wrote:\n> I noticed that there are some tab completions missing for the following\n> commands:\n> -ALTER DEFAULT PRIVILEGES: missing FOR USER\n\nFOR ROLE is an equivalent. 
That does not seem mandatory to me.\n\n> -ALTER FOREIGN DATA WRAPPER: missing NO HANDLER, NO VALIDATOR\n\nOkay for this one.\n\n> -ALTER VIEW: no completion after ALTER COLUMN column_name\n\n+ /* ALTER VIEW xxx ALTER yyy */\n+ else if (Matches(\"ALTER\", \"VIEW\", MatchAny, \"ALTER\", MatchAny))\n+ COMPLETE_WITH(\"SET DEFAULT\", \"DROP DEFAULT\");\nIt may be cleaner to group this one with \"ALTER VIEW xxx ALTER yyy\"\ntwo blocks above.\n\n> -ALTER TRANSFORM: no doc for ALTER TRANSFORM, so I excluded TRANSFORM from\n> ALTER tab completion\n\nRight.\n\n> -ALTER SEQUENCE: missing AS\n+ /* ALTER SEQUENCE <name> AS */\n+ else if (TailMatches(\"ALTER\", \"SEQUENCE\", MatchAny, \"AS\"))\n+ COMPLETE_WITH(\"smallint\", \"integer\", \"bigint\");\nRe-quoting Horiguchi-san, that should be COMPLETE_WITH_CS() to keep\nthese completions in lower case.\n--\nMichael", "msg_date": "Fri, 26 Nov 2021 15:33:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] ALTER tab completion" }, { "msg_contents": "Hi,\n\nThank you for the comments!\n\nI made following updates:\n\n>> -ALTER DEFAULT PRIVILEGES: missing FOR USER\n> \n> FOR ROLE is an equivalent. 
That does not seem mandatory to me.\nI deleted the completion for \"FOR USER\".\n\n>> -ALTER VIEW: no completion after ALTER COLUMN column_name\n> + /* ALTER VIEW xxx ALTER yyy */\n> + else if (Matches(\"ALTER\", \"VIEW\", MatchAny, \"ALTER\", MatchAny))\n> + COMPLETE_WITH(\"SET DEFAULT\", \"DROP DEFAULT\");\n> It may be cleaner to group this one with \"ALTER VIEW xxx ALTER yyy\"\n> two blocks above.\nI put them back to back so that it looks cleaner.\n\n>> -ALTER SEQUENCE: missing AS\n> + /* ALTER SEQUENCE <name> AS */\n> + else if (TailMatches(\"ALTER\", \"SEQUENCE\", MatchAny, \"AS\"))\n> + COMPLETE_WITH(\"smallint\", \"integer\", \"bigint\");\n> Re-quoting Horiguchi-san, that should be COMPLETE_WITH_CS() to keep\n> these completions in lower case.\nThat's what it's for.\nI used COMPLETE_WITH_CS instead of COMPLETE_WITH.\n\nBest wishes,\n\n-- \nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 26 Nov 2021 17:00:44 +0900", "msg_from": "Ken Kato <katouknl@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] ALTER tab completion" }, { "msg_contents": "On Fri, Nov 26, 2021 at 05:00:44PM +0900, Ken Kato wrote:\n> I made following updates:\n\nI have made one small modification for ALTER VIEW, and applied what\nyou have. Thanks, Kato-san.\n--\nMichael", "msg_date": "Mon, 29 Nov 2021 11:58:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] ALTER tab completion" } ]
[ { "msg_contents": "Here is how it can be reproduced.\n\ncreate table point_tbl (f1 point);\n\ninsert into point_tbl(f1) values ('(5.1, 34.5)');\ninsert into point_tbl(f1) values (' ( Nan , NaN ) ');\nanalyze;\n\ncreate index gpointind on point_tbl using gist (f1);\n\nset enable_seqscan to on;\nset enable_indexscan to off;\n# select * from point_tbl where f1 <@ polygon\n'(0,0),(0,100),(100,100),(100,0),(0,0)';\n f1\n------------\n (5.1,34.5)\n (NaN,NaN)\n(2 rows)\n\n\nset enable_seqscan to off;\nset enable_indexscan to on;\n# select * from point_tbl where f1 <@ polygon\n'(0,0),(0,100),(100,100),(100,0),(0,0)';\n f1\n------------\n (5.1,34.5)\n(1 row)\n\nSeems point_inside() does not handle NaN properly.\n\nBTW, I'm using 15devel. But this issue can be seen in at least 12\nversion also.\n\nThanks\nRichard\n\nHere is how it can be reproduced.create table point_tbl (f1 point);insert into point_tbl(f1) values ('(5.1, 34.5)');insert into point_tbl(f1) values (' ( Nan , NaN ) ');analyze;create index gpointind on point_tbl using gist (f1);set enable_seqscan to on;set enable_indexscan to off;# select * from point_tbl where f1 <@ polygon '(0,0),(0,100),(100,100),(100,0),(0,0)';     f1------------ (5.1,34.5) (NaN,NaN)(2 rows)set enable_seqscan to off;set enable_indexscan to on;# select * from point_tbl where f1 <@ polygon '(0,0),(0,100),(100,100),(100,0),(0,0)';     f1------------ (5.1,34.5)(1 row)Seems point_inside() does not handle NaN properly.BTW, I'm using 15devel. But this issue can be seen in at least 12version also.ThanksRichard", "msg_date": "Fri, 26 Nov 2021 14:09:50 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Inconsistent results from seqscan and gist indexscan" }, { "msg_contents": "Hi,\n\nOn Fri, Nov 26, 2021 at 2:10 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n> Seems point_inside() does not handle NaN properly.\n\nThis is unfortunately a known issue, which was reported twice ([1] and\n[2]) already. 
There's a patch proposed for it:\nhttps://commitfest.postgresql.org/32/2710/ (adding Horiguchi-san in\nCc).\n\n[1]: https://www.postgresql.org/message-id/flat/CAGf+fX70rWFOk5cd00uMfa__0yP+vtQg5ck7c2Onb-Yczp0URA@mail.gmail.com\n[2]: https://www.postgresql.org/message-id/20210330095751.x5hnqbqcxilzwjlm%40nol\n\n\n", "msg_date": "Fri, 26 Nov 2021 17:23:12 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent results from seqscan and gist indexscan" }, { "msg_contents": "On Fri, Nov 26, 2021 at 5:23 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n>\n> On Fri, Nov 26, 2021 at 2:10 PM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n> >\n> > Seems point_inside() does not handle NaN properly.\n>\n> This is unfortunately a known issue, which was reported twice ([1] and\n> [2]) already. There's a patch proposed for it:\n> https://commitfest.postgresql.org/32/2710/ (adding Horiguchi-san in\n> Cc).\n>\n\nAh, I missed the previous threads. Good to know there is a patch fixing\nit.\n\nThanks\nRichard\n\nOn Fri, Nov 26, 2021 at 5:23 PM Julien Rouhaud <rjuju123@gmail.com> wrote:Hi,\n\nOn Fri, Nov 26, 2021 at 2:10 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n> Seems point_inside() does not handle NaN properly.\n\nThis is unfortunately a known issue, which was reported twice ([1] and\n[2]) already.  There's a patch proposed for it:\nhttps://commitfest.postgresql.org/32/2710/ (adding Horiguchi-san in\nCc).Ah, I missed the previous threads. 
Good to know there is a patch fixingit.ThanksRichard", "msg_date": "Sat, 27 Nov 2021 10:19:32 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inconsistent results from seqscan and gist indexscan" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Fri, Nov 26, 2021 at 5:23 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> On Fri, Nov 26, 2021 at 2:10 PM Richard Guo <guofenglinux@gmail.com>\n>> wrote:\n>>> Seems point_inside() does not handle NaN properly.\n\n>> This is unfortunately a known issue, which was reported twice ([1] and\n>> [2]) already. There's a patch proposed for it:\n>> https://commitfest.postgresql.org/32/2710/ (adding Horiguchi-san in\n>> Cc).\n\n> Ah, I missed the previous threads. Good to know there is a patch fixing\n> it.\n\nNote that that patch seems pretty well stalled; if you'd like to\nsee it move forward, please pitch in and help review.\n\n(Maybe we should scale back the patch's ambitions, and just try\nto get the seqscan/indexscan inconsistency fixed.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Nov 2021 21:51:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inconsistent results from seqscan and gist indexscan" } ]
[ { "msg_contents": "Dear Sir/Ma'am,\n I am Parth Shah, currently a second-year computer\nengineering student at Mumbai University. I only recently started learning\nPostgres and I was quite fascinated by it. I recently read about\nKonstantina Skovola's project about creating an extension for the Julia\nprogramming language. I would really appreciate if I could get a chance to\nshowcase my skills and contribute back to society.\n I will be waiting eagerly for your reply.\n\n\n Yours sincerely,\n\n Parth Shah\n\nDear Sir/Ma'am,                I am Parth Shah, currently a second-year computer engineering student at Mumbai University. I only recently started learning Postgres and I was quite fascinated by it. I recently read about Konstantina Skovola's project about creating an extension for the Julia programming language. I would really appreciate if I could get a chance to showcase my skills and contribute back to society.               I will be waiting eagerly for your reply.                                                                                              Yours sincerely,                                                                                               Parth Shah", "msg_date": "Fri, 26 Nov 2021 23:13:38 +0530", "msg_from": "Parth Shah <oasisshah2512@gmail.com>", "msg_from_op": true, "msg_subject": "Contributing" }, { "msg_contents": "On Fri, 2021-11-26 at 23:13 +0530, Parth Shah wrote:\n> I am Parth Shah, currently a second-year computer engineering student at Mumbai University.\n> I only recently started learning Postgres and I was quite fascinated by it. I recently read\n> about Konstantina Skovola's project about creating an extension for the Julia programming language.\n> I would really appreciate if I could get a chance to showcase my skills and contribute back to society.\n> I will be waiting eagerly for your reply.\n\nYou should contact the authors. 
That project is related to PostgreSQL, but not part of it.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Sun, 28 Nov 2021 21:32:11 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Contributing" } ]
[ { "msg_contents": "I think that it's worth unifying VACUUM VERBOSE and\nlog_autovacuum_min_duration output, to remove the redundancy, and to\nprovide more useful VACUUM VERBOSE output.\n\nBoth variants already output approximately the same things. But, each\nvariant reports on certain details that the other variant lacks. I\nfind the extra information provided by log_autovacuum_min_duration far\nmore useful than the extra details provided by VACUUM VERBOSE. This is\nprobably because we've focussed on improving the former over the\nlatter, probably because autovacuum is much more interesting than\nVACUUM on average, in practice, to users.\n\nUnifying everything cannot be approached mechanically, so doing this\nrequires real buy-in. It's a bit tricky because VACUUM VERBOSE is\nsupposed to show real time information about what just finished, as a\nkind of rudimentary progress indicator, while log_autovacuum_*\nsummarizes the whole entire VACUUM operation. This difference is most\nnotable when there are multiple index vacuuming passes (\"index\nscans\"), or when we truncate the heap relation.\n\nMy preferred approach to this is simple: redefine VACUUM VERBOSE to\nnot show incremental output, which seems rather unhelpful anyway. This\ndoes mean that VACUUM VERBOSE won't show certain information that\nmight occasionally be useful to hackers. For example, there is\ndetailed information about how rel truncation happened in the VERBOSE\noutput, and detailed information about how many index tuples were\ndeleted by each round of index vacuuming, for each individual index.\nWe can keep this extra information as DEBUG2 messages, as in the\ncurrent !VERBOSE case (or perhaps make some of them DEBUG1). I don't\nthink that we need to keep the getrusage() stuff at all, though.\n\nI think that this would significantly improve VACUUM VERBOSE output,\nespecially for users, but also for hackers. 
Here are my reasons, in\ndetail:\n\n* We have pg_stat_progress_vacuum these days.\n\n* VACUUM VERBOSE doesn't provide much of the most useful\ninstrumentation that we have available in log_autovacuum_min_duration,\nand yet produces output that is ludicrously, unmanageably verbose --\nlots of pg_rusage_show() information for each and every step, which\njust isn't useful.\n\n* I really miss the extra stuff that log_autovacuum_min_duration\nprovides when I run VACUUM VERBOSE, though.\n\n* In practice having multiple rounds of index vacuuming is quite rare\nthese days. And when it does happen it's interesting because it\nhappened at all -- I don't really care about the breakdown beyond\nthat. If I ever do care about the very fine details, I can easily set\nclient_min_messages to DEBUG2 on that one occasion.\n\n* The fact that VACUUM VERBOSE will no longer report on\nIndexBulkDeleteResult.num_index_tuples and\nIndexBulkDeleteResult.tuples_removed seems like no great loss to me --\nthe fact that the number might be higher or lower for an index\ntypically means very little these days, with the improvements made to\nindex tuple deletion.\n\nVERBOSE will still report on IndexBulkDeleteResult.pages_*, which is\nwhat really matters. VERBOSE will also report on LP_DEAD-in-heap items\nremoved (or not removed) directly, which is a generic upper bound on\ntuples_removed, that applies to all indexes.\n\n* The detailed lazy_truncate_heap() instrumentation output by VACUUM\nVERBOSE just isn't useful outside of debugging scenarios -- it just\nisn't actionable to users (users only really care about how much\nsmaller the table became through truncation). 
The low level details\ncould easily be output as DEBUG1 (not DEBUG2) instead.\n\nThoughts?\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 26 Nov 2021 12:37:32 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "On Fri, Nov 26, 2021 at 12:37:32PM -0800, Peter Geoghegan wrote:\n> My preferred approach to this is simple: redefine VACUUM VERBOSE to\n> not show incremental output, which seems rather unhelpful anyway.\n\n> I don't think that we need to keep the getrusage() stuff at all, though.\n\n+1\n\n> * VACUUM VERBOSE doesn't provide much of the most useful\n> instrumentation that we have available in log_autovacuum_min_duration,\n> and yet produces output that is ludicrously, unmanageably verbose --\n> lots of pg_rusage_show() information for each and every step, which\n> just isn't useful.\n\nNot only not useful/unhelpful, but confusing.\n\nIt's what I complained about here.\nhttps://www.postgresql.org/message-id/flat/20191220171132.GB30414@telsasoft.com\n\nI see that lazy_scan_heap() still has a shadow variable buf...\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 26 Nov 2021 15:57:12 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "On Fri, Nov 26, 2021 at 12:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Unifying everything cannot be approached mechanically, so doing this\n> requires real buy-in. It's a bit tricky because VACUUM VERBOSE is\n> supposed to show real time information about what just finished, as a\n> kind of rudimentary progress indicator, while log_autovacuum_*\n> summarizes the whole entire VACUUM operation. 
This difference is most\n> notable when there are multiple index vacuuming passes (\"index\n> scans\"), or when we truncate the heap relation.\n\nI based these remarks on one sentence about VERBOSE that appears in vacuum.sgml:\n\n\"When VERBOSE is specified, VACUUM emits progress messages to indicate\nwhich table is currently being processed. Various statistics about the\ntables are printed as well.\"\n\nThere is a very similar sentence in analyze.sgml. It seems that I\noverinterpreted the word \"progress\" before. I now believe that VACUUM\nVERBOSE wasn't ever really intended to indicate the progress of one\nparticular vacuumlazy.c-wise operation targeting one particular heap\nrelation with storage. The VERBOSE option gives some necessary\ntable-level structure to an unqualified \"VACUUM VERBOSE\" -- same as\nan unqualified \"ANALYZE VERBOSE\". The progress is explicitly table\ngranularity progress. Nothing more.\n\nI definitely need to preserve that aspect of VERBOSE output --\nobviously the output must still make it perfectly clear which\nparticular table a given run of information relates to, especially\nwith unqualified \"VACUUM VERBOSE\". Fortunately, that'll be easy. 
In\nfact, my proposal will improve things here, because now there will\nonly be a single extra INFO line per table (so one INFO line for the\ntable name, another with newlines for the instrumentation itself).\nThis matches the current behavior with an unqualified \"ANALYZE\nVERBOSE\".\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 26 Nov 2021 14:56:14 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "On Fri, Nov 26, 2021 at 1:57 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > * VACUUM VERBOSE doesn't provide much of the most useful\n> > instrumentation that we have available in log_autovacuum_min_duration,\n> > and yet produces output that is ludicrously, unmanageably verbose --\n> > lots of pg_rusage_show() information for each and every step, which\n> > just isn't useful.\n>\n> Not only not useful/unhelpful, but confusing.\n\nAlso makes testing harder.\n\n> It's what I complained about here.\n> https://www.postgresql.org/message-id/flat/20191220171132.GB30414@telsasoft.com\n>\n> I see that lazy_scan_heap() still has a shadow variable buf...\n\nI noticed that myself. That function has had many accretions of code,\nover decades. I often notice things that seem like they once made\nsense (e.g., before we had HOT), but don't anymore.\n\nI hope to be able to pay down more technical debt in this area for Postgres 15.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 26 Nov 2021 15:02:02 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "On Fri, Nov 26, 2021 at 12:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> My preferred approach to this is simple: redefine VACUUM VERBOSE to\n> not show incremental output, which seems rather unhelpful anyway. 
This\n> does mean that VACUUM VERBOSE won't show certain information that\n> might occasionally be useful to hackers.\n\nAttached is a WIP patch doing this.\n\nOne thing that's still unclear is what the new elevel should be for\nthe ereport messages that used to be either LOG (for VACUUM VERBOSE)\nor DEBUG2 (for everything else) -- what should I change them to now?\nFor now I've done taken the obvious approach of making everything\nDEBUG2. There is of course no reason why some messages can't be DEBUG1\ninstead. Some of them do seem more interesting than others (though\nstill not particularly interesting overall).\n\nHere is an example of VACUUM VERBOSE on HEAD:\n\npg@regression=# vacuum VERBOSE foo;\nINFO: vacuuming \"public.foo\"\nINFO: table \"public.foo\": found 0 removable, 54 nonremovable row\nversions in 1 out of 45 pages\nDETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 770\nSkipped 0 pages due to buffer pins, 0 frozen pages.\nCPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\nVACUUM\n\nHere's what a VACUUM VERBOSE against the same table looks like with\nthe patch applied:\n\npg@regression=# vacuum VERBOSE foo;\nINFO: vacuuming \"regression.public.foo\"\nINFO: finished vacuuming \"regression.public.foo\": index scans: 0\npages: 0 removed, 45 remain, 0 skipped due to pins, 0 skipped frozen\ntuples: 0 removed, 7042 remain, 0 are dead but not yet removable,\noldest xmin: 770\nindex scan not needed: 0 pages from table (0.00% of total) had 0 dead\nitem identifiers removed\nI/O timings: read: 0.065 ms, write: 0.000 ms\navg read rate: 147.406 MB/s, avg write rate: 14.741 MB/s\nbuffer usage: 22 hits, 10 misses, 1 dirtied\nWAL usage: 1 records, 1 full page images, 1401 bytes\nsystem usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\nVACUUM\n\nIt's easy to produce examples where the patch is somewhat more verbose\nthan HEAD (that's what you see here). 
It's also easy to produce\nexamples where HEAD is *significantly* more verbose than the patch.\nEspecially when VERBOSE shows many lines of getrusage() output (patch\nonly ever shows one of those, at the end). Another factor is index\nvacuuming. With the patch, you'll only see one extra line per index,\nversus several lines on HEAD.\n\nI cannot find clear guidelines on multiline INFO messages lines -- all\nI'm really doing here is selectively making the LOG output from\nlog_autovacuum_min_duration into INFO output for VACUUM VERBOSE\n(actually there are 2 INFO messages per heap relation processed). It\nwould be nice if there was a clear message style precedent that I\ncould point to for this.\n\n--\nPeter Geoghegan", "msg_date": "Mon, 29 Nov 2021 18:51:37 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "I think the 2nd chunk here could say \"if (instrument)\" like the first:\n\n> @@ -482,8 +480,10 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,\n> TransactionId FreezeLimit;\n> MultiXactId MultiXactCutoff;\n> \n> - /* measure elapsed time iff autovacuum logging requires it */\n> - if (IsAutoVacuumWorkerProcess() && params->log_min_duration >= 0)\n> + verbose = (params->options & VACOPT_VERBOSE) != 0;\n> + instrument = (verbose || (IsAutoVacuumWorkerProcess() &&\n> + params->log_min_duration >= 0));\n> + if (instrument)\n\n...\n\n> @@ -702,12 +705,13 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,\n> vacrel->new_dead_tuples);\n> pgstat_progress_end_command();\n> \n> - /* and log the action if appropriate */\n> - if (IsAutoVacuumWorkerProcess() && params->log_min_duration >= 0)\n> + /* Output instrumentation where appropriate */\n> + if (verbose ||\n> + (IsAutoVacuumWorkerProcess() && params->log_min_duration >= 0))\n\nAutovacuum's format doesn't show the 
number of scanned pages ; it shows how\nmany pages were skipped due to frozen bit, but not how many were skipped due to\nthe all visible bit:\n\n> INFO: table \"public.foo\": found 0 removable, 54 nonremovable row versions in 1 out of 45 pages\n...\n> INFO: finished vacuuming \"regression.public.foo\": index scans: 0\n> pages: 0 removed, 45 remain, 0 skipped due to pins, 0 skipped frozen\n\nIf the format of autovacuum output were to change, maybe it's an opportunity to\nshow some of the stuff Jeff mentioned:\n\n|Also, I'd appreciate a report on how many hint-bits were set, and how many\n|pages were marked all-visible and/or frozen\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 29 Nov 2021 22:19:11 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "On Mon, Nov 29, 2021 at 8:19 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I think the 2nd chunk here could say \"if (instrument)\" like the first:\n\nI agree that that would be clearer.\n\n> Autovacuum's format doesn't show the number of scanned pages ; it shows how\n> many pages were skipped due to frozen bit, but not how many were skipped due to\n> the all visible bit:\n\nThat's a weird historical accident. I had planned on fixing that as\npart of ongoing refactoring work [1].\n\nThe short explanation for why it works that way goes like this: while\nit makes zero practical sense (who wants to see how many frozen pages\nwe skipped, without also seeing merely all-visible pages skipped?), it\ndoes make some sense when your starting point is the code itself.\n\n> If the format of autovacuum output were to change, maybe it's an opportunity to\n> show some of the stuff Jeff mentioned:\n\nYou must be referencing the thread again, from your earlier message --\nyou must mean Jeff Janes here.\n\nJeff said something about the number of all-visible pages accessed\n(i.e. 
not skipped over) being implicit. For what it's worth, that\nisn't true in the general case -- there simply is no reliable way to\nsee the total number of pages that were skipped using the VM, as of\nright now.\n\n> |Also, I'd appreciate a report on how many hint-bits were set, and how many\n> |pages were marked all-visible and/or frozen\n\nI will probably also add the latter in the Postgres 15 cycle.\nHint-bits-set is much harder, and not likely to happen soon.\n\n[1] https://postgr.es/m/CAH2-Wznp=c=Opj8Z7RMR3G=ec3_JfGYMN_YvmCEjoPCHzWbx0g@mail.gmail.com\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 29 Nov 2021 20:35:06 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "Hi,\n\nOn 2021-11-29 18:51:37 -0800, Peter Geoghegan wrote:\n> One thing that's still unclear is what the new elevel should be for\n> the ereport messages that used to be either LOG (for VACUUM VERBOSE)\n> or DEBUG2 (for everything else) -- what should I change them to now?\n> For now I've done taken the obvious approach of making everything\n> DEBUG2.\n\nI think some actually ended up being omitted compared to the previous\nstate. E.g. \"aggressively vacuuming ...\", but I think others as well.\n\n\n> It's easy to produce examples where the patch is somewhat more verbose\n> than HEAD (that's what you see here).\n\nWe could make verbose a more complicated parameter if that turns out to be a\nproblem. E.g. controlling whether resource usage is included.\n\n\n> It's also easy to produce examples where HEAD is *significantly* more\n> verbose than the patch. Especially when VERBOSE shows many lines of\n> getrusage() output (patch only ever shows one of those, at the end).\n\nThat's not really equivalent though? 
It does seem somewhat useful to be able\nto distinguish the cost of heap and index processing?\n\n\n> I cannot find clear guidelines on multiline INFO messages lines -- all\n> I'm really doing here is selectively making the LOG output from\n> log_autovacuum_min_duration into INFO output for VACUUM VERBOSE\n> (actually there are 2 INFO messages per heap relation processed).\n\nUsing multiple messages has the clear drawback of including context/statement\nmultiple times... But if part of the point is to be able to analyze what's\ncurrently happening there's not really an alternative. However that need\nprobably is lessened now that we have pg_stat_progress_vacuum.\n\n> @@ -702,12 +705,13 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,\n> \t\t\t\t\t\t vacrel->new_dead_tuples);\n> \tpgstat_progress_end_command();\n>\n> -\t/* and log the action if appropriate */\n> -\tif (IsAutoVacuumWorkerProcess() && params->log_min_duration >= 0)\n> +\t/* Output instrumentation where appropriate */\n> +\tif (verbose ||\n> +\t\t(IsAutoVacuumWorkerProcess() && params->log_min_duration >= 0))\n> \t{\n> \t\tTimestampTz endtime = GetCurrentTimestamp();\n>\n> -\t\tif (params->log_min_duration == 0 ||\n> +\t\tif (verbose || params->log_min_duration == 0 ||\n> \t\t\tTimestampDifferenceExceeds(starttime, endtime,\n> \t\t\t\t\t\t\t\t\t params->log_min_duration))\n> \t\t{\n\nThis is quite the nest of conditions by now. Any chance of cleaning that up?\n\n\n> @@ -3209,7 +3144,7 @@ lazy_truncate_heap(LVRelState *vacrel)\n> \t\t\t\t * We failed to establish the lock in the specified number of\n> \t\t\t\t * retries. 
This means we give up truncating.\n> \t\t\t\t */\n> -\t\t\t\tereport(elevel,\n> +\t\t\t\tereport(DEBUG2,\n> \t\t\t\t\t\t(errmsg(\"\\\"%s\\\": stopping truncate due to conflicting lock request\",\n> \t\t\t\t\t\t\t\tvacrel->relname)));\n> \t\t\t\treturn;\n\n> @@ -3279,12 +3214,10 @@ lazy_truncate_heap(LVRelState *vacrel)\n> \t\tvacrel->pages_removed += orig_rel_pages - new_rel_pages;\n> \t\tvacrel->rel_pages = new_rel_pages;\n>\n> -\t\tereport(elevel,\n> +\t\tereport(DEBUG2,\n> \t\t\t\t(errmsg(\"table \\\"%s\\\": truncated %u to %u pages\",\n> \t\t\t\t\t\tvacrel->relname,\n> -\t\t\t\t\t\torig_rel_pages, new_rel_pages),\n> -\t\t\t\t errdetail_internal(\"%s\",\n> -\t\t\t\t\t\t\t\t\tpg_rusage_show(&ru0))));\n> +\t\t\t\t\t\torig_rel_pages, new_rel_pages)));\n> \t\torig_rel_pages = new_rel_pages;\n> \t} while (new_rel_pages > vacrel->nonempty_pages && lock_waiter_detected);\n> }\n\nThese imo are useful. Perhaps we could just make them part of some log\nmessage that autovac logging includes as well?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Dec 2021 20:30:16 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "On Fri, Dec 10, 2021 at 8:30 PM Andres Freund <andres@anarazel.de> wrote:\n> I think some actually ended up being omitted compared to the previous\n> state. E.g. \"aggressively vacuuming ...\", but I think others as well.\n\nThe \"aggressive-ness\" is reported by a distinct ereport() with the\npatch, so you'll still see that information. You'll still be able to\nsee when each VACUUM begins and ends, which matters in database level\n\"VACUUM\" command (a VACUUM that doesn't specify any relation, and so\nvacuums everything). 
VACUUM VERBOSE should still work as a progress\nindicator at the whole-VACUUM-operation level (when there will be more\nthan a single operation per command), but it won't indicate the\nprogress of any individual VACUUM operation anymore. That's the\ntrade-off.\n\nTo me the most notable loss of VERBOSE information is the number of\nindex tuples deleted in each index. But even that isn't so useful,\nsince you can already see the number of LP_DEAD items, which is a more\ninteresting number (that applies equally to all indexes, and the table\nitself).\n\n> > It's easy to produce examples where the patch is somewhat more verbose\n> > than HEAD (that's what you see here).\n>\n> We could make verbose a more complicated parameter if that turns out to be a\n> problem. E.g. controlling whether resource usage is included.\n\nThat's true, but I don't think that it's going to be a problem. I'd\nrather avoid it if possible. If we need to place some of the stuff\nthat's currently only shown by VERBOSE to be shown by the autovacuum\nlog output too, then that's fine.\n\nYou said something about showing the number of workers launched in the\nautovacuum log output (also the new VERBOSE output). That could make\nsense. But there could be a different number of workers for cleanup\nand for vacuuming. Should I show both together, or just the high\nwatermark? I think that it needs to be okay to suppress the output in\nthe common case where parallelism isn't used (e.g., in every\nautovacuum).\n\n> > It's also easy to produce examples where HEAD is *significantly* more\n> > verbose than the patch. Especially when VERBOSE shows many lines of\n> > getrusage() output (patch only ever shows one of those, at the end).\n>\n> That's not really equivalent though? It does seem somewhat useful to be able\n> to distinguish the cost of heap and index processing?\n\nI've personally never used VACUUM VERBOSE like that. I agree that it\ncould be useful, but I would argue that it's not worth it. 
I'd just\nuse the DEBUG1 version, or more likely use my own custom\nmicrobenchmark.\n\n> This is quite the nest of conditions by now. Any chance of cleaning that up?\n\nYes, I can simplify that code a little.\n\n> > @@ -3279,12 +3214,10 @@ lazy_truncate_heap(LVRelState *vacrel)\n\n> These imo are useful. Perhaps we could just make them part of some log\n> message that autovac logging includes as well?\n\nI would argue that it already does -- because you see pages removed\n(which is heap pages truncation). We do lose the details with the\npatch, of course -- you'll no longer see the progress of truncation,\nwhich works incrementally. But as I said, that's the general trade-off\nthat the patch makes.\n\nIf you can't truncate the table due to a conflicting lock request,\nthen that might just have been for the last round of truncation. And\nso reporting that aspect in the whole-autovacuum log output (or in the\nnew format VACUUM VERBOSE output) seems like it could be misleading.\n\nI went as far as removing the getrusage stuff for the ereport()\nmessages that get demoted to DEBUG2. What do you think of that aspect?\nI could add some of the getrusage output back where that makes sense. I\ndon't have very strong feelings about that.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 11 Dec 2021 09:52:29 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "On 2021-12-11 09:52:29 -0800, Peter Geoghegan wrote:\n> On Fri, Dec 10, 2021 at 8:30 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think some actually ended up being omitted compared to the previous\n> > state. E.g. 
\"aggressively vacuuming ...\", but I think others as well.\n> \n> The \"aggressive-ness\" is reported by a distinct ereport() with the\n> patch, so you'll still see that information.\n\nBut the ereport is inside an if (verbose), no?\n\n\n", "msg_date": "Sat, 11 Dec 2021 12:24:44 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "On Sat, Dec 11, 2021 at 12:24 PM Andres Freund <andres@anarazel.de> wrote:\n> But the ereport is inside an if (verbose), no?\n\nYes -- in order to report aggressiveness in VACUUM VERBOSE. But the\nautovacuum case still reports verbose-ness, in the same way as it\nalways has -- in that same LOG entry. We don't want to repeat\nourselves in the VERBOSE case, which will have already indicated its\nverboseness in the up-front ereport().\n\nIn other words, every distinct case reports on its aggressiveness\nexactly once per call into lazyvacuum.c. In roughly the same way as it\nworks on HEAD.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 11 Dec 2021 13:13:56 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "On Sat, Dec 11, 2021 at 1:13 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Yes -- in order to report aggressiveness in VACUUM VERBOSE. But the\n> autovacuum case still reports verbose-ness, in the same way as it\n> always has -- in that same LOG entry. 
We don't want to repeat\n> ourselves in the VERBOSE case, which will have already indicated its\n> verboseness in the up-front ereport().\n\nSorry, I meant \"indicated its aggressiveness in the up-front ereport()\".\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 11 Dec 2021 13:16:39 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "Hi,\n\nOn 2021-12-11 13:13:56 -0800, Peter Geoghegan wrote:\n> On Sat, Dec 11, 2021 at 12:24 PM Andres Freund <andres@anarazel.de> wrote:\n> > But the ereport is inside an if (verbose), no?\n> \n> Yes -- in order to report aggressiveness in VACUUM VERBOSE. But the\n> autovacuum case still reports verbose-ness, in the same way as it\n> always has -- in that same LOG entry. We don't want to repeat\n> ourselves in the VERBOSE case, which will have already indicated its\n> verboseness in the up-front ereport().\n\nI feel one, or both, must be missing something here. My point was that you\nsaid upthread that the patch doesn't change DEBUG2/non-verbose logging for\nmost messages. But the fact that those messages are only emitted inside an if\n(verbose) seems to contradict that?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 11 Dec 2021 14:51:58 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "On Sat, Dec 11, 2021 at 2:52 PM Andres Freund <andres@anarazel.de> wrote:\n> I feel one, or both, must be missing something here. My point was that you\n> said upthread that the patch doesn't change DEBUG2/non-verbose logging for\n> most messages. 
But the fact that those messages are only emitted inside and if\n> (verbose) seems to contradict that?\n\nThat is technically true, but it's not true in any practical sense.\nYes, there are 2 distinct ereports() per vacuumlazy.c call for VACUUM\nVERBOSE (i.e. 2 per relation processed by the command). Yes, only the\nsecond one is actually \"shared\" with log_autovacuum_* (the first one\njust shows that we're processing a new relation, with the\naggressiveness). But that's not very significant.\n\nThe only reason that I did it that way is because there is an\nexpectation that plain \"VACUUM VERBOSE\" (i.e. no target relation\nspecified) will work as a rudimentary progress indicator at the heap\nrel granularity -- the VACUUM VERBOSE docs pretty much say so. As I\npointed out before, the docs for VERBOSE that appear in vacuum.sgml\nsay:\n\n\"When VERBOSE is specified, VACUUM emits progress messages to indicate\nwhich table is currently being processed. Various statistics about the\ntables are printed as well.\"\n\nHaving 2 ereports (not just 1) isn't essential, but it seems useful\nbecause it makes the new VACUUM VERBOSE continue to work like this.\nBut without any of the downsides that go with seeing way too much\ndetail, moment to moment.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 11 Dec 2021 15:11:42 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "On Mon, Nov 29, 2021 at 6:51 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is a WIP patch doing this.\n\nThis has bitrot, so I attach v2, mostly just to keep the CFTester\nstatus green. 
The only real change is one minor simplification to how\nwe set everything up, inside heap_vacuum_rel().\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 20 Dec 2021 09:39:22 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "I haven't read the patch yet. But some thoughts based on the posted output:\n\n1) At first I was quite skeptical about losing the progress reporting.\nI've often found it quite useful. But looking at the examples I'm\nconvinced.\n\nOr rather I think a better way to look at it is that the progress\noutput for the operator should be separated from the metrics logged.\nAs an operator what I want to see is some progress indicator:\n\"starting table scan\", \"overflow at x% of table scanned, starting\nindex scan\", \"processing index 1\", \"index 2\"... so I can have some idea\nof how much longer the vacuum will take and see whether I need to\nraise maintenance_work_mem and by how much. I don't need to see all\nthe metrics while it's running.\n\n2) I don't much like the format. I want to be able to parse the output\nwith awk or mtail or even just grep for relevant lines. Things like\n\"index scan not needed\" make it hard to parse since you don't know\nwhat it will look like if they are needed. I would have expected\nsomething like \"index scans: 0\" which is actually already there up\nabove. I'm not clear how this line is meant to be read. Is it just\nexplaining *why* the index scan was skipped? It would just be missing\nentirely if it wasn't skipped?\n\nFwiw, having it be parsable is why I wouldn't want it to be multiple\nereports. That would mean it could get interleaved with other errors\nfrom other backends. 
That would be a disaster.\n\n\n", "msg_date": "Wed, 22 Dec 2021 00:46:21 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "On Tue, Dec 21, 2021 at 2:39 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Nov 29, 2021 at 6:51 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Attached is a WIP patch doing this.\n>\n> This has bitrot, so I attach v2, mostly just to keep the CFTester\n> status green. The only real change is one minor simplification to how\n> we set everything up, inside heap_vacuum_rel().\n\nI've looked at the patch and here are comments:\n\n@@ -3076,16 +3021,12 @@ lazy_cleanup_one_index(Relation indrel,\nIndexBulkDeleteResult *istat,\n LVRelState *vacrel)\n {\n IndexVacuumInfo ivinfo;\n- PGRUsage ru0;\n LVSavedErrInfo saved_err_info;\n\n- pg_rusage_init(&ru0);\n-\n ivinfo.index = indrel;\n ivinfo.analyze_only = false;\n ivinfo.report_progress = false;\n ivinfo.estimated_count = estimated_count;\n- ivinfo.message_level = elevel;\n\nI think we should set message_level. Otherwise, index AM will set an\ninvalid log level, although any index AM in core seems not to use it.\n\n---\n- /*\n- * Update error traceback information. This is the\nlast phase during\n- * which we add context information to errors, so we\ndon't need to\n- * revert to the previous phase.\n- */\n\nWhy is this comment removed? 
ISTM this comment is still valid.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 22 Dec 2021 16:57:11 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "On Tue, Dec 21, 2021 at 9:46 PM Greg Stark <stark@mit.edu> wrote:\n> Or rather I think a better way to look at it is that the progress\n> output for the operator should be separated from the metrics logged.\n> As an operator what I want to see is some progress indicator\n> \"\"starting table scan\", \"overflow at x% of table scanned, starting\n> index scan\", \"processing index 1\" \"index 2\"... so I can have some idea\n> of how much longer the vacuum will take and see whether I need to\n> raise maintenance_work_mem and by how much. I don't need to see all\n> the metrics while it's running.\n\nWe have the pg_stat_progress_vacuum view for that these days, of\ncourse. Which has the advantage of working with autovacuum and\nmanually-run VACUUMs in exactly the same way. I am generally opposed\nto any difference between autovacuum and manual VACUUM that isn't\nclearly necessary. For example, ANALYZE behaves very differently in a\nVACUUM ANALYZE run on a table with a GIN index in autovacuum -- that\nseems awful to me.\n\n> 2) I don't much like the format. I want to be able to parse the output\n> with awk or mtail or even just grep for relevant lines. Things like\n> \"index scan not needed\" make it hard to parse since you don't know\n> what it will look like if they are needed. I would have expected\n> something like \"index scans: 0\" which is actually already there up\n> above. I'm not clear how this line is meant to be read. Is it just\n> explaining *why* the index scan was skipped? 
It would just be missing\n> entirely if it wasn't skipped?\n\nNo, a line that looks very much like the \"index scan not needed\" line\nwill always be there. IOW there will reliably be a line that explains\nwhether or not any index scan took place, and why (or why not).\nWhereas there won't ever be a line in VACUUM VERBOSE (as currently\nimplemented) that tells you about something that might have been\nexpected to happen, but didn't actually happen.\n\nThe same thing cannot be said for every line of the log output,\nthough. For example, the line about I/O timings only appears with\ntrack_io_timing=on.\n\nI have changed things here quite a bit in the last year. I do try to\nstick to the \"always show line\" convention, if only for the benefit of\nhumans. If the line doesn't generalize to every situation, then I tend\nto doubt that it merits appearing in the summary in the first place.\n\n> Fwiw, having it be parsable is why I wouldn't want it to be multiple\n> ereports. That would mean it could get interleaved with other errors\n> from other backends. That would be a disaster.\n\nThat does seem relevant, but honestly I haven't made that a goal here.\n\nPart of the problem has been with what we've actually shown. Postgres\n14 was the first version to separately report on the number of LP_DEAD\nline pointers in the table (or left behind in the table when we didn't\ndo index vacuuming). Prior to 14 we only reported dead tuples. These\nseemed to be assumed to be roughly equivalent in the past, but\nactually they're totally different things, with many practical\nconsequences:\n\nhttps://www.postgresql.org/message-id/flat/CAH2-WzkkGT2Gt4XauS5eQOQi4mVvL5X49hBTtWccC8DEqeNfKA%40mail.gmail.com#b7bd96573a2ca27b023ce78b4a8c2b13\n\nThis means that we only just started showing one particular metric\nthat is of fundamental importance in this log output (and VACUUM\nVERBOSE). 
We also used to show things that had very little relevance,\nwith slightly different (confusingly similar) metrics shown in each\nvariant of the instrumentation (a problem that I'm trying to\npermanently avoid by unifying everything). While things have improved\na lot here recently, I don't think that things have fully settled yet\n-- the output will probably change quite a bit more in Postgres 15.\nThat makes me a little hesitant to promise very much about making the\noutput parseable or stable.\n\nThat said, I don't want to make it needlessly difficult. That should be avoided.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 22 Dec 2021 14:19:16 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "On Tue, Dec 21, 2021 at 11:57 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I've looked at the patch and here are comments:\n\nThanks!\n\nThe patch bitrot again, so attached is a rebased version, v3.\n\n> I think we should set message_level. Otherwise, index AM will set an\n> invalid log level, although any index AM in core seems not to use it.\n\nFixed.\n\n> ---\n> - /*\n> - * Update error traceback information. This is the\n> last phase during\n> - * which we add context information to errors, so we\n> don't need to\n> - * revert to the previous phase.\n> - */\n>\n> Why is this comment removed? ISTM this comment is still valid.\n\nWe don't \"revert to the previous phase\" here, which is always\nVACUUM_ERRCB_PHASE_SCAN_HEAP in practice, per the comment -- but that\ndoesn't seem significant. It's not just unnecessary to do so, as the\ncomment claims -- it actually seems *wrong*. 
That is, it would be\nwrong to go back to VACUUM_ERRCB_PHASE_SCAN_HEAP here, since we're\ncompletely finished scanning the heap at this point.\n\nThere is still perhaps a small danger that somebody will forget to add\na new VACUUM_ERRCB_PHASE_* for some new kind of work that happens\nafter this point, at the very last moment. But that would be equally\ntrue if the new kind of work took place earlier, inside\nlazy_scan_heap(). And so the last call to update_vacuum_error_info()\nisn't special compared to any other update_vacuum_error_info() call\n(or any other call that doesn't set a saved_err_info).\n\n--\nPeter Geoghegan", "msg_date": "Thu, 6 Jan 2022 10:21:32 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" }, { "msg_contents": "On Thu, Jan 6, 2022 at 10:21 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> The patch bitrot again, so attached is a rebased version, v3.\n\nI pushed a version of this patch earlier. This final version didn't go\nquite as far as v3 did: it retained a few VACUUM VERBOSE only INFO\nmessages (it didn't demote them to DEBUG2). See the commit message for\ndetails.\n\nThank you for your review work, Masahiko and Andres.\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 14 Jan 2022 19:03:39 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Unifying VACUUM VERBOSE and log_autovacuum_min_duration output" } ]
[ { "msg_contents": "Hello!\n\nSince I used a lot of my time chasing short lived processes eating away big chunks of memory in recent weeks, I am wondering about a decent way to go about this.\nThe problem I am facing essentially relates to the fact that work_mem settings, while they are enforced per hash and sort node, aren't enforced globally.\nOne common case, that causes this problem more frequently than a few years ago, is the partitionwise_join. If there are a lot of partitions hash joined, we get a lot of hash nodes, each one potentially consuming work_mem.\n\nWhile avoiding oom seems a big deal to me, my search didn't turn up previous hackers discussions about this. There is a good chance I am missing something here, so I'd appreciate any pointers.\n\nThe most reasonable solution seems to me to have a data structure per worker, that 1. tracks the amount of memory used by certain nodes and 2. offers a callback to let the node spill its contents (almost) completely to disc. I am thinking about hash and sort nodes for now, since they affect memory usage a lot.\nThis would allow a node to spill other nodes' contents to disc to avoid exceeding work_mem.\n\nI'd love to hear your thoughts and suggestions!\n\nRegards\nArne", "msg_date": "Sat, 27 Nov 2021 16:33:07 +0000", "msg_from": "Arne Roland <A.Roland@index.de>", "msg_from_op": true, "msg_subject": "Enforce work_mem per worker" }, { "msg_contents": "On Sat, Nov 27, 2021 at 04:33:07PM +0000, Arne Roland wrote:\n> Hello!\n> \n> Since I used a lot of my time chasing short lived processes eating away big chunks of memory in recent weeks, I am wondering about a decent way to go about this.\n> The problem I am facing essentially relates to the fact that work_mem settings, while they are enforced per hash and sort node, aren't enforced globally.\n> One common case, that causes this problem more frequently than a few years ago, is the partitionwise_join. If there are a lot of partitions hash joined, we get a lot of hash nodes, each one potentially consuming work_mem.\n\n> While avoiding oom seems a big deal to me, my search didn't turn up previous hackers discussions about this. 
There is a good chance I am missing something here, so I'd appreciate any pointers.\n\nHere's some pointers ;)\n\nhttps://www.postgresql.org/message-id/flat/20190708164401.GA22387%40telsasoft.com\nhttps://www.postgresql.org/message-id/flat/20191216215314.qvrtgxaz4m755geq%40development#75e9930ac2cd353a8036dc71e8f5e6f7\nhttps://www.postgresql.org/message-id/flat/CAH2-WzmNwV%3DLfDRXPsmCqgmm91mp%3D2b4FvXNF%3DcCvMrb8YFLfQ%40mail.gmail.com\n - I don't recall reading all of this last one before, and it's got interesting\n historic value, so I'm reading it myself now...\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 27 Nov 2021 11:57:12 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Enforce work_mem per worker" }, { "msg_contents": "I did read parts of the last one back then. But thanks for the link, I plan to reread the thread as a whole.\n\n\n From what I can tell, the discussions here are the attempt by very smart people to (at least partially) solve the problem of memory allocation (without sacrificing to much on the runtime front). That problem is very hard.\n\n\nWhat I am mostly trying to do, is to provide a reliable way of preventing the operational hazard of dealing with oom and alike, e.g. massive kernel buffer eviction. I don't want to touch the planning, which is always complex and tends to introduce weird side effects.\n\n\nThat way we can't hope to prevent the issue from occurring generally. I'm much more concerned with containing it, if it happens.\n\n\nIn the case that there is only a single pass, which tends to be the case for a lot of queries, my suggested approach would even help the offender.\n\nBut my main goal is something else. I can't explain my clients, why a chanced statistics due to autovacuum suddenly leads to oom. They would be right to question postgres qualification for any serious production system.\n\n\nRegards\n\nArne\n\n\n\n\n\n\n\n\n\n\n\n\nI did read parts of the last one back then. 
But thanks for the link, I plan to reread the thread as a whole.\n\n\n From what I can tell, the discussions here are the attempt by very smart people to (at least partially) solve the problem of memory allocation (without sacrificing too much on the runtime front). That problem is very hard.\n\n\nWhat I am mostly trying to do is to provide a reliable way of preventing the operational hazard of dealing with OOM and the like, e.g. massive kernel buffer eviction. I don't want to touch the planning, which is always complex and tends to introduce weird side effects.\n\n\nThat way we can't hope to prevent the issue from occurring generally. I'm much more concerned with containing it, if it happens.\n\n\nIn the case that there is only a single pass, which tends to be the case for a lot of queries, my suggested approach would even help the offender.\n\nBut my main goal is something else. I can't explain to my clients why a changed statistic due to autovacuum suddenly leads to OOM. They would be right to question Postgres's qualification for any serious production system.\n\n\nRegards\n\nArne", "msg_date": "Mon, 29 Nov 2021 14:01:35 +0000", "msg_from": "Arne Roland <A.Roland@index.de>", "msg_from_op": true, "msg_subject": "Re: Enforce work_mem per worker" }, { "msg_contents": "On Mon, Nov 29, 2021 at 02:01:35PM +0000, Arne Roland wrote:\n> But my main goal is something else. I can't explain my clients, why a chanced statistics due to autovacuum suddenly leads to oom. They would be right to question postgres qualification for any serious production system.\n\nWhat version postgres was that on ?\n\nI realize it doesn't address your question, but since PG13, HashAggregate\nrespects work_mem. Depending on the details of the query plan, that's arguably\na bigger problem than the absence of a global work_mem. 
They would be right to question postgres qualification for any serious production system.\n> \n> What version postgres was that on ?\n\n\n\nIt's pg13 and pg14 mostly. I have different servers with similar problems.\n\n\n> I realize it doesn't address your question, but since PG13, HashAggregate\n> respects work_mem. \n\nI haven't run into issues with hash agg personally.\n\n> Depending on the details of the query plan, that's arguably\n> a bigger problem than the absence of a global work_mem.  At least that one is\n> resolved.\n\nI can go around to fix issues with plans. But plans are inherently unstable. And we can't have people becoming wary of autoanalyze.\nHaving a single wild plan bringing down a whole cluster is just madness.\n\nThere are bunch of different problems, that can occur. But where I stand this almost invalidates partition wise hash joins, because you'd generate one hash node per partition. But you can still have sorts with merge append, without partitionwise joins.\n\nTo quote your message from 2019:\n> gracefully support[...ing] \"thousands\" of partitions\nmeans using 1000 * work_mem?\nAm I wrong here?\n\n\nRegards\nArne", "msg_date": "Mon, 29 Nov 2021 15:48:09 +0000", "msg_from": "Arne Roland <A.Roland@index.de>", "msg_from_op": true, "msg_subject": "Re: Enforce work_mem per worker" } ]
[ { "msg_contents": "Hi hackers,\n\nThere has been a very long discussion about TDE, and we agreed that if the\ntemporary file I/O can be aligned to some fixed size, it will be easier\nto use some kind of encryption algorithm.\n\ndiscussion:\nhttps://www.postgresql.org/message-id/20211025155814.GD20998%40tamriel.snowman.net\n\nThis patch adjusts file->curOffset and file->pos before the real IO to\nensure the start offset is aligned.", "msg_date": "Sun, 28 Nov 2021 23:37:18 +0800", "msg_from": "Sasasu <i@sasa.su>", "msg_from_op": true, "msg_subject": "[PATCH] buffile: ensure start offset is aligned with BLCKSZ" }, { "msg_contents": "Sasasu <i@sasa.su> wrote:\n\n> Hi hackers,\n> \n> there are a very long discuss about TDE, and we agreed on that if the\n> temporary file I/O can be aligned to some fixed size, it will be easier\n> to use some kind of encryption algorithm.\n> \n> discuss:\n> https://www.postgresql.org/message-id/20211025155814.GD20998%40tamriel.snowman.net\n> \n> This patch adjust file->curOffset and file->pos before the real IO to\n> ensure the start offset is aligned.\n\nDoes this patch really pass the regression tests? In BufFileRead(), I would\nunderstand if you did\n\n+\t\t\tfile->pos = offsetInBlock;\n+\t\t\tfile->curOffset -= offsetInBlock;\n\nrather than\n\n+\t\t\tfile->pos += offsetInBlock;\n+\t\t\tfile->curOffset -= offsetInBlock;\n\nAnyway, BufFileDumpBuffer() does not seem to enforce curOffset to end up at\na block boundary, not to mention BufFileSeek().\n\nWhen I was implementing this for our fork [1], I concluded that the encryption\ncode path is too specific, so I left the existing code for the unencrypted data\nand added separate functions for the encrypted data.\n\nOne specific thing is that if you encrypt and write n bytes, but only need\npart of it later, you need to read and decrypt exactly those n bytes anyway,\notherwise the decryption won't work. 
So I decided not only to keep curOffset\nat BLCKSZ boundary, but also to read / write BLCKSZ bytes at a time. This also\nmakes sense if the scope of the initialization vector (IV) is BLCKSZ bytes.\n\nAnother problem is that you might want to store the IV somewhere in between\nthe data. In short, the encryption makes the buffered IO rather different and\nthe specific code should be kept aside, although the same API is used to\ninvoke it.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n[1] https://github.com/cybertec-postgresql/postgres/tree/PG_14_TDE_1_1\n\n\n", "msg_date": "Mon, 29 Nov 2021 11:05:00 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] buffile: ensure start offset is aligned with BLCKSZ" }, { "msg_contents": "On 2021/11/29 18:05, Antonin Houska wrote:\n> Does this test really pass regression tests? In BufFileRead(), I would\n> understand if you did\n> \n> +\t\t\tfile->pos = offsetInBlock;\n> +\t\t\tfile->curOffset -= offsetInBlock;\n> \n> rather than\n> \n> +\t\t\tfile->pos += offsetInBlock;\n> +\t\t\tfile->curOffset -= offsetInBlock;\n\nIt passes all regression tests. This patch is compatible with\nBufFileSeek().\n\nTo generate a correct alignment, we need to make sure\n pos_new + offset_new = pos_old + offset_old \n offset_new = offset_old - offset_old % BLCKSZ\nIt means\n pos_new = pos_old + offset_old % BLCKSZ\n = pos_old + \"offsetInBlock\"\n\nWith your code, the backend will read the wrong buffile data at the end of buffile\nreading. For example: physical file size = 20 and pos = 10, off = 10,\nread start at 20. 
After the '=' code: pos = 10, off = 0, the read would start at\n10, which is wrong.\n\n> Anyway, BufFileDumpBuffer() does not seem to enforce curOffset to end up at\n> a block boundary, not to mention BufFileSeek().\n> \n> When I was implementing this for our fork [1], I concluded that the encryption\n> code path is too specific, so I left the existing code for the unencrypted data\n> and added separate functions for the encrypted data.\n> \n> One specific thing is that if you encrypt and write n bytes, but only need\n> part of it later, you need to read and decrypt exactly those n bytes anyway,\n> otherwise the decryption won't work. So I decided not only to keep curOffset\n> at BLCKSZ boundary, but also to read / write BLCKSZ bytes at a time. This also\n> makes sense if the scope of the initialization vector (IV) is BLCKSZ bytes.\n> \n> Another problem is that you might want to store the IV somewhere in between\n> the data. In short, the encryption makes the buffered IO rather different and\n> the specific code should be kept aside, although the same API is used to\n> invoke it.\n> \nBut I want to make fewer changes to the existing code. With this patch, 
the only\ncode added to the critical code path is this:\n\ndiff --git a/src/backend/storage/file/buffile.c b/src/backend/storage/file/buffile.c\nindex 3be08eb723..ceae85584b 100644\n--- a/src/backend/storage/file/buffile.c\n+++ b/src/backend/storage/file/buffile.c\n@@ -512,6 +512,9 @@ BufFileDumpBuffer(BufFile *file)\n \t\t/* and the buffer is aligned with BLCKSZ */\n \t\tAssert(file->curOffset % BLCKSZ == 0);\n \n+\t\t/* encrypt before write */\n+\t\tTBD_ENC(file->buffer.data + wpos /* buffer */, bytestowrite /* size */, file->curOffset /* context to find IV */);\n+\n \t\tthisfile = file->files[file->curFile];\n \t\tbytestowrite = FileWrite(thisfile,\n \t\t\t\t\t\t\t\t file->buffer.data + wpos,\n@@ -582,6 +585,9 @@ BufFileRead(BufFile *file, void *ptr, size_t size)\n \t\t\tBufFileLoadBuffer(file);\n \t\t\tif (file->nbytes <= 0 || (file->nbytes == file->pos && file->nbytes != BLCKSZ))\n \t\t\t\tbreak;\t\t\t/* no more data available */\n+\n+\t\t\t/* decrypt after read */\n+\t\t\tTBD_DEC(file->buffer /* buffer */, file->nbytes /* size */, file->curOffset /* context to find IV */);\n \t\t}\n \n \t\tnthistime = file->nbytes - file->pos;\n\nThose changes will allow TDE to use any encryption algorithm (the read offset\nand write offset are matched) and to implement on-the-fly IV generation.", "msg_date": "Tue, 30 Nov 2021 13:55:29 +0800", "msg_from": "Sasasu <i@sasa.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] buffile: ensure start offset is aligned with BLCKSZ" } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 17302\nLogged by: Alexander Lakhin\nEmail address: exclusion@gmail.com\nPostgreSQL version: 14.1\nOperating system: Ubuntu 20.04\nDescription: \n\nThe last statement in the following sequence of queries:\r\nCREATE TABLE point_tbl (f1 point);\r\nCREATE INDEX gpointind ON point_tbl USING gist (f1);\r\nINSERT INTO point_tbl SELECT '(0,0)'::point FROM generate_series(1, 1000)\ng;\r\nINSERT INTO point_tbl VALUES ('(1e-300,-1e-300)'::point);\r\nproduces:\r\nERROR: value out of range: underflow\r\n(The error occurs inside gist_box_penalty()->box_penalty()->size_box().)\r\nBut the following sequence:\r\nCREATE TABLE point_tbl (f1 point);\r\nINSERT INTO point_tbl SELECT '(0,0)'::point FROM generate_series(1, 1000)\ng;\r\nINSERT INTO point_tbl VALUES ('(1e-300,-1e-300)'::point);\r\nexecutes without an error. Moreover, the same index can be created\nsuccessfully after the insertion. The error also depends on the number of\npoints inserted in the first step.", "msg_date": "Sun, 28 Nov 2021 18:00:01 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #17302: gist index prevents insertion of some data" }, { "msg_contents": "On Sun, Nov 28, 2021 at 9:07 PM PG Bug reporting form\n<noreply@postgresql.org> wrote:\n> The last statement in the following sequence of queries:\n> CREATE TABLE point_tbl (f1 point);\n> CREATE INDEX gpointind ON point_tbl USING gist (f1);\n> INSERT INTO point_tbl SELECT '(0,0)'::point FROM generate_series(1, 1000)\n> g;\n> INSERT INTO point_tbl VALUES ('(1e-300,-1e-300)'::point);\n> produces:\n> ERROR: value out of range: underflow\n> (The error occurs inside gist_box_penalty()->box_penalty()->size_box().)\n> But the following sequence:\n> CREATE TABLE point_tbl (f1 point);\n> INSERT INTO point_tbl SELECT '(0,0)'::point FROM generate_series(1, 1000)\n> g;\n> INSERT INTO point_tbl VALUES 
('(1e-300,-1e-300)'::point);\n> executes without an error. Moreover, the same index can be created\n> successfully after the insertion. The error also depends on the number of\n> points inserted in the first step.\n\nI think losing precision in the gist penalty is generally OK. Thus,\nit shouldn't be a problem to round a very small value as zero.\nProbably, we could even tolerate overflow in the gist penalty. Overflow\nshould be much worse than underflow, because we might consider a very bad\npenalty as very good (or vice versa). But it still affects only index\nquality, not correctness.\n\nAny thoughts?\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 2 Dec 2021 01:08:27 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #17302: gist index prevents insertion of some data" }, { "msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> I think losing precision in the gist penalty is generally OK. Thus,\n> it shouldn't be a problem to round a very small value as zero.\n\nCheck.\n\n> Probably, we could even tolerate overflow in the gist penalty.\n\nAs long as overflow -> infinity, yeah I think so. Seems like it\nwas a mistake to insert the overflow-testing functions in this code\nat all, and we should simplify it down to plain C addition/subtraction/\nmultiplication.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Dec 2021 17:14:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #17302: gist index prevents insertion of some data" }, { "msg_contents": "On Thu, Dec 2, 2021 at 1:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > I think losing precision in the gist penalty is generally OK. 
Thus,\n> > it shouldn't be a problem to round a very small value as zero.\n>\n> Check.\n>\n> > Probably, we could even tolerate overflow in the gist penalty.\n>\n> As long as overflow -> infinity, yeah I think so. Seems like it\n> was a mistake to insert the overflow-testing functions in this code\n> at all, and we should simplify it down to plain C addition/subtraction/\n> multiplication.\n>\n\nThe underflow should not throw an interrupting exception ever, even on\nplain SQL-level calculations.\n\nThe code to implement was added in error by a series of misunderstandings\nand gets in the way of simple things too often. I dug into the history here:\n\nhttps://www.postgresql.org/message-id/CAC8Q8t%2BXJH68WB%2BsKN0BV0uGc3ZjA2DtbQuoJ5EhB4JAcS0C%2BQ%40mail.gmail.com\n\n>\n> regards, tom lane\n>\n", "msg_date": "Thu, 2 Dec 2021 09:02:09 +0300", "msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>", "msg_from_op": false, "msg_subject": "Re: BUG #17302: gist index prevents insertion of some data" } ]
[ { "msg_contents": "Hi Hackers,\n\nWhen the standby couldn't connect to the primary it switches the XLog\nsource from streaming to archive and continues in that state until it can\nget the WAL from the archive location. On a server with high WAL activity,\ntypically getting the WAL from the archive is slower than streaming it from\nthe primary and couldn't exit from that state. This not only increases the\nlag on the standby but also adversely impacts the primary as the WAL gets\naccumulated, and vacuum is not able to collect the dead tuples. DBAs as a\nmitigation can however remove/advance the slot or remove the\nrestore_command on the standby but this is a manual work I am trying to\navoid. I would like to propose the following, please let me know your\nthoughts.\n\n - Automatically attempt to switch the source from Archive to streaming\n when the primary_conninfo is set after replaying 'N' wal segment governed\n by the GUC retry_primary_conn_after_wal_segments\n - when retry_primary_conn_after_wal_segments is set to -1 then the\n feature is disabled\n - When the retry attempt fails, then switch back to the archive\n\nThanks,\nSatya", "msg_date": "Sun, 28 Nov 2021 12:00:34 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": true, "msg_subject": "Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Mon, Nov 29, 2021 at 1:30 AM SATYANARAYANA NARLAPURAM\n<satyanarlapuram@gmail.com> wrote:\n>\n> Hi Hackers,\n>\n> When the standby couldn't connect to the primary it switches the XLog source from streaming to archive and continues in that state until it can get the WAL from the archive location. On a server with high WAL activity, typically getting the WAL from the archive is slower than streaming it from the primary and couldn't exit from that state. This not only increases the lag on the standby but also adversely impacts the primary as the WAL gets accumulated, and vacuum is not able to collect the dead tuples. DBAs as a mitigation can however remove/advance the slot or remove the restore_command on the standby but this is a manual work I am trying to avoid. 
I would like to propose the following, please let me know your thoughts.\n>\n> Automatically attempt to switch the source from Archive to streaming when the primary_conninfo is set after replaying 'N' wal segment governed by the GUC retry_primary_conn_after_wal_segments\n> when retry_primary_conn_after_wal_segments is set to -1 then the feature is disabled\n> When the retry attempt fails, then switch back to the archive\n\nI think there is another thread [1] that is logically trying to solve\na similar issue; basically, in the main recovery apply loop, if the\nwalreceiver does not exist then it launches the walreceiver.\nHowever, in that patch, it is not changing the current Xlog source but\nI think that is not a good idea because with that it will restore from\nthe archive as well as stream from the primary, so I have given that\nreview comment on that thread as well. One big difference is that\npatch is launching the walreceiver even if the WAL is locally\navailable and we don't really need more WAL, but that is controlled by\na GUC.\n\n[1] https://www.postgresql.org/message-id/CAKYtNApe05WmeRo92gTePEmhOM4myMpCK_%2BceSJtC7-AWLw1qw%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Nov 2021 18:02:58 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Mon, Nov 29, 2021 at 1:30 AM SATYANARAYANA NARLAPURAM\n<satyanarlapuram@gmail.com> wrote:\n>\n> Hi Hackers,\n>\n> When the standby couldn't connect to the primary it switches the XLog source from streaming to archive and continues in that state until it can get the WAL from the archive location. On a server with high WAL activity, typically getting the WAL from the archive is slower than streaming it from the primary and couldn't exit from that state. 
This not only increases the lag on the standby but also adversely impacts the primary as the WAL gets accumulated, and vacuum is not able to collect the dead tuples. DBAs as a mitigation can however remove/advance the slot or remove the restore_command on the standby but this is a manual work I am trying to avoid. I would like to propose the following, please let me know your thoughts.\n>\n> Automatically attempt to switch the source from Archive to streaming when the primary_conninfo is set after replaying 'N' wal segment governed by the GUC retry_primary_conn_after_wal_segments\n> when retry_primary_conn_after_wal_segments is set to -1 then the feature is disabled\n> When the retry attempt fails, then switch back to the archive\n\nI've gone through the state machine in WaitForWALToBecomeAvailable and\nI understand it this way: a failure to receive WAL records from the\nprimary causes the current source to switch to archive, and the standby\ncontinues to get WAL records from the archive location; unless some failure\noccurs there, the current source is never going to switch back to\nstream. Given the fact that getting WAL from the archive location causes\ndelay in production environments, we miss the opportunity to reconnect to\nthe primary after a previous failed attempt.\n\nSo basically, we attempt to switch to streaming from archive\n(even though fetching from archive can succeed) after a certain amount\nof time or number of WAL segments. I prefer timing-based switch to streaming\nfrom archive instead of after a number of WAL segments fetched from\narchive. 
Right now, wal_retrieve_retry_interval is being used to wait\nbefore switching to archive after failed attempt from streaming, IMO,\na similar GUC (that gets set once the source switched from streaming\nto archive and on timeout it switches to streaming again) can be used\nto switch from archive to streaming after the specified amount of\ntime.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 30 Apr 2022 18:19:09 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Sat, Apr 30, 2022 at 6:19 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Nov 29, 2021 at 1:30 AM SATYANARAYANA NARLAPURAM\n> <satyanarlapuram@gmail.com> wrote:\n> >\n> > Hi Hackers,\n> >\n> > When the standby couldn't connect to the primary it switches the XLog source from streaming to archive and continues in that state until it can get the WAL from the archive location. On a server with high WAL activity, typically getting the WAL from the archive is slower than streaming it from the primary and couldn't exit from that state. This not only increases the lag on the standby but also adversely impacts the primary as the WAL gets accumulated, and vacuum is not able to collect the dead tuples. DBAs as a mitigation can however remove/advance the slot or remove the restore_command on the standby but this is a manual work I am trying to avoid. 
I would like to propose the following, please let me know your thoughts.\n> >\n> > Automatically attempt to switch the source from Archive to streaming when the primary_conninfo is set after replaying 'N' wal segment governed by the GUC retry_primary_conn_after_wal_segments\n> > when retry_primary_conn_after_wal_segments is set to -1 then the feature is disabled\n> > When the retry attempt fails, then switch back to the archive\n>\n> I've gone through the state machine in WaitForWALToBecomeAvailable and\n> I understand it this way: failed to receive WAL records from the\n> primary causes the current source to switch to archive and the standby\n> continues to get WAL records from archive location unless some failure\n> occurs there the current source is never going to switch back to\n> stream. Given the fact that getting WAL from archive location causes\n> delay in production environments, we miss to take the advantage of the\n> reconnection to primary after previous failed attempt.\n>\n> So basically, we try to attempt to switch to streaming from archive\n> (even though fetching from archive can succeed) after a certain amount\n> of time or WAL segments. I prefer timing-based switch to streaming\n> from archive instead of after a number of WAL segments fetched from\n> archive. Right now, wal_retrieve_retry_interval is being used to wait\n> before switching to archive after failed attempt from streaming, IMO,\n> a similar GUC (that gets set once the source switched from streaming\n> to archive and on timeout it switches to streaming again) can be used\n> to switch from archive to streaming after the specified amount of\n> time.\n>\n> Thoughts?\n\nHere's a v1 patch that I've come up with. I'm right now using the\nexisting GUC wal_retrieve_retry_interval to switch to stream mode from\narchive mode as opposed to switching only after the failure to get WAL\nfrom archive mode. 
If okay with the approach, I can add tests, change\nthe docs and add a new GUC to control this behaviour. I'm open to\nthoughts and ideas here.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Tue, 24 May 2022 21:48:05 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nHello\r\n\r\nI tested this patch in a setup where the standby is in the middle of replicating and REDOing primary's WAL files during a very large data insertion. During this time, I keep killing the walreceiver process to cause a stream failure and force standby to read from archive. The system will restore from archive for \"wal_retrieve_retry_interval\" seconds before it attempts to steam again. Without this patch, once the streaming is interrupted, it keeps reading from archive until standby reaches the same consistent state of primary and then it will switch back to streaming again. So it seems that the patch does the job as described and does bring some benefit during a very large REDO job where it will try to re-stream after restoring some WALs from archive to speed up this \"catch up\" process. 
But if the recovery job is not a large one, PG is already switching back to streaming once it hits consistent state.\r\n\r\nthank you\r\n\r\nCary Huang\r\nHighGo Software Canada", "msg_date": "Fri, 24 Jun 2022 20:00:38 +0000", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Sat, Jun 25, 2022 at 1:31 AM Cary Huang <cary.huang@highgo.ca> wrote:\n>\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n>\n> Hello\n>\n> I tested this patch in a setup where the standby is in the middle of replicating and REDOing primary's WAL files during a very large data insertion. During this time, I keep killing the walreceiver process to cause a stream failure and force standby to read from archive. The system will restore from archive for \"wal_retrieve_retry_interval\" seconds before it attempts to steam again. Without this patch, once the streaming is interrupted, it keeps reading from archive until standby reaches the same consistent state of primary and then it will switch back to streaming again. So it seems that the patch does the job as described and does bring some benefit during a very large REDO job where it will try to re-stream after restoring some WALs from archive to speed up this \"catch up\" process. But if the recovery job is not a large one, PG is already switching back to streaming once it hits consistent state.\n\nThanks a lot Cary for testing the patch.\n\n> Here's a v1 patch that I've come up with. I'm right now using the\n> existing GUC wal_retrieve_retry_interval to switch to stream mode from\n> archive mode as opposed to switching only after the failure to get WAL\n> from archive mode. 
If okay with the approach, I can add tests, change\n> the docs and add a new GUC to control this behaviour. I'm open to\n> thoughts and ideas here.\n\nIt will be great if I can hear some thoughts on the above points (as\nposted upthread).\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 8 Jul 2022 21:16:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Fri, Jul 8, 2022 at 9:16 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sat, Jun 25, 2022 at 1:31 AM Cary Huang <cary.huang@highgo.ca> wrote:\n> >\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: tested, passed\n> > Implements feature: tested, passed\n> > Spec compliant: not tested\n> > Documentation: not tested\n> >\n> > Hello\n> >\n> > I tested this patch in a setup where the standby is in the middle of replicating and REDOing primary's WAL files during a very large data insertion. During this time, I keep killing the walreceiver process to cause a stream failure and force standby to read from archive. The system will restore from archive for \"wal_retrieve_retry_interval\" seconds before it attempts to steam again. Without this patch, once the streaming is interrupted, it keeps reading from archive until standby reaches the same consistent state of primary and then it will switch back to streaming again. So it seems that the patch does the job as described and does bring some benefit during a very large REDO job where it will try to re-stream after restoring some WALs from archive to speed up this \"catch up\" process. But if the recovery job is not a large one, PG is already switching back to streaming once it hits consistent state.\n>\n> Thanks a lot Cary for testing the patch.\n>\n> > Here's a v1 patch that I've come up with. 
I'm right now using the\n> > existing GUC wal_retrieve_retry_interval to switch to stream mode from\n> > archive mode as opposed to switching only after the failure to get WAL\n> > from archive mode. If okay with the approach, I can add tests, change\n> > the docs and add a new GUC to control this behaviour. I'm open to\n> > thoughts and ideas here.\n>\n> It will be great if I can hear some thoughts on the above points (as\n> posted upthread).\n\nHere's the v2 patch with a separate GUC, new GUC was necessary as the\nexisting GUC wal_retrieve_retry_interval is loaded with multiple\nusages. When the feature is enabled, it will let standby to switch to\nstream mode i.e. fetch WAL from primary before even fetching from\narchive fails. The switching to stream mode from archive happens in 2\nscenarios: 1) when standby is in initial recovery 2) when there was a\nfailure in receiving from primary (walreceiver got killed or crashed\nor timed out, or connectivity to primary was broken - for whatever\nreasons).\n\nI also added test cases to the v2 patch.\n\nPlease review the patch.\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/", "msg_date": "Thu, 11 Aug 2022 21:08:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "+ <indexterm>\n+ <primary><varname>wal_source_switch_interval</varname> configuration parameter</primary>\n+ </indexterm>\n\nI don't want to bikeshed on the name too much, but I do think we need\nsomething more descriptive. I'm thinking of something like\nstreaming_replication_attempt_interval or\nstreaming_replication_retry_interval.\n\n+ Specifies how long the standby server should wait before switching WAL\n+ source from WAL archive to primary (streaming replication). 
This can\n+ happen either during the standby initial recovery or after a previous\n+ failed attempt to stream WAL from the primary.\n\nI'm not sure what the second sentence means. In general, I think the\nexplanation in your commit message is much clearer:\n\n\tThe standby makes an attempt to read WAL from primary after\n\twal_retrieve_retry_interval milliseconds reading from archive.\n\n+ If this value is specified without units, it is taken as milliseconds.\n+ The default value is 5 seconds. A setting of <literal>0</literal>\n+ disables the feature.\n\n5 seconds seems low. I would expect the default to be 1-5 minutes. I\nthink it's important to strike a balance between interrupting archive\nrecovery to attempt streaming replication and letting archive recovery make\nprogress.\n\n+\t * Try reading WAL from primary after every wal_source_switch_interval\n+\t * milliseconds, when state machine is in XLOG_FROM_ARCHIVE state. If\n+\t * successful, the state machine moves to XLOG_FROM_STREAM state, otherwise\n+\t * it falls back to XLOG_FROM_ARCHIVE state.\n\nIt's not clear to me how this is expected to interact with the pg_wal phase\nof standby recovery. As the docs note [0], standby servers loop through\narchive recovery, recovery from pg_wal, and streaming replication. Does\nthis cause the pg_wal phase to be skipped (i.e., the standby goes straight\nfrom archive recovery to streaming replication)? 
I wonder if it'd be\nbetter for this mechanism to simply move the standby to the pg_wal phase so\nthat the usual ordering isn't changed.\n\n+\t\t\t\t\tif (!first_time &&\n+\t\t\t\t\t\tTimestampDifferenceExceeds(last_switch_time, curr_time,\n+\t\t\t\t\t\t\t\t\t\t\t\t wal_source_switch_interval))\n\nShouldn't this also check that wal_source_switch_interval is not set to 0?\n\n+\t\t\t\t\t\telog(DEBUG2,\n+\t\t\t\t\t\t\t \"trying to switch WAL source to %s after fetching WAL from %s for %d milliseconds\",\n+\t\t\t\t\t\t\t xlogSourceNames[XLOG_FROM_STREAM],\n+\t\t\t\t\t\t\t xlogSourceNames[currentSource],\n+\t\t\t\t\t\t\t wal_source_switch_interval);\n+\n+\t\t\t\t\t\tlast_switch_time = curr_time;\n\nShouldn't the last_switch_time be set when the state machine first enters\nXLOG_FROM_ARCHIVE? IIUC this logic is currently counting time spent\nelsewhere (e.g., XLOG_FROM_STREAM) when determining whether to force a\nsource switch. This would mean that a standby that has spent a lot of time\nin streaming replication before failing would flip to XLOG_FROM_ARCHIVE,\nimmediately flip back to XLOG_FROM_STREAM, and then likely flip back to\nXLOG_FROM_ARCHIVE when it failed again. Given the standby will wait for\nwal_retrieve_retry_interval before going back to XLOG_FROM_ARCHIVE, it\nseems like we could end up rapidly looping between sources. Perhaps I am\nmisunderstanding how this is meant to work.\n\n+\t{\n+\t\t{\"wal_source_switch_interval\", PGC_SIGHUP, REPLICATION_STANDBY,\n+\t\t\tgettext_noop(\"Sets the time to wait before switching WAL source from archive to primary\"),\n+\t\t\tgettext_noop(\"0 turns this feature off.\"),\n+\t\t\tGUC_UNIT_MS\n+\t\t},\n+\t\t&wal_source_switch_interval,\n+\t\t5000, 0, INT_MAX,\n+\t\tNULL, NULL, NULL\n+\t},\n\nI wonder if the lower bound should be higher to avoid switching\nunnecessarily rapidly between WAL sources. 
I see that\nWaitForWALToBecomeAvailable() ensures that standbys do not switch from\nXLOG_FROM_STREAM to XLOG_FROM_ARCHIVE more often than once per\nwal_retrieve_retry_interval. Perhaps wal_retrieve_retry_interval should be\nthe lower bound for this GUC, too. Or maybe WaitForWALToBecomeAvailable()\nshould make sure that the standby makes at least one attempt to restore\nthe file from archive before switching to streaming replication.\n\n[0] https://www.postgresql.org/docs/current/warm-standby.html#STANDBY-SERVER-OPERATION\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 6 Sep 2022 14:57:04 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Wed, Sep 7, 2022 at 3:27 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> + <indexterm>\n> + <primary><varname>wal_source_switch_interval</varname> configuration parameter</primary>\n> + </indexterm>\n>\n> I don't want to bikeshed on the name too much, but I do think we need\n> something more descriptive. I'm thinking of something like\n> streaming_replication_attempt_interval or\n> streaming_replication_retry_interval.\n\nI could come up with wal_source_switch_interval after a lot of\nbikeshedding myself :). However, streaming_replication_retry_interval\nlooks much better, I've used it in the latest patch. Thanks.\n\n> + Specifies how long the standby server should wait before switching WAL\n> + source from WAL archive to primary (streaming replication). This can\n> + happen either during the standby initial recovery or after a previous\n> + failed attempt to stream WAL from the primary.\n>\n> I'm not sure what the second sentence means. In general, I think the\n> explanation in your commit message is much clearer:\n\nI polished the comments, docs and commit message a bit, please check now.\n\n> 5 seconds seems low. 
I would expect the default to be 1-5 minutes. I\n> think it's important to strike a balance between interrupting archive\n> recovery to attempt streaming replication and letting archive recovery make\n> progress.\n\n+1 for a default value of 5 minutes to avoid frequent interruptions\nfor archive mode when primary is really down for a long time. I've\nalso added a cautionary note in the docs about the lower values.\n\n> + * Try reading WAL from primary after every wal_source_switch_interval\n> + * milliseconds, when state machine is in XLOG_FROM_ARCHIVE state. If\n> + * successful, the state machine moves to XLOG_FROM_STREAM state, otherwise\n> + * it falls back to XLOG_FROM_ARCHIVE state.\n>\n> It's not clear to me how this is expected to interact with the pg_wal phase\n> of standby recovery. As the docs note [0], standby servers loop through\n> archive recovery, recovery from pg_wal, and streaming replication. Does\n> this cause the pg_wal phase to be skipped (i.e., the standby goes straight\n> from archive recovery to streaming replication)? I wonder if it'd be\n> better for this mechanism to simply move the standby to the pg_wal phase so\n> that the usual ordering isn't changed.\n>\n> [0] https://www.postgresql.org/docs/current/warm-standby.html#STANDBY-SERVER-OPERATION\n\nIt doesn't change any behaviour as such for XLOG_FROM_PG_WAL. In\nstandby mode when recovery_command is specified, the initial value of\ncurrentSource would be XLOG_FROM_ARCHIVE (see [1]). If the archive is\nexhausted of WAL or the standby fails to fetch from the archive, then\nit switches to XLOG_FROM_STREAM. If the standby fails to receive WAL\nfrom primary, it switches back to XLOG_FROM_ARCHIVE. This continues\nunless the standby gets promoted. 
With the patch, we enable the\nstandby to try fetching from the primary, instead of waiting for WAL\nin the archive to get exhausted or for an error to occur in the\nstandby while receiving from the archive.\n\n> + if (!first_time &&\n> + TimestampDifferenceExceeds(last_switch_time, curr_time,\n> + wal_source_switch_interval))\n>\n> Shouldn't this also check that wal_source_switch_interval is not set to 0?\n\nCorrected.\n\n> + elog(DEBUG2,\n> + \"trying to switch WAL source to %s after fetching WAL from %s for %d milliseconds\",\n> + xlogSourceNames[XLOG_FROM_STREAM],\n> + xlogSourceNames[currentSource],\n> + wal_source_switch_interval);\n> +\n> + last_switch_time = curr_time;\n>\n> Shouldn't the last_switch_time be set when the state machine first enters\n> XLOG_FROM_ARCHIVE? IIUC this logic is currently counting time spent\n> elsewhere (e.g., XLOG_FROM_STREAM) when determining whether to force a\n> source switch. This would mean that a standby that has spent a lot of time\n> in streaming replication before failing would flip to XLOG_FROM_ARCHIVE,\n> immediately flip back to XLOG_FROM_STREAM, and then likely flip back to\n> XLOG_FROM_ARCHIVE when it failed again. Given the standby will wait for\n> wal_retrieve_retry_interval before going back to XLOG_FROM_ARCHIVE, it\n> seems like we could end up rapidly looping between sources. Perhaps I am\n> misunderstanding how this is meant to work.\n\nlast_switch_time indicates the time when the standby last attempted to\nswitch to primary. 
For instance, a standby:\n1) for the first time, sets last_switch_time = current_time when in archive mode\n2) if current_time < last_switch_time + interval, continues to be in\narchive mode\n3) if current_time >= last_switch_time + interval, attempts to switch\nto primary and sets last_switch_time = current_time\n3.1) if successfully switches to primary, continues in there and for\nany reason fails to fetch from primary, then enters archive mode and\nloops from step (2)\n3.2) if fails to switch to primary, then enters archive mode and loops\nfrom step (2)\n\nHope this clarifies the behaviour.\n\n> + {\n> + {\"wal_source_switch_interval\", PGC_SIGHUP, REPLICATION_STANDBY,\n> + gettext_noop(\"Sets the time to wait before switching WAL source from archive to primary\"),\n> + gettext_noop(\"0 turns this feature off.\"),\n> + GUC_UNIT_MS\n> + },\n> + &wal_source_switch_interval,\n> + 5000, 0, INT_MAX,\n> + NULL, NULL, NULL\n> + },\n>\n> I wonder if the lower bound should be higher to avoid switching\n> unnecessarily rapidly between WAL sources. I see that\n> WaitForWALToBecomeAvailable() ensures that standbys do not switch from\n> XLOG_FROM_STREAM to XLOG_FROM_ARCHIVE more often than once per\n> wal_retrieve_retry_interval. Perhaps wal_retrieve_retry_interval should be\n> the lower bound for this GUC, too. Or maybe WaitForWALToBecomeAvailable()\n> should make sure that the standby makes at least once attempt to restore\n> the file from archive before switching to streaming replication.\n\nNo, we need a way to disable the feature, so I'm not changing the\nlower bound. And let's not make this GUC dependent on any other GUC, I\nwould like to keep it simple for better usability. 
However, I've\nincreased the default value to 5min and added a note in the docs about\nthe lower values.\n\nI'm attaching the v3 patch with the review comments addressed, please\nreview it further.\n\n[1]\n if (!InArchiveRecovery)\n currentSource = XLOG_FROM_PG_WAL;\n else if (currentSource == XLOG_FROM_ANY ||\n (!StandbyMode && currentSource == XLOG_FROM_STREAM))\n {\n lastSourceFailed = false;\n currentSource = XLOG_FROM_ARCHIVE;\n }\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 8 Sep 2022 17:16:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Thu, Sep 08, 2022 at 05:16:53PM +0530, Bharath Rupireddy wrote:\n> On Wed, Sep 7, 2022 at 3:27 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> It's not clear to me how this is expected to interact with the pg_wal phase\n>> of standby recovery. As the docs note [0], standby servers loop through\n>> archive recovery, recovery from pg_wal, and streaming replication. Does\n>> this cause the pg_wal phase to be skipped (i.e., the standby goes straight\n>> from archive recovery to streaming replication)? I wonder if it'd be\n>> better for this mechanism to simply move the standby to the pg_wal phase so\n>> that the usual ordering isn't changed.\n> \n> It doesn't change any behaviour as such for XLOG_FROM_PG_WAL. In\n> standby mode when restore_command is specified, the initial value of\n> currentSource would be XLOG_FROM_ARCHIVE (see [1]). If the archive is\n> exhausted of WAL or the standby fails to fetch from the archive, then\n> it switches to XLOG_FROM_STREAM. If the standby fails to receive WAL\n> from primary, it switches back to XLOG_FROM_ARCHIVE. This continues\n> unless the standby gets promoted. 
With the patch, we enable the\n> standby to try fetching from the primary, instead of waiting for WAL\n> in the archive to get exhausted or for an error to occur in the\n> standby while receiving from the archive.\n\nOkay. I see that you are checking for XLOG_FROM_ARCHIVE.\n\n>> Shouldn't the last_switch_time be set when the state machine first enters\n>> XLOG_FROM_ARCHIVE? IIUC this logic is currently counting time spent\n>> elsewhere (e.g., XLOG_FROM_STREAM) when determining whether to force a\n>> source switch. This would mean that a standby that has spent a lot of time\n>> in streaming replication before failing would flip to XLOG_FROM_ARCHIVE,\n>> immediately flip back to XLOG_FROM_STREAM, and then likely flip back to\n>> XLOG_FROM_ARCHIVE when it failed again. Given the standby will wait for\n>> wal_retrieve_retry_interval before going back to XLOG_FROM_ARCHIVE, it\n>> seems like we could end up rapidly looping between sources. Perhaps I am\n>> misunderstanding how this is meant to work.\n> \n> last_switch_time indicates the time when the standby last attempted to\n> switch to primary. For instance, a standby:\n> 1) for the first time, sets last_switch_time = current_time when in archive mode\n> 2) if current_time < last_switch_time + interval, continues to be in\n> archive mode\n> 3) if current_time >= last_switch_time + interval, attempts to switch\n> to primary and sets last_switch_time = current_time\n> 3.1) if successfully switches to primary, continues in there and for\n> any reason fails to fetch from primary, then enters archive mode and\n> loops from step (2)\n> 3.2) if fails to switch to primary, then enters archive mode and loops\n> from step (2)\n\nLet's say I have this new parameter set to 5 minutes, and I have a standby\nthat's been at step 3.1 for 5 days before failing and going back to step 2.\nWon't the standby immediately jump back to step 3.1? 
I think we should\nplace the limit on how long the server stays in XLOG_FROM_ARCHIVE, not how\nlong it's been since we last tried XLOG_FROM_STREAM.\n\n>> I wonder if the lower bound should be higher to avoid switching\n>> unnecessarily rapidly between WAL sources. I see that\n>> WaitForWALToBecomeAvailable() ensures that standbys do not switch from\n>> XLOG_FROM_STREAM to XLOG_FROM_ARCHIVE more often than once per\n>> wal_retrieve_retry_interval. Perhaps wal_retrieve_retry_interval should be\n>> the lower bound for this GUC, too. Or maybe WaitForWALToBecomeAvailable()\n>> should make sure that the standby makes at least once attempt to restore\n>> the file from archive before switching to streaming replication.\n> \n> No, we need a way to disable the feature, so I'm not changing the\n> lower bound. And let's not make this GUC dependent on any other GUC, I\n> would like to keep it simple for better usability. However, I've\n> increased the default value to 5min and added a note in the docs about\n> the lower values.\n> \n> I'm attaching the v3 patch with the review comments addressed, please\n> review it further.\n\nMy general point is that we should probably offer some basic preventative\nmeasure against flipping back and forth between streaming and archive\nrecovery while making zero progress. As I noted, maybe that's as simple as\nhaving WaitForWALToBecomeAvailable() attempt to restore a file from archive\nat least once before the new parameter forces us to switch to streaming\nreplication. There might be other ways to handle this.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 8 Sep 2022 10:53:56 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "Being late for the party.\n\nIt seems to me that the function is getting too long. 
I think we\nmight want to move the core part of the patch into another function.\n\nI think it might be better if intentionalSourceSwitch doesn't need\nlastSourceFailed set. It would look like this:\n\n> if (lastSourceFailed || switchSource)\n> {\n> if (nonblocking && lastSourceFailed)\n> return XLREAD_WOULDBLOCK;\n\n\n+\t\t\t\t\tif (first_time)\n+\t\t\t\t\t\tlast_switch_time = curr_time;\n..\n+\t\t\t\t\tif (!first_time &&\n+\t\t\t\t\t\tTimestampDifferenceExceeds(last_switch_time, curr_time,\n..\n+\t\t\t\t\t/* We're not here for the first time any more */\n+\t\t\t\t\tif (first_time)\n+\t\t\t\t\t\tfirst_time = false;\n\nI don't think the flag first_time is needed.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 09 Sep 2022 14:16:17 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "At Thu, 8 Sep 2022 10:53:56 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Thu, Sep 08, 2022 at 05:16:53PM +0530, Bharath Rupireddy wrote:\n> > I'm attaching the v3 patch with the review comments addressed, please\n> > review it further.\n> \n> My general point is that we should probably offer some basic preventative\n> measure against flipping back and forth between streaming and archive\n> recovery while making zero progress. As I noted, maybe that's as simple as\n> having WaitForWALToBecomeAvailable() attempt to restore a file from archive\n> at least once before the new parameter forces us to switch to streaming\n> replication. 
There might be other ways to handle this.\n\n+1.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 09 Sep 2022 14:26:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Fri, Sep 9, 2022 at 10:57 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 8 Sep 2022 10:53:56 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in\n> > On Thu, Sep 08, 2022 at 05:16:53PM +0530, Bharath Rupireddy wrote:\n> > > I'm attaching the v3 patch with the review comments addressed, please\n> > > review it further.\n> >\n> > My general point is that we should probably offer some basic preventative\n> > measure against flipping back and forth between streaming and archive\n> > recovery while making zero progress. As I noted, maybe that's as simple as\n> > having WaitForWALToBecomeAvailable() attempt to restore a file from archive\n> > at least once before the new parameter forces us to switch to streaming\n> > replication. There might be other ways to handle this.\n>\n> +1.\n\nHm. In that case, I think we can get rid of timeout based switching\nmechanism and have this behaviour - the standby can attempt to switch\nto streaming mode from archive, say, after fetching 1, 2 or a\nconfigurable number of WAL files. In fact, this is the original idea\nproposed by Satya in this thread.\n\nIf okay, I can code on that. 
Thoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 9 Sep 2022 12:14:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Fri, Sep 09, 2022 at 12:14:25PM +0530, Bharath Rupireddy wrote:\n> On Fri, Sep 9, 2022 at 10:57 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> At Thu, 8 Sep 2022 10:53:56 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in\n>> > My general point is that we should probably offer some basic preventative\n>> > measure against flipping back and forth between streaming and archive\n>> > recovery while making zero progress. As I noted, maybe that's as simple as\n>> > having WaitForWALToBecomeAvailable() attempt to restore a file from archive\n>> > at least once before the new parameter forces us to switch to streaming\n>> > replication. There might be other ways to handle this.\n>>\n>> +1.\n> \n> Hm. In that case, I think we can get rid of timeout based switching\n> mechanism and have this behaviour - the standby can attempt to switch\n> to streaming mode from archive, say, after fetching 1, 2 or a\n> configurable number of WAL files. In fact, this is the original idea\n> proposed by Satya in this thread.\n\nIMO the timeout approach would be more intuitive for users. When it comes\nto archive recovery, \"WAL segment\" isn't a standard unit of measure. WAL\nsegment size can differ between clusters, and WAL files can have different\namounts of data or take different amounts of time to replay. So I think it\nwould be difficult for the end user to decide on a value. However, even\nthe timeout approach has this sort of problem. 
If your parameter is set to\n1 minute, but the current archive takes 5 minutes to recover, you won't\nreally be testing streaming replication once a minute. That would likely\nneed to be documented.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 9 Sep 2022 09:59:50 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Fri, Sep 9, 2022 at 10:29 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Fri, Sep 09, 2022 at 12:14:25PM +0530, Bharath Rupireddy wrote:\n> > On Fri, Sep 9, 2022 at 10:57 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> >> At Thu, 8 Sep 2022 10:53:56 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in\n> >> > My general point is that we should probably offer some basic preventative\n> >> > measure against flipping back and forth between streaming and archive\n> >> > recovery while making zero progress. As I noted, maybe that's as simple as\n> >> > having WaitForWALToBecomeAvailable() attempt to restore a file from archive\n> >> > at least once before the new parameter forces us to switch to streaming\n> >> > replication. There might be other ways to handle this.\n> >>\n> >> +1.\n> >\n> > Hm. In that case, I think we can get rid of timeout based switching\n> > mechanism and have this behaviour - the standby can attempt to switch\n> > to streaming mode from archive, say, after fetching 1, 2 or a\n> > configurable number of WAL files. In fact, this is the original idea\n> > proposed by Satya in this thread.\n>\n> IMO the timeout approach would be more intuitive for users. When it comes\n> to archive recovery, \"WAL segment\" isn't a standard unit of measure. 
WAL\n> segment size can differ between clusters, and WAL files can have different\n> amounts of data or take different amounts of time to replay.\n\nHow about the amount of WAL bytes fetched from the archive after which\na standby attempts to connect to primary or enter streaming mode? Of\nlate, we've changed some GUCs to represent bytes instead of WAL\nfiles/segments, see [1].\n\n> So I think it\n> would be difficult for the end user to decide on a value. However, even\n> the timeout approach has this sort of problem. If your parameter is set to\n> 1 minute, but the current archive takes 5 minutes to recover, you won't\n> really be testing streaming replication once a minute. That would likely\n> need to be documented.\n\nIf we have configurable WAL bytes instead of timeout for standby WAL\nsource switch from archive to primary, we don't have the above problem\nright?\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=c3fe108c025e4a080315562d4c15ecbe3f00405e\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 9 Sep 2022 23:07:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Fri, Sep 09, 2022 at 11:07:00PM +0530, Bharath Rupireddy wrote:\n> On Fri, Sep 9, 2022 at 10:29 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> IMO the timeout approach would be more intuitive for users. When it comes\n>> to archive recovery, \"WAL segment\" isn't a standard unit of measure. WAL\n>> segment size can differ between clusters, and WAL files can have different\n>> amounts of data or take different amounts of time to replay.\n> \n> How about the amount of WAL bytes fetched from the archive after which\n> a standby attempts to connect to primary or enter streaming mode? 
Of\n> late, we've changed some GUCs to represent bytes instead of WAL\n> files/segments, see [1].\n\nWell, for wal_keep_size, using bytes makes sense. Given you know how much\ndisk space you have, you can set this parameter accordingly to avoid\nretaining too much of it for standby servers. For your proposed parameter,\nit's not so simple. The same setting could have wildly different timing\nbehavior depending on the server. I still think that a timeout is the most\nintuitive.\n\n>> So I think it\n>> would be difficult for the end user to decide on a value. However, even\n>> the timeout approach has this sort of problem. If your parameter is set to\n>> 1 minute, but the current archive takes 5 minutes to recover, you won't\n>> really be testing streaming replication once a minute. That would likely\n>> need to be documented.\n> \n> If we have configurable WAL bytes instead of timeout for standby WAL\n> source switch from archive to primary, we don't have the above problem\n> right?\n\nIf you are going to stop replaying in the middle of a WAL archive, then\nmaybe. But I don't think I'd recommend that.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 9 Sep 2022 15:05:23 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Sat, Sep 10, 2022 at 3:35 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> Well, for wal_keep_size, using bytes makes sense. Given you know how much\n> disk space you have, you can set this parameter accordingly to avoid\n> retaining too much of it for standby servers. For your proposed parameter,\n> it's not so simple. The same setting could have wildly different timing\n> behavior depending on the server. I still think that a timeout is the most\n> intuitive.\n\nHm. 
In the v3 patch, I've used the timeout approach, but it tracks the\nduration the server spends in XLOG_FROM_ARCHIVE, as opposed to tracking\nthe last failure time of streaming from the primary.\n\n> So I think it\n> would be difficult for the end user to decide on a value. However, even\n> the timeout approach has this sort of problem. If your parameter is set to\n> 1 minute, but the current archive takes 5 minutes to recover, you won't\n> really be testing streaming replication once a minute. That would likely\n> need to be documented.\n\nAdded a note in the docs.\n\nOn Fri, Sep 9, 2022 at 10:46 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Being late for the party.\n\nThanks for reviewing this.\n\n> It seems to me that the function is getting too long. I think we\n> might want to move the core part of the patch into another function.\n\nYeah, WaitForWALToBecomeAvailable() (without this patch) has\naround 460 LOC, out of which WAL fetching from the chosen source is\nabout 240 LOC; IMO, this code is a candidate for a new function. I\nthink that part can be discussed separately.\n\nHaving said that, I moved the new code to a new function.\n\n> I think it might be better if intentionalSourceSwitch doesn't need\n> lastSourceFailed set. 
It would look like this:\n>\n> > if (lastSourceFailed || switchSource)\n> > {\n> > if (nonblocking && lastSourceFailed)\n> > return XLREAD_WOULDBLOCK;\n\nI think the above looks good, done that way in the latest patch.\n\n> I don't think the flag first_time is needed.\n\nAddressed this in the v4 patch.\n\nPlease review the attached v4 patch addressing above review comments.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 12 Sep 2022 09:03:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Mon, Sep 12, 2022 at 9:03 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Please review the attached v4 patch addressing above review comments.\n\nOops, there's a compiler warning [1] with the v4 patch, fixed it.\nPlease review the attached v5 patch.\n\n[1] https://cirrus-ci.com/task/5730076611313664?logs=gcc_warning#L450\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 12 Sep 2022 11:56:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Mon, Sep 12, 2022 at 11:56 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Please review the attached v5 patch.\n\nI'm attaching the v6 patch that's rebased on to the latest HEAD.\nPlease consider this for review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 15 Sep 2022 10:28:12 +0530", "msg_from": "Bharath Rupireddy 
<bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "At Thu, 15 Sep 2022 10:28:12 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> I'm attaching the v6 patch that's rebased on to the latest HEAD.\n> Please consider this for review.\n\nThanks for the new version!\n\n+#define StreamingReplRetryEnabled() \\\n+\t(streaming_replication_retry_interval > 0 && \\\n+\t StandbyMode && \\\n+\t currentSource == XLOG_FROM_ARCHIVE)\n\nIt seems to me a bit too complex..\n\n+\t\t\t/* Save the timestamp at which we're switching 
to archive. */\n+\t\t\tif (StreamingReplRetryEnabled())\n+\t\t\t\tswitched_to_archive_at = GetCurrentTimestamp();\n\nAnyway we are going to open a file just after this so\nGetCurrentTimestamp() cannot cause a perceptible degradation.\nCouldn't we do that unconditionally, to get rid of the macro?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 15 Sep 2022 17:22:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Thu, Sep 15, 2022 at 1:52 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 15 Sep 2022 10:28:12 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > I'm attaching the v6 patch that's rebased on to the latest HEAD.\n> > Please consider this for review.\n>\n> Thanks for the new version!\n>\n> +#define StreamingReplRetryEnabled() \\\n> + (streaming_replication_retry_interval > 0 && \\\n> + StandbyMode && \\\n> + currentSource == XLOG_FROM_ARCHIVE)\n>\n> It seems to me a bit too complex..\n\nI don't think so, it just tells whether a standby is allowed to switch\nsource to stream from archive.\n\n> + /* Save the timestamp at which we're switching 
*/\n> > + if (StreamingReplRetryEnabled())\n> > + switched_to_archive_at = GetCurrentTimestamp();\n> >\n> > Anyway we are going to open a file just after this so\n> > GetCurrentTimestamp() cannot cause a perceptible degradation.\n> > Coulnd't we do that unconditionally, to get rid of the macro?\n> \n> Do we really need to do it unconditionally? I don't think so. And, we\n> can't get rid of the macro, as we need to check for the current\n> source, GUC and standby mode. When this feature is disabled, it\n> mustn't execute any extra code IMO.\n\nI don't think we don't particularly want to do that unconditionally.\nI wanted just to get rid of the macro from the usage site. Even if\nthe same condition is used elsewhere, I see it better to write out the\ncondition directly there..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 16 Sep 2022 15:36:29 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Fri, Sep 16, 2022 at 12:06 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> In other words, it seems to me that the macro name doesn't manifest\n> the condition correctly.\n>\n> I don't think we don't particularly want to do that unconditionally.\n> I wanted just to get rid of the macro from the usage site. Even if\n> the same condition is used elsewhere, I see it better to write out the\n> condition directly there..\n\nI wanted to avoid a bit of duplicate code there. 
How about naming that\nmacro IsXLOGSourceSwitchToStreamEnabled() or\nSwitchFromArchiveToStreamEnabled() or just SwitchFromArchiveToStream()\nor any other better name?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 16 Sep 2022 16:58:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Fri, Sep 16, 2022 at 4:58 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Sep 16, 2022 at 12:06 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > In other words, it seems to me that the macro name doesn't manifest\n> > the condition correctly.\n> >\n> > I don't think we don't particularly want to do that unconditionally.\n> > I wanted just to get rid of the macro from the usage site. Even if\n> > the same condition is used elsewhere, I see it better to write out the\n> > condition directly there..\n>\n> I wanted to avoid a bit of duplicate code there. How about naming that\n> macro IsXLOGSourceSwitchToStreamEnabled() or\n> SwitchFromArchiveToStreamEnabled() or just SwitchFromArchiveToStream()\n> or any other better name?\n\nSwitchFromArchiveToStreamEnabled() seemed better at this point. I'm\nattaching the v7 patch with that change. 
Please review it further.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 19 Sep 2022 19:49:21 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Mon, Sep 19, 2022 at 07:49:21PM +0530, Bharath Rupireddy wrote:\n> SwitchFromArchiveToStreamEnabled() seemed better at this point. I'm\n> attaching the v7 patch with that change. Please review it further.\n\nAs I mentioned upthread [0], I'm still a little concerned that this patch\nwill cause the state machine to go straight from archive recovery to\nstreaming replication, skipping recovery from pg_wal. I wonder if this\ncould be resolved by moving the standby to the pg_wal phase instead.\nConcretely, this line\n\n+\t\t\t\tif (switchSource)\n+\t\t\t\t\tbreak;\n\nwould instead change currentSource from XLOG_FROM_ARCHIVE to\nXLOG_FROM_PG_WAL before the call to XLogFileReadAnyTLI(). I suspect the\nbehavior would be basically the same, but it would maintain the existing\nordering.\n\nHowever, I do see the following note elsewhere in xlogrecovery.c:\n\n * The segment can be fetched via restore_command, or via walreceiver having\n * streamed the record, or it can already be present in pg_wal. Checking\n * pg_wal is mainly for crash recovery, but it will be polled in standby mode\n * too, in case someone copies a new segment directly to pg_wal. 
That is not\n * documented or recommended, though.\n\nGiven this information, the present behavior might not be too important,\nbut I don't see a point in changing it without good reason.\n\n[0] https://postgr.es/m/20220906215704.GA2084086%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 8 Oct 2022 14:52:21 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Sun, Oct 9, 2022 at 3:22 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> As I mentioned upthread [0], I'm still a little concerned that this patch\n> will cause the state machine to go straight from archive recovery to\n> streaming replication, skipping recovery from pg_wal.\n>\n> [0] https://postgr.es/m/20220906215704.GA2084086%40nathanxps13\n\nYes, it goes straight to streaming replication skipping recovery from\npg_wal with the patch.\n\n> I wonder if this\n> could be resolved by moving the standby to the pg_wal phase instead.\n> Concretely, this line\n>\n> + if (switchSource)\n> + break;\n>\n> would instead change currentSource from XLOG_FROM_ARCHIVE to\n> XLOG_FROM_PG_WAL before the call to XLogFileReadAnyTLI(). I suspect the\n> behavior would be basically the same, but it would maintain the existing\n> ordering.\n\nWe can give it a chance to restore from pg_wal before switching to\nstreaming so as not to change any behaviour of the state machine. But\ndefinitely not by setting currentSource to XLOG_FROM_PG_WAL; we basically\nnever explicitly set currentSource to XLOG_FROM_PG_WAL, other than when\nnot in archive recovery, i.e., when InArchiveRecovery is false. Also, see the\ncomment [1].\n\nInstead, the simplest would be to just pass XLOG_FROM_PG_WAL to\nXLogFileReadAnyTLI() when we're about to switch the source to stream\nmode. 
This doesn't change the existing behaviour.\n\n> However, I do see the following note elsewhere in xlogrecovery.c:\n>\n> * The segment can be fetched via restore_command, or via walreceiver having\n> * streamed the record, or it can already be present in pg_wal. Checking\n> * pg_wal is mainly for crash recovery, but it will be polled in standby mode\n> * too, in case someone copies a new segment directly to pg_wal. That is not\n> * documented or recommended, though.\n>\n> Given this information, the present behavior might not be too important,\n> but I don't see a point in changing it without good reason.\n\nYeah, with the attached patch we don't skip pg_wal before switching to\nstreaming mode.\n\nI've also added a note in the 'Standby Server Operation' section about\nthe new feature.\n\nPlease review the v8 patch further.\n\nUnrelated to this patch, the fact that the standby polls pg_wal is not\ndocumented or recommended, is not true, it is actually documented [2].\nWhether or not we change the docs to be something like [3], is a\nseparate discussion.\n\n[1]\n /*\n * We just successfully read a file in pg_wal. We prefer files in\n * the archive over ones in pg_wal, so try the next file again\n * from the archive first.\n */\n\n[2] https://www.postgresql.org/docs/current/warm-standby.html#STANDBY-SERVER-OPERATION\nThe standby server will also attempt to restore any WAL found in the\nstandby cluster's pg_wal directory. That typically happens after a\nserver restart, when the standby replays again WAL that was streamed\nfrom the primary before the restart, but you can also manually copy\nfiles to pg_wal at any time to have them replayed.\n\n[3]\nThe standby server will also attempt to restore any WAL found in the\nstandby cluster's pg_wal directory. That typically happens after a\nserver restart, when the standby replays again WAL that was streamed\nfrom the primary before the restart, but you can also manually copy\nfiles to pg_wal at any time to have them replayed. 
However, copying of\nWAL files manually is not recommended.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 9 Oct 2022 14:39:47 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Sun, Oct 09, 2022 at 02:39:47PM +0530, Bharath Rupireddy wrote:\n> We can give it a chance to restore from pg_wal before switching to\n> streaming to not change any behaviour of the state machine. But, not\n> definitely by setting currentSource to XLOG_FROM_WAL, we basically\n> never explicitly set currentSource to XLOG_FROM_WAL, other than when\n> not in archive recovery i.e. InArchiveRecovery is false. Also, see the\n> comment [1].\n> \n> Instead, the simplest would be to just pass XLOG_FROM_WAL to\n> XLogFileReadAnyTLI() when we're about to switch the source to stream\n> mode. This doesn't change the existing behaviour.\n\nIt might be more consistent with existing behavior, but one thing I hadn't\nconsidered is that it might make your proposed feature ineffective when\nusers are copying files straight into pg_wal. IIUC as long as the files\nare present in pg_wal, the source-switch logic won't kick in.\n\n> Unrelated to this patch, the fact that the standby polls pg_wal is not\n> documented or recommended, is not true, it is actually documented [2].\n> Whether or not we change the docs to be something like [3], is a\n> separate discussion.\n\nI wonder if it would be better to simply remove this extra polling of\npg_wal as a prerequisite to your patch. 
The existing commentary leads me\nto think there might not be a strong reason for this behavior, so it could\nbe a nice way to simplify your patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 9 Oct 2022 14:47:25 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Mon, Oct 10, 2022 at 3:17 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> > Instead, the simplest would be to just pass XLOG_FROM_WAL to\n> > XLogFileReadAnyTLI() when we're about to switch the source to stream\n> > mode. This doesn't change the existing behaviour.\n>\n> It might be more consistent with existing behavior, but one thing I hadn't\n> considered is that it might make your proposed feature ineffective when\n> users are copying files straight into pg_wal. IIUC as long as the files\n> are present in pg_wal, the source-switch logic won't kick in.\n\nIt happens even now, that is, the server will not switch to streaming\nmode from the archive after a failure if there's someone continuously\ncopying WAL files to the pg_wal directory. I have not personally seen\nanyone or any service doing that. 
It doesn't mean that can't happen.\nThey might do it for some purpose such as 1) to bring back in sync\nquickly a standby that's lagging behind the primary after the archive\nconnection and/or streaming replication connection are/is broken but\nmany WAL files leftover on the primary 2) before promoting a standby\nthat's lagging behind the primary for failover or other purposes.\nHowever, I'm not sure if someone does these things on production\nservers.\n\n> > Unrelated to this patch, the fact that the standby polls pg_wal is not\n> > documented or recommended, is not true, it is actually documented [2].\n> > Whether or not we change the docs to be something like [3], is a\n> > separate discussion.\n>\n> I wonder if it would be better to simply remove this extra polling of\n> pg_wal as a prerequisite to your patch. The existing commentary leads me\n> to think there might not be a strong reason for this behavior, so it could\n> be a nice way to simplify your patch.\n\nI don't think it's a good idea to remove that completely. As said\nabove, it might help someone, we never know.\n\nI think for this feature, we just need to decide on whether or not\nwe'd allow pg_wal polling before switching to streaming mode. If we\nallow it like in the v8 patch, we can document the behavior.\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 10 Oct 2022 11:33:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Mon, Oct 10, 2022 at 11:33:57AM +0530, Bharath Rupireddy wrote:\n> On Mon, Oct 10, 2022 at 3:17 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> I wonder if it would be better to simply remove this extra polling of\n>> pg_wal as a prerequisite to your patch. 
The existing commentary leads me\n>> to think there might not be a strong reason for this behavior, so it could\n>> be a nice way to simplify your patch.\n> \n> I don't think it's a good idea to remove that completely. As said\n> above, it might help someone, we never know.\n\nIt would be great to hear whether anyone is using this functionality. If\nno one is aware of existing usage and there is no interest in keeping it\naround, I don't think it would be unreasonable to remove it in v16.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 10 Oct 2022 20:10:01 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Tue, Oct 11, 2022 at 8:40 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Oct 10, 2022 at 11:33:57AM +0530, Bharath Rupireddy wrote:\n> > On Mon, Oct 10, 2022 at 3:17 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> I wonder if it would be better to simply remove this extra polling of\n> >> pg_wal as a prerequisite to your patch. The existing commentary leads me\n> >> to think there might not be a strong reason for this behavior, so it could\n> >> be a nice way to simplify your patch.\n> >\n> > I don't think it's a good idea to remove that completely. As said\n> > above, it might help someone, we never know.\n>\n> It would be great to hear whether anyone is using this functionality. If\n> no one is aware of existing usage and there is no interest in keeping it\n> around, I don't think it would be unreasonable to remove it in v16.\n\nIt seems like exhausting all the WAL in pg_wal before switching to\nstreaming after failing to fetch from archive is unremovable. I found\nthis after experimenting with it, here are my findings:\n1. 
The standby has to recover initial WAL files in the pg_wal\ndirectory even for the normal post-restart/first-time-start case, I\nmean, in non-crash recovery case.\n2. The standby received WAL files from primary (walreceiver just\nwrites and flushes the received WAL to WAL files under pg_wal)\npretty-fast and/or standby recovery is slow, say both the standby\nconnection to primary and archive connection are broken for whatever\nreasons, then it has WAL files to recover in pg_wal directory.\n\nI think the fundamental behaviour for the standby is that it has to\nfully recover to the end of WAL under pg_wal no matter who copies WAL\nfiles there. I fully understand the consequences of manually copying\nWAL files into pg_wal, for that matter, manually copying/tinkering any\nother files into/under the data directory is something we don't\nrecommend and encourage.\n\nIn summary, the standby state machine in WaitForWALToBecomeAvailable()\nexhausts all the WAL in pg_wal before switching to streaming after\nfailing to fetch from archive. The v8 patch proposed upthread deviates\nfrom this behaviour. 
Hence, attaching v9 patch that keeps the\nbehaviour as-is, that means, the standby exhausts all the WAL in\npg_wal before switching to streaming after fetching WAL from archive\nfor at least streaming_replication_retry_interval milliseconds.\n\nPlease review the v9 patch further.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 18 Oct 2022 07:31:33 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "2022年10月18日(火) 11:02 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>:\n>\n> On Tue, Oct 11, 2022 at 8:40 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >\n> > On Mon, Oct 10, 2022 at 11:33:57AM +0530, Bharath Rupireddy wrote:\n> > > On Mon, Oct 10, 2022 at 3:17 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> > >> I wonder if it would be better to simply remove this extra polling of\n> > >> pg_wal as a prerequisite to your patch. The existing commentary leads me\n> > >> to think there might not be a strong reason for this behavior, so it could\n> > >> be a nice way to simplify your patch.\n> > >\n> > > I don't think it's a good idea to remove that completely. As said\n> > > above, it might help someone, we never know.\n> >\n> > It would be great to hear whether anyone is using this functionality. If\n> > no one is aware of existing usage and there is no interest in keeping it\n> > around, I don't think it would be unreasonable to remove it in v16.\n>\n> It seems like exhausting all the WAL in pg_wal before switching to\n> streaming after failing to fetch from archive is unremovable. I found\n> this after experimenting with it, here are my findings:\n> 1. 
The standby has to recover initial WAL files in the pg_wal\n> directory even for the normal post-restart/first-time-start case, I\n> mean, in non-crash recovery case.\n> 2. The standby received WAL files from primary (walreceiver just\n> writes and flushes the received WAL to WAL files under pg_wal)\n> pretty-fast and/or standby recovery is slow, say both the standby\n> connection to primary and archive connection are broken for whatever\n> reasons, then it has WAL files to recover in pg_wal directory.\n>\n> I think the fundamental behaviour for the standby is that it has to\n> fully recover to the end of WAL under pg_wal no matter who copies WAL\n> files there. I fully understand the consequences of manually copying\n> WAL files into pg_wal, for that matter, manually copying/tinkering any\n> other files into/under the data directory is something we don't\n> recommend and encourage.\n>\n> In summary, the standby state machine in WaitForWALToBecomeAvailable()\n> exhausts all the WAL in pg_wal before switching to streaming after\n> failing to fetch from archive. The v8 patch proposed upthread deviates\n> from this behaviour. 
Hence, attaching v9 patch that keeps the\n> behaviour as-is, that means, the standby exhausts all the WAL in\n> pg_wal before switching to streaming after fetching WAL from archive\n> for at least streaming_replication_retry_interval milliseconds.\n>\n> Please review the v9 patch further.\n\nThanks for the updated patch.\n\nWhile reviewing the patch backlog, we have determined that this patch adds\none or more TAP tests but has not added the test to the \"meson.build\" file.\n\nTo do this, locate the relevant \"meson.build\" file for each test and add it\nin the 'tests' dictionary, which will look something like this:\n\n 'tap': {\n 'tests': [\n 't/001_basic.pl',\n ],\n },\n\nFor some additional details please see this Wiki article:\n\n https://wiki.postgresql.org/wiki/Meson_for_patch_authors\n\nFor more information on the meson build system for PostgreSQL see:\n\n https://wiki.postgresql.org/wiki/Meson\n\n\nRegards\n\nIan Barwick\n\n\n", "msg_date": "Wed, 16 Nov 2022 13:08:18 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Wed, Nov 16, 2022 at 9:38 AM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n>\n> While reviewing the patch backlog, we have determined that this patch adds\n> one or more TAP tests but has not added the test to the \"meson.build\" file.\n\nThanks for pointing it out. Yeah, the test wasn't picking up on meson\nbuilds. 
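The fix is a one-line addition to the existing 'tap' tests list, along these lines (the file name below is only a placeholder, not the new test's actual name):

```meson
  'tap': {
    'tests': [
      # ... existing recovery TAP tests ...
      't/0XX_wal_source_switch.pl',
    ],
  },
```
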
I added the new test file name in\nsrc/test/recovery/meson.build.\n\nI'm attaching the v10 patch for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 16 Nov 2022 11:39:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Tue, Oct 18, 2022 at 07:31:33AM +0530, Bharath Rupireddy wrote:\n> In summary, the standby state machine in WaitForWALToBecomeAvailable()\n> exhausts all the WAL in pg_wal before switching to streaming after\n> failing to fetch from archive. The v8 patch proposed upthread deviates\n> from this behaviour. Hence, attaching v9 patch that keeps the\n> behaviour as-is, that means, the standby exhausts all the WAL in\n> pg_wal before switching to streaming after fetching WAL from archive\n> for at least streaming_replication_retry_interval milliseconds.\n\nI think this is okay. The following comment explains why archives are\npreferred over existing files in pg_wal:\n\n\t * When doing archive recovery, we always prefer an archived log file even\n\t * if a file of the same name exists in XLOGDIR. The reason is that the\n\t * file in XLOGDIR could be an old, un-filled or partly-filled version\n\t * that was copied and restored as part of backing up $PGDATA.\n\nWith your patch, we might replay one of these \"old\" files in pg_wal instead\nof the complete version of the file from the archives, but I think that is\nstill correct. We'll just replay whatever exists in pg_wal (which may be\nun-filled or partly-filled) before attempting streaming. 
If that fails,\nwe'll go back to trying the archives again.\n\nWould you mind testing this scenario?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 11 Jan 2023 16:51:14 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Thu, Jan 12, 2023 at 6:21 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Oct 18, 2022 at 07:31:33AM +0530, Bharath Rupireddy wrote:\n> > In summary, the standby state machine in WaitForWALToBecomeAvailable()\n> > exhausts all the WAL in pg_wal before switching to streaming after\n> > failing to fetch from archive. The v8 patch proposed upthread deviates\n> > from this behaviour. Hence, attaching v9 patch that keeps the\n> > behaviour as-is, that means, the standby exhausts all the WAL in\n> > pg_wal before switching to streaming after fetching WAL from archive\n> > for at least streaming_replication_retry_interval milliseconds.\n>\n> I think this is okay. The following comment explains why archives are\n> preferred over existing files in pg_wal:\n>\n> * When doing archive recovery, we always prefer an archived log file even\n> * if a file of the same name exists in XLOGDIR. The reason is that the\n> * file in XLOGDIR could be an old, un-filled or partly-filled version\n> * that was copied and restored as part of backing up $PGDATA.\n>\n> With your patch, we might replay one of these \"old\" files in pg_wal instead\n> of the complete version of the file from the archives,\n\nThat's true even today, without the patch, no? We're not changing the\nexisting behaviour of the state machine. Can you explain how it\nhappens with the patch?\n\nOn HEAD, after failing to read from the archive, exhaust all wal from\npg_wal and then switch to streaming mode. 
With the patch, after\nreading from the archive for at least\nstreaming_replication_retry_interval milliseconds, exhaust all wal\nfrom pg_wal and then switch to streaming mode.\n\n> but I think that is\n> still correct. We'll just replay whatever exists in pg_wal (which may be\n> un-filled or partly-filled) before attempting streaming. If that fails,\n> we'll go back to trying the archives again.\n>\n> Would you mind testing this scenario?\n\nHow about something like below for testing the above scenario? If it\nlooks okay, I can add it as a new TAP test file.\n\n1. Generate WAL files f1 and f2 and archive them.\n2. Check the replay lsn and WAL file name on the standby, when it\nreplays upto f2, stop the standby.\n3. Set recovery to fail on the standby, and stop the standby.\n4. Generate f3, f4 (partially filled) on the primary.\n5. Manually copy f3, f4 to the standby's pg_wal.\n6. Start the standby, since recovery is set to fail, and there're new\nWAL files (f3, f4) under its pg_wal, it must replay those WAL files\n(check the replay lsn and WAL file name, it must be f4) before\nswitching to streaming.\n7. Generate f5 on the primary.\n8. The standby should receive f5 and replay it (check the replay lsn\nand WAL file name, it must be f5).\n9. Set streaming to fail on the standby and set recovery to succeed.\n10. Generate f6 on the primary.\n11. The standby should receive f6 via archive and replay it (check the\nreplay lsn and WAL file name, it must be f6).\n\nIf needed, we can look out for these messages to confirm it works as expected:\n elog(DEBUG2, \"switched WAL source from %s to %s after %s\",\n xlogSourceNames[oldSource], xlogSourceNames[currentSource],\n lastSourceFailed ? 
\"failure\" : \"success\");\n ereport(LOG,\n (errmsg(\"restored log file \\\"%s\\\" from archive\",\n xlogfname)));\n\nEssentially, it covers what the documentation\nhttps://www.postgresql.org/docs/devel/warm-standby.html says:\n\n\"In standby mode, the server continuously applies WAL received from\nthe primary server. The standby server can read WAL from a WAL archive\n(see restore_command) or directly from the primary over a TCP\nconnection (streaming replication). The standby server will also\nattempt to restore any WAL found in the standby cluster's pg_wal\ndirectory. That typically happens after a server restart, when the\nstandby replays again WAL that was streamed from the primary before\nthe restart, but you can also manually copy files to pg_wal at any\ntime to have them replayed.\"\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 17 Jan 2023 19:44:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Tue, Jan 17, 2023 at 07:44:52PM +0530, Bharath Rupireddy wrote:\n> On Thu, Jan 12, 2023 at 6:21 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> With your patch, we might replay one of these \"old\" files in pg_wal instead\n>> of the complete version of the file from the archives,\n> \n> That's true even today, without the patch, no? We're not changing the\n> existing behaviour of the state machine. Can you explain how it\n> happens with the patch?\n\nMy point is that on HEAD, we will always prefer a complete archive file.\nWith your patch, we might instead choose to replay an old file in pg_wal\nbecause we are artificially advancing the state machine. IOW even if\nthere's a complete archive available, we might not use it. 
This is a\nbehavior change, but I think it is okay.\n\n>> Would you mind testing this scenario?\n> \n> How about something like below for testing the above scenario? If it\n> looks okay, I can add it as a new TAP test file.\n> \n> 1. Generate WAL files f1 and f2 and archive them.\n> 2. Check the replay lsn and WAL file name on the standby, when it\n> replays upto f2, stop the standby.\n> 3. Set recovery to fail on the standby, and stop the standby.\n> 4. Generate f3, f4 (partially filled) on the primary.\n> 5. Manually copy f3, f4 to the standby's pg_wal.\n> 6. Start the standby, since recovery is set to fail, and there're new\n> WAL files (f3, f4) under its pg_wal, it must replay those WAL files\n> (check the replay lsn and WAL file name, it must be f4) before\n> switching to streaming.\n> 7. Generate f5 on the primary.\n> 8. The standby should receive f5 and replay it (check the replay lsn\n> and WAL file name, it must be f5).\n> 9. Set streaming to fail on the standby and set recovery to succeed.\n> 10. Generate f6 on the primary.\n> 11. The standby should receive f6 via archive and replay it (check the\n> replay lsn and WAL file name, it must be f6).\n\nI meant testing the scenario where there's an old file in pg_wal, a\ncomplete file in the archives, and your new GUC forces replay of the\nformer. This might be difficult to do in a TAP test. 
Ultimately, I just\nwant to validate the assumptions discussed above.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 18 Jan 2023 16:50:14 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Thu, Jan 19, 2023 at 6:20 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Jan 17, 2023 at 07:44:52PM +0530, Bharath Rupireddy wrote:\n> > On Thu, Jan 12, 2023 at 6:21 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> With your patch, we might replay one of these \"old\" files in pg_wal instead\n> >> of the complete version of the file from the archives,\n> >\n> > That's true even today, without the patch, no? We're not changing the\n> > existing behaviour of the state machine. Can you explain how it\n> > happens with the patch?\n>\n> My point is that on HEAD, we will always prefer a complete archive file.\n> With your patch, we might instead choose to replay an old file in pg_wal\n> because we are artificially advancing the state machine. IOW even if\n> there's a complete archive available, we might not use it. This is a\n> behavior change, but I think it is okay.\n\nOh, yeah, I too agree that it's okay because manually copying WAL\nfiles directly to pg_wal (which eventually get replayed before\nswitching to streaming) isn't recommended anyway for production level\nservers. I think, we covered it in the documentation that it exhausts\nall the WAL present in pg_wal before switching. Isn't that enough?\n\n+ Specifies amount of time after which standby attempts to switch WAL\n+ source from WAL archive to streaming replication (get WAL from\n+ primary). However, exhaust all the WAL present in pg_wal before\n+ switching. 
If the standby fails to switch to stream mode, it falls\n+ back to archive mode.\n\n> >> Would you mind testing this scenario?\n> >\n> > 11. The standby should receive f6 via archive and replay it (check the\n> > replay lsn and WAL file name, it must be f6).\n>\n> I meant testing the scenario where there's an old file in pg_wal, a\n> complete file in the archives, and your new GUC forces replay of the\n> former. This might be difficult to do in a TAP test. Ultimately, I just\n> want to validate the assumptions discussed above.\n\nI think testing the scenario [1] is achievable. I could write a TAP\ntest for it - https://github.com/BRupireddy/postgres/tree/prefer_archived_wal_v1.\nIt's a bit flaky and needs a little more work (1 - writing a custom\nscript for restore_command that sleeps only after fetching an\nexisting WAL file from archive, not sleeping for a history file or a\nnon-existent WAL file. 2 - finding a command-line way to sleep on\nWindows.) to stabilize it, but it seems doable. I can spend some more\ntime, if one thinks that the test is worth adding to the core, perhaps\ndiscussing it separately from this thread.\n\n[1] RestoreArchivedFile():\n    /*\n     * When doing archive recovery, we always prefer an archived log file even\n     * if a file of the same name exists in XLOGDIR. The reason is that the\n     * file in XLOGDIR could be an old, un-filled or partly-filled version\n     * that was copied and restored as part of backing up $PGDATA.\n     *\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 21 Jan 2023 11:13:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Wed, Nov 16, 2022 at 11:39 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I'm attaching the v10 patch for further review.\n\nNeeded a rebase. 
I'm attaching the v11 patch for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 24 Feb 2023 10:26:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Fri, Feb 24, 2023 at 10:26 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Nov 16, 2022 at 11:39 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > I'm attaching the v10 patch for further review.\n>\n> Needed a rebase. I'm attaching the v11 patch for further review.\n\nNeeded a rebase, so attaching the v12 patch. I word-smithed comments\nand docs a bit.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 25 Apr 2023 21:27:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Tue, Apr 25, 2023 at 9:27 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > Needed a rebase. I'm attaching the v11 patch for further review.\n>\n> Needed a rebase, so attaching the v12 patch. I word-smithed comments\n> and docs a bit.\n\nNeeded a rebase. 
I'm attaching the v13 patch for further consideration.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 21 Jul 2023 12:38:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Fri, Jul 21, 2023 at 12:38 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Needed a rebase. I'm attaching the v13 patch for further consideration.\n\nNeeded a rebase. I'm attaching the v14 patch. It also has the following changes:\n\n- Ran pgindent on the new source code.\n- Ran pgperltidy on the new TAP test.\n- Improved the newly added TAP test a bit. Used the new wait_for_log\ncore TAP function in place of custom find_in_log.\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 21 Oct 2023 23:59:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Sat, Oct 21, 2023 at 11:59 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Jul 21, 2023 at 12:38 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Needed a rebase. I'm attaching the v13 patch for further consideration.\n>\n> Needed a rebase. I'm attaching the v14 patch. It also has the following changes:\n>\n> - Ran pgindent on the new source code.\n> - Ran pgperltidy on the new TAP test.\n> - Improved the newly added TAP test a bit. Used the new wait_for_log\n> core TAP function in place of custom find_in_log.\n>\n> Thoughts?\n\nI took a closer look at v14 and came up with the following changes:\n\n1. 
Used advance_wal introduced by commit c161ab74f7.\n2. Simplified the core logic and new TAP tests.\n3. Reworded the comments and docs.\n4. Simplified new DEBUG messages.\n\nI've attached the v15 patch for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 28 Dec 2023 17:26:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Thu, Dec 28, 2023 at 5:26 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I took a closer look at v14 and came up with the following changes:\n>\n> 1. Used advance_wal introduced by commit c161ab74f7.\n> 2. Simplified the core logic and new TAP tests.\n> 3. Reworded the comments and docs.\n> 4. Simplified new DEBUG messages.\n>\n> I've attached the v15 patch for further review.\n\nPer a recent commit c538592, FATAL-ized perl warnings in the newly\nadded TAP test and attached the v16 patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 3 Jan 2024 16:58:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Wed, Jan 3, 2024 at 4:58 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Dec 28, 2023 at 5:26 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > I took a closer look at v14 and came up with the following changes:\n> >\n> > 1. Used advance_wal introduced by commit c161ab74f7.\n> > 2. Simplified the core logic and new TAP tests.\n> > 3. Reworded the comments and docs.\n> > 4. 
Simplified new DEBUG messages.\n> >\n> > I've attached the v15 patch for further review.\n>\n> Per a recent commit c538592, FATAL-ized perl warnings in the newly\n> added TAP test and attached the v16 patch.\n\nNeeded a rebase due to commit 776621a (conflict in\nsrc/test/recovery/meson.build for new TAP test file added). Please\nfind the attached v17 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 31 Jan 2024 18:30:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Wed, Jan 31, 2024 at 6:30 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Needed a rebase due to commit 776621a (conflict in\n> src/test/recovery/meson.build for new TAP test file added). Please\n> find the attached v17 patch.\n\nStrengthened tests a bit by using recovery_min_apply_delay to mimic\nstandby spending some time fetching from archive. PSA v18 patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 19 Feb 2024 16:06:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "\nOn Mon, 19 Feb 2024 at 18:36, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Wed, Jan 31, 2024 at 6:30 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> Needed a rebase due to commit 776621a (conflict in\n>> src/test/recovery/meson.build for new TAP test file added). 
Please\n>> find the attached v17 patch.\n>\n> Strengthened tests a bit by using recovery_min_apply_delay to mimic\n> standby spending some time fetching from archive. PSA v18 patch.\n\nHere are some minor comments:\n\n[1]\n+ primary). However, the standby exhausts all the WAL present in pg_wal\n\ns|pg_wal|<filename>pg_wal</filename>|g\n\n[2]\n+# Ensure checkpoint doesn't come in our way\n+$primary->append_conf('postgresql.conf', qq(\n+ min_wal_size = 2MB\n+ max_wal_size = 1GB\n+ checkpoint_timeout = 1h\n+\tautovacuum = off\n+));\n\nKeeping the same indentation might be better.\n\n\n", "msg_date": "Mon, 19 Feb 2024 22:55:43 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Mon, Feb 19, 2024 at 8:25 PM Japin Li <japinli@hotmail.com> wrote:\n>\n> > Strengthened tests a bit by using recovery_min_apply_delay to mimic\n> > standby spending some time fetching from archive. PSA v18 patch.\n>\n> Here are some minor comments:\n\nThanks for taking a look at it.\n\n> [1]\n> + primary). However, the standby exhausts all the WAL present in pg_wal\n>\n> s|pg_wal|<filename>pg_wal</filename>|g\n\nDone.\n\n> [2]\n> +# Ensure checkpoint doesn't come in our way\n> +$primary->append_conf('postgresql.conf', qq(\n> + min_wal_size = 2MB\n> + max_wal_size = 1GB\n> + checkpoint_timeout = 1h\n> + autovacuum = off\n> +));\n>\n> Keeping the same indentation might be better.\n\nThe autovacuum line looks mis-indented in the patch file. 
However, I\nnow ran src/tools/pgindent/perltidyrc\nsrc/test/recovery/t/041_wal_source_switch.pl on it.\n\nPlease see the attached v19 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 20 Feb 2024 11:10:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Tue, 20 Feb 2024 at 13:40, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Mon, Feb 19, 2024 at 8:25 PM Japin Li <japinli@hotmail.com> wrote:\n>> [2]\n>> +# Ensure checkpoint doesn't come in our way\n>> +$primary->append_conf('postgresql.conf', qq(\n>> + min_wal_size = 2MB\n>> + max_wal_size = 1GB\n>> + checkpoint_timeout = 1h\n>> + autovacuum = off\n>> +));\n>>\n>> Keeping the same indentation might be better.\n>\n> The autovacuum line looks mis-indented in the patch file. However, I\n> now ran src/tools/pgindent/perltidyrc\n> src/test/recovery/t/041_wal_source_switch.pl on it.\n>\n\nThanks for updating the patch. 
It still seems to have the wrong indentation.", "msg_date": "Tue, 20 Feb 2024 14:24:49 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Tue, Feb 20, 2024 at 11:54 AM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> On Tue, 20 Feb 2024 at 13:40, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Mon, Feb 19, 2024 at 8:25 PM Japin Li <japinli@hotmail.com> wrote:\n> >> [2]\n> >> +# Ensure checkpoint doesn't come in our way\n> >> +$primary->append_conf('postgresql.conf', qq(\n> >> + min_wal_size = 2MB\n> >> + max_wal_size = 1GB\n> >> + checkpoint_timeout = 1h\n> >> + autovacuum = off\n> >> +));\n> >>\n> >> Keeping the same indentation might be better.\n> >\n> > The autovacuum line looks mis-indented in the patch file. However, I\n> > now ran src/tools/pgindent/perltidyrc\n> > src/test/recovery/t/041_wal_source_switch.pl on it.\n> >\n>\n> Thanks for updating the patch. It still seems to have the wrong indentation.\n\nThanks. perltidy didn't complain about anything on v19. However, I\nkept the alignment the same as in other TAP tests for multi-line append_conf.\nIf that's not correct, I'll leave it to the committer to decide. PSA\nv20 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 20 Feb 2024 13:15:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "cfbot claims that this one needs another rebase.\n\nI've spent some time thinking about this one. I'll admit I'm a bit worried\nabout adding more complexity to this state machine, but I also haven't\nthought of any other viable approaches, and this still seems like a useful\nfeature. 
So, for now, I think we should continue with the current\napproach.\n\n+ fails to switch to stream mode, it falls back to archive mode. If this\n+ parameter value is specified without units, it is taken as\n+ milliseconds. Default is <literal>5min</literal>. With a lower value\n\nDoes this really need to be milliseconds? I would think that any\nreasonable setting would at least on the order of seconds.\n\n+ attempts. To avoid this, it is recommended to set a reasonable value.\n\nI think we might want to suggest what a \"reasonable value\" is.\n\n+\tstatic bool canSwitchSource = false;\n+\tbool\t\tswitchSource = false;\n\nIIUC \"canSwitchSource\" indicates that we are trying to force a switch to\nstreaming, but we are currently exhausting anything that's present in the\npg_wal directory, while \"switchSource\" indicates that we should force a\nswitch to streaming right now. Furthermore, \"canSwitchSource\" is static\nwhile \"switchSource\" is not. Is there any way to simplify this? For\nexample, would it be possible to make an enum that tracks the\nstreaming_replication_retry_interval state?\n\n \t\t\t/*\n \t\t\t * Don't allow any retry loops to occur during nonblocking\n-\t\t\t * readahead. Let the caller process everything that has been\n-\t\t\t * decoded already first.\n+\t\t\t * readahead if we failed to read from the current source. Let the\n+\t\t\t * caller process everything that has been decoded already first.\n \t\t\t */\n-\t\t\tif (nonblocking)\n+\t\t\tif (nonblocking && lastSourceFailed)\n \t\t\t\treturn XLREAD_WOULDBLOCK;\n\nWhy do we skip this when \"switchSource\" is set?\n\n+\t\t\t/* Reset the WAL source switch state */\n+\t\t\tif (switchSource)\n+\t\t\t{\n+\t\t\t\tAssert(canSwitchSource);\n+\t\t\t\tAssert(currentSource == XLOG_FROM_STREAM);\n+\t\t\t\tAssert(oldSource == XLOG_FROM_ARCHIVE);\n+\t\t\t\tswitchSource = false;\n+\t\t\t\tcanSwitchSource = false;\n+\t\t\t}\n\nHow do we know that oldSource is guaranteed to be XLOG_FROM_ARCHIVE? 
Is\nthere no way it could be XLOG_FROM_PG_WAL?\n\n+#streaming_replication_retry_interval = 5min\t# time after which standby\n+\t\t\t\t\t# attempts to switch WAL source from archive to\n+\t\t\t\t\t# streaming replication\n+\t\t\t\t\t# in milliseconds; 0 disables\n\nI think we might want to turn this feature off by default, at least for the\nfirst release.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 4 Mar 2024 20:04:52 -0600", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Tue, Mar 5, 2024 at 7:34 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> cfbot claims that this one needs another rebase.\n\nYeah, the conflict was with the new TAP test file name in\nsrc/test/recovery/meson.build.\n\n> I've spent some time thinking about this one. I'll admit I'm a bit worried\n> about adding more complexity to this state machine, but I also haven't\n> thought of any other viable approaches,\n\nRight. I understand that the WaitForWALToBecomeAvailable()'s state\nmachine is a complex piece.\n\n> and this still seems like a useful\n> feature. So, for now, I think we should continue with the current\n> approach.\n\nYes, the feature is useful like mentioned in the docs as below:\n\n+ Reading WAL from archive may not always be as efficient and fast as\n+ reading from primary. This can be due to the differences in disk types,\n+ IO costs, network latencies etc. All of these can impact the recovery\n+ performance on standby, and can increase the replication lag on\n+ primary. In addition, the primary keeps accumulating WAL needed for the\n+ standby while the standby reads WAL from archive, because the standby\n+ replication slot stays inactive. 
To avoid these problems, one can use\n+ this parameter to make standby switch to stream mode sooner.\n\n> + fails to switch to stream mode, it falls back to archive mode. If this\n> + parameter value is specified without units, it is taken as\n> + milliseconds. Default is <literal>5min</literal>. With a lower value\n>\n> Does this really need to be milliseconds? I would think that any\n> reasonable setting would be at least on the order of seconds.\n\nAgreed. Done that way.\n\n> + attempts. To avoid this, it is recommended to set a reasonable value.\n>\n> I think we might want to suggest what a \"reasonable value\" is.\n\nIt really depends on the WAL generation rate on the primary. If the\nWAL files grow faster, the disk runs out of space sooner, so setting a\nlower value, to make WAL source switch attempts more frequent, can help. It's hard\nto suggest a one-size-fits-all value. Therefore, I've tweaked the docs\na bit to reflect the fact that it depends on the WAL generation rate.\n\n> + static bool canSwitchSource = false;\n> + bool switchSource = false;\n>\n> IIUC \"canSwitchSource\" indicates that we are trying to force a switch to\n> streaming, but we are currently exhausting anything that's present in the\n> pg_wal directory,\n\nRight.\n\n> while \"switchSource\" indicates that we should force a\n> switch to streaming right now.\n\nIt's not indicating a forced switch; it says \"previously I was asked to\nswitch source via canSwitchSource, now that I've exhausted all the WAL\nfrom the pg_wal directory, I'll make a source switch attempt now\".\n\n> Furthermore, \"canSwitchSource\" is static\n> while \"switchSource\" is not.\n\nThis is because the WaitForWALToBecomeAvailable() has to remember the\ndecision (that the streaming_replication_retry_interval timeout has occurred)\nacross the calls. And, switchSource is decided within\nWaitForWALToBecomeAvailable() for every function call.\n\n> Is there any way to simplify this? 
For\n> example, would it be possible to make an enum that tracks the\n> streaming_replication_retry_interval state?\n\nI guess the way it is right now looks simple IMHO. If the suggestion\nis to have an enum like below; it looks overkill for just two states.\n\ntypedef enum\n{\n CAN_SWITCH_SOURCE,\n SWITCH_SOURCE\n} XLogSourceSwitchState;\n\n> /*\n> * Don't allow any retry loops to occur during nonblocking\n> - * readahead. Let the caller process everything that has been\n> - * decoded already first.\n> + * readahead if we failed to read from the current source. Let the\n> + * caller process everything that has been decoded already first.\n> */\n> - if (nonblocking)\n> + if (nonblocking && lastSourceFailed)\n> return XLREAD_WOULDBLOCK;\n>\n> Why do we skip this when \"switchSource\" is set?\n\nIt was leftover from the initial version of the patch - I was then\nencountering some issue and had that piece there. Removed it now.\n\n> + /* Reset the WAL source switch state */\n> + if (switchSource)\n> + {\n> + Assert(canSwitchSource);\n> + Assert(currentSource == XLOG_FROM_STREAM);\n> + Assert(oldSource == XLOG_FROM_ARCHIVE);\n> + switchSource = false;\n> + canSwitchSource = false;\n> + }\n>\n> How do we know that oldSource is guaranteed to be XLOG_FROM_ARCHIVE? Is\n> there no way it could be XLOG_FROM_PG_WAL?\n\nNo. switchSource is set to true only when canSwitchSource is set to\ntrue, which happens only when currentSource is XLOG_FROM_ARCHIVE (see\nSwitchWALSourceToPrimary()).\n\n> +#streaming_replication_retry_interval = 5min # time after which standby\n> + # attempts to switch WAL source from archive to\n> + # streaming replication\n> + # in milliseconds; 0 disables\n>\n> I think we might want to turn this feature off by default, at least for the\n> first release.\n\nAgreed. 
Done that way.\n\nPlease see the attached v21 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 5 Mar 2024 23:38:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Tue, Mar 05, 2024 at 11:38:37PM +0530, Bharath Rupireddy wrote:\n> On Tue, Mar 5, 2024 at 7:34 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> Is there any way to simplify this? For\n>> example, would it be possible to make an enum that tracks the\n>> streaming_replication_retry_interval state?\n> \n> I guess the way it is right now looks simple IMHO. If the suggestion\n> is to have an enum like below; it looks overkill for just two states.\n> \n> typedef enum\n> {\n> CAN_SWITCH_SOURCE,\n> SWITCH_SOURCE\n> } XLogSourceSwitchState;\n\nI was thinking of something more like\n\n\ttypedef enum\n\t{\n\t\tNO_FORCE_SWITCH_TO_STREAMING,\t\t/* no switch necessary */\n\t\tFORCE_SWITCH_TO_STREAMING_PENDING,\t/* exhausting pg_wal */\n\t\tFORCE_SWITCH_TO_STREAMING,\t\t\t/* switch to streaming now */\n\t} WALSourceSwitchState;\n\nAt least, that illustrates my mental model of the process here. 
IMHO\nthat's easier to follow than two similarly-named bool variables.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 5 Mar 2024 13:52:12 -0600", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Wed, Mar 6, 2024 at 1:22 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> I was thinking of something more like\n>\n> typedef enum\n> {\n> NO_FORCE_SWITCH_TO_STREAMING, /* no switch necessary */\n> FORCE_SWITCH_TO_STREAMING_PENDING, /* exhausting pg_wal */\n> FORCE_SWITCH_TO_STREAMING, /* switch to streaming now */\n> } WALSourceSwitchState;\n>\n> At least, that illustrates my mental model of the process here. IMHO\n> that's easier to follow than two similarly-named bool variables.\n\nI played with that idea and it came out very nice. Please see the\nattached v22 patch. Note that personally I didn't like \"FORCE\" being\nthere in the names, so I've simplified them a bit.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 6 Mar 2024 10:02:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Wed, Mar 06, 2024 at 10:02:43AM +0530, Bharath Rupireddy wrote:\n> On Wed, Mar 6, 2024 at 1:22 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> I was thinking of something more like\n>>\n>> typedef enum\n>> {\n>> NO_FORCE_SWITCH_TO_STREAMING, /* no switch necessary */\n>> FORCE_SWITCH_TO_STREAMING_PENDING, /* exhausting pg_wal */\n>> FORCE_SWITCH_TO_STREAMING, /* switch to streaming now */\n>> } WALSourceSwitchState;\n>>\n>> At least, that illustrates my mental model of the process here. 
IMHO\n>> that's easier to follow than two similarly-named bool variables.\n> \n> I played with that idea and it came out very nice. Please see the\n> attached v22 patch. Note that personally I didn't like \"FORCE\" being\n> there in the names, so I've simplified them a bit.\n\nThanks. I'd like to spend some time testing this, but from a glance, the\ncode appears to be in decent shape.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 6 Mar 2024 10:19:26 -0600", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Wed, Mar 6, 2024 at 9:49 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> > I played with that idea and it came out very nice. Please see the\n> > attached v22 patch. Note that personally I didn't like \"FORCE\" being\n> > there in the names, so I've simplified them a bit.\n>\n> Thanks. I'd like to spend some time testing this, but from a glance, the\n> code appears to be in decent shape.\n\nRebase needed after 071e3ad59d6fd2d6d1277b2bd9579397d10ded28 due to a\nconflict in meson.build. Please see the attached v23 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 17 Mar 2024 11:37:58 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Sun, Mar 17, 2024 at 11:37:58AM +0530, Bharath Rupireddy wrote:\n> Rebase needed after 071e3ad59d6fd2d6d1277b2bd9579397d10ded28 due to a\n> conflict in meson.build. Please see the attached v23 patch.\n\nI've been reading this patch, and this is a very tricky one. 
Please\nbe *very* cautious.\n\n+#streaming_replication_retry_interval = 0 # time after which standby\n+ # attempts to switch WAL source from archive to\n+ # streaming replication in seconds; 0 disables\n\nThis stuff allows a minimal retry interval of 1s. Could it be useful\nto have more responsiveness here and allow lower values than that?\nWhy not switch the units to milliseconds?\n\n+ if (streaming_replication_retry_interval <= 0 ||\n+ !StandbyMode ||\n+ currentSource != XLOG_FROM_ARCHIVE)\n+ return SWITCH_TO_STREAMING_NONE;\n\nHmm. Perhaps this should mention why we don't care about the\nconsistent point.\n\n+ /* See if we can switch WAL source to streaming */\n+ if (wal_source_switch_state == SWITCH_TO_STREAMING_NONE)\n+ wal_source_switch_state = SwitchWALSourceToPrimary();\n\nRather than a routine that returns the value to use for the\nGUC as its result, I'd suggest letting this routine set the GUC, as there is only one\ncaller of SwitchWALSourceToPrimary(). This can also include a check\non SWITCH_TO_STREAMING_NONE, based on what I'm reading there.\n\n- if (lastSourceFailed)\n+ if (lastSourceFailed ||\n+ wal_source_switch_state == SWITCH_TO_STREAMING) \n\nHmm. This one may be tricky. I'd recommend a separation between the\nfailure in reading from a source and the switch to a new \"forced\"\nsource.\n\n+ if (wal_source_switch_state == SWITCH_TO_STREAMING_PENDING)\n+ readFrom = XLOG_FROM_PG_WAL;\n+ else\n+ readFrom = currentSource == XLOG_FROM_ARCHIVE ?\n+ XLOG_FROM_ANY : currentSource;\n\nWALSourceSwitchState looks confusing here, and are you sure that this\nis actually correct? Shouldn't we still try a READ_FROM_ANY or a read\nfrom the archives even with a streaming pending?\n\nBy the way, I am not convinced that what you have is the best\ninterface ever. This assumes that we'd always want to switch to\nstreaming more aggressively. 
Could there be a point in also\ncontrolling if we should switch to pg_wal/ or just to archiving more\naggressively as well, aka be able to do the opposite switch of WAL\nsource? This design looks somewhat limited to me. The origin of the\nissue is that we don't have a way to control the order of the sources\nconsumed by WAL replay. Perhaps something like a replay_source_order\nthat uses a list would be better, with elements settable to archive,\npg_wal and streaming?\n--\nMichael", "msg_date": "Mon, 18 Mar 2024 15:08:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "On Mon, Mar 18, 2024 at 11:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Mar 17, 2024 at 11:37:58AM +0530, Bharath Rupireddy wrote:\n> > Rebase needed after 071e3ad59d6fd2d6d1277b2bd9579397d10ded28 due to a\n> > conflict in meson.build. Please see the attached v23 patch.\n>\n> I've been reading this patch, and this is a very tricky one. Please\n> be *very* cautious.\n\nThanks for looking into this.\n\n> +#streaming_replication_retry_interval = 0 # time after which standby\n> + # attempts to switch WAL source from archive to\n> + # streaming replication in seconds; 0 disables\n>\n> This stuff allows a minimal retry interval of 1s. Could it be useful\n> to have more responsiveness here and allow lower values than that?\n> Why not switching the units to be milliseconds?\n\nNathan had a different view on this to have it on the order of seconds\n- https://www.postgresql.org/message-id/20240305020452.GA3373526%40nathanxps13.\nIf set to a too low value, the frequency of standby trying to connect\nto primary increases. IMO, the order of seconds seems fine.\n\n> + if (streaming_replication_retry_interval <= 0 ||\n> + !StandbyMode ||\n> + currentSource != XLOG_FROM_ARCHIVE)\n> + return SWITCH_TO_STREAMING_NONE;\n>\n> Hmm. 
Perhaps this should mention why we don't care about the\n> consistent point.\n\nAre you asking why we don't care whether the standby reached a\nconsistent point when switching to streaming mode due to this new\nparameter? If this is the ask, the same applies when a standby\ntypically switches to streaming replication (get WAL\nfrom primary) today, that is when receive from WAL archive finishes\n(no more WAL left there) or fails for any reason. The standby doesn't\ncare about the consistent point even today, it just trusts the WAL\nsource and makes the switch.\n\n> + /* See if we can switch WAL source to streaming */\n> + if (wal_source_switch_state == SWITCH_TO_STREAMING_NONE)\n> + wal_source_switch_state = SwitchWALSourceToPrimary();\n>\n> Rather than a routine that returns as result the value to use for the\n> GUC, I'd suggest to let this routine set the GUC as there is only one\n> caller of SwitchWALSourceToPrimary(). This can also include a check\n> on SWITCH_TO_STREAMING_NONE, based on what I'm reading that.\n\nFirstly, wal_source_switch_state is not a GUC, it's a static variable\nto be used across WaitForWALToBecomeAvailable calls. And, if you are\nsuggesting to turn SwitchWALSourceToPrimary so that it sets\nwal_source_switch_state directly, I'd not do that because when\nwal_source_switch_state is not SWITCH_TO_STREAMING_NONE, the function\ngets called unnecessarily.\n\n> - if (lastSourceFailed)\n> + if (lastSourceFailed ||\n> + wal_source_switch_state == SWITCH_TO_STREAMING)\n>\n> Hmm. This one may be tricky. I'd recommend a separation between the\n> failure in reading from a source and the switch to a new \"forced\"\n> source.\n\nSeparation would just add duplicate code. 
Moreover, the code wrapped\nwithin if (lastSourceFailed) doesn't do any error handling or such; it\njust resets a few things from the previous source and sets the next\nsource.\n\nFWIW, please check [1] (and the discussion thereon) for how the\nlastSourceFailed flag is being used to consume all the streamed WAL in\npg_wal directly upon detecting the promotion trigger file. Therefore, I\nsee no problem with the way it is right now for this new feature.\n\n[1]\n /*\n * Data not here yet. Check for trigger, then wait for\n * walreceiver to wake us up when new WAL arrives.\n */\n if (CheckForStandbyTrigger())\n {\n /*\n * Note that we don't return XLREAD_FAIL immediately\n * here. After being triggered, we still want to\n * replay all the WAL that was already streamed. It's\n * in pg_wal now, so we just treat this as a failure,\n * and the state machine will move on to replay the\n * streamed WAL from pg_wal, and then recheck the\n * trigger and exit replay.\n */\n lastSourceFailed = true;\n\n> + if (wal_source_switch_state == SWITCH_TO_STREAMING_PENDING)\n> + readFrom = XLOG_FROM_PG_WAL;\n> + else\n> + readFrom = currentSource == XLOG_FROM_ARCHIVE ?\n> + XLOG_FROM_ANY : currentSource;\n>\n> WALSourceSwitchState looks confusing here, and are you sure that this\n> is actually correct? Shouldn't we still try a READ_FROM_ANY or a read\n> from the archives even with a streaming pending?\n\nPlease see the discussion starting from\nhttps://www.postgresql.org/message-id/20221008215221.GA894639%40nathanxps13.\nWe wanted to keep the existing behaviour the same when we\nintentionally switch source to streaming from archive due to the\ntimeout. The existing behaviour is to exhaust WAL in pg_wal before\nswitching the source to streaming after failure to fetch from archive.\n\nWhen wal_source_switch_state is SWITCH_TO_STREAMING_PENDING, the\ncurrentSource is already XLOG_FROM_ARCHIVE (to clear the dust off\nhere, I've added an assert now in the attached new v24 patch). 
And, we\ndon't want to pass XLOG_FROM_ANY to XLogFileReadAnyTLI to again fetch\nfrom the archive. Hence, we choose readFrom = XLOG_FROM_PG_WAL to\nspecifically tell XLogFileReadAnyTLI read from pg_wal directly.\n\n> By the way, I am not convinced that what you have is the best\n> interface ever. This assumes that we'd always want to switch to\n> streaming more aggressively. Could there be a point in also\n> controlling if we should switch to pg_wal/ or just to archiving more\n> aggressively as well, aka be able to do the opposite switch of WAL\n> source? This design looks somewhat limited to me. The origin of the\n> issue is that we don't have a way to control the order of the sources\n> consumed by WAL replay. Perhaps something like a replay_source_order\n> that uses a list would be better, with elements settable to archive,\n> pg_wal and streaming?\n\nIntention of this feature is to provide a way for the streaming\nstandby to quickly detect when the primary is up and running without\nhaving to wait until either all the WAL in the archive is over or a\nfailure to fetch from archive happens. Advantages of this feature are:\n1) it can make the recovery a bit faster (if fetching from archive\nadds up costs with different storage types, IO costs and network\ndelays), thus can reduce the replication lag 2) primary (if using\nreplication slot based streaming replication setup) doesn't have to\nkeep the required WAL for the standby for longer durations, thus\nreducing the risk of no space left on disk issues.\n\nIMHO, it makes sense to have something like replay_source_order if\nthere's any use case that arises in future requiring the standby to\nintentionally switch to pg_wal or archive. But not as part of this\nfeature.\n\nPlease see the attached v24 patch. I've added an assertion that the\ncurrent source is archive before calling XLogFileReadAnyTLI if\nwal_source_switch_state is SWITCH_TO_STREAMING_PENDING. 
I've also\nadded the new enum WALSourceSwitchState to typedefs.list to make\npgindent adjust it correctly.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 23 Mar 2024 14:52:55 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "Hi,\n\nI took a brief look at the patch.\n\nOn the motivation aspect, I can see this being useful for synchronous\nreplicas if you have commit set to flush mode.\nSo +1 on the feature and easier configurability, although thinking about it\nmore you could probably have the restore script be smarter and provide\nnon-zero exit codes periodically.\n\nThe patch needs to be rebased but I tested this against an older 17 build.\n\n> + ereport(DEBUG1,\n> + errmsg_internal(\"switched WAL source from %s to %s after %s\",\n> + xlogSourceNames[oldSource],\n\nNot sure if you're intentionally changing to DEBUG1 from DEBUG2.\n\n> * standby and increase the replication lag on primary.\n\nDo you mean \"increase replication lag on standby\"?\nnit: reading from archive *could* be faster since, in theory, it's\nnot single-processed/threaded.\n\n> However,\n> + * exhaust all the WAL present in pg_wal before switching. If successful,\n> + * the state machine moves to XLOG_FROM_STREAM state, otherwise it falls\n> + * back to XLOG_FROM_ARCHIVE state.\n\nI think I'm missing how this happens. Or what \"successful\" means. If I'm reading\nit right, no matter what happens we will always move to\nXLOG_FROM_STREAM based on how\nthe state machine works?\n\nI tested this in a basic RR setup without replication slots (e.g. log\nshipping) where the\nWAL is available in the archive but the primary always has the WAL\nrotated out and\n'streaming_replication_retry_interval = 1'. 
This leads the RR to\nbecome stuck where it stops fetching from\narchive and loops between XLOG_FROM_PG_WAL and XLOG_FROM_STREAM.\n\nWhen 'streaming_replication_retry_interval' is breached, we transition\nfrom {currentSource, wal_source_switch_state}\n\n{XLOG_FROM_ARCHIVE, SWITCH_TO_STREAMING_NONE} -> {XLOG_FROM_ARCHIVE,\nSWITCH_TO_STREAMING_PENDING} with readFrom = XLOG_FROM_PG_WAL.\n\nThat reads the last record successfully in pg_wal and then fails to\nread the next one because it doesn't exist, transitioning to\n\n{XLOG_FROM_STREAM, SWITCH_TO_STREAMING_PENDING}.\n\nXLOG_FROM_STREAM fails because the WAL is no longer there on primary,\nit sets it back to {XLOG_FROM_ARCHIVE, SWITCH_TO_STREAMING_PENDING}.\n\n> last_fail_time = now;\n> currentSource = XLOG_FROM_ARCHIVE;\n> break;\n\nSince the state is still SWITCH_TO_STREAMING_PENDING from the previous\nloops, it forces\n\n> Assert(currentSource == XLOG_FROM_ARCHIVE);\n> readFrom = XLOG_FROM_PG_WAL;\n> ...\n> readFile = XLogFileReadAnyTLI(readSegNo, DEBUG2, readFrom);\n\nAnd this readFile call seems to always succeed since it can read the\nlatest WAL record but not the next one, which is in archive, leading\nto transition back to XLOG_FROM_STREAMING and repeats.\n\n\n> /*\n> * Nope, not found in archive or pg_wal.\n> */\n> lastSourceFailed = true;\n\nI don't think this gets triggered for XLOG_FROM_PG_WAL case, which\nmeans the safety\ncheck you added doesn't actually kick in.\n\n> if (wal_source_switch_state == SWITCH_TO_STREAMING_PENDING)\n> {\n> wal_source_switch_state = SWITCH_TO_STREAMING;\n> elog(LOG, \"SWITCH_TO_STREAMING_PENDING TO SWITCH_TO_STREAMING\");\n> }\n\nThanks\n-- \nJohn Hsu - Amazon Web Services\n\n\n", "msg_date": "Thu, 22 Aug 2024 16:33:14 -0700", "msg_from": "John H <johnhyvr@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "Hi,\n\nThanks for looking into this.\n\nOn Fri, Aug 23, 2024 at 5:03 AM 
John H <johnhyvr@gmail.com> wrote:\n>\n> For a motivation aspect I can see this being useful\n> synchronous_replicas if you have commit set to flush mode.\n\nIn synchronous replication setup, until standby finishes fetching WAL\nfrom the archive, the commits on the primary have to wait which can\nincrease the query latency. If the standby can connect to the primary\nas soon as the broken connection is restored, it can fetch the WAL\nsoon and transaction commits can continue on the primary. Is my\nunderstanding correct? Is there anything more to this?\n\nI talked to Michael Paquier at PGConf.Dev 2024 and got some concerns\nabout this feature for dealing with changing timelines. I can't think\nof them right now.\n\nAnd, there were some cautions raised upthread -\nhttps://www.postgresql.org/message-id/20240305020452.GA3373526%40nathanxps13\nand https://www.postgresql.org/message-id/ZffaQt7UbM2Q9kYh%40paquier.xyz.\n\n> So +1 on feature, easier configurability, although thinking about it\n> more you could probably have the restore script be smarter and provide\n> non-zero exit codes periodically.\n\nInteresting. Yes, the restore script has to be smarter to detect the\nbroken connections and distinguish whether the server is performing\njust the archive recovery/PITR or streaming from standby. Not doing it\nright, perhaps, can cause data loss (?).\n\n> The patch needs to be rebased but I tested this against an older 17 build.\n\nWill rebase soon.\n\n> > + ereport(DEBUG1,\n> > + errmsg_internal(\"switched WAL source from %s to %s after %s\",\n> > + xlogSourceNames[oldSource],\n>\n> Not sure if you're intentionally changing from DEBUG1 from DEBUG2.\n\nWill change.\n\n> > * standby and increase the replication lag on primary.\n>\n> Do you mean \"increase replication lag on standby\"?\n> nit: reading from archive *could* be faster since you could in theory\n> it's not single-processed/threaded.\n\nYes. 
I think we can just say \"All of these can impact the recovery\nperformance on\n+ * standby and increase the replication lag.\"\n\n> > However,\n> > + * exhaust all the WAL present in pg_wal before switching. If successful,\n> > + * the state machine moves to XLOG_FROM_STREAM state, otherwise it falls\n> > + * back to XLOG_FROM_ARCHIVE state.\n>\n> I think I'm missing how this happens. Or what \"successful\" means. If I'm reading\n> it right, no matter what happens we will always move to\n> XLOG_FROM_STREAM based on how\n> the state machine works?\n\nPlease have a look at some discussion upthread on exhausting pg_wal\nbefore switching -\nhttps://www.postgresql.org/message-id/20230119005014.GA3838170%40nathanxps13.\nEven today, the standby exhausts pg_wal before switching to streaming\nfrom the archive.\n\n> I tested this in a basic RR setup without replication slots (e.g. log\n> shipping) where the\n> WAL is available in the archive but the primary always has the WAL\n> rotated out and\n> 'streaming_replication_retry_interval = 1'. This leads the RR to\n> become stuck where it stops fetching from\n> archive and loops between XLOG_FROM_PG_WAL and XLOG_FROM_STREAM.\n\nNice catch. This is a problem. One idea is to disable\nstreaming_replication_retry_interval feature for slot-less streaming\nreplication - either when primary_slot_name isn't specified disallow\nthe GUC to be set in assign_hook or when deciding to switch the wal\nsource. 
Thoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 29 Aug 2024 19:02:15 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" }, { "msg_contents": "Hi,\n\nOn Thu, Aug 29, 2024 at 6:32 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> In synchronous replication setup, until standby finishes fetching WAL\n> from the archive, the commits on the primary have to wait which can\n> increase the query latency. If the standby can connect to the primary\n> as soon as the broken connection is restored, it can fetch the WAL\n> soon and transaction commits can continue on the primary. Is my\n> understanding correct? Is there anything more to this?\n>\n\nYup, if you're running with synchronous_commit = 'on' with\nsynchronous_replicas, then you can\nhave the replica continue streaming changes into pg_wal faster than\nWAL replay so commits\nmay be unblocked faster.\n\n> I talked to Michael Paquier at PGConf.Dev 2024 and got some concerns\n> about this feature for dealing with changing timelines. I can't think\n> of them right now.\n\nI'm not sure what the risk would be if the WAL/history files we sync\nfrom streaming is the same as\nwe replay from archive.\n\n\n> And, there were some cautions raised upthread -\n> https://www.postgresql.org/message-id/20240305020452.GA3373526%40nathanxps13\n> and https://www.postgresql.org/message-id/ZffaQt7UbM2Q9kYh%40paquier.xyz.\n\nYup agreed. I need to understand this area a lot better before I can\ndo a more in-depth review.\n\n> Interesting. Yes, the restore script has to be smarter to detect the\n> broken connections and distinguish whether the server is performing\n> just the archive recovery/PITR or streaming from standby. 
Not doing it\n> right, perhaps, can cause data loss (?).\n\nI don't think there would be data-loss, only replay is stuck/slows down.\nIt wouldn't be any different today if the restore-script returned a\nnon-zero exit status.\nThe end-user could configure their restore-script to return a non-zero\nstatus, based on some\ncondition, to move to streaming.\n\n> > > However,\n> > > + * exhaust all the WAL present in pg_wal before switching. If successful,\n> > > + * the state machine moves to XLOG_FROM_STREAM state, otherwise it falls\n> > > + * back to XLOG_FROM_ARCHIVE state.\n> >\n> > I think I'm missing how this happens. Or what \"successful\" means. If I'm reading\n> > it right, no matter what happens we will always move to\n> > XLOG_FROM_STREAM based on how\n> > the state machine works?\n>\n> Please have a look at some discussion upthread on exhausting pg_wal\n> before switching -\n> https://www.postgresql.org/message-id/20230119005014.GA3838170%40nathanxps13.\n> Even today, the standby exhausts pg_wal before switching to streaming\n> from the archive.\n>\n\nI'm getting caught on the word \"successful\". My rough understanding of\nWaitForWALToBecomeAvailable is that once you're in XLOG_FROM_PG_WAL, if it was\nunsuccessful for whatever reason, it will still transition to\nXLOG_FROM_STREAMING.\nIt does not loop back to XLOG_FROM_ARCHIVE if XLOG_FROM_PG_WAL fails.\n\n> Nice catch. This is a problem. One idea is to disable\n> streaming_replication_retry_interval feature for slot-less streaming\n> replication - either when primary_slot_name isn't specified disallow\n> the GUC to be set in assign_hook or when deciding to switch the wal\n> source. Thoughts?\n\nI don't think it's dependent on slot-less streaming. 
You would also run into the\nissue if the WAL is no longer there on the primary, which can occur\nwith 'max_slot_wal_keep_size'\nas well.\nIMO the guarantee we need to make is that when we transition from\nXLOG_FROM_STREAMING to\nXLOG_FROM_ARCHIVE for a \"fresh start\", we should attempt to restore\nfrom archive at least once.\nI think this means that wal_source_switch_state should be reset back\nto SWITCH_TO_STREAMING_NONE\nwhenever we transition to XLOG_FROM_ARCHIVE.\nWe've attempted the switch to streaming once, so let's not continually\nre-try if it failed.\n\nThanks,\n\n-- \nJohn Hsu - Amazon Web Services\n\n\n", "msg_date": "Thu, 29 Aug 2024 13:58:31 -0700", "msg_from": "John H <johnhyvr@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Switching XLog source from archive to streaming when primary\n available" } ]
[ { "msg_contents": "Hi,\r\n\r\nI found several typos in comments and README. See patch attached.\r\n\r\nBest regards,\r\nLingjie Qiang", "msg_date": "Mon, 29 Nov 2021 01:01:55 +0000", "msg_from": "\"qianglj.fnst@fujitsu.com\" <qianglj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Fix typos" }, { "msg_contents": "On Mon, Nov 29, 2021 at 01:01:55AM +0000, qianglj.fnst@fujitsu.com wrote:\n> I found several typos in comments and README. See patch attached.\n\nThanks, fixed.\n--\nMichael", "msg_date": "Tue, 30 Nov 2021 11:09:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix typos" } ]
[ { "msg_contents": "This isn't flagged with GUC_EXPLAIN:\nenable_incremental_sort\n\nHere's some more candidates:\nshared_buffers - it seems unfortunate this is not included; actually, it seems\n\tlike maybe this should be always included - not just if it's set to a\n\tnon-default, but especially if it's left at the default..\n\tI suppose it's more important for DML than SELECT.\ntemp_tablespaces isn't, but temp_buffers is;\nautovacuum - if it's off, that can be the cause of the issue (same as force_parallel_mode, which has GUC_EXPLAIN);\nmax_worker_processes - isn't, but these are: max_parallel_workers, max_parallel_workers_per_gather;\ntrack_io_timing - it can have high overhead;\nsession_preload_libraries, shared_preload_libraries, local_preload_libraries;\ndebug_assertions - it'd be kind of nice to show this whenever it's true (even though it's not \"changed\")\ndebug_discard_caches\nlc_collate and lc_ctype ?\nserver_version_num - I asked about this in the past, but Tom thought it should\n\tnot be included, and I kind of agree, but I'm including it here for\n\tcompleteness sake\n\nThis isn't marked GUC_NOT_IN_SAMPLE, like all other DEVELOPER_OPTIONS:\ntrace_recovery_messages\n\nI'm not sure jit_tuple_deforming should be marked GUC_NOT_IN_SAMPLE.\nI disable this one because it's slow for tables with many attributes.\nSame for jit_expressions ?\n\nbgwriter_lru_maxpages should have GUC_UNIT_BLOCKS\n\nmax_identifier_length should have BYTES (like log_parameter_max_length and\ntrack_activity_query_size).\n\nblock_size and wal_block_size should say BYTES (like wal_segment_size)\nAnd all three descriptions should say \".. 
in [prefix]bytes\" (but see below).\n\nMaybe these should have COMPAT_OPTIONS_PREVIOUS:\ninteger_datetimes\nssl_renegotiation_limit\n\nautovacuum_freeze_max_age has a comment about pg_resetwal which is obsolete\nsince 74cf7d46a.\n\ncheckpoint_warning refers to \"checkpoint segments\", which is obsolete since\n88e982302.\n\nThe attached does the least-disputable, lowest hanging fruit.\n\nMore ideas:\n\nMaybe maintenance_io_concurrency should not be GUC_EXPLAIN. But it's used not\nonly by ANALYZE but also heap_index_delete_tuples.\n\nShould these be GUC_RUNTIME_COMPUTED?\nin_hot_standby, data_directory_mode\n\nSince GUC_DISALLOW_IN_FILE effectively implies GUC_NOT_IN_SAMPLE in\nsrc/backend/utils/misc/help_config.c:displayStruct(), many of the redundant\nGUC_NOT_IN_SAMPLE could be removed.\n\nI think several things with COMPAT_OPTIONS_PREVIOUS could have \nGUC_NO_SHOW_ALL and/or GUC_NOT_IN_SAMPLE.\n\nThe GUC descriptions are a hodge podge of full sentences and telegrams.\nThere's no consistency whether the long description can be read independently\nfrom the short description. 
For these GUCs, the short description reads more\nlike a \"DETAIL\" message:\n|trace_recovery_messages, log_min_error_statement, log_min_messages, client_min_messages\n|log_transaction_sample_rate, log_statement_sample_rate\n|data_directory_mode, log_file_mode, unix_socket_permissions\n|log_directory, log_destination, log_line_prefix\n|unix_socket_group, default_tablespace, DateStyle, maintenance_work_mem, geqo_generations\n\nFor integer/real GUCs, the long description is being used just to describe the\n\"special\" values:\n|jit_inline_above_cost, jit_optimize_above_cost, jit_above_cost, log_startup_progress_interval,\n|tcp_user_timeout, tcp_keepalives_interval, tcp_keepalives_idle, log_temp_files, old_snapshot_threshold,\n|log_parameter_max_length_on_error, log_parameter_max_length, log_autovacuum_min_duration, log_min_duration_sample,\n|idle_session_timeout, idle_in_transaction_session_timeout, lock_timeout,\n|statement_timeout, shared_memory_size_in_huge_pages\n\nDescriptions of some GUCs describe their default units, but other's don't.\nThe units are not very important for input, since a non-default unit can be\nspecified, like SET statement_timeout='1h'. It's important for output, and\nSHOW already includes a unit, which may not be the default unit. 
So I think \nthe default units should be removed from the descriptions.\n\nThis cleanup is similar to GUC categories fixed in a55a98477.\nTom was of the impression that there's more loose ends on that thread.\nhttps://www.postgresql.org/message-id/flat/16997-ff16127f6e0d1390@postgresql.org\n\n-- \nJustin", "msg_date": "Sun, 28 Nov 2021 21:08:33 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "GUC flags" }, { "msg_contents": "On Sun, Nov 28, 2021 at 09:08:33PM -0600, Justin Pryzby wrote:\n> This isn't flagged with GUC_EXPLAIN:\n> enable_incremental_sort\n\nYeah, that's inconsistent.\n\n> This isn't marked GUC_NOT_IN_SAMPLE, like all other DEVELOPER_OPTIONS:\n> trace_recovery_messages\n\nIndeed.\n\n> I'm not sure jit_tuple_deforming should be marked GUC_NOT_IN_SAMPLE.\n> I disable this one because it's slow for tables with many attributes.\n> Same for jit_expressions ?\n\nThat would be consistent. Both are not in postgresql.conf.sample.\n\n> bgwriter_lru_maxpages should have GUC_UNIT_BLOCKS\n>\n> max_identifier_length should have BYTES (like log_parameter_max_length and\n> track_activity_query_size).\n>\n> block_size and wal_block_size should say BYTES (like wal_segment_size)\n> And all three descriptions should say \".. in [prefix]bytes\" (but see below).\n\nOkay for these.\n\n> Maybe these should have COMPAT_OPTIONS_PREVIOUS:\n> integer_datetimes\n> ssl_renegotiation_limit\n\nHmm. Okay as well for integer_datetimes.\n\n> autovacuum_freeze_max_age has a comment about pg_resetwal which is obsolete\n> since 74cf7d46a.\n> \n> checkpoint_warning refers to \"checkpoint segments\", which is obsolete since\n> 88e982302.\n\nThat's part of 0002. 
That's a bit weird to use now, so I'd agree with\nyour suggestion to use \"WAL segments\" instead.\n\n0001, to adjust the units, and 0003, to make the GUC descriptions less\nunit-dependent, are good ideas.\n\n- gettext_noop(\"Use of huge pages on Linux or Windows.\"),\n+ gettext_noop(\"Enable use of huge pages on Linux or Windows.\"),\nThis should be \"Enables use of\".\n\n {\"compute_query_id\", PGC_SUSET, STATS_MONITORING,\n- gettext_noop(\"Compute query identifiers.\"),\n+ gettext_noop(\"Enables in-core computation of a query identifier.\"),\nThis could just be \"Computes\"?\n\nI am not completely sure that all the contents of 0002 are\nimprovements, but the suggestions done for huge_pages,\nssl_passphrase_command_supports_reload, checkpoint_warning and\ncommit_siblings seem fine to me.\n--\nMichael", "msg_date": "Mon, 29 Nov 2021 17:04:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Mon, Nov 29, 2021 at 05:04:01PM +0900, Michael Paquier wrote:\n> 0001, to adjust the units, and 0003, to make the GUC descriptions less\n> unit-dependent, are good ideas.\n\nActually, after sleeping on it and doing some digging in\ncodesearch.debian.org, changing the units of max_identifier_length,\nblock_size and wal_block_size could induce some breakages for anything\nusing a SHOW command, something that becomes particularly worse now\nthat SHOW is supported in replication connections, and it would force \nclients to know and parse the units of a value. Perhaps I am being\ntoo careful here, but that could harm a lot of users. 
It is worth\nnoting that I have found some driver code making use of pg_settings,\nwhich would not be influenced by such a change, but it is unsafe to\nassume that everybody does that.\n\nThe addition of GUC_EXPLAIN for enable_incremental_sort, the comment\nfix for autovacuum_freeze_max_age, the use of COMPAT_OPTIONS_PREVIOUS\nfor ssl_renegotiation_limit and the addition of GUC_NOT_IN_SAMPLE for\ntrace_recovery_messages are fine, though.\n\n> I am not completely sure that all the contents of 0002 are\n> improvements, but the suggestions done for huge_pages,\n> ssl_passphrase_command_supports_reload, checkpoint_warning and\n> commit_siblings seem fine to me.\n\nHmm, I think the patched description of checkpoint_warning is not that\nmuch an improvement compared to the current one. While the current\ndescription uses the term \"checkpoint segments\", which is, I agree,\nweird. The new one would lose the term \"checkpoint\", making the short\ndescription of the parameter lose some of its context.\n\nI have done a full review of the patch set, and applied the obvious\nfixes/improvements as of be54551. Attached is an extra patch based on\nthe contents of the whole set sent upthread:\n- Improvement of the description of checkpoint_segments.\n- Reworded the description of all parameters using \"N units\", rather\nthan just switching to \"this period of units\". 
I have been using\nsomething more generic.\n\nThoughts?\n--\nMichael", "msg_date": "Tue, 30 Nov 2021 15:36:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Tue, Nov 30, 2021 at 03:36:45PM +0900, Michael Paquier wrote:\n> -\t\t\tgettext_noop(\"Forces a switch to the next WAL file if a \"\n> -\t\t\t\t\t\t \"new file has not been started within N seconds.\"),\n> +\t\t\tgettext_noop(\"Sets the amount of time to wait before forcing a \"\n> +\t\t\t\t\t\t \"switch to the next WAL file.\"),\n\n..\n\n> -\t\t\tgettext_noop(\"Waits N seconds on connection startup after authentication.\"),\n> +\t\t\tgettext_noop(\"Sets the amount of seconds to wait on connection \"\n> +\t\t\t\t\t\t \"startup after authentication.\"),\n\n\"amount of time\", like above.\n\n> -\t\t\tgettext_noop(\"Waits N seconds on connection startup before authentication.\"),\n> +\t\t\tgettext_noop(\"Sets the amount of seconds to wait on connection \"\n> +\t\t\t\t\t\t \"startup before authentication.\"),\n\nsame\n\n> \t{\n> \t\t{\"checkpoint_warning\", PGC_SIGHUP, WAL_CHECKPOINTS,\n> -\t\t\tgettext_noop(\"Enables warnings if checkpoint segments are filled more \"\n> -\t\t\t\t\t\t \"frequently than this.\"),\n> +\t\t\tgettext_noop(\"Sets the maximum time before warning if checkpoints \"\n> +\t\t\t\t\t\t \"triggered by WAL volume happen too frequently.\"),\n> \t\t\tgettext_noop(\"Write a message to the server log if checkpoints \"\n> -\t\t\t\t\t\t \"caused by the filling of checkpoint segment files happens more \"\n> +\t\t\t\t\t\t \"caused by the filling of WAL segment files happens more \"\n\nIt should say \"happen\" , since it's referring to \"checkpoints\".\nThat was a pre-existing issue.\n\n> \t\t{\"log_parameter_max_length\", PGC_SUSET, LOGGING_WHAT,\n> -\t\t\tgettext_noop(\"When logging statements, limit logged parameter values to first N bytes.\"),\n> +\t\t\tgettext_noop(\"Sets the maximum amount of data logged for 
bind \"\n> +\t\t\t\t\t\t \"parameter values when logging statements.\"),\n\nI think this one should actually say \"in bytes\" or at least say \"maximum\nlength\". It seems unlikely that someone is going to specify this in other\nunits, and it's confusing to everyone else to refer to \"amount of data\" instead\nof \"length in bytes\".\n\n\n> \t\t{\"log_parameter_max_length_on_error\", PGC_USERSET, LOGGING_WHAT,\n> -\t\t\tgettext_noop(\"When reporting an error, limit logged parameter values to first N bytes.\"),\n> +\t\t\tgettext_noop(\"Sets the maximum amount of data logged for bind \"\n> +\t\t\t\t\t\t \"parameter values when logging statements, on error.\"),\n\nsame\n\n> -\t\t\tgettext_noop(\"Automatic log file rotation will occur after N minutes.\"),\n> +\t\t\tgettext_noop(\"Sets the maximum amount of time to wait before \"\n> +\t\t\t\t\t\t \"forcing log file rotation.\"),\n\nShould it say \"maximum\" ? Does that mean anything ?\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 1 Dec 2021 01:59:05 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Wed, Dec 01, 2021 at 01:59:05AM -0600, Justin Pryzby wrote:\n> On Tue, Nov 30, 2021 at 03:36:45PM +0900, Michael Paquier wrote:\n>> -\t\t\tgettext_noop(\"Waits N seconds on connection startup before authentication.\"),\n>> +\t\t\tgettext_noop(\"Sets the amount of seconds to wait on connection \"\n>> +\t\t\t\t\t\t \"startup before authentication.\"),\n> \n> same\n\nThanks. 
This makes things more consistent.\n\n>> \t{\n>> \t\t{\"checkpoint_warning\", PGC_SIGHUP, WAL_CHECKPOINTS,\n>> -\t\t\tgettext_noop(\"Enables warnings if checkpoint segments are filled more \"\n>> -\t\t\t\t\t\t \"frequently than this.\"),\n>> +\t\t\tgettext_noop(\"Sets the maximum time before warning if checkpoints \"\n>> +\t\t\t\t\t\t \"triggered by WAL volume happen too frequently.\"),\n>> \t\t\tgettext_noop(\"Write a message to the server log if checkpoints \"\n>> -\t\t\t\t\t\t \"caused by the filling of checkpoint segment files happens more \"\n>> +\t\t\t\t\t\t \"caused by the filling of WAL segment files happens more \"\n> \n> It should say \"happen\" , since it's referring to \"checkpoints\".\n> That was a pre-existing issue.\n\nIndeed.\n\n>> \t\t{\"log_parameter_max_length\", PGC_SUSET, LOGGING_WHAT,\n>> -\t\t\tgettext_noop(\"When logging statements, limit logged parameter values to first N bytes.\"),\n>> +\t\t\tgettext_noop(\"Sets the maximum amount of data logged for bind \"\n>> +\t\t\t\t\t\t \"parameter values when logging statements.\"),\n> \n> I think this one should actually say \"in bytes\" or at least say \"maximum\n> length\". It seems unlikely that someone is going to specify this in other\n> units, and it's confusing to everyone else to refer to \"amount of data\" instead\n> of \"length in bytes\".\n\nOkay. Do you like the updated version attached?\n\n>> -\t\t\tgettext_noop(\"Automatic log file rotation will occur after N minutes.\"),\n>> +\t\t\tgettext_noop(\"Sets the maximum amount of time to wait before \"\n>> +\t\t\t\t\t\t \"forcing log file rotation.\"),\n> \n> Should it say \"maximum\" ? 
Does that mean anything ?\n\nTo be consistent with the rest of your suggestions, we could use here:\n\"Sets the amount of time to wait before forcing log file rotation\"\n\nThanks,\n--\nMichael", "msg_date": "Wed, 1 Dec 2021 20:58:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "> @@ -2142,7 +2142,8 @@ static struct config_int ConfigureNamesInt[] =\n> \t\t{\"post_auth_delay\", PGC_BACKEND, DEVELOPER_OPTIONS,\n> -\t\t\tgettext_noop(\"Waits N seconds on connection startup after authentication.\"),\n> +\t\t\tgettext_noop(\"Sets the amount of time to wait on connection \"\n> +\t\t\t\t\t\t \"startup after authentication.\"),\n\n> @@ -2762,7 +2763,8 @@ static struct config_int ConfigureNamesInt[] =\n> \t\t{\"pre_auth_delay\", PGC_SIGHUP, DEVELOPER_OPTIONS,\n> -\t\t\tgettext_noop(\"Waits N seconds on connection startup before authentication.\"),\n> +\t\t\tgettext_noop(\"Sets the amount of time to wait on connection \"\n> +\t\t\t\t\t\t \"startup before authentication.\"),\n> \t\t\tgettext_noop(\"This allows attaching a debugger to the process.\"),\n\nI wonder if these should say \"Sets the amount of time to wait [before]\nauthentication during connection startup\"\n\n> \t\t{\"checkpoint_warning\", PGC_SIGHUP, WAL_CHECKPOINTS,\n> -\t\t\tgettext_noop(\"Enables warnings if checkpoint segments are filled more \"\n> -\t\t\t\t\t\t \"frequently than this.\"),\n> +\t\t\tgettext_noop(\"Sets the maximum time before warning if checkpoints \"\n> +\t\t\t\t\t\t \"triggered by WAL volume happen too frequently.\"),\n> \t\t\tgettext_noop(\"Write a message to the server log if checkpoints \"\n> -\t\t\t\t\t\t \"caused by the filling of checkpoint segment files happens more \"\n> +\t\t\t\t\t\t \"caused by the filling of WAL segment files happen more \"\n> \t\t\t\t\t\t \"frequently than this number of seconds. 
Zero turns off the warning.\"),\n\nShould this still say \"seconds\" ?\nOr change it to \"this amount of time\"?\nI'm not sure.\n\n> \t\t{\"log_rotation_age\", PGC_SIGHUP, LOGGING_WHERE,\n> -\t\t\tgettext_noop(\"Automatic log file rotation will occur after N minutes.\"),\n> +\t\t\tgettext_noop(\"Sets the amount of time to wait before forcing \"\n> +\t\t\t\t\t\t \"log file rotation.\"),\n> \t\t\tNULL,\n> \t\t\tGUC_UNIT_MIN\n> \t\t},\n> @@ -3154,7 +3159,8 @@ static struct config_int ConfigureNamesInt[] =\n> \n> \t{\n> \t\t{\"log_rotation_size\", PGC_SIGHUP, LOGGING_WHERE,\n> -\t\t\tgettext_noop(\"Automatic log file rotation will occur after N kilobytes.\"),\n> +\t\t\tgettext_noop(\"Sets the maximum size of log file to reach before \"\n> +\t\t\t\t\t\t \"forcing log file rotation.\"),\n\nActually, I think that for log_rotation_size, it should not say \"forcing\".\n\n\"Sets the maximum size a log file can reach before being rotated\"\n\nBTW the EXPLAIN flag for enable_incremental_sort could be backpatched to v13.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 1 Dec 2021 21:34:39 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Wed, Dec 01, 2021 at 09:34:39PM -0600, Justin Pryzby wrote:\n>> @@ -2762,7 +2763,8 @@ static struct config_int ConfigureNamesInt[] =\n>> \t\t{\"pre_auth_delay\", PGC_SIGHUP, DEVELOPER_OPTIONS,\n>> -\t\t\tgettext_noop(\"Waits N seconds on connection startup before authentication.\"),\n>> +\t\t\tgettext_noop(\"Sets the amount of time to wait on connection \"\n>> +\t\t\t\t\t\t \"startup before authentication.\"),\n>> \t\t\tgettext_noop(\"This allows attaching a debugger to the process.\"),\n> \n> I wonder if these should say \"Sets the amount of time to wait [before]\n> authentication during connection startup\"\n\nHmm. 
I don't see much a difference between both of wordings in this\ncontext.\n\n>> \t\t\tgettext_noop(\"Write a message to the server log if checkpoints \"\n>> -\t\t\t\t\t\t \"caused by the filling of checkpoint segment files happens more \"\n>> +\t\t\t\t\t\t \"caused by the filling of WAL segment files happen more \"\n>> \t\t\t\t\t\t \"frequently than this number of seconds. Zero turns off the warning.\"),\n> \n> Should this still say \"seconds\" ?\n> Or change it to \"this amount of time\"?\n> I'm not sure.\n\nEither way would be fine by me, though I'd agree to be consistent and\nuse \"this amount of time\" here.\n\n>> \t\t{\"log_rotation_size\", PGC_SIGHUP, LOGGING_WHERE,\n>> -\t\t\tgettext_noop(\"Automatic log file rotation will occur after N kilobytes.\"),\n>> +\t\t\tgettext_noop(\"Sets the maximum size of log file to reach before \"\n>> +\t\t\t\t\t\t \"forcing log file rotation.\"),\n> \n> Actually, I think that for log_rotation_size, it should not say \"forcing\".\n> \n> \"Sets the maximum size a log file can reach before being rotated\"\n\nOkay. Fine by me.\n\n> BTW the EXPLAIN flag for enable_incremental_sort could be backpatched to v13.\n\nThis could cause small diffs in EXPLAIN outputs, which could be\nsurprising. 
This is not worth taking any risks.\n--\nMichael", "msg_date": "Thu, 2 Dec 2021 14:11:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Thu, Dec 02, 2021 at 02:11:38PM +0900, Michael Paquier wrote:\n> On Wed, Dec 01, 2021 at 09:34:39PM -0600, Justin Pryzby wrote:\n> >> @@ -2762,7 +2763,8 @@ static struct config_int ConfigureNamesInt[] =\n> >> \t\t{\"pre_auth_delay\", PGC_SIGHUP, DEVELOPER_OPTIONS,\n> >> -\t\t\tgettext_noop(\"Waits N seconds on connection startup before authentication.\"),\n> >> +\t\t\tgettext_noop(\"Sets the amount of time to wait on connection \"\n> >> +\t\t\t\t\t\t \"startup before authentication.\"),\n> >> \t\t\tgettext_noop(\"This allows attaching a debugger to the process.\"),\n> > \n> > I wonder if these should say \"Sets the amount of time to wait [before]\n> > authentication during connection startup\"\n> \n> Hmm. I don't see much a difference between both of wordings in this\n> context.\n\nI find it easier to read \"wait before authentication ...\" than \"wait ... before\nauthentication\".\n\n> > BTW the EXPLAIN flag for enable_incremental_sort could be backpatched to v13.\n> \n> This could cause small diffs in EXPLAIN outputs, which could be\n> surprising. This is not worth taking any risks.\n\nOnly if one specifies explain(SETTINGS).\nIt's fine either way ;)\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 1 Dec 2021 23:17:34 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Wed, Dec 01, 2021 at 11:17:34PM -0600, Justin Pryzby wrote:\n> I find it easier to read \"wait before authentication ...\" than \"wait ... before\n> authentication\".\n\nI have a hard time seeing a strong difference here. 
At the end, I\nhave used what you suggested, adjusted the rest based on your set of\ncomments, and applied the patch.\n--\nMichael", "msg_date": "Fri, 3 Dec 2021 10:06:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Fri, Dec 03, 2021 at 10:06:47AM +0900, Michael Paquier wrote:\n> On Wed, Dec 01, 2021 at 11:17:34PM -0600, Justin Pryzby wrote:\n> > I find it easier to read \"wait before authentication ...\" than \"wait ... before\n> > authentication\".\n> \n> I have a hard time seeing a strong difference here. At the end, I\n> have used what you suggested, adjusted the rest based on your set of\n> comments, and applied the patch.\n\nThanks. One more item. The check_guc script currently outputs 68 false\npositives - even though it includes a list of 20 exceptions. This is not\nuseful.\n\n$ (cd ./src/backend/utils/misc/; ./check_guc) |wc -l\n68\n\nWith the attached:\n\n$ (cd ./src/backend/utils/misc/; ./check_guc) \nconfig_file seems to be missing from postgresql.conf.sample\n\nThat has a defacto exception for the \"include\" directive, which seems\nreasonable.\n\nThis requires GNU awk. I'm not sure if that's a limitation of any\nsignificance.\n\n-- \nJustin", "msg_date": "Sun, 5 Dec 2021 23:38:05 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Sun, Dec 05, 2021 at 11:38:05PM -0600, Justin Pryzby wrote:\n> Thanks. One more item. The check_guc script currently outputs 68 false\n> positives - even though it includes a list of 20 exceptions. This is not\n> useful.\n\nIndeed. Hmm. 
This script does a couple of things:\n1) Check the format of the options defined in the various lists of\nguc.c, which is something people format well, and pgindent also does \na part of this job.\n2) Check that options in the hardcoded list of GUCs in\nINTENTIONALLY_NOT_INCLUDED are not included in\npostgresql.conf.sample\n3) Check that nothing considered as a parameter in\npostgresql.conf.sample is listed in guc.c.\n\nYour patch removes 1) and 2), but keeps 3) to check for dead\nparameter references in postgresql.conf.sample.\n\nIs check_guc actually run on a periodic basis by somebody? Based on\nthe amount of false positives that has accumulated over the years, and\nwhat `git grep` can already do for 3), it seems to me that we have\nmore arguments in favor of just removing it entirely.\n--\nMichael", "msg_date": "Mon, 6 Dec 2021 15:58:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Mon, Dec 06, 2021 at 03:58:39PM +0900, Michael Paquier wrote:\n> On Sun, Dec 05, 2021 at 11:38:05PM -0600, Justin Pryzby wrote:\n> > Thanks. One more item. The check_guc script currently outputs 68 false\n> > positives - even though it includes a list of 20 exceptions. This is not\n> > useful.\n> \n> Indeed. Hmm. 
This script does a couple of things:\n> 1) Check the format of the options defined in the various lists of\n> guc.c, which is something people format well, and pgindent also does \n> a part of this job.\n> 2) Check that options in the hardcoded list of GUCs in\n> INTENTIONALLY_NOT_INCLUDED are not included in\n> postgresql.conf.sample\n> 3) Check that nothing considered as a parameter in\n> postgresql.conf.sample is listed in guc.c.\n> \n> Your patch removes 1) and 2), but keeps 3) to check for dead\n> parameter references in postgresql.conf.sample.\n\nThe script checks that guc.c and sample config are consistent.\n\nI think your understanding of INTENTIONALLY_NOT_INCLUDED is not right.\nThat's a list of stuff it \"avoids reporting\" as a suspected error, not an\nadditional list of stuff to check. INTENTIONALLY_NOT_INCLUDED is a list of\nstuff like NOT_IN_SAMPLE, which is better done by parsing /NOT_IN_SAMPLE/.\n\n> Is check_guc actually run on a periodic basis by somebody? Based on\n> the amount of false positives that has accumulated over the years, and\n> what `git grep` can already do for 3), it seems to me that we have\n> more arguments in favor of just removing it entirely.\n\nI saw that Tom updated it within the last 12 months, which I took to mean that\nit was still being maintained. But I'm okay with removing it.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 6 Dec 2021 07:36:55 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Mon, Dec 06, 2021 at 07:36:55AM -0600, Justin Pryzby wrote:\n> The script checks that guc.c and sample config are consistent.\n> \n> I think your understanding of INTENTIONALLY_NOT_INCLUDED is not right.\n> That's a list of stuff it \"avoids reporting\" as a suspected error, not an\n> additional list of stuff to check. INTENTIONALLY_NOT_INCLUDED is a list of\n> stuff like NOT_IN_SAMPLE, which is better done by parsing /NOT_IN_SAMPLE/.\n\nIndeed. 
I got that wrong, thanks for clarifying.\n\n> I saw that Tom updated it within the last 12 months, which I took to mean that\n> it was still being maintained. But I'm okay with removing it.\n\nYes, I saw that as of bf8a662. With 42 incorrect reports, I still see\nmore evidence in favor of removing it. Before doing anything, let's wait for\nand gather some opinions. I am adding Bruce (as the original author)\nand Tom in CC as they are the ones who have updated this script the\nmost in the last ~15 years.\n--\nMichael", "msg_date": "Wed, 8 Dec 2021 15:27:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On 08.12.21 07:27, Michael Paquier wrote:\n>> I saw that Tom updated it within the last 12 months, which I took to mean that\n>> it was still being maintained. But I'm okay with removing it.\n> Yes, I saw that as of bf8a662. With 42 incorrect reports, I still see\n> more evidence in favor of removing it. Before doing anything, let's wait for\n> and gather some opinions. I am adding Bruce (as the original author)\n> and Tom in CC as they are the ones who have updated this script the\n> most in the last ~15 years.\n\nI wasn't really aware of this script either. But I think it's a good \nidea to have it. But only if it's run automatically as part of a test \nsuite run.\n\n\n", "msg_date": "Wed, 8 Dec 2021 13:23:51 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Wed, Dec 08, 2021 at 01:23:51PM +0100, Peter Eisentraut wrote:\n> I wasn't really aware of this script either. But I think it's a good idea\n> to have it. But only if it's run automatically as part of a test suite run.\n\nOkay. If we do that, I am wondering whether it would be better to\nrewrite this script in perl then, so as there is no need to worry\nabout the compatibility of grep. 
And also, it would make sense to\nreturn a non-zero exit code if an incompatibility is found for the\nautomation part.\n--\nMichael", "msg_date": "Thu, 9 Dec 2021 17:17:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Thu, Dec 09, 2021 at 05:17:54PM +0900, Michael Paquier wrote:\n> On Wed, Dec 08, 2021 at 01:23:51PM +0100, Peter Eisentraut wrote:\n> > I wasn't really aware of this script either. But I think it's a good idea\n> > to have it. But only if it's run automatically as part of a test suite run.\n> \n> Okay. If we do that, I am wondering whether it would be better to\n> rewrite this script in perl then, so as there is no need to worry\n> about the compatibility of grep. And also, it would make sense to\n> return a non-zero exit code if an incompatibility is found for the\n> automation part.\n\nOne option is to expose the GUC flags in pg_settings, so this can all be done\nin SQL regression tests.\n\nMaybe the flags should be text strings, so it's a nicer user-facing interface.\nBut then the field would be pretty wide, even though we're only adding it for\nregression tests. The only other alternative I can think of is to make a\nsql-callable function like pg_get_guc_flags(text guc).", "msg_date": "Thu, 9 Dec 2021 09:53:23 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Thu, Dec 09, 2021 at 09:53:23AM -0600, Justin Pryzby wrote:\n> On Thu, Dec 09, 2021 at 05:17:54PM +0900, Michael Paquier wrote:\n> > On Wed, Dec 08, 2021 at 01:23:51PM +0100, Peter Eisentraut wrote:\n> > > I wasn't really aware of this script either. But I think it's a good idea\n> > > to have it. But only if it's run automatically as part of a test suite run.\n> > \n> > Okay. 
If we do that, I am wondering whether it would be better to\n> > rewrite this script in perl then, so as there is no need to worry\n> > about the compatibility of grep. And also, it would make sense to\n> > return a non-zero exit code if an incompatibility is found for the\n> > automation part.\n> \n> One option is to expose the GUC flags in pg_settings, so this can all be done\n> in SQL regression tests.\n> \n> Maybe the flags should be text strings, so it's a nicer user-facing interface.\n> But then the field would be pretty wide, even though we're only adding it for\n> regression tests. The only other alternative I can think of is to make a\n> sql-callable function like pg_get_guc_flags(text guc).\n\nFixed regression tests caused by other patches.", "msg_date": "Thu, 16 Dec 2021 15:06:51 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Thu, Dec 09, 2021 at 09:53:23AM -0600, Justin Pryzby wrote:\n> On Thu, Dec 09, 2021 at 05:17:54PM +0900, Michael Paquier wrote:\n> > On Wed, Dec 08, 2021 at 01:23:51PM +0100, Peter Eisentraut wrote:\n> > > I wasn't really aware of this script either. But I think it's a good idea\n> > > to have it. But only if it's run automatically as part of a test suite run.\n> > \n> > Okay. If we do that, I am wondering whether it would be better to\n> > rewrite this script in perl then, so as there is no need to worry\n> > about the compatibility of grep. And also, it would make sense to\n> > return a non-zero exit code if an incompatibility is found for the\n> > automation part.\n> \n> One option is to expose the GUC flags in pg_settings, so this can all be done\n> in SQL regression tests.\n> \n> Maybe the flags should be text strings, so it's a nicer user-facing interface.\n> But then the field would be pretty wide, even though we're only adding it for\n> regression tests. 
The only other alternative I can think of is to make a\n> sql-callable function like pg_get_guc_flags(text guc).\n\nRebased on cab5b9ab2c066ba904f13de2681872dcda31e207.\n\nAnd added 0003, which instead exposes the flags as a function, to\navoid changing pg_settings and exposing internally-defined integer flags in\nthat somewhat prominent view.\n\n-- \nJustin", "msg_date": "Tue, 28 Dec 2021 20:32:40 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "At Tue, 28 Dec 2021 20:32:40 -0600, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Thu, Dec 09, 2021 at 09:53:23AM -0600, Justin Pryzby wrote:\n> > On Thu, Dec 09, 2021 at 05:17:54PM +0900, Michael Paquier wrote:\n> > One option is to expose the GUC flags in pg_settings, so this can all be done\n> > in SQL regression tests.\n> > \n> > Maybe the flags should be text strings, so it's a nicer user-facing interface.\n> > But then the field would be pretty wide, even though we're only adding it for\n> > regression tests. The only other alternative I can think of is to make a\n> > sql-callable function like pg_get_guc_flags(text guc).\n> \n> Rebased on cab5b9ab2c066ba904f13de2681872dcda31e207.\n> \n> And added 0003, which instead exposes the flags as a function, to\n> avoid changing pg_settings and exposing internally-defined integer flags in\n> that somewhat prominent view.\n\nJust an idea but couldn't we use flags in a series of one-letter flag\nrepresentations? 
It is more user-friendly than integers but shorter\nthan full-text representation.\n\n+SELECT name, flags FROM pg_settings;\n name | flags \n------------------------+--------\n application_name | ARsec\n transaction_deferrable | Arsec\n transaction_isolation | Arsec\n transaction_read_only | Arsec\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 05 Jan 2022 11:47:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Wed, Jan 05, 2022 at 11:47:57AM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 28 Dec 2021 20:32:40 -0600, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> > On Thu, Dec 09, 2021 at 09:53:23AM -0600, Justin Pryzby wrote:\n> > > On Thu, Dec 09, 2021 at 05:17:54PM +0900, Michael Paquier wrote:\n> > > One option is to expose the GUC flags in pg_settings, so this can all be done\n> > > in SQL regression tests.\n> > > \n> > > Maybe the flags should be text strings, so it's a nicer user-facing interface.\n> > > But then the field would be pretty wide, even though we're only adding it for\n> > > regression tests. The only other alternative I can think of is to make a\n> > > sql-callable function like pg_get_guc_flags(text guc).\n> > \n> > Rebased on cab5b9ab2c066ba904f13de2681872dcda31e207.\n> > \n> > And added 0003, which changes to instead exposes the flags as a function, to\n> > avoid changing pg_settings and exposing internally-defined integer flags in\n> > that somewhat prominent view.\n> \n> Just an idea but couldn't we use flags in a series of one-letter flag\n> representations? 
It is more user-friendly than integers but shorter\n> than full-text representation.\n> \n> +SELECT name, flags FROM pg_settings;\n> name | flags \n> ------------------------+--------\n> application_name | ARsec\n> transaction_deferrable | Arsec\n> transaction_isolation | Arsec\n> transaction_read_only | Arsec\n\nIt's a good idea.\n\nI suppose you intend that \"A\" means it's enabled and \"a\" means it's disabled ?\n\n\tA => show all\n\tR => reset all\n\tS => not in sample\n\tE => explain\n\tC => computed\n\nWhich is enough to support the tests that I came up with:\n\n+ (flags&4) != 0 AS no_show_all,\n+ (flags&8) != 0 AS no_reset_all,\n+ (flags&32) != 0 AS not_in_sample,\n+ (flags&1048576) != 0 AS guc_explain,\n+ (flags&2097152) != 0 AS guc_computed\n\nHowever, I think if we add a field to pg_settings, it would be in a\nseparate patch, expected to be independently useful.\n\n1) expose GUC flags to pg_settings;\n2) rewrite check_guc as a sql regression test;\n\nIn that case, *all* the flags should be exposed. There's currently 20, which\nmeans it may not work well after all - it's already too long, and could get\nlonger, and/or overflow the alphabet...\n\nI think pg_get_guc_flags() may be best, but I'm interested to hear other\nopinions.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 4 Jan 2022 21:06:48 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Tue, Jan 04, 2022 at 09:06:48PM -0600, Justin Pryzby wrote:\n> I think pg_get_guc_flags() may be best, but I'm interested to hear other\n> opinions.\n\nMy opinion on this matter is rather close to what you have here with\nhandling things through one extra attribute. But I don't see the\npoint of using an extra function where users would need to do a manual\nmapping of the flag bits back to a text representation of them. 
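(As an editorial illustration of that manual mapping, here is a hedged Python sketch that decodes an integer flags value using the bit positions from the test quoted above — flags&4, flags&8, flags&32, flags&1048576, flags&2097152. The bit values mirror guc.h's GUC_* defines at the time and are an assumption here, as is the helper name.)

```python
# Hypothetical sketch: decode an integer GUC "flags" value into flag
# names, using the bit positions from the proposed regression test above.
# These values are assumed to mirror guc.h's GUC_* defines; they are not
# a stable interface.
FLAG_BITS = {
    0x000004: "NO_SHOW_ALL",
    0x000008: "NO_RESET_ALL",
    0x000020: "NOT_IN_SAMPLE",
    0x100000: "EXPLAIN",           # 1048576
    0x200000: "RUNTIME_COMPUTED",  # 2097152
}

def decode_flags(flags):
    """Return the names of the known flag bits set in 'flags'."""
    return [name for bit, name in sorted(FLAG_BITS.items()) if flags & bit]

print(decode_flags(4 | 32))    # ['NO_SHOW_ALL', 'NOT_IN_SAMPLE']
print(decode_flags(2097152))   # ['RUNTIME_COMPUTED']
```

This is exactly the kind of client-side decoding a text[] column (or one boolean per flag) would avoid.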
So\nI would suggest to just add one text[] to pg_show_all_settings, with\nvalues being the bit names themselves, without the prefix \"GUC_\", for\nthe ones we care most about. Sticking with one column for each one\nwould require a catversion bump all the time, which could be\ncumbersome in the long run.\n--\nMichael", "msg_date": "Wed, 5 Jan 2022 14:17:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Wed, Jan 05, 2022 at 02:17:11PM +0900, Michael Paquier wrote:\n> On Tue, Jan 04, 2022 at 09:06:48PM -0600, Justin Pryzby wrote:\n> > I think pg_get_guc_flags() may be best, but I'm interested to hear other\n> > opinions.\n> \n> My opinion on this matter is rather close to what you have here with\n> handling things through one extra attribute. But I don't see the\n> point of using an extra function where users would need to do a manual\n> mapping of the flag bits back to a text representation of them.\n\nIf it were implemented as a function, this would be essentially internal and\nleft undocumented. Only exposed for the purpose of re-implementing check_guc.\n\n> I would suggest to just add one text[] to pg_show_all_settings\n\nGood idea to use the backing function without updating the view.\n\npg_settings is currently defined with \"SELECT *\". Is it fine to enumerate a\nlist of columns instead ?", "msg_date": "Wed, 5 Jan 2022 17:55:17 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Wed, Jan 05, 2022 at 05:55:17PM -0600, Justin Pryzby wrote:\n> pg_settings is currently defined with \"SELECT *\". 
Is it fine to enumerate a\n> list of columns instead ?\n\nI'd like to think that this is a better practice when it comes\ndocumenting the columns, but I don't see an actual need for this extra\ncomplication here.\n\n> +\tinitStringInfo(&ret);\n> +\tappendStringInfoChar(&ret, '{');\n> +\n> +\tif (flags & GUC_NO_SHOW_ALL)\n> +\t\tappendStringInfo(&ret, \"NO_SHOW_ALL,\");\n> +\tif (flags & GUC_NO_RESET_ALL)\n> +\t\tappendStringInfo(&ret, \"NO_RESET_ALL,\");\n> +\tif (flags & GUC_NOT_IN_SAMPLE)\n> +\t\tappendStringInfo(&ret, \"NOT_IN_SAMPLE,\");\n> +\tif (flags & GUC_EXPLAIN)\n> +\t\tappendStringInfo(&ret, \"EXPLAIN,\");\n> +\tif (flags & GUC_RUNTIME_COMPUTED)\n> +\t\tappendStringInfo(&ret, \"RUNTIME_COMPUTED,\");\n> +\n> +\t/* Remove trailing comma, if any */\n> +\tif (ret.len > 1)\n> +\t\tret.data[--ret.len] = '\\0';\n\nThe way of building the text array is incorrect here. See\nheap_tuple_infomask_flags() in pageinspect as an example with all the\nHEAP_* flags. I think that you should allocate an array of Datums,\nuse CStringGetTextDatum() to assign each array element, wrapping the\nwhole with construct_array() to build the final value for the\nparameter tuple.\n--\nMichael", "msg_date": "Thu, 6 Jan 2022 14:19:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "At Tue, 4 Jan 2022 21:06:48 -0600, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Wed, Jan 05, 2022 at 11:47:57AM +0900, Kyotaro Horiguchi wrote:\n> > At Tue, 28 Dec 2021 20:32:40 -0600, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> In that case, *all* the flags should be exposed. 
There's currently 20, which\n> means it may not work well after all - it's already too long, and could get\n> longer, and/or overflow the alphabet...\n\nYeah, if we show all 20 properties, the string is too long, and not\nall properties can have a sensible abbreviation character.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 06 Jan 2022 14:35:54 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Thu, Jan 06, 2022 at 02:19:08PM +0900, Michael Paquier wrote:\n> On Wed, Jan 05, 2022 at 05:55:17PM -0600, Justin Pryzby wrote:\n> > pg_settings is currently defined with \"SELECT *\". Is it fine to enumerate a\n> > list of columns instead ?\n> \n> I'd like to think that this is a better practice when it comes\n> documenting the columns, but I don't see an actual need for this extra\n> complication here.\n\nThe reason is to avoid showing the flags in the pg_settings view, which should\nnot be bloated just so we can retire check_guc.\n\n> > +\tinitStringInfo(&ret);\n> > +\tappendStringInfoChar(&ret, '{');\n> > +\n> > +\tif (flags & GUC_NO_SHOW_ALL)\n> > +\t\tappendStringInfo(&ret, \"NO_SHOW_ALL,\");\n> > +\tif (flags & GUC_NO_RESET_ALL)\n> > +\t\tappendStringInfo(&ret, \"NO_RESET_ALL,\");\n> > +\tif (flags & GUC_NOT_IN_SAMPLE)\n> > +\t\tappendStringInfo(&ret, \"NOT_IN_SAMPLE,\");\n> > +\tif (flags & GUC_EXPLAIN)\n> > +\t\tappendStringInfo(&ret, \"EXPLAIN,\");\n> > +\tif (flags & GUC_RUNTIME_COMPUTED)\n> > +\t\tappendStringInfo(&ret, \"RUNTIME_COMPUTED,\");\n> > +\n> > +\t/* Remove trailing comma, if any */\n> > +\tif (ret.len > 1)\n> > +\t\tret.data[--ret.len] = '\\0';\n> \n> The way of building the text array is incorrect here. 
I think that you should allocate an array of Datums,\n> use CStringGetTextDatum() to assign each array element, wrapping the\n> whole with construct_array() to build the final value for the\n> parameter tuple.\n\nI actually did it that way last night ... however GetConfigOptionByNum() is\nexpecting it to return a text string, not an array.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 5 Jan 2022 23:36:42 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Wed, Jan 05, 2022 at 11:36:41PM -0600, Justin Pryzby wrote:\n> On Thu, Jan 06, 2022 at 02:19:08PM +0900, Michael Paquier wrote:\n> > > +\tinitStringInfo(&ret);\n> > > +\tappendStringInfoChar(&ret, '{');\n> > > +\n> > > +\tif (flags & GUC_NO_SHOW_ALL)\n> > > +\t\tappendStringInfo(&ret, \"NO_SHOW_ALL,\");\n> > > +\tif (flags & GUC_NO_RESET_ALL)\n> > > +\t\tappendStringInfo(&ret, \"NO_RESET_ALL,\");\n> > > +\tif (flags & GUC_NOT_IN_SAMPLE)\n> > > +\t\tappendStringInfo(&ret, \"NOT_IN_SAMPLE,\");\n> > > +\tif (flags & GUC_EXPLAIN)\n> > > +\t\tappendStringInfo(&ret, \"EXPLAIN,\");\n> > > +\tif (flags & GUC_RUNTIME_COMPUTED)\n> > > +\t\tappendStringInfo(&ret, \"RUNTIME_COMPUTED,\");\n> > > +\n> > > +\t/* Remove trailing comma, if any */\n> > > +\tif (ret.len > 1)\n> > > +\t\tret.data[--ret.len] = '\\0';\n> > \n> > The way of building the text array is incorrect here. 
See\n\nI think you'll find that this is how it's done elsewhere in postgres.\nIn the frontend, see appendPQExpBufferChar and appendPGArray and 3e6e86abc.\nOn the backend, see: git grep -F \"'{'\" |grep -w appendStringInfoChar\n\nI updated the patch with a regex to accommodate GUCs without '=', as needed\nsince f47ed79cc8.\n\n-- \nJustin", "msg_date": "Mon, 24 Jan 2022 19:07:29 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Mon, Jan 24, 2022 at 07:07:29PM -0600, Justin Pryzby wrote:\n> I think you'll find that this is how it's done elsewhere in postgres.\n> In the frontend, see appendPQExpBufferChar and appendPGArray and 3e6e86abc.\n> On the backend, see: git grep -F \"'{'\" |grep -w appendStringInfoChar\n\nYeah, I was not careful enough to look after the uses of TEXTARRAYOID,\nand there is one in the same area, as of config_enum_get_options().\nAt least things are consistent this way.\n\n> I updated the patch with a regex to accommodate GUCs without '=', as needed\n> since f47ed79cc8.\n\nOkay. While looking at your proposal, I was thinking that we had\nbetter include the array with the flags by default in pg_settings, and\nnot just pg_show_all_settings().\n\n+SELECT lower(name) FROM pg_settings_flags WHERE NOT not_in_sample EXCEPT\n+SELECT regexp_replace(ln, '^#?([_[:alpha:]]+) (= .*|[^ ]*$)', '\\1') AS guc\n+FROM (SELECT regexp_split_to_table(pg_read_file('postgresql.conf'),\n'\\n') AS ln) conf\n\nTests reading postgresql.conf would break on instances started with a\ncustom config_file provided by a command line, no? You could change\nthe patch to use the value provided by the GUC, instead, but I am not\nconvinced that we need that at all, even if check_guc does so.\n\nRegarding the tests, I am not sure if we need to be this much\nextensive. 
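(As a side note on the quoted test: the behavior of its name-extraction regex can be sketched in Python. [[:alpha:]] is rewritten as [A-Za-z] because Python's re module lacks POSIX character classes, so this is an approximation for illustration, not the test itself.)

```python
import re

# Rough Python rendering of the SQL from the quoted test:
#   regexp_replace(ln, '^#?([_[:alpha:]]+) (= .*|[^ ]*$)', '\1')
# [[:alpha:]] is approximated as [A-Za-z]; this is a sketch, not the test.
GUC_LINE = re.compile(r"^#?([_A-Za-z]+) (= .*|[^ ]*$)")

def guc_name(line):
    """Return the parameter name if the line looks like a (possibly
    commented-out) postgresql.conf setting, else None."""
    m = GUC_LINE.match(line)
    return m.group(1) if m else None

print(guc_name("#shared_buffers = 128MB"))   # shared_buffers
print(guc_name("max_connections = 100"))     # max_connections
print(guc_name("  # an indented comment"))   # None
```

The second alternation branch is what accommodates directives written without an '=' sign, as mentioned above.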
We could take it slow, and I am also wondering if this\ncould not cause some issues with GUCs loaded via\nshared_preload_libraries if we are too picky about the requirements,\nas this could cause installcheck failures.\n\nThe following things have been issues recently, though, and they look\nsensible enough to have checks for: \n- GUC_NOT_IN_SAMPLE with developer options.\n- Query-tuning parameters with GUC_EXPLAIN, and we'd better add some\ncomments in the test to explain why there are exceptions like\ndefault_statistics_target.\n- preset parameters marked as runtime-computed.\n- NO_SHOW_ALL and NOT_IN_SAMPLE.\n\nThanks,\n--\nMichael", "msg_date": "Tue, 25 Jan 2022 16:25:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On 25.01.22 02:07, Justin Pryzby wrote:\n> +CREATE TABLE pg_settings_flags AS SELECT name, category,\n> +\t'NO_SHOW_ALL'\t=ANY(flags) AS no_show_all,\n> +\t'NO_RESET_ALL'\t=ANY(flags) AS no_reset_all,\n> +\t'NOT_IN_SAMPLE'\t=ANY(flags) AS not_in_sample,\n> +\t'EXPLAIN'\t=ANY(flags) AS guc_explain,\n> +\t'COMPUTED'\t=ANY(flags) AS guc_computed\n> +\tFROM pg_show_all_settings();\n\nDoes this stuff have any value for users? I'm worried we are exposing a \nbunch of stuff that is really just for internal purposes. Like, what \nvalue does showing \"not_in_sample\" have? On the other hand, \n\"guc_explain\" might be genuinely useful, since that is part of a \nuser-facing feature. 
(I don't like the \"guc_*\" naming though.)\n\nYour patch doesn't contain a documentation change, so I don't know how \nand to what extent this is supposed to be presented to users.\n\n\n", "msg_date": "Tue, 25 Jan 2022 11:47:14 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Tue, Jan 25, 2022 at 11:47:14AM +0100, Peter Eisentraut wrote:\n> On 25.01.22 02:07, Justin Pryzby wrote:\n> > +CREATE TABLE pg_settings_flags AS SELECT name, category,\n> > +\t'NO_SHOW_ALL'\t=ANY(flags) AS no_show_all,\n> > +\t'NO_RESET_ALL'\t=ANY(flags) AS no_reset_all,\n> > +\t'NOT_IN_SAMPLE'\t=ANY(flags) AS not_in_sample,\n> > +\t'EXPLAIN'\t=ANY(flags) AS guc_explain,\n> > +\t'COMPUTED'\t=ANY(flags) AS guc_computed\n> > +\tFROM pg_show_all_settings();\n> \n> Does this stuff have any value for users? I'm worried we are exposing a\n> bunch of stuff that is really just for internal purposes. Like, what value\n> does showing \"not_in_sample\" have? On the other hand, \"guc_explain\" might\n> be genuinely useful, since that is part of a user-facing feature. (I don't\n> like the \"guc_*\" naming though.)\n> \n> Your patch doesn't contain a documentation change, so I don't know how and\n> to what extent this is supposed to be presented to users.\n\nI want to avoid putting this in pg_settings.\n\nThe two options discussed so far are:\n - to add a function to return the flags;\n - to add the flags to pg_show_all_settings(), but not show it in pg_settings view;\n\nI interpreted Michael's suggestion as adding it to pg_get_all_settings(), but\n*not* including it in the pg_settings view. 
Now it seems like I misunderstood,\nand Michael wants to add it to the view.\n\nBut, even if we only handle the 5 flags we have an immediate use for, it makes\nthe user-facing view too \"wide\", just to accommodate this internal use.\n\nIf it were in the pg_settings view, I think it ought to have *all* the flags\n(not just the flags that help us to retire ./check_guc). That's much too much.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 25 Jan 2022 12:07:51 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Tue, Jan 25, 2022 at 12:07:51PM -0600, Justin Pryzby wrote:\n> On Tue, Jan 25, 2022 at 11:47:14AM +0100, Peter Eisentraut wrote:\n>> Does this stuff have any value for users? I'm worried we are exposing a\n>> bunch of stuff that is really just for internal purposes. Like, what value\n>> does showing \"not_in_sample\" have? On the other hand, \"guc_explain\" might\n>> be genuinely useful, since that is part of a user-facing feature. (I don't\n>> like the \"guc_*\" naming though.)\n\nEXPLAIN is useful to know which parameter could be part of an explain\nquery, as that's not an information provided now, even if the category\nprovides a hint. COMPUTED is also useful for the purpose of postgres\n-C in my opinion. I am reserved about the rest in terms of user\nexperience, but the other ones are useful to automate the checks\ncheck_guc was doing, which is still the main goal of this patch if we\nremove this script. And experience has proved lately that people\nforget a lot to mark GUCs correctly.\n\n> I interpretted Michael's suggested as adding it to pg_get_all_settings(), but\n> *not* including it in the pg_settings view. Now it seems like I misunderstood,\n> and Michael wants to add it to the view.\n\nYeah, I meant to add that in the view, as it is already wide. 
I'd be\nfine with a separate SQL function at the end, but putting that in\npg_show_all_settings() without considering pg_settings would not be\nconsistent. There is the argument that one could miss an update of\nsystem_views.sql if adding more data to pg_show_all_settings(), even\nif that's not really going to happen.\n\n> But, even if we only handle the 5 flags we have an immediate use for, it makes\n> the user-facing view too \"wide\", just to accommodate this internal use.\n\nshort_desc and extra_desc count for most of the bloat already, so that\nwould not change much, but I am fine to discard my point to not make\nthings worse.\n--\nMichael", "msg_date": "Wed, 26 Jan 2022 09:54:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Wed, Jan 26, 2022 at 09:54:43AM +0900, Michael Paquier wrote:\n> On Tue, Jan 25, 2022 at 12:07:51PM -0600, Justin Pryzby wrote:\n> > On Tue, Jan 25, 2022 at 11:47:14AM +0100, Peter Eisentraut wrote:\n> >> Does this stuff have any value for users? I'm worried we are exposing a\n> >> bunch of stuff that is really just for internal purposes. Like, what value\n> >> does showing \"not_in_sample\" have? On the other hand, \"guc_explain\" might\n> >> be genuinely useful, since that is part of a user-facing feature. (I don't\n> >> like the \"guc_*\" naming though.)\n> \n> EXPLAIN is useful to know which parameter could be part of an explain\n> query, as that's not an information provided now, even if the category\n> provides a hint. COMPUTED is also useful for the purpose of postgres\n> -C in my opinion.\n\nIt seems like an arbitrary and short-sighted policy to expose a handful of\nflags in the view for the purpose of retiring ./check_guc, but not expose other\nflags, because we thought we knew that no user could ever want them.\n\nWe should either expose all the flags, or should put them into an undocumented\nfunction. 
Otherwise, how would we document the flags argument ? \"Shows some\nof the flags\" ? An undocumented function avoids this issue.\n\nShould I update the patch to put the function back ?\nShould I also make the function expose all of the flags ?\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 25 Jan 2022 21:44:26 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Tue, Jan 25, 2022 at 09:44:26PM -0600, Justin Pryzby wrote:\n> It seems like an arbitrary and short-sighted policy to expose a handful of\n> flags in the view for the purpose of retiring ./check_guc, but not expose other\n> flags, because we thought we knew that no user could ever want them.\n> \n> We should either expose all the flags, or should put them into an undocumented\n> function. Otherwise, how would we document the flags argument ? \"Shows some\n> of the flags\" ? An undocumented function avoids this issue.\n\nMy vote would be to have a documented function, with a minimal set of\nthe flags exposed and documented, with the option to expand that in\nthe future. COMPUTED and EXPLAIN are useful, and allow some of the\nautomated tests to happen. NOT_IN_SAMPLE and GUC_NO_SHOW_ALL are less \nuseful for the user, and are more developer oriented, but are useful\nfor the tests. 
So having these four seems like a good first cut.\n--\nMichael", "msg_date": "Wed, 26 Jan 2022 15:29:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Wed, Jan 26, 2022 at 03:29:29PM +0900, Michael Paquier wrote:\n> On Tue, Jan 25, 2022 at 09:44:26PM -0600, Justin Pryzby wrote:\n> > It seems like an arbitrary and short-sighted policy to expose a handful of\n> > flags in the view for the purpose of retiring ./check_guc, but not expose other\n> > flags, because we thought we knew that no user could ever want them.\n> > \n> > We should either expose all the flags, or should put them into an undocumented\n> > function. Otherwise, how would we document the flags argument ? \"Shows some\n> > of the flags\" ? An undocumented function avoids this issue.\n> \n> My vote would be to have a documented function, with a minimal set of\n> the flags exposed and documented, with the option to expand that in\n> the future. COMPUTED and EXPLAIN are useful, and allow some of the\n> automated tests to happen. NOT_IN_SAMPLE and GUC_NO_SHOW_ALL are less \n> useful for the user, and are more developer oriented, but are useful\n> for the tests. So having these four seems like a good first cut.\n\nI implemented that (But my own preference would still be for an *undocumented*\nfunction which returns whatever flags we find to be useful to include. Or\nalternately, a documented function which exposes every flag).\n\n> +SELECT lower(name) FROM pg_settings_flags WHERE NOT not_in_sample EXCEPT\n> +SELECT regexp_replace(ln, '^#?([_[:alpha:]]+) (= .*|[^ ]*$)', '\\1') AS guc\n> +FROM (SELECT regexp_split_to_table(pg_read_file('postgresql.conf'),\n> '\\n') AS ln) conf\n> \n> Tests reading postgresql.conf would break on instances started with a\n> custom config_file provided by a command line, no?\n\nMaybe you misunderstood - I'm not reading the file specified by\ncurrent_setting('config_file'). 
Rather, I'm reading\ntmp_check/data/postgresql.conf, which is copied from the sample conf.\nDo you see an issue with that ?\n\nThe regression tests are only intended to be run from a postgres source dir, and if\nsomeone runs them from somewhere else, and they \"fail\", I think that's because\nthey violated their assumption, not because of a problem with the test.\n\nI wondered if it should chomp off anything added by pg_regress --temp-regress.\nHowever that's either going to be a valid guc (or else it would fail some other\ntest). Or an extension's guc (which this isn't testing), which has a dot, and\nwhich this regex doesn't match, so doesn't cause false positives.\n\n-- \nJustin", "msg_date": "Thu, 27 Jan 2022 22:36:21 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Thu, Jan 27, 2022 at 10:36:21PM -0600, Justin Pryzby wrote:\n> Maybe you misunderstood - I'm not reading the file specified by\n> current_setting('config_file'). Rather, I'm reading\n> tmp_check/data/postgresql.conf, which is copied from the sample conf.\n> Do you see an issue with that ?\n\nYes, as of:\nmv $PGDATA/postgresql.conf $PGDATA/popo.conf\npg_ctl start -D $PGDATA -o '-c config_file=popo.conf'\nmake installcheck\n\nI have not checked, but I am pretty sure that a couple of\ndistributions out there would pass down a custom path for the\nconfiguration file, while removing postgresql.conf from the data\ndirectory to avoid any confusion if one finds out that some\nparameters are defined but not loaded. Your patch fails on that.\n\n> The regression tests are only intended to be run from a postgres source dir, and if\n> someone runs them from somewhere else, and they \"fail\", I think that's because\n> they violated their assumption, not because of a problem with the test.\n\nThe tests are able to work out on HEAD, I'd rather not break something\nthat has worked this way for years. 
Two other aspects that we may\nwant to worry about are include_dir and include if we were to add\ntests for that, perhaps. This last part is not really a strong\nrequirement IMO, though. \n\n> I wondered if it should chomp off anything added by pg_regress --temp-regress.\n> However that's either going to be a valid guc (or else it would fail some other\n> test). Or an extension's guc (which this isn't testing), which has a dot, and\n> which this regex doesn't match, so doesn't cause false positives.\n\nI am not sure about those parts, being reserved about the parts that\ninvolve the format of postgresql.conf or any other configuration\nparts, but we could tackle that after, if necessary.\n\nFor now, I have done a review of the patch, tweaking the docs, the\ncode and the test to take care of all the inconsistencies I could\nfind. This looks like a good first cut to be able to remove check_guc\n(the attached removes it, but I think that we'd better treat that\nindependently of the actual feature proposed, for clarity).\n--\nMichael", "msg_date": "Sat, 29 Jan 2022 15:38:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Sat, Jan 29, 2022 at 03:38:53PM +0900, Michael Paquier wrote:\n> +-- Three exceptions as of transaction_*\n> +SELECT name FROM pg_settings_flags\n> + WHERE NOT no_show_all AND no_reset_all\n> + ORDER BY 1;\n> + name \n> +------------------------\n> + transaction_deferrable\n> + transaction_isolation\n> + transaction_read_only\n> +(3 rows)\n\nI think \"as of\" is not the right phrase here.\nMaybe say: Exceptions are transaction_*\n\n> --- a/doc/src/sgml/func.sgml\n> +++ b/doc/src/sgml/func.sgml\n> @@ -23596,6 +23596,45 @@ SELECT pg_type_is_visible('myschema.widget'::regtype);\n> + <para>\n> + Returns an array of the flags associated with the given GUC, or\n> + <literal>NULL</literal> if the does not exist. 
The result is\n\nI guess it should say \"if the GUC does not exist\".\n\n> + an empty array if the GUC exists but there are no flags to show,\n> + as supported by the list below.\n\nI'd say \"...but none of the GUC's flags are exposed by this function.\"\n\n> + The following flags are exposed (the most meaningful ones are\n> + included):\n\n\"The most meaningful ones are included\" doesn't seem to add anything.\nMaybe it'd be useful to say \"(Only the most useful flags are exposed)\"\n\n> + <literal>EXPLAIN</literal>, parameters included in\n> + <command>EXPLAIN</command> commands.\n> + </member>\n> + <member>\n\nI think the description is wrong, or just copied from the others.\nEXPLAIN is for GUCs which are shown in EXPLAIN(SETTINGS).\n\n|EXPLAIN, parameters included in EXPLAIN commands.\n|NO_SHOW_ALL, parameters excluded from SHOW ALL commands.\n|NO_RESET_ALL, parameters excluded from RESET ALL commands.\n|NOT_IN_SAMPLE, parameters not included in postgresql.conf by default.\n|RUNTIME_COMPUTED, runtime-computed parameters. 
\n\nInstead of a comma, these should use a colon, or something else?\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 29 Jan 2022 18:18:50 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Sat, Jan 29, 2022 at 06:18:50PM -0600, Justin Pryzby wrote:\n> \"The most meaningful ones are included\" doesn't seem to add anything.\n> Maybe it'd be useful to say \"(Only the most useful flags are exposed)\"\n\nYes, I have used something like that.\n\n> I think the description is wrong, or just copied from the others.\n> EXPLAIN is for GUCs which are shown in EXPLAIN(SETTINGS).\n\nAdded some details here.\n\n> |EXPLAIN, parameters included in EXPLAIN commands.\n> |NO_SHOW_ALL, parameters excluded from SHOW ALL commands.\n> |NO_RESET_ALL, parameters excluded from RESET ALL commands.\n> |NOT_IN_SAMPLE, parameters not included in postgresql.conf by default.\n> |RUNTIME_COMPUTED, runtime-computed parameters. \n> \n> Instead of a comma, these should use a colon, or something else?\n\nAnd switched to a colon here.\n\nWith all those doc fixes, applied after an extra round of review. So\nthis makes us rather covered with the checks on the flags.\n\nNow, what do we do with the rest of check_guc that involve a direct\nlookup at what's on disk. We have the following:\n1) Check the format of the option lists in guc.c.\n2) Check the format of postgresql.conf.sample:\n-- Valid options preceded by a '#' character.\n-- Valid options followed by ' =', with at least one space before the\nequal sign.\n3) Check that options not marked as NOT_IN_SAMPLE are in the sample\nfile.\n\nI have never seen 1) as a problem, and pgindent takes care of that at\nsome degree. 2) is also mostly cosmetic, and committers are usually\ncareful when adding a new GUC. 
3) would be the most interesting\npiece, and would cover most cases if we consider that a default\ninstallation just copies postgresql.conf.sample over, as proposed\nupthread in 0002.\n\nNow, 3) has also the problem that it would fail installcheck as one\ncan freely add a developer option in the configuration. We could\nsolve that by adding a check in a TAP test, by using pg_config\n--sharedir to find where the sample file is located. I wonder if this\nwould be a problem for some distributions, though, so adding such a\ndependency feels a bit scary even if it would mean that initdb is\npatched.\n\nAs a whole, I'd like to think that we would not lose much if check_guc\nis removed.\n--\nMichael", "msg_date": "Mon, 31 Jan 2022 14:17:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Mon, Jan 31, 2022 at 02:17:41PM +0900, Michael Paquier wrote:\n> With all those doc fixes, applied after an extra round of review. So\n> this makes us rather covered with the checks on the flags.\n\nThanks\n\n> Now, what do we do with the rest of check_guc that involve a direct\n> lookup at what's on disk. We have the following:\n> 1) Check the format of the option lists in guc.c.\n> 2) Check the format of postgresql.conf.sample:\n> -- Valid options preceded by a '#' character.\n> -- Valid options followed by ' =', with at least one space before the\n> equal sign.\n> 3) Check that options not marked as NOT_IN_SAMPLE are in the sample\n> file.\n> \n> I have never seen 1) as a problem, and pgindent takes care of that at\n> some degree. 2) is also mostly cosmetic, and committers are usually\n> careful when adding a new GUC. 
3) would be the most interesting\n> piece, and would cover most cases if we consider that a default\n> installation just copies postgresql.conf.sample over, as proposed\n> upthread in 0002.\n> \n> Now, 3) has also the problem that it would fail installcheck as one\n> can freely add a developer option in the configuration. We could\n\nI'm not clear on what things are required/prohibited to allow/expect\n\"installcheck\" to pass. It's possible that postgresql.conf doesn't even exist\nin the data dir, right ?\n\nIt's okay with me if the config_file-reading stuff isn't re-implemented.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 31 Jan 2022 16:56:45 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Mon, Jan 31, 2022 at 04:56:45PM -0600, Justin Pryzby wrote:\n> I'm not clear on what things are required/prohibited to allow/expect\n> \"installcheck\" to pass. It's possible that postgresql.conf doesn't even exist\n> in the data dir, right ?\n\nThere are no written instructions AFAIK, but I have as personal rule\nto not break the tests in configurations where they worked\npreviously.\n\n> It's okay with me if the config_file-reading stuff isn't re-implemented.\n\nActually, I am thinking that we should implement it before retiring\ncompletely check_guc, but not in the fashion you are suggesting. I\nwould be tempted to add something in the TAP tests as of\nsrc/test/misc/, where we initialize an instance to get the information\nabout all the GUCs from SQL, and map that to the sample file located\nat pg_config --sharedir. 
I actually have in my patch set for\npg_upgrade's TAP a perl routine that could be used for this purpose,\nas of the following in Cluster.pm:\n\n+=item $node->config_data($option)\n+\n+Grab some data from pg_config, with $option being the command switch\n+used.\n+\n+=cut\n+\n+sub config_data\n+{\n+ my ($self, $option) = @_;\n+ local %ENV = $self->_get_env();\n+\n+ my ($stdout, $stderr);\n+ my $result =\n+ IPC::Run::run [ $self->installed_command('pg_config'), $option\n],\n+ '>', \\$stdout, '2>', \\$stderr\n+ or die \"could not execute pg_config\";\n+ chomp($stdout);\n+ $stdout =~ s/\\r$//;\n+\n+ return $stdout;\n+}\n\nWhat do you think? (I was thinking about applying that separately\nanyway, to lower the load of the pg_upgrade patch a bit.)\n--\nMichael", "msg_date": "Sun, 6 Feb 2022 14:09:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Sun, Feb 06, 2022 at 02:09:45PM +0900, Michael Paquier wrote:\n> Actually, I am thinking that we should implement it before retiring\n> completely check_guc, but not in the fashion you are suggesting. I\n> would be tempted to add something in the TAP tests as of\n> src/test/misc/, where we initialize an instance to get the information\n> about all the GUCs from SQL, and map that to the sample file located\n> at pg_config --sharedir. I actually have in my patch set for\n> pg_upgrade's TAP a perl routine that could be used for this purpose,\n> as of the following in Cluster.pm:\n\nI have been poking at that, and this is finishing to be pretty\nelegant as of the attached. 
With this in place, we are able to\ncross-check GUCs marked as NOT_IN_SAMPLE (or not) with the contents of \npostgresql.conf.sample, so as check_guc could get retired without us\nlosing much.\n\nI am planning to apply the Cluster.pm part of the patch separately,\nfor clarity, as I want this routine in place for some other patch.\n--\nMichael", "msg_date": "Mon, 7 Feb 2022 11:40:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "Thanks for working on it.\n\nYour test is checking that stuff in sample.conf is actually a GUC and not\nmarked NOT_IN_SAMPLE. But those are both unlikely mistakes to make.\n\nThe important/interesting test is the opposite: that all GUCs are present in\nthe sample file. It's a lot easier for someone to forget to add a GUC to\nsample.conf than it is for someone to accidentally add something that isn't a\nGUC.\n\nI'd first parse the GUC-like lines in the file, making a list of gucs_in_file\nand then compare the two lists.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 6 Feb 2022 21:04:14 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Sun, Feb 06, 2022 at 09:04:14PM -0600, Justin Pryzby wrote:\n> Your test is checking that stuff in sample.conf is actually a GUC and not\n> marked NOT_IN_SAMPLE. But those are both unlikely mistakes to make.\n\nYeah, you are right. Still, I don't see any reason to not include both.\n\n> I'd first parse the GUC-like lines in the file, making a list of gucs_in_file\n> and then compare the two lists.\n\nThis is a good idea, and makes the tests faster because there is no\nneed to test each GUC separately. While testing a bit more, I got\nrecalled by the fact that config_file is not marked as NOT_IN_SAMPLE\nand not in postgresql.conf.sample, so the new case you suggested was\nfailing.\n\nWhat do you think about the updated version attached? 
I have applied\nthe addition of config_data() separately.\n--\nMichael", "msg_date": "Tue, 8 Feb 2022 10:44:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Tue, Feb 08, 2022 at 10:44:07AM +0900, Michael Paquier wrote:\n> What do you think about the updated version attached? I have applied\n> the addition of config_data() separately.\n\nLooks fine\n\n> +\t# Check if this line matches a GUC parameter.\n> +\tif ($line =~ m/^#?([_[:alpha:]]+) (= .*|[^ ]*$)/)\n\nI think this is the regex I wrote to handle either \"name = value\" or \"name\nvalue\", which was needed between f47ed79cc..4d7c3e344. See skip_equals.\n\nIt's fine the way it is, but could also remove the 2nd half of the alternation\n(|), since GUCs shouldn't be added to sample.conf without '='.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 7 Feb 2022 21:07:28 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Mon, Feb 07, 2022 at 09:07:28PM -0600, Justin Pryzby wrote:\n> I think this is the regex I wrote to handle either \"name = value\" or \"name\n> value\", which was needed between f47ed79cc..4d7c3e344. See skip_equals.\n\nYes, I took it from there, noticing that it was working just fine for\nthis purpose.\n\n> It's fine the way it is, but could also remove the 2nd half of the alternation\n> (|), since GUCs shouldn't be added to sample.conf without '='.\n\nMakes sense. check_guc also checks after this pattern.\n--\nMichael", "msg_date": "Tue, 8 Feb 2022 13:06:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" }, { "msg_contents": "On Tue, Feb 08, 2022 at 01:06:29PM +0900, Michael Paquier wrote:\n> Makes sense. check_guc also checks after this pattern.\n\nOkay, I have done all the adjustments you mentioned, added a couple of\ncomments and applied the patch. 
If the buildfarm is happy, I'll\ngo retire check_guc.\n--\nMichael", "msg_date": "Wed, 9 Feb 2022 10:19:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC flags" } ]
[ { "msg_contents": "Hi,\n\nAttached patch is doing small changes to brin, gin & gist index tests\nto use an unlogged table without changing the original intention of\nthose tests and that is able to hit ambuildempty() routine which is\notherwise not reachable by the current tests.\n\n-- \nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 29 Nov 2021 10:34:06 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "tweak to a few index tests to hits ambuildempty() routine." }, { "msg_contents": "On Mon, Nov 29, 2021 at 10:34 AM Amul Sul <sulamul@gmail.com> wrote:\n\n> Hi,\n>\n> Attached patch is doing small changes to brin, gin & gist index tests\n> to use an unlogged table without changing the original intention of\n> those tests and that is able to hit ambuildempty() routine which is\n> otherwise not reachable by the current tests.\n>\n\n+1 for the idea as it does the better code coverage.\n\n\n\n>\n> --\n> Regards,\n> Amul Sul\n> EDB: http://www.enterprisedb.com\n>\n\n\n-- \nRushabh Lathia", "msg_date": "Tue, 18 Jan 2022 11:55:17 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." 
}, { "msg_contents": "On 2021-Nov-29, Amul Sul wrote:\n\n> Attached patch is doing small changes to brin, gin & gist index tests\n> to use an unlogged table without changing the original intention of\n> those tests and that is able to hit ambuildempty() routing which is\n> otherwise not reachable by the current tests.\n\nI added one change to include spgist too, which was uncovered, and\npushed this.\n\nThanks for the patch!\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I am amazed at [the pgsql-sql] mailing list for the wonderful support, and\nlack of hesitasion in answering a lost soul's question, I just wished the rest\nof the mailing list could be like this.\" (Fotis)\n (http://archives.postgresql.org/pgsql-sql/2006-06/msg00265.php)\n\n\n", "msg_date": "Mon, 25 Apr 2022 15:05:59 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." }, { "msg_contents": "On 2022-Apr-25, Alvaro Herrera wrote:\n\n> I added one change to include spgist too, which was uncovered, and\n> pushed this.\n\nLooking into the recoveryCheck failure in buildfarm.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 25 Apr 2022 15:34:00 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." }, { "msg_contents": "On 2022-Apr-25, Alvaro Herrera wrote:\n\n> On 2022-Apr-25, Alvaro Herrera wrote:\n> \n> > I added one change to include spgist too, which was uncovered, and\n> > pushed this.\n> \n> Looking into the recoveryCheck failure in buildfarm.\n\nHmm, so 027_stream_regress.pl is not prepared to deal with any unlogged\ntables that may be left in the regression database (which is what my\nspgist addition did). 
I first tried doing a TRUNCATE of the unlogged\ntable, but that doesn't work either, and it turns out that the\nregression database does not have any UNLOGGED relations. Maybe that's\nsomething we need to cater for, eventually, but for now dropping the\ntable suffices. I have pushed that.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nTom: There seems to be something broken here.\nTeodor: I'm in sackcloth and ashes... Fixed.\n http://archives.postgresql.org/message-id/482D1632.8010507@sigaev.ru\n\n\n", "msg_date": "Mon, 25 Apr 2022 15:53:10 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." }, { "msg_contents": "On Mon, Apr 25, 2022 at 7:23 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Apr-25, Alvaro Herrera wrote:\n>\n> > On 2022-Apr-25, Alvaro Herrera wrote:\n> >\n> > > I added one change to include spgist too, which was uncovered, and\n> > > pushed this.\n> >\n\nThanks for the commit with the improvement.\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 25 Apr 2022 19:27:47 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Hmm, so 027_stream_regress.pl is not prepared to deal with any unlogged\n> tables that may be left in the regression database (which is what my\n> spgist addition did). I first tried doing a TRUNCATE of the unlogged\n> table, but that doesn't work either, and it turns out that the\n> regression database does not have any UNLOGGED relations. Maybe that's\n> something we need to cater for, eventually, but for now dropping the\n> table suffices. 
I have pushed that.\n\nIt does seem like the onus should be on 027_stream_regress.pl to\ndeal with that, rather than restricting what the core tests can\nleave behind.\n\nMaybe we could have it look for unlogged tables and drop them\nbefore making the dumps? Although I don't understand why\nTRUNCATE wouldn't do the job equally well.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Apr 2022 10:05:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." }, { "msg_contents": "On Mon, Apr 25, 2022 at 10:05:08AM -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > Hmm, so 027_stream_regress.pl is not prepared to deal with any unlogged\n> > tables that may be left in the regression database (which is what my\n> > spgist addition did). I first tried doing a TRUNCATE of the unlogged\n> > table, but that doesn't work either, and it turns out that the\n> > regression database does not have any UNLOGGED relations. Maybe that's\n> > something we need to cater for, eventually, but for now dropping the\n> > table suffices. I have pushed that.\n> \n> It does seem like the onus should be on 027_stream_regress.pl to\n> deal with that, rather than restricting what the core tests can\n> leave behind.\n\nYeah. Using \"pg_dumpall --no-unlogged-table-data\", as attached, suffices.\n\n> Maybe we could have it look for unlogged tables and drop them\n> before making the dumps? Although I don't understand why\n> TRUNCATE wouldn't do the job equally well.\n\nAfter TRUNCATE, one still gets a setval for sequences and a zero-row COPY for\ntables. When dumping a standby or using --no-unlogged-table-data, those\ncommands are absent.", "msg_date": "Fri, 20 May 2022 23:15:09 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." 
}, { "msg_contents": "On Sat, May 21, 2022 at 6:15 PM Noah Misch <noah@leadboat.com> wrote:\n> On Mon, Apr 25, 2022 at 10:05:08AM -0400, Tom Lane wrote:\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > Hmm, so 027_stream_regress.pl is not prepared to deal with any unlogged\n> > > tables that may be left in the regression database (which is what my\n> > > spgist addition did). I first tried doing a TRUNCATE of the unlogged\n> > > table, but that doesn't work either, and it turns out that the\n> > > regression database does not have any UNLOGGED relations. Maybe that's\n> > > something we need to cater for, eventually, but for now dropping the\n> > > table suffices. I have pushed that.\n> >\n> > It does seem like the onus should be on 027_stream_regress.pl to\n> > deal with that, rather than restricting what the core tests can\n> > leave behind.\n>\n> Yeah. Using \"pg_dumpall --no-unlogged-table-data\", as attached, suffices.\n\n 'pg_dumpall', '-f', $outputdir . '/primary.dump',\n- '--no-sync', '-p', $node_primary->port\n+ '--no-sync', '-p', $node_primary->port,\n+ '--no-unlogged-table-data' # if unlogged, standby\nhas schema only\n\nLGTM, except for the stray extra whitespace. I tested by reverting\ndec8ad36 locally, at which point \"gmake check\" still passed but \"gmake\n-C src/test/recovery/ check PROVE_TESTS=t/027_stream_regress.pl\nPROVE_FLAGS=-v\" failed, and then your change fixed that.\n\n\n", "msg_date": "Sun, 22 May 2022 16:24:16 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." 
}, { "msg_contents": "On Sun, May 22, 2022 at 04:24:16PM +1200, Thomas Munro wrote:\n> On Sat, May 21, 2022 at 6:15 PM Noah Misch <noah@leadboat.com> wrote:\n> > On Mon, Apr 25, 2022 at 10:05:08AM -0400, Tom Lane wrote:\n> > > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > > Hmm, so 027_stream_regress.pl is not prepared to deal with any unlogged\n> > > > tables that may be left in the regression database (which is what my\n> > > > spgist addition did). I first tried doing a TRUNCATE of the unlogged\n> > > > table, but that doesn't work either, and it turns out that the\n> > > > regression database does not have any UNLOGGED relations. Maybe that's\n> > > > something we need to cater for, eventually, but for now dropping the\n> > > > table suffices. I have pushed that.\n> > >\n> > > It does seem like the onus should be on 027_stream_regress.pl to\n> > > deal with that, rather than restricting what the core tests can\n> > > leave behind.\n> >\n> > Yeah. Using \"pg_dumpall --no-unlogged-table-data\", as attached, suffices.\n> \n> 'pg_dumpall', '-f', $outputdir . '/primary.dump',\n> - '--no-sync', '-p', $node_primary->port\n> + '--no-sync', '-p', $node_primary->port,\n> + '--no-unlogged-table-data' # if unlogged, standby\n> has schema only\n> \n> LGTM, except for the stray extra whitespace.\n\nperltidy contributes the prior-line whitespace change.\n\n\n", "msg_date": "Sat, 21 May 2022 21:54:27 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." 
}, { "msg_contents": "Hi,\n\nThe commit 4fb5c794e5 eliminates the ginbulkdelete() test coverage \nprovided by the commit 4c51a2d1e4 two years ago.\nWith the following Assert added:\n@@ -571,7 +571,7 @@ ginbulkdelete(IndexVacuumInfo *info, \nIndexBulkDeleteResult *stats,\n Buffer buffer;\n BlockNumber rootOfPostingTree[BLCKSZ / (sizeof(IndexTupleData) + \nsizeof(ItemId))];\n uint32 nRoot;\n-\n+ Assert(0);\n gvs.tmpCxt = AllocSetContextCreate(CurrentMemoryContext,\n \n \"Gin vacuum temporary context\",\n \n ALLOCSET_DEFAULT_SIZES);\nI have check-world passed successfully.\n\nAmul Sul wrote on 2021-11-29 12:04:\n> Hi,\n> \n> Attached patch is doing small changes to brin, gin & gist index tests\n> to use an unlogged table without changing the original intention of\n> those tests and that is able to hit ambuildempty() routine which is\n> otherwise not reachable by the current tests", "msg_date": "Mon, 12 Sep 2022 15:47:09 +0700", "msg_from": "a.kozhemyakin@postgrespro.ru", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." 
}, { "msg_contents": "I still wonder, if assert doesn't catch why that place is marked as \ncovered here?\nhttps://coverage.postgresql.org/src/backend/access/gin/ginvacuum.c.gcov.html\n\na.kozhemyakin@postgrespro.ru wrote on 2022-09-12 15:47:\n> Hi,\n> \n> The commit 4fb5c794e5 eliminates the ginbulkdelete() test coverage\n> provided by the commit 4c51a2d1e4 two years ago.\n> With the following Assert added:\n> @@ -571,7 +571,7 @@ ginbulkdelete(IndexVacuumInfo *info,\n> IndexBulkDeleteResult *stats,\n> Buffer buffer;\n> BlockNumber rootOfPostingTree[BLCKSZ / (sizeof(IndexTupleData)\n> + sizeof(ItemId))];\n> uint32 nRoot;\n> -\n> + Assert(0);\n> gvs.tmpCxt = AllocSetContextCreate(CurrentMemoryContext,\n> \n> \"Gin vacuum temporary context\",\n> \n> ALLOCSET_DEFAULT_SIZES);\n> I have check-world passed successfully.\n> \n> Amul Sul wrote on 2021-11-29 12:04:\n>> Hi,\n>> \n>> Attached patch is doing small changes to brin, gin & gist index tests\n>> to use an unlogged table without changing the original intention of\n>> those tests and that is able to hit ambuildempty() routine which is\n>> otherwise not reachable by the current tests", "msg_date": "Wed, 14 Sep 2022 13:46:32 +0700", "msg_from": "a.kozhemyakin@postgrespro.ru", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." }, { "msg_contents": "On Wed, Sep 14, 2022 at 12:16 PM <a.kozhemyakin@postgrespro.ru> wrote:\n>\n> I still wonder, if assert doesn't catch why that place is marked as\n> covered here?\n> https://coverage.postgresql.org/src/backend/access/gin/ginvacuum.c.gcov.html\n>\n\nProbably other tests cover that.\n\nRegards,\nAmul\n\n\n", "msg_date": "Wed, 14 Sep 2022 12:58:18 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." 
}, { "msg_contents": "After analyzing this, I found out why we don't reach that Assert but we \nhave coverage shown - firstly, it reached via another test, vacuum; \nsecondly, it depends on the gcc optimization flag. We reach that Assert \nonly when using -O0.\nIf we build with -O2 or -Og that function is not reached (due to \ndifferent results of the heap_prune_satisfies_vacuum() check inside \nheap_page_prune()).\nBut as the make checks mostly (including the buildfarm testing) \nperformed with -O2/-Og, it looks like that after 4fb5c794e5 we have \nlost the coverage provided by the 4c51a2d1e4.\n\nAmul Sul wrote on 2022-09-14 14:28:\n> On Wed, Sep 14, 2022 at 12:16 PM <a.kozhemyakin@postgrespro.ru> wrote:\n>> \n>> I still wonder, if assert doesn't catch why that place is marked as\n>> covered here?\n>> https://coverage.postgresql.org/src/backend/access/gin/ginvacuum.c.gcov.html\n>> \n> \n> Probably other tests cover that.\n> \n> Regards,\n> Amul", "msg_date": "Wed, 21 Sep 2022 14:10:42 +0700", "msg_from": "a.kozhemyakin@postgrespro.ru", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." }, { "msg_contents": "On 2022-Sep-21, a.kozhemyakin@postgrespro.ru wrote:\n\n> After analyzing this, I found out why we don't reach that Assert but we have\n> coverage shown - firstly, it reached via another test, vacuum; secondly, it\n> depends on the gcc optimization flag. We reach that Assert only when using\n> -O0.\n> If we build with -O2 or -Og that function is not reached (due to different\n> results of the heap_prune_satisfies_vacuum() check inside\n> heap_page_prune()).\n> But as the make checks mostly (including the buildfarm testing) performed\n> with -O2/-Og, it looks like that after 4fb5c794e5 we have lost the coverage\n> provided by the 4c51a2d1e4.\n\nHmm, so if we merely revert the change to gin.sql then we still won't\nget the coverage back? 
I was thinking that a simple change would be to\nrevert the change from temp to unlogged for that table, and create\nanother unlogged table; but maybe that's not enough. Do we need a\nbetter test for GIN vacuuming that works regardless of the optimization\nlevel?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Investigación es lo que hago cuando no sé lo que estoy haciendo\"\n(Wernher von Braun)\n\n\n", "msg_date": "Wed, 21 Sep 2022 13:58:37 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." }, { "msg_contents": "On Wed, Sep 21, 2022 at 02:10:42PM +0700, a.kozhemyakin@postgrespro.ru wrote:\n> After analyzing this, I found out why we don't reach that Assert but we have\n> coverage shown - firstly, it reached via another test, vacuum; secondly, it\n> depends on the gcc optimization flag. We reach that Assert only when using\n> -O0.\n> If we build with -O2 or -Og that function is not reached (due to different\n> results of the heap_prune_satisfies_vacuum() check inside\n> heap_page_prune()).\n\nWith \"make check MAX_CONNECTIONS=1\", does that difference between -O0 and -O2\nstill appear? Compiler optimization shouldn't consistently change pruning\ndecisions. It could change pruning decisions probabilistically, by changing\nwhich parallel actions overlap. If the difference disappears under\nMAX_CONNECTIONS=1, the system is likely fine.\n\n\n", "msg_date": "Sat, 24 Sep 2022 10:20:20 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." 
}, { "msg_contents": "Yes, with MAX_CONNECTIONS=1 and -O2 the function ginbulkdelete() is \nreached during the vacuum test.\nBut my point is that after 4fb5c794e5 for most developer setups and \nbuildfarm members, e.g.:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=guaibasaurus&dt=2022-09-25%2001%3A01%3A13\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tayra&dt=2022-09-24%2020%3A40%3A00\nthe ginbulkdelete() most probably is not tested.\nIn other words, it seems that we've just lost the effect of 4c51a2d1e4:\nAdd a test case that exercises vacuum's deletion of empty GIN\nposting pages. Since this is a temp table, it should now work\nreliably to delete a bunch of rows and immediately VACUUM.\nBefore the preceding commit, this would not have had the desired\neffect, at least not in parallel regression tests.\n\nNoah Misch wrote on 2022-09-25 00:20:\n> On Wed, Sep 21, 2022 at 02:10:42PM +0700, a.kozhemyakin@postgrespro.ru \n> wrote:\n>> After analyzing this, I found out why we don't reach that Assert but \n>> we have\n>> coverage shown - firstly, it reached via another test, vacuum; \n>> secondly, it\n>> depends on the gcc optimization flag. We reach that Assert only when \n>> using\n>> -O0.\n>> If we build with -O2 or -Og that function is not reached (due to \n>> different\n>> results of the heap_prune_satisfies_vacuum() check inside\n>> heap_page_prune()).\n> \n> With \"make check MAX_CONNECTIONS=1\", does that difference between -O0 \n> and -O2\n> still appear? Compiler optimization shouldn't consistently change \n> pruning\n> decisions. It could change pruning decisions probabilistically, by \n> changing\n> which parallel actions overlap. If the difference disappears under\n> MAX_CONNECTIONS=1, the system is likely fine.", "msg_date": "Sun, 25 Sep 2022 20:49:27 +0700", "msg_from": "a.kozhemyakin@postgrespro.ru", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." 
}, { "msg_contents": "a.kozhemyakin@postgrespro.ru writes:\n> But my point is that after 4fb5c794e5 for most developer setups and \n> buildfarm members, e.g.:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=guaibasaurus&dt=2022-09-25%2001%3A01%3A13\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tayra&dt=2022-09-24%2020%3A40%3A00\n> the ginbulkdelete() most probably is not tested.\n> In other words, it seems that we've just lost the effect of 4c51a2d1e4:\n> Add a test case that exercises vacuum's deletion of empty GIN\n> posting pages.\n\nYeah. You can see that the coverage-test animal is not reaching it\nanymore:\nhttps://coverage.postgresql.org/src/backend/access/gin/ginvacuum.c.gcov.html\n\nSo it seems clear that 4fb5c794e5 made at least some coverage worse\nnot better. I think we'd better rejigger it to add some new indexes\nnot repurpose old ones.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 25 Sep 2022 10:49:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." }, { "msg_contents": "I wrote:\n> Yeah. You can see that the coverage-test animal is not reaching it\n> anymore:\n> https://coverage.postgresql.org/src/backend/access/gin/ginvacuum.c.gcov.html\n\nThat's what it's saying *now*, but after rereading this whole thread\nI see that it apparently said something different last week. So the\ncoverage is probabilistic, which squares with this discussion and\nwith some tests I just did locally. That's not good. I shudder to\nimagine how much time somebody might waste trying to locate a bug\nin this area, if a test failure appears and disappears regardless\nof code changes they make while chasing it.\n\nI propose that we revert 4fb5c794e and instead add separate test\ncases that just create unlogged indexes (I guess they don't actually\nneed to *do* anything with them?). 
Looks like dec8ad367 could be\nreverted as well, in view of 2f2e24d90.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 25 Sep 2022 11:51:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." }, { "msg_contents": "On 2022-Sep-25, Tom Lane wrote:\n\n> That's what it's saying *now*, but after rereading this whole thread\n> I see that it apparently said something different last week. So the\n> coverage is probabilistic, which squares with this discussion and\n> with some tests I just did locally. That's not good.\n\nCompletely agreed.\n\n> I propose that we revert 4fb5c794e and instead add separate test\n> cases that just create unlogged indexes (I guess they don't actually\n> need to *do* anything with them?).\n\nWFM. I can do it next week, or feel free to do so if you want.\n\n> Looks like dec8ad367 could be reverted as well, in view of 2f2e24d90.\n\nYeah, sounds good.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sun, 25 Sep 2022 17:57:07 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Sep-25, Tom Lane wrote:\n>> I propose that we revert 4fb5c794e and instead add separate test\n>> cases that just create unlogged indexes (I guess they don't actually\n>> need to *do* anything with them?).\n\n> WFM. I can do it next week, or feel free to do so if you want.\n\nOn it now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 25 Sep 2022 12:19:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tweak to a few index tests to hits ambuildempty() routine." } ]
[ { "msg_contents": "We've been burnt by this issue repeatedly (cf c2d1eea9e, d025cf88b,\n11b500072) so I think it's time to try to formalize and document\nwhat to do to export a variable from src/common/ or src/port/.\n\nHere's a draft patch. I'm not in love with the name \"PGDLLIMPORT_FE\"\nand would welcome better ideas.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 29 Nov 2021 00:57:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Rationalizing declarations of src/common/ variables" }, { "msg_contents": "On Mon, Nov 29, 2021 at 11:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> We've been burnt by this issue repeatedly (cf c2d1eea9e, d025cf88b,\n> 11b500072) so I think it's time to try to formalize and document\n> what to do to export a variable from src/common/ or src/port/.\n\n+1 to document it.\n\n> Here's a draft patch. I'm not in love with the name \"PGDLLIMPORT_FE\"\n> and would welcome better ideas.\n\nHow about PGDLLIMPORT_FE_BE which represents the macro to be used for\nvariables/functions common to both frontend and backend? Otherwise,\nPGDLLIMPORT_COMM/PGDLLIMPORT_COMMON or PGDLLIMPORT_2 or\nPGDLLIMPORT_PORT?\n\nWe have some of the #defines with \"FeBe\":\n/*\n * prototypes for functions in pqcomm.c\n */\nextern WaitEventSet *FeBeWaitSet;\n\n#define FeBeWaitSetSocketPos 0\n#define FeBeWaitSetLatchPos 1\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 29 Nov 2021 11:48:50 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rationalizing declarations of src/common/ variables" }, { "msg_contents": "On Mon, Nov 29, 2021 at 12:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Here's a draft patch. I'm not in love with the name \"PGDLLIMPORT_FE\"\n> and would welcome better ideas.\n\nWhat's the value of introducing PGDLLIMPORT_FE? 
I mean suppose we just\nmake PGDLLIMPORT expand to nothing in front-end code.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Nov 2021 07:47:45 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rationalizing declarations of src/common/ variables" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> What's the value of introducing PGDLLIMPORT_FE? I mean suppose we just\n> make PGDLLIMPORT expand to nothing in front-end code.\n\nHmm ... fair question. It feels like that risks breaking something,\nbut offhand I can't see what, as long as we're certain that FRONTEND\nis set correctly in every compile.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Nov 2021 09:26:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Rationalizing declarations of src/common/ variables" }, { "msg_contents": "On Mon, Nov 29, 2021 at 9:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > What's the value of introducing PGDLLIMPORT_FE? I mean suppose we just\n> > make PGDLLIMPORT expand to nothing in front-end code.\n>\n> Hmm ... fair question. It feels like that risks breaking something,\n> but offhand I can't see what, as long as we're certain that FRONTEND\n> is set correctly in every compile.\n\nIf it isn't, your way might go wrong too, since it depends on FRONTEND\nbeing set correctly at least at the point when the PGDLLIMPORT_FE\nmacro is defined. But that is not to say that I think everything is in\ngreat shape in this area. In a perfect world, I think the only\n'#define FRONTEND 1' in the backend would be in postgres_fe.h, but we\nhave it in 5 other places too, 3 of which include a comment saying\nthat it's an \"ugly hack\". 
Until somebody cleans that mess up, we have\nat least three cases to worry about: backend code that includes\n\"postgres.h\", frontend code that includes \"postgres_fe.h\", and\nfrontend code that first does #define FRONTEND 1 and then includes\n\"postgres.h\" anyway.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Nov 2021 09:39:49 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rationalizing declarations of src/common/ variables" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Nov 29, 2021 at 9:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> What's the value of introducing PGDLLIMPORT_FE? I mean suppose we just\n>>> make PGDLLIMPORT expand to nothing in front-end code.\n\n>> Hmm ... fair question. It feels like that risks breaking something,\n>> but offhand I can't see what, as long as we're certain that FRONTEND\n>> is set correctly in every compile.\n\n> If it isn't, your way might go wrong too, since it depends on FRONTEND\n> being set correctly at least at the point when the PGDLLIMPORT_FE\n> macro is defined.\n\nEither of these ways would require that FRONTEND is already set correctly\nwhen c.h is read. But all of the hacks you mention do ensure that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Nov 2021 10:03:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Rationalizing declarations of src/common/ variables" }, { "msg_contents": "On Mon, Nov 29, 2021 at 10:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Either of these ways would require that FRONTEND is already set correctly\n> when c.h is read. But all of the hacks you mention do ensure that.\n\nYeah. 
Are you aware of any other, worse hacks?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Nov 2021 11:44:15 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Rationalizing declarations of src/common/ variables" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Nov 29, 2021 at 10:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Either of these ways would require that FRONTEND is already set correctly\n>> when c.h is read. But all of the hacks you mention do ensure that.\n\n> Yeah. Are you aware of any other, worse hacks?\n\nWorse than which? Anyway, I pushed a patch based on your suggestion;\nwe'll soon see if the Windows BF members like it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Nov 2021 11:48:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Rationalizing declarations of src/common/ variables" } ]
[ { "msg_contents": "Hi,\n\nDuring my recent work, I need some new stuff attached to Relation. Rather\nthan adding\nsome new data structures, I added it to Relation directly. Like\nrelation->balabala. Then\nI initialize it during ExecutorRun, like table_tuple_insert.. and\ndestroy it at ExecutorEnd.\n\nThe above solution works based on 2 assumptions at least:\n1. During the ExecutorRun & ExecutorEnd, the relcache will never by\ninvalidated, if not\nthe old relation->balabala will be lost. I assume this is correct since I\ndidn't see any places\nwhere we handle such changes in Executor code.\n2. We need to consider the ExuecotRun raised error, we need to destroy\nthe balabala resource\nas well. so I added it to the RelationClose function.\n\nSo the overall design works like this:\n\nxxx_table_tuple_insert(Relation rel, ...)\n{\n if (rel->balabala == NULL)\n rel->balabala = allocate_bala_resource(rel); // Allocate the\nmemory xCtx which is under TopTransactionContext.\n do_task_with(rel->balabala);\n}\n\nat the end of the executor, I run\n\nrelease_bala_resource(Relation rel)\n{\n if (rel->balabala == NULL)\n return;\n do_the_real_task();\n MemoryContextDelete(rel->bala->memctx);\n rel->balabala = NULL\n}\n\nFor the failed cases:\n\nRelationClose(..)\n{\n if (RelationHasReferenceCountZero(relation))\n release_bala_resource(relation);\n}\n\nWill my suluation work?\n\n-- \nBest Regards\nAndy Fan\n\nHi,During my recent work,  I need some new stuff attached to Relation.  Rather than addingsome new data structures,  I added it to Relation directly.  Like relation->balabala.  ThenI initialize it during ExecutorRun,  like  table_tuple_insert.. and destroy it at ExecutorEnd.The above solution works based on 2 assumptions at least: 1.  During the ExecutorRun & ExecutorEnd,  the relcache will never by invalidated, if notthe old relation->balabala will be lost.  I assume this is correct since I didn't see any placeswhere we handle such changes in Executor code. 2.  
We need to consider the ExuecotRun raised error,  we need to destroy the balabala resourceas well.  so I added it to the RelationClose function.  So the overall design works like this:xxx_table_tuple_insert(Relation rel, ...){   if (rel->balabala == NULL)        rel->balabala = allocate_bala_resource(rel);  // Allocate the memory xCtx which is under TopTransactionContext.   do_task_with(rel->balabala); }at the end of the executor,  I runrelease_bala_resource(Relation rel){   if (rel->balabala == NULL)         return;   do_the_real_task();   MemoryContextDelete(rel->bala->memctx);    rel->balabala = NULL}For the failed cases:RelationClose(..){   if (RelationHasReferenceCountZero(relation))         release_bala_resource(relation); }Will my suluation work?-- Best RegardsAndy Fan", "msg_date": "Mon, 29 Nov 2021 15:10:03 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Can I assume relation would not be invalid during from ExecutorRun to\n ExecutorEnd" }, { "msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> During my recent work, I need some new stuff attached to Relation. Rather\n> than adding\n> some new data structures, I added it to Relation directly. Like\n> relation->balabala. Then\n> I initialize it during ExecutorRun, like table_tuple_insert.. and\n> destroy it at ExecutorEnd.\n\nThis is not going to work, at least not if you expect that a relcache\nreset would not preserve the data. 
Also, what happens in the case of\nnested executor runs touching the same relation?\n\nWhy do you think this ought to be in the relcache, and not in the\nexecutor's rangetable-associated data structures?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Nov 2021 09:33:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can I assume relation would not be invalid during from\n ExecutorRun to ExecutorEnd" }, { "msg_contents": "On Mon, Nov 29, 2021 at 2:10 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> 1. During the ExecutorRun & ExecutorEnd, the relcache will never by invalidated, if not\n> the old relation->balabala will be lost. I assume this is correct since I didn't see any places\n> where we handle such changes in Executor code.\n\nIt's not correct. We accept invalidation messages in a number of\ndifferent code paths, including whenever we acquire a lock on a\nrelation. That doesn't typically happen in the middle of a query, but\nthat's just because most queries don't happen to do anything that\nwould make it happen. They can, though. For example, the query can\ncall a user-defined function that accesses a table not previously\ntouched by the transaction. Or a built-in function that does the same\nthing, like table_to_xml().\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Nov 2021 09:56:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can I assume relation would not be invalid during from\n ExecutorRun to ExecutorEnd" }, { "msg_contents": "On Mon, Nov 29, 2021 at 10:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > During my recent work, I need some new stuff attached to Relation.\n> Rather\n> > than adding\n> > some new data structures, I added it to Relation directly. Like\n> > relation->balabala. Then\n> > I initialize it during ExecutorRun, like table_tuple_insert.. 
and\n> > destroy it at ExecutorEnd.\n>\n\n\n> at least not if you expect that a relcache reset would not preserve the\n> data.\n\n\nThanks for looking into this question.\n\nI am not clear about this sentence. IMO, if the relcache is reset, my\ndata\nwould be lost. That's why I expect there is no relcache reset happening in\nExecutor Stage; are you talking about something different?\n\n\n> Also, what happens in the case of nested executor runs touching the same\n> relation?\n>\n\nIf you are talking about the RelationClose would be called many times, then\nmy solution is my resource is only destroyed when the refcount == 0; If\nyou are talking\nabout the different situation needs different resource types, then that's\nsomething\nI didn't describe clearly. At the current time, we can assume \"For 1\nSQL statement, there\nis only 1 resource needed per relation, even if the executor or nested executor\ntouches the\nsame relation many times\". I'd like to describe this clearly as well for\nreview purposes,\nbut I want to focus on one topic only first. So let's first assume my\nassumption is correct.\n\n\n> Why do you think this ought to be in the relcache, and not in the\n> executor's rangetable-associated data structures?\n>\n>\nI just see the ExecutorEnd code is not called in the exception case,\nand I hope my resource\ncan be released totally even if the statement raises errors. For example:\n\nCREATE TABLE t(A INT PRIMARY KEY);\n\nINSERT INTO t VALUES(1);\nINSERT INTO t VALUES(1);\n\nIn the above case, the ExecutorEnd is not called, but the RelationClose\nwill be absolutely called\nduring the ResourceOwnerRelease call.\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Tue, 30 Nov 2021 08:36:42 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can I assume relation would not be invalid during from\n ExecutorRun to ExecutorEnd" }, { "msg_contents": "On Mon, Nov 29, 2021 at 10:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Nov 29, 2021 at 2:10 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > 1. During the ExecutorRun & ExecutorEnd, the relcache will never by\n> invalidated, if not\n> > the old relation->balabala will be lost. I assume this is correct since\n> I didn't see any places\n> > where we handle such changes in Executor code.\n>\n> It's not correct. We accept invalidation messages in a number of\n> different code paths, including whenever we acquire a lock on a\n> relation. That doesn't typically happen in the middle of a query, but\n> that's just because most queries don't happen to do anything that\n> would make it happen. They can, though.\n\n\nThanks for looking into this question.\n\nActually I think this would not happen. My reasons are:\n\n1. I think the case which would cause the relcache reset needs a lock; after\nthe executor start, the executor lock is acquired so no one can really\nhave chances\nto send an invalidation message until the lock is released. (let me first\nignore the 2\nexamples you talked about, and I will talk about it later).\n\n2. _If_ the relation can be reset after we open it during Executor code,\nthen would the\nrelation (RelationData *) pointed memory still validated after the relcache\nreset? For example\n\nCREATE TABLE t(a int);\nINSERT INTO t VALUES(1), (2);\n\nUPDATE t set a = 100;\n\nWe need to update 2 tuples in the update statement, if the relcache is\nreset, can we still use the previous\n(RelationData *) to do the following update? If not, what code is used to\nchange the relation for the old relcache\naddress to the new relcache. I assumed (RelationData *) pointer. 
to the\nrelcache directly, hope this is correct..\n\nFor example, the query can\n> call a user-defined function that accesses a table not previously\n> touched by the transaction. Or a built-in function that does the same\n> thing, like table_to_xml().\n>\n>\nOK, this is something I missed before, but it looks like it would not cause a\ndifference _in my case_.\n\nIIUC, you are describing the thing like this:\n\nCREATE FUNCTION udf()\n...\nSELECT * FROM t2;\n$$;\n\nSELECT udf() FROM t1;\n\nthen the relation t2 will not be opened at the beginning of ExecutorRun and\nit will be only opened\nwhen we fetch the first tuple from t1; so we can have cache invalidation\nbetween ExecutorRun and\nthe first call of udf.\n\nBut in my case, my exception should be that the relcache should not be\ninvalidated _after the first relation open_\nin the executor (not the beginning of executorRun); this is something I\ndidn't describe well when I posted\nmy message since I didn't find out this situation at that time.\n\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Tue, 30 Nov 2021 09:00:33 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can I assume relation would not be invalid during from\n ExecutorRun to ExecutorEnd" }, { "msg_contents": ">\n>\n>\n>> Why do you think this ought to be in the relcache, and not in the\n>> executor's rangetable-associated data structures?\n>>\n>\nI re-think about this, I guess I didn't mention something clear enough.\nThat's true that I bound my bala struct to Relation struct, and the memory\nrelation used is allocated in relcache. but the memory of bala is\nallocated in\nTopTransactionMemory context.\n\nxxx_table_tuple_insert(Relation rel, ...)\n{\n if (rel->balabala == NULL)\n rel->balabala = allocate_bala_resource(rel); //\n*TopTransactionContext*.\n do_task_with(rel->balabala);\n}\n\nnot sure if this should be called as putting my data in relcache.\n\nand I rechecked the RelationData struct, and it looks like some\nExecutor-bind struct also\nresides in it. for example: RelationData.rd_lookInfo. If the relcache can\nbe reset, the\nfields like this are unsafe to access as well. Am I missing something?\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Tue, 30 Nov 2021 09:14:06 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can I assume relation would not be invalid during from\n ExecutorRun to ExecutorEnd" }, { "msg_contents": "On Tue, Nov 30, 2021 at 6:44 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>>\n>>\n>>>\n>>> Why do you think this ought to be in the relcache, and not in the\n>>> executor's rangetable-associated data structures?\n>\n>\n> I re-think about this, I guess I didn't mention something clear enough.\n> That's true that I bound my bala struct to Relation struct, and the memory\n> relation used is allocated in relcache. but the memory of bala is allocated in\n> TopTransactionMemory context.\n>\n> xxx_table_tuple_insert(Relation rel, ...)\n> {\n> if (rel->balabala == NULL)\n> rel->balabala = allocate_bala_resource(rel); // TopTransactionContext.\n> do_task_with(rel->balabala);\n> }\n>\n> not sure if this should be called as putting my data in relcache.\n>\n> and I rechecked the RelationData struct, and it looks like some Executor-bind struct also\n> resides in it. for example: RelationData.rd_lookInfo. If the relcache can be reset, the\n> fields like this are unsafe to access as well. Am I missing something?\n\nI think you are talking about rd_lockInfo? First, I think this is not a\npointer, so even if you get a new relcache entry this memory will be\nallocated along with the new relation descriptor and it will be\ninitialized whenever a new relation descriptor is created; check\nRelationInitLockInfo(). 
You will see there are many pointers also in\nRelationData but we ensure before we access them they are initialized,\nand most of them are allocated while building the relation descriptor;\nsee RelationBuildDesc().\n\nSo if you keep some random pointer in RelationData without ensuring\nthat it is getting reinitialized while building the relation cache, then I\nthink that's not the correct way to do it.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Nov 2021 09:51:38 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can I assume relation would not be invalid during from\n ExecutorRun to ExecutorEnd" }, { "msg_contents": ">\n>\n> You will see there are many pointers also in\n> RelationData but we ensure before we access them they are initialized,\n>\n\nThe initialized values are not much helpful in the cases I provided here.\n\nWhat do you think about this question?\n\n2. _If_ the relation can be reset after we open it during Executor code,\nthen would the\nrelation (RelationData *) pointed memory still validated after the relcache\nreset? For example\n\nCREATE TABLE t(a int);\nINSERT INTO t VALUES(1), (2);\n\nUPDATE t set a = 100;\n\nWe need to update 2 tuples in the update statement, if the relcache is\nreset after the first tuple is\nupdated, can we still use the previous (RelationData *) to do the 2nd\nupdate? This is just\na common example. If you would say, in this case the relcache can't be\nreset, then the question\ncome back to what situation the relcache can be reset between the first\ntime I open it during execution\ncode and the end of execution code. I think we have some talks about this\nat [1].\n\nRight now, I am not pretty sure I am doing something well. That's why I\npost the question here, but still\nI didn't find a better solution right now. [2]\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWonyG2_5V+vKaD6GETNNDKbKFL=otwWSEA3UXOJ=rS_wQ@mail.gmail.com\n\n[2]\nhttps://www.postgresql.org/message-id/CAKU4AWotGB7PH%2BSJk41cgvqxOfeEEvJ1MV%2B6b21_5DMDE8SLXg%40mail.gmail.com\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Tue, 30 Nov 2021 14:42:37 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can I assume relation would not be invalid during from\n ExecutorRun to ExecutorEnd" }, { "msg_contents": "On Tue, Nov 30, 2021 at 12:12 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>>\n>>\n>> You will see there are many pointers also in\n>> RelationData but we ensure before we access them they are initialized,\n>\n>\n> The initialized values are not much helpful in the cases I provided here.\n>\n> What do you think about this question?\n>\n> 2. _If_ the relation can be reset after we open it during Executor code, then would the\n> relation (RelationData *) pointed memory still validated after the relcache reset? For example\n>\n> CREATE TABLE t(a int);\n> INSERT INTO t VALUES(1), (2);\n>\n> UPDATE t set a = 100;\n>\n> We need to update 2 tuples in the update statement, if the relcache is reset after the first tuple is\n> updated, can we still use the previous (RelationData *) to do the 2nd update? This is just\n> a common example. If you would say, in this case the relcache can't be reset, then the question\n> come back to what situation the relcache can be reset between the first time I open it during execution\n> code and the end of execution code. I think we have some talks about this at [1].\n\nIMHO, if you are doing an update then you must be already holding the\nrelation lock, so even if you call some UDF (e.g. 
UPDATE t set a = 100\nWHERE x= UDF()) and it accepts the invalidation it wont affect your\nRelationData because you are already holding the lock so parallelly\nthere could not be any DDL on this particular relation so this recache\nentry should not be invalidated at least in the example you have\ngiven.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Nov 2021 12:22:25 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can I assume relation would not be invalid during from\n ExecutorRun to ExecutorEnd" }, { "msg_contents": "Thanks for everyone's insight so far!\n\nmy exception should be that the relcache should not be invalidated _after\n> the first relation open_\n> in the executor (not the beginning of executorRun)。\n>\n>\ns/exception/expectation.\n\nTo be more accurate, my expectation is for a single sql statement, after\nthe first time I write data into\nthe relation, until the statement is completed or \"aborted and\nRelationClose is called in ResourceOwnerRelease\",\nthe relcache reset should never happen.\n\nSince there are many places the write table access methods can be called\nlike COPY, CAST, REFRESH MATVIEW,\nVACUUM and ModifyNode, and it is possible that we can get error in each\nplace, so I think RelationClose\nshould be a great places for exceptional case(at the same time, we should\nremember to destroy properly for non\nexceptional case).\n\nThis might be not very professional since I bind some executor related data\ninto a relcache related struct.\nBut it should be workable in my modified user case. 
The most professional\nmethod I can think out is adding\nanother resource type in ResourceOwner and let ResourceOwnerRelease to\nhandle the exceptional cases.\n\n-- \nBest Regards\nAndy Fan
", "msg_date": "Tue, 30 Nov 2021 17:47:46 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can I assume relation would not be invalid during from\n ExecutorRun to ExecutorEnd" }, { "msg_contents": "On Tue, Nov 30, 2021 at 4:47 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> my exception should be that the relcache should not be invalidated _after the first relation open_\n>> in the executor (not the beginning of executorRun)。\n>\n> s/exception/expectation.\n>\n> To be more accurate,  my expectation is for a single sql statement,  after the first time I write data into\n> the relation, until the statement is completed or \"aborted and RelationClose is called in ResourceOwnerRelease\",\n> the relcache reset should never happen.\n\nWell .... I'm not sure why you asked the question and then argued with\nthe answer you got. 
I mean, you can certainly decide how you think it\nworks, but that's not how I think it works.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Nov 2021 14:33:25 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can I assume relation would not be invalid during from\n ExecutorRun to ExecutorEnd" }, { "msg_contents": "On Wed, Dec 1, 2021 at 3:33 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Nov 30, 2021 at 4:47 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >> my exception should be that the relcache should not be invalidated\n> _after the first relation open_\n> >> in the executor (not the beginning of executorRun)。\n> >\n> > s/exception/expectation.\n> >\n> > To be more accurate, my expectation is for a single sql statement,\n> after the first time I write data into\n> > the relation, until the statement is completed or \"aborted and\n> RelationClose is called in ResourceOwnerRelease\",\n> > the relcache reset should never happen.\n>\n> Well .... I'm not sure why you asked the question and then argued with\n> the answer you got.\n\n\nI think you misunderstand me, I argued with the answer because after I got\nthe\nanswer and I rethink my problem, I found my question description is not\naccurate\nenough, so I improved the question and willing discussion again. My\nexception was\nthings will continue with something like this:\n1. In your new described situation, your solution still does not work\nbecause ...\n2. In your new described situation, the solution would work for sure\n3. 
your situation is still not cleared enough.\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Wed, 1 Dec 2021 08:50:12 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can I assume relation would not be invalid during from\n ExecutorRun to ExecutorEnd" }, { "msg_contents": "On Tue, Nov 30, 2021 at 7:50 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> I think you misunderstand me,  I argued with the answer because after I got the\n> answer and I rethink my problem, I found my question description is not accurate\n> enough,  so I improved the question and willing discussion again. My exception was\n> things will continue with something like this:\n> 1. 
In your new described situation, your solution still does not work because ...\n> 2. In your new described situation, the solution would work for sure\n> 3. your situation is still not cleared enough.\n\nI mean, it's clear to me that you can make a new table get opened at\nany time in the execution of a query. Just use CASE or an IF statement\ninside of a stored procedure to do nothing for the first 9999 calls\nand then on call 10000 access a new relation. And as soon as that\nhappens, you can AcceptInvalidationMessages(), which can cause\nrelcache entries to be destroyed or rebuilt. If the relation is open,\nthe relcache entry can't be destroyed altogether, but it can be\nrebuilt: see RelationClearRelation(). Whether that's a problem for\nwhat you are doing I don't know. But the overall point is that access\nto a new relation can happen at any point in the query -- and as soon\nas it does, we will accept ALL pending invalidation messages for ALL\nrelations regardless of what locks anyone holds on anything.\n\nSo it's generally a mistake to assume that relcache entries are\n\"stable\" across large stretches of code. They are in fact stable in a\ncertain sense - if we have the relation open, we hold a reference\ncount on it, and so the Relation pointer itself will remain valid. But\nthe data it points to can change in various ways, and different\nmembers of the RelationData struct are handled differently. Separately\nfrom the reference count, the heavyweight lock that we also hold on\nthe relation as a condition of opening it prevents certain kinds of\nchanges, so that even if the relation cache entry is rebuilt, certain\nparticular fields will be unaffected. Which fields are protected in\nthis way will depend on what kind of lock is held. It's hard to speak\nin general terms. 
The best advice I can give you is (1) look exactly\nwhat RelationClearRelation() is going to do to the fields you care\nabout if a rebuild happens, (2) err on the side of assuming that\nthings can change under you, and (3) try running your code under\ndebug_discard_caches = 1. It will be slow that way, but it's pretty\neffective in finding places where you've made unsafe assumptions.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Dec 2021 10:01:08 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can I assume relation would not be invalid during from\n ExecutorRun to ExecutorEnd" }, { "msg_contents": "> If the relation is open,\n> the relcache entry can't be destroyed altogether, but it can be\n> rebuilt: see RelationClearRelation().\n\n\nThanks! This is a new amazing knowledge for me!\n\n\n> They are in fact stable in a certain sense - if we have the relation open\n\nwe hold a reference count on it, and so the Relation pointer itself will\n\nremain valid.\n\n\nThis sounds amazing as well.\n\n\n> But\n> the data it points to can change in various ways, and different\n> members of the RelationData struct are handled differently. Separately\n> from the reference count, the heavyweight lock that we also hold on\n> the relation as a condition of opening it prevents certain kinds of\n> changes, so that even if the relation cache entry is rebuilt, certain\n> particular fields will be unaffected. Which fields are protected in\n> this way will depend on what kind of lock is held. It's hard to speak\n> in general terms.\n\n\nAmazing++;\n\n\n> The best advice I can give you is (1) look exactly\n> what RelationClearRelation() is going to do to the fields you care\n> about if a rebuild happens, (2) err on the side of assuming that\n> things can change under you, and (3) try running your code under\n> debug_discard_caches = 1. 
It will be slow that way, but it's pretty\n> effective in finding places where you've made unsafe assumptions.\n>\n>\nThanks! I clearly understand what's wrong in my previous knowledge.\nThat is, after a relation is open with some lock, then the content of the\nrelation\nwill never change until the RelationClose.  It would take time to fill the\ngap, but I'd like to say \"thank you!\" first.\n\n-- \nBest Regards\nAndy Fan
", "msg_date": "Thu, 2 Dec 2021 11:58:41 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can I assume relation would not be invalid during from\n ExecutorRun to ExecutorEnd" }, { "msg_contents": "On Wed, Dec 1, 2021 at 10:58 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> Thanks! I clearly understand what's wrong in my previous knowledge.\n\nCool, glad it helped.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Dec 2021 10:44:02 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can I assume relation would not be invalid during from\n ExecutorRun to ExecutorEnd" } ]
[ { "msg_contents": "Hi hackers,\n\nThis time, I went through DROP tab completions\nand noticed some tab completions missing for the following commands:\n-DROP MATERIALIZED VIEW, DROP OWNED BY, DROP POLICY: missing \n[CASCADE|RESTRICT] at the end\n-DROP TRANSFORM: no completions after TRANSFORM\n\nI made a patch for this.\n\nBest wishes,\n\n-- \nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 29 Nov 2021 18:05:52 +0900", "msg_from": "Ken Kato <katouknl@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "[PATCH] DROP tab completion" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nThe patch applies cleanly and the functionality seems to work well. (master e7122548a3)\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Tue, 30 Nov 2021 15:17:07 +0000", "msg_from": "Asif Rehman <asifr.rehman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] DROP tab completion" }, { "msg_contents": "On Tue, Nov 30, 2021 at 03:17:07PM +0000, Asif Rehman wrote:\n> The patch applies cleanly and the functionality seems to work well. (master e7122548a3)\n> \n> The new status of this patch is: Ready for Committer\n\n+ else if (Matches(\"DROP\", \"MATERIALIZED\", \"VIEW\", MatchAny))\n+ COMPLETE_WITH(\"CASCADE\", \"RESTRICT\");\n[...]\n+ else if (Matches(\"DROP\", \"OWNED\", \"BY\", MatchAny))\n+ COMPLETE_WITH(\"CASCADE\", \"RESTRICT\");\nThis stuff is gathered around line 3284 in tab-complete.c as of HEAD\nat 538724f, but I think that you have done things right as there are\nalready sections for those commands and they have multiple keywords.\nSo, applied. 
Thanks!\n--\nMichael", "msg_date": "Wed, 1 Dec 2021 11:07:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] DROP tab completion" } ]
[ { "msg_contents": "Hi,\n\nOne of our customers had a very bad issue while trying to reassign objects\nfrom user A to user B. He had a lot of them, and the backend got very\nhungry for memory. It finally all went down when the linux kernel decided\nto kill the backend (-9 of course).\n\nI attach three shell scripts showing the issue. create.sh creates a\ndatabase and two users. Then it imports a million Large Objects in this new\ndatabase. There's no drop.sh as it is a simple \"dropdb foodb\".\n\nsession1_monitor.sh will start logging memory usage in the server log file\nevery second. So it needs v14, but our customer is in v11. While this\nscript runs, you can start session2_reindex.sh. This script will only run a\nreassign from one user to another.\n\nHere is what I get in the server log file:\n\n$ grep \"Grand total\" 14.log\nLOG: Grand total: 15560832 bytes...\nLOG: Grand total: 68710528 bytes...\nLOG: Grand total: 119976064 bytes..\nLOG: Grand total: 171626624 bytes...\nLOG: Grand total: 224211072 bytes...\nLOG: Grand total: 276615296 bytes...\nLOG: Grand total: 325611648 bytes...\nLOG: Grand total: 378196096 bytes...\nLOG: Grand total: 429838464 bytes...\nLOG: Grand total: 481104000 bytes...\n\nIOW, it's asking for at least 481MB to reassign 1 million empty LO. It\nstrikes me as odd.\n\nFWIW, the biggest memory context is this one:\n\nLOG: level: 2; PortalContext: 479963904 total in 58590 blocks; 2662328\nfree (32567 chunks); 477301576 used: <unnamed>\n\nMemory is released at the end of the reassignment. So it's definitely not\nleaked forever, but only during the operation, which looks like a missing\npfree (or something related). I've tried to find something like that in the\ncode somewhere, but to no avail. I'm pretty sure I missed something, which\nis the reason for this email :)\n\nThanks.\n\nRegards.\n\nPS : we've found a workaround to make it work for our customer (executing\nall the required ALTER LARGE OBJECT ... 
OWNER TO ...), but I'm still amazed\nby this weird behaviour.\n\n\n-- \nGuillaume.", "msg_date": "Mon, 29 Nov 2021 13:49:24 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Lots of memory allocated when reassigning Large Objects" }, { "msg_contents": "On Mon, Nov 29, 2021 at 01:49:24PM +0100, Guillaume Lelarge wrote:\n> One of our customers had a very bad issue while trying to reassign objects\n> from user A to user B. He had a lot of them, and the backend got very\n> hungry for memory. It finally all went down when the linux kernel decided\n> to kill the backend (-9 of course).\n\n> Memory is released at the end of the reassignment. So it's definitely not\n> leaked forever, but only during the operation, which looks like a missing\n> pfree (or something related). I've tried to find something like that in the\n> code somewhere, but to no avail. I'm pretty sure I missed something, which\n> is the reason for this email :)\n\nI reproduced the issue like this.\n\npsql postgres -c 'CREATE ROLE two WITH login superuser'\npsql postgres two -c \"SELECT lo_import('/dev/null') FROM generate_series(1,22111)\" >/dev/null\npsql postgres -c 'SET client_min_messages=debug; SET log_statement_stats=on;' -c 'begin; REASSIGN OWNED BY two TO pryzbyj; rollback;'\n\nI didn't find the root problem, but was able to avoid the issue by creating a\nnew mem context. I wonder if there are a bunch more issues like this.\n\nunpatched:\n! 33356 kB max resident size\n\npatched:\n! 
21352 kB max resident size\n\ndiff --git a/src/backend/catalog/pg_shdepend.c b/src/backend/catalog/pg_shdepend.c\nindex 9ea42f805f..cbe7f04983 100644\n--- a/src/backend/catalog/pg_shdepend.c\n+++ b/src/backend/catalog/pg_shdepend.c\n@@ -65,6 +65,7 @@\n #include \"storage/lmgr.h\"\n #include \"utils/acl.h\"\n #include \"utils/fmgroids.h\"\n+#include \"utils/memutils.h\"\n #include \"utils/syscache.h\"\n \n typedef enum\n@@ -1497,6 +1498,11 @@ shdepReassignOwned(List *roleids, Oid newrole)\n \t\twhile ((tuple = systable_getnext(scan)) != NULL)\n \t\t{\n \t\t\tForm_pg_shdepend sdepForm = (Form_pg_shdepend) GETSTRUCT(tuple);\n+\t\t\tMemoryContext cxt, oldcxt;\n+\n+\t\t\tcxt = AllocSetContextCreate(CurrentMemoryContext, \"shdepReassignOwned cxt\",\n+\t\t\t\t\tALLOCSET_DEFAULT_SIZES);\n+\t\t\toldcxt = MemoryContextSwitchTo(cxt);\n \n \t\t\t/*\n \t\t\t * We only operate on shared objects and objects in the current\n@@ -1598,8 +1604,12 @@ shdepReassignOwned(List *roleids, Oid newrole)\n \t\t\t\t\telog(ERROR, \"unexpected classid %u\", sdepForm->classid);\n \t\t\t\t\tbreak;\n \t\t\t}\n+\n \t\t\t/* Make sure the next iteration will see my changes */\n \t\t\tCommandCounterIncrement();\n+\n+\t\t\tMemoryContextSwitchTo(oldcxt);\n+\t\t\tMemoryContextDelete(cxt);\n \t\t}\n \n \t\tsystable_endscan(scan);\n\n\n", "msg_date": "Mon, 29 Nov 2021 09:15:35 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Lots of memory allocated when reassigning Large Objects" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I reproduced the issue like this.\n\n> psql postgres -c 'CREATE ROLE two WITH login superuser'\n> psql postgres two -c \"SELECT lo_import('/dev/null') FROM generate_series(1,22111)\" >/dev/null\n> psql postgres -c 'SET client_min_messages=debug; SET log_statement_stats=on;' -c 'begin; REASSIGN OWNED BY two TO pryzbyj; rollback;'\n\nConfirmed here, although I needed to use a lot more than 22K large objects\nto 
see a big leak.\n\n> I didn't find the root problem, but was able to avoid the issue by creating a\n> new mem context. I wonder if there are a bunch more issues like this.\n\nI poked into it with valgrind, and identified the major leak as being\nstuff that is allocated by ExecOpenIndices and not freed by\nExecCloseIndices. The latter knows it's leaking:\n\n\t/*\n\t * XXX should free indexInfo array here too? Currently we assume that\n\t * such stuff will be cleaned up automatically in FreeExecutorState.\n\t */\n\nOn the whole, I'd characterize this as DDL code using pieces of the\nexecutor without satisfying the executor's expectations as to environment\n--- specifically, that it'll be run in a memory context that doesn't\noutlive executor shutdown. Normally, any one DDL operation does a limited\nnumber of catalog updates so that small per-update leaks don't cost that\nmuch ... but REASSIGN OWNED creates a loop that can invoke ALTER OWNER\nmany times.\n\nI think your idea of creating a short-lived context is about right.\nAnother idea we could consider is to do that within CatalogTupleUpdate;\nbut I'm not sure that the cost/benefit ratio would be good for most\noperations. Anyway I think ALTER OWNER has other leaks outside the\nindex-update operations, so we'd still need to do this within\nREASSIGN OWNED's loop.\n\nDROP OWNED BY likely has similar issues.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Nov 2021 13:40:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Lots of memory allocated when reassigning Large Objects" }, { "msg_contents": "Le lun. 29 nov. 
2021 à 19:40, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > I reproduced the issue like this.\n>\n> > psql postgres -c 'CREATE ROLE two WITH login superuser'\n> > psql postgres two -c \"SELECT lo_import('/dev/null') FROM\n> generate_series(1,22111)\" >/dev/null\n> > psql postgres -c 'SET client_min_messages=debug; SET\n> log_statement_stats=on;' -c 'begin; REASSIGN OWNED BY two TO pryzbyj;\n> rollback;'\n>\n> Confirmed here, although I needed to use a lot more than 22K large objects\n> to see a big leak.\n>\n>\nSo do I.\n\n> I didn't find the root problem, but was able to avoid the issue by\n> creating a\n> > new mem context. I wonder if there are a bunch more issues like this.\n>\n> I poked into it with valgrind, and identified the major leak as being\n> stuff that is allocated by ExecOpenIndices and not freed by\n> ExecCloseIndices. The latter knows it's leaking:\n>\n> /*\n> * XXX should free indexInfo array here too? Currently we assume\n> that\n> * such stuff will be cleaned up automatically in\n> FreeExecutorState.\n> */\n>\n> On the whole, I'd characterize this as DDL code using pieces of the\n> executor without satisfying the executor's expectations as to environment\n> --- specifically, that it'll be run in a memory context that doesn't\n> outlive executor shutdown. Normally, any one DDL operation does a limited\n> number of catalog updates so that small per-update leaks don't cost that\n> much ... but REASSIGN OWNED creates a loop that can invoke ALTER OWNER\n> many times.\n>\n> I think your idea of creating a short-lived context is about right.\n> Another idea we could consider is to do that within CatalogTupleUpdate;\n> but I'm not sure that the cost/benefit ratio would be good for most\n> operations. 
Anyway I think ALTER OWNER has other leaks outside the\n> index-update operations, so we'd still need to do this within\n> REASSIGN OWNED's loop.\n>\n>\nI've tried Justin's patch but it didn't help with my memory allocation\nissue. FWIW, I attach the patch I used in v14.\n\nDROP OWNED BY likely has similar issues.\n>\n>\nDidn't try it, but it wouldn't be a surprise.\n\n\n-- \nGuillaume.", "msg_date": "Mon, 29 Nov 2021 20:04:06 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Lots of memory allocated when reassigning Large Objects" }, { "msg_contents": "Guillaume Lelarge <guillaume@lelarge.info> writes:\n> I've tried Justin's patch but it didn't help with my memory allocation\n> issue. FWIW, I attach the patch I used in v14.\n\n[ looks closer ... ] Ah, that patch is a bit buggy: it fails to do the\nright thing in the cases where the loop does a \"continue\". The attached\nrevision seems to behave properly.\n\nI still see a small leakage, which I think is due to accumulation of\npending sinval messages for the catalog updates. I'm curious whether\nthat's big enough to be a problem for Guillaume's use case. (We've\nspeculated before about bounding the memory used for pending sinval\nin favor of just issuing a cache reset when the list would be too\nbig. But nobody's done anything about it, suggesting that people\nseldom have a problem in practice.)\n\n>> DROP OWNED BY likely has similar issues.\n\n> Didn't try it, but it wouldn't be a surprise.\n\nI tried just changing the REASSIGN to a DROP in Justin's example,\nand immediately hit\n\nERROR: out of shared memory\nHINT: You might need to increase max_locks_per_transaction.\n\nthanks to the per-object locks we try to acquire. 
So I'm not\nsure that the DROP case can reach an interesting amount of\nlocal memory leaked before it runs out of lock-table space.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 29 Nov 2021 14:39:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Lots of memory allocated when reassigning Large Objects" }, { "msg_contents": "Le lun. 29 nov. 2021 à 20:39, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n> Guillaume Lelarge <guillaume@lelarge.info> writes:\n> > I've tried Justin's patch but it didn't help with my memory allocation\n> > issue. FWIW, I attach the patch I used in v14.\n>\n> [ looks closer ... ] Ah, that patch is a bit buggy: it fails to do the\n> right thing in the cases where the loop does a \"continue\". The attached\n> revision seems to behave properly.\n>\n> I still see a small leakage, which I think is due to accumulation of\n> pending sinval messages for the catalog updates. I'm curious whether\n> that's big enough to be a problem for Guillaume's use case. (We've\n> speculated before about bounding the memory used for pending sinval\n> in favor of just issuing a cache reset when the list would be too\n> big. But nobody's done anything about it, suggesting that people\n> seldom have a problem in practice.)\n>\n>\nI've tried your patch with my test case. It still uses a lot of memory.\nActually even more.\n\nI have this with the log_statement_stats:\n\n1185072 kB max resident size\n\nAnd I have this with the log-memory-contexts function:\n\nLOG: Grand total: 1007796352 bytes in 320 blocks; 3453512 free (627\nchunks); 1004342840 used\n\nContrary to Justin's patch, the shdepReassignOwned doesn't seem to be used.\nI don't get any shdepReassignOwned line in the log file. 
I tried multiple\ntimes to avoid any mistake on my part, but got same result.\n\n>> DROP OWNED BY likely has similar issues.\n>\n> > Didn't try it, but it wouldn't be a surprise.\n>\n> I tried just changing the REASSIGN to a DROP in Justin's example,\n> and immediately hit\n>\n> ERROR:  out of shared memory\n> HINT:  You might need to increase max_locks_per_transaction.\n>\n> thanks to the per-object locks we try to acquire.  So I'm not\n> sure that the DROP case can reach an interesting amount of\n> local memory leaked before it runs out of lock-table space.\n>\n>\nI've hit the same issue when I tried my ALTER LARGE OBJECT workaround in\none transaction.\n\n\n-- \nGuillaume.
", "msg_date": "Mon, 29 Nov 2021 20:58:17 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Lots of memory allocated when reassigning Large Objects" }, { "msg_contents": "Guillaume Lelarge <guillaume@lelarge.info> writes:\n> Le lun. 29 nov. 2021 à 20:39, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n>> [ looks closer ... ]  Ah, that patch is a bit buggy: it fails to do the\n>> right thing in the cases where the loop does a \"continue\".  The attached\n>> revision seems to behave properly.\n\n> I've tried your patch with my test case. It still uses a lot of memory.\n> Actually even more.\n\nHmm ... I tried it with your test case, and I see the backend completing\nthe query without going beyond 190MB used (which is mostly shared memory).\nWithout the patch it blows past that point very quickly indeed.\n\nI'm checking it in HEAD though; perhaps there's something else wrong\nin the back branches?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 29 Nov 2021 16:27:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Lots of memory allocated when reassigning Large Objects" }, { "msg_contents": "Le lun. 29 nov. 
2021 à 22:27, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n> Guillaume Lelarge <guillaume@lelarge.info> writes:\n> > Le lun. 29 nov. 2021 à 20:39, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n> >> [ looks closer ... ] Ah, that patch is a bit buggy: it fails to do the\n> >> right thing in the cases where the loop does a \"continue\". The attached\n> >> revision seems to behave properly.\n>\n> > I've tried your patch with my test case. It still uses a lot of memory.\n> > Actually even more.\n>\n> Hmm ... I tried it with your test case, and I see the backend completing\n> the query without going beyond 190MB used (which is mostly shared memory).\n> Without the patch it blows past that point very quickly indeed.\n>\n> I'm checking it in HEAD though; perhaps there's something else wrong\n> in the back branches?\n>\n>\nThat's also what I was thinking. I was only trying with v14. I just checked\nwith v15devel, and your patch works alright. So there must be something\nelse with back branches.\n\n\n-- \nGuillaume.", "msg_date": "Mon, 29 Nov 2021 22:33:49 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Lots of memory allocated when reassigning Large Objects" },
{ "msg_contents": "Guillaume Lelarge <guillaume@lelarge.info> writes:\n> Le lun. 29 nov. 2021 à 22:27, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n>> I'm checking it in HEAD though; perhaps there's something else wrong\n>> in the back branches?\n\n> That's also what I was thinking. I was only trying with v14. I just checked\n> with v15devel, and your patch works alright. So there must be something\n> else with back branches.\n\nThanks for confirming, I'll dig into it later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Nov 2021 16:47:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Lots of memory allocated when reassigning Large Objects" },
{ "msg_contents": "Le lun. 29 nov. 2021 à 22:47, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n> Guillaume Lelarge <guillaume@lelarge.info> writes:\n> > Le lun. 29 nov. 2021 à 22:27, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n> >> I'm checking it in HEAD though; perhaps there's something else wrong\n> >> in the back branches?\n>\n> > That's also what I was thinking. I was only trying with v14. I just\n> checked\n> > with v15devel, and your patch works alright. So there must be something\n> > else with back branches.\n>\n> Thanks for confirming, I'll dig into it later.\n>\n>\nThanks a lot.\n\n\n-- \nGuillaume.", "msg_date": "Mon, 29 Nov 2021 22:49:30 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Lots of memory allocated when reassigning Large Objects" },
{ "msg_contents": "Guillaume Lelarge <guillaume@lelarge.info> writes:\n> Le lun. 29 nov. 2021 à 22:27, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n>> I'm checking it in HEAD though; perhaps there's something else wrong\n>> in the back branches?\n\n> That's also what I was thinking. I was only trying with v14. I just checked\n> with v15devel, and your patch works alright. So there must be something\n> else with back branches.\n\nAFAICT the patch fixes what it intends to fix in v14 too.  The reason the\nresidual leak is worse in v14 is that the sinval message queue is bulkier.\nWe improved that in HEAD in commit 3aafc030a.  I'm not sure if I want to\ntake the risk of back-patching that, even now that it's aged a couple\nmonths in the tree. 
It is a pretty localized fix, but it makes some\nassumptions about usage patterns that might not hold up.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Nov 2021 18:25:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Lots of memory allocated when reassigning Large Objects" }, { "msg_contents": "On Mon, Nov 29, 2021 at 01:40:31PM -0500, Tom Lane wrote:\n> DROP OWNED BY likely has similar issues.\n\nI tried a few more commands but found no significant issue.\nIMO if you have 100k tables, then you can afford 1GB RAM.\n\nSELECT format('CREATE TABLE t%s()', a) FROM generate_series(1,9999)a\\gexec\nSET client_min_messages=debug; SET log_statement_stats=on;\n\nCREATE TABLESPACE tbsp LOCATION '/home/pryzbyj/tblspc';\nALTER TABLE ALL IN TABLESPACE pg_default SET TABLESPACE tbsp;\n-- 10k tables uses 78MB RAM, which seems good enough (64MB the 2nd time??)\n\nGRANT ALL ON ALL TABLES IN SCHEMA public TO current_user;\n-- 10k tables uses 50MB RAM, which seems good enough\nGRANT ALL ON ALL SEQUENCES IN SCHEMA public TO current_user\n-- 10k sequences uses 47MB RAM, which seems good enough\n\nSELECT format('CREATE FUNCTION f%s() RETURNS int RETURN 1;', a) FROM generate_series(1,9999)a;\nGRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO current_user;\n-- 10k functions uses 62MB RAM, which seems good enough\n\nAnd it looks like for ALTER PUBLICATION .. FOR ALL TABLES IN SCHEMA, the\nnamespace itself is stored, rather than enumerating all its tables.\n\n>> IOW, it's asking for at least 481MB to reassign 1 million empty LO. It\n>> strikes me as odd.\n\n@Guillaume: Even if memory use with the patch isn't constant, I imagine it's\nenough to have avoided OOM.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 29 Nov 2021 18:55:22 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Lots of memory allocated when reassigning Large Objects" }, { "msg_contents": "Le mar. 30 nov. 
2021 à 00:25, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n> Guillaume Lelarge <guillaume@lelarge.info> writes:\n> > Le lun. 29 nov. 2021 à 22:27, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n> >> I'm checking it in HEAD though; perhaps there's something else wrong\n> >> in the back branches?\n>\n> > That's also what I was thinking. I was only trying with v14. I just\n> checked\n> > with v15devel, and your patch works alright. So there must be something\n> > else with back branches.\n>\n> AFAICT the patch fixes what it intends to fix in v14 too. The reason the\n> residual leak is worse in v14 is that the sinval message queue is bulkier.\n> We improved that in HEAD in commit 3aafc030a.\n\n\nI wanted to make sure that commit 3aafc030a fixed this issue. So I did a\nfew tests:\n\nwithout 3aafc030a, without your latest patch\n 1182148 kB max resident size\nwith 3aafc030a, without your latest patch\n 1306812 kB max resident size\n\nwithout 3aafc030a, with your latest patch\n 1182128 kB max resident size\nwith 3aafc030a, with your latest patch\n 180996 kB max resident size\n\nDefinitely, 3aafc030a and your latest patch allow PostgreSQL to use much\nless memory. Going from 1GB to 180MB is awesome.\n\nI tried to cherry-pick 3aafc030a on v14, but it didn't apply cleanly, and\nthe work was a bit overwhelming for me, at least in the morning. I'll try\nagain today, but I don't have much hope.\n\nI'm not sure if I want to\n> take the risk of back-patching that, even now that it's aged a couple\n> months in the tree. It is a pretty localized fix, but it makes some\n> assumptions about usage patterns that might not hold up.\n>\n>\nI understand. Of course, it would be better if it could be fixed for each\nsupported version but I'm already happy that it could be fixed in the next\nrelease.\n\n\n-- \nGuillaume.", "msg_date": "Tue, 30 Nov 2021 09:23:44 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Lots of memory allocated when reassigning Large Objects" },
{ "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> @Guillaume: Even if memory use with the patch isn't constant, I imagine it's\n> enough to have avoided OOM.\n\nI think it's good enough in HEAD.  In the back branches, the sinval\nqueue growth is bad enough that there's still an issue.  Still,\nthis is a useful improvement, so I added some comments and pushed it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Dec 2021 13:47:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Lots of memory allocated when reassigning Large Objects" },
{ "msg_contents": "Le mer. 1 déc. 2021 à 19:48, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > @Guillaume: Even if memory use with the patch isn't constant, I imagine\n> it's\n> > enough to have avoided OOM.\n>\n> I think it's good enough in HEAD. In the back branches, the sinval\n> queue growth is bad enough that there's still an issue. Still,\n> this is a useful improvement, so I added some comments and pushed it.\n>\n>\nThanks.\n\n\n-- \nGuillaume.", "msg_date": "Wed, 1 Dec 2021 21:46:40 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Lots of memory allocated when reassigning Large Objects" } ]
[ { "msg_contents": "Hi, hackers!\n\nIt seems we have a problem in pg_statio_all_tables view definition.\nAccording to the documentation and identification fields, this view\nmust have exactly one row per table.\nThe view definition contains an x.indexrelid as the last field in its\nGROUP BY list:\n\n <...>\n GROUP BY c.oid, n.nspname, c.relname, t.oid, x.indexrelid\n\nWhich is the oid of a TOAST-index.\n\nHowever it is possible that the TOAST table will have more than one\nindex. For example, this happens when REINDEX CONCURRENTLY operation\nlefts an index in invalid state (indisvalid = false) due to some kind\nof a failure. It's often sufficient to interrupt REINDEX CONCURRENTLY\noperation right after start.\n\nSuch index will cause the second row to appear in a\npg_statio_all_tables view which obvious is unexpected behaviour.\n\nNow we can have several regular indexes and several TOAST-indexes for\nthe same table. Statistics for the regular and TOAST indexes is to be\ncalculated the same way so I've decided to use a CTE here.\n\nThe proposed view definition follows:\n\nCREATE VIEW pg_statio_all_tables AS\n WITH indstat AS (\n SELECT\n indrelid,\n sum(pg_stat_get_blocks_fetched(indexrelid) -\n pg_stat_get_blocks_hit(indexrelid))::bigint\n AS idx_blks_read,\n sum(pg_stat_get_blocks_hit(indexrelid))::bigint\n AS idx_blks_hit\n FROM\n pg_index\n GROUP BY indrelid\n )\n SELECT\n C.oid AS relid,\n N.nspname AS schemaname,\n C.relname AS relname,\n pg_stat_get_blocks_fetched(C.oid) -\n pg_stat_get_blocks_hit(C.oid) AS heap_blks_read,\n pg_stat_get_blocks_hit(C.oid) AS heap_blks_hit,\n I.idx_blks_read AS idx_blks_read,\n I.idx_blks_hit AS idx_blks_hit,\n pg_stat_get_blocks_fetched(T.oid) -\n pg_stat_get_blocks_hit(T.oid) AS toast_blks_read,\n pg_stat_get_blocks_hit(T.oid) AS toast_blks_hit,\n X.idx_blks_read AS tidx_blks_read,\n X.idx_blks_read AS tidx_blks_hit\n FROM pg_class C LEFT JOIN\n indstat I ON C.oid = I.indrelid LEFT JOIN\n pg_class T ON C.reltoastrelid = 
T.oid LEFT JOIN\n indstat X ON T.oid = X.indrelid\n LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)\n WHERE C.relkind IN ('r', 't', 'm');\n\nReported by Sergey Grinko.\n\nRegards.\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 29 Nov 2021 17:04:29 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "[PATCH] pg_statio_all_tables: several rows per table due to invalid\n TOAST index" }, { "msg_contents": "On Mon, Nov 29, 2021 at 05:04:29PM +0300, Andrei Zubkov wrote:\n> However it is possible that the TOAST table will have more than one\n> index. For example, this happens when REINDEX CONCURRENTLY operation\n> lefts an index in invalid state (indisvalid = false) due to some kind\n> of a failure. It's often sufficient to interrupt REINDEX CONCURRENTLY\n> operation right after start.\n> \n> Such index will cause the second row to appear in a\n> pg_statio_all_tables view which obvious is unexpected behaviour.\n\nIndeed. I can see that. \n\n> Now we can have several regular indexes and several TOAST-indexes for\n> the same table. Statistics for the regular and TOAST indexes is to be\n> calculated the same way so I've decided to use a CTE here.\n\nHmm. Why should we care about invalid indexes at all, including\npg_statio_all_indexes?\n--\nMichael", "msg_date": "Tue, 30 Nov 2021 17:29:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_statio_all_tables: several rows per table due to\n invalid TOAST index" }, { "msg_contents": "Hi, Michael\n\nThank you for your attention!\n\nOn Tue, 2021-11-30 at 17:29 +0900, Michael Paquier wrote:\n> Hmm.  Why should we care about invalid indexes at all, including\n> pg_statio_all_indexes?\n> \n\nI think we should care about them at least because they are exists and\ncan consume resources. 
For example, invalid index is to be updated by\nDML operations.\nOf course we can exclude such indexes from a view using isvalid,\nisready, islive fields. But in such case we should mention this in the\ndocs, and more important is that the new such states of indexes can\nappear in the future causing change in a view definition. Counting all\nindexes regardless of states seems more reasonable to me.\n\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n", "msg_date": "Tue, 30 Nov 2021 11:57:25 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_statio_all_tables: several rows per table due to\n invalid TOAST index" }, { "msg_contents": "It seems we need to bump catalog version here.\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n", "msg_date": "Thu, 16 Dec 2021 12:23:59 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_statio_all_tables: several rows per table due to\n invalid TOAST index" }, { "msg_contents": "Andrei Zubkov <zubkov@moonset.ru> writes:\n> On Tue, 2021-11-30 at 17:29 +0900, Michael Paquier wrote:\n>> Hmm.  Why should we care about invalid indexes at all, including\n>> pg_statio_all_indexes?\n\n> I think we should care about them at least because they are exists and\n> can consume resources. For example, invalid index is to be updated by\n> DML operations.\n> Of course we can exclude such indexes from a view using isvalid,\n> isready, islive fields. But in such case we should mention this in the\n> docs, and more important is that the new such states of indexes can\n> appear in the future causing change in a view definition. Counting all\n> indexes regardless of states seems more reasonable to me.\n\nYeah, I agree, especially since we do it like that for the table's\nown indexes. 
I have a couple of comments though:\n\n1. There's a silly typo in the view definition (it outputs tidx_blks_read\ntwice). Fixed in the attached v2.\n\n2. Historically, if you put any constraints on the view output, like\n\tselect * from pg_statio_all_tables where relname like 'foo%';\nyou'd get a commensurate reduction in the amount of work done. With\nthis version, you don't: the CTE will get computed in its entirety\neven if we just need one row of its result. This seems pretty bad,\nespecially for installations with many tables --- I suspect many\nusers would think this cure is worse than the disease.\n\nI'm not quite sure what to do about #2. I thought of just removing\nX.indexrelid from the GROUP BY clause and summing over the toast\nindex(es) as we do for the table's index(es). But that doesn't\nwork: if there are N > 1 table indexes, then the counts for\nthe toast index(es) will be multiplied by N, and conversely if\nthere are multiple toast indexes then the counts for the table\nindexes will be scaled up. We need to sum separately over the\ntable indexes and toast indexes, and I don't immediately see how\nto do that without creating an optimization fence.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 20 Mar 2022 18:08:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_statio_all_tables: several rows per table due to\n invalid TOAST index" }, { "msg_contents": "I wrote:\n> ... We need to sum separately over the\n> table indexes and toast indexes, and I don't immediately see how\n> to do that without creating an optimization fence.\n\nAfter a bit of further fooling, I found that we could make that\nwork with LEFT JOIN LATERAL. This formulation has a different\nproblem, which is that if you do want most or all of the output,\ncomputing each sub-aggregation separately is probably less\nefficient than it could be. 
But this is probably the better way\nto go unless someone has an even better idea.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 20 Mar 2022 18:33:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_statio_all_tables: several rows per table due to\n invalid TOAST index" }, { "msg_contents": "I wrote:\n> After a bit of further fooling, I found that we could make that\n> work with LEFT JOIN LATERAL. This formulation has a different\n> problem, which is that if you do want most or all of the output,\n> computing each sub-aggregation separately is probably less\n> efficient than it could be. But this is probably the better way\n> to go unless someone has an even better idea.\n\nHearing no better ideas, pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Mar 2022 16:34:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_statio_all_tables: several rows per table due to\n invalid TOAST index" }, { "msg_contents": "Hi Tom!\nOn Thu, 2022-03-24 at 16:34 -0400, Tom Lane wrote:\n> I wrote:\n> > After a bit of further fooling, I found that we could make that\n> > work with LEFT JOIN LATERAL.  This formulation has a different\n> > problem, which is that if you do want most or all of the output,\n> > computing each sub-aggregation separately is probably less\n> > efficient than it could be.  But this is probably the better way\n> > to go unless someone has an even better idea.\n> \n> Hearing no better ideas, pushed.\n> \n>                         regards, tom lane\n\nThank you for your attention and for the problem resolution. However\nI'm worry a little about possible performance issues related to\nmonitoring solutions performing regular sampling of statistic views to\nfind out the most intensive objects in a database. They obviously will\nquery all rows from statistic views and their impact will only depend\non the sampling frequency. 
Does it seem reasonable to avoid using\npg_statio_all_tables view by such monitoring tools? But it seems that\nthe only way to do so is using statistic functions directly in a\nsampling query. Does that seem reliable? Maybe we should think about a\nlittle bit different statio view for that? For example, a plain view\nfor all tables (regular and TOASTs)...\n-- \nRegards, Andrei Zubkov\n\n\n\n\n", "msg_date": "Fri, 25 Mar 2022 11:28:06 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_statio_all_tables: several rows per table due to\n invalid TOAST index" }, { "msg_contents": "Andrei Zubkov <zubkov@moonset.ru> writes:\n> Thank you for your attention and for the problem resolution. However\n> I'm worry a little about possible performance issues related to\n> monitoring solutions performing regular sampling of statistic views to\n> find out the most intensive objects in a database.\n\nThere's no actual evidence that the new formulation is meaningfully\nworse for such cases.  Sure, it's probably *somewhat* less efficient,\nbut that could easily be swamped by pgstat or other costs.  I wouldn't\ncare to worry about this unless some evidence is presented that we've\ncreated a big problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Mar 2022 09:28:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_statio_all_tables: several rows per table due to\n invalid TOAST index" } ]
[ { "msg_contents": "Hi hackers,\r\n\r\nCurrently, if you attempt to use CREATE EXTENSION for an extension\r\nthat is not installed, you'll see something like the following:\r\n\r\n postgres=# CREATE EXTENSION does_not_exist;\r\n ERROR: could not open extension control file \"/usr/local/pgsql/share/extension/does_not_exist.control\": No such file or directory\r\n\r\nI suspect this ERROR message is confusing for novice users, so perhaps\r\nwe should add a HINT. With the attached patch, you'd see the\r\nfollowing:\r\n\r\n postgres=# CREATE EXTENSION does_not_exist;\r\n ERROR: could not open extension control file \"/usr/local/pgsql/share/extension/does_not_exist.control\": No such file or directory\r\n HINT: This typically indicates that the specified extension is not installed on the system.\r\n\r\nThoughts?\r\n\r\nNathan", "msg_date": "Mon, 29 Nov 2021 19:54:56 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "improve CREATE EXTENSION error message" }, { "msg_contents": "> On 29 Nov 2021, at 20:54, Bossart, Nathan <bossartn@amazon.com> wrote:\n> \n> Hi hackers,\n> \n> Currently, if you attempt to use CREATE EXTENSION for an extension\n> that is not installed, you'll see something like the following:\n> \n> postgres=# CREATE EXTENSION does_not_exist;\n> ERROR: could not open extension control file \"/usr/local/pgsql/share/extension/does_not_exist.control\": No such file or directory\n> \n> I suspect this ERROR message is confusing for novice users, so perhaps\n> we should add a HINT. 
With the attached patch, you'd see the\n> following:\n> \n> postgres=# CREATE EXTENSION does_not_exist;\n> ERROR: could not open extension control file \"/usr/local/pgsql/share/extension/does_not_exist.control\": No such file or directory\n> HINT: This typically indicates that the specified extension is not installed on the system.\n> \n> Thoughts?\n\nI haven't given the suggested wording too much thought, but in general that\nsounds like a good idea.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 29 Nov 2021 21:32:20 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> Currently, if you attempt to use CREATE EXTENSION for an extension\n> that is not installed, you'll see something like the following:\n\n> postgres=# CREATE EXTENSION does_not_exist;\n> ERROR: could not open extension control file \"/usr/local/pgsql/share/extension/does_not_exist.control\": No such file or directory\n\n> I suspect this ERROR message is confusing for novice users, so perhaps\n> we should add a HINT.\n\nIf we issue the hint only for errno == ENOENT, I think we could be\nless wishy-washy (and if it's not ENOENT, the hint is likely\ninappropriate anyway). I'm thinking something more like\n\nHINT: This means the extension is not installed on the system.\n\nI'm not quite satisfied with the \"on the system\" wording, but I'm\nnot sure of what would be better. 
I agree that we can't just say\n\"is not installed\", because people will confuse that with whether\nit is installed within the database.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Nov 2021 16:02:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "On 11/29/21, 12:33 PM, \"Daniel Gustafsson\" <daniel@yesql.se> wrote:\r\n> I haven't given the suggested wording too much thought, but in general that\r\n> sounds like a good idea.\r\n\r\nThanks. I'm flexible with the wording.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 29 Nov 2021 21:27:46 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "> On 29 Nov 2021, at 22:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> \"Bossart, Nathan\" <bossartn@amazon.com> writes:\n>> Currently, if you attempt to use CREATE EXTENSION for an extension\n>> that is not installed, you'll see something like the following:\n> \n>> postgres=# CREATE EXTENSION does_not_exist;\n>> ERROR: could not open extension control file \"/usr/local/pgsql/share/extension/does_not_exist.control\": No such file or directory\n> \n>> I suspect this ERROR message is confusing for novice users, so perhaps\n>> we should add a HINT.\n> \n> If we issue the hint only for errno == ENOENT, I think we could be\n> less wishy-washy (and if it's not ENOENT, the hint is likely\n> inappropriate anyway). I'm thinking something more like\n> \n> HINT: This means the extension is not installed on the system.\n> \n> I'm not quite satisfied with the \"on the system\" wording, but I'm\n> not sure of what would be better. 
I agree that we can't just say\n> \"is not installed\", because people will confuse that with whether\n> it is installed within the database.\n\nThat's a good point, the hint is targeting users who might not even know that\nan extension needs to be physically and separately installed on the machine\nbefore it can be installed in their database; so maybe using \"installed\" here\nisn't entirely helpful at all. That being said I'm at a loss for a more\nsuitable word, \"available\" perhaps?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 29 Nov 2021 22:31:13 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "On 11/29/21, 1:03 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> If we issue the hint only for errno == ENOENT, I think we could be\r\n> less wishy-washy (and if it's not ENOENT, the hint is likely\r\n> inappropriate anyway). I'm thinking something more like\r\n>\r\n> HINT: This means the extension is not installed on the system.\r\n\r\nGood idea.\r\n\r\n> I'm not quite satisfied with the \"on the system\" wording, but I'm\r\n> not sure of what would be better. I agree that we can't just say\r\n> \"is not installed\", because people will confuse that with whether\r\n> it is installed within the database.\r\n\r\nRight. 
The only other idea I have at the moment is to say something\r\nlike\r\n\r\n This means the extension is not available[ on the system].\r\n\r\nI don't know whether that is actually any less confusing, though.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 29 Nov 2021 21:33:21 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "On 11/29/21 16:31, Daniel Gustafsson wrote:\n> That's a good point, the hint is targeting users who might not even know that\n> an extension needs to be physically and separately installed on the machine\n> before it can be installed in their database; so maybe using \"installed\" here\n> isn't entirely helpful at all. That being said I'm at a loss for a more\n> suitable word, \"available\" perhaps?\n\nMaybe a larger break with the \"This means the extension something something\"\nformulation, and more on the lines of\n\nHINT: an extension must first be present (for example, installed with a\n package manager) on the system where PostgreSQL is running.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 29 Nov 2021 16:37:30 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "On 11/29/21, 1:32 PM, \"Daniel Gustafsson\" <daniel@yesql.se> wrote:\r\n>> On 29 Nov 2021, at 22:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>> I'm not quite satisfied with the \"on the system\" wording, but I'm\r\n>> not sure of what would be better. 
I agree that we can't just say\r\n>> \"is not installed\", because people will confuse that with whether\r\n>> it is installed within the database.\r\n>\r\n> That's a good point, the hint is targeting users who might not even know that\r\n> an extension needs to be physically and separately installed on the machine\r\n> before it can be installed in their database; so maybe using \"installed\" here\r\n> isn't entirely helpful at all. That being said I'm at a loss for a more\r\n> suitable word, \"available\" perhaps?\r\n\r\nI was just thinking the same thing. I used \"available\" in v2, which\r\nis attached.\r\n\r\nNathan", "msg_date": "Mon, 29 Nov 2021 21:39:14 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "On 11/29/21, 1:38 PM, \"Chapman Flack\" <chap@anastigmatix.net> wrote:\r\n> On 11/29/21 16:31, Daniel Gustafsson wrote:\r\n>> That's a good point, the hint is targeting users who might not even know that\r\n>> an extension needs to be physically and separately installed on the machine\r\n>> before it can be installed in their database; so maybe using \"installed\" here\r\n>> isn't entirely helpful at all. That being said I'm at a loss for a more\r\n>> suitable word, \"available\" perhaps?\r\n>\r\n> Maybe a larger break with the \"This means the extension something something\"\r\n> formulation, and more on the lines of\r\n>\r\n> HINT: an extension must first be present (for example, installed with a\r\n> package manager) on the system where PostgreSQL is running.\r\n\r\nI like this idea. 
I can do it this way in the next revision if others\r\nagree.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 29 Nov 2021 21:47:09 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "> On 29 Nov 2021, at 22:47, Bossart, Nathan <bossartn@amazon.com> wrote:\n> \n> On 11/29/21, 1:38 PM, \"Chapman Flack\" <chap@anastigmatix.net> wrote:\n>> On 11/29/21 16:31, Daniel Gustafsson wrote:\n>>> That's a good point, the hint is targeting users who might not even know that\n>>> an extension needs to be physically and separately installed on the machine\n>>> before it can be installed in their database; so maybe using \"installed\" here\n>>> isn't entirely helpful at all. That being said I'm at a loss for a more\n>>> suitable word, \"available\" perhaps?\n>> \n>> Maybe a larger break with the \"This means the extension something something\"\n>> formulation, and more on the lines of\n>> \n>> HINT: an extension must first be present (for example, installed with a\n>> package manager) on the system where PostgreSQL is running.\n> \n> I like this idea. 
I can do it this way in the next revision if others\n> agree.\n\nI think taking it in this direction has merits.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 29 Nov 2021 22:49:16 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 29 Nov 2021, at 22:47, Bossart, Nathan <bossartn@amazon.com> wrote:\n>> On 11/29/21, 1:38 PM, \"Chapman Flack\" <chap@anastigmatix.net> wrote:\n>>> Maybe a larger break with the \"This means the extension something something\"\n>>> formulation, and more on the lines of\n>>> HINT: an extension must first be present (for example, installed with a\n>>> package manager) on the system where PostgreSQL is running.\n\n>> I like this idea. I can do it this way in the next revision if others\n>> agree.\n\n> I think taking it in this direction has merits.\n\nI think \"The extension must ...\" would read better, otherwise +1.\n\nI don't especially like intertwining the hint choice with the existing\nspecial case for per-version files. Our usual style for conditional\nhints can be found in places like sysv_shmem.c, and following that\nwould lead to a patch roughly like\n\n if ((file = AllocateFile(filename, \"r\")) == NULL)\n {\n+ int ext_errno = errno;\n+\n if (version && errno == ENOENT)\n {\n /* no auxiliary file for this version */\n pfree(filename);\n return;\n }\n ereport(ERROR,\n (errcode_for_file_access(),\n errmsg(\"could not open extension control file \\\"%s\\\": %m\",\n- filename)));\n+ filename),\n+ (ext_errno == ENOENT) ? 
errhint(\"...\") : 0));\n }\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Nov 2021 17:03:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "On 11/29/21, 2:04 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> I think \"The extension must ...\" would read better, otherwise +1.\r\n>\r\n> I don't especially like intertwining the hint choice with the existing\r\n> special case for per-version files. Our usual style for conditional\r\n> hints can be found in places like sysv_shmem.c, and following that\r\n> would lead to a patch roughly like\r\n\r\nAlright, here's v3. In this version, I actually removed the message\r\nabout the control file entirely, so now the error message looks like\r\nthis:\r\n\r\n postgres=# CREATE EXTENSION does_not_exist;\r\n ERROR: extension \"does_not_exist\" is not available\r\n DETAIL: The extension must first be installed on the system where PostgreSQL is running.\r\n HINT: The pg_available_extensions view lists the extensions that are available for installation.\r\n\r\nI can add the control file part back if we think it's necessary.\r\n\r\nNathan", "msg_date": "Mon, 29 Nov 2021 22:13:02 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "On 11/29/21, 2:13 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> Alright, here's v3. 
In this version, I actually removed the message\r\n> about the control file entirely, so now the error message looks like\r\n> this:\r\n>\r\n> postgres=# CREATE EXTENSION does_not_exist;\r\n> ERROR: extension \"does_not_exist\" is not available\r\n> DETAIL: The extension must first be installed on the system where PostgreSQL is running.\r\n> HINT: The pg_available_extensions view lists the extensions that are available for installation.\r\n>\r\n> I can add the control file part back if we think it's necessary.\r\n\r\nHm. I should probably adjust the hint to avoid confusion from\r\n\"installed on the system\" and \"available for installation.\" Maybe\r\nsomething like\r\n\r\n The pg_available_extensions view lists the available extensions.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 29 Nov 2021 22:18:48 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "On 11/29/21 17:03, Tom Lane wrote:\n> I think \"The extension must ...\" would read better, otherwise +1.\n\nI'm not strongly invested either way, but I'll see if I can get at why\nI used 'an' ...\n\nHints are hinty. We can give them, and they're helpful, because there\nare certain situations that we know are very likely to be what's behind\ncertain errors. ENOENT on the control file? Yeah, probably means the\nextension needs to be installed. In somebody's specific case, though,\nit could mean most of the extension is there but the other sysadmin\novernight fat-fingered an rm command and has been spending the morning\nidly wondering why the file he /meant/ to remove is still there. Or a bit\nflipped in an inode and a directory became a file. (That happened to me on\na production system once; the directory was /usr. That'll mess stuff up.)\n\nSo, in my view, a hint doesn't need to sound omniscient, or as if it\nsomehow knows precisely what happened in your case. It's enough (maybe\nbetter, even?) 
if a hint reads like a hint, a general statement that\nyou may ponder for a moment and then think \"yeah, that sounds like it's\nprobably what I needed to know.\"\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 29 Nov 2021 17:24:12 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "On 11/29/21 17:13, Bossart, Nathan wrote:\n> postgres=# CREATE EXTENSION does_not_exist;\n> ERROR: extension \"does_not_exist\" is not available\n> DETAIL: The extension must first be installed on the system where PostgreSQL is running.\n> HINT: The pg_available_extensions view lists the extensions that are available for installation.\n\nMessages crossed ...\n\nIf it were me, I would combine that DETAIL and HINT as one larger HINT,\nand use DETAIL for specific details about what actually happened (such\nas the exact filename sought and the %m).\n\nThe need for those details doesn't go away; they're still what you need\nwhen what went wrong is some other freak occurrence the hint doesn't\nexplain.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 29 Nov 2021 17:31:04 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> Alright, here's v3. 
In this version, I actually removed the message\n> about the control file entirely, so now the error message looks like\n> this:\n\n> postgres=# CREATE EXTENSION does_not_exist;\n> ERROR: extension \"does_not_exist\" is not available\n> DETAIL: The extension must first be installed on the system where PostgreSQL is running.\n> HINT: The pg_available_extensions view lists the extensions that are available for installation.\n\nI don't think that HINT is useful at all, and I agree with Chapman\nthat we should still show the filename we tried to look up,\njust in case there's a path problem or the like.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Nov 2021 17:42:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "On 11/29/21, 2:32 PM, \"Chapman Flack\" <chap@anastigmatix.net> wrote:\r\n> If it were me, I would combine that DETAIL and HINT as one larger HINT,\r\n> and use DETAIL for specific details about what actually happened (such\r\n> as the exact filename sought and the %m).\r\n>\r\n> The need for those details doesn't go away; they're still what you need\r\n> when what went wrong is some other freak occurrence the hint doesn't\r\n> explain.\r\n\r\nHow's this?\r\n\r\n postgres=# CREATE EXTENSION does_not_exist;\r\n ERROR: extension \"does_not_exist\" is not available\r\n DETAIL: Extension control file \"/usr/local/pgsql/share/extension/does_not_exist.control\" does not exist.\r\n HINT: The extension must first be installed on the system where PostgreSQL is running. 
The pg_available_extensions view lists the available extensions.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 29 Nov 2021 22:43:00 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "On 11/29/21, 2:43 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> \"Bossart, Nathan\" <bossartn@amazon.com> writes:\r\n>> postgres=# CREATE EXTENSION does_not_exist;\r\n>> ERROR: extension \"does_not_exist\" is not available\r\n>> DETAIL: The extension must first be installed on the system where PostgreSQL is running.\r\n>> HINT: The pg_available_extensions view lists the extensions that are available for installation.\r\n>\r\n> I don't think that HINT is useful at all, and I agree with Chapman\r\n> that we should still show the filename we tried to look up,\r\n> just in case there's a path problem or the like.\r\n\r\nOkay, I removed the part about pg_available_extensions and now the\r\nmessage looks like this:\r\n\r\n postgres=# CREATE EXTENSION does_not_exist;\r\n ERROR: extension \"does_not_exist\" is not available\r\n DETAIL: Extension control file \"/usr/local/pgsql/share/extension/does_not_exist.control\" does not exist.\r\n HINT: The extension must first be installed on the system where PostgreSQL is running.\r\n\r\nNathan", "msg_date": "Mon, 29 Nov 2021 22:54:50 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "On 11/29/21 17:54, Bossart, Nathan wrote:\n\n> postgres=# CREATE EXTENSION does_not_exist;\n> ERROR: extension \"does_not_exist\" is not available\n> DETAIL: Extension control file \"/usr/local/pgsql/share/extension/does_not_exist.control\" does not exist.\n> HINT: The extension must first be installed on the system where PostgreSQL is running.\n\nThat looks like the direction I would have gone with it.\n\nI wonder, though, is it better to 
write \"does not exist.\" in the message,\nor to use %m and get the exact message from the OS (which presumably would\nbe \"No such file or directory\" on Unix, and whatever Windows says for such\nthings on Windows).\n\nMy leaning is generally to use %m and therefore the exact OS message\nin the detail, but I don't claim to speak for the project style on that.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 29 Nov 2021 18:46:09 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "On 11/29/21, 3:47 PM, \"Chapman Flack\" <chap@anastigmatix.net> wrote:\r\n> My leaning is generally to use %m and therefore the exact OS message\r\n> in the detail, but I don't claim to speak for the project style on that.\r\n\r\nOkay, the message looks like this in v5:\r\n\r\n postgres=# CREATE EXTENSION does_not_exist;\r\n ERROR: extension \"does_not_exist\" is not available\r\n DETAIL: Could not open extension control file \"/usr/local/pgsql/share/extension/does_not_exist.control\": No such file or directory.\r\n HINT: The extension must first be installed on the system where PostgreSQL is running.\r\n\r\nNathan", "msg_date": "Tue, 30 Nov 2021 00:12:31 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> Okay, the message looks like this in v5:\n\n> postgres=# CREATE EXTENSION does_not_exist;\n> ERROR: extension \"does_not_exist\" is not available\n> DETAIL: Could not open extension control file \"/usr/local/pgsql/share/extension/does_not_exist.control\": No such file or directory.\n> HINT: The extension must first be installed on the system where PostgreSQL is running.\n\nNobody complained about that wording, so pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 Jan 2022 14:22:38 -0500", "msg_from": 
"Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: improve CREATE EXTENSION error message" }, { "msg_contents": "On 1/11/22, 11:23 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> \"Bossart, Nathan\" <bossartn@amazon.com> writes:\r\n>> Okay, the message looks like this in v5:\r\n>\r\n>> postgres=# CREATE EXTENSION does_not_exist;\r\n>> ERROR: extension \"does_not_exist\" is not available\r\n>> DETAIL: Could not open extension control file \"/usr/local/pgsql/share/extension/does_not_exist.control\": No such file or directory.\r\n>> HINT: The extension must first be installed on the system where PostgreSQL is running.\r\n>\r\n> Nobody complained about that wording, so pushed.\r\n\r\nThanks!\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 11 Jan 2022 19:28:42 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: improve CREATE EXTENSION error message" } ]
[ { "msg_contents": "Hi,\n\nThe attached patch updates the code comment which is no longer true\nafter commit # 4a92a1c3d1c361ffb031ed05bf65b801241d7cdd\n\n--\nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 30 Nov 2021 12:30:41 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Update stale code comment in CheckpointerMain()" }, { "msg_contents": "> On 30 Nov 2021, at 08:00, Amul Sul <sulamul@gmail.com> wrote:\n\n> The attached patch updates the code comment which is no longer true\n> after commit # 4a92a1c3d1c361ffb031ed05bf65b801241d7cdd\n\nAgreed, but looking at this shouldn't we also tweak the comment on\nRecoveryInProgress() as per the attached v2 diff?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 30 Nov 2021 10:39:27 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Update stale code comment in CheckpointerMain()" }, { "msg_contents": "On Tue, Nov 30, 2021 at 3:09 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 30 Nov 2021, at 08:00, Amul Sul <sulamul@gmail.com> wrote:\n>\n> > The attached patch updates the code comment which is no longer true\n> > after commit # 4a92a1c3d1c361ffb031ed05bf65b801241d7cdd\n>\n> Agreed, but looking at this shouldn't we also tweak the comment on\n> RecoveryInProgress() as per the attached v2 diff?\n>\n\nYes, we should -- diff looks good to me, thanks.\n\nRegards,\nAmul\n\n\n", "msg_date": "Wed, 1 Dec 2021 11:49:05 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Update stale code comment in CheckpointerMain()" }, { "msg_contents": "> On 1 Dec 2021, at 07:19, Amul Sul <sulamul@gmail.com> wrote:\n> \n> On Tue, Nov 30, 2021 at 3:09 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 30 Nov 2021, at 08:00, Amul Sul <sulamul@gmail.com> wrote:\n>> \n>>> The attached patch updates the code comment which is no longer true\n>>> after commit # 
4a92a1c3d1c361ffb031ed05bf65b801241d7cdd\n>> \n>> Agreed, but looking at this shouldn't we also tweak the comment on\n>> RecoveryInProgress() as per the attached v2 diff?\n> \n> Yes, we should -- diff looks good to me, thanks.\n\nThanks for confirming, I've applied this to master.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 1 Dec 2021 14:24:26 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Update stale code comment in CheckpointerMain()" }, { "msg_contents": "On Wed, Dec 1, 2021 at 8:24 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >>> The attached patch updates the code comment which is no longer true\n> >>> after commit # 4a92a1c3d1c361ffb031ed05bf65b801241d7cdd\n> >> Agreed, but looking at this shouldn't we also tweak the comment on\n> >> RecoveryInProgress() as per the attached v2 diff?\n> > Yes, we should -- diff looks good to me, thanks.\n> Thanks for confirming, I've applied this to master.\n\nThanks both of you.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Dec 2021 10:44:13 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Update stale code comment in CheckpointerMain()" } ]
[ { "msg_contents": "Hi Team,\r\n\r\n\r\n\r\nI am facing connectivity issue with PostgreSQL -13 , can you please suggest .\r\n\r\n\r\n\r\nError coming:\r\n\r\n\r\n\r\n\"PHP Warning: pg_connect(): Unable to connect to PostgreSQL server: authentication method 10 not supported in\r\n\r\n/var/www/html/myLavaUat/app/webroot/myLavaCronDirect/cron/mylava_stg_arch.php on line 9 \"\r\n\r\n\r\n\r\n\r\n\r\nDB \"pg_hba.conf\" Conf file setting is as below (172.16.40.32 is my application IP address).\r\n\r\n\r\n\r\n\r\n\r\n# TYPE DATABASE USER ADDRESS METHOD\r\n\r\n\r\n\r\n# \"local\" is for Unix domain socket connections only\r\n\r\nlocal all all peer\r\n\r\n#local all postgres peer\r\n\r\nlocal all postgres ident\r\n\r\n# IPv4 local connections:\r\n\r\nhost all all 127.0.0.1/32 md5\r\n\r\n# IPv6 local connections:\r\n\r\nhost all all ::1/128 md5\r\n\r\n# Allow replication connections from localhost, by a user with the\r\n\r\n# replication privilege.\r\n\r\nlocal replication all peer\r\n\r\nhost replication all 127.0.0.1/32 md5\r\n\r\nhost replication all ::1/128 md5\r\n\r\nhost replication replication 192.168.1.133/32 md5\r\n\r\nhost replication postgres 192.168.1.133/32 trust\r\n\r\nhost replication replication 192.168.1.138/32 md5\r\n\r\nhost replication postgres 192.168.1.138/32 trust\r\n\r\nhost replication replication 172.16.40.30/32 md5\r\n\r\nhost replication postgres 172.16.40.30/32 trust\r\n\r\nhost replication postgres 127.0.0.1/32 trust\r\n\r\nhost all all 0.0.0.0/0 md5\r\n\r\nhost replication replication 172.16.40.32/32 md5\r\n\r\nhost replication postgres 172.16.40.32/32 trust\r\n\r\n\r\n\r\n\r\n\r\nRegards,\r\n\r\nRam Pratap.\r\n", "msg_date": "Tue, 30 Nov 2021 09:44:29 +0000", "msg_from": "Ram Pratap Maurya <ram.maurya@lavainternational.in>", "msg_from_op": true, "msg_subject": "PostgreSQL server: authentication method 10 not supported" }, { "msg_contents": "Ram Pratap Maurya <ram.maurya@lavainternational.in> writes:\n> \"PHP Warning: pg_connect(): 
Unable to connect to PostgreSQL server: authentication method 10 not supported in\n> /var/www/html/myLavaUat/app/webroot/myLavaCronDirect/cron/mylava_stg_arch.php on line 9 \"\n\nYou need to update your client's libpq to a version that knows about\nSCRAM authentication, or else not use a SCRAM-encrypted password.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Dec 2021 13:55:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL server: authentication method 10 not supported" } ]
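Tom Lane's diagnosis above — a pre-v10 libpq that cannot speak SCRAM ("authentication method 10") talking to a server whose stored password was hashed with SCRAM-SHA-256 — can be worked around on the server side when upgrading the client library is not an option. A hedged sketch of that fallback (the role name `app_user` is a placeholder, not from the thread); note that a `md5` line in pg_hba.conf is not enough by itself, because a SCRAM-hashed stored password still forces a SCRAM exchange:

```sql
-- Check how the stored password is hashed: the prefix is either
-- 'md5' or 'SCRAM-SHA-256$'.
SELECT rolname, substr(rolpassword, 1, 14) AS hash_prefix
FROM pg_authid WHERE rolname = 'app_user';

-- Fall back to md5 hashing for newly set passwords, then re-set the password
-- so the stored hash is actually md5:
ALTER SYSTEM SET password_encryption = 'md5';
SELECT pg_reload_conf();
ALTER ROLE app_user PASSWORD 'new-password';
```

The preferred fix remains the first one Tom mentions: link PHP against a libpq new enough (v10 or later) to support SCRAM; md5 is the legacy fallback.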
[ { "msg_contents": "Ignore BRIN indexes when checking for HOT udpates\n\nWhen determining whether an index update may be skipped by using HOT, we\ncan ignore attributes indexed only by BRIN indexes. There are no index\npointers to individual tuples in BRIN, and the page range summary will\nbe updated anyway as it relies on visibility info.\n\nThis also removes rd_indexattr list, and replaces it with rd_attrsvalid\nflag. The list was not used anywhere, and a simple flag is sufficient.\n\nPatch by Josef Simanek, various fixes and improvements by me.\n\nAuthor: Josef Simanek\nReviewed-by: Tomas Vondra, Alvaro Herrera\nDiscussion: https://postgr.es/m/CAFp7QwpMRGcDAQumN7onN9HjrJ3u4X3ZRXdGFT0K5G2JWvnbWg%40mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/5753d4ee320b3f6fb2ff734667a1ce1d9d8615a1\n\nModified Files\n--------------\ndoc/src/sgml/indexam.sgml | 11 +++\nsrc/backend/access/brin/brin.c | 1 +\nsrc/backend/access/gin/ginutil.c | 1 +\nsrc/backend/access/gist/gist.c | 1 +\nsrc/backend/access/hash/hash.c | 1 +\nsrc/backend/access/heap/heapam.c | 2 +-\nsrc/backend/access/nbtree/nbtree.c | 1 +\nsrc/backend/access/spgist/spgutils.c | 1 +\nsrc/backend/utils/cache/relcache.c | 50 ++++++++------\nsrc/include/access/amapi.h | 2 +\nsrc/include/utils/rel.h | 3 +-\nsrc/include/utils/relcache.h | 4 +-\nsrc/test/modules/dummy_index_am/dummy_index_am.c | 1 +\nsrc/test/regress/expected/brin.out | 85 ++++++++++++++++++++++++\nsrc/test/regress/sql/brin.sql | 63 ++++++++++++++++++\n15 files changed, 202 insertions(+), 25 deletions(-)", "msg_date": "Tue, 30 Nov 2021 19:04:47 +0000", "msg_from": "Tomas Vondra <tomas.vondra@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Ignore BRIN indexes when checking for HOT udpates" }, { "msg_contents": "On 2021-Nov-30, Tomas Vondra wrote:\n\n> Ignore BRIN indexes when checking for HOT udpates\n\nI was trying to use RelationGetIndexAttrBitmap for something and\nrealized that its 
header comment does not really explain things very\nwell. That was already the case before this commit, but it (this\ncommit) did add new possible values without mentioning them. I propose\nthe attached comment-only patch.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nAl principio era UNIX, y UNIX habló y dijo: \"Hello world\\n\".\nNo dijo \"Hello New Jersey\\n\", ni \"Hello USA\\n\".", "msg_date": "Wed, 9 Aug 2023 11:11:55 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Ignore BRIN indexes when checking for HOT udpates" }, { "msg_contents": "On 8/9/23 11:11, Alvaro Herrera wrote:\n> On 2021-Nov-30, Tomas Vondra wrote:\n> \n>> Ignore BRIN indexes when checking for HOT udpates\n> \n> I was trying to use RelationGetIndexAttrBitmap for something and\n> realized that its header comment does not really explain things very\n> well. That was already the case before this commit, but it (this\n> commit) did add new possible values without mentioning them. I propose\n> the attached comment-only patch.\n> \n\n+1\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 9 Aug 2023 17:56:48 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Ignore BRIN indexes when checking for HOT udpates" }, { "msg_contents": "On 2023-Aug-09, Tomas Vondra wrote:\n\n> On 8/9/23 11:11, Alvaro Herrera wrote:\n> > I was trying to use RelationGetIndexAttrBitmap for something and\n> > realized that its header comment does not really explain things very\n> > well. That was already the case before this commit, but it (this\n> > commit) did add new possible values without mentioning them. I propose\n> > the attached comment-only patch.\n> \n> +1\n\nThanks for looking! 
Pushed now.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 10 Aug 2023 12:06:54 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Ignore BRIN indexes when checking for HOT udpates" } ]
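The behavior enabled by this commit can be observed from SQL through the cumulative statistics views. A minimal sketch (table and index names here are illustrative, not taken from the committed regression test): on a table where the updated column is covered only by a BRIN index, the update can now be HOT, which shows up in `n_tup_hot_upd`:

```sql
CREATE TABLE brin_hot (id int PRIMARY KEY, ts timestamptz);
CREATE INDEX brin_hot_ts_idx ON brin_hot USING brin (ts);
INSERT INTO brin_hot VALUES (1, now());

-- Touch only the BRIN-indexed column; with this change in place the
-- update qualifies for HOT (given free space on the heap page).
UPDATE brin_hot SET ts = now() WHERE id = 1;

SELECT n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'brin_hot';
```

Statistics reporting lags slightly behind the transaction, so a short wait may be needed before the counters reflect the update.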
[ { "msg_contents": "In commit 276db875, I made vacuumlazy.c consistently use the term\n\"cleanup lock\", rather than the term \"super-exclusive lock\". But on\nfurther reflection I should have gone further, and removed the term\n\"super-exclusive lock\" from the tree completely. The actual relevant C\nsymbols only use the term cleanup.\n\nAttached patch does this. There's not much churn.\n\nThe term \"super-exclusive lock\" is far more likely to be used in index\nAM code, particularly nbtree. That's why I, the nbtree person,\noriginally added a bunch of uses of that term in heapam -- prior to\nthat point heapam probably didn't use the term once.\n\nAnyway, I don't think that there is a particularly good reason for an\nindex AM/table AM divide in terminology. In fact, I'd go further:\nnbtree's use of super-exclusive locks is actually something that\nexists for the benefit of heapam alone -- and so using a different\nname in index AM code makes zero sense, because it's really a heapam\nthing anyway. Despite appearances.\n\nThe underlying why we need a cleanup lock when calling\n_bt_delitems_vacuum() (but not when calling the near-identical\n_bt_delitems_delete() function) is this: we need it as an interlock,\nto avoid breaking index-only scans with concurrent heap vacuuming (not\npruning) that takes place in vacuumlazy.c [1]. This issue isn't\ncurrently documented anywhere, though I plan on addressing that in the\nnear future, with a separate patch.\n\nHistoric note: the reason why this issue is so confused now has a lot\nto do with how the code has evolved over time. When the cleanup lock\nthing was first added to nbtree way back in 2001 (see commit\nc8076f09), there was no such thing as HOT, and nbtree didn't do\npage-at-a-time processing yet -- I believe that the cleanup lock was\nneeded to avoid breaking these things (when lazy VACUUM became the\ndefault). 
Of course the cleanup lock can't have been needed for\nindex-only scans back then, because there weren't any. I'm pretty sure\nthat that's the only remaining reason for requiring a cleanup lock.\n\nNote about a future optimization opportunity: this also means that we\ncould safely elide the cleanup lock during VACUUM (just get an\nexclusive lock) iff lazyvacuum.c told ambulkdelete that it has\n*already* decided that it won't bother performing a round of heap\nVACUUM in lazy_vacuum_heap_rel(). This observation isn't useful on its\nown, but in a world with something like Robert Haas's conveyor belt\ndesign (a world with *selective* index vacuuming), it could be quite\nvaluable.\n\n[1] https://postgr.es/m/CAH2-Wz=PqOziyRSrnN5jAtfXWXY7-BJcHz9S355LH8Dt=5qxWQ@mail.gmail.com\n-- \nPeter Geoghegan", "msg_date": "Tue, 30 Nov 2021 16:21:25 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Deprecating the term \"super-exclusive lock\"" }, { "msg_contents": "Hi,\n\nOn 2021-11-30 16:21:25 -0800, Peter Geoghegan wrote:\n> In commit 276db875, I made vacuumlazy.c consistently use the term\n> \"cleanup lock\", rather than the term \"super-exclusive lock\". But on\n> further reflection I should have gone further, and removed the term\n> \"super-exclusive lock\" from the tree completely. The actual relevant C\n> symbols only use the term cleanup.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Dec 2021 14:50:45 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Deprecating the term \"super-exclusive lock\"" } ]
[ { "msg_contents": "Hi,\n\nIt seems like there's a following typo in code comments:\n- /* determine how many segments slots can be kept by slots */\n+ /* determine how many segments can be kept by slots */\n\nAttaching a tiny patch to fix it. This typo exists all the way until PG 13.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Wed, 1 Dec 2021 12:22:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "fix a typo in slotfuncs.c" }, { "msg_contents": "On Wed, Dec 01, 2021 at 12:22:30PM +0530, Bharath Rupireddy wrote:\n> It seems like there's a following typo in code comments:\n> - /* determine how many segments slots can be kept by slots */\n> + /* determine how many segments can be kept by slots */\n> \n> Attaching a tiny patch to fix it. This typo exists all the way until PG 13.\n\nIndeed, thanks. I'll fix in a bit.\n--\nMichael", "msg_date": "Wed, 1 Dec 2021 16:30:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: fix a typo in slotfuncs.c" } ]
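The comment fixed above concerns the accounting of how much WAL replication slots may force the server to retain, which is capped by `max_slot_wal_keep_size`. That accounting is visible from SQL; a hedged sketch (any slot names shown come from your own installation):

```sql
SHOW max_slot_wal_keep_size;

-- wal_status reports whether a slot's required WAL is still retained
-- ('reserved'/'extended') or has been capped away ('unreserved'/'lost');
-- safe_wal_size shows how many bytes can still be written before the
-- slot risks losing required segments.
SELECT slot_name, wal_status, safe_wal_size
FROM pg_replication_slots;
```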
[ { "msg_contents": "Hi,\n\nI recently did a small experiment to see how one can create extensions\nproperly in HA(primary-standby) setup.\n\nHere are my findings:\n1) ALTER SYSTEM SET or GUC(configuration parameters) settings are not\nreplicated to standby.\n2) CREATE EXTENSION statements are replicated to standby.\n3) If the extension doesn't need to be set up in\nshared_preload_libraries GUC, no need to create extension on the\nstandby, it just works.\n4) If the extension needs to be set up in shared_preload_libraries\nGUC: the correct way to install the extension on both primary and\nstandby is:\n a) set shared_preload_libraries GUC on primary, reload conf,\nrestart the primary to make the GUC effective.\n b) set shared_preload_libraries GUC on standby, restart the\nstandby to make the GUC effective.\n c) create extension on primary (we don't need to create extension\non standby as the create extension statements are replicated).\n d) verify that the extension functions work on both primary and standby.\n5) The extensions which perform writes to the database may not work on\nstandby as the write transactions are not allowed on the standby.\nHowever, the create extension on the standby works just fine but the\nfunctions it provides may not work.\n\nI think I was successful in my experiment, please let me know if\nanything is wrong in what I did.\n\nDo we have the documentation on how to create extensions correctly in\nHA setup? If what I did is correct and we don't have it documented,\ncan we have it somewhere in the existing HA related documentation?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 1 Dec 2021 12:31:35 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Is there any documentation on how to correctly create extensions in\n HA(primary-standby) setup?" 
}, { "msg_contents": "On Wed, Dec 1, 2021 at 12:31 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> I recently did a small experiment to see how one can create extensions\n> properly in HA(primary-standby) setup.\n>\n> Here are my findings:\n> 1) ALTER SYSTEM SET or GUC(configuration parameters) settings are not\n> replicated to standby.\n> 2) CREATE EXTENSION statements are replicated to standby.\n> 3) If the extension doesn't need to be set up in\n> shared_preload_libraries GUC, no need to create extension on the\n> standby, it just works.\n> 4) If the extension needs to be set up in shared_preload_libraries\n> GUC: the correct way to install the extension on both primary and\n> standby is:\n> a) set shared_preload_libraries GUC on primary, reload conf,\n> restart the primary to make the GUC effective.\n> b) set shared_preload_libraries GUC on standby, restart the\n> standby to make the GUC effective.\n> c) create extension on primary (we don't need to create extension\n> on standby as the create extension statements are replicated).\n> d) verify that the extension functions work on both primary and standby.\n> 5) The extensions which perform writes to the database may not work on\n> standby as the write transactions are not allowed on the standby.\n> However, the create extension on the standby works just fine but the\n> functions it provides may not work.\n>\n> I think I was successful in my experiment, please let me know if\n> anything is wrong in what I did.\n>\n> Do we have the documentation on how to create extensions correctly in\n> HA setup? If what I did is correct and we don't have it documented,\n> can we have it somewhere in the existing HA related documentation?\n\nI'm thinking of adding the above steps into the \"Additional Supplied\nModules\" section documentation. 
Any thoughts please?\n\n[1] - https://www.postgresql.org/docs/devel/contrib.html\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 3 Dec 2021 19:58:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is there any documentation on how to correctly create extensions\n in HA(primary-standby) setup?" }, { "msg_contents": "On 03.12.21 15:28, Bharath Rupireddy wrote:\n> I'm thinking of adding the above steps into the \"Additional Supplied\n> Modules\" section documentation. Any thoughts please?\n> \n> [1] - https://www.postgresql.org/docs/devel/contrib.html\n\nThe chapter about extensions is probably better: \nhttps://www.postgresql.org/docs/devel/extend-extensions.html\n\n\n", "msg_date": "Tue, 7 Dec 2021 16:45:55 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Is there any documentation on how to correctly create extensions\n in HA(primary-standby) setup?" }, { "msg_contents": "On Tue, Dec 7, 2021 at 9:16 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 03.12.21 15:28, Bharath Rupireddy wrote:\n> > I'm thinking of adding the above steps into the \"Additional Supplied\n> > Modules\" section documentation. Any thoughts please?\n> >\n> > [1] - https://www.postgresql.org/docs/devel/contrib.html\n>\n> The chapter about extensions is probably better:\n> https://www.postgresql.org/docs/devel/extend-extensions.html\n\nThanks. Attaching v1 patch specifying the notes there. Please review.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 9 Dec 2021 08:19:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is there any documentation on how to correctly create extensions\n in HA(primary-standby) setup?" }, { "msg_contents": "Hi,\n\nOn Thu, Dec 09, 2021 at 08:19:06AM +0530, Bharath Rupireddy wrote:\n> \n> Thanks. 
Attaching v1 patch specifying the notes there. Please review.\n\nI think that the common terminology is \"module\", not \"extension\". That's\nespecially important here as this information is also relevant for modules that\nmay come with an SQL level extension. This should be made clear in that new\ndocumentation, same for the CREATE EXTENSION part that may not be relevant.\n\nIt also seems that this documentation is only aimed for physical replication.\nIt should also be explicitly stated as it might not be obvious for the intended\nreaders.\n\n\n+ [...] set it either via <link linkend=\"sql-altersystem\">ALTER SYSTEM</link>\n+ command or <filename>postgresql.conf</filename> file on both primary and\n+ standys, reload the <filename>postgresql.conf</filename> file and restart\n+ the servers.\n\nIsn't the reload a terrible advice? By definition changing\nshared_preload_libraries isn't compatible with a simple reload and will emit\nsome error.\n\n+ [...] Create the extension on the primary, there is no need to\n+ create it on the standbys as the <link linkend=\"sql-createextension\"><command>CREATE EXTENSION</command></link>\n+ command is replicated.\n\nThe \"no need\" here is quite ambiguous, as it seems to indicate that trying to\ncreate the extension on the standby will work but is unnecessary.\n\n\n", "msg_date": "Tue, 18 Jan 2022 15:55:58 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there any documentation on how to correctly create extensions\n in HA(primary-standby) setup?" }, { "msg_contents": "On Tue, Jan 18, 2022 at 1:26 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Thu, Dec 09, 2021 at 08:19:06AM +0530, Bharath Rupireddy wrote:\n> >\n> > Thanks. Attaching v1 patch specifying the notes there. Please review.\n>\n> I think that the common terminology is \"module\", not \"extension\". 
That's\n> especially important here as this information is also relevant for modules that\n> may come with an SQL level extension. This should be made clear in that new\n> documentation, same for the CREATE EXTENSION part that may not be relevant.\n\nThanks for reviewing this. The aim of the patch is to add how one can\ncreate extensions with \"CREATE EXTENSION\" command in replication\nsetup, not sure why I should use the term \"module\". I hope you have\nseen the usage of extension in the extend.sgml.\n\n> It also seems that this documentation is only aimed for physical replication.\n> It should also be explicitly stated as it might not be obvious for the intended\n> readers.\n\nYeah, I've changed the title and description accordingly.\n\n> + [...] set it either via <link linkend=\"sql-altersystem\">ALTER SYSTEM</link>\n> + command or <filename>postgresql.conf</filename> file on both primary and\n> + standys, reload the <filename>postgresql.conf</filename> file and restart\n> + the servers.\n>\n> Isn't the reload a terrible advice? By definition changing\n> shared_preload_libraries isn't compatible with a simple reload and will emit\n> some error.\n\nYes, it will emit the following messages. I removed the reload part.\n\n2022-02-11 04:07:53.178 UTC [1206594] LOG: parameter\n\"shared_preload_libraries\" cannot be changed without restarting the\nserver\n2022-02-11 04:07:53.178 UTC [1206594] LOG: configuration file\n\"/home/bharath/postgres/inst/bin/data/postgresql.auto.conf\" contains\nerrors; unaffected changes were applied\n\n> + [...] 
Create the extension on the primary, there is no need to\n> + create it on the standbys as the <link linkend=\"sql-createextension\"><command>CREATE EXTENSION</command></link>\n> + command is replicated.\n>\n> The \"no need\" here is quite ambiguous, as it seems to indicate that trying to\n> create the extension on the standby will work but is unnecessary.\n\nModified.\n\nAttaching v2, please have a look.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 11 Feb 2022 10:16:27 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is there any documentation on how to correctly create extensions\n in HA(primary-standby) setup?" }, { "msg_contents": "On Fri, Feb 11, 2022 at 10:16:27AM +0530, Bharath Rupireddy wrote:\n> On Tue, Jan 18, 2022 at 1:26 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > I think that the common terminology is \"module\", not \"extension\". That's\n> > especially important here as this information is also relevant for modules that\n> > may come with an SQL level extension. This should be made clear in that new\n> > documentation, same for the CREATE EXTENSION part that may not be relevant.\n> \n> Thanks for reviewing this. The aim of the patch is to add how one can\n> create extensions with \"CREATE EXTENSION\" command in replication\n> setup, not sure why I should use the term \"module\". I hope you have\n> seen the usage of extension in the extend.sgml.\n\nThe aim of this patch should be to clarify postgres configuration for\nadditional modules in physical replication, whether those includes an extension\nor not. 
Your patch covers the implications of modifying shared_preload_libraries,\nare you saying that if there's no extension associated with that library it\nshouldn't be covered?\n\nA simple example is auto_explain, which also means that the documentation\nshould probably mention that shared_preload_libraries is only an example and\nall configuration changes should be reported (and eventually adapted) on the\nstandby, like session_preload_libraries among others.\n\n\n", "msg_date": "Fri, 11 Feb 2022 13:03:07 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there any documentation on how to correctly create extensions\n in HA(primary-standby) setup?" }, { "msg_contents": "This doesn't seem to be getting any further attention. It sounds like\nJulien didn't agree with the scope of the text. Bharath do you think\nJulien's comments make sense? Will you have a chance to look at this?\n\n\n", "msg_date": "Fri, 25 Mar 2022 00:49:46 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Is there any documentation on how to correctly create extensions\n in HA(primary-standby) setup?" }, { "msg_contents": "On Fri, Mar 25, 2022 at 10:20 AM Greg Stark <stark@mit.edu> wrote:\n>\n> This doesn't seem to be getting any further attention. It sounds like\n> Julien didn't agree with the scope of the text. Bharath do you think\n> Julien's comments make sense? Will you have a chance to look at this?\n\nThanks Greg. I was busy with other features.\n\nThanks Julien for the off-list discussion. I tried to address review\ncomments in the v3 patch attached. 
Now, I've added the notes in\nhigh-availability.sgml which sort of suits more and closer to physical\nreplication than contrib.sgml or extend.sgml.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sun, 27 Mar 2022 09:07:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is there any documentation on how to correctly create extensions\n in HA(primary-standby) setup?" }, { "msg_contents": "Hi,\n\nOn Sun, Mar 27, 2022 at 09:07:12AM +0530, Bharath Rupireddy wrote:\n> On Fri, Mar 25, 2022 at 10:20 AM Greg Stark <stark@mit.edu> wrote:\n> >\n> > This doesn't seem to be getting any further attention. It sounds like\n> > Julien didn't agree with the scope of the text. Bharath do you think\n> > Julien's comments make sense? Will you have a chance to look at this?\n> \n> Thanks Greg. I was busy with other features.\n> \n> Thanks Julien for the off-list discussion. I tried to address review\n> comments in the v3 patch attached. Now, I've added the notes in\n> high-availability.sgml which sort of suits more and closer to physical\n> replication than contrib.sgml or extend.sgml.\n\n+ [...] Firstly, the module's shared library must be present on\n+ both primary and standbys.\n\nI'm a bit confused with it. It looks like it means that the .so must be\nphysically present on both servers, but I'm assuming that you're talking about\nshared_preload_libraries?\n\nIf yes, I still think it's worth documenting that it *needs* to be present on\nthe standbys *if* you want it to be enabled on the standby, including if it can\nbe promoted to a primary node. And that any related GUC also has to be\nproperly configured on all nodes (so maybe moving the last paragraph just after\nthis one?).\n\nIf no, maybe just saying that the module has to be installed and configured on\nall nodes?\n\n+ [...] 
If the module exposes SQL functions, running\n+ <link linkend=\"sql-createextension\"><command>CREATE EXTENSION</command></link>\n+ command on primary is sufficient as standbys will receive it via physical\n+ replication.\n\nI think it's better to phrase it with something like \"CREATE EXTENSION is\nreplicated in physical replication similarly to other DDL commands\".\n\n+ [...] The\n+ module's shared library gets loaded upon first usage of any of its\n+ functions on primary and standbys.\n\nIs it worth documenting that? Note that this is only true if the lib isn't in\nshared_preload_libraries and if it's a wrapper on top of a C function.\n\nnitpicking: there's a trailing whitespace after \"standbys.\"\n\n+ If the module doesn't expose SQL functions, the shared library has to be\n+ loaded separately on primary and standbys, either by\n+ <link linkend=\"sql-load\"><command>LOAD</command></link> command or by\n+ setting parameter <xref linkend=\"guc-session-preload-libraries\"/> or\n+ <xref linkend=\"guc-shared-preload-libraries\"/> or\n+ <xref linkend=\"guc-local-preload-libraries\"/>, depending on module's need.\n\nI think this is also confusing. The need for preloading is entirely orthogonal\nto SQL functions in the extension, especially since this is implying SQL\nfunction over C-code. This should be reworded to go with the first paragraph I\nthink.\n\n\n", "msg_date": "Wed, 30 Mar 2022 12:57:50 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is there any documentation on how to correctly create extensions\n in HA(primary-standby) setup?" } ]
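The checklist in the thread above boils down to keeping shared_preload_libraries consistent across the primary and every standby before relying on an extension's functions on the standby side. A minimal sketch of that consistency check follows; the helper names and the simple single-quoted-value parsing are assumptions of this illustration, not PostgreSQL source code:

```python
import re

def preload_libs(conf_text: str) -> list[str]:
    """Extract the shared_preload_libraries list from postgresql.conf text.

    Illustrative only: handles just the simple single-quoted form
    shared_preload_libraries = 'a,b,c', not the full conf grammar.
    """
    m = re.search(r"^\s*shared_preload_libraries\s*=\s*'([^']*)'",
                  conf_text, re.MULTILINE)
    if not m:
        return []
    return [lib.strip() for lib in m.group(1).split(",") if lib.strip()]

def missing_on_standby(primary_conf: str, standby_conf: str) -> list[str]:
    """Libraries preloaded on the primary but absent on a standby.

    Any library reported here would need to be added to the standby's
    shared_preload_libraries (followed by a restart) before the
    corresponding extension's functions can work there.
    """
    standby = set(preload_libs(standby_conf))
    return [lib for lib in preload_libs(primary_conf) if lib not in standby]
```

A real deployment check would also have to cover the other preload GUCs Julien mentions (session_preload_libraries, local_preload_libraries) and any module-specific settings, since none of those are replicated to standbys.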
[ { "msg_contents": "Hi,\n\nIt seems like users can try different ways to set multiple values for\nshared_preload_libraries GUC even after reading the documentation\n[1]), something like:\nALTER SYSTEM SET shared_preload_libraries TO\nauth_delay,pg_stat_statements,sepgsql; --> correct\nALTER SYSTEM SET shared_preload_libraries =\n'auth_delay','pg_stat_statements','sepgsql'; --> correct\nALTER SYSTEM SET shared_preload_libraries TO\n'auth_delay','pg_stat_statements','sepgsql'; --> correct\nALTER SYSTEM SET shared_preload_libraries =\nauth_delay,pg_stat_statements,sepgsql; --> wrong\nALTER SYSTEM SET shared_preload_libraries =\n'auth_delay,pg_stat_statements,sepgsql'; --> wrong\nALTER SYSTEM SET shared_preload_libraries =\n\"auth_delay,pg_stat_statements,sepgsql\"; --> wrong\n\nThe problem with the wrong parameter set command is that the ALTER\nSYSTEM SET will not fail, but the server will not come up in case it\nis restarted. In various locations in the documentation, we have shown\nhow a single value can be set, something like:\nshared_preload_libraries = 'auth_delay'\nshared_preload_libraries = 'pg_stat_statements'\nshared_preload_libraries = 'sepgsql'\n\nIsn't it better we document (in [1]) an example to set multiple values\nto shared_preload_libraries? If okay, we can provide examples to other\nGUCs local_preload_libraries and session_preload_libraries, but I'm\nnot in favour of it.\n\nThoughts?\n\n[1] - https://www.postgresql.org/docs/devel/runtime-config-client.html#GUC-SHARED-PRELOAD-LIBRARIES\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 1 Dec 2021 16:20:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "should we document an example to set multiple libraries in\n shared_preload_libraries?" 
}, { "msg_contents": "On Wed, Dec 01, 2021 at 04:20:52PM +0530, Bharath Rupireddy wrote:\n> It seems like users can try different ways to set multiple values for\n> shared_preload_libraries GUC even after reading the documentation\n> [1]), something like:\n...\n> ALTER SYSTEM SET shared_preload_libraries = \"auth_delay,pg_stat_statements,sepgsql\"; --> wrong\n> \n> The problem with the wrong parameter set command is that the ALTER\n> SYSTEM SET will not fail, but the server will not come up in case it\n> is restarted. In various locations in the documentation, we have shown\n> how a single value can be set, something like:\n> shared_preload_libraries = 'auth_delay'\n> shared_preload_libraries = 'pg_stat_statements'\n> shared_preload_libraries = 'sepgsql'\n> \n> Isn't it better we document (in [1]) an example to set multiple values\n> to shared_preload_libraries?\n\n+1 to document it, but it seems like the worse problem is allowing the admin to\nwrite a configuration which causes the server to fail to start, without having\nissued a warning.\n\nI think you could fix that with a GUC check hook to emit a warning.\nI'm not sure what objections people might have to this. Maybe it's confusing\nto execute preliminary verification of the library by calling stat() but not do\nstronger verification for other reasons the library might fail to load. Like\nit doesn't have the right magic number, or it's built for the wrong server\nversion. Should factor out the logic from internal_load_library and check\nthose too ?\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 1 Dec 2021 07:00:16 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: should we document an example to set multiple libraries in\n shared_preload_libraries?" 
}, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> +1 to document it, but it seems like the worse problem is allowing the admin to\n> write a configuration which causes the server to fail to start, without having\n> issued a warning.\n\n> I think you could fix that with a GUC check hook to emit a warning.\n> I'm not sure what objections people might have to this. Maybe it's confusing\n> to execute preliminary verification of the library by calling stat() but not do\n> stronger verification for other reasons the library might fail to load. Like\n> it doesn't have the right magic number, or it's built for the wrong server\n> version. Should factor out the logic from internal_load_library and check\n> those too ?\n\nConsidering the vanishingly small number of actual complaints we've\nseen about this, that sounds ridiculously over-engineered.\nA documentation example should be sufficient.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Dec 2021 08:15:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: should we document an example to set multiple libraries in\n shared_preload_libraries?" }, { "msg_contents": "On Wed, Dec 1, 2021 at 6:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > +1 to document it, but it seems like the worse problem is allowing the admin to\n> > write a configuration which causes the server to fail to start, without having\n> > issued a warning.\n>\n> > I think you could fix that with a GUC check hook to emit a warning.\n> > I'm not sure what objections people might have to this. Maybe it's confusing\n> > to execute preliminary verification of the library by calling stat() but not do\n> > stronger verification for other reasons the library might fail to load. Like\n> > it doesn't have the right magic number, or it's built for the wrong server\n> > version. 
Should factor out the logic from internal_load_library and check\n> > those too ?\n>\n> Considering the vanishingly small number of actual complaints we've\n> seen about this, that sounds ridiculously over-engineered.\n> A documentation example should be sufficient.\n\nThanks. Here's the v1 patch adding examples in the documentation.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Wed, 1 Dec 2021 19:25:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should we document an example to set multiple libraries in\n shared_preload_libraries?" }, { "msg_contents": "On 12/1/21, 5:59 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> On Wed, Dec 1, 2021 at 6:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>> Considering the vanishingly small number of actual complaints we've\r\n>> seen about this, that sounds ridiculously over-engineered.\r\n>> A documentation example should be sufficient.\r\n>\r\n> Thanks. Here's the v1 patch adding examples in the documentation.\r\n\r\nI think the problems you noted upthread are shared for all GUCs with\r\ntype GUC_LIST_QUOTE (e.g., search_path, temp_tablespaces). Perhaps\r\nthe documentation for each of these GUCs should contain a short blurb\r\nabout how to properly SET a list of values.\r\n\r\nAlso upthread, I see that you gave the following example for an\r\nincorrect way to set shared_preload_libraries:\r\n\r\n ALTER SYSTEM SET shared_preload_libraries =\r\n auth_delay,pg_stat_statements,sepgsql; --> wrong\r\n\r\nWhy is this wrong? It seems to work okay for me.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 3 Dec 2021 00:45:56 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: should we document an example to set multiple libraries in\n shared_preload_libraries?" 
}, { "msg_contents": "On Fri, Dec 03, 2021 at 12:45:56AM +0000, Bossart, Nathan wrote:\n> I think the problems you noted upthread are shared for all GUCs with\n> type GUC_LIST_QUOTE (e.g., search_path, temp_tablespaces). Perhaps\n> the documentation for each of these GUCs should contain a short blurb\n> about how to properly SET a list of values.\n\nYeah, the approach taken by the proposed patch is not going to scale\nand age well.\n\nIt seems to me that we should have something dedicated to lists around\nthe section for \"Parameter Names and Values\", and add a link in the\ndescription of each parameters concerned back to the generic\ndescription.\n\n> Also upthread, I see that you gave the following example for an\n> incorrect way to set shared_preload_libraries:\n> \n> ALTER SYSTEM SET shared_preload_libraries =\n> auth_delay,pg_stat_statements,sepgsql; --> wrong\n> \n> Why is this wrong? It seems to work okay for me.\n\nYep.\n--\nMichael", "msg_date": "Fri, 3 Dec 2021 10:02:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: should we document an example to set multiple libraries in\n shared_preload_libraries?" }, { "msg_contents": "On Fri, Dec 3, 2021 at 6:33 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Dec 03, 2021 at 12:45:56AM +0000, Bossart, Nathan wrote:\n> > I think the problems you noted upthread are shared for all GUCs with\n> > type GUC_LIST_QUOTE (e.g., search_path, temp_tablespaces). 
Perhaps\n> > the documentation for each of these GUCs should contain a short blurb\n> > about how to properly SET a list of values.\n>\n> Yeah, the approach taken by the proposed patch is not going to scale\n> and age well.\n>\n> It seems to me that we should have something dedicated to lists around\n> the section for \"Parameter Names and Values\", and add a link in the\n> description of each parameters concerned back to the generic\n> description.\n\n+1 to add here in the \"Parameter Names and Values section\", but do we\nwant to backlink every string parameter to this section? I think it\nneeds more effort. IMO, we can just backlink for\nshared_preload_libraries alone. Thoughts?\n\n <listitem>\n <para>\n <emphasis>String:</emphasis>\n In general, enclose the value in single quotes, doubling any single\n quotes within the value. Quotes can usually be omitted if the value\n is a simple number or identifier, however.\n </para>\n </listitem>\n\n> > Also upthread, I see that you gave the following example for an\n> > incorrect way to set shared_preload_libraries:\n> >\n> > ALTER SYSTEM SET shared_preload_libraries =\n> > auth_delay,pg_stat_statements,sepgsql; --> wrong\n> >\n> > Why is this wrong? It seems to work okay for me.\n>\n> Yep.\n\nMy bad. Yes, it works.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 3 Dec 2021 19:49:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should we document an example to set multiple libraries in\n shared_preload_libraries?" }, { "msg_contents": "On 12/3/21, 6:21 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> +1 to add here in the \"Parameter Names and Values section\", but do we\r\n> want to backlink every string parameter to this section? I think it\r\n> needs more effort. IMO, we can just backlink for\r\n> shared_preload_libraries alone. 
Thoughts?\r\n\r\nIMO this is most important for GUC_LIST_QUOTE parameters, of which\r\nthere are only a handful. I don't think adding a link to every string\r\nparameter is necessary.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 3 Dec 2021 17:55:04 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: should we document an example to set multiple libraries in\n shared_preload_libraries?" }, { "msg_contents": "On Fri, Dec 3, 2021 at 11:25 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 12/3/21, 6:21 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > +1 to add here in the \"Parameter Names and Values section\", but do we\n> > want to backlink every string parameter to this section? I think it\n> > needs more effort. IMO, we can just backlink for\n> > shared_preload_libraries alone. Thoughts?\n>\n> IMO this is most important for GUC_LIST_QUOTE parameters, of which\n> there are only a handful. I don't think adding a link to every string\n> parameter is necessary.\n\nAgree.\n\nShould we specify something like below in the \"Parameter Names and\nValues\" section's \"String:\" para? 
Do we use generic terminology like\n'name' and val1, val2, val3 and so on?\n\nALTER SYSTEM SET name = val1,val2,val3;\nALTER SYSTEM SET name = 'val1', 'val2', 'val3';\nALTER SYSTEM SET name = '\"val 1\"', '\"val,2\"', 'val3';\n\nAnother thing I observed is the difference between how the\npostgresql.conf file and ALTER SYSTEM SET command is parsed for\nGUC_LIST_QUOTE values.\n\nFor instance, in postgresql.conf file, by default search_path is\nspecified as follows:\nsearch_path = '\"$user\", public',\npostgres=# show search_path ;\n search_path\n-----------------\n \"$user\", public\n(1 row)\n\nWhen I use the same style with ALTER SYSTEM SET command, the value is\ntreated as single string value:\npostgres=# ALTER SYSTEM SET search_path = '\"$user\", public';\nALTER SYSTEM\npostgres=# show search_path ;\n search_path\n-----------------\n \"$user\", public\n(1 row)\n\npostgres=# select pg_reload_conf();\n pg_reload_conf\n----------------\n t\n(1 row)\n\npostgres=# show search_path ;\n search_path\n---------------------\n \"\"\"$user\"\", public\"\n(1 row)\n\nAm I missing something here? Or is there a distinction between parsing\nof postgresql.conf and ALTER SYSTEM SET command for GUC_LIST_QUOTE\nvalues? If so, what is it?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 6 Dec 2021 21:20:09 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should we document an example to set multiple libraries in\n shared_preload_libraries?" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> Am I missing something here? Or is there a distinction between parsing\n> of postgresql.conf and ALTER SYSTEM SET command for GUC_LIST_QUOTE\n> values? If so, what is it?\n\nOne context is SQL, the other is not. 
The quoting rules are\nreally quite different.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Dec 2021 11:01:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: should we document an example to set multiple libraries in\n shared_preload_libraries?" }, { "msg_contents": "On Mon, Dec 6, 2021 at 9:20 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Dec 3, 2021 at 11:25 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> >\n> > On 12/3/21, 6:21 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > +1 to add here in the \"Parameter Names and Values section\", but do we\n> > > want to backlink every string parameter to this section? I think it\n> > > needs more effort. IMO, we can just backlink for\n> > > shared_preload_libraries alone. Thoughts?\n> >\n> > IMO this is most important for GUC_LIST_QUOTE parameters, of which\n> > there are only a handful. I don't think adding a link to every string\n> > parameter is necessary.\n>\n> Agree.\n>\n> Should we specify something like below in the \"Parameter Names and\n> Values\" section's \"String:\" para? Do we use generic terminology like\n> 'name' and val1, val2, val3 and so on?\n>\n> ALTER SYSTEM SET name = val1,val2,val3;\n> ALTER SYSTEM SET name = 'val1', 'val2', 'val3';\n> ALTER SYSTEM SET name = '\"val 1\"', '\"val,2\"', 'val3';\n>\n> Another thing I observed is the difference between how the\n> postgresql.conf file and ALTER SYSTEM SET command is parsed for\n> GUC_LIST_QUOTE values.\n\nSince there's a difference in the way the params are set in the\npostgresql.conf file and ALTER SYSTEM SET command (as pointed out by\nTom in this thread [1]), I'm now confused. 
If we were to add examples\nto the \"Parameter Names and Values\" section, which examples should we\nadd, postgresql.conf file ones or ALTER SYSTEM SET command ones?\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/flat/3108541.1638806477%40sss.pgh.pa.us\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 7 Dec 2021 18:59:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should we document an example to set multiple libraries in\n shared_preload_libraries?" }, { "msg_contents": "On Wed, Dec 1, 2021 at 5:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > +1 to document it, but it seems like the worse problem is allowing the admin to\n> > write a configuration which causes the server to fail to start, without having\n> > issued a warning.\n>\n> > I think you could fix that with a GUC check hook to emit a warning.\n> > ...\n>\n> Considering the vanishingly small number of actual complaints we've\n> seen about this, that sounds ridiculously over-engineered.\n> A documentation example should be sufficient.\n\nI don't know if this will tip the scales, but I'd like to lodge a\nbelated complaint. I've gotten myself in this server-fails-to-start\nsituation several times (in development, for what it's worth). The\nsyntax (as Bharath pointed out in the original message) is pretty\npicky, there are no guard rails, and if you got there through ALTER\nSYSTEM, you can't fix it with ALTER SYSTEM (because the server isn't\nup). If you go to fix it manually, you get a scary \"Do not edit this\nfile manually!\" warning that you have to know to ignore in this case\n(that's if you find the file after you realize what the fairly generic\n\"FATAL: ... No such file or directory\" error in the log is telling\nyou). Plus you have to get the (different!) 
quoting syntax right or\ncut your losses and delete the change.\n\nI'm over-dramatizing this a bit, but I do think there are a lot of\nopportunities to make mistakes here, and this behavior could be more\nuser-friendly beyond just documentation changes. If a config change is\nbogus most likely due to a quoting mistake or a typo, a warning would\nbe fantastic (i.e., the stat() check Justin suggested). Or maybe the\nFATAL log message on a failed startup could include the source of the\nproblem?\n\nThanks,\nMaciek\n\n\n", "msg_date": "Wed, 8 Dec 2021 23:31:50 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: should we document an example to set multiple libraries in\n shared_preload_libraries?" }, { "msg_contents": "On Thu, Dec 9, 2021 at 1:02 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n>\n> On Wed, Dec 1, 2021 at 5:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Justin Pryzby <pryzby@telsasoft.com> writes:\n> > > +1 to document it, but it seems like the worse problem is allowing the admin to\n> > > write a configuration which causes the server to fail to start, without having\n> > > issued a warning.\n> >\n> > > I think you could fix that with a GUC check hook to emit a warning.\n> > > ...\n> >\n> > Considering the vanishingly small number of actual complaints we've\n> > seen about this, that sounds ridiculously over-engineered.\n> > A documentation example should be sufficient.\n>\n> I don't know if this will tip the scales, but I'd like to lodge a\n> belated complaint. I've gotten myself in this server-fails-to-start\n> situation several times (in development, for what it's worth). The\n> syntax (as Bharath pointed out in the original message) is pretty\n> picky, there are no guard rails, and if you got there through ALTER\n> SYSTEM, you can't fix it with ALTER SYSTEM (because the server isn't\n> up). 
If you go to fix it manually, you get a scary \"Do not edit this\n> file manually!\" warning that you have to know to ignore in this case\n> (that's if you find the file after you realize what the fairly generic\n> \"FATAL: ... No such file or directory\" error in the log is telling\n> you). Plus you have to get the (different!) quoting syntax right or\n> cut your losses and delete the change.\n>\n> I'm over-dramatizing this a bit, but I do think there are a lot of\n> opportunities to make mistakes here, and this behavior could be more\n> user-friendly beyond just documentation changes. If a config change is\n> bogus most likely due to a quoting mistake or a typo, a warning would\n> be fantastic (i.e., the stat() check Justin suggested). Or maybe the\n> FATAL log message on a failed startup could include the source of the\n> problem?\n\nHow about ALTER SYSTEM SET shared_preload_libraries command itself\nchecking for availability of the specified libraries (after fetching\nlibrary names from the parsed string value) with stat() and then\nreport an error if any of the libraries doesn't exist? Currently, it\njust accepts if the specified value passes the parsing.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 9 Dec 2021 21:12:33 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should we document an example to set multiple libraries in\n shared_preload_libraries?" 
}, { "msg_contents": "\r\nOn Wed, Dec 1, 2021 at 5:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>\r\n> Justin Pryzby <pryzby@telsasoft.com> writes:\r\n> > +1 to document it, but it seems like the worse problem is allowing \r\n> > +the admin to\r\n> > write a configuration which causes the server to fail to start, \r\n> > without having issued a warning.\r\n>\r\n> > I think you could fix that with a GUC check hook to emit a warning.\r\n> > ...\r\n>\r\n> Considering the vanishingly small number of actual complaints we've \r\n> seen about this, that sounds ridiculously over-engineered.\r\n> A documentation example should be sufficient.\r\n\r\n>I don't know if this will tip the scales, but I'd like to lodge a belated complaint. I've gotten myself in this server-fails-to-start situation several times (in development, for what it's worth). The syntax (as Bharath pointed out in the original message) is pretty picky, there are no guard rails, and if you got there through ALTER SYSTEM, you can't fix it with ALTER SYSTEM (because the server isn't up). If you go to fix it manually, you get a scary \"Do not edit this file manually!\" warning that you have to know to ignore in this case (that's if you find the file after you realize what the fairly generic\r\n>\"FATAL: ... No such file or directory\" error in the log is telling you). Plus you have to get the (different!) quoting syntax right or cut your losses and delete the change.\r\n>\r\n>I'm over-dramatizing this a bit, but I do think there are a lot of opportunities to make mistakes here, and this behavior could be more user-friendly beyond just documentation changes. If a config change is bogus most likely due to a quoting mistake or a typo, a warning would be fantastic (i.e., the stat() check Justin suggested). 
Or maybe the FATAL log message on a failed startup could include the source of the problem?\r\n>\r\n>Thanks,\r\n>Maciek\r\n\r\nI may have missed something in this stream, but is this a system controlled by Patroni? In any case I too have gotten stuck like this. If this is a Patroni system, I've discovered that patroni either hides or prevents \"out of memory\" messages from getting into the db log. If it is patroni controlled, I've solved this by turning off Patroni, starting the DB using pg_ctl and then I can examine the log messages. With pg_ctl, you can edit the postgresql.conf and see what you can do. Alternatively, with the DCS you can make 'dynamic edits' to the system configuration without the db running. Use the patroni control utility to do an 'edit-config' to make the changes. Then reload the config (same utility) and then you can bring up the db with Patroni...\r\n\r\nSmiles,\r\nphil\r\n\r\n", "msg_date": "Fri, 10 Dec 2021 18:10:54 +0000", "msg_from": "\"Godfrin, Philippe E\" <Philippe.Godfrin@nov.com>", "msg_from_op": false, "msg_subject": "RE: [EXTERNAL] Re: should we document an example to set multiple\n libraries in shared_preload_libraries?" }, { "msg_contents": "On Thu, Dec 9, 2021 at 2:32 AM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n> > Considering the vanishingly small number of actual complaints we've\n> > seen about this, that sounds ridiculously over-engineered.\n> > A documentation example should be sufficient.\n>\n> I don't know if this will tip the scales, but I'd like to lodge a\n> belated complaint. I've gotten myself in this server-fails-to-start\n> situation several times (in development, for what it's worth). The\n> syntax (as Bharath pointed out in the original message) is pretty\n> picky, there are no guard rails, and if you got there through ALTER\n> SYSTEM, you can't fix it with ALTER SYSTEM (because the server isn't\n> up). 
If you go to fix it manually, you get a scary \"Do not edit this\n> file manually!\" warning that you have to know to ignore in this case\n> (that's if you find the file after you realize what the fairly generic\n> \"FATAL: ... No such file or directory\" error in the log is telling\n> you). Plus you have to get the (different!) quoting syntax right or\n> cut your losses and delete the change.\n\n+1. I disagree that trying to detect this kind of problem would be\n\"ridiculously over-engineered.\" I don't know whether it can be done\nelegantly enough that we'd be happy with it and I don't know whether\nit would end up just garden variety over-engineered. But there's\nnothing ridiculous about trying to prevent people from putting their\nsystem into a state where it won't start.\n\n(To be clear, I also think updating the documentation is sensible,\nwithout taking a view on exactly what that update should look like.)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Dec 2021 09:01:57 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: should we document an example to set multiple libraries in\n shared_preload_libraries?" }, { "msg_contents": "On Thu, Dec 9, 2021 at 7:42 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> How about ALTER SYSTEM SET shared_preload_libraries command itself\n> checking for availability of the specified libraries (after fetching\n> library names from the parsed string value) with stat() and then\n> report an error if any of the libraries doesn't exist? Currently, it\n> just accepts if the specified value passes the parsing.\n\nThat certainly would have helped me. 
I guess it technically breaks the\ntheoretical use case of \"first change the shared_preload_libraries\nsetting, then install those libraries, then restart Postgres,\" but I\ndon't see why anyone would do that in practice.\n\nFor what it's worth, I don't even feel strongly that this needs to be\nan error—just that the current flow around this is error-prone due to\nseveral sharp edges and could be improved. I would be happy with an\nerror, but a warning or other mechanism could work, too. I do think\nbetter documentation is not enough: the differences between a working\nsetting value and a broken one are pretty subtle.\n\nThanks,\nMaciek\n\n\n", "msg_date": "Sat, 18 Dec 2021 16:31:42 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: should we document an example to set multiple libraries in\n shared_preload_libraries?" }, { "msg_contents": "On Fri, Dec 10, 2021 at 10:10 AM Godfrin, Philippe E\n<Philippe.Godfrin@nov.com> wrote:\n> I may have missed something in this stream, but is this a system controlled by Patroni?\n\nIn my case, no: it's a pretty vanilla PGDG install on Ubuntu 20.04.\nThanks for the context, though.\n\nThanks,\nMaciek\n\n\n", "msg_date": "Sat, 18 Dec 2021 16:34:00 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: should we document an example to set multiple\n libraries in shared_preload_libraries?" } ]
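To make the proposal above concrete: the stat()-based validation discussed in this thread (warn at ALTER SYSTEM / check-hook time when a listed library's shared object cannot be found) could look roughly like the standalone sketch below. This is only an illustration, not PostgreSQL's actual GUC machinery: the function names, the fixed-size name buffers, the 16-entry limit, and the hard-coded `.so` suffix are assumptions of this sketch; the real server has its own list-splitting rules and `$libdir` resolution.

```c
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/*
 * Split a comma-separated, shared_preload_libraries-style value into
 * trimmed names.  Handles optional double quotes around an entry.
 * Returns the number of names written to out[].
 */
static int
split_library_list(const char *value, char out[][64], int max)
{
	int			n = 0;
	const char *p = value;

	while (*p && n < max)
	{
		int			len = 0;
		int			quoted;

		while (*p == ' ' || *p == ',')	/* skip separators */
			p++;
		if (!*p)
			break;
		quoted = (*p == '"');
		if (quoted)
			p++;
		while (*p && len < 63 && (quoted ? *p != '"' : *p != ','))
			out[n][len++] = *p++;
		if (quoted && *p == '"')
			p++;
		while (len > 0 && out[n][len - 1] == ' ')	/* right-trim */
			len--;
		out[n][len] = '\0';
		if (len > 0)
			n++;
	}
	return n;
}

/*
 * Warn about every listed library whose shared object cannot be
 * stat()'d under libdir.  Returns the number of missing entries, so a
 * caller could decide to reject or merely warn.
 */
static int
warn_missing_libraries(const char *libdir, const char *value)
{
	char		names[16][64];
	char		path[256];
	struct stat st;
	int			missing = 0;
	int			n = split_library_list(value, names, 16);

	for (int i = 0; i < n; i++)
	{
		snprintf(path, sizeof(path), "%s/%s.so", libdir, names[i]);
		if (stat(path, &st) != 0)
		{
			fprintf(stderr, "WARNING: library \"%s\" not found at \"%s\"\n",
					names[i], path);
			missing++;
		}
	}
	return missing;
}
```

A real check hook would additionally have to cope with quoted identifiers that contain commas and with platform-specific shared-library suffixes, and — as noted upthread — it could only warn, since nothing stops an admin from installing the library after setting the GUC but before restarting.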
[ { "msg_contents": "Hi,\n\nThe active_pid of ReplicationSlot structure, which tells whether a\nreplication slot is active or inactive, isn't persisted to the disk\ni.e has no entry in ReplicationSlotPersistentData structure. Isn't it\nbetter if we add that info to ReplicationSlotPersistentData structure\nand persist to the disk? This will help to know what were the inactive\nreplication slots in case the server goes down or crashes for some\nreason. Currently, we don't have a way to interpret the replication\nslot info in the disk but there's a patch for pg_replslotdata tool at\n[1]. This way, one can figure out the reasons for the server\ndown/crash and figure out which replication slots to remove to bring\nthe server up and running without touching the other replication\nslots.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CALj2ACW0rV5gWK8A3m6_X62qH%2BVfaq5hznC%3Di0R5Wojt5%2Byhyw%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 1 Dec 2021 19:39:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?" }, { "msg_contents": "On 2021-Dec-01, Bharath Rupireddy wrote:\n\n> The active_pid of ReplicationSlot structure, which tells whether a\n> replication slot is active or inactive, isn't persisted to the disk\n> i.e has no entry in ReplicationSlotPersistentData structure. Isn't it\n> better if we add that info to ReplicationSlotPersistentData structure\n> and persist to the disk? This will help to know what were the inactive\n> replication slots in case the server goes down or crashes for some\n> reason. Currently, we don't have a way to interpret the replication\n> slot info in the disk but there's a patch for pg_replslotdata tool at\n> [1]. 
This way, one can figure out the reasons for the server\n> down/crash and figure out which replication slots to remove to bring\n> the server up and running without touching the other replication\n> slots.\n\nI think the PIDs are log-worthy for sure, but it's not clear to me that\nit is desirable to write them to the persistent state file. In case of\ncrashes, the log should serve just fine to aid root cause investigation\n-- in fact even better than the persistent file, where the data would be\nlost as soon as the next client acquires that slot.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"The Postgresql hackers have what I call a \"NASA space shot\" mentality.\n Quite refreshing in a world of \"weekend drag racer\" developers.\"\n(Scott Marlowe)\n\n\n", "msg_date": "Wed, 1 Dec 2021 13:20:46 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?" }, { "msg_contents": "On Wed, Dec 1, 2021 at 9:50 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Dec-01, Bharath Rupireddy wrote:\n>\n> > The active_pid of ReplicationSlot structure, which tells whether a\n> > replication slot is active or inactive, isn't persisted to the disk\n> > i.e has no entry in ReplicationSlotPersistentData structure. Isn't it\n> > better if we add that info to ReplicationSlotPersistentData structure\n> > and persist to the disk? This will help to know what were the inactive\n> > replication slots in case the server goes down or crashes for some\n> > reason. Currently, we don't have a way to interpret the replication\n> > slot info in the disk but there's a patch for pg_replslotdata tool at\n> > [1]. 
This way, one can figure out the reasons for the server\n> > down/crash and figure out which replication slots to remove to bring\n> > the server up and running without touching the other replication\n> > slots.\n>\n> I think the PIDs are log-worthy for sure, but it's not clear to me that\n> it is desirable to write them to the persistent state file. In case of\n> crashes, the log should serve just fine to aid root cause investigation\n> -- in fact even better than the persistent file, where the data would be\n> lost as soon as the next client acquires that slot.\n\nThanks. +1 to log a message at LOG level whenever a replication slot\nbecomes active (gets assigned a valid pid to active_pid) and becomes\ninactive(gets assigned 0 to active_pid). Having said that, isn't it\nalso helpful if we write a bool (1 byte character) whenever the slot\nbecomes active and inactive to the disk?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 3 Dec 2021 19:39:31 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?" }, { "msg_contents": "On Fri, Dec 3, 2021 at 7:39 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Dec 1, 2021 at 9:50 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Dec-01, Bharath Rupireddy wrote:\n> >\n> > > The active_pid of ReplicationSlot structure, which tells whether a\n> > > replication slot is active or inactive, isn't persisted to the disk\n> > > i.e has no entry in ReplicationSlotPersistentData structure. Isn't it\n> > > better if we add that info to ReplicationSlotPersistentData structure\n> > > and persist to the disk? This will help to know what were the inactive\n> > > replication slots in case the server goes down or crashes for some\n> > > reason. 
Currently, we don't have a way to interpret the replication\n> > > slot info in the disk but there's a patch for pg_replslotdata tool at\n> > > [1]. This way, one can figure out the reasons for the server\n> > > down/crash and figure out which replication slots to remove to bring\n> > > the server up and running without touching the other replication\n> > > slots.\n> >\n> > I think the PIDs are log-worthy for sure, but it's not clear to me that\n> > it is desirable to write them to the persistent state file. In case of\n> > crashes, the log should serve just fine to aid root cause investigation\n> > -- in fact even better than the persistent file, where the data would be\n> > lost as soon as the next client acquires that slot.\n>\n> Thanks. +1 to log a message at LOG level whenever a replication slot\n> becomes active (gets assigned a valid pid to active_pid) and becomes\n> inactive(gets assigned 0 to active_pid). Having said that, isn't it\n> also helpful if we write a bool (1 byte character) whenever the slot\n> becomes active and inactive to the disk?\n\nHere's the patch that adds a LOG message whenever a replication slot\nbecomes active and inactive. These logs will be extremely useful on\nproduction servers to debug and analyze inactive replication slot\nissues.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Tue, 14 Dec 2021 19:04:09 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?" 
}, { "msg_contents": "At Tue, 14 Dec 2021 19:04:09 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Fri, Dec 3, 2021 at 7:39 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, Dec 1, 2021 at 9:50 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > > I think the PIDs are log-worthy for sure, but it's not clear to me that\n> > > it is desirable to write them to the persistent state file. In case of\n> > > crashes, the log should serve just fine to aid root cause investigation\n> > > -- in fact even better than the persistent file, where the data would be\n> > > lost as soon as the next client acquires that slot.\n> >\n> > Thanks. +1 to log a message at LOG level whenever a replication slot\n> > becomes active (gets assigned a valid pid to active_pid) and becomes\n> > inactive(gets assigned 0 to active_pid). Having said that, isn't it\n> > also helpful if we write a bool (1 byte character) whenever the slot\n> > becomes active and inactive to the disk?\n> \n> Here's the patch that adds a LOG message whenever a replication slot\n> becomes active and inactive. These logs will be extremely useful on\n> production servers to debug and analyze inactive replication slot\n> issues.\n> \n> Thoughts?\n\nIf I create a replication slot, I saw the following lines in server log.\n\n[22054:client backend] LOG: replication slot \"s1\" becomes active\n[22054:client backend] DETAIL: The process with PID 22054 acquired it.\n[22054:client backend] STATEMENT: select pg_drop_replication_slot('s1');\n[22054:client backend] LOG: replication slot \"s1\" becomes inactive\n[22054:client backend] DETAIL: The process with PID 22054 released it.\n[22054:client backend] STATEMENT: select pg_drop_replication_slot('s1');\n\nThey are apparently too much as if they were DEBUG3 lines. The\nprocess PID shown is of the process the slot operations took place so\nI think it conveys no information. 
The STATEMENT lines are also noisy\nfor non-ERROR emssages. Couldn't we hide that line?\n\nThat is, how about making the log lines as simple as the follows?\n\n[17156:walsender] LOG: acquired replication slot \"s1\"\n[17156:walsender] LOG: released replication slot \"s1\"\n\nI think in the first place we don't need this log lines at slot\ncreation since it is actually not acquirement nor releasing of a slot.\n\n\nIt behaves in a strange way when executing pg_basebackup.\n\n[22864:walsender] LOG: replication slot \"pg_basebackup_22864\" becomes active\n[22864:walsender] DETAIL: The process with PID 22864 acquired it.\n[22864:walsender] STATEMENT: CREATE_REPLICATION_SLOT \"pg_basebackup_22864\" TEMPORARY PHYSICAL ( RESERVE_WAL)\n[22864:walsender] LOG: replication slot \"pg_basebackup_22864\" becomes active\n[22864:walsender] DETAIL: The process with PID 22864 acquired it.\n[22864:walsender] STATEMENT: START_REPLICATION SLOT \"pg_basebackup_22864\" 0/6000000 TIMELINE 1\n[22864:walsender] LOG: replication slot \"pg_basebackup_22864\" becomes inactive\n[22864:walsender] DETAIL: The process with PID 22864 released it.\n\nThe slot is acquired twice then released once. It is becuase the\npatch doesn't emit \"becomes inactive\" line when releasing a temporary\nslot. However, I'd rather think we don't need the first 'become\nactive' line like the previous example.\n\n\n@@ -658,6 +690,13 @@ ReplicationSlotDropPtr(ReplicationSlot *slot)\n \tslot->active_pid = 0;\n \tslot->in_use = false;\n \tLWLockRelease(ReplicationSlotControlLock);\n+\n+\tif (pid > 0)\n+\t\tereport(LOG,\n+\t\t\t\t(errmsg(\"replication slot \\\"%s\\\" becomes inactive\",\n+\t\t\t\t\t\tNameStr(slot->data.name)),\n+\t\t\t\t errdetail(\"The process with PID %d released it.\", pid)));\n+\n\nThis is wrong. I see a \"become inactive\" message if I droped an\n\"inactive\" replication slot. 
The reason the inactive slot looks as if\nit were acquired is that it is temporarily acquired as a preparing step of\ndropping.\n\n\nEven assuming that the log lines are simplified to this extent, I\nstill see it a bit strange that the \"becomes active (or acquired)\"\nmessage shows alone without having a message like \"replication\nconnection accepted\". But that would be another issue even if it is\ntrue.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 15 Dec 2021 12:01:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?" }, { "msg_contents": "On Wed, Dec 15, 2021 at 8:32 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> > Here's the patch that adds a LOG message whenever a replication slot\n> > becomes active and inactive. These logs will be extremely useful on\n> > production servers to debug and analyze inactive replication slot\n> > issues.\n> >\n> > Thoughts?\n>\n> If I create a replication slot, I saw the following lines in server log.\n>\n> [22054:client backend] LOG: replication slot \"s1\" becomes active\n> [22054:client backend] DETAIL: The process with PID 22054 acquired it.\n> [22054:client backend] STATEMENT: select pg_drop_replication_slot('s1');\n> [22054:client backend] LOG: replication slot \"s1\" becomes inactive\n> [22054:client backend] DETAIL: The process with PID 22054 released it.\n> [22054:client backend] STATEMENT: select pg_drop_replication_slot('s1');\n>\n> They are apparently too much as if they were DEBUG3 lines. The\n> process PID shown is of the process the slot operations took place so\n> I think it conveys no information. The STATEMENT lines are also noisy\n> for non-ERROR emssages. 
Couldn't we hide that line?\n>\n> That is, how about making the log lines as simple as the follows?\n>\n> [17156:walsender] LOG: acquired replication slot \"s1\"\n> [17156:walsender] LOG: released replication slot \"s1\"\n\nThanks for taking a look at the patch. Here's what I've come up with:\n\nfor drops:\n2021-12-28 06:39:34.963 UTC [2541600] LOG: acquired persistent\nphysical replication slot \"myslot1\"\n2021-12-28 06:39:34.980 UTC [2541600] LOG: dropped persistent\nphysical replication slot \"myslot1\"\n2021-12-28 06:47:39.994 UTC [2544153] LOG: acquired persistent\nlogical replication slot \"myslot2\"\n2021-12-28 06:47:40.003 UTC [2544153] LOG: dropped persistent logical\nreplication slot \"myslot2\"\n\nfor creates:\n2021-12-28 06:39:46.859 UTC [2541600] LOG: created persistent\nphysical replication slot \"myslot1\"\n2021-12-28 06:39:46.859 UTC [2541600] LOG: released persistent\nphysical replication slot \"myslot1\"\n2021-12-28 06:45:20.037 UTC [2544153] LOG: created persistent logical\nreplication slot \"myslot2\"\n2021-12-28 06:45:20.058 UTC [2544153] LOG: released persistent\nlogical replication slot \"myslot2\"\n\nfor pg_basebackup:\n2021-12-28 06:41:04.601 UTC [2542686] LOG: created temporary physical\nreplication slot \"pg_basebackup_2542686\"\n2021-12-28 06:41:04.602 UTC [2542686] LOG: released temporary\nphysical replication slot \"pg_basebackup_2542686\"\n2021-12-28 06:41:04.602 UTC [2542686] LOG: acquired temporary\nphysical replication slot \"pg_basebackup_2542686\"\n2021-12-28 06:41:04.867 UTC [2542686] LOG: released temporary\nphysical replication slot \"pg_basebackup_2542686\"\n2021-12-28 06:41:04.954 UTC [2542686] LOG: dropped temporary physical\nreplication slot \"pg_basebackup_2542686\"\n\nThe amount of logs may seem noisy, but they do help a lot given the\nfact that the server generates much more noise, for instance, the\nserver logs the syntax error statements. 
And, the replication slots\ndon't get created/dropped every now and then (at max, the\npg_basebackup if at all used and configured to take the backups, say,\nevery 24hrs or so). With the above set of logs debugging for inactive\nreplication slots becomes easier. One can find the root cause and\nanswer questions like \"why there was a huge WAL file growth at some\npoint or when did a replication slot become inactive or how much time\na replication slot was inactive or when did a standby disconnected and\nconnected again or when did a pg_receivewal or pg_recvlogical\nconnected and disconnected so on.\".\n\nHere's the v2 patch. Please provide your thoughts.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Tue, 28 Dec 2021 12:28:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?" }, { "msg_contents": "At Tue, 28 Dec 2021 12:28:07 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Wed, Dec 15, 2021 at 8:32 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > > Here's the patch that adds a LOG message whenever a replication slot\n> > > becomes active and inactive. 
These logs will be extremely useful on\n> > > production servers to debug and analyze inactive replication slot\n> > > issues.\n> > >\n> > > Thoughts?\n> >\n> > If I create a replication slot, I saw the following lines in server log.\n> >\n> > [22054:client backend] LOG: replication slot \"s1\" becomes active\n> > [22054:client backend] DETAIL: The process with PID 22054 acquired it.\n> > [22054:client backend] STATEMENT: select pg_drop_replication_slot('s1');\n> > [22054:client backend] LOG: replication slot \"s1\" becomes inactive\n> > [22054:client backend] DETAIL: The process with PID 22054 released it.\n> > [22054:client backend] STATEMENT: select pg_drop_replication_slot('s1');\n> >\n> > They are apparently too much as if they were DEBUG3 lines. The\n> > process PID shown is of the process the slot operations took place so\n> > I think it conveys no information. The STATEMENT lines are also noisy\n> > for non-ERROR emssages. Couldn't we hide that line?\n> >\n> > That is, how about making the log lines as simple as the follows?\n> >\n> > [17156:walsender] LOG: acquired replication slot \"s1\"\n> > [17156:walsender] LOG: released replication slot \"s1\"\n> \n> Thanks for taking a look at the patch. Here's what I've come up with:\n> \n> for drops:\n(two log lines per slot: acquire->drop)\n> \n> for creates:\n(two log lines per slot: create->release)\n..\n\nTheses are still needlessly verbose. Even for those who want slot\nactivities to be logged are not interested in this detail. This is\nstill debug logs in that sense.\n\n> The amount of logs may seem noisy, but they do help a lot given the\n> fact that the server generates much more noise, for instance, the\n> server logs the syntax error statements. And, the replication slots\n> don't get created/dropped every now and then (at max, the\n> pg_basebackup if at all used and configured to take the backups, say,\n> every 24hrs or so). 
With the above set of logs debugging for inactive\n\nIn a nearby thread, there was a discussion that checkpoint logs are\ntoo noisy to turn on by default. It was finally committed, but I don't\nthink this one can be committed as is, as it is more verbose (IMV) than the\ncheckpoint logs. Thus these logs need to be mutable. Meanwhile I\ndon't think we would willingly add a new knob for this feature. I think we\ncan piggy-back on log_replication_commands for the purpose, changing\nits meaning slightly to \"log replication commands and related\nactivity\".\n\n> replication slots becomes easier. One can find the root cause and\n> answer questions like \"why there was a huge WAL file growth at some\n> point or when did a replication slot become inactive or how much time\n> a replication slot was inactive or when did a standby disconnected and\n> connected again or when did a pg_receivewal or pg_recvlogical\n> connected and disconnected so on.\".\n\nI don't deny it is useful in such cases. If you are fine that the\nlogs are debug-only, making them DEBUG1 would work. But I don't think\nyou are fine with that since I think you are going to turn them on in\nproduction systems.\n\n> Here's the v2 patch. Please provide your thoughts.\n\nThanks! I have three comments on this version.\n\n- I still think \"acquire/release\" logs on creation/dropping should be\n silenced. Adding the third parameter to ReplicationSlotAcquire()\n that can mute the acquiring (as well as the corresponding\n releasing) log will work.\n\n- Need to mute the logs by log_replication_commands. (We could add\n another choice for the variable for this purpose but I think we\n don't need it.)\n\n- The messages are not translatable as the variable parts are\n adjectives. They need to consist of static sentences. The\n combinations of the two properties are 6 (note that persistence is\n tristate) but I don't think we accept that complexity for the\n information. 
So I recommend to just remove the variable parts from\n the messages.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 05 Jan 2022 15:43:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?" }, { "msg_contents": "On Wed, Jan 5, 2022 at 12:13 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n> > Here's the v2 patch. Please provide your thoughts.\n>\n> Thanks! I have three comments on this version.\n\nThanks for your review.\n\n> - I still think \"acquire/release\" logs on creation/dropping should be\n> silenced. Adding the third parameter to ReplicationSlotAcquire()\n> that can mute the acquiring (and as well as the corresponding\n> releasing) log will work.\n\nDone.\n\n> can piggy-back on log_replication_commands for the purpose, changing\n> its meaning slightly to \"log replication commands and related\n> activity\".\n> - Need to mute the logs by log_replication_commands. (We could add\n> another choice for the variable for this purpose but I think we\n> don't need it.)\n\nDone.\n\n> - The messages are not translatable as the variable parts are\n> adjectives. They need to consist of static sentences. The\n> combinations of the two properties are 6 (note that persistence is\n> tristate) but I don't think we accept that complexity for the\n> information. So I recommend to just remove the variable parts from\n> the messages.\n\nDone.\n\nHere's v3, please review it further.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 10 Jan 2022 06:50:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?" 
}, { "msg_contents": "On Mon, Jan 10, 2022 at 6:50 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Jan 5, 2022 at 12:13 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > > Here's the v2 patch. Please provide your thoughts.\n> >\n> > Thanks! I have three comments on this version.\n>\n> Thanks for your review.\n>\n> > - I still think \"acquire/release\" logs on creation/dropping should be\n> > silenced. Adding the third parameter to ReplicationSlotAcquire()\n> > that can mute the acquiring (and as well as the corresponding\n> > releasing) log will work.\n>\n> Done.\n>\n> > can piggy-back on log_replication_commands for the purpose, changing\n> > its meaning slightly to \"log replication commands and related\n> > activity\".\n> > - Need to mute the logs by log_replication_commands. (We could add\n> > another choice for the variable for this purpose but I think we\n> > don't need it.)\n>\n> Done.\n>\n> > - The messages are not translatable as the variable parts are\n> > adjectives. They need to consist of static sentences. The\n> > combinations of the two properties are 6 (note that persistence is\n> > tristate) but I don't think we accept that complexity for the\n> > information. So I recommend to just remove the variable parts from\n> > the messages.\n>\n> Done.\n>\n> Here's v3, please review it further.\n\nHere's the rebased v4 patch.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sun, 27 Feb 2022 15:16:44 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?" 
}, { "msg_contents": "On Sun, Feb 27, 2022 at 3:16 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Jan 10, 2022 at 6:50 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, Jan 5, 2022 at 12:13 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > > > Here's the v2 patch. Please provide your thoughts.\n> > >\n> > > Thanks! I have three comments on this version.\n> >\n> > Thanks for your review.\n> >\n> > > - I still think \"acquire/release\" logs on creation/dropping should be\n> > > silenced. Adding the third parameter to ReplicationSlotAcquire()\n> > > that can mute the acquiring (and as well as the corresponding\n> > > releasing) log will work.\n> >\n> > Done.\n> >\n> > > can piggy-back on log_replication_commands for the purpose, changing\n> > > its meaning slightly to \"log replication commands and related\n> > > activity\".\n> > > - Need to mute the logs by log_replication_commands. (We could add\n> > > another choice for the variable for this purpose but I think we\n> > > don't need it.)\n> >\n> > Done.\n> >\n> > > - The messages are not translatable as the variable parts are\n> > > adjectives. They need to consist of static sentences. The\n> > > combinations of the two properties are 6 (note that persistence is\n> > > tristate) but I don't think we accept that complexity for the\n> > > information. 
So I recommend to just remove the variable parts from\n> > > the messages.\n> >\n> > Done.\n> >\n> > Here's v3, please review it further.\n>\n> Here's the rebased v4 patch.\n\nHere's the rebased v5 patch.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 29 Apr 2022 15:29:51 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "add log messages when replication slots become active and inactive\n (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "If I read the code right, this patch emits logs when\npg_logical_slot_get_changes and pg_replication_slot_advance SQL\nfunctions are called. Is this desirable/useful, for the use case you\nstated at start of thread? I think it is most likely pointless. If you\nget rid of those, then the only acquisitions that would log messages are\nthose in StartReplication and StartLogicalReplication. So I wonder if\nit would be better to leave the API of ReplicationSlotAcquire() alone,\nand instead make StartReplication and StartLogicalReplication\nresponsible for those messages.\n\nI didn't look at the release-side messages you're adding, but I suppose\nit should be symmetrical.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El hombre nunca sabe de lo que es capaz hasta que lo intenta\" (C. Dickens)\n\n\n", "msg_date": "Fri, 29 Apr 2022 12:32:43 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Fri, Apr 29, 2022 at 4:02 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> If I read the code right, this patch emits logs when\n> pg_logical_slot_get_changes and pg_replication_slot_advance SQL\n> functions are called. 
Is this desirable/useful, for the use case you\n> stated at start of thread? I think it is most likely pointless. If you\n> get rid of those, then the only acquisitions that would log messages are\n> those in StartReplication and StartLogicalReplication. So I wonder if\n> it would be better to leave the API of ReplicationSlotAcquire() alone,\n> and instead make StartReplication and StartLogicalReplication\n> responsible for those messages.\n>\n> I didn't look at the release-side messages you're adding, but I suppose\n> it should be symmetrical.\n\nAdding the messages right after ReplicationSlotAcquire in the\nStartReplication and StartLogicalReplication looks okay, but, if we\njust add \"released replication slot \\\"%s\\\" for logical/physical\nreplication\". in StartReplication and StartLogicalReplication right\nafter ReplicationSlotRelease, how about ReplicationSlotRelease in\nWalSndErrorCleanup and ReplicationSlotShmemExit?\n\nThe whole idea is to get to know when and how much time a slot was\ninactive/unused when log_replication_commands is set to true.\n\nWe can think of adding a timestamp column to on-disk replication slot\ndata but that will be too much.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 5 May 2022 14:57:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Thu, May 5, 2022 at 2:57 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Apr 29, 2022 at 4:02 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > If I read the code right, this patch emits logs when\n> > pg_logical_slot_get_changes and pg_replication_slot_advance SQL\n> > functions are called. Is this desirable/useful, for the use case you\n> > stated at start of thread? 
I think it is most likely pointless. If you\n> > get rid of those, then the only acquisitions that would log messages are\n> > those in StartReplication and StartLogicalReplication. So I wonder if\n> > it would be better to leave the API of ReplicationSlotAcquire() alone,\n> > and instead make StartReplication and StartLogicalReplication\n> > responsible for those messages.\n> >\n> > I didn't look at the release-side messages you're adding, but I suppose\n> > it should be symmetrical.\n\nHere's the v6 patch, a much simpler one - no changes to any of the\nexisting function APIs. Please see the sample logs at [1]. There's a\nbit of duplicate code in the v6 patch, if the overall approach looks\nokay, I can remove that too in the next version of the patch.\n\nThoughts?\n\n[1]\n2022-07-25 12:30:14.847 UTC [152873] LOG: acquired physical\nreplication slot \"foo\"\n2022-07-25 12:30:20.878 UTC [152873] LOG: released physical\nreplication slot \"foo\"\n\n2022-07-25 12:49:18.023 UTC [168738] LOG: acquired logical\nreplication slot \"bar\"\n2022-07-25 12:49:28.105 UTC [168738] LOG: released logical\nreplication slot \"bar\"\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 25 Jul 2022 18:31:55 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Mon, Jul 25, 2022 at 6:31 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Here's the v6 patch, a much simpler one - no changes to any of the\n> existing function APIs. Please see the sample logs at [1]. 
There's a\n> bit of duplicate code in the v6 patch, if the overall approach looks\n> okay, I can remove that too in the next version of the patch.\n\nI modified the log_replication_commands description in guc_tables.c.\nPlease review the v7 patch further.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 15 Sep 2022 10:39:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Thu, Sep 15, 2022 at 10:39 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Jul 25, 2022 at 6:31 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Here's the v6 patch, a much simpler one - no changes to any of the\n> > existing function APIs. Please see the sample logs at [1]. There's a\n> > bit of duplicate code in the v6 patch, if the overall approach looks\n> > okay, I can remove that too in the next version of the patch.\n>\n> I modified the log_replication_commands description in guc_tables.c.\n> Please review the v7 patch further.\n>\n\nI see that you have modified the patch to address the comments from\nAlvaro. Personally, I feel it would be better to add such a message at\na centralized location instead of spreading these in different callers\nof slot acquire/release functionality to avoid getting these missed in\nthe new callers in the future. However, if Alvaro and others think\nthat the current style is better then we should go ahead and do it\nthat way. I hope that we should be able to decide on this and get it\ninto PG16. 
Anyone else would like to weigh in here?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 22 Mar 2023 11:33:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "Hi, I had a quick look at the v7 patch.\n\nYou might consider encapsulating some of this logic in new functions like:\n\nvoid\nLogReplicationSlotAquired(bool is_physical, char *slotname)\n{\n int loglevel = log_replication_commands ? LOG : DEBUG3;\n\n if (is_physical)\n ereport(loglevel,\n (errmsg(\"acquired physical replication slot \\\"%s\\\"\", slotname)));\n else\n ereport(loglevel,\n (errmsg(\"acquired logical replication slot \\\"%s\\\"\", slotname)));\n}\n\nvoid\nLogReplicationSlotReleased(bool is_physical, char *slotname)\n{\n int loglevel = log_replication_commands ? LOG : DEBUG3;\n\n if (is_physical)\n ereport(loglevel,\n (errmsg(\"released physical replication slot \\\"%s\\\"\", slotname)));\n else\n ereport(loglevel,\n (errmsg(\"released logical replication slot \\\"%s\\\"\", slotname)));\n}\n\n~~\n\nTHEN\n\nReplicationSlotShmemExit and WalSndErrorCleanup can call it like:\nif (MyReplicationSlot != NULL)\n{\n bool is_physical = SlotIsPhysical(MyReplicationSlot);\n char *slotname = pstrdup(NameStr(MyReplicationSlot->data.name));\n\n ReplicationSlotRelease();\n LogReplicationSlotReleased(is_physical, slotname);\n}\n\n~\n\nStartReplication can call like:\nLogReplicationSlotAquired(true, cmd->slotname);\n...\nLogReplicationSlotReleased(true, cmd->slotname);\n\n~\n\nStartLogicalReplication can call like:\nLogReplicationSlotAquired(false, cmd->slotname);\n...\nLogReplicationSlotReleased(false, cmd->slotname);\n\n\n~~~\n\nTBH, I am not sure for the *current* code if the\nencapsulation is\nworth the trouble or not. 
But maybe at least it helps message\nconsistency and will make it easier if future callers are needed. I\nguess those functions could also give you some central point to\ncomment the intent of this logging? Feel free to take or leave this\ncode suggestion.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 23 Mar 2023 20:21:47 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On 2023-Mar-22, Amit Kapila wrote:\n\n> I see that you have modified the patch to address the comments from\n> Alvaro. Personally, I feel it would be better to add such a message at\n> a centralized location instead of spreading these in different callers\n> of slot acquire/release functionality to avoid getting these missed in\n> the new callers in the future. However, if Alvaro and others think\n> that the current style is better then we should go ahead and do it\n> that way. I hope that we should be able to decide on this and get it\n> into PG16. Anyone else would like to weigh in here?\n\nI like Peter Smith's suggestion downthread.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 23 Mar 2023 11:07:26 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Thu, Mar 23, 2023 at 3:37 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2023-Mar-22, Amit Kapila wrote:\n>\n> > I see that you have modified the patch to address the comments from\n> > Alvaro. 
Personally, I feel it would be better to add such a message at\n> > a centralized location instead of spreading these in different callers\n> > of slot acquire/release functionality to avoid getting these missed in\n> > the new callers in the future. However, if Alvaro and others think\n> > that the current style is better then we should go ahead and do it\n> > that way. I hope that we should be able to decide on this and get it\n> > into PG16. Anyone else would like to weigh in here?\n>\n> I like Peter Smith's suggestion downthread.\n\n+1. Please review the attached v8 patch further.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 23 Mar 2023 23:03:38 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Fri, Mar 24, 2023 at 4:33 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Mar 23, 2023 at 3:37 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2023-Mar-22, Amit Kapila wrote:\n> >\n> > > I see that you have modified the patch to address the comments from\n> > > Alvaro. Personally, I feel it would be better to add such a message at\n> > > a centralized location instead of spreading these in different callers\n> > > of slot acquire/release functionality to avoid getting these missed in\n> > > the new callers in the future. However, if Alvaro and others think\n> > > that the current style is better then we should go ahead and do it\n> > > that way. I hope that we should be able to decide on this and get it\n> > > into PG16. Anyone else would like to weigh in here?\n> >\n> > I like Peter Smith's suggestion downthread.\n>\n> +1. 
Please review the attached v8 patch further.\n>\n\nPatch v8 applied OK, and builds/renders the HTML docs OK, and passes\nthe regression and subscription TAP tests OK.\n\n(Note - I didn't do any additional manual testing, and I've assumed it\nto be covering all the necessary acquire/related logging that you\nwanted).\n\n~~\n\nHere are some minor comments:\n\n1.\n+ ereport(log_replication_commands ? LOG : DEBUG3,\n+ (errmsg(\"acquired physical replication slot \\\"%s\\\"\",\n+ slotname)));\n\nAFAIK those extra parentheses wrapping the \"errmsg\" part are not necessary.\n\n~~\n\n2.\nextern void LogReplicationSlotAquired(bool is_physical, char *slotname);\nextern void LogReplicationSlotReleased(bool is_physical, char *slotname);\n\nThe \"char *slotname\" params of those helper functions should probably\nbe declared and defined as \"const char *slotname\".\n\n~~\n\nOtherwise, from a code review perspective the patch v8 LGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 24 Mar 2023 08:40:19 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Thu, Mar 23, 2023, at 2:33 PM, Bharath Rupireddy wrote:\n> On Thu, Mar 23, 2023 at 3:37 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2023-Mar-22, Amit Kapila wrote:\n> >\n> > > I see that you have modified the patch to address the comments from\n> > > Alvaro. Personally, I feel it would be better to add such a message at\n> > > a centralized location instead of spreading these in different callers\n> > > of slot acquire/release functionality to avoid getting these missed in\n> > > the new callers in the future. However, if Alvaro and others think\n> > > that the current style is better then we should go ahead and do it\n> > > that way. 
I hope that we should be able to decide on this and get it\n> > > into PG16. Anyone else would like to weigh in here?\n> >\n> > I like Peter Smith's suggestion downthread.\n> \n> +1. Please review the attached v8 patch further.\nIf you are adding separate functions as suggested, you should add a comment at\nthe top of ReplicationSlotAcquire() and ReplicationSlotRelease() functions\nsaying that LogReplicationSlotAquired() and LogReplicationSlotReleased()\nfunctions should be called respectively after it.\n\nMy suggestion is that the functions should have the same name with a \"Log\"\nprefix. On of them has a typo \"Aquired\" in its name. Hence,\nLogReplicationSlotAcquire() and LogReplicationSlotRelease() as names. It is\neasier to find if someone is grepping by the origin function.\n\nI prefer a sentence that includes a verb.\n\n physical replication slot \\\"%s\\\" is acquired\n logical replication slot \\\"%s\\\" is released\n\nIsn't the PID important for this use case? If so, of course, you can rely on\nlog_line_prefix (%p) but if the PID is crucial for an investigation then it\nshould also be included in the message.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Thu, 23 Mar 2023 18:40:26 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was\n Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Fri, Mar 24, 2023 at 3:11 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> If you are adding separate functions as suggested, you should add a comment at\n> the top of ReplicationSlotAcquire() and ReplicationSlotRelease() functions\n> saying that LogReplicationSlotAquired() and LogReplicationSlotReleased()\n> functions should be called respectively after it.\n\nDone.\n\n> My suggestion is that the functions should have the same name with a \"Log\"\n> prefix. On of them has a typo \"Aquired\" in its name. Hence,\n> LogReplicationSlotAcquire() and LogReplicationSlotRelease() as names. 
It is\n> easier to find if someone is grepping by the origin function.\n\nDone.\n\n> I prefer a sentence that includes a verb.\n>\n> physical replication slot \\\"%s\\\" is acquired\n> logical replication slot \\\"%s\\\" is released\n\nHm, changed for now. But I'll leave it to the committer's discretion.\n\n> Isn't the PID important for this use case? If so, of course, you can rely on\n> log_line_prefix (%p) but if the PID is crucial for an investigation then it\n> should also be included in the message.\n\nOn Fri, Mar 24, 2023 at 3:10 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Patch v8 applied OK, and builds/renders the HTML docs OK, and passes\n> the regression and subscription TAP tests OK.\n>\n> Here are some minor comments:\n>\n> 1.\n> + ereport(log_replication_commands ? LOG : DEBUG3,\n> + (errmsg(\"acquired physical replication slot \\\"%s\\\"\",\n> + slotname)));\n>\n> AFAIK those extra parentheses wrapping the \"errmsg\" part are not necessary.\n\nDone\n\n> 2.\n> extern void LogReplicationSlotAquired(bool is_physical, char *slotname);\n> extern void LogReplicationSlotReleased(bool is_physical, char *slotname);\n>\n> The \"char *slotname\" params of those helper functions should probably\n> be declared and defined as \"const char *slotname\".\n\nDone.\n\n> Otherwise, from a code review perspective the patch v8 LGTM.\n\nThanks. 
Please have a look at the v9 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 24 Mar 2023 08:55:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Thu, 23 Mar 2023 at 23:30, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > + ereport(log_replication_commands ? LOG : DEBUG3,\n> > + (errmsg(\"acquired physical replication slot \\\"%s\\\"\",\n> > + slotname)));\n\nSo this is just a bit of bike-shedding but I don't feel like these log\nmessages really meet the standard we set for our logging. Like what\ndid the acquiring? What does \"acquired\" actually mean for a\nreplication slot? Is there not any meta information about the\nacquisition that can give more context to the reader to make this\nmessage more meaningful?\n\nI would expect a log message like this to say, I dunno, something like\n\"physical replication slot \\\"%s\\\" acquired by streaming TCP connection\nto 192.168.0.1:999 at LSN ... 
with xxxMB of logs to read\"\n\nI even would be wondering if the other end shouldn't also be logging a\ncorresponding log and we shouldn't be going out of our way to ensure\nthere's enough information to match them up and presenting them in a\nway that makes that easy.\n\n-- \ngreg\n\n\n", "msg_date": "Thu, 23 Mar 2023 23:52:31 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On 2023-Mar-23, Greg Stark wrote:\n\n> On Thu, 23 Mar 2023 at 23:30, Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > > + ereport(log_replication_commands ? LOG : DEBUG3,\n> > > + (errmsg(\"acquired physical replication slot \\\"%s\\\"\",\n> > > + slotname)));\n> \n> So this is just a bit of bike-shedding but I don't feel like these log\n> messages really meet the standard we set for our logging. Like what\n> did the acquiring? What does \"acquired\" actually mean for a\n> replication slot? Is there not any meta information about the\n> acquisition that can give more context to the reader to make this\n> message more meaningful?\n> \n> I would expect a log message like this to say, I dunno, something like\n> \"physical replication slot \\\"%s\\\" acquired by streaming TCP connection\n> to 192.168.0.1:999 at LSN ... with xxxMB of logs to read\"\n\nHmm, I don't disagree with your argument in principle, but I think this\nproposal is going too far. I think stating the PID is more than\nsufficient. 
And I don't think we need this patch to go great lengths to\nexplain what acquisition is, either; I mean, maybe that's a good thing\nto have, but then that's a different patch.\n\n> I even would be wondering if the other end shouldn't also be logging a\n> corresponding log and we shouldn't be going out of our way to ensure\n> there's enough information to match them up and presenting them in a\n> way that makes that easy.\n\nHmm, you should be able to match things using the connection\ninformation. I don't think the slot acquisition operation in itself is\nthat important.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Use it up, wear it out, make it do, or do without\"\n\n\n", "msg_date": "Fri, 24 Mar 2023 10:35:47 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Fri, Mar 24, 2023 at 3:05 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2023-Mar-23, Greg Stark wrote:\n>\n> > On Thu, 23 Mar 2023 at 23:30, Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > > + ereport(log_replication_commands ? LOG : DEBUG3,\n> > > > + (errmsg(\"acquired physical replication slot \\\"%s\\\"\",\n> > > > + slotname)));\n> >\n> > So this is just a bit of bike-shedding but I don't feel like these log\n> > messages really meet the standard we set for our logging. Like what\n> > did the acquiring? What does \"acquired\" actually mean for a\n> > replication slot? 
Is there not any meta information about the\n> > acquisition that can give more context to the reader to make this\n> > message more meaningful?\n> >\n> > I would expect a log message like this to say, I dunno, something like\n> > \"physical replication slot \\\"%s\\\" acquired by streaming TCP connection\n> > to 192.168.0.1:999 at LSN ... with xxxMB of logs to read\"\n>\n> Hmm, I don't disagree with your argument in principle, but I think this\n> proposal is going too far. I think stating the PID is more than\n> sufficient.\n\nDo you mean to have something like \"physical/logical replication slot\n\\\"%s\\\" is released/acquired by PID %d\", MyProcPid? If yes, the\nlog_line_prefix already contains PID right? Or do we want to cover the\ncases when someone changes log_line_prefix to not contain PID?\n\n> And I don't think we need this patch to go great lengths to\n> explain what acquisition is, either; I mean, maybe that's a good thing\n> to have, but then that's a different patch.\n>\n> > I even would be wondering if the other end shouldn't also be logging a\n> > corresponding log and we shouldn't be going out of our way to ensure\n> > there's enough information to match them up and presenting them in a\n> > way that makes that easy.\n>\n> Hmm, you should be able to match things using the connection\n> information. I don't think the slot acquisition operation in itself is\n> that important.\n\nYeah, the intention of the patch is to track the patterns of slot\nacquisitions and releases to aid analysis. 
Of course, this information\nalone may not help but when matched with others in the logs, it will.\n\nThe v9 patch was failing because I was using MyReplicationSlot after\nit got reset by slot release, attached v10 patch fixes it.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 27 Mar 2023 11:08:01 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Mon, Mar 27, 2023 at 11:08 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> The v9 patch was failing because I was using MyReplicationSlot after\n> it got reset by slot release, attached v10 patch fixes it.\n>\n\n+ *\n+ * Note: use LogReplicationSlotAcquire() if needed, to log a message after\n+ * acquiring the replication slot.\n */\n void\n ReplicationSlotAcquire(const char *name, bool nowait)\n@@ -542,6 +554,9 @@ retry:\n\nWhen does it need to be logged? For example, recently, we added one\nmore slot acquisition/release call in\nbinary_upgrade_logical_slot_has_caught_up(); it is not clear from the\ncomments whether we need to LOG it or not. 
I guess at some place like\natop LogReplicationSlotAcquire() we should document in a bit more\nspecific way as to when is this expected to be called.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 1 Nov 2023 08:33:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Wed, Nov 1, 2023 at 2:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Mar 27, 2023 at 11:08 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > The v9 patch was failing because I was using MyReplicationSlot after\n> > it got reset by slot release, attached v10 patch fixes it.\n> >\n>\n> + *\n> + * Note: use LogReplicationSlotAcquire() if needed, to log a message after\n> + * acquiring the replication slot.\n> */\n> void\n> ReplicationSlotAcquire(const char *name, bool nowait)\n> @@ -542,6 +554,9 @@ retry:\n>\n> When does it need to be logged? For example, recently, we added one\n> more slot acquisition/release call in\n> binary_upgrade_logical_slot_has_caught_up(); it is not clear from the\n> comments whether we need to LOG it or not. I guess at some place like\n> atop LogReplicationSlotAcquire() we should document in a bit more\n> specific way as to when is this expected to be called.\n>\n\nI agree. Just saying \"if needed\" in those function comments doesn't\nhelp with knowing how to judge when logging is needed or not.\n\n~\n\nLooking back at the thread history it seems the function comment was\nadded after Euler [1] suggested (\"... 
you should add a comment at the\ntop of ReplicationSlotAcquire() and ReplicationSlotRelease() functions\nsaying that LogReplicationSlotAquired() and\nLogReplicationSlotReleased() functions should be called respectively\nafter it.\")\n\nBut that's not quite compatible with what Alvaro [2] had written long\nback (\"... the only acquisitions that would log messages are those in\nStartReplication and StartLogicalReplication.\").\n\nIn other words, ReplicationSlotAcquire/ReplicationSlotRelease is\ncalled by more places than you care to log from.\n\n~\n\nAdding a better explanatory comment than \"if needed\" will be good, and\nmaybe that is all that is necessary. I'm not sure.\n\nOTOH, if you have to explain that logging is only wanted for a couple\nof scenarios, then it raises some doubts about the usefulness of\nhaving a common function in the first place. I had the same doubts\nback in March [3] (\"I am not sure for the *current* code if the\nencapsulation is worth the trouble or not.\").\n\n======\n[1] Euler - https://www.postgresql.org/message-id/c42d5634-ca9b-49a7-85cd-9eff9feb33b4%40app.fastmail.com\n[2] Alvaro - https://www.postgresql.org/message-id/202204291032.qfvyuqxkjnjw%40alvherre.pgsql\n[3] Peter - https://www.postgresql.org/message-id/CAHut%2BPu6Knwooc_NckMxszGrAJnytgpMadtoJ-OA-SFWT%2BGFYw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Austalia\n\n\n", "msg_date": "Thu, 2 Nov 2023 12:48:34 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Thu, Nov 2, 2023 at 7:19 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> But that's not quite compatible with what Alvaro [2] had written long\n> back (\"... 
the only acquisitions that would log messages are those in\n> StartReplication and StartLogicalReplication.\").\n>\n> In other words, ReplicationSlotAcquire/ReplicationSlotRelease is\n> called by more places than you care to log from.\n\nI refreshed my thoughts for this patch and I think it's enough if\nwalsenders alone log messages when slots become active and inactive.\nHow about something like the attached v11 patch?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 5 Nov 2023 04:00:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Sun, Nov 5, 2023 at 4:01 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Nov 2, 2023 at 7:19 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > But that's not quite compatible with what Alvaro [2] had written long\n> > back (\"... the only acquisitions that would log messages are those in\n> > StartReplication and StartLogicalReplication.\").\n> >\n> > In other words, ReplicationSlotAcquire/ReplicationSlotRelease is\n> > called by more places than you care to log from.\n>\n> I refreshed my thoughts for this patch and I think it's enough if\n> walsenders alone log messages when slots become active and inactive.\n> How about something like the attached v11 patch?\n>\n\n+ * This function is currently used only in walsender.\n+ */\n+void\n+ReplicationSlotAcquireAndLog(const char *name, bool nowait)\n\nBTW, is the reason for using it only in walsender is that it is a\nbackground process and it is not very apparent whether the slot is\ncreated, and for foreground processes, it is a bit clear when the\ncommand is executed. 
If so, the other alternative is to either use a\nparameter to the existing function or directly use am_walsender flag\nto distinguish when to print the message in acquire/release calls.\n\nCan you please tell me the use case of this additional message?\n\nA few other minor comments:\n1.\n+ Causes each replication command and related activity to be logged in\n+ the server log.\n\nCan we be bit more specific by changing to something like: \"Causes\neach replication command and slot acquisition/release to be logged in\nthe server log.\"\n\n2.\n+ ereport(log_replication_commands ? LOG : DEBUG1,\n+ (errmsg(\"walsender process with PID %d acquired %s replication slot \\\"%s\\\"\",\n\nIt seems PID and process name is quite unlike what we print in other\nsimilar messages. For example, see below messages when we start\nreplication via pub/sub :\n\n2023-11-06 08:41:57.867 IST [24400] LOG: received replication\ncommand: CREATE_REPLICATION_SLOT \"sub1\" LOGICAL pgoutput (SNAPSHOT\n'nothing')\n2023-11-06 08:41:57.867 IST [24400] STATEMENT:\nCREATE_REPLICATION_SLOT \"sub1\" LOGICAL pgoutput (SNAPSHOT 'nothing')\n...\n...\n2023-11-06 08:41:57.993 IST [22332] LOG: walsender process with PID\n22332 acquired logical replication slot \"sub1\"\n2023-11-06 08:41:57.993 IST [22332] STATEMENT: START_REPLICATION SLOT\n\"sub1\" LOGICAL 0/0 (proto_version '4', origin 'any', publication_names\n'\"pub1\"')\n...\n...\n2023-11-06 08:41:58.015 IST [22332] LOG: starting logical decoding\nfor slot \"sub1\"\n2023-11-06 08:41:58.015 IST [22332] DETAIL: Streaming transactions\ncommitting after 0/1522730, reading WAL from 0/15226F8.\n2023-11-06 08:41:58.015 IST [22332] STATEMENT: START_REPLICATION SLOT\n\"sub1\" LOGICAL 0/0 (proto_version '4', origin 'any', publication_names\n'\"pub1\"')\n\nWe can get the PID from the log line as for other logs and I don't see\nthe process name printed anywhere else.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 6 Nov 2023 09:08:53 +0530", "msg_from": 
"Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "Hi,\n\nFWIW, I used a small script to set up the following environment to\nhelp observe the log messages written by this patch.\n\nNode1 = primary; it creates a PUBLICATION\nNode2 = physical standby; the publication is replicated here\nNode3 = subscriber; it subscribes to the publication now on the standby Node2\n\n~\n\nIn the log files you can see:\n\n1. The \"physical\" message from the walsender of the primary (Node1)\ne.g. 2023-11-06 18:59:58.094 AEDT [17091] LOG: walsender process with\nPID 17091 acquired physical replication slot \"pg_basebackup_17091\"\n\n2. The \"logical\" message from the pub/sub walsender of the standby (Node2)\ne.g. 2023-11-06 19:16:29.053 AEDT [12774] LOG: walsender process with\nPID 12774 acquired logical replication slot \"sub1\"\n\n~\n\nPSA the test script and logs\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 7 Nov 2023 12:33:07 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Mon, Nov 6, 2023 at 9:09 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Nov 5, 2023 at 4:01 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Thu, Nov 2, 2023 at 7:19 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > But that's not quite compatible with what Alvaro [2] had written long\n> > > back (\"... 
the only acquisitions that would log messages are those in\n> > > StartReplication and StartLogicalReplication.\").\n> > >\n> > > In other words, ReplicationSlotAcquire/ReplicationSlotRelease is\n> > > called by more places than you care to log from.\n> >\n> > I refreshed my thoughts for this patch and I think it's enough if\n> > walsenders alone log messages when slots become active and inactive.\n> > How about something like the attached v11 patch?\n> >\n>\n> + * This function is currently used only in walsender.\n> + */\n> +void\n> +ReplicationSlotAcquireAndLog(const char *name, bool nowait)\n>\n> BTW, is the reason for using it only in walsender is that it is a\n> background process and it is not very apparent whether the slot is\n> created, and for foreground processes, it is a bit clear when the\n> command is executed.\n>\n> Can you please tell me the use case of this additional message?\n\nReplication slot acquisitions and releases by backends say when\nrunning pg_replication_slot_advance or pg_logical_slot_get_changes or\npg_drop_replication_slot or pg_create_{physical,\nlogical}_replication_slot are transient unlike walsenders which\ncomparatively hold slots for longer durations. Therefore, I've added\nthem only for walsenders. These messages help to know the lifetime of\na replication slot - one can know how long a streaming standby or\nlogical subscriber is down, IOW, how long a replication slot is\ninactive in production. 
For instance, the time between released and\nacquired slots in the below messages is the inactive replication slot\nduration.\n\n2023-11-13 11:06:34.338 UTC [470262] LOG: acquired physical\nreplication slot \"sb_repl_slot\"\n2023-11-13 11:06:34.338 UTC [470262] STATEMENT: START_REPLICATION\nSLOT \"sb_repl_slot\" 0/3000000 TIMELINE 1\n2023-11-13 11:09:24.918 UTC [470262] LOG: released physical\nreplication slot \"sb_repl_slot\"\n2023-11-13 12:01:40.530 UTC [470967] LOG: acquired physical\nreplication slot \"sb_repl_slot\"\n2023-11-13 12:01:40.530 UTC [470967] STATEMENT: START_REPLICATION\nSLOT \"sb_repl_slot\" 0/3000000 TIMELINE 1\n\n> If so, the other alternative is to either use a\n> parameter to the existing function or directly use am_walsender flag\n> to distinguish when to print the message in acquire/release calls.\n\nDone that way. PSA v12.\n\n> A few other minor comments:\n> 1.\n> + Causes each replication command and related activity to be logged in\n> + the server log.\n>\n> Can we be bit more specific by changing to something like: \"Causes\n> each replication command and slot acquisition/release to be logged in\n> the server log.\"\n\nDone.\n\n> 2.\n> + ereport(log_replication_commands ? LOG : DEBUG1,\n> + (errmsg(\"walsender process with PID %d acquired %s replication slot \\\"%s\\\"\",\n>\n> It seems PID and process name is quite unlike what we print in other\n> similar messages. 
For example, see below messages when we start\n> replication via pub/sub :\n>\n> We can get the PID from the log line as for other logs and I don't see\n> the process name printed anywhere else.\n\nThere was a comment upthread to have PID printed, but I agree to be\nconsistent and changed the messages to be: acquired physical/logical\nreplication slot \"foo\" and released physical/logical replication slot\n\"foo\".\n\nPSA v12 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 13 Nov 2023 17:43:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Mon, Nov 13, 2023 at 5:43 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Nov 6, 2023 at 9:09 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sun, Nov 5, 2023 at 4:01 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Thu, Nov 2, 2023 at 7:19 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > But that's not quite compatible with what Alvaro [2] had written long\n> > > > back (\"... 
the only acquisitions that would log messages are those in\n> > > > StartReplication and StartLogicalReplication.\").\n> > > >\n> > > > In other words, ReplicationSlotAcquire/ReplicationSlotRelease is\n> > > > called by more places than you care to log from.\n> > >\n> > > I refreshed my thoughts for this patch and I think it's enough if\n> > > walsenders alone log messages when slots become active and inactive.\n> > > How about something like the attached v11 patch?\n> > >\n> >\n> > + * This function is currently used only in walsender.\n> > + */\n> > +void\n> > +ReplicationSlotAcquireAndLog(const char *name, bool nowait)\n> >\n> > BTW, is the reason for using it only in walsender is that it is a\n> > background process and it is not very apparent whether the slot is\n> > created, and for foreground processes, it is a bit clear when the\n> > command is executed.\n> >\n> > Can you please tell me the use case of this additional message?\n>\n> Replication slot acquisitions and releases by backends say when\n> running pg_replication_slot_advance or pg_logical_slot_get_changes or\n> pg_drop_replication_slot or pg_create_{physical,\n> logical}_replication_slot are transient unlike walsenders which\n> comparatively hold slots for longer durations. Therefore, I've added\n> them only for walsenders. These messages help to know the lifetime of\n> a replication slot - one can know how long a streaming standby or\n> logical subscriber is down, IOW, how long a replication slot is\n> inactive in production. 
For instance, the time between released and\n> acquired slots in the below messages is the inactive replication slot\n> duration.\n>\n> 2023-11-13 11:06:34.338 UTC [470262] LOG: acquired physical\n> replication slot \"sb_repl_slot\"\n> 2023-11-13 11:06:34.338 UTC [470262] STATEMENT: START_REPLICATION\n> SLOT \"sb_repl_slot\" 0/3000000 TIMELINE 1\n> 2023-11-13 11:09:24.918 UTC [470262] LOG: released physical\n> replication slot \"sb_repl_slot\"\n> 2023-11-13 12:01:40.530 UTC [470967] LOG: acquired physical\n> replication slot \"sb_repl_slot\"\n> 2023-11-13 12:01:40.530 UTC [470967] STATEMENT: START_REPLICATION\n> SLOT \"sb_repl_slot\" 0/3000000 TIMELINE 1\n>\n> > If so, the other alternative is to either use a\n> > parameter to the existing function or directly use am_walsender flag\n> > to distinguish when to print the message in acquire/release calls.\n>\n> Done that way. PSA v12.\n>\n> > A few other minor comments:\n> > 1.\n> > + Causes each replication command and related activity to be logged in\n> > + the server log.\n> >\n> > Can we be bit more specific by changing to something like: \"Causes\n> > each replication command and slot acquisition/release to be logged in\n> > the server log.\"\n>\n> Done.\n>\n> > 2.\n> > + ereport(log_replication_commands ? LOG : DEBUG1,\n> > + (errmsg(\"walsender process with PID %d acquired %s replication slot \\\"%s\\\"\",\n> >\n> > It seems PID and process name is quite unlike what we print in other\n> > similar messages. 
For example, see below messages when we start\n> > replication via pub/sub :\n> >\n> > We can get the PID from the log line as for other logs and I don't see\n> > the process name printed anywhere else.\n>\n> There was a comment upthread to have PID printed, but I agree to be\n> consistent and changed the messages to be: acquired physical/logical\n> replication slot \"foo\" and released physical/logical replication slot\n> \"foo\".\n>\n> PSA v12 patch.\n\nCompiler isn't happy with v12\nhttps://cirrus-ci.com/task/5543061376204800?logs=gcc_warning#L405. PSA\nv13 patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 13 Nov 2023 19:47:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "Here are some review comments for patch v13-0001.\n\n======\ndoc/src/sgml/config.sgml\n\n1.\n <para>\n- Causes each replication command to be logged in the server log.\n- See <xref linkend=\"protocol-replication\"/> for more information about\n- replication command. The default value is <literal>off</literal>.\n- Only superusers and users with the appropriate <literal>SET</literal>\n- privilege can change this setting.\n+ Causes each replication command and slot acquisition/release to be\n+ logged in the server log. See <xref linkend=\"protocol-replication\"/>\n+ for more information about replication command. The default value is\n+ <literal>off</literal>. 
Only superusers and users with the appropriate\n+ <literal>SET</literal> privilege can change this setting.\n </para>\n\nShould that also mention about walsender?\n\ne.g.\n\"and slot acquisition/release\" ==> \"and <literal>walsender</literal>\nslot acquisition/release\"\n\n======\nsrc/backend/replication/slot.c\n\n2. ReplicationSlotAcquire\n\n if (SlotIsLogical(s))\n pgstat_acquire_replslot(s);\n+\n+ if (am_walsender)\n+ ereport(log_replication_commands ? LOG : DEBUG1,\n+ errmsg_internal(\"acquired %s replication slot \\\"%s\\\"\",\n+ SlotIsPhysical(MyReplicationSlot) ? \"physical\" : \"logical\",\n+ NameStr(MyReplicationSlot->data.name)));\n\n2a.\nInstead of calling SlotIsLogical() and then again calling\nSlotIsPhysical(), it might be better to assign this one time to a\nlocal variable.\n\n~\n\n2b.\nIMO it is better to continue using variable 's' here instead of\n'MyReplicationSlot'. Code is not only shorter but is also consistent\nwith the rest of the function which never uses MyReplicationSlot, even\nin the places where it could have.\n\n~\n\nSUGGESTION (for #2a and #2b)\nis_logical = SlotIsLogical(s);\nif (is_logical)\n pgstat_acquire_replslot(s);\n\nif (am_walsender)\n ereport(log_replication_commands ? LOG : DEBUG1,\n errmsg_internal(\"acquired %s replication slot \\\"%s\\\"\",\n is_logical ? \"logical\" : \"physical\", NameStr(s->data.name)));\n\n~~~\n\n3. 
ReplicationSlotRelease\n\n ReplicationSlotRelease(void)\n {\n ReplicationSlot *slot = MyReplicationSlot;\n+ char *slotname = NULL; /* keep compiler quiet */\n+ bool is_physical = false; /* keep compiler quiet */\n\n Assert(slot != NULL && slot->active_pid != 0);\n\n+ if (am_walsender)\n+ {\n+ slotname = pstrdup(NameStr(MyReplicationSlot->data.name));\n+ is_physical = SlotIsPhysical(MyReplicationSlot);\n+ }\n+\n\n3a.\nNotice 'MyReplicationSlot' is already assigned to the local 'slot'\nvariable, so IMO it is better if this new code also uses that 'slot'\nvariable for consistency with the rest of the function.\n\n~\n\n3b.\nConsider flipping the flag to be 'is_logical' instead of\n'is_physical', so the ereport substitution will match the other\nReplicationSlotAcquire code suggested above (#2a).\n\n~\n\nSUGGESTION (For #3a and #3b)\nif (am_walsender)\n{\n slotname = pstrdup(NameStr(slot->data.name));\n is_logical = SlotIsLogical(slot);\n}\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 14 Nov 2023 10:19:50 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Tue, Nov 14, 2023 at 4:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for patch v13-0001.\n\nThanks.\n\n> ======\n> doc/src/sgml/config.sgml\n>\n> 1.\n>\n> Should that also mention about walsender?\n>\n> e.g.\n> \"and slot acquisition/release\" ==> \"and <literal>walsender</literal>\n> slot acquisition/release\"\n\nChanged.\n\n> 2a.\n> Instead of calling SlotIsLogical() and then again calling\n> SlotIsPhysical(), it might be better to assign this one time to a\n> local variable.\n>\n> 2b.\n> IMO it is better to continue using variable 's' here instead of\n> 'MyReplicationSlot'. 
Code is not only shorter but is also consistent\n> with the rest of the function which never uses MyReplicationSlot, even\n> in the places where it could have.\n>\n> SUGGESTION (for #2a and #2b)\n> is_logical = SlotIsLogical(s);\n> if (is_logical)\n> pgstat_acquire_replslot(s);\n>\n> if (am_walsender)\n> ereport(log_replication_commands ? LOG : DEBUG1,\n> errmsg_internal(\"acquired %s replication slot \\\"%s\\\"\",\n> is_logical ? \"logical\" : \"physical\", NameStr(s->data.name)));\n\nUse of a separate variable isn't good IMO, I used SlotIsLogical(s); directly.\n\n> 3a.\n> Notice 'MyReplicationSlot' is already assigned to the local 'slot'\n> variable, so IMO it is better if this new code also uses that 'slot'\n> variable for consistency with the rest of the function.\n>\n> 3b.\n> Consider flipping the flag to be 'is_logical' instead of\n> 'is_physical', so the ereport substitution will match the other\n> ReplicationSlotAcquirecode suggested above (#2a).\n>\n> SUGGESTION (For #3a and #3b)\n> if (am_walsender)\n> {\n> slotname = pstrdup(NameStr(slot->data.name));\n> is_logical = SlotIsLogical(slot);\n> }\n\nDone.\n\nPSA v14 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 14 Nov 2023 12:31:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "Hi,\n\nThanks for addressing my previous comments. Patch v14-0001 looks good\nto me, except I have one question:\n\nThe patch uses errmsg_internal() for the logging, but I noticed the\nonly other code using GUC 'log_replication_commands' has errmsg()\ninstead of errmsg_internal(). 
Isn't it better to be consistent with\nthe existing code?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 15 Nov 2023 10:10:14 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Wed, Nov 15, 2023 at 4:40 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Thanks for addressing my previous comments. Patch v14-0001 looks good\n> to me, except I have one question:\n>\n> The patch uses errmsg_internal() for the logging, but I noticed the\n> only other code using GUC 'log_replication_commands' has errmsg()\n> instead of errmsg_internal(). Isn't it better to be consistent with\n> the existing code?\n>\n\nI agree that we should errmsg here. If we read the description of\nerrmsg_internal() [1], it is recommended to be used for \"cannot\nhappen\" cases where we don't want to spend translation effort which is\nnot the case here. Also, similar to the below message, we should add a\ncomment for a translator.\n\nereport(LOG,\n/* translator: %s is SIGKILL or SIGABRT */\n(errmsg(\"issuing %s to recalcitrant children\",\nsend_abort_for_kill ? 
\"SIGABRT\" : \"SIGKILL\")));\n\nAnother minor comment:\n+ Causes each replication command and <literal>walsender</literal>\n+ process replication slot acquisition/release to be logged in the server\n+ log.\n\nIsn't it better to use process's instead of process in the above sentence?\n\n[1] -https://www.postgresql.org/docs/devel/error-message-reporting.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 15 Nov 2023 09:49:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Wed, Nov 15, 2023 at 9:49 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Nov 15, 2023 at 4:40 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Thanks for addressing my previous comments. Patch v14-0001 looks good\n> > to me, except I have one question:\n> >\n> > The patch uses errmsg_internal() for the logging, but I noticed the\n> > only other code using GUC 'log_replication_commands' has errmsg()\n> > instead of errmsg_internal(). Isn't it better to be consistent with\n> > the existing code?\n> >\n>\n> I agree that we should errmsg here. If we read the description of\n> errmsg_internal() [1], it is recommended to be used for \"cannot\n> happen\" cases where we don't want to spend translation effort which is\n> not the case here.\n\nI chose not to translate the newly added messages as they are only\nwritten to server logs not sent to the client. However, I've changed\nto errmsg, after looking at the errmsg_internal docs.\n\n> Also, similar to the below message, we should add a\n> comment for a translator.\n>\n> ereport(LOG,\n> /* translator: %s is SIGKILL or SIGABRT */\n> (errmsg(\"issuing %s to recalcitrant children\",\n> send_abort_for_kill ? 
\"SIGABRT\" : \"SIGKILL\")));\n\nAdded.\n\n> Another minor comment:\n> + Causes each replication command and <literal>walsender</literal>\n> + process replication slot acquisition/release to be logged in the server\n> + log.\n>\n> Isn't it better to use process's instead of process in the above sentence?\n\nChanged.\n\nPSA v15 patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 15 Nov 2023 11:00:41 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "Patch v15-0001 LGTM.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 15 Nov 2023 17:13:16 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Wed, Nov 15, 2023 at 11:00 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> PSA v15 patch.\n>\n\nThe patch looks good to me. I have slightly modified the translator\nmessage and commit message in the attached. 
I'll push this tomorrow\nunless there are any comments.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 15 Nov 2023 14:20:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "Translation-wise, this doesn't work, because you're building a string.\nThere's no reason to think that the words \"logical\" and \"physical\"\nshould stay untranslated; the message would make no sense, or at least\nwould be very ugly.\n\nYou should do something like\n\nif (am_walsender)\n{\n\tereport(log_replication_commands ? LOG : DEBUG1,\n\t\tSlotIsLogical(s) ? errmsg(\"acquired logical replication slot \\\"%s\\\"\", NameStr(s->data.name)) :\n\t\terrmsg(\"acquired physical replication slot \\\"%s\\\"\", NameStr(s->data.name)));\n}\n\n(Obviously, lose the \"translator:\" comments since they are unnecessary)\n\n\nIf you really want to keep the \"logical\"/\"physical\" word untranslated,\nyou need to split it out of the sentence somehow. But it would be\nreally horrible IMO. Like\n\nerrmsg(\"acquired replication slot \\\"%s\\\" of type \\\"%s\\\"\",\n NameStr(s->data.name), SlotIsLogical(s) ? 
\"logical\" : \"physical\")\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Use it up, wear it out, make it do, or do without\"\n\n\n", "msg_date": "Wed, 15 Nov 2023 11:28:40 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Wed, Nov 15, 2023 at 3:58 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Translation-wise, this doesn't work, because you're building a string.\n> There's no reason to think that the words \"logical\" and \"physical\"\n> should stay untranslated; the message would make no sense, or at least\n> would be very ugly.\n>\n> You should do something like\n>\n> if (am_walsender)\n> {\n> ereport(log_replication_commands ? LOG : DEBUG1,\n> SlotIsLogical(s) ? errmsg(\"acquired logical replication slot \\\"%s\\\"\", NameStr(s->data.name)) :\n> errmsg(\"acquired physical replication slot \\\"%s\\\"\", NameStr(s->data.name)));\n> }\n\nThis seems better, so done that way.\n\n> (Obviously, lose the \"translator:\" comments since they are unnecessary)\n\nThe translator message now indicates that the remaining %s denotes the\nreplication slot name.\n\nPSA v17 patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 16 Nov 2023 00:02:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "Some minor comments for v17-0001.\n\n======\n\n1.\n+ ereport(log_replication_commands ? 
LOG : DEBUG1,\n+ SlotIsLogical(s)\n+ /* translator: %s is name of the replication slot */\n+ ? errmsg(\"acquired logical replication slot \\\"%s\\\"\",\n+ NameStr(s->data.name))\n+ : errmsg(\"acquired physical replication slot \\\"%s\\\"\",\n+ NameStr(s->data.name)));\n\n1a.\nFWIW, if the ternary was inside the errmsg, there would be less code\nduplication.\n\n~\n\n1b.\nI searched HEAD code and did not find any \"translator:\" comments for\njust ordinary slot name substitutions like these; AFAICT that comment\nis not necessary anymore.\n\n~\n\nSUGGESTION (#1a and #1b)\n\nereport(log_replication_commands ? LOG : DEBUG1,\n errmsg(SlotIsLogical(s)\n ? \"acquired logical replication slot \\\"%s\\\"\"\n : \"acquired physical replication slot \\\"%s\\\"\",\n NameStr(s->data.name)));\n\n~~~\n\n2.\nDitto for the other ereport.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 16 Nov 2023 09:18:29 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Thu, Nov 16, 2023 at 3:48 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> ~\n>\n> SUGGESTION (#1a and #1b)\n>\n> ereport(log_replication_commands ? LOG : DEBUG1,\n> errmsg(SlotIsLogical(s)\n> ? \"acquired logical replication slot \\\"%s\\\"\"\n> : \"acquired physical replication slot \\\"%s\\\"\",\n> NameStr(s->data.name)));\n>\n> ~~~\n>\n\nPersonally, I prefer the way Bharath had in his patch. 
Did you see any\npreferred way in the existing code?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 16 Nov 2023 06:48:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Wed, Nov 15, 2023 at 3:58 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Translation-wise, this doesn't work, because you're building a string.\n> There's no reason to think that the words \"logical\" and \"physical\"\n> should stay untranslated; the message would make no sense, or at least\n> would be very ugly.\n>\n> You should do something like\n>\n> if (am_walsender)\n> {\n> ereport(log_replication_commands ? LOG : DEBUG1,\n> SlotIsLogical(s) ? errmsg(\"acquired logical replication slot \\\"%s\\\"\", NameStr(s->data.name)) :\n> errmsg(\"acquired physical replication slot \\\"%s\\\"\", NameStr(s->data.name)));\n> }\n>\n> (Obviously, lose the \"translator:\" comments since they are unnecessary)\n>\n>\n> If you really want to keep the \"logical\"/\"physical\" word untranslated,\n> you need to split it out of the sentence somehow. But it would be\n> really horrible IMO. Like\n>\n> errmsg(\"acquired replication slot \\\"%s\\\" of type \\\"%s\\\"\",\n> NameStr(s->data.name), SlotIsLogical(s) ? \"logical\" : \"physical\")\n>\n\nThanks for the suggestion. I would like to clarify on this a bit. What\ndo exactly mean by splitting out of the sentence? For example, in one\nof the existing messages:\n\nereport(LOG,\n/* translator: %s is SIGKILL or SIGABRT */\n(errmsg(\"issuing %s to recalcitrant children\",\nsend_abort_for_kill ? \"SIGABRT\" : \"SIGKILL\")));\n\nDo here words SIGABRT/SIGKILL remain untranslated due to the\ntranslator's comment? 
I thought this was similar to the message being\nproposed but seems like this message construction follows translation\nrules better.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 16 Nov 2023 07:06:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Thu, Nov 16, 2023 at 12:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Nov 15, 2023 at 3:58 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > Translation-wise, this doesn't work, because you're building a string.\n> > There's no reason to think that the words \"logical\" and \"physical\"\n> > should stay untranslated; the message would make no sense, or at least\n> > would be very ugly.\n> >\n> > You should do something like\n> >\n> > if (am_walsender)\n> > {\n> > ereport(log_replication_commands ? LOG : DEBUG1,\n> > SlotIsLogical(s) ? errmsg(\"acquired logical replication slot \\\"%s\\\"\", NameStr(s->data.name)) :\n> > errmsg(\"acquired physical replication slot \\\"%s\\\"\", NameStr(s->data.name)));\n> > }\n> >\n> > (Obviously, lose the \"translator:\" comments since they are unnecessary)\n> >\n> >\n> > If you really want to keep the \"logical\"/\"physical\" word untranslated,\n> > you need to split it out of the sentence somehow. But it would be\n> > really horrible IMO. Like\n> >\n> > errmsg(\"acquired replication slot \\\"%s\\\" of type \\\"%s\\\"\",\n> > NameStr(s->data.name), SlotIsLogical(s) ? \"logical\" : \"physical\")\n> >\n>\n> Thanks for the suggestion. I would like to clarify on this a bit. What\n> do exactly mean by splitting out of the sentence? 
For example, in one\n> of the existing messages:\n>\n> ereport(LOG,\n> /* translator: %s is SIGKILL or SIGABRT */\n> (errmsg(\"issuing %s to recalcitrant children\",\n> send_abort_for_kill ? \"SIGABRT\" : \"SIGKILL\")));\n>\n> Do here words SIGABRT/SIGKILL remain untranslated due to the\n> translator's comment? I thought this was similar to the message being\n> proposed but seems like this message construction follows translation\n> rules better.\n>\n\nIIUC, that example is different because \"SIGABRT\" / \"SIGKILL\" are not\nreal words, so you don't want the translator to attempt to translate\nthem.You want them to appear in the message as-is.\n\nOTOH in this patch \"logical\" and \"physical\" are just normal English\nwords that should be translated as part of the original message.\ne.g. like in these similar messages:\n- msgid \"database \\\"%s\\\" is used by an active logical replication slot\"\n- msgstr \"la base de données « %s » est utilisée par un slot de\nréplication logique actif\"\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 16 Nov 2023 14:32:49 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Thu, Nov 16, 2023 at 12:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Nov 16, 2023 at 3:48 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > ~\n> >\n> > SUGGESTION (#1a and #1b)\n> >\n> > ereport(log_replication_commands ? LOG : DEBUG1,\n> > errmsg(SlotIsLogical(s)\n> > ? \"acquired logical replication slot \\\"%s\\\"\"\n> > : \"acquired physical replication slot \\\"%s\\\"\",\n> > NameStr(s->data.name)));\n> >\n> > ~~~\n> >\n>\n> Personally, I prefer the way Bharath had in his patch. Did you see any\n> preferred way in the existing code?\n\nNot really. 
I think the errmsg combined with ternary is not so common.\nI couldn't find many examples, so I wouldn't try to claim anything is\na \"preferred\" way.\n\nThere are some existing examples, like Bharath had:\n\nereport(NOTICE,\n (errcode(ERRCODE_DUPLICATE_OBJECT),\n collencoding == -1\n ? errmsg(\"collation \\\"%s\\\" already exists, skipping\",\n collname)\n : errmsg(\"collation \\\"%s\\\" for encoding \\\"%s\\\" already\nexists, skipping\",\n collname, pg_encoding_to_char(collencoding))));\n\nOTOH, when there are different numbers of substitution parameters in\neach of the errmsg like that, you don't have much choice but to do it\nthat way.\n\nI am fine with whatever is chosen -- I was only offering an alternative.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 16 Nov 2023 14:53:42 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On 2023-Nov-16, Peter Smith wrote:\n\n> I searched HEAD code and did not find any \"translator:\" comments for\n> just ordinary slot name substitutions like these; AFAICT that comment\n> is not necessary anymore.\n\nTrue. Lose that.\n\nThe rationale I have is to consider whether a translator looking at the\noriginal message in isolation is going to understand what the %s\nmeans. If it's possible to tell what it is without having to go read\nthe source code that leads to the message, then you don't need a\n\"translator:\" comment. Otherwise you do.\n\nYou also need to assume the translator is not stupid, but that seems an\nOK assumption.\n\n> SUGGESTION (#1a and #1b)\n> \n> ereport(log_replication_commands ? LOG : DEBUG1,\n> errmsg(SlotIsLogical(s)\n> ? 
\"acquired logical replication slot \\\"%s\\\"\"\n> : \"acquired physical replication slot \\\"%s\\\"\",\n> NameStr(s->data.name)));\n\nThe bad thing about this is that gettext() is not going to pick up these\nstrings into the translation catalog. You could fix that by adding\ngettext_noop() calls around them:\n\n ereport(log_replication_commands ? LOG : DEBUG1,\n errmsg(SlotIsLogical(s)\n ? gettext_noop(\"acquired logical replication slot \\\"%s\\\"\")\n : gettext_noop(\"acquired physical replication slot \\\"%s\\\"\"),\n NameStr(s->data.name)));\n\nbut at that point it's not clear that it's really better than putting\nthe ternary in the outer scope.\n\nYou can verify this by doing \"make update-po\" and then searching for the\nmessages in postgres.pot.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Cada quien es cada cual y baja las escaleras como quiere\" (JMSerrat)\n\n\n", "msg_date": "Thu, 16 Nov 2023 11:31:43 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Thu, Nov 16, 2023 at 4:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2023-Nov-16, Peter Smith wrote:\n>\n> > I searched HEAD code and did not find any \"translator:\" comments for\n> > just ordinary slot name substitutions like these; AFAICT that comment\n> > is not necessary anymore.\n>\n> True. Lose that.\n\nRemoved.\n\n> The rationale I have is to consider whether a translator looking at the\n> original message message in isolation is going to understand what the %s\n> means. If it's possible to tell what it is without having to go read\n> the source code that leads to the message, then you don't need a\n> \"translator:\" comment. Otherwise you do.\n\nAgree. 
I think it's easy for one to guess the slot name follows \"....\nreplication slot %s\", so I removed the translator message.\n\n> > SUGGESTION (#1a and #1b)\n> >\n> > ereport(log_replication_commands ? LOG : DEBUG1,\n> > errmsg(SlotIsLogical(s)\n> > ? \"acquired logical replication slot \\\"%s\\\"\"\n> > : \"acquired physical replication slot \\\"%s\\\"\",\n> > NameStr(s->data.name)));\n>\n> The bad thing about this is that gettext() is not going to pick up these\n> strings into the translation catalog. You could fix that by adding\n> gettext_noop() calls around them:\n>\n> ereport(log_replication_commands ? LOG : DEBUG1,\n> errmsg(SlotIsLogical(s)\n> ? gettext_noop(\"acquired logical replication slot \\\"%s\\\"\")\n> : gettext_noop(\"acquired physical replication slot \\\"%s\\\"\"),\n> NameStr(s->data.name)));\n>\n> but at that point it's not clear that it's really better than putting\n> the ternary in the outer scope.\n\nI retained wrapping messages in errmsg(\"...\").\n\n> You can verify this by doing \"make update-po\" and then searching for the\n> messages in postgres.pot.\n\nTranslation gives me [1] with v18 patch\n\nPSA v18 patch.\n\n[1]\n#: replication/slot.c:545\n#, c-format\nmsgid \"acquired logical replication slot \\\"%s\\\"\"\nmsgstr \"\"\n\n#: replication/slot.c:547\n#, c-format\nmsgid \"acquired physical replication slot \\\"%s\\\"\"\nmsgstr \"\"\n\n#: replication/slot.c:622\n#, c-format\nmsgid \"released logical replication slot \\\"%s\\\"\"\nmsgstr \"\"\n\n#: replication/slot.c:624\n#, c-format\nmsgid \"released physical replication slot \\\"%s\\\"\"\nmsgstr \"\"\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 16 Nov 2023 18:08:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding 
ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Thu, Nov 16, 2023 at 6:09 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Nov 16, 2023 at 4:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2023-Nov-16, Peter Smith wrote:\n> >\n> > > I searched HEAD code and did not find any \"translator:\" comments for\n> > > just ordinary slot name substitutions like these; AFAICT that comment\n> > > is not necessary anymore.\n> >\n> > True. Lose that.\n>\n> Removed.\n>\n> > The rationale I have is to consider whether a translator looking at the\n> > original message message in isolation is going to understand what the %s\n> > means. If it's possible to tell what it is without having to go read\n> > the source code that leads to the message, then you don't need a\n> > \"translator:\" comment. Otherwise you do.\n>\n> Agree. I think it's easy for one to guess the slot name follows \"....\n> replication slot %s\", so I removed the translator message.\n>\n> > > SUGGESTION (#1a and #1b)\n> > >\n> > > ereport(log_replication_commands ? LOG : DEBUG1,\n> > > errmsg(SlotIsLogical(s)\n> > > ? \"acquired logical replication slot \\\"%s\\\"\"\n> > > : \"acquired physical replication slot \\\"%s\\\"\",\n> > > NameStr(s->data.name)));\n> >\n> > The bad thing about this is that gettext() is not going to pick up these\n> > strings into the translation catalog. You could fix that by adding\n> > gettext_noop() calls around them:\n> >\n> > ereport(log_replication_commands ? LOG : DEBUG1,\n> > errmsg(SlotIsLogical(s)\n> > ? 
gettext_noop(\"acquired logical replication slot \\\"%s\\\"\")\n> > : gettext_noop(\"acquired physical replication slot \\\"%s\\\"\"),\n> > NameStr(s->data.name)));\n> >\n> > but at that point it's not clear that it's really better than putting\n> > the ternary in the outer scope.\n>\n> I retained wrapping messages in errmsg(\"...\").\n>\n> > You can verify this by doing \"make update-po\" and then searching for the\n> > messages in postgres.pot.\n>\n> Translation gives me [1] with v18 patch\n>\n> PSA v18 patch.\n>\n\nLGTM. I'll push this early next week unless there are further\nsuggestions or comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 18 Nov 2023 16:54:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" }, { "msg_contents": "On Sat, Nov 18, 2023 at 4:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Nov 16, 2023 at 6:09 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > PSA v18 patch.\n> >\n>\n> LGTM. I'll push this early next week unless there are further\n> suggestions or comments.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 21 Nov 2023 14:18:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add log messages when replication slots become active and\n inactive (was Re: Is it worth adding ReplicationSlot active_pid to\n ReplicationSlotPersistentData?)" } ]
[ { "msg_contents": "I noticed that the chr() function uses PG_GETARG_UINT32() to get its \nargument, even though the argument is a (signed) int. So you get some \nslightly silly behavior like this:\n\n=> select chr(-333);\nERROR: 54000: requested character too large for encoding: -333\n\nThe attached patch fixes this by accepting the argument using \nPG_GETARG_INT32(), doing some checks, and then casting it to unsigned \nfor the rest of the code.\n\nThe patch also fixes another inappropriate use in an example in the \ndocumentation. These two were the only inappropriate uses I found, \nafter we had fixed a few recently.", "msg_date": "Wed, 1 Dec 2021 19:26:45 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Fix inappropriate uses of PG_GETARG_UINT32()" }, { "msg_contents": "On 12/1/21, 10:29 AM, \"Peter Eisentraut\" <peter.eisentraut@enterprisedb.com> wrote:\r\n> The attached patch fixes this by accepting the argument using\r\n> PG_GETARG_INT32(), doing some checks, and then casting it to unsigned\r\n> for the rest of the code.\r\n>\r\n> The patch also fixes another inappropriate use in an example in the\r\n> documentation. These two were the only inappropriate uses I found,\r\n> after we had fixed a few recently.\r\n\r\nLGTM\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 1 Dec 2021 21:59:24 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Fix inappropriate uses of PG_GETARG_UINT32()" }, { "msg_contents": "On 01.12.21 22:59, Bossart, Nathan wrote:\n> On 12/1/21, 10:29 AM, \"Peter Eisentraut\" <peter.eisentraut@enterprisedb.com> wrote:\n>> The attached patch fixes this by accepting the argument using\n>> PG_GETARG_INT32(), doing some checks, and then casting it to unsigned\n>> for the rest of the code.\n>>\n>> The patch also fixes another inappropriate use in an example in the\n>> documentation. 
These two were the only inappropriate uses I found,\n>> after we had fixed a few recently.\n> \n> LGTM\n\ncommitted\n\n\n", "msg_date": "Mon, 6 Dec 2021 13:47:35 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Fix inappropriate uses of PG_GETARG_UINT32()" } ]
[ { "msg_contents": "The current implementation of pg_stat_progress_vacuum does not provide progress on which index is being vacuumed making it difficult for a user to determine if the \"vacuuming indexes\" phase is making progress. By exposing which index is being scanned as well as the total progress the scan has made for the current cycle, a user can make better estimations on when the vacuum will complete.\r\n\r\nThe proposed patch adds 4 new columns to pg_stat_progress_vacuum:\r\n\r\n1. indrelid - the relid of the index being vacuumed\r\n2. index_blks_total - total number of blocks to be scanned in the current cycle\r\n3. index_blks_scanned - number of blocks scanned in the current cycle\r\n4. leader_pid - if the pid for the pg_stat_progress_vacuum entry is a leader or a vacuum worker. This patch places an entry for every worker pid ( if parallel ) as well as the leader pid\r\n\r\nAttached is the patch.\r\n\r\nHere is a sample output of a parallel vacuum for table with relid = 16638\r\n\r\npostgres=# select * from pg_stat_progress_vacuum ;\r\n-[ RECORD 1 ]------+------------------\r\npid | 18180\r\ndatid | 13732\r\ndatname | postgres\r\nrelid | 16638\r\nphase | vacuuming indexes\r\nheap_blks_total | 5149825\r\nheap_blks_scanned | 5149825\r\nheap_blks_vacuumed | 3686381\r\nindex_vacuum_count | 2\r\nmax_dead_tuples | 178956969\r\nnum_dead_tuples | 142086544\r\nindrelid | 0 <<-----\r\nindex_blks_total | 0 <<-----\r\nindex_blks_scanned | 0 <<-----\r\nleader_pid | <<-----\r\n-[ RECORD 2 ]------+------------------\r\npid | 1543\r\ndatid | 13732\r\ndatname | postgres\r\nrelid | 16638\r\nphase | vacuuming indexes\r\nheap_blks_total | 0\r\nheap_blks_scanned | 0\r\nheap_blks_vacuumed | 0\r\nindex_vacuum_count | 0\r\nmax_dead_tuples | 0\r\nnum_dead_tuples | 0\r\nindrelid | 16646\r\nindex_blks_total | 3030305\r\nindex_blks_scanned | 2356564\r\nleader_pid | 18180\r\n-[ RECORD 3 ]------+------------------\r\npid | 1544\r\ndatid | 13732\r\ndatname | postgres\r\nrelid | 
16638\r\nphase | vacuuming indexes\r\nheap_blks_total | 0\r\nheap_blks_scanned | 0\r\nheap_blks_vacuumed | 0\r\nindex_vacuum_count | 0\r\nmax_dead_tuples | 0\r\nnum_dead_tuples | 0\r\nindrelid | 16651\r\nindex_blks_total | 2685921\r\nindex_blks_scanned | 2119179\r\nleader_pid | 18180\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nDatabase Engineer @ Amazon Web Services", "msg_date": "Wed, 1 Dec 2021 19:32:01 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On 12/1/21, 3:02 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n> The current implementation of pg_stat_progress_vacuum does not\r\n> provide progress on which index is being vacuumed making it\r\n> difficult for a user to determine if the \"vacuuming indexes\" phase\r\n> is making progress. By exposing which index is being scanned as well\r\n> as the total progress the scan has made for the current cycle, a\r\n> user can make better estimations on when the vacuum will complete.\r\n\r\n+1\r\n\r\n> The proposed patch adds 4 new columns to pg_stat_progress_vacuum:\r\n>\r\n> 1. indrelid - the relid of the index being vacuumed\r\n> 2. index_blks_total - total number of blocks to be scanned in the\r\n> current cycle\r\n> 3. index_blks_scanned - number of blocks scanned in the current\r\n> cycle\r\n> 4. leader_pid - if the pid for the pg_stat_progress_vacuum entry is\r\n> a leader or a vacuum worker. This patch places an entry for every\r\n> worker pid ( if parallel ) as well as the leader pid\r\n\r\nnitpick: Shouldn't index_blks_scanned be index_blks_vacuumed? 
IMO it\r\nis more analogous to heap_blks_vacuumed.\r\n\r\nThis will tell us which indexes are currently being vacuumed and the\r\ncurrent progress of those operations, but it doesn't tell us which\r\nindexes have already been vacuumed or which ones are pending vacuum.\r\nI think such information is necessary to truly understand the current\r\nprogress of vacuuming indexes, and I can think of a couple of ways we\r\nmight provide it:\r\n\r\n 1. Make the new columns you've proposed return arrays. This isn't\r\n very clean, but it would keep all the information for a given\r\n vacuum operation in a single row. The indrelids column would be\r\n populated with all the indexes that have been vacuumed, need to\r\n be vacuumed, or are presently being vacuumed. The other index-\r\n related columns would then have the associated stats and the\r\n worker PID (which might be the same as the pid column depending\r\n on whether parallel index vacuum was being done). Alternatively,\r\n the index column could have an array of records, each containing\r\n all the information for a given index.\r\n 2. Create a new view for just index vacuum progress information.\r\n This would have similar information as 1. There would be an\r\n entry for each index that has been vacuumed, needs to be\r\n vacuumed, or is currently being vacuumed. And there would be an\r\n easy way to join with pg_stat_progress_vacuum (e.g., leader_pid,\r\n which again might be the same as our index vacuum PID depending\r\n on whether we were doing parallel index vacuum). Note that it\r\n would be possible for the PID of these entries to be null before\r\n and after we process the index.\r\n 3. Instead of adding columns to pg_stat_progress_vacuum, adjust the\r\n current ones to be more general, and then add new entries for\r\n each of the indexes that have been, need to be, or currently are\r\n being vacuumed. 
This is the most similar option to your current\r\n proposal, but instead of introducing a column like\r\n index_blks_total, we'd rename heap_blks_total to blks_total and\r\n use that for both the heap and indexes. I think we'd still want\r\n to add a leader_pid column. Again, we have to be prepared for\r\n the PID to be null in this case. Or we could just make the pid\r\n column always refer to the leader, and we could introduce a\r\n worker_pid column. That might create confusion, though.\r\n\r\nI wish option #1 was cleaner, because I think it would be really nice\r\nto have all this information in a single row. However, I don't expect\r\nmuch support for a 3-dimensional view, so I suspect option #2\r\n(creating a separate view for index vacuum progress) is the way to go.\r\nThe other benefit of option #2 versus option #3 or your original\r\nproposal is that it cleanly separates the top-level vacuum operations\r\nand the index vacuum operations, which are related at the moment, but\r\nwhich might not always be tied so closely together.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 15 Dec 2021 22:09:59 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "\r\n\r\nOn 12/15/21, 4:10 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n\r\n On 12/1/21, 3:02 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n > The current implementation of pg_stat_progress_vacuum does not\r\n > provide progress on which index is being vacuumed making it\r\n > difficult for a user to determine if the \"vacuuming indexes\" phase\r\n > is making progress. By exposing which index is being scanned as well\r\n > as the total progress the scan has made for the current cycle, a\r\n > user can make better estimations on when the vacuum will complete.\r\n\r\n +1\r\n\r\n > The proposed patch adds 4 new columns to pg_stat_progress_vacuum:\r\n >\r\n > 1. 
indrelid - the relid of the index being vacuumed\r\n > 2. index_blks_total - total number of blocks to be scanned in the\r\n > current cycle\r\n > 3. index_blks_scanned - number of blocks scanned in the current\r\n > cycle\r\n > 4. leader_pid - if the pid for the pg_stat_progress_vacuum entry is\r\n > a leader or a vacuum worker. This patch places an entry for every\r\n > worker pid ( if parallel ) as well as the leader pid\r\n\r\n nitpick: Shouldn't index_blks_scanned be index_blks_vacuumed? IMO it\r\n is more analogous to heap_blks_vacuumed.\r\n\r\nNo, What is being tracked is the number of index blocks scanned from the total index blocks. The block will be scanned regardless if it will be vacuumed or not. \r\n\r\n This will tell us which indexes are currently being vacuumed and the\r\n current progress of those operations, but it doesn't tell us which\r\n indexes have already been vacuumed or which ones are pending vacuum.\r\n I think such information is necessary to truly understand the current\r\n progress of vacuuming indexes, and I can think of a couple of ways we\r\n might provide it:\r\n\r\n 1. Make the new columns you've proposed return arrays. This isn't\r\n very clean, but it would keep all the information for a given\r\n vacuum operation in a single row. The indrelids column would be\r\n populated with all the indexes that have been vacuumed, need to\r\n be vacuumed, or are presently being vacuumed. The other index-\r\n related columns would then have the associated stats and the\r\n worker PID (which might be the same as the pid column depending\r\n on whether parallel index vacuum was being done). Alternatively,\r\n the index column could have an array of records, each containing\r\n all the information for a given index.\r\n 2. Create a new view for just index vacuum progress information.\r\n This would have similar information as 1. 
There would be an\r\n entry for each index that has been vacuumed, needs to be\r\n vacuumed, or is currently being vacuumed. And there would be an\r\n easy way to join with pg_stat_progress_vacuum (e.g., leader_pid,\r\n which again might be the same as our index vacuum PID depending\r\n on whether we were doing parallel index vacuum). Note that it\r\n would be possible for the PID of these entries to be null before\r\n and after we process the index.\r\n 3. Instead of adding columns to pg_stat_progress_vacuum, adjust the\r\n current ones to be more general, and then add new entries for\r\n each of the indexes that have been, need to be, or currently are\r\n being vacuumed. This is the most similar option to your current\r\n proposal, but instead of introducing a column like\r\n index_blks_total, we'd rename heap_blks_total to blks_total and\r\n use that for both the heap and indexes. I think we'd still want\r\n to add a leader_pid column. Again, we have to be prepared for\r\n the PID to be null in this case. Or we could just make the pid\r\n column always refer to the leader, and we could introduce a\r\n worker_pid column. That might create confusion, though.\r\n\r\n I wish option #1 was cleaner, because I think it would be really nice\r\n to have all this information in a single row. However, I don't expect\r\n much support for a 3-dimensional view, so I suspect option #2\r\n (creating a separate view for index vacuum progress) is the way to go.\r\n The other benefit of option #2 versus option #3 or your original\r\n proposal is that it cleanly separates the top-level vacuum operations\r\n and the index vacuum operations, which are related at the moment, but\r\n which might not always be tied so closely together.\r\n\r\nOption #1 is not clean as you will need to unnest the array to make sense out of it. It will be too complex to use.\r\nOption #3 I am reluctant to spent time looking at this option. It's more valuable to see progress per index instead of total. 
\r\nOption #2 was one that I originally designed but backed away as it was introducing a new view. Thinking about it a bit more, this is a cleaner approach. \r\n1. Having a view called pg_stat_progress_vacuum_worker to join with pg_stat_progress_vacuum is clean\r\n2. No changes required to pg_stat_progress_vacuum\r\n3. I’ll lean towards calling the view \" pg_stat_progress_vacuum_worker\" instead of \" pg_stat_progress_vacuum_index\", to perhaps allow us to track other items a vacuum worker may do in future releases. As of now, only indexes are vacuumed by workers.\r\nI will rework the patch for option #2\r\n\r\n Nathan\r\n\r\n\r\n", "msg_date": "Thu, 16 Dec 2021 21:37:11 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "I had a similar question. And I'm still not clear from the response\nwhat exactly index_blks_total is and whether it addresses it.\n\nI think I agree that a user is likely to want to see the progress in a\nway they can understand which means for a single index at a time.\n\nI think what you're describing is that index_blks_total and\nindex_blks_scanned are the totals across all the indexes? That isn't\nclear from the definitions but if that's what you intend then I think\nthat would work.\n\n(For what it's worth what I was imagining was having a pair of\ncounters for blocks scanned and max blocks in this index and a second\ncounter for number of indexes processed and max number of indexes. But\nI don't think that's necessarily any better than what you have)\n\n\n", "msg_date": "Thu, 16 Dec 2021 17:03:23 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Here is a V2 attempt of the patch to include a new view called pg_stat_progress_vacuum_worker. 
Also, scans for index cleanups will also have an entry in the new view.\r\n\r\n- here is the new view which reports an entry for every worker ( or leader ) that is doing index vacuum/index cleanup work.\r\npostgres=# select * from pg_stat_progress_vacuum_worker ;\r\n-[ RECORD 1 ]------+------\r\npid | 29355\r\nleader_pid | 26501\r\nindrelid | 16391\r\nindex_blks_total | 68894\r\nindex_blks_scanned | 35618\r\n\r\n\r\n- the view can be joined with pg_stat_progress_vacuum. Sample output below\r\n\r\npostgres=# select a.*, b.phase, b.heap_blks_total, b.heap_blks_scanned from pg_stat_progress_vacuum_worker a full outer join pg_stat_progress_vacuum b on a.pid = b.pid ;\r\n pid | leader_pid | indrelid | index_blks_total | index_blks_scanned | phase | heap_blks_total | heap_blks_scanned\r\n-------+------------+----------+------------------+--------------------+---------------------+-----------------+-------------------\r\n 26667 | 26667 | 16391 | 9165 | 401 | cleaning up indexes | 20082 | 20082\r\n(1 row)\r\n\r\n\r\n\r\npostgres=# select a.*, b.phase, b.heap_blks_total, b.heap_blks_scanned from pg_stat_progress_vacuum_worker a full outer join pg_stat_progress_vacuum b on a.pid = b.pid ;\r\n-[ RECORD 1 ]------+------------------\r\npid | 26501\r\nleader_pid | 26501\r\nindrelid | 16393\r\nindex_blks_total | 145107\r\nindex_blks_scanned | 11060\r\nphase | vacuuming indexes\r\nheap_blks_total | 165375\r\nheap_blks_scanned | 165375\r\n-[ RECORD 2 ]------+------------------\r\npid | 28982\r\nleader_pid | 26501\r\nindrelid | 16392\r\nindex_blks_total | 47616\r\nindex_blks_scanned | 11861\r\nphase | vacuuming indexes\r\nheap_blks_total | 0\r\nheap_blks_scanned | 0\r\n-[ RECORD 3 ]------+------------------\r\npid | 28983\r\nleader_pid | 26501\r\nindrelid | 16391\r\nindex_blks_total | 56936\r\nindex_blks_scanned | 9138\r\nphase | vacuuming indexes\r\nheap_blks_total | 0\r\nheap_blks_scanned | 0\r\n\r\n\r\n\r\n On 12/15/21, 4:10 PM, \"Bossart, Nathan\" <bossartn@amazon.com> 
wrote:\r\n\r\n On 12/1/21, 3:02 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n > The current implementation of pg_stat_progress_vacuum does not\r\n > provide progress on which index is being vacuumed making it\r\n > difficult for a user to determine if the \"vacuuming indexes\" phase\r\n > is making progress. By exposing which index is being scanned as well\r\n > as the total progress the scan has made for the current cycle, a\r\n > user can make better estimations on when the vacuum will complete.\r\n\r\n +1\r\n\r\n > The proposed patch adds 4 new columns to pg_stat_progress_vacuum:\r\n >\r\n > 1. indrelid - the relid of the index being vacuumed\r\n > 2. index_blks_total - total number of blocks to be scanned in the\r\n > current cycle\r\n > 3. index_blks_scanned - number of blocks scanned in the current\r\n > cycle\r\n > 4. leader_pid - if the pid for the pg_stat_progress_vacuum entry is\r\n > a leader or a vacuum worker. This patch places an entry for every\r\n > worker pid ( if parallel ) as well as the leader pid\r\n\r\n nitpick: Shouldn't index_blks_scanned be index_blks_vacuumed? IMO it\r\n is more analogous to heap_blks_vacuumed.\r\n\r\n No, What is being tracked is the number of index blocks scanned from the total index blocks. The block will be scanned regardless if it will be vacuumed or not. \r\n\r\n This will tell us which indexes are currently being vacuumed and the\r\n current progress of those operations, but it doesn't tell us which\r\n indexes have already been vacuumed or which ones are pending vacuum.\r\n I think such information is necessary to truly understand the current\r\n progress of vacuuming indexes, and I can think of a couple of ways we\r\n might provide it:\r\n\r\n 1. Make the new columns you've proposed return arrays. This isn't\r\n very clean, but it would keep all the information for a given\r\n vacuum operation in a single row. 
The indrelids column would be\r\n populated with all the indexes that have been vacuumed, need to\r\n be vacuumed, or are presently being vacuumed. The other index-\r\n related columns would then have the associated stats and the\r\n worker PID (which might be the same as the pid column depending\r\n on whether parallel index vacuum was being done). Alternatively,\r\n the index column could have an array of records, each containing\r\n all the information for a given index.\r\n 2. Create a new view for just index vacuum progress information.\r\n This would have similar information as 1. There would be an\r\n entry for each index that has been vacuumed, needs to be\r\n vacuumed, or is currently being vacuumed. And there would be an\r\n easy way to join with pg_stat_progress_vacuum (e.g., leader_pid,\r\n which again might be the same as our index vacuum PID depending\r\n on whether we were doing parallel index vacuum). Note that it\r\n would be possible for the PID of these entries to be null before\r\n and after we process the index.\r\n 3. Instead of adding columns to pg_stat_progress_vacuum, adjust the\r\n current ones to be more general, and then add new entries for\r\n each of the indexes that have been, need to be, or currently are\r\n being vacuumed. This is the most similar option to your current\r\n proposal, but instead of introducing a column like\r\n index_blks_total, we'd rename heap_blks_total to blks_total and\r\n use that for both the heap and indexes. I think we'd still want\r\n to add a leader_pid column. Again, we have to be prepared for\r\n the PID to be null in this case. Or we could just make the pid\r\n column always refer to the leader, and we could introduce a\r\n worker_pid column. That might create confusion, though.\r\n\r\n I wish option #1 was cleaner, because I think it would be really nice\r\n to have all this information in a single row. 
However, I don't expect\r\n much support for a 3-dimensional view, so I suspect option #2\r\n (creating a separate view for index vacuum progress) is the way to go.\r\n The other benefit of option #2 versus option #3 or your original\r\n proposal is that it cleanly separates the top-level vacuum operations\r\n and the index vacuum operations, which are related at the moment, but\r\n which might not always be tied so closely together.\r\n\r\n Option #1 is not clean as you will need to unnest the array to make sense out of it. It will be too complex to use.\r\n Option #3 I am reluctant to spent time looking at this option. It's more valuable to see progress per index instead of total. \r\n Option #2 was one that I originally designed but backed away as it was introducing a new view. Thinking about it a bit more, this is a cleaner approach. \r\n 1. Having a view called pg_stat_progress_vacuum_worker to join with pg_stat_progress_vacuum is clean\r\n 2. No changes required to pg_stat_progress_vacuum\r\n 3. I’ll lean towards calling the view \" pg_stat_progress_vacuum_worker\" instead of \" pg_stat_progress_vacuum_index\", to perhaps allow us to track other items a vacuum worker may do in future releases. As of now, only indexes are vacuumed by workers.\r\n I will rework the patch for option #2\r\n\r\n Nathan", "msg_date": "Mon, 20 Dec 2021 17:55:03 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Wed, Dec 15, 2021 at 2:10 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> nitpick: Shouldn't index_blks_scanned be index_blks_vacuumed? 
IMO it\n> is more analogous to heap_blks_vacuumed.\n\n+1.\n\n> This will tell us which indexes are currently being vacuumed and the\n> current progress of those operations, but it doesn't tell us which\n> indexes have already been vacuumed or which ones are pending vacuum.\n\nVACUUM will process a table's indexes in pg_class OID order (outside\nof parallel VACUUM, I suppose). See comments about sort order above\nRelationGetIndexList().\n\nAnyway, it might be useful to add ordinal numbers to each index, that\nline up with this processing/OID order. It would also be reasonable to\ndisplay the same number in log_autovacuum* (and VACUUM VERBOSE)\nper-index output, to reinforce the idea. Note that we don't\nnecessarily display a distinct line for each distinct index in this\nlog output, which is why including the ordinal number there makes\nsense.\n\n> I wish option #1 was cleaner, because I think it would be really nice\n> to have all this information in a single row.\n\nI do too. I agree with the specific points you raise in your remarks\nabout what you've called options #2 and #3, but those options still\nseem unappealing to me.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 20 Dec 2021 10:37:05 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Wed, Dec 1, 2021 at 2:59 PM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n> The current implementation of pg_stat_progress_vacuum does not provide progress on which index is being vacuumed making it difficult for a user to determine if the \"vacuuming indexes\" phase is making progress.\n\nI notice that your patch largely assumes that indexes can be treated\nlike heap relations, in the sense that they're scanned sequentially,\nand process each block exactly once (or exactly once per \"pass\"). But\nthat isn't quite true. 
There are a few differences that seem like they\nmight matter:\n\n* An ambulkdelete() scan of an index cannot take the size of the\nrelation once, at the start, and ignore any blocks that are added\nafter the scan begins. And so the code may need to re-establish the\ntotal size of the index multiple times, to make sure no index tuples\nare missed -- there may be index tuples that VACUUM needs to process\nthat appear in later pages due to concurrent page splits. You don't\nhave the issue with things like IndexBulkDeleteResult.num_pages,\nbecause they report on the index after ambulkdelete/amvacuumcleanup\nreturn (they're not granular progress indicators).\n\n* Some index AMs don't work like nbtree and GiST in that they cannot\ndo their scan sequentially -- they have to do something like a\nlogical/keyspace order scan instead, which is *totally* different to\nheapam (not just a bit different). There is no telling how many times\neach page will be accessed in these other index AMs, and in what\norder, even under optimal conditions. We should arguably not even try\nto provide any granular progress information here, since it'll\nprobably be too messy.\n\nI'm not sure what to recommend for your patch, in light of this. Maybe\nyou should change the names of the new columns to own the squishiness.\nFor example, instead of using the name index_blks_total, you might\ninstead use the name index_blks_initial. That might be enough to avoid\nuser confusion when we scan more blocks than the index initially\ncontained (within a single ambulkdelete scan).\n\nNote also that we have to do something called backtracking in\nbtvacuumpage(), which you've ignored -- that's another reasonably\ncommon way that we'll end up scanning a page twice. 
But that probably\nshould just be ignored -- it's too narrow a case to be worth caring\nabout.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 20 Dec 2021 11:05:47 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "This view also doesn't show vacuum progress across a partitioned table.\n\nFor comparison:\n\npg_stat_progress_create_index (added in v12) has:\npartitions_total\npartitions_done\n\npg_stat_progress_analyze (added in v13) has:\nchild_tables_total\nchild_tables_done\n\npg_stat_progress_cluster should have something similar.\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n\n", "msg_date": "Mon, 20 Dec 2021 13:27:26 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Tue, Dec 21, 2021 at 3:37 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Dec 15, 2021 at 2:10 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> > nitpick: Shouldn't index_blks_scanned be index_blks_vacuumed? IMO it\n> > is more analogous to heap_blks_vacuumed.\n>\n> +1.\n>\n> > This will tell us which indexes are currently being vacuumed and the\n> > current progress of those operations, but it doesn't tell us which\n> > indexes have already been vacuumed or which ones are pending vacuum.\n>\n> VACUUM will process a table's indexes in pg_class OID order (outside\n> of parallel VACUUM, I suppose). See comments about sort order above\n> RelationGetIndexList().\n\nRight.\n\n>\n> Anyway, it might be useful to add ordinal numbers to each index, that\n> line up with this processing/OID order. It would also be reasonable to\n> display the same number in log_autovacuum* (and VACUUM VERBOSE)\n> per-index output, to reinforce the idea. 
Note that we don't\n> necessarily display a distinct line for each distinct index in this\n> log output, which is why including the ordinal number there makes\n> sense.\n\nAn alternative idea would be to show the number of indexes on the\ntable and the number of indexes that have been processed in the\nleader's entry of pg_stat_progress_vacuum. Even in parallel vacuum\ncases, since we have index vacuum status for each index it would not\nbe hard for the leader process to count how many indexes have been\nprocessed.\n\nRegarding the details of the progress of index vacuum, I'm not sure\nthis progress information can fit for pg_stat_progress_vacuum. As\nPeter already mentioned, the behavior quite varies depending on index\nAM.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 23 Dec 2021 17:44:37 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On 21/12/2021 00:05, Peter Geoghegan wrote:\n> * Some index AMs don't work like nbtree and GiST in that they cannot\n> do their scan sequentially -- they have to do something like a\n> logical/keyspace order scan instead, which is *totally* different to\n> heapam (not just a bit different). There is no telling how many times\n> each page will be accessed in these other index AMs, and in what\n> order, even under optimal conditions. 
We should arguably not even try\n> to provide any granular progress information here, since it'll\n> probably be too messy.\n\nMaybe we could add callbacks into AM interface for \nsend/receive/representation implementation of progress?\nSo AM would define a set of parameters to send into stat collector and \nshow to users.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Thu, 23 Dec 2021 15:49:59 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Please send your patches as *.diff or *.patch, so they're processed by the\npatch tester. Preferably with commit messages; git format-patch is the usual\ntool for this.\nhttp://cfbot.cputube.org/sami-imseih.html\n\n(Occasionally, it's also useful to send a *.txt to avoid the cfbot processing\nthe wrong thing, in case one sends an unrelated, secondary patch, or sends\nfixes to a patch as a \"relative patch\" which doesn't include the main patch.)\n\nI'm including a patch rebased on 8e1fae193.", "msg_date": "Mon, 27 Dec 2021 11:59:25 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "I do agree that tracking progress by # of blocks scanned is not deterministic for all index types.\r\n\r\nBased on this feedback, I went back to the drawing board on this. \r\n\r\nSomething like below may make more sense.\r\n\r\nIn pg_stat_progress_vacuum, introduce 2 new columns:\r\n\r\n1. total_index_vacuum - total # of indexes to vacuum\r\n2. max_cycle_time - the time in seconds of the longest index cycle. 
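To make the proposal concrete, a monitoring query against these columns might look as follows. This is a sketch only: total_index_vacuum and max_cycle_time are the column names proposed above and exist in no released PostgreSQL, while pid, relid, and index_vacuum_count are existing pg_stat_progress_vacuum columns.

```sql
-- Hypothetical query; the last two columns are only the proposal above.
SELECT pid,
       relid::regclass     AS table_being_vacuumed,
       index_vacuum_count, -- index vacuum cycles completed so far
       total_index_vacuum, -- proposed: number of indexes to vacuum
       max_cycle_time      -- proposed: longest index cycle, in seconds
  FROM pg_stat_progress_vacuum;
```

A tool polling this could flag a vacuum whose index_vacuum_count stays flat for longer than max_cycle_time as potentially stuck.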
\r\n\r\nIntroduce another view called pg_stat_progress_vacuum_index_cycle:\r\n\r\npostgres=# \\d pg_stat_progress_vacuum_index_cycle\r\n View \"public.pg_stat_progress_vacuum_index_cycle\"\r\n Column | Type | Collation | Nullable | Default\r\n----------------+---------+-----------+----------+---------\r\npid | integer | | |\t\t\t\t<<<-- the PID of the vacuum worker ( or leader if it's doing index vacuuming )\r\nleader_pid | bigint | | |\t\t\t\t<<<-- the leader PID to allow this view to be joined back to pg_stat_progress_vacuum\r\nindrelid | bigint | | |\t\t\t\t<<<-- the index relid of the index being vacuumed\r\nordinal_position | bigint | | |\t\t\t\t<<<-- the processing position, which gives an idea of where the index currently being vacuumed falls in the processing order. \r\ndead_tuples_removed | bigint | | |\t\t\t\t<<<-- the number of dead rows removed in the current cycle for the index.\r\n\r\nHaving this information, one can:\r\n\r\n1. Determine which index is being vacuumed. For monitoring tools, this can help identify the index that accounts for most of the index vacuuming time.\r\n2. Use the processing order of the current index to determine how many of the total indexes have been completed in the current cycle.\r\n3. Track progress of the index vacuum in the current cycle via dead_tuples_removed.\r\n4. Get an idea from max_cycle_time of how long the longest index cycle took for the current vacuum operation.\r\n\r\n\r\nOn 12/23/21, 2:46 AM, \"Masahiko Sawada\" <sawada.mshk@gmail.com> wrote:\r\n\r\n CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\r\n\r\n\r\n\r\n On Tue, Dec 21, 2021 at 3:37 AM Peter Geoghegan <pg@bowt.ie> wrote:\r\n >\r\n > On Wed, Dec 15, 2021 at 2:10 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n > > nitpick: Shouldn't index_blks_scanned be index_blks_vacuumed? 
IMO it\r\n > > is more analogous to heap_blks_vacuumed.\r\n >\r\n > +1.\r\n >\r\n > > This will tell us which indexes are currently being vacuumed and the\r\n > > current progress of those operations, but it doesn't tell us which\r\n > > indexes have already been vacuumed or which ones are pending vacuum.\r\n >\r\n > VACUUM will process a table's indexes in pg_class OID order (outside\r\n > of parallel VACUUM, I suppose). See comments about sort order above\r\n > RelationGetIndexList().\r\n\r\n Right.\r\n\r\n >\r\n > Anyway, it might be useful to add ordinal numbers to each index, that\r\n > line up with this processing/OID order. It would also be reasonable to\r\n > display the same number in log_autovacuum* (and VACUUM VERBOSE)\r\n > per-index output, to reinforce the idea. Note that we don't\r\n > necessarily display a distinct line for each distinct index in this\r\n > log output, which is why including the ordinal number there makes\r\n > sense.\r\n\r\n An alternative idea would be to show the number of indexes on the\r\n table and the number of indexes that have been processed in the\r\n leader's entry of pg_stat_progress_vacuum. Even in parallel vacuum\r\n cases, since we have index vacuum status for each index it would not\r\n be hard for the leader process to count how many indexes have been\r\n processed.\r\n\r\n Regarding the details of the progress of index vacuum, I'm not sure\r\n this progress information can fit for pg_stat_progress_vacuum. 
As\r\n Peter already mentioned, the behavior quite varies depending on index\r\n AM.\r\n\r\n Regards,\r\n\r\n\r\n --\r\n Masahiko Sawada\r\n EDB: https://www.enterprisedb.com/\r\n\r\n", "msg_date": "Tue, 28 Dec 2021 00:13:16 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Attached is the latest revision of the patch.\r\n\r\nIn \"pg_stat_progress_vacuum\", introduce 2 columns:\r\n\r\n* total_index_vacuum : This is the # of indexes that will be vacuumed. Keep in mind that if failsafe mode kicks in mid-flight to the vacuum, Postgres may choose to forgo index scans. This value will be adjusted accordingly.\r\n* max_index_vacuum_cycle_time : The total elapsed time for an index vacuum cycle is calculated and this value will be updated to reflect the longest vacuum cycle. Until the first cycle completes, this value will be 0. The purpose of this column is to give the user an idea of how long an index vacuum cycle takes to complete.\r\n\r\npostgres=# \\d pg_stat_progress_vacuum\r\nView \"pg_catalog.pg_stat_progress_vacuum\"\r\nColumn | Type | Collation | Nullable | Default\r\n-----------------------------+---------+-----------+----------+---------\r\npid | integer | | |\r\ndatid | oid | | |\r\ndatname | name | | |\r\nrelid | oid | | |\r\nphase | text | | |\r\nheap_blks_total | bigint | | |\r\nheap_blks_scanned | bigint | | |\r\nheap_blks_vacuumed | bigint | | |\r\nindex_vacuum_count | bigint | | |\r\nmax_dead_tuples | bigint | | |\r\nnum_dead_tuples | bigint | | |\r\ntotal_index_vacuum | bigint | | |\r\nmax_index_vacuum_cycle_time | bigint | | |\r\n\r\n\r\n\r\nIntroduce a new view called \"pg_stat_progress_vacuum_index\". This view will track the progress of a worker ( or leader PID ) while it's vacuuming an index. It will expose some key columns:\r\n\r\n* pid: The PID of the worker process\r\n\r\n* leader_pid: The PID of the leader process. 
This is the column that can be joined with \"pg_stat_progress_vacuum\". leader_pid and pid can have the same value as a leader can also perform an index vacuum.\r\n\r\n* indrelid: The relid of the index currently being vacuumed\r\n\r\n* vacuum_cycle_ordinal_position: The processing position of the index being vacuumed. This can be useful to determine how many indexes out of the total indexes ( pg_stat_progress_vacuum.total_index_vacuum ) have been vacuumed\r\n\r\n* index_tuples_vacuumed: This is the number of index tuples vacuumed for the index overall. This is useful to show that the vacuum is actually doing work, as the # of tuples keeps increasing. \r\n\r\npostgres=# \\d pg_stat_progress_vacuum_index\r\nView \"pg_catalog.pg_stat_progress_vacuum_index\"\r\nColumn | Type | Collation | Nullable | Default\r\n-------------------------------+---------+-----------+----------+---------\r\npid | integer | | |\r\nleader_pid | bigint | | |\r\nindrelid | bigint | | |\r\nvacuum_cycle_ordinal_position | bigint | | |\r\nindex_tuples_vacuumed | bigint | | |\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nOn 12/27/21, 6:12 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n I do agree that tracking progress by # of blocks scanned is not deterministic for all index types.\r\n\r\n Based on this feedback, I went back to the drawing board on this. \r\n\r\n Something like below may make more sense.\r\n\r\n In pg_stat_progress_vacuum, introduce 2 new columns:\r\n\r\n 1. total_index_vacuum - total # of indexes to vacuum\r\n 2. max_cycle_time - the time in seconds of the longest index cycle. 
\r\n\r\n Introduce another view called pg_stat_progress_vacuum_index_cycle:\r\n\r\n postgres=# \\d pg_stat_progress_vacuum_index_cycle\r\n View \"public.pg_stat_progress_vacuum_worker\"\r\n Column | Type | Collation | Nullable | Default\r\n ----------------+---------+-----------+----------+---------\r\n pid | integer | | |\t\t\t\t<<<-- the PID of the vacuum worker ( or leader if it's doing index vacuuming )\r\n leader_pid | bigint | | |\t\t\t\t<<<-- the leader PID to allow this view to be joined back to pg_stat_progress_vacuum\r\n indrelid | bigint | | |\t\t\t\t<<<- the index relid of the index being vacuumed\r\n ordinal_position | bigint | | |\t\t\t\t<<<- the processing position, which will give an idea of the processing position of the index being vacuumed. \r\n dead_tuples_removed | bigint | |\t\t\t\t<<<- the number of dead rows removed in the current cycle for the index.\r\n\r\n Having this information, one can\r\n\r\n 1. Determine which index is being vacuumed. For monitoring tools, this can help identify the index that accounts for most of the index vacuuming time.\r\n 2. Having the processing order of the current index will allow the user to determine how many of the total indexes has been completed in the current cycle.\r\n 3. dead_tuples_removed will show progress on the index vacuum in the current cycle.\r\n 4. the max_cycle_time will give an idea on how long the longest index cycle took for the current vacuum operation.\r\n\r\n\r\n On 12/23/21, 2:46 AM, \"Masahiko Sawada\" <sawada.mshk@gmail.com> wrote:\r\n\r\n CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\r\n\r\n\r\n\r\n On Tue, Dec 21, 2021 at 3:37 AM Peter Geoghegan <pg@bowt.ie> wrote:\r\n >\r\n > On Wed, Dec 15, 2021 at 2:10 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n > > nitpick: Shouldn't index_blks_scanned be index_blks_vacuumed? 
IMO it\r\n > > is more analogous to heap_blks_vacuumed.\r\n >\r\n > +1.\r\n >\r\n > > This will tell us which indexes are currently being vacuumed and the\r\n > > current progress of those operations, but it doesn't tell us which\r\n > > indexes have already been vacuumed or which ones are pending vacuum.\r\n >\r\n > VACUUM will process a table's indexes in pg_class OID order (outside\r\n > of parallel VACUUM, I suppose). See comments about sort order above\r\n > RelationGetIndexList().\r\n\r\n Right.\r\n\r\n >\r\n > Anyway, it might be useful to add ordinal numbers to each index, that\r\n > line up with this processing/OID order. It would also be reasonable to\r\n > display the same number in log_autovacuum* (and VACUUM VERBOSE)\r\n > per-index output, to reinforce the idea. Note that we don't\r\n > necessarily display a distinct line for each distinct index in this\r\n > log output, which is why including the ordinal number there makes\r\n > sense.\r\n\r\n An alternative idea would be to show the number of indexes on the\r\n table and the number of indexes that have been processed in the\r\n leader's entry of pg_stat_progress_vacuum. Even in parallel vacuum\r\n cases, since we have index vacuum status for each index it would not\r\n be hard for the leader process to count how many indexes have been\r\n processed.\r\n\r\n Regarding the details of the progress of index vacuum, I'm not sure\r\n this progress information can fit for pg_stat_progress_vacuum. 
As\r\n Peter already mentioned, the behavior quite varies depending on index\r\n AM.\r\n\r\n Regards,\r\n\r\n\r\n --\r\n Masahiko Sawada\r\n EDB: https://www.enterprisedb.com/", "msg_date": "Wed, 29 Dec 2021 16:44:31 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "http://cfbot.cputube.org/sami-imseih.html\nYou should run \"make check\" and update rules.out.\n\nYou should also use make check-world - usually something like:\nmake check-world -j4 >check-world.out 2>&1 ; echo ret $?\n\n> indrelid: The relid of the index currently being vacuumed\n\nI think it should be called indexrelid not indrelid, for consistency with\npg_index.\n\n> S.param10 vacuum_cycle_ordinal_position,\n> S.param13 index_rows_vacuumed\n\nThese should both say \"AS\" for consistency.\n\nsystem_views.sql is using tabs, but should use spaces for consistency.\n\n> #include \"commands/progress.h\"\n\nThe postgres convention is to alphabetize the includes.\n\n> /* VACCUM operation's longest index scan cycle */\n\nVACCUM => VACUUM\n\nUltimately you'll also need to update the docs.\n\n\n", "msg_date": "Wed, 29 Dec 2021 11:51:21 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Attaching the latest revision of the patch with the fixes suggested. Also ran make check and make check-world successfully.\r\n\r\n\r\nOn 12/29/21, 11:51 AM, \"Justin Pryzby\" <pryzby@telsasoft.com> wrote:\r\n\r\n CAUTION: This email originated from outside of the organization. 
Do not click links or open attachments unless you can confirm the sender and know the content is safe.\r\n\r\n\r\n\r\n http://cfbot.cputube.org/sami-imseih.html\r\n You should run \"make check\" and update rules.out.\r\n\r\n You should also use make check-world - usually something like:\r\n make check-world -j4 >check-world.out 2>&1 ; echo ret $?\r\n\r\n > indrelid: The relid of the index currently being vacuumed\r\n\r\n I think it should be called indexrelid not indrelid, for consistency with\r\n pg_index.\r\n\r\n > S.param10 vacuum_cycle_ordinal_position,\r\n > S.param13 index_rows_vacuumed\r\n\r\n These should both say \"AS\" for consistency.\r\n\r\n system_views.sql is using tabs, but should use spaces for consistency.\r\n\r\n > #include \"commands/progress.h\"\r\n\r\n The postgres convention is to alphabetize the includes.\r\n\r\n > /* VACCUM operation's longest index scan cycle */\r\n\r\n VACCUM => VACUUM\r\n\r\n Ultimately you'll also need to update the docs.", "msg_date": "Fri, 31 Dec 2021 05:59:15 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On 12/29/21, 8:44 AM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n> In \"pg_stat_progress_vacuum\", introduce 2 columns:\r\n>\r\n> * total_index_vacuum : This is the # of indexes that will be vacuumed. Keep in mind that if failsafe mode kicks in mid-flight to the vacuum, Postgres may choose to forgo index scans. This value will be adjusted accordingly.\r\n> * max_index_vacuum_cycle_time : The total elapsed time for a index vacuum cycle is calculated and this value will be updated to reflect the longest vacuum cycle. Until the first cycle completes, this value will be 0. The purpose of this column is to give the user an idea of how long an index vacuum cycle takes to complete.\r\n\r\nI think that total_index_vacuum is a good thing to have. 
I would\r\nexpect this to usually just be the number of indexes on the table, but\r\nas you pointed out, this can be different when we are skipping\r\nindexes. My only concern with this new column is the potential for\r\nconfusion when compared with the index_vacuum_count value.\r\nindex_vacuum_count indicates the number of vacuum cycles completed,\r\nbut total_index_vacuum indicates the number of indexes that will be\r\nvacuumed. However, the names sound like they could refer to the same\r\nthing to me. Perhaps we should rename index_vacuum_count to\r\nindex_vacuum_cycles/index_vacuum_cycle_count, and the new column\r\nshould be something like num_indexes_to_vacuum or index_vacuum_total.\r\n\r\nI don't think we need the max_index_vacuum_cycle_time column. While\r\nthe idea is to give users a rough estimate for how long an index cycle\r\nwill take, I don't think it will help generate any meaningful\r\nestimates for how much longer the vacuum operation will take. IIUC we\r\nwon't have any idea how many total index vacuum cycles will be needed.\r\nEven if we did, the current cycle could take much more or much less\r\ntime. Also, none of the other progress views seem to provide any\r\ntiming information, which I suspect is by design to avoid inaccurate\r\nestimates.\r\n\r\n> Introduce a new view called \"pg_stat_progress_vacuum_index\". This view will track the progress of a worker ( or leader PID ) while it's vacuuming an index. It will expose some key columns:\r\n>\r\n> * pid: The PID of the worker process\r\n>\r\n> * leader_pid: The PID of the leader process. This is the column that can be joined with \"pg_stat_progress_vacuum\". leader_pid and pid can have the same value as a leader can also perform an index vacuum.\r\n>\r\n> * indrelid: The relid of the index currently being vacuumed\r\n>\r\n> * vacuum_cycle_ordinal_position: The processing position of the index being vacuumed. 
This can be useful to determine how many indexes out of the total indexes ( pg_stat_progress_vacuum.total_index_vacuum ) have been vacuumed\r\n>\r\n> * index_tuples_vacuumed: This is the number of index tuples vacuumed for the index overall. This is useful to show that the vacuum is actually doing work, as the # of tuples keeps increasing. \r\n\r\nShould we also provide some information for determining the progress\r\nof the current cycle? Perhaps there should be an\r\nindex_tuples_vacuumed_current_cycle column that users can compare with\r\nthe num_dead_tuples value in pg_stat_progress_vacuum. However,\r\nperhaps the number of tuples vacuumed in the current cycle can already\r\nbe discovered via index_tuples_vacuumed % max_dead_tuples.\r\n\r\n+void\r\n+rusage_adjust(const PGRUsage *ru0, PGRUsage *ru1)\r\n+{\r\n+\tif (ru1->tv.tv_usec < ru0->tv.tv_usec)\r\n+\t{\r\n+\t\tru1->tv.tv_sec--;\r\n+\t\tru1->tv.tv_usec += 1000000;\r\n+\t}\r\n+\tif (ru1->ru.ru_stime.tv_usec < ru0->ru.ru_stime.tv_usec)\r\n+\t{\r\n+\t\tru1->ru.ru_stime.tv_sec--;\r\n+\t\tru1->ru.ru_stime.tv_usec += 1000000;\r\n+\t}\r\n+\tif (ru1->ru.ru_utime.tv_usec < ru0->ru.ru_utime.tv_usec)\r\n+\t{\r\n+\t\tru1->ru.ru_utime.tv_sec--;\r\n+\t\tru1->ru.ru_utime.tv_usec += 1000000;\r\n+\t}\r\n+}\r\n\r\nI think this function could benefit from a comment. Without going\r\nthrough it line by line, it is not clear to me exactly what it is\r\ndoing.\r\n\r\nI know we're still working on what exactly this stuff should look\r\nlike, but I would suggest adding the documentation changes in the near\r\nfuture.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 6 Jan 2022 20:41:30 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Thanks for the review.\r\n\r\nI am hesitant to make column name changes for obvious reasons, as it breaks existing tooling. 
However, I think there is a really good case to change \"index_vacuum_count\", as the name is confusing. \"index_vacuum_cycles_completed\" is the name I suggest if we agree to rename.\r\n\r\nFor the new column, \"num_indexes_to_vacuum\" is good with me. \r\n\r\nAs far as max_index_vacuum_cycle_time goes, besides the points you make, another reason to remove it is that until one cycle completes, this value will remain at 0. It will not be helpful data for most vacuum cases. Removing it also reduces the complexity of the patch. \r\n\r\n\r\n\r\n\r\nOn 1/6/22, 2:41 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n\r\n On 12/29/21, 8:44 AM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n > In \"pg_stat_progress_vacuum\", introduce 2 columns:\r\n >\r\n > * total_index_vacuum : This is the # of indexes that will be vacuumed. Keep in mind that if failsafe mode kicks in mid-flight to the vacuum, Postgres may choose to forgo index scans. This value will be adjusted accordingly.\r\n > * max_index_vacuum_cycle_time : The total elapsed time for a index vacuum cycle is calculated and this value will be updated to reflect the longest vacuum cycle. Until the first cycle completes, this value will be 0. The purpose of this column is to give the user an idea of how long an index vacuum cycle takes to complete.\r\n\r\n I think that total_index_vacuum is a good thing to have. I would\r\n expect this to usually just be the number of indexes on the table, but\r\n as you pointed out, this can be different when we are skipping\r\n indexes. My only concern with this new column is the potential for\r\n confusion when compared with the index_vacuum_count value.\r\n index_vacuum_count indicates the number of vacuum cycles completed,\r\n but total_index_vacuum indicates the number of indexes that will be\r\n vacuumed. However, the names sound like they could refer to the same\r\n thing to me. 
Perhaps we should rename index_vacuum_count to\r\n index_vacuum_cycles/index_vacuum_cycle_count, and the new column\r\n should be something like num_indexes_to_vacuum or index_vacuum_total.\r\n\r\n I don't think we need the max_index_vacuum_cycle_time column. While\r\n the idea is to give users a rough estimate for how long an index cycle\r\n will take, I don't think it will help generate any meaningful\r\n estimates for how much longer the vacuum operation will take. IIUC we\r\n won't have any idea how many total index vacuum cycles will be needed.\r\n Even if we did, the current cycle could take much more or much less\r\n time. Also, none of the other progress views seem to provide any\r\n timing information, which I suspect is by design to avoid inaccurate\r\n estimates.\r\n\r\n > Introduce a new view called \"pg_stat_progress_vacuum_index\". This view will track the progress of a worker ( or leader PID ) while it's vacuuming an index. It will expose some key columns:\r\n >\r\n > * pid: The PID of the worker process\r\n >\r\n > * leader_pid: The PID of the leader process. This is the column that can be joined with \"pg_stat_progress_vacuum\". leader_pid and pid can have the same value as a leader can also perform an index vacuum.\r\n >\r\n > * indrelid: The relid of the index currently being vacuumed\r\n >\r\n > * vacuum_cycle_ordinal_position: The processing position of the index being vacuumed. This can be useful to determine how many indexes out of the total indexes ( pg_stat_progress_vacuum.total_index_vacuum ) have been vacuumed\r\n >\r\n > * index_tuples_vacuumed: This is the number of index tuples vacuumed for the index overall. This is useful to show that the vacuum is actually doing work, as the # of tuples keeps increasing. \r\n\r\n Should we also provide some information for determining the progress\r\n of the current cycle? 
Perhaps there should be an\r\n index_tuples_vacuumed_current_cycle column that users can compare with\r\n the num_dead_tuples value in pg_stat_progress_vacuum. However,\r\n perhaps the number of tuples vacuumed in the current cycle can already\r\n be discovered via index_tuples_vacuumed % max_dead_tuples.\r\n\r\n +void\r\n +rusage_adjust(const PGRUsage *ru0, PGRUsage *ru1)\r\n +{\r\n +\tif (ru1->tv.tv_usec < ru0->tv.tv_usec)\r\n +\t{\r\n +\t\tru1->tv.tv_sec--;\r\n +\t\tru1->tv.tv_usec += 1000000;\r\n +\t}\r\n +\tif (ru1->ru.ru_stime.tv_usec < ru0->ru.ru_stime.tv_usec)\r\n +\t{\r\n +\t\tru1->ru.ru_stime.tv_sec--;\r\n +\t\tru1->ru.ru_stime.tv_usec += 1000000;\r\n +\t}\r\n +\tif (ru1->ru.ru_utime.tv_usec < ru0->ru.ru_utime.tv_usec)\r\n +\t{\r\n +\t\tru1->ru.ru_utime.tv_sec--;\r\n +\t\tru1->ru.ru_utime.tv_usec += 1000000;\r\n +\t}\r\n +}\r\n\r\n I think this function could benefit from a comment. Without going\r\n through it line by line, it is not clear to me exactly what it is\r\n doing.\r\n\r\n I know we're still working on what exactly this stuff should look\r\n like, but I would suggest adding the documentation changes in the near\r\n future.\r\n\r\n Nathan\r\n\r\n\r\n", "msg_date": "Fri, 7 Jan 2022 02:14:36 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On 1/6/22, 6:14 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n> I am hesitant to make column name changes for obvious reasons, as it breaks existing tooling. However, I think there is a really good case to change \"index_vacuum_count\" as the name is confusing. \"index_vacuum_cycles_completed\" is the name I suggest if we agree to rename.\r\n>\r\n> For the new column, \"num_indexes_to_vacuum\" is good with me. \r\n\r\nYeah, I think we can skip renaming index_vacuum_count for now. 
In any\r\ncase, it would probably be good to discuss that in a separate thread.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 10 Jan 2022 18:30:09 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "I agree; renaming \"index_vacuum_count\" can be taken up in a separate discussion.\r\n\r\nI have attached the 3rd revision of the patch which also includes the documentation changes. Also attached is a rendered html of the docs for review.\r\n\r\n\"max_index_vacuum_cycle_time\" has been removed.\r\n\"index_rows_vacuumed\" renamed to \"index_tuples_removed\". \"tuples\" is more consistent with the terminology used.\r\n\"vacuum_cycle_ordinal_position\" renamed to \"index_ordinal_position\".\r\n\r\nOn 1/10/22, 12:30 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n\r\n On 1/6/22, 6:14 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n > I am hesitant to make column name changes for obvious reasons, as it breaks existing tooling. However, I think there is a really good case to change \"index_vacuum_count\" as the name is confusing. \"index_vacuum_cycles_completed\" is the name I suggest if we agree to rename.\r\n >\r\n > For the new column, \"num_indexes_to_vacuum\" is good with me. \r\n\r\n Yeah, I think we can skip renaming index_vacuum_count for now. In any\r\n case, it would probably be good to discuss that in a separate thread.\r\n\r\n Nathan", "msg_date": "Tue, 11 Jan 2022 01:01:20 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On 1/10/22, 5:01 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n> I have attached the 3rd revision of the patch which also includes the documentation changes. 
Also attached is a rendered html of the docs for review.\r\n>\r\n> \"max_index_vacuum_cycle_time\" has been removed.\r\n> \"index_rows_vacuumed\" renamed to \"index_tuples_removed\". \"tuples\" is a more consistent with the terminology used.\r\n> \"vacuum_cycle_ordinal_position\" renamed to \"index_ordinal_position\".\r\n\r\nThanks for the new version of the patch!\r\n\r\nnitpick: I get one whitespace error when applying the patch.\r\n\r\n Applying: Expose progress for the \"vacuuming indexes\" phase of a VACUUM operation.\r\n .git/rebase-apply/patch:44: tab in indent.\r\n Whenever <xref linkend=\"guc-vacuum-failsafe-age\"/> is triggered, index\r\n warning: 1 line adds whitespace errors.\r\n\r\n+ <row>\r\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n+ <structfield>num_indexes_to_vacuum</structfield> <type>bigint</type>\r\n+ </para>\r\n+ <para>\r\n+ The number of indexes that will be vacuumed. Only indexes with\r\n+ <literal>pg_index.indisready</literal> set to \"true\" will be vacuumed.\r\n+ Whenever <xref linkend=\"guc-vacuum-failsafe-age\"/> is triggered, index\r\n+ vacuuming will be bypassed.\r\n+ </para></entry>\r\n+ </row>\r\n+ </tbody>\r\n+ </tgroup>\r\n+ </table>\r\n\r\nWe may want to avoid exhaustively listing the cases when this value\r\nwill be zero. I would suggest saying, \"When index cleanup is skipped,\r\nthis value will be zero\" instead.\r\n\r\n+ <row>\r\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n+ <structfield>relid</structfield> <type>oid</type>\r\n+ </para>\r\n+ <para>\r\n+ OID of the table being vacuumed.\r\n+ </para></entry>\r\n+ </row>\r\n\r\nDo we need to include this field? I would expect indexrelid to go\r\nhere.\r\n\r\n+ <row>\r\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n+ <structfield>leader_pid</structfield> <type>bigint</type>\r\n+ </para>\r\n+ <para>\r\n+ Process ID of the parallel group leader. 
This field is <literal>NULL</literal>\r\n+ if this process is a parallel group leader or the\r\n+ <literal>vacuuming indexes</literal> phase is not performed in parallel.\r\n+ </para></entry>\r\n+ </row>\r\n\r\nAre there cases where the parallel group leader will have an entry in\r\nthis view when parallelism is enabled?\r\n\r\n+ <row>\r\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n+ <structfield>index_ordinal_position</structfield> <type>bigint</type>\r\n+ </para>\r\n+ <para>\r\n+ The order in which the index is being vacuumed. Indexes are vacuumed by OID in ascending order.\r\n+ </para></entry>\r\n+ </row>\r\n\r\nShould we include the bit about the OID ordering? I suppose that is\r\nunlikely to change in the near future, but I don't know if it is\r\nrelevant information. Also, do we need to include the \"index_\"\r\nprefix? This view is specific for indexes. (I have the same question\r\nfor index_tuples_removed.)\r\n\r\nShould this new table go after the \"VACUUM phases\" table? 
It might\r\nmake sense to keep the phases table closer to where it is referenced.\r\n\r\n+ /* Advertise the number of indexes to vacuum if we are not in failsafe mode */\r\n+ if (!lazy_check_wraparound_failsafe(vacrel))\r\n+ pgstat_progress_update_param(PROGRESS_VACUUM_TOTAL_INDEX_VACUUM, vacrel->nindexes);\r\n\r\nShouldn't this be 0 when INDEX_CLEANUP is off, too?\r\n\r\n+#define PROGRESS_VACUUM_CURRENT_INDRELID 7\r\n+#define PROGRESS_VACUUM_LEADER_PID 8\r\n+#define PROGRESS_VACUUM_INDEX_ORDINAL 9\r\n+#define PROGRESS_VACUUM_TOTAL_INDEX_VACUUM 10\r\n+#define PROGRESS_VACUUM_DEAD_TUPLES_VACUUMED 11\r\n\r\nnitpick: I would suggest the following names to match the existing\r\nstyle:\r\n\r\n PROGRESS_VACUUM_NUM_INDEXES_TO_VACUUM\r\n PROGRESS_VACUUM_INDEX_LEADER_PID\r\n PROGRESS_VACUUM_INDEX_INDEXRELID\r\n PROGRESS_VACUUM_INDEX_ORDINAL_POSITION\r\n PROGRESS_VACUUM_INDEX_TUPLES_REMOVED\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 11 Jan 2022 19:01:37 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On 1/11/22, 1:01 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n\r\n On 1/10/22, 5:01 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n > I have attached the 3rd revision of the patch which also includes the documentation changes. Also attached is a rendered html of the docs for review.\r\n >\r\n > \"max_index_vacuum_cycle_time\" has been removed.\r\n > \"index_rows_vacuumed\" renamed to \"index_tuples_removed\". 
\"tuples\" is a more consistent with the terminology used.\r\n > \"vacuum_cycle_ordinal_position\" renamed to \"index_ordinal_position\".\r\n\r\n Thanks for the new version of the patch!\r\n\r\n nitpick: I get one whitespace error when applying the patch.\r\n\r\n Applying: Expose progress for the \"vacuuming indexes\" phase of a VACUUM operation.\r\n .git/rebase-apply/patch:44: tab in indent.\r\n Whenever <xref linkend=\"guc-vacuum-failsafe-age\"/> is triggered, index\r\n warning: 1 line adds whitespace errors.\r\n\r\nThat was missed. Will fix it.\r\n\r\n + <row>\r\n + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n + <structfield>num_indexes_to_vacuum</structfield> <type>bigint</type>\r\n + </para>\r\n + <para>\r\n + The number of indexes that will be vacuumed. Only indexes with\r\n + <literal>pg_index.indisready</literal> set to \"true\" will be vacuumed.\r\n + Whenever <xref linkend=\"guc-vacuum-failsafe-age\"/> is triggered, index\r\n + vacuuming will be bypassed.\r\n + </para></entry>\r\n + </row>\r\n + </tbody>\r\n + </tgroup>\r\n + </table>\r\n\r\n We may want to avoid exhaustively listing the cases when this value\r\n will be zero. I would suggest saying, \"When index cleanup is skipped,\r\n this value will be zero\" instead.\r\n\r\nWhat about something like \"The number of indexes that are eligible for vacuuming\".\r\nThis covers the cases where either an individual index is skipped or the entire \"index vacuuming\" phase is skipped.\r\n\r\n + <row>\r\n + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n + <structfield>relid</structfield> <type>oid</type>\r\n + </para>\r\n + <para>\r\n + OID of the table being vacuumed.\r\n + </para></entry>\r\n + </row>\r\n\r\n Do we need to include this field? I would expect indexrelid to go\r\n here.\r\n\r\nHaving indexrelid and relid makes the pg_stat_progress_vacuum_index view \"self-contained\". 
A user can lookup the index and table being vacuumed without joining back to pg_stat_progress_vacuum.\r\n\r\n + <row>\r\n + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n + <structfield>leader_pid</structfield> <type>bigint</type>\r\n + </para>\r\n + <para>\r\n + Process ID of the parallel group leader. This field is <literal>NULL</literal>\r\n + if this process is a parallel group leader or the\r\n + <literal>vacuuming indexes</literal> phase is not performed in parallel.\r\n + </para></entry>\r\n + </row>\r\n\r\n Are there cases where the parallel group leader will have an entry in\r\n this view when parallelism is enabled?\r\n\r\nYes. A parallel group leader can perform an index vacuum just like a parallel worker. If you do something like \"vacuum (parallel 3) \", you may have up to 4 processes vacuuming indexes. The leader + 3 workers. \r\n\r\n + <row>\r\n + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n + <structfield>index_ordinal_position</structfield> <type>bigint</type>\r\n + </para>\r\n + <para>\r\n + The order in which the index is being vacuumed. Indexes are vacuumed by OID in ascending order.\r\n + </para></entry>\r\n + </row>\r\n\r\n Should we include the bit about the OID ordering? I suppose that is\r\n unlikely to change in the near future, but I don't know if it is\r\n relevant information. Also, do we need to include the \"index_\"\r\n prefix? This view is specific for indexes. (I have the same question\r\n for index_tuples_removed.)\r\n\r\nI was on the fence about both of these as well. Will make a change to this.\r\n\r\n Should this new table go after the \"VACUUM phases\" table? It might\r\n make sense to keep the phases table closer to where it is referenced.\r\n\r\nI did not think that would read better. 
The introduction discusses both views, and the \"phase\" table is linked from the pg_stat_progress_vacuum documentation.\r\n\r\n + /* Advertise the number of indexes to vacuum if we are not in failsafe mode */\r\n + if (!lazy_check_wraparound_failsafe(vacrel))\r\n + pgstat_progress_update_param(PROGRESS_VACUUM_TOTAL_INDEX_VACUUM, vacrel->nindexes);\r\n\r\n Shouldn't this be 0 when INDEX_CLEANUP is off, too?\r\n\r\nThis view only covers the \"vacuum index\" phase, but it should also cover the index_cleanup phase. Will update the patch.\r\n\r\n +#define PROGRESS_VACUUM_CURRENT_INDRELID 7\r\n +#define PROGRESS_VACUUM_LEADER_PID 8\r\n +#define PROGRESS_VACUUM_INDEX_ORDINAL 9\r\n +#define PROGRESS_VACUUM_TOTAL_INDEX_VACUUM 10\r\n +#define PROGRESS_VACUUM_DEAD_TUPLES_VACUUMED 11\r\n\r\n nitpick: I would suggest the following names to match the existing\r\n style:\r\n\r\n PROGRESS_VACUUM_NUM_INDEXES_TO_VACUUM\r\n PROGRESS_VACUUM_INDEX_LEADER_PID\r\n PROGRESS_VACUUM_INDEX_INDEXRELID\r\n PROGRESS_VACUUM_INDEX_ORDINAL_POSITION\r\n PROGRESS_VACUUM_INDEX_TUPLES_REMOVED\r\n\r\nThat looks better.\r\n\r\n Nathan\r\n\r\n\r\n", "msg_date": "Tue, 11 Jan 2022 20:33:16 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On 1/11/22, 12:33 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n> What about something like \"The number of indexes that are eligible for vacuuming\".\r\n> This covers the cases where either an individual index is skipped or the entire \"index vacuuming\" phase is skipped.\r\n\r\nHm. I don't know if \"eligible\" is the right word. An index can be\r\neligible for vacuuming but skipped because we set INDEX_CLEANUP to\r\nfalse. 
Maybe we should just stick with \"The number of indexes that\r\nwill be vacuumed.\" The only thing we may want to clarify is whether\r\nthis value will change in some cases (e.g., vacuum failsafe takes\r\neffect).\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 11 Jan 2022 22:18:24 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "(We had better avoid top-posting[1])\n\n\nOn Tue, Jan 11, 2022 at 10:01 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> I agree, Renaming \"index_vacuum_count\" can be taken up in a separate discussion.\n>\n> I have attached the 3rd revision of the patch which also includes the documentation changes. Also attached is a rendered html of the docs for review.\n\nThank you for updating the patch!\n\nRegarding the new pg_stat_progress_vacuum_index view, why do we need\nto have a separate view? Users will have to check two views. If this\nview is expected to be used together with and joined to\npg_stat_progress_vacuum, why don't we provide one view that has full\ninformation from the beginning? 
Especially, I think it's not useful\nthat the total number of indexes to vacuum (num_indexes_to_vacuum\ncolumn) and the current number of indexes that have been vacuumed\n(index_ordinal_position column) are shown in separate views.\n\nAlso, I’m not sure how useful index_tuples_removed is; what can we\ninfer from this value (without a total number)?\n\nRegards,\n\n[1] https://en.wikipedia.org/wiki/Posting_style#Top-posting\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 12 Jan 2022 16:44:37 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On 1/11/22, 11:46 PM, \"Masahiko Sawada\" <sawada.mshk@gmail.com> wrote:\r\n> Regarding the new pg_stat_progress_vacuum_index view, why do we need\r\n> to have a separate view? Users will have to check two views. If this\r\n> view is expected to be used together with and joined to\r\n> pg_stat_progress_vacuum, why don't we provide one view that has full\r\n> information from the beginning? 
Especially, I think it's not useful\r\n> that the total number of indexes to vacuum (num_indexes_to_vacuum\r\n> column) and the current number of indexes that have been vacuumed\r\n> (index_ordinal_position column) are shown in separate views.\r\n\r\nI suppose we could add all of the new columns to\r\npg_stat_progress_vacuum and just set columns to NULL as appropriate.\r\nBut is that really better than having a separate view?\r\n\r\n> Also, I’m not sure how useful index_tuples_removed is; what can we\r\n> infer from this value (without a total number)?\r\n\r\nI think the idea was that you can compare it against max_dead_tuples\r\nand num_dead_tuples to get an estimate of the current cycle progress.\r\nOtherwise, it just shows that progress is being made.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/7874FB21-FAA5-49BD-8386-2866552656C7%40amazon.com\r\n\r\n", "msg_date": "Wed, 12 Jan 2022 19:28:23 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On 1/12/22, 1:28 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n\r\n On 1/11/22, 11:46 PM, \"Masahiko Sawada\" <sawada.mshk@gmail.com> wrote:\r\n > Regarding the new pg_stat_progress_vacuum_index view, why do we need\r\n > to have a separate view? Users will have to check two views. If this\r\n > view is expected to be used together with and joined to\r\n > pg_stat_progress_vacuum, why don't we provide one view that has full\r\n > information from the beginning? 
Especially, I think it's not useful\r\n > that the total number of indexes to vacuum (num_indexes_to_vacuum\r\n > column) and the current number of indexes that have been vacuumed\r\n > (index_ordinal_position column) are shown in separate views.\r\n\r\n > I suppose we could add all of the new columns to\r\n > pg_stat_progress_vacuum and just set columns to NULL as appropriate.\r\n > But is that really better than having a separate view?\r\n\r\nTo add, since a vacuum can utilize parallel worker processes + the main vacuum process to perform index vacuuming, it made sense to separate the backends doing index vacuum/cleanup in a separate view. \r\nBesides what Nathan suggested, the only other clean option I can think of is to perhaps create a json column in pg_stat_progress_vacuum which will include all the new fields. My concern with this approach is that it would hurt usability, since users would have to flatten the json themselves.\r\n\r\n > Also, I’m not sure how useful index_tuples_removed is; what can we\r\n > infer from this value (without a total number)?\r\n\r\n > I think the idea was that you can compare it against max_dead_tuples\r\n > and num_dead_tuples to get an estimate of the current cycle progress.\r\n > Otherwise, it just shows that progress is being made.\r\n\r\nThe main purpose is to really show that the \"index vacuum\" phase is actually making progress. Note that for certain types of indexes, e.g. GIN and GiST, the number of tuples_removed will end up exceeding the number of num_dead_tuples.\r\n\r\n Nathan\r\n\r\n [0] https://postgr.es/m/7874FB21-FAA5-49BD-8386-2866552656C7%40amazon.com", "msg_date": "Thu, 13 Jan 2022 03:52:46 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Attached is the latest patch and associated documentation.\r\n\r\nThis version addresses the index_ordinal_position column confusion. 
Rather than displaying the index position, the pg_stat_progress_vacuum view now has 2 new columns:\r\nindex_total - this column will show the total number of indexes to be vacuumed\r\nindex_complete_count - this column will show the total number of indexes processed so far. In order to deal with parallel vacuums, the parallel_workers ( planned workers ) value had to be exposed and each backend performing an index vacuum/cleanup in parallel had to advertise the number of indexes it vacuumed/cleaned. The # of indexes vacuumed for the parallel cleanup can then be derived from the pg_stat_progress_vacuum view. \r\n\r\npostgres=# \\d pg_stat_progress_vacuum\r\n View \"pg_catalog.pg_stat_progress_vacuum\"\r\n Column | Type | Collation | Nullable | Default\r\n----------------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n relid | oid | | |\r\n phase | text | | |\r\n heap_blks_total | bigint | | |\r\n heap_blks_scanned | bigint | | |\r\n heap_blks_vacuumed | bigint | | |\r\n index_vacuum_count | bigint | | |\r\n max_dead_tuples | bigint | | |\r\n num_dead_tuples | bigint | | |\r\n index_total | bigint | | | <<<---------------------\r\n index_complete_count | numeric | | | <<<---------------------\r\n\r\nThe pg_stat_progress_vacuum_index view includes:\r\n\r\nIndexrelid - the currently vacuumed index\r\nLeader_pid - the pid of the leader process. NULL if the process is the leader or vacuum is not parallel\r\ntuples_removed - the number of index tuples removed. 
The user can use this column to see that the index vacuum has movement.\r\n\r\npostgres=# \\d pg_stat_progress_vacuum_index\r\n View \"pg_catalog.pg_stat_progress_vacuum_index\"\r\n Column | Type | Collation | Nullable | Default\r\n----------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n indexrelid | bigint | | |\r\n phase | text | | |\r\n leader_pid | bigint | | |\r\n tuples_removed | bigint | | |\r\n\r\n\r\n\r\nOn 1/12/22, 9:52 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n On 1/12/22, 1:28 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n\r\n On 1/11/22, 11:46 PM, \"Masahiko Sawada\" <sawada.mshk@gmail.com> wrote:\r\n > Regarding the new pg_stat_progress_vacuum_index view, why do we need\r\n > to have a separate view? Users will have to check two views. If this\r\n > view is expected to be used together with and joined to\r\n > pg_stat_progress_vacuum, why don't we provide one view that has full\r\n > information from the beginning? Especially, I think it's not useful\r\n > that the total number of indexes to vacuum (num_indexes_to_vacuum\r\n > column) and the current number of indexes that have been vacuumed\r\n > (index_ordinal_position column) are shown in separate views.\r\n\r\n > I suppose we could add all of the new columns to\r\n > pg_stat_progress_vacuum and just set columns to NULL as appropriate.\r\n > But is that really better than having a separate view?\r\n\r\n To add, since a vacuum can utilize parallel worker processes + the main vacuum process to perform index vacuuming, it made sense to separate the backends doing index vacuum/cleanup in a separate view. \r\n Besides what Nathan suggested, the only other clean option I can think of is to perhaps create a json column in pg_stat_progress_vacuum which will include all the new fields. 
My concern with this approach is that it will make usability, to flatten the json, difficult for users.\r\n\r\n > Also, I’m not sure how useful index_tuples_removed is; what can we\r\n > infer from this value (without a total number)?\r\n\r\n > I think the idea was that you can compare it against max_dead_tuples\r\n > and num_dead_tuples to get an estimate of the current cycle progress.\r\n > Otherwise, it just shows that progress is being made.\r\n\r\n The main purpose is to really show that the \"index vacuum\" phase is actually making progress. Note that for certain types of indexes, i.e. GIN/GIST the number of tuples_removed will end up exceeding the number of num_dead_tuples.\r\n\r\n Nathan\r\n\r\n [0] https://postgr.es/m/7874FB21-FAA5-49BD-8386-2866552656C7%40amazon.com", "msg_date": "Thu, 27 Jan 2022 02:07:51 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "My apologies. The last attachment of documentation was the wrong file. Attached is the correct documentation file.\r\n\r\nThanks \r\n\r\nOn 1/26/22, 8:07 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n Attached is the latest patch and associated documentation.\r\n\r\n This version addresses the index_ordinal_position column confusion. Rather than displaying the index position, the pg_stat_progress_vacuum view now has 2 new column(s):\r\n index_total - this column will show the total number of indexes to be vacuumed\r\n index_complete_count - this column will show the total number of indexes processed so far. In order to deal with the parallel vacuums, the parallel_workers ( planned workers ) value had to be exposed and each backends performing an index vacuum/cleanup in parallel had to advertise the number of indexes it vacuumed/cleaned. The # of indexes vacuumed for the parallel cleanup can then be derived the pg_stat_progress_vacuum view. 
\r\n\r\n postgres=# \\d pg_stat_progress_vacuum\r\n View \"pg_catalog.pg_stat_progress_vacuum\"\r\n Column | Type | Collation | Nullable | Default\r\n ----------------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n relid | oid | | |\r\n phase | text | | |\r\n heap_blks_total | bigint | | |\r\n heap_blks_scanned | bigint | | |\r\n heap_blks_vacuumed | bigint | | |\r\n index_vacuum_count | bigint | | |\r\n max_dead_tuples | bigint | | |\r\n num_dead_tuples | bigint | | |\r\n index_total | bigint | | |. <<<---------------------\r\n index_complete_count | numeric | | |. <<<---------------------\r\n\r\n The pg_stat_progress_vacuum_index view includes:\r\n\r\n Indexrelid - the currently vacuumed index\r\n Leader_pid - the pid of the leader process. NULL if the process is the leader or vacuum is not parallel\r\n tuples_removed - the amount of indexes tuples removed. The user can use this column to see that the index vacuum has movement.\r\n\r\n postgres=# \\d pg_stat_progress_vacuum_index\r\n View \"pg_catalog.pg_stat_progress_vacuum_index\"\r\n Column | Type | Collation | Nullable | Default\r\n ----------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n indexrelid | bigint | | |\r\n phase | text | | |\r\n leader_pid | bigint | | |\r\n tuples_removed | bigint | | |\r\n\r\n\r\n\r\n On 1/12/22, 9:52 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n On 1/12/22, 1:28 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n\r\n On 1/11/22, 11:46 PM, \"Masahiko Sawada\" <sawada.mshk@gmail.com> wrote:\r\n > Regarding the new pg_stat_progress_vacuum_index view, why do we need\r\n > to have a separate view? Users will have to check two views. 
If this\r\n > view is expected to be used together with and joined to\r\n > pg_stat_progress_vacuum, why don't we provide one view that has full\r\n > information from the beginning? Especially, I think it's not useful\r\n > that the total number of indexes to vacuum (num_indexes_to_vacuum\r\n > column) and the current number of indexes that have been vacuumed\r\n > (index_ordinal_position column) are shown in separate views.\r\n\r\n > I suppose we could add all of the new columns to\r\n > pg_stat_progress_vacuum and just set columns to NULL as appropriate.\r\n > But is that really better than having a separate view?\r\n\r\n To add, since a vacuum can utilize parallel worker processes + the main vacuum process to perform index vacuuming, it made sense to separate the backends doing index vacuum/cleanup in a separate view. \r\n Besides what Nathan suggested, the only other clean option I can think of is to perhaps create a json column in pg_stat_progress_vacuum which will include all the new fields. My concern with this approach is that it will make usability, to flatten the json, difficult for users.\r\n\r\n > Also, I’m not sure how useful index_tuples_removed is; what can we\r\n > infer from this value (without a total number)?\r\n\r\n > I think the idea was that you can compare it against max_dead_tuples\r\n > and num_dead_tuples to get an estimate of the current cycle progress.\r\n > Otherwise, it just shows that progress is being made.\r\n\r\n The main purpose is to really show that the \"index vacuum\" phase is actually making progress. Note that for certain types of indexes, i.e. 
GIN/GIST the number of tuples_removed will end up exceeding the number of num_dead_tuples.\r\n\r\n Nathan\r\n\r\n [0] https://postgr.es/m/7874FB21-FAA5-49BD-8386-2866552656C7%40amazon.com", "msg_date": "Thu, 27 Jan 2022 02:15:16 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Resending patch as I see the last attachment was not annotated to the commitfest entry.\r\n\r\nOn 1/26/22, 8:07 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n Attached is the latest patch and associated documentation.\r\n\r\n This version addresses the index_ordinal_position column confusion. Rather than displaying the index position, the pg_stat_progress_vacuum view now has 2 new column(s):\r\n index_total - this column will show the total number of indexes to be vacuumed\r\n index_complete_count - this column will show the total number of indexes processed so far. In order to deal with the parallel vacuums, the parallel_workers ( planned workers ) value had to be exposed and each backends performing an index vacuum/cleanup in parallel had to advertise the number of indexes it vacuumed/cleaned. The # of indexes vacuumed for the parallel cleanup can then be derived the pg_stat_progress_vacuum view. \r\n\r\n postgres=# \\d pg_stat_progress_vacuum\r\n View \"pg_catalog.pg_stat_progress_vacuum\"\r\n Column | Type | Collation | Nullable | Default\r\n ----------------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n relid | oid | | |\r\n phase | text | | |\r\n heap_blks_total | bigint | | |\r\n heap_blks_scanned | bigint | | |\r\n heap_blks_vacuumed | bigint | | |\r\n index_vacuum_count | bigint | | |\r\n max_dead_tuples | bigint | | |\r\n num_dead_tuples | bigint | | |\r\n index_total | bigint | | |. <<<---------------------\r\n index_complete_count | numeric | | |. 
<<<---------------------\r\n\r\n The pg_stat_progress_vacuum_index view includes:\r\n\r\n Indexrelid - the currently vacuumed index\r\n Leader_pid - the pid of the leader process. NULL if the process is the leader or vacuum is not parallel\r\n tuples_removed - the amount of indexes tuples removed. The user can use this column to see that the index vacuum has movement.\r\n\r\n postgres=# \\d pg_stat_progress_vacuum_index\r\n View \"pg_catalog.pg_stat_progress_vacuum_index\"\r\n Column | Type | Collation | Nullable | Default\r\n ----------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n indexrelid | bigint | | |\r\n phase | text | | |\r\n leader_pid | bigint | | |\r\n tuples_removed | bigint | | |\r\n\r\n\r\n\r\n On 1/12/22, 9:52 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n On 1/12/22, 1:28 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n\r\n On 1/11/22, 11:46 PM, \"Masahiko Sawada\" <sawada.mshk@gmail.com> wrote:\r\n > Regarding the new pg_stat_progress_vacuum_index view, why do we need\r\n > to have a separate view? Users will have to check two views. If this\r\n > view is expected to be used together with and joined to\r\n > pg_stat_progress_vacuum, why don't we provide one view that has full\r\n > information from the beginning? 
Especially, I think it's not useful\r\n > that the total number of indexes to vacuum (num_indexes_to_vacuum\r\n > column) and the current number of indexes that have been vacuumed\r\n > (index_ordinal_position column) are shown in separate views.\r\n\r\n > I suppose we could add all of the new columns to\r\n > pg_stat_progress_vacuum and just set columns to NULL as appropriate.\r\n > But is that really better than having a separate view?\r\n\r\n To add, since a vacuum can utilize parallel worker processes + the main vacuum process to perform index vacuuming, it made sense to separate the backends doing index vacuum/cleanup in a separate view. \r\n Besides what Nathan suggested, the only other clean option I can think of is to perhaps create a json column in pg_stat_progress_vacuum which will include all the new fields. My concern with this approach is that it will make usability, to flatten the json, difficult for users.\r\n\r\n > Also, I’m not sure how useful index_tuples_removed is; what can we\r\n > infer from this value (without a total number)?\r\n\r\n > I think the idea was that you can compare it against max_dead_tuples\r\n > and num_dead_tuples to get an estimate of the current cycle progress.\r\n > Otherwise, it just shows that progress is being made.\r\n\r\n The main purpose is to really show that the \"index vacuum\" phase is actually making progress. Note that for certain types of indexes, i.e. 
GIN/GIST the number of tuples_removed will end up exceeding the number of num_dead_tuples.\r\n\r\n Nathan\r\n\r\n [0] https://postgr.es/m/7874FB21-FAA5-49BD-8386-2866552656C7%40amazon.com", "msg_date": "Thu, 27 Jan 2022 21:09:05 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "After speaking with Nathan offline, a few changes have been made to the patch.\r\n\r\nAs mentioned earlier in the thread, tracking how many indexes are processed in PARALLEL vacuum mode is not very straightforward since only the workers or leader process have the ability to inspect the Vacuum shared parallel state. \r\n\r\nThe latest version of the patch introduces a shared memory area to track indexes vacuumed/cleaned by each worker ( or leader ) in a PARALLEL vacuum. In order to present this data in the pg_stat_progress_vacuum view, the value of the new column \"indexes_processed\" is retrieved from shared memory by pg_stat_get_progress_info. For non-parallel vacuums, the value of \"indexes_processed\" is retrieved from the backend progress array directly. 
\r\n\r\nThe patch also includes the changes to implement the new view pg_stat_progress_vacuum_index which exposes the index being vacuumed/cleaned up.\r\n\r\npostgres=# \\d+ pg_stat_progress_vacuum ;\r\n View \"pg_catalog.pg_stat_progress_vacuum\"\r\n Column | Type | Collation | Nullable | Default | Storage | Description\r\n--------------------+---------+-----------+----------+---------+----------+-------------\r\n pid | integer | | | | plain |\r\n datid | oid | | | | plain |\r\n datname | name | | | | plain |\r\n relid | oid | | | | plain |\r\n phase | text | | | | extended |\r\n heap_blks_total | bigint | | | | plain |\r\n heap_blks_scanned | bigint | | | | plain |\r\n heap_blks_vacuumed | bigint | | | | plain |\r\n index_vacuum_count | bigint | | | | plain |\r\n max_dead_tuples | bigint | | | | plain |\r\n num_dead_tuples | bigint | | | | plain |\r\n indexes_total | bigint | | | | plain | <<<-- new column\r\n indexes_processed | bigint | | | | plain | <<<-- new column\r\n\r\n\r\n<<<--- new view --->>>\r\n\r\npostgres=# \\d pg_stat_progress_vacuum_index\r\n View \"pg_catalog.pg_stat_progress_vacuum_index\"\r\n Column | Type | Collation | Nullable | Default\r\n----------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n indexrelid | bigint | | |\r\n leader_pid | bigint | | |\r\n phase | text | | |\r\n tuples_removed | bigint | | |\r\n\r\n\r\n\r\nOn 1/26/22, 8:07 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n Attached is the latest patch and associated documentation.\r\n\r\n This version addresses the index_ordinal_position column confusion. Rather than displaying the index position, the pg_stat_progress_vacuum view now has 2 new column(s):\r\n index_total - this column will show the total number of indexes to be vacuumed\r\n index_complete_count - this column will show the total number of indexes processed so far. 
In order to deal with the parallel vacuums, the parallel_workers (planned workers) value had to be exposed and each backend performing an index vacuum/cleanup in parallel had to advertise the number of indexes it vacuumed/cleaned. The number of indexes vacuumed for the parallel cleanup can then be derived from the pg_stat_progress_vacuum view. \r\n\r\n postgres=# \\d pg_stat_progress_vacuum\r\n View \"pg_catalog.pg_stat_progress_vacuum\"\r\n Column | Type | Collation | Nullable | Default\r\n ----------------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n relid | oid | | |\r\n phase | text | | |\r\n heap_blks_total | bigint | | |\r\n heap_blks_scanned | bigint | | |\r\n heap_blks_vacuumed | bigint | | |\r\n index_vacuum_count | bigint | | |\r\n max_dead_tuples | bigint | | |\r\n num_dead_tuples | bigint | | |\r\n index_total | bigint | | |. <<<---------------------\r\n index_complete_count | numeric | | |. <<<---------------------\r\n\r\n The pg_stat_progress_vacuum_index view includes:\r\n\r\n Indexrelid - the currently vacuumed index\r\n Leader_pid - the pid of the leader process. NULL if the process is the leader or vacuum is not parallel\r\n tuples_removed - the number of index tuples removed. 
The user can use this column to see that the index vacuum has movement.\r\n\r\n postgres=# \\d pg_stat_progress_vacuum_index\r\n View \"pg_catalog.pg_stat_progress_vacuum_index\"\r\n Column | Type | Collation | Nullable | Default\r\n ----------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n indexrelid | bigint | | |\r\n phase | text | | |\r\n leader_pid | bigint | | |\r\n tuples_removed | bigint | | |\r\n\r\n\r\n\r\n On 1/12/22, 9:52 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n On 1/12/22, 1:28 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n\r\n On 1/11/22, 11:46 PM, \"Masahiko Sawada\" <sawada.mshk@gmail.com> wrote:\r\n > Regarding the new pg_stat_progress_vacuum_index view, why do we need\r\n > to have a separate view? Users will have to check two views. If this\r\n > view is expected to be used together with and joined to\r\n > pg_stat_progress_vacuum, why don't we provide one view that has full\r\n > information from the beginning? Especially, I think it's not useful\r\n > that the total number of indexes to vacuum (num_indexes_to_vacuum\r\n > column) and the current number of indexes that have been vacuumed\r\n > (index_ordinal_position column) are shown in separate views.\r\n\r\n > I suppose we could add all of the new columns to\r\n > pg_stat_progress_vacuum and just set columns to NULL as appropriate.\r\n > But is that really better than having a separate view?\r\n\r\n To add, since a vacuum can utilize parallel worker processes + the main vacuum process to perform index vacuuming, it made sense to separate the backends doing index vacuum/cleanup in a separate view. \r\n Besides what Nathan suggested, the only other clean option I can think of is to perhaps create a json column in pg_stat_progress_vacuum which will include all the new fields. 
My concern with this approach is that it will make usability, to flatten the json, difficult for users.\r\n\r\n > Also, I’m not sure how useful index_tuples_removed is; what can we\r\n > infer from this value (without a total number)?\r\n\r\n > I think the idea was that you can compare it against max_dead_tuples\r\n > and num_dead_tuples to get an estimate of the current cycle progress.\r\n > Otherwise, it just shows that progress is being made.\r\n\r\n The main purpose is to really show that the \"index vacuum\" phase is actually making progress. Note that for certain types of indexes, i.e. GIN/GIST the number of tuples_removed will end up exceeding the number of num_dead_tuples.\r\n\r\n Nathan\r\n\r\n [0] https://postgr.es/m/7874FB21-FAA5-49BD-8386-2866552656C7%40amazon.com", "msg_date": "Tue, 1 Feb 2022 20:33:16 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Attached is the latest version of the patch to deal with the changes in the recent commit aa64f23b02924724eafbd9eadbf26d85df30a12b\r\n\r\nOn 2/1/22, 2:32 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n After speaking with Nathan offline, A few changes have been made to the patch.\r\n\r\n As mentioned earlier in the thread, tracking how many indexes are processed in PARALLEL vacuum mode is not very straightforward since only the workers or leader process have ability to inspect the Vacuum shared parallel state. \r\n\r\n The latest version of the patch introduces a shared memory to track indexes vacuumed/cleaned by each worker ( or leader ) in a PARALLEL vacuum. In order to present this data in the pg_stat_progress_vacuum view, the value of the new column \"indexes_processed\" is retrieved from shared memory by pg_stat_get_progress_info. For non-parallel vacuums, the value of \"indexes_processed\" is retrieved from the backend progress array directly. 
\r\n\r\n The patch also includes the changes to implement the new view pg_stat_progress_vacuum_index which exposes the index being vacuumed/cleaned up.\r\n\r\n postgres=# \\d+ pg_stat_progress_vacuum ;\r\n View \"pg_catalog.pg_stat_progress_vacuum\"\r\n Column | Type | Collation | Nullable | Default | Storage | Description\r\n --------------------+---------+-----------+----------+---------+----------+-------------\r\n pid | integer | | | | plain |\r\n datid | oid | | | | plain |\r\n datname | name | | | | plain |\r\n relid | oid | | | | plain |\r\n phase | text | | | | extended |\r\n heap_blks_total | bigint | | | | plain |\r\n heap_blks_scanned | bigint | | | | plain |\r\n heap_blks_vacuumed | bigint | | | | plain |\r\n index_vacuum_count | bigint | | | | plain |\r\n max_dead_tuples | bigint | | | | plain |\r\n num_dead_tuples | bigint | | | | plain |\r\n indexes_total | bigint | | | | plain | <<<-- new column\r\n indexes_processed | bigint | | | | plain | <<<-- new column\r\n\r\n\r\n <<<--- new view --->>>\r\n\r\n postgres=# \\d pg_stat_progress_vacuum_index\r\n View \"pg_catalog.pg_stat_progress_vacuum_index\"\r\n Column | Type | Collation | Nullable | Default\r\n ----------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n indexrelid | bigint | | |\r\n leader_pid | bigint | | |\r\n phase | text | | |\r\n tuples_removed | bigint | | |\r\n\r\n\r\n\r\n On 1/26/22, 8:07 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n Attached is the latest patch and associated documentation.\r\n\r\n This version addresses the index_ordinal_position column confusion. Rather than displaying the index position, the pg_stat_progress_vacuum view now has 2 new column(s):\r\n index_total - this column will show the total number of indexes to be vacuumed\r\n index_complete_count - this column will show the total number of indexes processed so far. 
In order to deal with the parallel vacuums, the parallel_workers ( planned workers ) value had to be exposed and each backends performing an index vacuum/cleanup in parallel had to advertise the number of indexes it vacuumed/cleaned. The # of indexes vacuumed for the parallel cleanup can then be derived the pg_stat_progress_vacuum view. \r\n\r\n postgres=# \\d pg_stat_progress_vacuum\r\n View \"pg_catalog.pg_stat_progress_vacuum\"\r\n Column | Type | Collation | Nullable | Default\r\n ----------------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n relid | oid | | |\r\n phase | text | | |\r\n heap_blks_total | bigint | | |\r\n heap_blks_scanned | bigint | | |\r\n heap_blks_vacuumed | bigint | | |\r\n index_vacuum_count | bigint | | |\r\n max_dead_tuples | bigint | | |\r\n num_dead_tuples | bigint | | |\r\n index_total | bigint | | |. <<<---------------------\r\n index_complete_count | numeric | | |. <<<---------------------\r\n\r\n The pg_stat_progress_vacuum_index view includes:\r\n\r\n Indexrelid - the currently vacuumed index\r\n Leader_pid - the pid of the leader process. NULL if the process is the leader or vacuum is not parallel\r\n tuples_removed - the amount of indexes tuples removed. 
The user can use this column to see that the index vacuum has movement.\r\n\r\n postgres=# \\d pg_stat_progress_vacuum_index\r\n View \"pg_catalog.pg_stat_progress_vacuum_index\"\r\n Column | Type | Collation | Nullable | Default\r\n ----------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n indexrelid | bigint | | |\r\n phase | text | | |\r\n leader_pid | bigint | | |\r\n tuples_removed | bigint | | |\r\n\r\n\r\n\r\n On 1/12/22, 9:52 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n On 1/12/22, 1:28 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n\r\n On 1/11/22, 11:46 PM, \"Masahiko Sawada\" <sawada.mshk@gmail.com> wrote:\r\n > Regarding the new pg_stat_progress_vacuum_index view, why do we need\r\n > to have a separate view? Users will have to check two views. If this\r\n > view is expected to be used together with and joined to\r\n > pg_stat_progress_vacuum, why don't we provide one view that has full\r\n > information from the beginning? Especially, I think it's not useful\r\n > that the total number of indexes to vacuum (num_indexes_to_vacuum\r\n > column) and the current number of indexes that have been vacuumed\r\n > (index_ordinal_position column) are shown in separate views.\r\n\r\n > I suppose we could add all of the new columns to\r\n > pg_stat_progress_vacuum and just set columns to NULL as appropriate.\r\n > But is that really better than having a separate view?\r\n\r\n To add, since a vacuum can utilize parallel worker processes + the main vacuum process to perform index vacuuming, it made sense to separate the backends doing index vacuum/cleanup in a separate view. \r\n Besides what Nathan suggested, the only other clean option I can think of is to perhaps create a json column in pg_stat_progress_vacuum which will include all the new fields. 
My concern with this approach is that it will make usability, to flatten the json, difficult for users.\r\n\r\n > Also, I’m not sure how useful index_tuples_removed is; what can we\r\n > infer from this value (without a total number)?\r\n\r\n > I think the idea was that you can compare it against max_dead_tuples\r\n > and num_dead_tuples to get an estimate of the current cycle progress.\r\n > Otherwise, it just shows that progress is being made.\r\n\r\n The main purpose is to really show that the \"index vacuum\" phase is actually making progress. Note that for certain types of indexes, i.e. GIN/GIST the number of tuples_removed will end up exceeding the number of num_dead_tuples.\r\n\r\n Nathan\r\n\r\n [0] https://postgr.es/m/7874FB21-FAA5-49BD-8386-2866552656C7%40amazon.com", "msg_date": "Thu, 10 Feb 2022 19:39:56 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "The change has been broken up as 3 separate patches.\r\n\r\n0007-Expose-progress-for-the-vacuuming-indexes-and-cleani.patch - Introduces 2 new columns to pg_stat_progress_vacuum, indexes_total and indexes_processed. These 2 columns will provide progress on the index vacuuming/cleanup.\r\n0001-Expose-the-index-being-processed-in-the-vacuuming-in.patch - Introduces a new view called pg_stat_progress_vacuum_index. This view tracks the index being vacuumed/cleaned and the total number of index tuples removed.\r\n0001-Rename-index_vacuum_count-to-index_vacuum_cycle_coun.patch - Renames the existing index_vacuum_count to index_vacuum_cycle_count in pg_stat_progress_vacuum. 
Due to the other changes, it makes sense to include \"cycle\" in the column name to be crystal clear that the column refers to the index cycle count.\r\n\r\nThanks\r\n\r\nOn 2/10/22, 1:39 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n Attached is the latest version of the patch to deal with the changes in the recent commit aa64f23b02924724eafbd9eadbf26d85df30a12b\r\n\r\n On 2/1/22, 2:32 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n After speaking with Nathan offline, A few changes have been made to the patch.\r\n\r\n As mentioned earlier in the thread, tracking how many indexes are processed in PARALLEL vacuum mode is not very straightforward since only the workers or leader process have ability to inspect the Vacuum shared parallel state. \r\n\r\n The latest version of the patch introduces a shared memory to track indexes vacuumed/cleaned by each worker ( or leader ) in a PARALLEL vacuum. In order to present this data in the pg_stat_progress_vacuum view, the value of the new column \"indexes_processed\" is retrieved from shared memory by pg_stat_get_progress_info. For non-parallel vacuums, the value of \"indexes_processed\" is retrieved from the backend progress array directly. 
\r\n\r\n The patch also includes the changes to implement the new view pg_stat_progress_vacuum_index which exposes the index being vacuumed/cleaned up.\r\n\r\n postgres=# \\d+ pg_stat_progress_vacuum ;\r\n View \"pg_catalog.pg_stat_progress_vacuum\"\r\n Column | Type | Collation | Nullable | Default | Storage | Description\r\n --------------------+---------+-----------+----------+---------+----------+-------------\r\n pid | integer | | | | plain |\r\n datid | oid | | | | plain |\r\n datname | name | | | | plain |\r\n relid | oid | | | | plain |\r\n phase | text | | | | extended |\r\n heap_blks_total | bigint | | | | plain |\r\n heap_blks_scanned | bigint | | | | plain |\r\n heap_blks_vacuumed | bigint | | | | plain |\r\n index_vacuum_count | bigint | | | | plain |\r\n max_dead_tuples | bigint | | | | plain |\r\n num_dead_tuples | bigint | | | | plain |\r\n indexes_total | bigint | | | | plain | <<<-- new column\r\n indexes_processed | bigint | | | | plain | <<<-- new column\r\n\r\n\r\n <<<--- new view --->>>\r\n\r\n postgres=# \\d pg_stat_progress_vacuum_index\r\n View \"pg_catalog.pg_stat_progress_vacuum_index\"\r\n Column | Type | Collation | Nullable | Default\r\n ----------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n indexrelid | bigint | | |\r\n leader_pid | bigint | | |\r\n phase | text | | |\r\n tuples_removed | bigint | | |\r\n\r\n\r\n\r\n On 1/26/22, 8:07 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n Attached is the latest patch and associated documentation.\r\n\r\n This version addresses the index_ordinal_position column confusion. Rather than displaying the index position, the pg_stat_progress_vacuum view now has 2 new column(s):\r\n index_total - this column will show the total number of indexes to be vacuumed\r\n index_complete_count - this column will show the total number of indexes processed so far. 
In order to deal with the parallel vacuums, the parallel_workers ( planned workers ) value had to be exposed and each backends performing an index vacuum/cleanup in parallel had to advertise the number of indexes it vacuumed/cleaned. The # of indexes vacuumed for the parallel cleanup can then be derived the pg_stat_progress_vacuum view. \r\n\r\n postgres=# \\d pg_stat_progress_vacuum\r\n View \"pg_catalog.pg_stat_progress_vacuum\"\r\n Column | Type | Collation | Nullable | Default\r\n ----------------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n relid | oid | | |\r\n phase | text | | |\r\n heap_blks_total | bigint | | |\r\n heap_blks_scanned | bigint | | |\r\n heap_blks_vacuumed | bigint | | |\r\n index_vacuum_count | bigint | | |\r\n max_dead_tuples | bigint | | |\r\n num_dead_tuples | bigint | | |\r\n index_total | bigint | | |. <<<---------------------\r\n index_complete_count | numeric | | |. <<<---------------------\r\n\r\n The pg_stat_progress_vacuum_index view includes:\r\n\r\n Indexrelid - the currently vacuumed index\r\n Leader_pid - the pid of the leader process. NULL if the process is the leader or vacuum is not parallel\r\n tuples_removed - the amount of indexes tuples removed. 
The user can use this column to see that the index vacuum has movement.\r\n\r\n postgres=# \\d pg_stat_progress_vacuum_index\r\n View \"pg_catalog.pg_stat_progress_vacuum_index\"\r\n Column | Type | Collation | Nullable | Default\r\n ----------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n indexrelid | bigint | | |\r\n phase | text | | |\r\n leader_pid | bigint | | |\r\n tuples_removed | bigint | | |\r\n\r\n\r\n\r\n On 1/12/22, 9:52 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n On 1/12/22, 1:28 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n\r\n On 1/11/22, 11:46 PM, \"Masahiko Sawada\" <sawada.mshk@gmail.com> wrote:\r\n > Regarding the new pg_stat_progress_vacuum_index view, why do we need\r\n > to have a separate view? Users will have to check two views. If this\r\n > view is expected to be used together with and joined to\r\n > pg_stat_progress_vacuum, why don't we provide one view that has full\r\n > information from the beginning? Especially, I think it's not useful\r\n > that the total number of indexes to vacuum (num_indexes_to_vacuum\r\n > column) and the current number of indexes that have been vacuumed\r\n > (index_ordinal_position column) are shown in separate views.\r\n\r\n > I suppose we could add all of the new columns to\r\n > pg_stat_progress_vacuum and just set columns to NULL as appropriate.\r\n > But is that really better than having a separate view?\r\n\r\n To add, since a vacuum can utilize parallel worker processes + the main vacuum process to perform index vacuuming, it made sense to separate the backends doing index vacuum/cleanup in a separate view. \r\n Besides what Nathan suggested, the only other clean option I can think of is to perhaps create a json column in pg_stat_progress_vacuum which will include all the new fields. 
My concern with this approach is that it will make usability, to flatten the json, difficult for users.\r\n\r\n > Also, I’m not sure how useful index_tuples_removed is; what can we\r\n > infer from this value (without a total number)?\r\n\r\n > I think the idea was that you can compare it against max_dead_tuples\r\n > and num_dead_tuples to get an estimate of the current cycle progress.\r\n > Otherwise, it just shows that progress is being made.\r\n\r\n The main purpose is to really show that the \"index vacuum\" phase is actually making progress. Note that for certain types of indexes, i.e. GIN/GIST the number of tuples_removed will end up exceeding the number of num_dead_tuples.\r\n\r\n Nathan\r\n\r\n [0] https://postgr.es/m/7874FB21-FAA5-49BD-8386-2866552656C7%40amazon.com", "msg_date": "Thu, 17 Feb 2022 13:52:23 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> The change has been broken up as 3 separate patches.\r\n\r\n> 0007-Expose-progress-for-the-vacuuming-indexes-and-cleani.patch - Introduces 2 new columns to pg_stat_progress_vacuum, indexes_total and indexes_processed. These 2 columns will provide progress on the index vacuuming/cleanup.\r\n > 0001-Expose-the-index-being-processed-in-the-vacuuming-in.patch - Introduces a new view called pg_stat_prgoress_vacuum_index. This view tracks the index being vacuumed/cleaned and the total number of index tuples removed.\r\n > 0001-Rename-index_vacuum_count-to-index_vacuum_cycle_coun.patch - Renames the existing index_vacuum_count to index_vacuum_cycle_count in pg_stat_progress_vacuum. 
Due to the other changes, it makes sense to include \"cycle\" in the column name to be crystal clear that the column refers to the index cycle count.\r\n\r\n > Thanks\r\n\r\nSending again with patch files renamed to ensure correct apply order.\r\n\r\n On 2/10/22, 1:39 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n Attached is the latest version of the patch to deal with the changes in the recent commit aa64f23b02924724eafbd9eadbf26d85df30a12b\r\n\r\n On 2/1/22, 2:32 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n After speaking with Nathan offline, A few changes have been made to the patch.\r\n\r\n As mentioned earlier in the thread, tracking how many indexes are processed in PARALLEL vacuum mode is not very straightforward since only the workers or leader process have ability to inspect the Vacuum shared parallel state. \r\n\r\n The latest version of the patch introduces a shared memory to track indexes vacuumed/cleaned by each worker ( or leader ) in a PARALLEL vacuum. In order to present this data in the pg_stat_progress_vacuum view, the value of the new column \"indexes_processed\" is retrieved from shared memory by pg_stat_get_progress_info. For non-parallel vacuums, the value of \"indexes_processed\" is retrieved from the backend progress array directly. 
\r\n\r\n The patch also includes the changes to implement the new view pg_stat_progress_vacuum_index which exposes the index being vacuumed/cleaned up.\r\n\r\n postgres=# \\d+ pg_stat_progress_vacuum ;\r\n View \"pg_catalog.pg_stat_progress_vacuum\"\r\n Column | Type | Collation | Nullable | Default | Storage | Description\r\n --------------------+---------+-----------+----------+---------+----------+-------------\r\n pid | integer | | | | plain |\r\n datid | oid | | | | plain |\r\n datname | name | | | | plain |\r\n relid | oid | | | | plain |\r\n phase | text | | | | extended |\r\n heap_blks_total | bigint | | | | plain |\r\n heap_blks_scanned | bigint | | | | plain |\r\n heap_blks_vacuumed | bigint | | | | plain |\r\n index_vacuum_count | bigint | | | | plain |\r\n max_dead_tuples | bigint | | | | plain |\r\n num_dead_tuples | bigint | | | | plain |\r\n indexes_total | bigint | | | | plain | <<<-- new column\r\n indexes_processed | bigint | | | | plain | <<<-- new column\r\n\r\n\r\n <<<--- new view --->>>\r\n\r\n postgres=# \\d pg_stat_progress_vacuum_index\r\n View \"pg_catalog.pg_stat_progress_vacuum_index\"\r\n Column | Type | Collation | Nullable | Default\r\n ----------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n indexrelid | bigint | | |\r\n leader_pid | bigint | | |\r\n phase | text | | |\r\n tuples_removed | bigint | | |\r\n\r\n\r\n\r\n On 1/26/22, 8:07 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n Attached is the latest patch and associated documentation.\r\n\r\n This version addresses the index_ordinal_position column confusion. Rather than displaying the index position, the pg_stat_progress_vacuum view now has 2 new column(s):\r\n index_total - this column will show the total number of indexes to be vacuumed\r\n index_complete_count - this column will show the total number of indexes processed so far. 
In order to deal with the parallel vacuums, the parallel_workers ( planned workers ) value had to be exposed and each backends performing an index vacuum/cleanup in parallel had to advertise the number of indexes it vacuumed/cleaned. The # of indexes vacuumed for the parallel cleanup can then be derived the pg_stat_progress_vacuum view. \r\n\r\n postgres=# \\d pg_stat_progress_vacuum\r\n View \"pg_catalog.pg_stat_progress_vacuum\"\r\n Column | Type | Collation | Nullable | Default\r\n ----------------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n relid | oid | | |\r\n phase | text | | |\r\n heap_blks_total | bigint | | |\r\n heap_blks_scanned | bigint | | |\r\n heap_blks_vacuumed | bigint | | |\r\n index_vacuum_count | bigint | | |\r\n max_dead_tuples | bigint | | |\r\n num_dead_tuples | bigint | | |\r\n index_total | bigint | | |. <<<---------------------\r\n index_complete_count | numeric | | |. <<<---------------------\r\n\r\n The pg_stat_progress_vacuum_index view includes:\r\n\r\n Indexrelid - the currently vacuumed index\r\n Leader_pid - the pid of the leader process. NULL if the process is the leader or vacuum is not parallel\r\n tuples_removed - the amount of indexes tuples removed. 
The user can use this column to see that the index vacuum has movement.\r\n\r\n postgres=# \\d pg_stat_progress_vacuum_index\r\n View \"pg_catalog.pg_stat_progress_vacuum_index\"\r\n Column | Type | Collation | Nullable | Default\r\n ----------------+---------+-----------+----------+---------\r\n pid | integer | | |\r\n datid | oid | | |\r\n datname | name | | |\r\n indexrelid | bigint | | |\r\n phase | text | | |\r\n leader_pid | bigint | | |\r\n tuples_removed | bigint | | |\r\n\r\n\r\n\r\n On 1/12/22, 9:52 PM, \"Imseih (AWS), Sami\" <simseih@amazon.com> wrote:\r\n\r\n On 1/12/22, 1:28 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n\r\n On 1/11/22, 11:46 PM, \"Masahiko Sawada\" <sawada.mshk@gmail.com> wrote:\r\n > Regarding the new pg_stat_progress_vacuum_index view, why do we need\r\n > to have a separate view? Users will have to check two views. If this\r\n > view is expected to be used together with and joined to\r\n > pg_stat_progress_vacuum, why don't we provide one view that has full\r\n > information from the beginning? Especially, I think it's not useful\r\n > that the total number of indexes to vacuum (num_indexes_to_vacuum\r\n > column) and the current number of indexes that have been vacuumed\r\n > (index_ordinal_position column) are shown in separate views.\r\n\r\n > I suppose we could add all of the new columns to\r\n > pg_stat_progress_vacuum and just set columns to NULL as appropriate.\r\n > But is that really better than having a separate view?\r\n\r\n To add, since a vacuum can utilize parallel worker processes + the main vacuum process to perform index vacuuming, it made sense to separate the backends doing index vacuum/cleanup in a separate view. \r\n Besides what Nathan suggested, the only other clean option I can think of is to perhaps create a json column in pg_stat_progress_vacuum which will include all the new fields. 
My concern with this approach is that it will make usability, to flatten the json, difficult for users.\r\n\r\n > Also, I’m not sure how useful index_tuples_removed is; what can we\r\n > infer from this value (without a total number)?\r\n\r\n > I think the idea was that you can compare it against max_dead_tuples\r\n > and num_dead_tuples to get an estimate of the current cycle progress.\r\n > Otherwise, it just shows that progress is being made.\r\n\r\n The main purpose is to really show that the \"index vacuum\" phase is actually making progress. Note that for certain types of indexes, i.e. GIN/GIST the number of tuples_removed will end up exceeding the number of num_dead_tuples.\r\n\r\n Nathan\r\n\r\n [0] https://postgr.es/m/7874FB21-FAA5-49BD-8386-2866552656C7%40amazon.com", "msg_date": "Mon, 21 Feb 2022 19:03:39 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Mon, Feb 21, 2022 at 07:03:39PM +0000, Imseih (AWS), Sami wrote:\n> Sending again with patch files renamed to ensure correct apply order.\n\nI haven't had a chance to test this too much, but I did look through the\npatch set and have a couple of small comments.\n\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>indexes_total</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ The number of indexes to be processed in the\n+ <literal>vacuuming indexes</literal> or <literal>cleaning up indexes</literal> phase\n+ of the vacuum.\n+ </para></entry>\n+ </row>\n+\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>indexes_processed</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ The number of indexes processed in the\n+ <literal>vacuuming indexes</literal> or <literal>cleaning up indexes</literal> phase.\n+ At the start of an index vacuum cycle, this value is set to 
<literal>0</literal>.\n+ </para></entry>\n+ </row>\n\nWill these be set to 0 for failsafe vacuums and vacuums with INDEX_CLEANUP\nturned off?\n\n+typedef struct VacWorkerProgressInfo\n+{\n+ int num_vacuums; /* number of active VACUUMS with parallel workers */\n+ int max_vacuums; /* max number of VACUUMS with parallel workers */\n+ slock_t mutex;\n+ VacOneWorkerProgressInfo vacuums[FLEXIBLE_ARRAY_MEMBER];\n+} VacWorkerProgressInfo;\n\nmax_vacuums appears to just be a local copy of MaxBackends. Does this\ninformation really need to be stored here? Also, is there a strong reason\nfor using a spinlock instead of an LWLock?\n\n+void\n+vacuum_worker_end(int leader_pid)\n+{\n+ SpinLockAcquire(&vacworkerprogress->mutex);\n+ for (int i = 0; i < vacworkerprogress->num_vacuums; i++)\n+ {\n+ VacOneWorkerProgressInfo *vac = &vacworkerprogress->vacuums[i];\n+\n+ if (vac->leader_pid == leader_pid)\n+ {\n+ *vac = vacworkerprogress->vacuums[vacworkerprogress->num_vacuums - 1];\n+ vacworkerprogress->num_vacuums--;\n+ SpinLockRelease(&vacworkerprogress->mutex);\n+ break;\n+ }\n+ }\n+ SpinLockRelease(&vacworkerprogress->mutex);\n+}\n\nI see this loop pattern in a couple of places, and it makes me wonder if\nthis information would fit more naturally in a hash table.\n\n+ if (callback)\n+ callback(values, 3);\n\nWhy does this need to be set up as a callback function? Could we just call\nthe function if cmdtype == PROGRESS_COMMAND_VACUUM? 
ISTM that is pretty\nmuch all this is doing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 21 Feb 2022 11:09:10 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": " + <row>\r\n + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n + <structfield>indexes_total</structfield> <type>bigint</type>\r\n + </para>\r\n + <para>\r\n + The number of indexes to be processed in the\r\n + <literal>vacuuming indexes</literal> or <literal>cleaning up indexes</literal> phase\r\n + of the vacuum.\r\n + </para></entry>\r\n + </row>\r\n +\r\n + <row>\r\n + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n + <structfield>indexes_processed</structfield> <type>bigint</type>\r\n + </para>\r\n + <para>\r\n + The number of indexes processed in the\r\n + <literal>vacuuming indexes</literal> or <literal>cleaning up indexes</literal> phase.\r\n + At the start of an index vacuum cycle, this value is set to <literal>0</literal>.\r\n + </para></entry>\r\n + </row>\r\n\r\n > Will these be set to 0 for failsafe vacuums and vacuums with INDEX_CLEANUP\r\n > turned off?\r\n\r\nIf the failsafe kicks in midway through a vacuum, indexes_total will not be reset to 0. If INDEX_CLEANUP is turned off, then the value will be 0 at the start of the vacuum.\r\n\r\n +typedef struct VacWorkerProgressInfo\r\n +{\r\n + int num_vacuums; /* number of active VACUUMS with parallel workers */\r\n + int max_vacuums; /* max number of VACUUMS with parallel workers */\r\n + slock_t mutex;\r\n + VacOneWorkerProgressInfo vacuums[FLEXIBLE_ARRAY_MEMBER];\r\n +} VacWorkerProgressInfo;\r\n\r\n > max_vacuums appears to just be a local copy of MaxBackends. Does this\r\n > information really need to be stored here? 
Also, is there a strong reason\r\n > for using a spinlock instead of an LWLock?\r\n\r\nFirst, the BTVacInfo code in backend/access/nbtree/nbtutils.c inspired this, so I wanted to follow this pattern. With that said, I do see max_vacuums being redundant here, and I am inclined to replace it with a MaxBackends() call. \r\n\r\nSecond, there is no strong reason to use a spinlock here except that I incorrectly assumed it would be better for this case. After reading more about this and reading up on src/backend/storage/lmgr/README, an LWLock will be better.\r\n\r\n +void\r\n +vacuum_worker_end(int leader_pid)\r\n +{\r\n + SpinLockAcquire(&vacworkerprogress->mutex);\r\n + for (int i = 0; i < vacworkerprogress->num_vacuums; i++)\r\n + {\r\n + VacOneWorkerProgressInfo *vac = &vacworkerprogress->vacuums[i];\r\n +\r\n + if (vac->leader_pid == leader_pid)\r\n + {\r\n + *vac = vacworkerprogress->vacuums[vacworkerprogress->num_vacuums - 1];\r\n + vacworkerprogress->num_vacuums--;\r\n + SpinLockRelease(&vacworkerprogress->mutex);\r\n + break;\r\n + }\r\n + }\r\n + SpinLockRelease(&vacworkerprogress->mutex);\r\n +}\r\n\r\n > I see this loop pattern in a couple of places, and it makes me wonder if\r\n > this information would fit more naturally in a hash table.\r\n\r\nI followed the pattern in backend/access/nbtree/nbtutils.c for this as well. Using dynahash may make sense here if it simplifies the code. Will look.\r\n\r\n + if (callback)\r\n + callback(values, 3);\r\n\r\n > Why does this need to be set up as a callback function? Could we just call\r\n > the function if cmdtype == PROGRESS_COMMAND_VACUUM? ISTM that is pretty\r\n > much all this is doing.\r\n\r\nThe intention will be for the caller to set the callback early on in the function using the existing \" if (pg_strcasecmp(cmd, \"VACUUM\") == 0), etc.\" statement. 
This way we avoid having to add another if/else block before tuplestore_putvalues is called.\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services\r\n\r\n", "msg_date": "Wed, 23 Feb 2022 18:02:08 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Wed, Feb 23, 2022 at 10:02 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n> If the failsafe kicks in midway through a vacuum, the number indexes_total will not be reset to 0. If INDEX_CLEANUP is turned off, then the value will be 0 at the start of the vacuum.\n\nThe way that this works with num_index_scans is that we \"round up\"\nwhen there has been non-zero work in lazy_vacuum_all_indexes(), but\nnot if the precheck in lazy_vacuum_all_indexes() fails. That seems\nlike a good model to generalize from here. Note that this makes\nINDEX_CLEANUP=off affect num_index_scans in much the same way as a\nVACUUM where the failsafe kicks in very early, during the initial heap\npass. That is, if the failsafe kicks in before we reach lazy_vacuum()\nfor the first time (which is not unlikely), or even in the\nlazy_vacuum_all_indexes() precheck, then num_index_scans will remain\nat 0, just like INDEX_CLEANUP=off.\n\nThe actual failsafe WARNING shows num_index_scans, possibly before it\ngets incremented one last time (by \"rounding up\"). 
So it's reasonably\nclear how this all works from that context (assuming that the\nautovacuum logging stuff, which reports num_index_scans, outputs a\nreport for a table where the failsafe kicked in).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 23 Feb 2022 10:41:36 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Wed, Feb 23, 2022 at 10:41:36AM -0800, Peter Geoghegan wrote:\n> On Wed, Feb 23, 2022 at 10:02 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>> If the failsafe kicks in midway through a vacuum, the number indexes_total will not be reset to 0. If INDEX_CLEANUP is turned off, then the value will be 0 at the start of the vacuum.\n> \n> The way that this works with num_index_scans is that we \"round up\"\n> when there has been non-zero work in lazy_vacuum_all_indexes(), but\n> not if the precheck in lazy_vacuum_all_indexes() fails. That seems\n> like a good model to generalize from here. Note that this makes\n> INDEX_CLEANUP=off affect num_index_scans in much the same way as a\n> VACUUM where the failsafe kicks in very early, during the initial heap\n> pass. That is, if the failsafe kicks in before we reach lazy_vacuum()\n> for the first time (which is not unlikely), or even in the\n> lazy_vacuum_all_indexes() precheck, then num_index_scans will remain\n> at 0, just like INDEX_CLEANUP=off.\n> \n> The actual failsafe WARNING shows num_index_scans, possibly before it\n> gets incremented one last time (by \"rounding up\"). So it's reasonably\n> clear how this all works from that context (assuming that the\n> autovacuum logging stuff, which reports num_index_scans, outputs a\n> report for a table where the failsafe kicked in).\n\nI am confused. 
If failsafe kicks in during the middle of a vacuum, I\n(perhaps naively) would expect indexes_total and indexes_processed to go to\nzero, and I'd expect to no longer see the \"vacuuming indexes\" and \"cleaning\nup indexes\" phases. Otherwise, how would I know that we are now skipping\nindexes? Of course, you won't have any historical context about the index\nwork done before failsafe kicked in, but IMO it is misleading to still\ninclude it in the progress view.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 25 Feb 2022 11:52:53 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": " > On Wed, Feb 23, 2022 at 10:02 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\r\n >> If the failsafe kicks in midway through a vacuum, the number indexes_total will not be reset to 0. If INDEX_CLEANUP is turned off, then the value will be 0 at the start of the vacuum.\r\n >\r\n > The way that this works with num_index_scans is that we \"round up\"\r\n > when there has been non-zero work in lazy_vacuum_all_indexes(), but\r\n > not if the precheck in lazy_vacuum_all_indexes() fails. That seems\r\n > like a good model to generalize from here. Note that this makes\r\n > INDEX_CLEANUP=off affect num_index_scans in much the same way as a\r\n > VACUUM where the failsafe kicks in very early, during the initial heap\r\n > pass. That is, if the failsafe kicks in before we reach lazy_vacuum()\r\n > for the first time (which is not unlikely), or even in the\r\n > lazy_vacuum_all_indexes() precheck, then num_index_scans will remain\r\n > at 0, just like INDEX_CLEANUP=off.\r\n >\r\n > The actual failsafe WARNING shows num_index_scans, possibly before it\r\n > gets incremented one last time (by \"rounding up\"). 
So it's reasonably\r\n > clear how this all works from that context (assuming that the\r\n > autovacuum logging stuff, which reports num_index_scans, outputs a\r\n > report for a table where the failsafe kicked in).\r\n\r\n> I am confused. If failsafe kicks in during the middle of a vacuum, I\r\n> (perhaps naively) would expect indexes_total and indexes_processed to go to\r\n> zero, and I'd expect to no longer see the \"vacuuming indexes\" and \"cleaning\r\n> up indexes\" phases. Otherwise, how would I know that we are now skipping\r\n> indexes? Of course, you won't have any historical context about the index\r\n> work done before failsafe kicked in, but IMO it is misleading to still\r\n> include it in the progress view.\r\n\r\nFailsafe occurring in the middle of a vacuum and resetting \"indexes_total\" to 0 will be misleading. I am thinking that it is a better idea to expose only one column \"indexes_remaining\".\r\n\r\nIf index_cleanup is set to OFF, the values of indexes_remaining will be 0 at the start of the vacuum.\r\nIf failsafe kicks in during a vacuum in-progress, \"indexes_remaining\" will be calculated to 0.\r\n\r\nThis approach will provide a progress based on how many indexes remaining with no ambiguity.\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services\r\n\r\n", "msg_date": "Sun, 27 Feb 2022 17:16:53 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> >> If the failsafe kicks in midway through a vacuum, the number indexes_total will not be reset to 0. If INDEX_CLEANUP is turned off, then the value will be 0 at the start of the vacuum.\r\n> >\r\n> > The way that this works with num_index_scans is that we \"round up\"\r\n> > when there has been non-zero work in lazy_vacuum_all_indexes(), but\r\n> > not if the precheck in lazy_vacuum_all_indexes() fails. That seems\r\n> > like a good model to generalize from here. 
Note that this makes\r\n> > INDEX_CLEANUP=off affect num_index_scans in much the same way as a\r\n> > VACUUM where the failsafe kicks in very early, during the initial heap\r\n> > pass. That is, if the failsafe kicks in before we reach lazy_vacuum()\r\n> > for the first time (which is not unlikely), or even in the\r\n> > lazy_vacuum_all_indexes() precheck, then num_index_scans will remain\r\n> > at 0, just like INDEX_CLEANUP=off.\r\n> >\r\n> > The actual failsafe WARNING shows num_index_scans, possibly before it\r\n> > gets incremented one last time (by \"rounding up\"). So it's reasonably\r\n> > clear how this all works from that context (assuming that the\r\n> > autovacuum logging stuff, which reports num_index_scans, outputs a\r\n> > report for a table where the failsafe kicked in).\r\n\r\n> I am confused. If failsafe kicks in during the middle of a vacuum, I\r\n> (perhaps naively) would expect indexes_total and indexes_processed to go to\r\n> zero, and I'd expect to no longer see the \"vacuuming indexes\" and \"cleaning\r\n> up indexes\" phases. Otherwise, how would I know that we are now skipping\r\n> indexes? Of course, you won't have any historical context about the index\r\n> work done before failsafe kicked in, but IMO it is misleading to still\r\n> include it in the progress view.\r\n\r\nAfter speaking with Nathan offline, the best way forward is to reset indexes_total and indexes_processed to 0 after the \"vacuuming indexes\" or \"cleaning up indexes\" phase completes. 
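To make the intended counter semantics concrete, here is a minimal, self-contained sketch (illustrative names only, not the patch's actual code) of how the two columns are meant to behave across phases and when the failsafe triggers:

```c
#include <stdint.h>

/* Illustrative stand-ins for the view's two new counters. */
typedef struct VacProgress
{
	int64_t		indexes_total;
	int64_t		indexes_processed;
} VacProgress;

/* Entering "vacuuming indexes"/"cleaning up indexes": advertise the total. */
static void
begin_index_phase(VacProgress *p, int64_t nindexes)
{
	p->indexes_total = nindexes;
	p->indexes_processed = 0;
}

/* Leaving the index phase (e.g. back to "scanning heap"): zero both. */
static void
end_index_phase(VacProgress *p)
{
	p->indexes_total = 0;
	p->indexes_processed = 0;
}

/* Failsafe kicked in: remaining index work is skipped, so zero both. */
static void
failsafe_triggered(VacProgress *p)
{
	end_index_phase(p);
}
```

With these rules a monitoring query sees nonzero totals only while index work is actually possible, which is the behavior shown in the test output later in the thread.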
\r\nAlso, if failsafe is triggered midway through a vacuum, the values for both indexes_total and indexes_processed is (re)set to 0.\r\n\r\nRevision of the patch is attached.\r\n\r\nBelow is a test that shows the output.\r\n\r\n-[ RECORD 1 ]------+------------------\r\npid | 4360\r\ndatid | 5\r\ndatname | postgres\r\nrelid | 16399\r\nphase | vacuuming indexes\r\nheap_blks_total | 401092\r\nheap_blks_scanned | 211798\r\nheap_blks_vacuumed | 158847\r\nindex_vacuum_count | 3\r\nmax_dead_tuples | 1747625\r\nnum_dead_tuples | 1747366\r\nindexes_total | 8\t\t\t\t<<<<--- index_vacuum_count is 3, indexes_total is 8 and indexes_processed so far is 1\r\nindexes_processed | 1\r\n\r\n\r\n-[ RECORD 1 ]------+--------------\r\npid | 4360\r\ndatid | 5\r\ndatname | postgres\r\nrelid | 16399\r\nphase | scanning heap\r\nheap_blks_total | 401092\r\nheap_blks_scanned | 234590\r\nheap_blks_vacuumed | 211797\r\nindex_vacuum_count | 4\r\nmax_dead_tuples | 1747625\r\nnum_dead_tuples | 752136\r\nindexes_total | 0\t\t\t\t<<<<--- index_vacuum_count is 4 and not in an index phase. 
indexes_total is 0 and indexes_processed so far is 0\r\nindexes_processed | 0\r\n\r\n\r\n-[ RECORD 1 ]------+------------------\r\npid | 4360\r\ndatid | 5\r\ndatname | postgres\r\nrelid | 16399\r\nphase | vacuuming indexes\r\nheap_blks_total | 401092\r\nheap_blks_scanned | 264748\r\nheap_blks_vacuumed | 211797\r\nindex_vacuum_count | 4\r\nmax_dead_tuples | 1747625\r\nnum_dead_tuples | 1747350\r\nindexes_total | 8\r\nindexes_processed | 6\t\t <<<<--- index_vacuum_count is 4, indexes_total is 8 and indexes_processed so far is 6\r\n\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services", "msg_date": "Thu, 3 Mar 2022 05:08:43 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Thu, Mar 3, 2022 at 2:08 PM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> > >> If the failsafe kicks in midway through a vacuum, the number indexes_total will not be reset to 0. If INDEX_CLEANUP is turned off, then the value will be 0 at the start of the vacuum.\n> > >\n> > > The way that this works with num_index_scans is that we \"round up\"\n> > > when there has been non-zero work in lazy_vacuum_all_indexes(), but\n> > > not if the precheck in lazy_vacuum_all_indexes() fails. That seems\n> > > like a good model to generalize from here. Note that this makes\n> > > INDEX_CLEANUP=off affect num_index_scans in much the same way as a\n> > > VACUUM where the failsafe kicks in very early, during the initial heap\n> > > pass. That is, if the failsafe kicks in before we reach lazy_vacuum()\n> > > for the first time (which is not unlikely), or even in the\n> > > lazy_vacuum_all_indexes() precheck, then num_index_scans will remain\n> > > at 0, just like INDEX_CLEANUP=off.\n> > >\n> > > The actual failsafe WARNING shows num_index_scans, possibly before it\n> > > gets incremented one last time (by \"rounding up\"). 
So it's reasonably\n> > > clear how this all works from that context (assuming that the\n> > > autovacuum logging stuff, which reports num_index_scans, outputs a\n> > > report for a table where the failsafe kicked in).\n>\n> > I am confused. If failsafe kicks in during the middle of a vacuum, I\n> > (perhaps naively) would expect indexes_total and indexes_processed to go to\n> > zero, and I'd expect to no longer see the \"vacuuming indexes\" and \"cleaning\n> > up indexes\" phases. Otherwise, how would I know that we are now skipping\n> > indexes? Of course, you won't have any historical context about the index\n> > work done before failsafe kicked in, but IMO it is misleading to still\n> > include it in the progress view.\n>\n> After speaking with Nathan offline, the best forward is to reset indexes_total and indexes_processed to 0 after the start of \"vacuuming indexes\" or \"cleaning up indexes\" phase.\n\n+1\n\n+/*\n+ * vacuum_worker_init --- initialize this module's shared memory hash\n+ * to track the progress of a vacuum worker\n+ */\n+void\n+vacuum_worker_init(void)\n+{\n+ HASHCTL info;\n+ long max_table_size = GetMaxBackends();\n+\n+ VacuumWorkerProgressHash = NULL;\n+\n+ info.keysize = sizeof(pid_t);\n+ info.entrysize = sizeof(VacProgressEntry);\n+\n+ VacuumWorkerProgressHash = ShmemInitHash(\"Vacuum Progress Hash\",\n+\n max_table_size,\n+\n max_table_size,\n+\n &info,\n+\n HASH_ELEM | HASH_BLOBS);\n+}\n\nIt seems to me that creating a shmem hash with max_table_size entries\nfor parallel vacuum process tracking is too much. IIRC an old patch\nhad parallel vacuum workers advertise its progress and changed the\npg_stat_progress_vacuum view so that it aggregates the results\nincluding workers' stats. 
I think it’s better than the current one.\nWhy did you change that?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 8 Mar 2022 15:04:22 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "+ +/*\r\n+ + * vacuum_worker_init --- initialize this module's shared memory hash\r\n+ + * to track the progress of a vacuum worker\r\n+ + */\r\n+ +void\r\n+ +vacuum_worker_init(void)\r\n+ +{\r\n+ + HASHCTL info;\r\n+ + long max_table_size = GetMaxBackends();\r\n+ +\r\n+ + VacuumWorkerProgressHash = NULL;\r\n+ +\r\n+ + info.keysize = sizeof(pid_t);\r\n+ + info.entrysize = sizeof(VacProgressEntry);\r\n+ +\r\n+ + VacuumWorkerProgressHash = ShmemInitHash(\"Vacuum Progress Hash\",\r\n+ +\r\n+ max_table_size,\r\n+ +\r\n+ max_table_size,\r\n+ +\r\n+ &info,\r\n+ +\r\n+ HASH_ELEM | HASH_BLOBS);\r\n+ +}\r\n\r\n+ It seems to me that creating a shmem hash with max_table_size entries\r\n+ for parallel vacuum process tracking is too much. IIRC an old patch\r\n+ had parallel vacuum workers advertise its progress and changed the\r\n+ pg_stat_progress_vacuum view so that it aggregates the results\r\n+ including workers' stats. I think it’s better than the current one.\r\n+ Why did you change that?\r\n\r\n+ Regards,\r\n\r\nI was trying to avoid a shared memory to track completed indexes, but aggregating stats does not work with parallel vacuums. This is because a parallel worker will exit before the vacuum completes causing the aggregated total to be wrong. \r\n\r\nFor example\r\n\r\nLeader_pid advertises it completed 2 indexes\r\nParallel worker advertises it completed 2 indexes\r\n\r\nWhen aggregating we see 4 indexes completed.\r\n\r\nAfter the parallel worker exits, the aggregation will show only 2 indexes completed. 
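To make the failure mode concrete, here is a self-contained sketch (illustrative names, not the patch's code) contrasting the two designs: summing each backend's own advertised counter collapses as soon as a worker's slot is cleared at exit, while a counter kept in shared state on behalf of the leader survives worker exit.

```c
#define MAX_SLOTS 4

/* Per-backend advertised counter; a slot reads as 0 after its backend exits. */
static int	slot_done[MAX_SLOTS];

/* Aggregation approach: sum whatever the live backends currently advertise. */
static int
aggregate_done(void)
{
	int			total = 0;

	for (int i = 0; i < MAX_SLOTS; i++)
		total += slot_done[i];
	return total;
}

/* Shared-state approach: one counter owned by the leader's entry; workers
 * bump it before exiting, so completed work is never lost. */
static int	shared_done;

static void
worker_finished_index(void)
{
	shared_done++;				/* in the real patch this is done under a lock */
}
```

With the aggregation approach the reported total appears to go backwards when a worker exits mid-vacuum, which is exactly the 4-then-2 behavior described above.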
\r\n\r\n --\r\n Sami Imseih\r\nAmazon Web Services\r\n\r\n", "msg_date": "Tue, 8 Mar 2022 15:41:47 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Wed, Mar 9, 2022 at 12:41 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> + +/*\n> + + * vacuum_worker_init --- initialize this module's shared memory hash\n> + + * to track the progress of a vacuum worker\n> + + */\n> + +void\n> + +vacuum_worker_init(void)\n> + +{\n> + + HASHCTL info;\n> + + long max_table_size = GetMaxBackends();\n> + +\n> + + VacuumWorkerProgressHash = NULL;\n> + +\n> + + info.keysize = sizeof(pid_t);\n> + + info.entrysize = sizeof(VacProgressEntry);\n> + +\n> + + VacuumWorkerProgressHash = ShmemInitHash(\"Vacuum Progress Hash\",\n> + +\n> + max_table_size,\n> + +\n> + max_table_size,\n> + +\n> + &info,\n> + +\n> + HASH_ELEM | HASH_BLOBS);\n> + +}\n>\n> + It seems to me that creating a shmem hash with max_table_size entries\n> + for parallel vacuum process tracking is too much. IIRC an old patch\n> + had parallel vacuum workers advertise its progress and changed the\n> + pg_stat_progress_vacuum view so that it aggregates the results\n> + including workers' stats. I think it’s better than the current one.\n> + Why did you change that?\n>\n> + Regards,\n>\n> I was trying to avoid a shared memory to track completed indexes, but aggregating stats does not work with parallel vacuums. 
This is because a parallel worker will exit before the vacuum completes causing the aggregated total to be wrong.\n>\n> For example\n>\n> Leader_pid advertises it completed 2 indexes\n> Parallel worker advertises it completed 2 indexes\n>\n> When aggregating we see 4 indexes completed.\n>\n> After the parallel worker exits, the aggregation will show only 2 indexes completed.\n\nIndeed.\n\nIt might have already been discussed but other than using a new shmem\nhash for parallel vacuum, I wonder if we can allow workers to change\nthe leader’s progress information. It would break the assumption that\nthe backend status entry is modified by its own backend, though. But\nit might help for progress updates of other parallel operations too.\nThis essentially does the same thing as what the current patch does\nbut it doesn't require a new shmem hash.\n\nAnother idea I come up with is that the parallel vacuum leader checks\nPVIndStats.status and updates how many indexes are processed to its\nprogress information. The leader can check it and update the progress\ninformation before and after index vacuuming. And possibly we can add\na callback to the main loop of index AM's bulkdelete and vacuumcleanup\nso that the leader can periodically make it up-to-date.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 9 Mar 2022 10:53:44 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> Indeed.\r\n\r\n> It might have already been discussed but other than using a new shmem\r\n> hash for parallel vacuum, I wonder if we can allow workers to change\r\n> the leader’s progress information. It would break the assumption that\r\n> the backend status entry is modified by its own backend, though. 
But\r\n> it might help for progress updates of other parallel operations too.\r\n> This essentially does the same thing as what the current patch does\r\n> but it doesn't require a new shmem hash.\r\n\r\nI experimented with this idea, but it did not work. The idea would have been to create a pgstat_progress_update function that takes the leader pid; however, the infrastructure does not exist to allow one backend to manipulate another backend's backend status array.\r\npgstat_fetch_stat_beentry returns a local copy only. \r\n\r\n> Another idea I come up with is that the parallel vacuum leader checks\r\n> PVIndStats.status and updates how many indexes are processed to its\r\n> progress information. The leader can check it and update the progress\r\n> information before and after index vacuuming. And possibly we can add\r\n> a callback to the main loop of index AM's bulkdelete and vacuumcleanup\r\n> so that the leader can periodically make it up-to-date.\r\n\r\n> Regards,\r\n\r\nThe PVIndStats idea is also one I experimented with, but it did not work. The reason is that the backend checking the progress needs to do a shm_toc_lookup to access the data, but it is not prepared to do so. \r\n\r\nI have not considered the callback in the index AM's bulkdelete and vacuumcleanup, but I can imagine this is not possible since a leader could be busy vacuuming rather than updating counters, but I may be misunderstanding the suggestion.\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services\r\n\r\n", "msg_date": "Wed, 9 Mar 2022 02:35:31 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Attached is the latest revision of the patch(s). 
Renamed the patches correctly for Cfbot.\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services", "msg_date": "Wed, 9 Mar 2022 02:57:02 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Wed, Mar 9, 2022 at 11:35 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> > Indeed.\n>\n> > It might have already been discussed but other than using a new shmem\n> > hash for parallel vacuum, I wonder if we can allow workers to change\n> > the leader’s progress information. It would break the assumption that\n> > the backend status entry is modified by its own backend, though. But\n> > it might help for progress updates of other parallel operations too.\n> > This essentially does the same thing as what the current patch does\n> > but it doesn't require a new shmem hash.\n>\n> I experimented with this idea, but it did not work. The idea would have been to create a pgstat_progress_update function that takes the leader pid, however infrastructure does not exist to allow one backend to manipulate another backends backend status array.\n> pgstat_fetch_stat_beentry returns a local copy only.\n\nI think if it's a better approach we can do that including adding a\nnew infrastructure for it.\n\n>\n> > Another idea I come up with is that the parallel vacuum leader checks\n> > PVIndStats.status and updates how many indexes are processed to its\n> > progress information. The leader can check it and update the progress\n> > information before and after index vacuuming. And possibly we can add\n> > a callback to the main loop of index AM's bulkdelete and vacuumcleanup\n> > so that the leader can periodically make it up-to-date.\n>\n> > Regards,\n>\n> The PVIndStats idea is also one I experimented with but it did not work. 
The reason being the backend checking the progress needs to do a shm_toc_lookup to access the data, but they are not prepared to do so.\n\nWhat I imagined is that the leader checks how many PVIndStats.status\nis PARALLEL_INDVAC_STATUS_COMPLETED and updates the result to its\nprogress information as indexes_processed. That way, the backend\nchecking the progress can see it.\n\n>\n> I have not considered the callback in the index AM's bulkdelete and vacuumcleanup, but I can imagine this is not possible since a leader could be busy vacuuming rather than updating counters, but I may be misunderstanding the suggestion.\n\nChecking PVIndStats.status values is cheap. Probably the leader can\ncheck it every 1GB index block, for example.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 9 Mar 2022 12:15:26 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> I think if it's a better approach we can do that including adding a\r\n> new infrastructure for it.\r\n\r\n+1 This is a beneficial idea, especially to other progress reporting, but I see this as a separate thread targeting the next major version.\r\n\r\n\r\n\r\n", "msg_date": "Wed, 9 Mar 2022 22:52:26 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "I took a look at the latest patch set.\n\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>indexes_total</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ The number of indexes to be processed in the \n+ <literal>vacuuming indexes</literal> \n+ or <literal>cleaning up indexes</literal> phase. 
It is set to\n+ <literal>0</literal> when vacuum is not in any of these phases.\n+ </para></entry>\n\nCould we avoid resetting it to 0 unless INDEX_CLEANUP was turned off or\nfailsafe kicked in? It might be nice to know how many indexes the vacuum\nintends to process. I don't feel too strongly about this, so if it would\nadd a lot of complexity, it might be okay as is.\n\n \tBTreeShmemInit();\n \tSyncScanShmemInit();\n \tAsyncShmemInit();\n+\tvacuum_worker_init();\n\nDon't we also need to add the size of the hash table to\nCalculateShmemSize()?\n\n+ * A command type can optionally define a callback function\n+ * which will derive Datum values rather than use values\n+ * directly from the backends progress array.\n\nI think this can be removed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 9 Mar 2022 16:52:37 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> I took a look at the latest patch set.\r\n\r\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> + <structfield>indexes_total</structfield> <type>bigint</type>\r\n> + </para>\r\n> + <para>\r\n> + The number of indexes to be processed in the\r\n> + <literal>vacuuming indexes</literal>\r\n> + or <literal>cleaning up indexes</literal> phase. It is set to\r\n> + <literal>0</literal> when vacuum is not in any of these phases.\r\n> + </para></entry>\r\n\r\n> Could we avoid resetting it to 0 unless INDEX_CLEANUP was turned off or\r\n> failsafe kicked in? It might be nice to know how many indexes the vacuum\r\n> intends to process. I don't feel too strongly about this, so if it would\r\n> add a lot of complexity, it might be okay as is.\r\n\r\nYour suggestion is valid. On INDEX_CLEANUP it is set to 0 from the start and when failsafe kicks in it will be reset to 0. 
I will remove the reset call for the common index vacuum path. \r\n\r\n > BTreeShmemInit();\r\n > SyncScanShmemInit();\r\n > AsyncShmemInit();\r\n > + vacuum_worker_init();\r\n\r\n > Don't we also need to add the size of the hash table to\r\n > CalculateShmemSize()?\r\n\r\nNo, ShmemInitHash takes the min and max size of the hash and in turn calls ShmemInitStruct to set up the shared memory.\r\n\r\n> + * A command type can optionally define a callback function\r\n> + * which will derive Datum values rather than use values\r\n> + * directly from the backends progress array.\r\n\r\n> I think this can be removed.\r\n\r\nGood catch.\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services\r\n\r\n", "msg_date": "Thu, 10 Mar 2022 01:22:37 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> > BTreeShmemInit();\r\n> > SyncScanShmemInit();\r\n> > AsyncShmemInit();\r\n> > + vacuum_worker_init();\r\n\r\n> > Don't we also need to add the size of the hash table to\r\n> > CalculateShmemSize()?\r\n\r\n> No, ShmemInitHash takes the min and max size of the hash and in turn calls ShmemInitStruct to set up the shared memory.\r\n\r\nSorry, I am wrong here. The size needs to be accounted for at startup. 
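The later review point about sharing the entry count between the size-estimation and initialization paths can be sketched with a self-contained example (simplified stand-ins for GetMaxBackends() and hash_estimate_size(); not the patch's actual code): a single helper is the only source of truth for the entry count, so the two paths cannot drift apart.

```c
#include <stddef.h>

/* Illustrative stand-in for GetMaxBackends(). */
static long
get_max_backends(void)
{
	return 100;
}

/* Single source of truth for the hash table's entry count, shared by the
 * size-estimation and initialization code so they can never disagree. */
static long
vacuum_progress_hash_entries(void)
{
	return get_max_backends();
}

typedef struct VacProgressEntry
{
	int			leader_pid;
	int			indexes_processed;
} VacProgressEntry;

/* Simplified stand-in for hash_estimate_size(). */
static size_t
hash_estimate_size(long nentries, size_t entrysize)
{
	return (size_t) nentries * entrysize;
}

/* Both the shmem-size function and the init function would call
 * vacuum_progress_hash_entries() rather than repeating the computation. */
static size_t
vacuum_worker_progress_shmem_size(void)
{
	return hash_estimate_size(vacuum_progress_hash_entries(),
							  sizeof(VacProgressEntry));
}
```

The same helper would then be passed to ShmemInitHash at initialization time, mirroring how the v4 patch accounts for the size at startup.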
\r\n\r\n --\r\n Sami Imseih\r\n Amazon Web Services\r\n\r\n\r\n", "msg_date": "Thu, 10 Mar 2022 01:37:58 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Attached v4 which includes accounting for the hash size on startup, removal of the no longer needed comment in pgstatfuncs.c and a change in both code/docs to only reset the indexes_total to 0 when failsafe is triggered.\r\n--\r\nSami Imseih\r\nAmazon Web Services", "msg_date": "Thu, 10 Mar 2022 21:30:57 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Thu, Mar 10, 2022 at 09:30:57PM +0000, Imseih (AWS), Sami wrote:\n> Attached v4 which includes accounting for the hash size on startup, removal of the no longer needed comment in pgstatfuncs.c and a change in both code/docs to only reset the indexes_total to 0 when failsafe is triggered.\n\nThanks for the new patch set.\n\n+/*\n+ * Structs for tracking shared Progress information\n+ * amongst worker ( and leader ) processes of a vacuum.\n+ */\n\nnitpick: Can we remove the extra spaces in the parentheses?\n\n+ if (entry != NULL)\n+ values[PGSTAT_NUM_PROGRESS_COMMON + PROGRESS_VACUUM_INDEXES_COMPLETED] = entry->indexes_processed;\n\nWhat does it mean if there isn't an entry in the map? 
Is this actually\nexpected, or should we ERROR instead?\n\n+ /* vacuum worker progress hash table */\n+ max_table_size = GetMaxBackends();\n+ size = add_size(size, hash_estimate_size(max_table_size,\n+ sizeof(VacProgressEntry)));\n\nI think the number of entries should be shared between\nVacuumWorkerProgressShmemInit() and VacuumWorkerProgressShmemSize().\nOtherwise, we might update one and not the other.\n\n+ /* Call the command specific function to override datum values */\n+ if (pg_strcasecmp(cmd, \"VACUUM\") == 0)\n+ set_vaccum_worker_progress(values);\n\nI think we should elaborate a bit more in this comment. It's difficult to\nfollow what this is doing without referencing the comment above\nset_vacuum_worker_progress().\n\nIMO the patches are in decent shape, and this should likely be marked as\nready-for-committer in the near future. Before doing so, I think we should\ncheck that Sawada-san is okay with moving the deeper infrastructure changes\nto a separate thread.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 10 Mar 2022 14:36:55 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> nitpick: Can we remove the extra spaces in the parentheses?\r\n\r\nfixed\r\n\r\n> What does it mean if there isn't an entry in the map? Is this actually\r\n> expected, or should we ERROR instead?\r\n\r\nI cleaned up the code here and added comments. \r\n\r\n> I think the number of entries should be shared between\r\n> VacuumWorkerProgressShmemInit() and VacuumWorkerProgressShmemSize().\r\n> Otherwise, we might update one and not the other.\r\n\r\nFixed\r\n\r\n> I think we should elaborate a bit more in this comment. 
It's difficult to\r\n> follow what this is doing without referencing the comment above\r\n> set_vacuum_worker_progress().\r\n\r\nMore comments added\r\n\r\nI also simplified the 0002 patch as well.\r\n\r\n-- \r\nSami Imseih\r\nAmazon Web Services", "msg_date": "Sat, 12 Mar 2022 07:00:06 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Sat, Mar 12, 2022 at 07:00:06AM +0000, Imseih (AWS), Sami wrote:\n>> nitpick: Can we remove the extra spaces in the parentheses?\n> \n> fixed\n> \n>> What does it mean if there isn't an entry in the map? Is this actually\n>> expected, or should we ERROR instead?\n> \n> I cleaned up the code here and added comments. \n> \n>> I think the number of entries should be shared between\n>> VacuumWorkerProgressShmemInit() and VacuumWorkerProgressShmemSize().\n>> Otherwise, we might update one and not the other.\n> \n> Fixed\n> \n>> I think we should elaborate a bit more in this comment. It's difficult to\n>> follow what this is doing without referencing the comment above\n>> set_vacuum_worker_progress().\n> \n> More comments added\n> \n> I also simplified the 0002 patch as well.\n\nThese patches look pretty good to me. Barring additional feedback, I\nintend to mark this as ready-for-committer early next week.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 12 Mar 2022 13:20:04 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Sat, Mar 12, 2022 at 4:00 PM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> > nitpick: Can we remove the extra spaces in the parentheses?\n>\n> fixed\n>\n> > What does it mean if there isn't an entry in the map? 
Is this actually
> > expected, or should we ERROR instead?
>
> I cleaned up the code here and added comments.
>
> > I think the number of entries should be shared between
> > VacuumWorkerProgressShmemInit() and VacuumWorkerProgressShmemSize().
> > Otherwise, we might update one and not the other.
>
> Fixed
>
> > I think we should elaborate a bit more in this comment. It's difficult to
> > follow what this is doing without referencing the comment above
> > set_vacuum_worker_progress().
>
> More comments added
>
> I also simplified the 0002 patch as well.

I'm still unsure the current design of the 0001 patch is better than other
approaches we’ve discussed. Even users who don't use parallel vacuum
are forced to allocate shared memory for index vacuum progress, with
GetMaxBackends() entries from the beginning. Also, it’s likely to
extend the progress tracking feature for other parallel operations in
the future but I think the current design is not extensible. If we
want to do that, we will end up creating similar things for each of
them or re-creating the index vacuum progress tracking feature while
creating a common infra. It might not be a problem as of now but I'm
concerned that introducing a feature that is not extensible and forces
users to allocate additional shmem might be a blocker in the future.
Looking at the precedent example, when we introduced the progress
tracking feature, we implemented it in an extensible way. 
On the other
hand, others in this thread seem to agree with this approach, so I'd
like to leave it to committers.

Anyway, here are some comments on v5-0001 patch:

+/* in commands/vacuumparallel.c */
+extern void VacuumWorkerProgressShmemInit(void);
+extern Size VacuumWorkerProgressShmemSize(void);
+extern void vacuum_worker_end(int leader_pid);
+extern void vacuum_worker_update(int leader_pid);
+extern void vacuum_worker_end_callback(int code, Datum arg);
+extern void set_vaccum_worker_progress(Datum *values);

These functions' bodies are not in vacuumparallel.c. As the comment says,
I think these functions should be implemented in vacuumparallel.c.

---
+/*
+ * set_vaccum_worker_progress --- updates the number of indexes that have been
+ * vacuumed or cleaned up in a parallel vacuum.
+ */
+void
+set_vaccum_worker_progress(Datum *values)

s/vaccum/vacuum/

---
+void
+set_vaccum_worker_progress(Datum *values)
+{
+ VacProgressEntry *entry;
+ int leader_pid = values[0];

I think we should use DatumGetInt32().

---
+ entry = (VacProgressEntry *)
hash_search(VacuumWorkerProgressHash, &leader_pid, HASH_ENTER_NULL,
&found);
+
+ if (!entry)
+ elog(ERROR, \"cannot allocate shared memory for vacuum
worker progress\");

Since we raise an error in case of out of memory, I think we can use
HASH_ENTER instead of HASH_ENTER_NULL. Or do we want to emit a
detailed error message here?

---
+ VacuumWorkerProgressHash = NULL;

This line is not necessary.

Regards,

-- 
Masahiko Sawada
EDB: https://www.enterprisedb.com/


", "msg_date": "Mon, 14 Mar 2022 10:44:32 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> I'm still unsure the current design of 0001 patch is better than other\r\n> approaches we’ve discussed. 
Even users who don't use parallel vacuum\r\n> are forced to allocate shared memory for index vacuum progress, with\r\n> GetMaxBackends() entries from the beginning. Also, it’s likely to\r\n> extend the progress tracking feature for other parallel operations in\r\n> the future but I think the current design is not extensible. If we\r\n> want to do that, we will end up creating similar things for each of\r\n> them or re-creating the index vacuum progress tracking feature while\r\n> creating a common infra. It might not be a problem as of now but I'm\r\n> concerned that introducing a feature that is not extensible and forces\r\n> users to allocate additional shmem might be a blocker in the future.\r\n> Looking at the precedent example, when we introduced the progress\r\n> tracking feature, we implemented it in an extensible way. On the other\r\n> hand, others in this thread seem to agree with this approach, so I'd\r\n> like to leave it to committers.\r\n\r\nThanks for the review!\r\n\r\nI think you make strong arguments as to why we need to take a different approach now rather than later. \r\n\r\nFlaws with the current patch set:\r\n\r\n1. GetMaxBackends() is a really heavy-handed overallocation of shared memory serving a very specific purpose.\r\n2. Going with the approach of a vacuum-specific hash breaks the design of progress reporting, which is meant to be extensible.\r\n3. Even if we go with this current approach as an interim solution, it will be a real pain in the future.\r\n\r\nWith that said, v7 introduces the new infrastructure. 0001 includes the new infrastructure and 0002 takes advantage of this.\r\n\r\nThis approach is the following:\r\n\r\n1. Introduces a new API called pgstat_progress_update_param_parallel along with some other support functions. This new infrastructure is in backend_progress.c.\r\n\r\n2. There is still shared memory involved, but the size is capped to \"max_worker_processes\", which is the maximum number of parallel workers that can be doing work at any given time. 
The shared memory hash includes a st_progress_param array just like the Backend Status array.\r\n\r\ntypedef struct ProgressParallelEntry\r\n{\r\n pid_t leader_pid;\r\n int64 st_progress_param[PGSTAT_NUM_PROGRESS_PARAM];\r\n} ProgressParallelEntry;\r\n\r\n3. The progress update function is \"pgstat_progress_update_param_parallel\" and will aggregate totals reported for a specific progress parameter.\r\n\r\nFor example, it can be called like below. In the case below, PROGRESS_VACUUM_INDEXES_COMPLETED is incremented by 1 in the shared memory entry shared by the workers and leader.\r\n\r\ncase PARALLEL_INDVAC_STATUS_NEED_BULKDELETE:\r\n istat_res = vac_bulkdel_one_index(&ivinfo, istat, pvs->dead_items);\r\n pgstat_progress_update_param_parallel(pvs->shared->leader_pid, PROGRESS_VACUUM_INDEXES_COMPLETED, 1); <<-----\r\n break;\r\n\r\n4. pg_stat_get_progress_info will call a function called pgstat_progress_set_parallel which will set the parameter value to the total from the shared memory hash.\r\n\r\nI believe this approach gives proper infrastructure for future use-cases of workers reporting progress -and- does not do the heavy-handed shared memory allocation.\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services", "msg_date": "Mon, 14 Mar 2022 16:20:51 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Attaching a v8 due to naming convention fixes and one slight change in where index_processed is set after all indexes are vacuumed.\r\n\r\ns/indexes_completed/indexes_processed/\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services", "msg_date": "Mon, 14 Mar 2022 22:21:50 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Spoke to Nathan offline and fixed some more comments/nitpicks in the patch.", "msg_date": "Wed, 16 Mar 
2022 21:47:49 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Wed, Mar 16, 2022 at 09:47:49PM +0000, Imseih (AWS), Sami wrote:\n> Spoke to Nathan offline and fixed some more comments/nitpicks in the patch.\n\nI don't have any substantial comments for v9, so I think this can be marked\nas ready-for-committer. However, we probably should first see whether\nSawada-san has any comments on the revised approach.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 16 Mar 2022 16:33:13 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Sorry for the late reply.\n\nOn Tue, Mar 15, 2022 at 1:20 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> > I'm still unsure the current design of 0001 patch is better than other\n> > approaches we’ve discussed. Even users who don't use parallel vacuum\n> > are forced to allocate shared memory for index vacuum progress, with\n> > GetMaxBackends() entries from the beginning. Also, it’s likely to\n> > extend the progress tracking feature for other parallel operations in\n> > the future but I think the current design is not extensible. If we\n> > want to do that, we will end up creating similar things for each of\n> > them or re-creating index vacuum progress tracking feature while\n> > creating a common infra. It might not be a problem as of now but I'm\n> > concerned that introducing a feature that is not extensible and forces\n> > users to allocate additional shmem might be a blocker in the future.\n> > Looking at the precedent example, When we introduce the progress\n> > tracking feature, we implemented it in an extensible way. 
On the other\n> > hand, others in this thread seem to agree with this approach, so I'd\n> > like to leave it to committers.\n>\n> Thanks for the review!\n>\n> I think you make strong arguments as to why we need to take a different approach now than later.\n>\n> Flaws with current patch set:\n>\n> 1. GetMaxBackends() is a really heavy-handed overallocation of a shared memory serving a very specific purpose.\n> 2. Going with the approach of a vacuum specific hash breaks the design of progress which is meant to be extensible.\n> 3. Even if we go with this current approach as an interim solution, it will be a real pain in the future.\n>\n> With that said, v7 introduces the new infrastructure. 0001 includes the new infrastructure and 0002 takes advantage of this.\n>\n> This approach is the following:\n>\n> 1. Introduces a new API called pgstat_progress_update_param_parallel along with some others support functions. This new infrastructure is in backend_progress.c\n>\n> 2. There is still a shared memory involved, but the size is capped to \" max_worker_processes\" which is the max to how many parallel workers can be doing work at any given time. The shared memory hash includes a st_progress_param array just like the Backend Status array.\n\nI think that there is a corner case where a parallel operation could\nnot perform due to the lack of a free shared hash entry, because there\nis a window between a parallel worker exiting and the leader\ndeallocating the hash table entry.\n\nBTW have we discussed another idea I mentioned before that we have the\nleader process periodically check the number of completed indexes and\nadvertise it in its progress information? 
I'm not sure which one is\nbetter but this idea would require only changes of vacuum code and\nprobably simpler than the current idea.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 22 Mar 2022 12:19:27 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Hi,\n\nOn 2022-03-16 21:47:49 +0000, Imseih (AWS), Sami wrote:\n> From 85c47dfb3bb72f764b9052e74a7282c19ebd9898 Mon Sep 17 00:00:00 2001\n> From: \"Sami Imseih (AWS)\" <simseih@amazon.com>\n> Date: Wed, 16 Mar 2022 20:39:52 +0000\n> Subject: [PATCH 1/1] Add infrastructure for parallel progress reporting\n> \n> Infrastructure to allow a parallel worker to report\n> progress. In a PARALLEL command, the workers and\n> leader can report progress using a new pgstat_progress\n> API.\n\nWhat happens if we run out of memory for hashtable entries?\n\n\n> +void\n> +pgstat_progress_update_param_parallel(int leader_pid, int index, int64 val)\n> +{\n> +\tProgressParallelEntry *entry;\n> +\tbool found;\n> +\n> +\tLWLockAcquire(ProgressParallelLock, LW_EXCLUSIVE);\n> +\n> +\tentry = (ProgressParallelEntry *) hash_search(ProgressParallelHash, &leader_pid, HASH_ENTER, &found);\n> +\n> +\t/*\n> +\t * If the entry is not found, set the value for the index'th member,\n> +\t * else increment the current value of the index'th member.\n> +\t */\n> +\tif (!found)\n> +\t\tentry->st_progress_param[index] = val;\n> +\telse\n> +\t\tentry->st_progress_param[index] += val;\n> +\n> +\tLWLockRelease(ProgressParallelLock);\n> +}\n\nI think that's an absolute no-go. 
Adding locking to progress reporting,
particularly a single central lwlock, is going to *vastly* increase the
overhead incurred by progress reporting.

Greetings,

Andres Freund


", "msg_date": "Mon, 21 Mar 2022 20:28:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> BTW have we discussed another idea I mentioned before that we have the\r\n> leader process periodically check the number of completed indexes and\r\n> advertise it in its progress information? I'm not sure which one is\r\n> better but this idea would require only changes of vacuum code and\r\n> probably simpler than the current idea.\r\n\r\n> Regards,\r\n\r\n\r\nIf I understand correctly, to accomplish this we will need to have the leader \r\ncheck the number of indexes completed in the ambulkdelete or amvacuumcleanup \r\ncallbacks. These routines do not know about PVIndStats, and they are called \r\nby both parallel and non-parallel vacuums.\r\n\r\nFrom what I can see, PVIndStats will need to be passed down to these routines \r\nor a NULL passed for non-parallel vacuums.\r\n\r\nSami\r\n\r\n\r\n", "msg_date": "Tue, 22 Mar 2022 07:27:32 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Tue, Mar 22, 2022 at 4:27 PM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> > BTW have we discussed another idea I mentioned before that we have the\n> > leader process periodically check the number of completed indexes and\n> > advertise it in its progress information? 
I'm not sure which one is
> > better but this idea would require only changes of vacuum code and
> > probably simpler than the current idea.
> >
> > Regards,
>
>
> If I understand correctly, to accomplish this we will need to have the leader
> check the number of indexes completed in the ambulkdelete or amvacuumcleanup
> callbacks. These routines do not know about PVIndStats, and they are called
> by both parallel and non-parallel vacuums.
>
> From what I can see, PVIndStats will need to be passed down to these routines
> or a NULL passed for non-parallel vacuums.
>

Can the leader pass a callback that checks PVIndStats to ambulkdelete
and amvacuumcleanup callbacks? I think that in the passed callback, the
leader checks the number of processed indexes and updates its
progress information if the current progress needs to be updated.

Regards,

-- 
Masahiko Sawada
EDB: https://www.enterprisedb.com/


", "msg_date": "Tue, 22 Mar 2022 16:48:53 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> Can the leader pass a callback that checks PVIndStats to ambulkdelete\r\n> and amvacuumcleanup callbacks? I think that in the passed callback, the\r\n> leader checks the number of processed indexes and updates its\r\n> progress information if the current progress needs to be updated.\r\n\r\nThanks for the suggestion.\r\n\r\nI looked at this option a bit today and found that passing the callback \r\nwill also require signature changes to the ambulkdelete and \r\namvacuumcleanup routines. \r\n\r\nThis will also require us to check after x pages have been \r\nscanned inside vacuumscan and vacuumcleanup. After x pages\r\nthe callback can then update the leader's progress.\r\nI am not sure if adding additional complexity to the scan/cleanup path\r\nis justified for what this patch is attempting to do. 
\r\n\r\nThere will also be a lag of the leader updating the progress as it\r\nmust scan x amount of pages before updating. Obviously, the more\r\nPages to the scan, the longer the lag will be.\r\n\r\nWould like to hear your thoughts on the above.\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services.\r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Tue, 22 Mar 2022 21:57:03 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Wed, Mar 23, 2022 at 6:57 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> > Can the leader pass a callback that checks PVIndStats to ambulkdelete\n> > an amvacuumcleanup callbacks? I think that in the passed callback, the\n> > leader checks if the number of processed indexes and updates its\n> > progress information if the current progress needs to be updated.\n>\n> Thanks for the suggestion.\n>\n> I looked at this option a but today and found that passing the callback\n> will also require signature changes to the ambulkdelete and\n> amvacuumcleanup routines.\n\nI think it would not be a critical problem since it's a new feature.\n\n>\n> This will also require us to check after x pages have been\n> scanned inside vacuumscan and vacuumcleanup. After x pages\n> the callback can then update the leaders progress.\n> I am not sure if adding additional complexity to the scan/cleanup path\n> is justified for what this patch is attempting to do.\n>\n> There will also be a lag of the leader updating the progress as it\n> must scan x amount of pages before updating. Obviously, the more\n> Pages to the scan, the longer the lag will be.\n\nFair points.\n\nOn the other hand, the approach of the current patch requires more\nmemory for progress tracking, which could fail, e.g., due to running\nout of hashtable entries. 
I think that it would be worse if the
parallel operation failed to start due to not being able to track the
progress than the concerns you mentioned above, such as introducing
additional complexity and a possible lag of progress updates. So if we
go with the current approach, I think we need to make sure there are enough (and
not too many) hash table entries.

Regards,

-- 
Masahiko Sawada
EDB: https://www.enterprisedb.com/


", "msg_date": "Fri, 25 Mar 2022 23:54:25 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Sorry for the late reply.\r\n\r\n> additional complexity and a possible lag of progress updates. So if we\r\n> go with the current approach, I think we need to make sure there are enough (and\r\n> not too many) hash table entries.\r\n\r\nThe hash table can be set to 4 times the size of \r\nmax_worker_processes, which should give more than\r\nenough padding.\r\nNote that max_parallel_maintenance_workers\r\nis what should be used, but since it's dynamic, it cannot\r\nbe used to determine the size of shared memory.\r\n\r\nRegards,\r\n\r\n---\r\nSami Imseih\r\nAmazon Web Services\r\n\r\n", "msg_date": "Tue, 29 Mar 2022 12:08:45 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> I think that's an absolute no-go. Adding locking to progress reporting,\r\n> particularly a single central lwlock, is going to *vastly* increase the\r\n> overhead incurred by progress reporting.\r\n\r\nSorry for the late reply.\r\n\r\nThe usage of the shared memory will be limited\r\nto PARALLEL maintenance operations. For now,\r\nit will only be populated for parallel vacuums. 
\r\nAutovacuum for example will not be required to \r\npopulate this shared memory.\r\n\r\nRegards,\r\n\r\n---\r\nSami Imseih\r\nAmazon Web Services\r\n\r\n\r\n", "msg_date": "Tue, 29 Mar 2022 12:25:52 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Hi,\n\nOn 2022-03-29 12:25:52 +0000, Imseih (AWS), Sami wrote:\n> > I think that's an absolute no-go. Adding locking to progress reporting,\n> > particularly a single central lwlock, is going to *vastly* increase the\n> > overhead incurred by progress reporting.\n> \n> Sorry for the late reply.\n> \n> The usage of the shared memory will be limited\n> to PARALLEL maintenance operations. For now,\n> it will only be populated for parallel vacuums. \n> Autovacuum for example will not be required to \n> populate this shared memory.\n\nI nevertheless think that's not acceptable. The whole premise of the progress\nreporting infrastructure is to be low overhead. It's OK to require locking to\ninitialize parallel progress reporting, it's definitely not ok to require\nlocking to report progress.\n\nLeaving the locking aside, doing a hashtable lookup for each progress report\nis pretty expensive.\n\n\nWhy isn't the obvious thing to do here to provide a way to associate workers\nwith their leaders in shared memory, but to use the existing progress fields\nto report progress? Then, when querying progress, the leader and workers\nprogress fields can be combined to show the overall progress?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 3 Apr 2022 09:50:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> I nevertheless think that's not acceptable. The whole premise of the progress\r\n> reporting infrastructure is to be low overhead. 
It's OK to require locking to\r\n> initialize parallel progress reporting, it's definitely not ok to require\r\n> locking to report progress.\r\n\r\nFair point.\r\n\r\n> Why isn't the obvious thing to do here to provide a way to associate workers\r\n> with their leaders in shared memory, but to use the existing progress fields\r\n> to report progress? Then, when querying progress, the leader and workers\r\n> progress fields can be combined to show the overall progress?\r\n\r\nThe original intent was this, however the workers \r\ncan exit before the command completes and the \r\nworker progress data will be lost.\r\nThis is why the shared memory was introduced. \r\nThis allows the worker progress to persist for the duration \r\nof the command.\r\n\r\nRegards, \r\n\r\nSami Imseih\r\nAmazon Web Services.\r\n\r\n\r\n\r\n\r\n", "msg_date": "Tue, 5 Apr 2022 16:42:28 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Tue, Apr 5, 2022 at 12:42 PM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n> > Why isn't the obvious thing to do here to provide a way to associate workers\n> > with their leaders in shared memory, but to use the existing progress fields\n> > to report progress? Then, when querying progress, the leader and workers\n> > progress fields can be combined to show the overall progress?\n>\n> The original intent was this, however the workers\n> can exit before the command completes and the\n> worker progress data will be lost.\n> This is why the shared memory was introduced.\n> This allows the worker progress to persist for the duration\n> of the command.\n\nAt the beginning of a parallel operation, we allocate a chunk of\ndynamic shared memory which persists even after some or all workers\nhave exited. 
It's only torn down at the end of the parallel operation.\nThat seems like the appropriate place to be storing any kind of data\nthat needs to be propagated between parallel workers. The current\npatch uses the main shared memory segment, which seems unacceptable to\nme.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Apr 2022 13:19:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On 2022-04-05 16:42:28 +0000, Imseih (AWS), Sami wrote:\n> > Why isn't the obvious thing to do here to provide a way to associate workers\n> > with their leaders in shared memory, but to use the existing progress fields\n> > to report progress? Then, when querying progress, the leader and workers\n> > progress fields can be combined to show the overall progress?\n>\n> The original intent was this, however the workers\n> can exit before the command completes and the\n> worker progress data will be lost.\n\nCan't the progress data trivially be inferred by the fact that the worker\ncompleted?\n\n\n", "msg_date": "Tue, 5 Apr 2022 10:31:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> Can't the progress data trivially be inferred by the fact that the worker\r\n> completed?\r\n\r\nYes, at some point, this idea was experimented with in\r\n0004-Expose-progress-for-the-vacuuming-indexes-cleanup-ph.patch.\r\nThis patch did the calculation in system_views.sql\r\n\r\nHowever, the view is complex and there could be some edge\r\ncases with inferring the values that lead to wrong values being reported.\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services\r\n\r\n", "msg_date": "Tue, 5 Apr 2022 18:45:09 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add 
index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> At the beginning of a parallel operation, we allocate a chunk of\r\n> dynamic shared memory which persists even after some or all workers\r\n> have exited. It's only torn down at the end of the parallel operation.\r\n> That seems like the appropriate place to be storing any kind of data\r\n> that needs to be propagated between parallel workers. The current\r\n> patch uses the main shared memory segment, which seems unacceptable to\r\n> me.\r\n\r\nCorrect, DSM does track shared data. 
However, only participating\r\nprocesses in the parallel vacuum can attach and look up this data.\r\n\r\nThe purpose of the main shared memory is to allow a process that\r\nis querying the progress views to retrieve the information.\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services\r\n\r\n\r\n", "msg_date": "Wed, 6 Apr 2022 21:22:38 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Wed, Apr 6, 2022 at 5:22 PM Imseih (AWS), Sami <simseih@amazon.com> wrote:
> > At the beginning of a parallel operation, we allocate a chunk of
> > dynamic shared memory which persists even after some or all workers
> > have exited. It's only torn down at the end of the parallel operation.
> > That seems like the appropriate place to be storing any kind of data
> > that needs to be propagated between parallel workers. The current
> > patch uses the main shared memory segment, which seems unacceptable to
> > me.
>
> Correct, DSM does track shared data. 
However only participating\n> > processes in the parallel vacuum can attach and lookup this data.\n> >\n> > The purpose of the main shared memory is to allow a process that\n> > Is querying the progress views to retrieve the information.\n>\n> Sure, but I think that you should likely be doing what Andres\n> recommended before:\n>\n> # Why isn't the obvious thing to do here to provide a way to associate workers\n> # with their leaders in shared memory, but to use the existing progress fields\n> # to report progress? Then, when querying progress, the leader and workers\n> # progress fields can be combined to show the overall progress?\n>\n> That is, I am imagining that you would want to use DSM to propagate\n> data from workers back to the leader and then have the leader report\n> the data using the existing progress-reporting facilities. Now, if we\n> really need a whole row from each worker that doesn't work, but why do\n> we need that?\n\n+1\n\nI also proposed the same idea before[1]. The leader can know how many\nindexes are processed so far by checking PVIndStats.status allocated\non DSM for each index. We can have the leader check it and update the\nprogress information before and after vacuuming one index. If we want\nto update the progress information more timely, probably we can pass a\ncallback function to ambulkdelete and amvacuumcleanup so that the\nleader can do that periodically, e.g., every 1000 blocks, while\nvacuuming an index.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoBW6SMJ96CNoMeu%2Bf_BR4jmatPcfVA016FdD2hkLDsaTA%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 8 Apr 2022 00:38:36 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "It looks like this patch got feedback from Andres and Robert with some\nsignificant design change recommendations. 
I'm marking the patch\nReturned with Feedback. Feel free to add it back to a future\ncommitfest when a new version is ready.\n\n\n", "msg_date": "Thu, 7 Apr 2022 19:25:01 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Taking the above feedback, I modified the patches\r\nand I believe this approach should be acceptable.\r\n\r\nFor now, I also reduced the scope to only exposing\r\nIndexes_total and indexes_completed in \r\npg_stat_progress_vacuum. I will create a new CF entry\r\nfor the new view pg_stat_progress_vacuum_index.\r\n\r\nV10-0001: This patch adds a callback to ParallelContext\r\nthat could be called by the leader in vacuumparallel.c\r\nand more importantly while the leader is waiting\r\nfor the parallel workers to complete inside.\r\n\r\nThis ensures that the leader is continuously polling and\r\nreporting completed indexes for the life of the PARALLEL\r\nVACUUM. This covers cases where the leader completes \r\nvacuuming before the workers complete.\r\n\r\nV10-0002: This implements the indexes_total and\r\nindexes_completed columns in pg_stat_progress_vacuum.\r\n\r\nThis work is now tracked in the next commitfest:\r\nhttps://commitfest.postgresql.org/38/3617/\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services", "msg_date": "Thu, 14 Apr 2022 01:32:54 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Thu, Apr 14, 2022 at 10:32 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> Taking the above feedback, I modified the patches\n> and I believe this approach should be acceptable.\n>\n> For now, I also reduced the scope to only exposing\n> Indexes_total and indexes_completed in\n> pg_stat_progress_vacuum. 
I will create a new CF entry\n> for the new view pg_stat_progress_vacuum_index.\n>\n> V10-0001: This patch adds a callback to ParallelContext\n> that could be called by the leader in vacuumparallel.c\n> and more importantly while the leader is waiting\n> for the parallel workers to complete inside.\n\nThank you for updating the patch! The new design looks much better to me.\n\n typedef struct ParallelWorkerInfo\n@@ -46,6 +49,8 @@ typedef struct ParallelContext\n ParallelWorkerInfo *worker;\n int nknown_attached_workers;\n bool *known_attached_workers;\n+ ParallelProgressCallback parallel_progress_callback;\n+ void *parallel_progress_callback_arg;\n } ParallelContext;\n\nI think we can pass the progress update function to\nWaitForParallelWorkersToFinish(), which seems simpler. And we can call\nthe function after updating the index status to\nPARALLEL_INDVAC_STATUS_COMPLETED.\n\nBTW, currently we don't need a lock for touching index status since\neach worker touches different indexes. But after this patch, the\nleader will touch all index status, do we need a lock for that?\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 2 May 2022 12:30:12 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Thank you for the feedback!\r\n\r\n> I think we can pass the progress update function to\r\n> WaitForParallelWorkersToFinish(), which seems simpler. And we can call\r\n\r\nDirectly passing the callback to WaitForParallelWorkersToFinish\r\nwill require us to modify the function signature.\r\n\r\nTo me, it seemed simpler and touches less code to have\r\nthe caller set the callback in the ParallelContext.\r\n\r\n> the function after updating the index status to\r\n> PARALLEL_INDVAC_STATUS_COMPLETED.\r\n\r\nI also like this better. 
Will make the change.\r\n\r\n> BTW, currently we don't need a lock for touching index status since\r\n> each worker touches different indexes. But after this patch, the\r\n> leader will touch all index status, do we need a lock for that?\r\n\r\nI do not think locking is needed here. The leader and workers\r\nwill continue to touch different indexes to update the status.\r\n\r\nHowever, if the process is a leader, it will call the function\r\nwhich will go through indstats and count how many\r\nIndexes have a status of PARALLEL_INDVAC_STATUS_COMPLETED.\r\nThis value is then reported to the leaders backend only.\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services\r\n\r\n", "msg_date": "Thu, 5 May 2022 19:26:51 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> the function after updating the index status to\r\n > PARALLEL_INDVAC_STATUS_COMPLETED.\r\n\r\n> I also like this better. Will make the change.\r\n\r\nI updated the patch. The progress function is called after\r\nupdating index status to PARALLEL_INDVAC_STATUS_COMPLETED.\r\n\r\nI believe all comments have been addressed at this point.\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services", "msg_date": "Thu, 26 May 2022 13:41:09 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Fri, May 6, 2022 at 4:26 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> Thank you for the feedback!\n>\n> > I think we can pass the progress update function to\n> > WaitForParallelWorkersToFinish(), which seems simpler. 
And we can call\n>\n> Directly passing the callback to WaitForParallelWorkersToFinish\n> will require us to modify the function signature.\n>\n> To me, it seemed simpler and touches less code to have\n> the caller set the callback in the ParallelContext.\n\nOkay, but if we do that, I think we should add comments about when\nit's used. The callback is used only when\nWaitForParallelWorkersToFinish(), but not when\nWaitForParallelWorkersToExit().\n\nAnother idea I came up with is that we can wait for all index vacuums\nto finish while checking and updating the progress information, and\nthen calls WaitForParallelWorkersToFinish after confirming all index\nstatus became COMPLETED. That way, we don’t need to change the\nparallel query infrastructure. What do you think?\n\n>\n> > the function after updating the index status to\n> > PARALLEL_INDVAC_STATUS_COMPLETED.\n>\n> I also like this better. Will make the change.\n>\n> > BTW, currently we don't need a lock for touching index status since\n> > each worker touches different indexes. But after this patch, the\n> > leader will touch all index status, do we need a lock for that?\n>\n> I do not think locking is needed here. 
The leader and workers\n> will continue to touch different indexes to update the status.\n>\n> However, if the process is a leader, it will call the function\n> which will go through indstats and count how many\n> Indexes have a status of PARALLEL_INDVAC_STATUS_COMPLETED.\n> This value is then reported to the leaders backend only.\n\nI was concerned that the leader process could report the wrong\nprogress if updating and checking index status happen concurrently.\nBut I think it should be fine since we can read PVIndVacStatus\natomically.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 27 May 2022 00:43:12 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> Another idea I came up with is that we can wait for all index vacuums\r\n> to finish while checking and updating the progress information, and\r\n> then calls WaitForParallelWorkersToFinish after confirming all index\r\n> status became COMPLETED. That way, we don’t need to change the\r\n> parallel query infrastructure. What do you think?\r\n\r\nThinking about this a bit more, the idea of using \r\nWaitForParallelWorkersToFinish\r\nWill not work if you have a leader worker that is\r\nstuck on a large index. The progress will not be updated\r\nuntil the leader completes. Even if the parallel workers\r\nfinish.\r\n\r\nWhat are your thought about piggybacking on the \r\nvacuum_delay_point to update progress. The leader can \r\nperhaps keep a counter to update progress every few thousand\r\ncalls to vacuum_delay_point. 
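That counter idea can be sketched standalone (all names here are hypothetical stand-ins, not the actual patch; a plain counter replaces pgstat_progress_update_param() so the snippet runs outside the server):

```c
#include <assert.h>
#include <stdint.h>

#define REPORT_EVERY 1000          /* hypothetical "every few thousand calls" */

static uint64_t reports_issued = 0;    /* stand-in for the progress update */

/*
 * Would be invoked from each vacuum_delay_point() call; only every
 * REPORT_EVERY-th call actually reports, keeping the per-page path cheap.
 */
static void
maybe_report_progress(uint64_t *ncalls)
{
    if (++(*ncalls) % REPORT_EVERY == 0)
        reports_issued++;
}

/* Drive the hook once per "page"; return how many reports were issued. */
uint64_t
simulate_delay_points(uint64_t npages)
{
    uint64_t ncalls = 0;

    reports_issued = 0;
    while (npages-- > 0)
        maybe_report_progress(&ncalls);
    return reports_issued;
}
```

With those numbers, a scan touching 10,000 pages pushes only 10 progress updates instead of 10,000.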
\r\n\r\nThis goes back to your original idea to keep updating progress\r\nwhile scanning the indexes.\r\n\r\n/*\r\n * vacuum_delay_point --- check for interrupts and cost-based delay.\r\n *\r\n * This should be called in each major loop of VACUUM processing,\r\n * typically once per page processed.\r\n */\r\nvoid\r\nvacuum_delay_point(void)\r\n{\r\n\r\n---\r\nSami Imseih\r\nAmazon Web Services\r\n\r\n", "msg_date": "Fri, 27 May 2022 01:52:10 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Fri, May 27, 2022 at 10:52 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> > Another idea I came up with is that we can wait for all index vacuums\n> > to finish while checking and updating the progress information, and\n> > then calls WaitForParallelWorkersToFinish after confirming all index\n> > status became COMPLETED. That way, we don’t need to change the\n> > parallel query infrastructure. What do you think?\n>\n> Thinking about this a bit more, the idea of using\n> WaitForParallelWorkersToFinish\n> Will not work if you have a leader worker that is\n> stuck on a large index. The progress will not be updated\n> until the leader completes. Even if the parallel workers\n> finish.\n\nRight.\n\n>\n> What are your thought about piggybacking on the\n> vacuum_delay_point to update progress. The leader can\n> perhaps keep a counter to update progress every few thousand\n> calls to vacuum_delay_point.\n>\n> This goes back to your original idea to keep updating progress\n> while scanning the indexes.\n\nI think we can have the leader process wait for all index statuses to\nbecome COMPLETED before WaitForParallelWorkersToFinish(). While\nwaiting for it, the leader can update its progress information. 
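A minimal standalone illustration of the bookkeeping in that wait loop — the enum and names only mimic the patch's PVIndVacStatus, they are not the real definitions:

```c
#include <assert.h>

/* Simplified mimic of the per-index status array kept in shared memory. */
typedef enum
{
    PARALLEL_INDVAC_STATUS_INITIAL,
    PARALLEL_INDVAC_STATUS_NEED_BULKDELETE,
    PARALLEL_INDVAC_STATUS_COMPLETED
} PVIndVacStatus;

/*
 * What one pass of the leader's wait loop would compute: the number of
 * indexes whose status has reached COMPLETED.  The leader would report
 * this value to the progress view and then nap before rescanning.
 */
int
count_completed_indexes(const PVIndVacStatus *status, int nindexes)
{
    int ncompleted = 0;

    for (int i = 0; i < nindexes; i++)
        if (status[i] == PARALLEL_INDVAC_STATUS_COMPLETED)
            ncompleted++;
    return ncompleted;
}
```

Each status is a single word, so the scan can read it without locking — the atomicity point discussed later in this thread.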
After\nthe leader confirmed all index statuses became COMPLETED, it can wait\nfor the workers to finish by WaitForParallelWorkersToFinish().\n\nRegarding waiting in vacuum_delay_point, it might be a side effect as\nit’s called every page and used not only by vacuum such as analyze,\nbut it seems to be worth trying.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 3 Jun 2022 14:39:52 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Thu, May 26, 2022 at 11:43 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Another idea I came up with is that we can wait for all index vacuums\n> to finish while checking and updating the progress information, and\n> then calls WaitForParallelWorkersToFinish after confirming all index\n> status became COMPLETED. That way, we don’t need to change the\n> parallel query infrastructure. What do you think?\n\n+1 from me. It doesn't seem to me that we should need to add something\nlike parallel_vacuum_progress_callback in order to solve this problem,\nbecause the parallel index vacuum code could just do the waiting\nitself, as you propose here.\n\nThe question Sami asks him his reply is a good one, though -- who is\nto say that the leader only needs to update progress at the end, once\nit's finished the index it's handling locally? There will need to be a\ncallback system of some kind to allow the leader to update progress as\nother workers finish, even if the leader is still working. I am not\ntoo sure that the idea of using the vacuum delay points is the best\nplan. I think we should try to avoid piggybacking on such general\ninfrastructure if we can, and instead look for a way to tie this to\nsomething that is specific to parallel vacuum. 
However, I haven't\nstudied the problem so I'm not sure whether there's a reasonable way\nto do that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Jun 2022 10:41:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Mon, Jun 6, 2022 at 11:42 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, May 26, 2022 at 11:43 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Another idea I came up with is that we can wait for all index vacuums\n> > to finish while checking and updating the progress information, and\n> > then calls WaitForParallelWorkersToFinish after confirming all index\n> > status became COMPLETED. That way, we don’t need to change the\n> > parallel query infrastructure. What do you think?\n>\n> +1 from me. It doesn't seem to me that we should need to add something\n> like parallel_vacuum_progress_callback in order to solve this problem,\n> because the parallel index vacuum code could just do the waiting\n> itself, as you propose here.\n>\n> The question Sami asks him his reply is a good one, though -- who is\n> to say that the leader only needs to update progress at the end, once\n> it's finished the index it's handling locally? There will need to be a\n> callback system of some kind to allow the leader to update progress as\n> other workers finish, even if the leader is still working. I am not\n> too sure that the idea of using the vacuum delay points is the best\n> plan. I think we should try to avoid piggybacking on such general\n> infrastructure if we can, and instead look for a way to tie this to\n> something that is specific to parallel vacuum. 
However, I haven't\n> studied the problem so I'm not sure whether there's a reasonable way\n> to do that.\n\nOne idea would be to add a flag, say report_parallel_vacuum_progress,\nto IndexVacuumInfo struct and expect index AM to check and update the\nparallel index vacuum progress, say every 1GB blocks processed. The\nflag is true only when the leader process is vacuuming an index.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 20 Jun 2022 15:35:00 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3617/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n\n\n", "msg_date": "Tue, 2 Aug 2022 11:06:59 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> One idea would be to add a flag, say report_parallel_vacuum_progress,\r\n> to IndexVacuumInfo struct and expect index AM to check and update the\r\n> parallel index vacuum progress, say every 1GB blocks processed. The\r\n> flag is true only when the leader process is vacuuming an index.\r\n\r\n> Regards,\r\n\r\nSorry for the long delay on this. I have taken the approach as suggested\r\nby Sawada-san and Robert and attached is v12.\r\n\r\n1. 
The patch introduces a new counter in the same shared memory already\r\nused by the parallel leader and workers to keep track of the number\r\nof indexes completed. This way there is no reason to loop through\r\nthe index status every time we want to get the status of indexes completed.\r\n\r\n2. A new function in vacuumparallel.c will be used to update\r\nthe progress of indexes completed by reading from the\r\ncounter created in point #1.\r\n\r\n3. The function is called during the vacuum_delay_point as a\r\nmatter of convenience, since this is called in all major vacuum\r\nloops. The function will only do something if the caller\r\nsets a boolean to report progress. Doing so will also ensure\r\nprogress is being reported in case the parallel workers completed\r\nbefore the leader.\r\n\r\n4. Rather than adding any complexity to WaitForParallelWorkersToFinish\r\nand introducing a new callback, vacuumparallel.c will wait until\r\nthe number of vacuum workers is 0 and then call\r\nWaitForParallelWorkersToFinish as it does currently.\r\n\r\n5. Went back to the idea of adding a new view called pg_stat_progress_vacuum_index\r\nwhich is accomplished by adding a new type called VACUUM_PARALLEL in progress.h\r\n\r\nThanks,\r\n\r\nSami Imseih\r\nAmazon Web Servies (AWS)\r\n\r\nFYI: the above message was originally sent yesterday but \r\nwas created under a separate thread. 
Please ignore this\r\nthread[1]\r\n\r\n[1]: https://www.postgresql.org/message-id/flat/4CD97E17-B9E4-421E-9A53-4317C90EFF35%40amazon.com", "msg_date": "Tue, 11 Oct 2022 13:50:51 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Tue, Oct 11, 2022 at 10:50 PM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> > One idea would be to add a flag, say report_parallel_vacuum_progress,\n> > to IndexVacuumInfo struct and expect index AM to check and update the\n> > parallel index vacuum progress, say every 1GB blocks processed. The\n> > flag is true only when the leader process is vacuuming an index.\n>\n> > Regards,\n>\n> Sorry for the long delay on this. I have taken the approach as suggested\n> by Sawada-san and Robert and attached is v12.\n\nThank you for updating the patch!\n\n>\n> 1. The patch introduces a new counter in the same shared memory already\n> used by the parallel leader and workers to keep track of the number\n> of indexes completed. This way there is no reason to loop through\n> the index status every time we want to get the status of indexes completed.\n\nWhile it seems to be a good idea to have the atomic counter for the\nnumber of indexes completed, I think we should not use the global\nvariable referencing the counter as follow:\n\n+static pg_atomic_uint32 *index_vacuum_completed = NULL;\n:\n+void\n+parallel_vacuum_progress_report(void)\n+{\n+ if (IsParallelWorker() || !report_parallel_vacuum_progress)\n+ return;\n+\n+ pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED,\n+ pg_atomic_read_u32(index_vacuum_completed));\n+}\n\nI think we can pass ParallelVacuumState (or PVIndStats) to the\nreporting function so that it can check the counter and report the\nprogress.\n\n> 2. 
A new function in vacuumparallel.c will be used to update\n> the progress of indexes completed by reading from the\n> counter created in point #1.\n>\n> 3. The function is called during the vacuum_delay_point as a\n> matter of convenience, since this is called in all major vacuum\n> loops. The function will only do something if the caller\n> sets a boolean to report progress. Doing so will also ensure\n> progress is being reported in case the parallel workers completed\n> before the leader.\n\nRobert pointed out:\n\n---\nI am not too sure that the idea of using the vacuum delay points is the best\nplan. I think we should try to avoid piggybacking on such general\ninfrastructure if we can, and instead look for a way to tie this to\nsomething that is specific to parallel vacuum.\n---\n\nI agree with this part.\n\nInstead, I think we can add a boolean and the pointer for\nParallelVacuumState to IndexVacuumInfo. If the boolean is true, an\nindex AM can call the reporting function with the pointer to\nParallelVacuumState while scanning index blocks, for example, for\nevery 1GB. The boolean can be true only for the leader process.\n\n>\n> 4. Rather than adding any complexity to WaitForParallelWorkersToFinish\n> and introducing a new callback, vacuumparallel.c will wait until\n> the number of vacuum workers is 0 and then call\n> WaitForParallelWorkersToFinish as it does currently.\n\nAgreed, but with the following change, the leader process waits in a\nbusy loop, which should not be acceptable:\n\n+ if (VacuumActiveNWorkers)\n+ {\n+ while (pg_atomic_read_u32(VacuumActiveNWorkers) > 0)\n+ {\n+ parallel_vacuum_progress_report();\n+ }\n+ }\n+\n\nAlso, I think it's better to check whether idx_completed_progress\nequals to the number indexes instead.\n\n> 5. Went back to the idea of adding a new view called pg_stat_progress_vacuum_index\n> which is accomplished by adding a new type called VACUUM_PARALLEL in progress.h\n\nProbably, we can devide the patch into two patches. 
One for adding the\nnew statistics of the number of vacuumed indexes to\npg_stat_progress_vacuum and another one for adding new statistics view\npg_stat_progress_vacuum_index.\n\nRegards,\n\n-- \nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 12 Oct 2022 16:15:19 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Thank you for the feedback!\r\n\r\n> While it seems to be a good idea to have the atomic counter for the\r\n> number of indexes completed, I think we should not use the global\r\n> variable referencing the counter as follow:\r\n\r\n> +static pg_atomic_uint32 *index_vacuum_completed = NULL;\r\n> :\r\n> +void\r\n> +parallel_vacuum_progress_report(void)\r\n> +{\r\n> + if (IsParallelWorker() || !report_parallel_vacuum_progress)\r\n> + return;\r\n> +\r\n> + pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED,\r\n> + pg_atomic_read_u32(index_vacuum_completed));\r\n> +}\r\n\r\n> I think we can pass ParallelVacuumState (or PVIndStats) to the\r\n> reporting function so that it can check the counter and report the\r\n> progress.\r\n\r\nYes, that makes sense.\r\n\r\n> ---\r\n> I am not too sure that the idea of using the vacuum delay points is the best\r\n> plan. I think we should try to avoid piggybacking on such general\r\n> infrastructure if we can, and instead look for a way to tie this to\r\n> something that is specific to parallel vacuum.\r\n> ---\r\n\r\nAdding the call to vacuum_delay_point made sense since it's\r\ncalled in all major vacuum scans. While it is also called\r\nby analyze, it will only do anything if the caller sets a flag\r\nto report_parallel_vacuum_progress.\r\n\r\n\r\n > Instead, I think we can add a boolean and the pointer for\r\n > ParallelVacuumState to IndexVacuumInfo. 
If the boolean is true, an\r\n > index AM can call the reporting function with the pointer to\r\n > ParallelVacuumState while scanning index blocks, for example, for\r\n > every 1GB. The boolean can be true only for the leader process.\r\n\r\nWill doing this every 1GB be necessary? Considering the function\r\nwill not do much more than update progress from the value\r\nstored in index_vacuum_completed?\r\n\r\n\r\n> Agreed, but with the following change, the leader process waits in a\r\n> busy loop, which should not be acceptable:\r\n\r\nGood point.\r\n\r\n> Also, I think it's better to check whether idx_completed_progress\r\n equals to the number indexes instead.\r\n\r\nGood point\r\n\r\n > 5. Went back to the idea of adding a new view called pg_stat_progress_vacuum_index\r\n > which is accomplished by adding a new type called VACUUM_PARALLEL in progress.h\r\n\r\n> Probably, we can devide the patch into two patches. One for adding the\r\n\r\nMakes sense.\r\n\r\nThanks\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\r\n", "msg_date": "Fri, 14 Oct 2022 20:05:41 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Attached is v13-0001--Show-progress-for-index-vacuums.patch which addresses\r\nthe latest comments. 
The main changes are:\r\n\r\n1/ Call the parallel_vacuum_progress_report inside the AMs rather than vacuum_delay_point.\r\n\r\n2/ A Boolean when set to True in vacuumparallel.c will be used to determine if calling\r\nparallel_vacuum_progress_report is necessary.\r\n\r\n3/ Removed global varilable from vacuumparallel.c\r\n\r\n4/ Went back to calling parallel_vacuum_progress_report inside \r\nWaitForParallelWorkersToFinish to cover the case when a \r\nleader is waiting for parallel workers to finish.\r\n\r\n5/ I did not see a need to only report progress after 1GB as it's a fairly cheap call to update\r\nprogress.\r\n\r\n6/ v1-0001-Function-to-return-currently-vacuumed-or-cleaned-ind.patch is a separate patch\r\nfor exposing the index relid being vacuumed by a backend. \r\n\r\nThanks\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Wed, 2 Nov 2022 16:52:19 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "2022年11月3日(木) 1:52 Imseih (AWS), Sami <simseih@amazon.com>:\n>\n> Attached is v13-0001--Show-progress-for-index-vacuums.patch which addresses\n> the latest comments. 
The main changes are:\n>\n> 1/ Call the parallel_vacuum_progress_report inside the AMs rather than vacuum_delay_point.\n>\n> 2/ A Boolean when set to True in vacuumparallel.c will be used to determine if calling\n> parallel_vacuum_progress_report is necessary.\n>\n> 3/ Removed global varilable from vacuumparallel.c\n>\n> 4/ Went back to calling parallel_vacuum_progress_report inside\n> WaitForParallelWorkersToFinish to cover the case when a\n> leader is waiting for parallel workers to finish.\n>\n> 5/ I did not see a need to only report progress after 1GB as it's a fairly cheap call to update\n> progress.\n>\n> 6/ v1-0001-Function-to-return-currently-vacuumed-or-cleaned-ind.patch is a separate patch\n> for exposing the index relid being vacuumed by a backend.\n\nThis entry was marked \"Needs review\" in the CommitFest app but cfbot\nreports the patch [1] no longer applies.\n\n[1] this patch:\nv1-0001-Function-to-return-currently-vacuumed-or-cleaned-ind.patch\n\nWe've marked it as \"Waiting on Author\". 
As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time update the patch.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can move the patch entry forward by visiting\n\n https://commitfest.postgresql.org/40/3617/\n\nand changing the status to \"Needs review\".\n\n\nThanks\n\nIan Barwick\n\n\n", "msg_date": "Thu, 3 Nov 2022 17:16:21 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Resubmitting patches with proper format.\r\n\r\nThanks\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Fri, 4 Nov 2022 13:27:34 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Thu, Nov 3, 2022 at 1:52 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> Attached is v13-0001--Show-progress-for-index-vacuums.patch which addresses\n> the latest comments.\n\nThank you for updating the patch!\n\n> 4/ Went back to calling parallel_vacuum_progress_report inside\n> WaitForParallelWorkersToFinish to cover the case when a\n> leader is waiting for parallel workers to finish.\n\nI don't think we need to modify WaitForParallelWorkersToFinish to\ncover that case. Instead, I think the leader process can execute a new\nfunction. The function will be like WaitForParallelWorkersToFinish()\nbut simpler; it just updates the progress information if necessary and\nthen checks if idx_completed_progress is equal to the number of\nindexes to vacuum. 
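In code form, that simpler wait function could be caricatured like this (single-process sketch: a fake worker step advances the shared counter, and plain iteration stands in for the WaitLatch() nap; the function name and counter placement are illustrative, not the patch's):

```c
#include <assert.h>
#include <stdatomic.h>

/* Stand-in for the counter the patch keeps in the parallel DSM segment. */
static atomic_int idx_completed_progress;

/* In reality parallel workers bump the counter; this fake step completes
 * one index per call so the example terminates deterministically. */
static void
fake_worker_step(void)
{
    atomic_fetch_add(&idx_completed_progress, 1);
}

/*
 * Sketch of the proposed wait function: re-read the counter, report it
 * when it changes, and return once it reaches nindexes -- after which the
 * caller would invoke the real WaitForParallelWorkersToFinish().
 * Returns the last value reported, so the behaviour is checkable.
 */
int
parallel_vacuum_wait_for_indexes(int nindexes)
{
    int last_reported = -1;

    atomic_store(&idx_completed_progress, 0);
    for (;;)
    {
        int done = atomic_load(&idx_completed_progress);

        if (done != last_reported)
            last_reported = done;   /* pgstat_progress_update_param() here */
        if (done >= nindexes)
            break;
        fake_worker_step();         /* real code: WaitLatch(), then recheck */
    }
    return last_reported;
}
```

Only the shape of the loop (report, check, nap, repeat, then wait for workers to exit) follows the proposal above.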
If yes, return from the function and call\nWaitForParallelWorkersToFinish() to wait for all workers to finish.\nOtherwise, it naps by using WaitLatch() and does this loop again.\n\n---\n@@ -46,6 +46,8 @@ typedef struct ParallelContext\n ParallelWorkerInfo *worker;\n int nknown_attached_workers;\n bool *known_attached_workers;\n+ void (*parallel_progress_callback)(void *arg);\n+ void *parallel_progress_arg;\n } ParallelContext;\n\nWith the above change I suggested, I think we won't need to have a\ncallback function in ParallelContext. Instead, I think we can have\nindex-AMs call parallel_vacuum_report() if report_parallel_progress is\ntrue.\n\nRegards,\n\n--\nMasahiko Sawada\n\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 8 Nov 2022 16:49:27 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Hi,\n\nOn 2022-11-04 13:27:34 +0000, Imseih (AWS), Sami wrote:\n> diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c\n> index b4fa5f6bf8..3d5e4600dc 100644\n> --- a/src/backend/access/gin/ginvacuum.c\n> +++ b/src/backend/access/gin/ginvacuum.c\n> @@ -633,6 +633,9 @@ ginbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,\n> \t\tUnlockReleaseBuffer(buffer);\n> \t\tbuffer = ReadBufferExtended(index, MAIN_FORKNUM, blkno,\n> \t\t\t\t\t\t\t\t\tRBM_NORMAL, info->strategy);\n> +\n> +\t\tif (info->report_parallel_progress)\n> +\t\t\tinfo->parallel_progress_callback(info->parallel_progress_arg);\n> \t}\n> \n> \t/* right now we found leftmost page in entry's BTree */\n\nI don't think any of these progress callbacks should be done while pinning a\nbuffer and ...\n\n> @@ -677,6 +680,9 @@ ginbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,\n> \t\tbuffer = ReadBufferExtended(index, MAIN_FORKNUM, blkno,\n> \t\t\t\t\t\t\t\t\tRBM_NORMAL, info->strategy);\n> \t\tLockBuffer(buffer, 
GIN_EXCLUSIVE);\n> +\n> +\t\tif (info->report_parallel_progress)\n> +\t\t\tinfo->parallel_progress_callback(info->parallel_progress_arg);\n> \t}\n> \n> \tMemoryContextDelete(gvs.tmpCxt);\n\n... definitely not while holding a buffer lock.\n\n\nI also don't understand why info->parallel_progress_callback exists? It's only\nset to parallel_vacuum_progress_report(). Why make this stuff more expensive\nthan it has to already be?\n\n\n\n> +parallel_vacuum_progress_report(void *arg)\n> +{\n> +\tParallelVacuumState *pvs = (ParallelVacuumState *) arg;\n> +\n> +\tif (IsParallelWorker())\n> +\t\treturn;\n> +\n> +\tpgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED,\n> +\t\t\t\t\t\t\t\t pg_atomic_read_u32(&(pvs->shared->idx_completed_progress)));\n> +}\n\nSo each of the places that call this need to make an additional external\nfunction call for each page, just to find that there's nothing to do or to\nmake yet another indirect function call. This should probably a static inline\nfunction.\n\nThis is called, for every single page, just to read the number of indexes\ncompleted by workers? A number that barely ever changes?\n\nThis seems all wrong.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 8 Nov 2022 19:00:30 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> I don't think any of these progress callbacks should be done while pinning a\r\n> buffer and ...\r\n\r\nGood point.\r\n\r\n> I also don't understand why info->parallel_progress_callback exists? It's only\r\n> set to parallel_vacuum_progress_report(). Why make this stuff more expensive\r\n> than it has to already be?\r\n\r\nAgree. Modified the patch to avoid the callback .\r\n\r\n> So each of the places that call this need to make an additional external\r\n> function call for each page, just to find that there's nothing to do or to\r\n> make yet another indirect function call. 
This should probably a static inline\r\n> function.\r\n\r\nEven better to just remove a function call altogether.\r\n\r\n> This is called, for every single page, just to read the number of indexes\r\n> completed by workers?\r\n\r\nI will take the initial suggestion by Sawada-san to update the progress\r\nevery 1GB of blocks scanned. \r\n\r\nAlso, it seems to me that we don't need to track progress in brin indexes,\r\nsince it is rare, if ever, this type of index will go through very heavy\r\nblock scans. In fact, I noticed the vacuum AMs for brin don't invoke the\r\nvacuum_delay_point at all.\r\n\r\nThe attached patch addresses the feedback.\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Fri, 11 Nov 2022 19:10:16 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Sat, Nov 12, 2022 at 4:10 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> > I don't think any of these progress callbacks should be done while pinning a\n> > buffer and ...\n>\n> Good point.\n>\n> > I also don't understand why info->parallel_progress_callback exists? It's only\n> > set to parallel_vacuum_progress_report(). Why make this stuff more expensive\n> > than it has to already be?\n>\n> Agree. Modified the patch to avoid the callback .\n>\n> > So each of the places that call this need to make an additional external\n> > function call for each page, just to find that there's nothing to do or to\n> > make yet another indirect function call. This should probably a static inline\n> > function.\n>\n> Even better to just remove a function call altogether.\n>\n> > This is called, for every single page, just to read the number of indexes\n> > completed by workers? 
A number that barely ever changes?\n>\n> I will take the initial suggestion by Sawada-san to update the progress\n> every 1GB of blocks scanned.\n>\n> Also, It sems to me that we don't need to track progress in brin indexes,\n> Since it is rare, if ever, this type of index will go through very heavy\n> block scans. In fact, I noticed the vacuum AMs for brin don't invoke the\n> vacuum_delay_point at all.\n>\n> The attached patch addresses the feedback.\n>\n\nThank you for updating the patch! Here are review comments on v15 patch:\n\n+ <para>\n+ Number of indexes that wil be vacuumed. This value will be\n+ <literal>0</literal> if there are no indexes to vacuum or\n+ vacuum failsafe is triggered.\n\nI think that indexes_total should be 0 also when INDEX_CLEANUP is off.\n\n---\n+ /*\n+ * Reset the indexes completed at this point.\n+ * If we end up in another index vacuum cycle, we will\n+ * start counting from the start.\n+ */\n+ pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED, 0);\n\nI think we don't need to reset it at the end of index vacuuming. There\nis a small window before switching to the next phase. If we reset this\nvalue while showing \"index vacuuming\" phase, the user might get\nconfused. 
Instead, we can reset it at the beginning of the index\nvacuuming.\n\n---\n+void\n+parallel_wait_for_workers_to_finish(ParallelVacuumState *pvs)\n+{\n+ while (pg_atomic_read_u32(&(pvs->shared->idx_completed_progress))\n< pvs->nindexes)\n+ {\n+ pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED,\n+\n pg_atomic_read_u32(&(pvs->shared->idx_completed_progress)));\n+\n+ (void) WaitLatch(MyLatch, WL_LATCH_SET |\nWL_EXIT_ON_PM_DEATH, -1,\n+ WAIT_EVENT_PARALLEL_FINISH);\n+ ResetLatch(MyLatch);\n+ }\n+}\n\nWe should add CHECK_FOR_INTERRUPTS() at the beginning of the loop to\nmake the wait interruptible.\n\nI think it would be better to update the counter only when the value\nhas been increased.\n\nI think we should set a timeout, say 1 sec, to WaitLatch so that it\ncan periodically check the progress.\n\nProbably it's better to have a new wait event for this wait in order\nto distinguish wait for workers to complete index vacuum from the wait\nfor workers to exit.\n\n---\n@@ -838,7 +867,12 @@\nparallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation\nindrel,\n ivinfo.estimated_count = pvs->shared->estimated_count;\n ivinfo.num_heap_tuples = pvs->shared->reltuples;\n ivinfo.strategy = pvs->bstrategy;\n-\n+ ivinfo.idx_completed_progress = pvs->shared->idx_completed_progress;\n\nand\n\n@@ -998,6 +998,9 @@ btvacuumscan(IndexVacuumInfo *info,\nIndexBulkDeleteResult *stats,\n if (info->report_progress)\n\npgstat_progress_update_param(PROGRESS_SCAN_BLOCKS_DONE,\n\n scanblkno);\n+ if (info->report_parallel_progress &&\n(scanblkno % REPORT_PARALLEL_VACUUM_EVERY_PAGES) == 0)\n+\npgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED,\n+\n\npg_atomic_read_u32(&(info->idx_completed_progress)));\n }\n\nI think this doesn't work, since ivinfo.idx_completed is in the\nbackend-local memory. Instead, I think we can have a function in\nvacuumparallel.c that updates the progress. 
Then we can have index AM\ncall this function.\n\n---\n+ if (!IsParallelWorker())\n+ ivinfo.report_parallel_progress = true;\n+ else\n+ ivinfo.report_parallel_progress = false;\n\nWe can do like:\n\nivinfo.report_parallel_progress = !IsParallelWorker();\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 18 Nov 2022 22:07:22 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> I think that indexes_total should be 0 also when INDEX_CLEANUP is off.\r\n\r\nPatch updated for handling of INDEX_CLEANUP = off, with an update to\r\nthe documentation as well.\r\n\r\n> I think we don't need to reset it at the end of index vacuuming. There\r\n> is a small window before switching to the next phase. If we reset this\r\n> value while showing \"index vacuuming\" phase, the user might get\r\n> confused. Instead, we can reset it at the beginning of the index\r\n> vacuuming.\r\n\r\nNo, I think the way it's currently done is correct. We want to reset the number\r\nof indexes completed before we increase the num_index_scans ( index vacuum cycle ).\r\nThis ensures that when we enter a new index cycle, the number of indexes completed\r\nare already reset. The 2 fields that matter here is how many indexes are vacuumed in the\r\ncurrently cycle and this is called out in the documentation as such.\r\n\r\n> We should add CHECK_FOR_INTERRUPTS() at the beginning of the loop to\r\n> make the wait interruptible.\r\n\r\nDone.\r\n\r\n> I think it would be better to update the counter only when the value\r\n> has been increased.\r\n\r\nDone. Did so by checking the progress value from the beentry directly\r\nin the function. 
We can do a more generalized \r\n\r\n> I think we should set a timeout, say 1 sec, to WaitLatch so that it\r\n> can periodically check the progress.\r\n\r\nDone.\r\n\r\n> Probably it's better to have a new wait event for this wait in order\r\n> to distinguish wait for workers to complete index vacuum from the wait\r\n> for workers to exit.\r\n\r\nI was trying to avoid introducing a new wait event, but thinking about it, \r\nI agree with your suggestion. \r\n\r\nI created a new ParallelVacuumFinish wait event and documentation\r\nfor the wait event.\r\n\r\n\r\n> I think this doesn't work, since ivinfo.idx_completed is in the\r\n> backend-local memory. Instead, I think we can have a function in\r\n> vacuumparallel.c that updates the progress. Then we can have index AM\r\n> call this function.\r\n\r\nYeah, you're absolutely correct. \r\n\r\nTo make this work correctly, I did something similar to VacuumActiveNWorkers\r\nand introduced an extern variable called ParallelVacuumProgress.\r\nThis variable points to pvs->shared->idx_completed_progress. \r\n\r\nThe index AMs then call parallel_vacuum_update_progress which\r\nIs responsible for updating the progress with the current value\r\nof ParallelVacuumProgress. 
\r\n\r\nParallelVacuumProgress is also set to NULL at the end of every index cycle.\r\n\r\n> ivinfo.report_parallel_progress = !IsParallelWorker();\r\n\r\nDone\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Mon, 28 Nov 2022 23:57:14 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Hi,\n\nThank you for updating the patch!\n\nOn Tue, Nov 29, 2022 at 8:57 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> > I think that indexes_total should be 0 also when INDEX_CLEANUP is off.\n>\n> Patch updated for handling of INDEX_CLEANUP = off, with an update to\n> the documentation as well.\n>\n> > I think we don't need to reset it at the end of index vacuuming. There\n> > is a small window before switching to the next phase. If we reset this\n> > value while showing \"index vacuuming\" phase, the user might get\n> > confused. Instead, we can reset it at the beginning of the index\n> > vacuuming.\n>\n> No, I think the way it's currently done is correct. We want to reset the number\n> of indexes completed before we increase the num_index_scans ( index vacuum cycle ).\n> This ensures that when we enter a new index cycle, the number of indexes completed\n> are already reset. 
The 2 fields that matter here is how many indexes are vacuumed in the\n> currently cycle and this is called out in the documentation as such.\n>\n\nAgreed.\n\nHere are comments on v16 patch.\n\n--- a/contrib/bloom/blvacuum.c\n+++ b/contrib/bloom/blvacuum.c\n@@ -15,12 +15,14 @@\n #include \"access/genam.h\"\n #include \"bloom.h\"\n #include \"catalog/storage.h\"\n+#include \"commands/progress.h\"\n #include \"commands/vacuum.h\"\n #include \"miscadmin.h\"\n #include \"postmaster/autovacuum.h\"\n #include \"storage/bufmgr.h\"\n #include \"storage/indexfsm.h\"\n #include \"storage/lmgr.h\"\n+#include \"utils/backend_progress.h\"\n\nI think we don't need to include them here. Probably the same is true\nfor other index AMs.\n\n---\n vacuum_delay_point();\n+ if (info->report_parallel_progress && (blkno %\nREPORT_PARALLEL_VACUUM_EVERY_PAGES) == 0)\n+ parallel_vacuum_update_progress();\n+\n\n buffer = ReadBufferExtended(index, MAIN_FORKNUM, blkno,\n\nThere is an extra new line.\n\n---\n+ <row>\n+ <entry><literal>ParallelVacuumFinish</literal></entry>\n+ <entry>Waiting for parallel vacuum workers to finish computing.</entry>\n+ </row>\n\nHow about \"Waiting for parallel vacuum workers to finish index vacuum\"?\n\n---\nvacrel->rel = rel;\nvac_open_indexes(vacrel->rel, RowExclusiveLock, &vacrel->nindexes,\n &vacrel->indrels);\n+\nif (instrument && vacrel->nindexes > 0)\n{\n /* Copy index names used by instrumentation (not error reporting) */\n\n\nThis added line is not necessary.\n\n---\n /* Counter for vacuuming and cleanup */\n pg_atomic_uint32 idx;\n+\n+ /*\n+ * Counter for vacuuming and cleanup progress reporting.\n+ * This value is used to report index vacuum/cleanup progress\n+ * in parallel_vacuum_progress_report. We keep this\n+ * counter to avoid having to loop through\n+ * ParallelVacuumState->indstats to determine the number\n+ * of indexes completed.\n+ */\n+ pg_atomic_uint32 idx_completed_progress;\n\nI think the name of idx_completed_progress is very confusing. 
Since\nthe idx of PVShared refers to the current index in the pvs->indstats[]\nwhereas idx_completed_progress is the number of vacuumed indexes. How\nabout \"nindexes_completed\"?\n\n---\n+ /*\n+ * Set the shared parallel vacuum progress. This will be used\n+ * to periodically update progress.h with completed indexes\n+ * in a parallel vacuum. See comments in\nparallel_vacuum_update_progress\n+ * for more details.\n+ */\n+ ParallelVacuumProgress =\n&(pvs->shared->idx_completed_progress);\n+\n\nSince we pass pvs to parallel_wait_for_workers_to_finish(), we don't\nneed to have ParallelVacuumProgress. I see\nparallel_vacuum_update_progress() uses this value but I think it's\nbetter to pass ParallelVacuumState to via IndexVacuumInfo.\n\n---\n+ /*\n+ * To wait for parallel workers to finish,\n+ * first call parallel_wait_for_workers_to_finish\n+ * which is responsible for reporting the\n+ * number of indexes completed.\n+ *\n+ * Afterwards, WaitForParallelWorkersToFinish is called\n+ * to do the real work of waiting for parallel workers\n+ * to finish.\n+ *\n+ * Note: Both routines will acquire a WaitLatch in their\n+ * respective loops.\n+ */\n\nHow about something like:\n\nWait for all indexes to be vacuumed while updating the parallel vacuum\nindex progress. 
And then wait for all workers to finish.\n\n---\n RelationGetRelationName(indrel));\n }\n\n+ if (ivinfo.report_parallel_progress)\n+ parallel_vacuum_update_progress();\n+\n\nI think it's better to update the progress info after updating\npvs->shared->idx_completed_progress.\n\n---\n+/*\n+ * Check if we are done vacuuming indexes and report\n+ * progress.\n\nHow about \"Waiting for all indexes to be vacuumed while updating the\nparallel index vacuum progress\"?\n\n+ *\n+ * We nap using with a WaitLatch to avoid a busy loop.\n+ *\n+ * Note: This function should be used by the leader process only,\n+ * and it's up to the caller to ensure this.\n+ */\n\nI think these comments are not necessary.\n\n+void\n+parallel_wait_for_workers_to_finish(ParallelVacuumState *pvs)\n\nHow about \"parallel_vacuum_wait_to_finish\"?\n\n---\n+/*\n+ * Read the shared ParallelVacuumProgress and update progress.h\n+ * with indexes vacuumed so far. This function is called periodically\n+ * by index AMs as well as parallel_vacuum_process_one_index.\n+ *\n+ * To avoid unnecessarily updating progress, we check the progress\n+ * values from the backend entry and only update if the value\n+ * of completed indexes increases.\n+ *\n+ * Note: This function should be used by the leader process only,\n+ * and it's up to the caller to ensure this.\n+ */\n+void\n+parallel_vacuum_update_progress(void)\n+{\n+ volatile PgBackendStatus *beentry = MyBEEntry;\n+\n+ Assert(!IsParallelWorker);\n+\n+ if (beentry && ParallelVacuumProgress)\n+ {\n+ int parallel_vacuum_current_value =\nbeentry->st_progress_param[PROGRESS_VACUUM_INDEX_COMPLETED];\n+ int parallel_vacuum_new_value =\npg_atomic_read_u32(ParallelVacuumProgress);\n+\n+ if (parallel_vacuum_new_value > parallel_vacuum_current_value)\n+\npgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED,\nparallel_vacuum_new_value);\n+ }\n+}\n\nparallel_vacuum_update_progress() is typically called every 1GB so I\nthink we don't need to worry about unnecessary 
update. Also, I think\nthis code doesn't work when pgstat_track_activities is false. Instead,\nI think that in parallel_wait_for_workers_to_finish(), we can check\nthe value of pvs->nindexes_completed and update the progress if there\nis an update or it's first time.\n\n---\n+ (void) WaitLatch(MyLatch, WL_TIMEOUT | WL_LATCH_SET |\nWL_EXIT_ON_PM_DEATH, PARALLEL_VACUUM_PROGRESS_TIMEOUT,\n+\nWAIT_EVENT_PARALLEL_VACUUM_FINISH);\n+ ResetLatch(MyLatch);\n\nI think we don't necessarily need to use\nPARALLEL_VACUUM_PROGRESS_TIMEOUT here. Probably we can use 1000L\ninstead. If we want to use PARALLEL_VACUUM_PROGRESS_TIMEOUT, we need\ncomments for that:\n\n+#define PARALLEL_VACUUM_PROGRESS_TIMEOUT 1000\n\n---\n- WAIT_EVENT_XACT_GROUP_UPDATE\n+ WAIT_EVENT_XACT_GROUP_UPDATE,\n+ WAIT_EVENT_PARALLEL_VACUUM_FINISH\n } WaitEventIPC;\n\n Enums of WaitEventIPC should be defined in alphabetical order.\n\n---\ncfbot fails.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 6 Dec 2022 12:18:05 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Thanks for the feedback. I agree with the feedback, except\r\nfor \r\n\r\n> need to have ParallelVacuumProgress. I see\r\n> parallel_vacuum_update_progress() uses this value but I think it's\r\n> better to pass ParallelVacuumState to via IndexVacuumInfo.\r\n\r\nI was trying to avoid passing a pointer to\r\nParallelVacuumState in IndexVacuuminfo.\r\n\r\nParallelVacuumProgress is implemented in the same\r\nway as VacuumSharedCostBalance and \r\nVacuumActiveNWorkers. 
See vacuum.h\r\n\r\nThese values are reset at the start of a parallel vacuum cycle\r\nand reset at the end of an index vacuum cycle.\r\n\r\nThis seems like a better approach and less invasive.\r\nWhat would be a reason not to go with this approach?\r\n\r\n\r\n> parallel_vacuum_update_progress() is typically called every 1GB so I\r\n> think we don't need to worry about unnecessary update. Also, I think\r\n> this code doesn't work when pgstat_track_activities is false. Instead,\r\n> I think that in parallel_wait_for_workers_to_finish(), we can check\r\n> the value of pvs->nindexes_completed and update the progress if there\r\n> is an update or it's first time.\r\n\r\nI agree that we don’t need to worry about unnecessary updates\r\nin parallel_vacuum_update_progress since we are calling\r\nevery 1GB. I also don't think we should do anything additional\r\nin parallel_wait_for_workers_to_finish since here we are only\r\nupdating every 1 second.\r\n\r\nThanks,\r\n\r\nSami Imseih\r\nAmazon Web Services\r\n\r\n", "msg_date": "Tue, 13 Dec 2022 04:40:02 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Tue, Dec 13, 2022 at 1:40 PM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> Thanks for the feedback. I agree with the feedback, except\n> for\n>\n> > need to have ParallelVacuumProgress. I see\n> > parallel_vacuum_update_progress() uses this value but I think it's\n> > better to pass ParallelVacuumState to via IndexVacuumInfo.\n>\n> I was trying to avoid passing a pointer to\n> ParallelVacuumState in IndexVacuuminfo.\n>\n> ParallelVacuumProgress is implemented in the same\n> way as VacuumSharedCostBalance and\n> VacuumActiveNWorkers. 
See vacuum.h\n>\n> These values are reset at the start of a parallel vacuum cycle\n> and reset at the end of an index vacuum cycle.\n>\n> This seems like a better approach and less invasive.\n> What would be a reason not to go with this approach?\n\nFirst of all, I don't think we need to declare ParallelVacuumProgress\nin vacuum.c since it's set and used only in vacuumparallel.c. But I\ndon't even think it's a good idea to declare it in vacuumparallel.c as\na static variable. The primary reason is that it adds things we need\nto care about. For example, what if we raise an error during index\nvacuum? The transaction aborts but ParallelVacuumProgress still refers\nto something old. Suppose further that the next parallel vacuum\ndoesn't launch any workers, the leader process would still end up\naccessing the old value pointed by ParallelVacuumProgress, which\ncauses a SEGV. So we need to reset it anyway at the beginning of the\nparallel vacuum. It's easy to fix at this time but once the parallel\nvacuum code gets more complex, it could forget to care about it.\n\nIMO VacuumSharedCostBalance and VacuumActiveNWorkers have a different\nstory. They are set in vacuumparallel.c and are used in vacuum.c for\nvacuum delay. If they weren't global variables, we would need to pass\nthem to every function that could eventually call the vacuum delay\nfunction. So it makes sense to me to have them as global variables. On\nthe other hand, for ParallelVacuumProgress, it's a common pattern that\nambulkdelete(), amvacuumcleanup() or a common index scan routine like\nbtvacuumscan() checks the progress. I don't think index AM needs to\npass the value down to many of its functions. 
So it makes sense to me\nto pass it via IndexVacuumInfo.\n\nHaving said that, I'd like to hear opinions also from other hackers, I\nmight be wrong and it's more invasive as you pointed out.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 14 Dec 2022 10:43:10 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> First of all, I don't think we need to declare ParallelVacuumProgress\r\n> in vacuum.c since it's set and used only in vacuumparallel.c. But I\r\n> don't even think it's a good idea to declare it in vacuumparallel.c as\r\n> a static variable. The primary reason is that it adds things we need\r\n> to care about. For example, what if we raise an error during index\r\n> vacuum? The transaction aborts but ParallelVacuumProgress still refers\r\n> to something old. Suppose further that the next parallel vacuum\r\n> doesn't launch any workers, the leader process would still end up\r\n> accessing the old value pointed by ParallelVacuumProgress, which\r\n> causes a SEGV. So we need to reset it anyway at the beginning of the\r\n> parallel vacuum. It's easy to fix at this time but once the parallel\r\n> vacuum code gets more complex, it could forget to care about it.\r\n\r\n> IMO VacuumSharedCostBalance and VacuumActiveNWorkers have a different\r\n> story. They are set in vacuumparallel.c and are used in vacuum.c for\r\n> vacuum delay. If they weren't global variables, we would need to pass\r\n> them to every function that could eventually call the vacuum delay\r\n> function. So it makes sense to me to have them as global variables. On\r\n> the other hand, for ParallelVacuumProgress, it's a common pattern that\r\n> ambulkdelete(), amvacuumcleanup() or a common index scan routine like\r\n> btvacuumscan() checks the progress. I don't think index AM needs to\r\n> pass the value down to many of its functions. 
I don't think index AM needs to\r\n> pass the value down to many of its functions. So it makes sense to me\r\n> to pass it via IndexVacuumInfo.\r\n\r\nThanks for the detailed explanation and especially clearing up\r\nmy understanding of VacuumSharedCostBalance and VacuumActiveNWorker.\r\n\r\nI do now think that passing ParallelVacuumState in IndexVacuumInfo is\r\na more optimal choice.\r\n\r\nAttached version addresses the above and the previous comments.\r\n\r\n\r\nThanks\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Wed, 14 Dec 2022 05:09:46 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Wed, Dec 14, 2022 at 05:09:46AM +0000, Imseih (AWS), Sami wrote:\n> Attached version addresses the above and the previous comments.\n\ncfbot is complaining that this patch no longer applies. Sami, would you\nmind rebasing it?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 30 Dec 2022 10:39:55 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> cfbot is complaining that this patch no longer applies. Sami, would you\r\n> mind rebasing it?\r\n\r\nRebased patch attached.\r\n\r\n--\r\nSami Imseih \r\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 2 Jan 2023 04:34:26 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Mon, 2 Jan 2023 at 10:04, Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> > cfbot is complaining that this patch no longer applies. 
Sami, would you\n> > mind rebasing it?\n>\n> Rebased patch attached.\n\nCFBot shows some compilation errors as in [1], please post an updated\nversion for the same:\n[07:01:58.889] In file included from ../../../src/include/postgres.h:47,\n[07:01:58.889] from vacuumparallel.c:27:\n[07:01:58.889] vacuumparallel.c: In function ‘parallel_vacuum_update_progress’:\n[07:01:58.889] vacuumparallel.c:1118:10: error: ‘IsParallelWorker’\nundeclared (first use in this function); did you mean\n‘ParallelWorkerMain’?\n[07:01:58.889] 1118 | Assert(!IsParallelWorker);\n[07:01:58.889] | ^~~~~~~~~~~~~~~~\n[07:01:58.889] ../../../src/include/c.h:859:9: note: in definition of\nmacro ‘Assert’\n[07:01:58.889] 859 | if (!(condition)) \\\n[07:01:58.889] | ^~~~~~~~~\n[07:01:58.889] vacuumparallel.c:1118:10: note: each undeclared\nidentifier is reported only once for each function it appears in\n[07:01:58.889] 1118 | Assert(!IsParallelWorker);\n[07:01:58.889] | ^~~~~~~~~~~~~~~~\n\n[1] - https://cirrus-ci.com/task/4557389261701120\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 3 Jan 2023 16:19:03 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> cirrus-ci.com/task/4557389261701120\r\n\r\nI earlier compiled without building with --enable-cassert,\r\nwhich is why the compilation errors did not produce on my\r\nbuid.\r\n\r\nFixed in v19.\r\n\r\nThanks\r\n\r\n--\r\nSami Imseih \r\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 3 Jan 2023 16:46:15 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Hi,\n\nOn 1/3/23 5:46 PM, Imseih (AWS), Sami wrote:\n>> cirrus-ci.com/task/4557389261701120\n> \n> I earlier compiled without building with --enable-cassert,\n> which is why the compilation errors did not produce on my\n> buid.\n> \n> 
Fixed in v19.\n> \n> Thanks\n> \n\nThanks for the updated patch!\n\nSome comments about it:\n\n+\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>indexes_total</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ Number of indexes that wil be vacuumed. This value will be\n\nTypo: wil\n\n+ /* report number of indexes to vacuum, if we are told to cleanup indexes */\n+ if (vacrel->do_index_cleanup)\n+ pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_TOTAL, vacrel->nindexes);\n\n\"Report\" instead? (to looks like the surrounding code)\n\n+ /*\n+ * Done vacuuming an index. Increment the indexes completed\n+ */\n+ pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED,\n+ idx + 1);\n\n\"Increment the indexes completed.\" (dot at the end) instead?\n\n- * Increase and report the number of index scans.\n+ * Reset and report the number of indexes scanned.\n+ * Also, increase and report the number of index\n+ * scans.\n\nWhat about \"Reset and report zero as the number of indexes scanned.\"? (just to make clear we\ndon't want to report the value as it was prior to the reset)\n\n- /* Disable index vacuuming, index cleanup, and heap rel truncation */\n+ /*\n+ * Disable index vacuuming, index cleanup, and heap rel truncation\n+ *\n\nThe new \"Disable index vacuuming, index cleanup, and heap rel truncation\" needs a dot at the end.\n\n+ * Also, report to progress.h that we are no longer tracking\n+ * index vacuum/cleanup.\n+ */\n\n\"Also, report that we are\" instead?\n\n+ /*\n+ * Done cleaning an index. Increment the indexes completed\n+ */\n\nNeeds a dot at the end.\n\n- /* Reset the parallel index processing counter */\n+ /* Reset the parallel index processing counter ( index progress counter also ) */\n\n\"Reset the parallel index processing and progress counters\" instead?\n\n+ /* Update the number of indexes completed. 
*/\n+ pg_atomic_add_fetch_u32(&(pvs->shared->nindexes_completed), 1);\n\nRemove the dot at the end? (to looks like the surrounding code)\n\n+\n+/*\n+ * Read pvs->shared->nindexes_completed and update progress.h\n+ * with indexes vacuumed so far. This function is called periodically\n\n\"Read pvs->shared->nindexes_completed and report the number of indexes vacuumed so far\" instead?\n\n+ * Note: This function should be used by the leader process only,\n\n\"called\" instead of \"used\"?\n\n case WAIT_EVENT_XACT_GROUP_UPDATE:\n event_name = \"XactGroupUpdate\";\n break;\n+ case WAIT_EVENT_PARALLEL_VACUUM_FINISH:\n+ event_name = \"ParallelVacuumFinish\";\n+ break;\n /* no default case, so that compiler will warn */\n\nIt seems to me that the case ordering should follow the alphabetical order (that's how it is done currently without the patch).\n\n+#define REPORT_PARALLEL_VACUUM_EVERY_PAGES ((BlockNumber) (((uint64) 1024 * 1024 * 1024) / BLCKSZ))\n\nIt seems to me that \"#define REPORT_PARALLEL_VACUUM_EVERY_PAGES ((BlockNumber) (1024 * 1024 * 1024 / BLCKSZ))\" would be fine too.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 4 Jan 2023 10:50:31 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Thanks for the review!\r\n\r\nAddressed the comments.\r\n\r\n> \"Increment the indexes completed.\" (dot at the end) instead?\r\n\r\nUsed the commenting format being used in other places in this\r\nfile with an inclusion of a double-dash. 
i.,e.\r\n/* Wraparound emergency -- end current index scan */\r\n\r\n> It seems to me that \"#define REPORT_PARALLEL_VACUUM_EVERY_PAGES ((BlockNumber) (1024 * 1024 * 1024 / BLCKSZ))\" would be fine too.\r\n\r\nI kept this the same as it matches what we are doing in other places such\r\nas FAILSAFE_EVERY_PAGES\r\n\r\nv20 attached.\r\n\r\n\r\nRegards,\r\n\r\n--\r\nSami Imseih \r\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 4 Jan 2023 19:24:26 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Thu, Jan 5, 2023 at 4:24 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> Thanks for the review!\n>\n> Addressed the comments.\n>\n> > \"Increment the indexes completed.\" (dot at the end) instead?\n>\n> Used the commenting format being used in other places in this\n> file with an inclusion of a double-dash. i.,e.\n> /* Wraparound emergency -- end current index scan */\n>\n> > It seems to me that \"#define REPORT_PARALLEL_VACUUM_EVERY_PAGES ((BlockNumber) (1024 * 1024 * 1024 / BLCKSZ))\" would be fine too.\n>\n> I kept this the same as it matches what we are doing in other places such\n> as FAILSAFE_EVERY_PAGES\n>\n> v20 attached.\n\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>indexes_total</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ Number of indexes that will be vacuumed. This value will be\n+ <literal>0</literal> if there are no indexes to vacuum,\n<literal>INDEX_CLEANUP</literal>\n+ is set to <literal>OFF</literal>, or vacuum failsafe is triggered.\n\nSimilar to above three cases, vacuum can bypass index vacuuming if\nthere are almost zero TIDs. Should we set indexes_total to 0 in this\ncase too? If so, I think we can set both indexes_total and\nindexes_completed at the beginning of the index vacuuming/cleanup and\nreset them at the end. 
That is, these values are valid only in index\nvacuum phase and index cleanup phase. Otherwise, 0.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 5 Jan 2023 17:11:36 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> Similar to above three cases, vacuum can bypass index vacuuming if\r\n> there are almost zero TIDs. Should we set indexes_total to 0 in this\r\n> case too? If so, I think we can set both indexes_total and\r\n> indexes_completed at the beginning of the index vacuuming/cleanup and\r\n> reset them at the end. \r\n\r\nUnlike the other 3 cases, in which the vacuum and cleanup are totally skipped,\r\na cleanup still occurs when the index vacuum is bypassed. From what I can tell,\r\nthis is to allow for things like a gin pending list cleanup even if the index\r\nis not vacuumed. There could be other reasons as well.\r\n\r\n if (bypass)\r\n {\r\n /*\r\n * There are almost zero TIDs. Behave as if there were precisely\r\n * zero: bypass index vacuuming, but do index cleanup.\r\n *\r\n * We expect that the ongoing VACUUM operation will finish very\r\n * quickly, so there is no point in considering speeding up as a\r\n * failsafe against wraparound failure. (Index cleanup is expected to\r\n * finish very quickly in cases where there were no ambulkdelete()\r\n * calls.)\r\n */\r\n vacrel->do_index_vacuuming = false;\r\n }\r\n\r\nSo it seems like we should still report the total number of indexes as we are currently\r\ndoing in the patch.\r\n\r\nWith that said, the documentation should make this be more clear.\r\n\r\nFor indexes_total, the description should be:\r\n\r\n Number of indexes that will be vacuumed or cleaned up. 
This value will be\r\n    <literal>0</literal> if there are no indexes to vacuum, <literal>INDEX_CLEANUP</literal>\r\n    is set to <literal>OFF</literal>, or vacuum failsafe is triggered.\r\n    See <xref linkend=\"guc-vacuum-failsafe-age\"/>\r\n\r\nFor indexes_completed, it should be:\r\n\r\n    Number of indexes vacuumed in the current vacuum cycle when the\r\n    phase is <literal>vacuuming indexes</literal>, or the number\r\n    of indexes cleaned up in the <literal>cleaning up indexes</literal>\r\n    phase.\r\n\r\n\r\nRegards,\r\n\r\n--\r\nSami Imseih \r\nAmazon Web Services: https://aws.amazon.com\r\n\r\n", "msg_date": "Fri, 6 Jan 2023 03:07:11 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Fri, Jan 6, 2023 at 12:07 PM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> > Similar to above three cases, vacuum can bypass index vacuuming if\n> > there are almost zero TIDs. Should we set indexes_total to 0 in this\n> > case too? If so, I think we can set both indexes_total and\n> > indexes_completed at the beginning of the index vacuuming/cleanup and\n> > reset them at the end.\n>\n> Unlike the other 3 cases, in which the vacuum and cleanup are totally skipped,\n> a cleanup still occurs when the index vacuum is bypassed. From what I can tell,\n> this is to allow for things like a gin pending list cleanup even if the index\n> is not vacuumed.\n\nRight. But since we set indexes_total also at the beginning of index\ncleanup (i.e. lazy_cleanup_all_indexes()), can't we still show the\nvalid value in this case? My point is whether we should show\nindexes_total throughout the vacuum execution (even also in not\nrelevant phases such as heap scanning/vacuum/truncation). For example,\nin the analyze progress report, we have child_tables_total and\nchild_tables_done. 
child_tables_total is set before the actual work of\nsampling rows from child tables, but not the beginning of the analyze.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 6 Jan 2023 16:55:54 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> My point is whether we should show\r\n> indexes_total throughout the vacuum execution (even also in not\r\n> relevant phases such as heap scanning/vacuum/truncation).\r\n\r\nThat is a good point. We should only show indexes_total\r\nand indexes_completed only during the relevant phases.\r\n\r\nV21 addresses this along with a documentation fix.\r\n\r\nindexes_total and indexes_completed can only be a value > 0 while in the\r\n\"vacuuming indexes\" or \"cleaning up indexes\" phases of vacuum. \r\n\r\nIndexes_total is set to 0 at the start of the index vacuum cycle or index cleanups\r\nand set back to 0, along with indexes_completed, at the end of the index vacuum\r\ncycle and index cleanups\r\n\r\nAlso, for the progress updates in vacuumlazy.c that should be done atomically,\r\nI made a change to use pgstat_progress_update_multi_param.\r\n\r\nRegards,\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 7 Jan 2023 01:59:40 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Hi,\n\nOn 2023-01-07 01:59:40 +0000, Imseih (AWS), Sami wrote:\n> --- a/src/backend/access/nbtree/nbtree.c\n> +++ b/src/backend/access/nbtree/nbtree.c\n> @@ -998,6 +998,8 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,\n> \t\t\tif (info->report_progress)\n> \t\t\t\tpgstat_progress_update_param(PROGRESS_SCAN_BLOCKS_DONE,\n> \t\t\t\t\t\t\t\t\t\t\t scanblkno);\n> +\t\t\tif 
(info->report_parallel_progress && (scanblkno % REPORT_PARALLEL_VACUUM_EVERY_PAGES) == 0)\n> +\t\t\t\tparallel_vacuum_update_progress(info->parallel_vacuum_state);\n> \t\t}\n> \t}\n\nI still think it's wrong to need multiple pgstat_progress_*() calls within one\nscan. If we really need it, it should be abstracted into a helper function\nthat wrapps all the vacuum progress stuff needed for an index.\n\n\n> @@ -688,7 +703,13 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan\n> \t */\n> \tif (nworkers > 0)\n> \t{\n> -\t\t/* Wait for all vacuum workers to finish */\n> +\t\t/*\n> +\t\t * Wait for all indexes to be vacuumed while\n> +\t\t * updating the parallel vacuum index progress,\n> +\t\t * and then wait for all workers to finish.\n> +\t\t */\n> +\t\tparallel_vacuum_wait_to_finish(pvs);\n> +\n> \t\tWaitForParallelWorkersToFinish(pvs->pcxt);\n> \n> \t\tfor (int i = 0; i < pvs->pcxt->nworkers_launched; i++)\n\nI don't think it's a good idea to have two difference routines that wait for\nworkers to exit. And certainly not when one of them basically just polls in a\nregular interval as parallel_vacuum_wait_to_finish().\n\n\nI think the problem here is that you're basically trying to work around the\nlack of an asynchronous state update mechanism between leader and workers. The\nworkaround is to add a lot of different places that poll whether there has\nbeen any progress. And you're not doing that by integrating with the existing\nmachinery for interrupt processing (i.e. 
CHECK_FOR_INTERRUPTS()), but by\ndeveloping a new mechanism.\n\nI think your best bet would be to integrate with HandleParallelMessages().\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Jan 2023 17:52:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Thanks for the feedback and I apologize for the delay in response.\r\n\r\n> I think the problem here is that you're basically trying to work around the\r\n> lack of an asynchronous state update mechanism between leader and workers. The\r\n> workaround is to add a lot of different places that poll whether there has\r\n> been any progress. And you're not doing that by integrating with the existing\r\n> machinery for interrupt processing (i.e. CHECK_FOR_INTERRUPTS()), but by\r\n> developing a new mechanism.\r\n\r\n> I think your best bet would be to integrate with HandleParallelMessages().\r\n\r\nYou are correct. I have been trying to work around the async nature\r\nof parallel workers performing the index vacuum. As you have pointed out,\r\nintegrating with HandleParallelMessages does appear to be the proper way.\r\nDoing so will also avoid having to do any progress updates in the index AMs.\r\n\r\nIn the attached patch, the parallel workers send a new type of protocol message\r\ntype to the leader called 'P' which signals the leader that it should handle a\r\nprogress update. The leader then performs the progress update by\r\ninvoking a callback set in the ParallelContext. This is done inside HandleParallelMessages.\r\nIn the index vacuum case, the callback is parallel_vacuum_update_progress. 
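To sketch the shape of this flow, here is a self-contained toy model (the struct and function names below are illustrative stand-ins made up for this sketch, not the actual patch or PostgreSQL code; in the patch the callback hangs off ParallelContext and the dispatch happens inside HandleParallelMessages):

```c
#include <assert.h>

/*
 * Toy model of the flow described above: the leader keeps a progress
 * callback in its parallel context, and its message loop invokes that
 * callback whenever a worker sends a payload-free 'P' message.  The
 * Toy* names are made up for illustration only.
 */
typedef void (*progress_callback_type) (void *arg);

typedef struct ToyParallelContext
{
	progress_callback_type progress_callback;
	void	   *progress_callback_arg;
} ToyParallelContext;

typedef struct ToyVacuumState
{
	int			indexes_completed;	/* stand-in for the shared counter */
} ToyVacuumState;

/* Stand-in for the vacuum progress callback: bump the counter. */
static void
toy_update_progress(void *arg)
{
	ToyVacuumState *state = (ToyVacuumState *) arg;

	state->indexes_completed++;	/* the patch reports this via pgstat */
}

/* Leader-side dispatch, analogous to one branch of the message handler. */
static void
toy_handle_message(ToyParallelContext *cxt, char msgtype)
{
	switch (msgtype)
	{
		case 'P':				/* a worker finished an index */
			if (cxt->progress_callback)
				cxt->progress_callback(cxt->progress_callback_arg);
			break;
		default:				/* other message types elided */
			break;
	}
}
```

With three workers each poking the leader once, the callback runs three times and the completed count reaches three; in the actual patch the callback instead updates the counter surfaced through pg_stat_progress_vacuum.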
\r\n\r\nThe new message does not contain a payload, and it's merely used to\r\nsignal the leader that it can invoke a progress update.\r\n\r\nAlso, If the leader performs the index vacuum, it can call parallel_vacuum_update_progress\r\ndirectly inside vacuumparallel.c.\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Thu, 12 Jan 2023 14:02:33 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Thu, Jan 12, 2023 at 11:02 PM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> Thanks for the feedback and I apologize for the delay in response.\n>\n> > I think the problem here is that you're basically trying to work around the\n> > lack of an asynchronous state update mechanism between leader and workers. The\n> > workaround is to add a lot of different places that poll whether there has\n> > been any progress. And you're not doing that by integrating with the existing\n> > machinery for interrupt processing (i.e. CHECK_FOR_INTERRUPTS()), but by\n> > developing a new mechanism.\n>\n> > I think your best bet would be to integrate with HandleParallelMessages().\n>\n> You are correct. I have been trying to work around the async nature\n> of parallel workers performing the index vacuum. As you have pointed out,\n> integrating with HandleParallelMessages does appear to be the proper way.\n> Doing so will also avoid having to do any progress updates in the index AMs.\n\nVery interesting idea. I need to study the parallel query\ninfrastructure more to consider potential downside of this idea but it\nseems okay as far as I researched so far.\n\n> In the attached patch, the parallel workers send a new type of protocol message\n> type to the leader called 'P' which signals the leader that it should handle a\n> progress update. 
The leader then performs the progress update by\n> invoking a callback set in the ParallelContext. This is done inside HandleParallelMessages.\n> In the index vacuum case, the callback is parallel_vacuum_update_progress.\n>\n> The new message does not contain a payload, and it's merely used to\n> signal the leader that it can invoke a progress update.\n\nThank you for updating the patch. Here are some comments for v22 patch:\n\n---\n+ <para>\n+ Number of indexes that will be vacuumed or cleaned up. This\nvalue will be\n+ <literal>0</literal> if the phase is not <literal>vacuuming\nindexes</literal>\n+ or <literal>cleaning up indexes</literal>,\n<literal>INDEX_CLEANUP</literal>\n+ is set to <literal>OFF</literal>, index vacuum is skipped due to very\n+ few dead tuples in the table, or vacuum failsafe is triggered.\n\nI think that if INDEX_CLEANUP is set to OFF or index vacuum is skipped\ndue to failsafe mode, we enter neither vacuum indexes phase nor\ncleanup indexes phase. So probably we can say something like:\n\nNumber of indexes that will be vacuumed or cleaned up. This counter only\nadvances when the phase is vacuuming indexes or cleaning up indexes.\n\n---\n- /* Report that we are now vacuuming indexes */\n- pgstat_progress_update_param(PROGRESS_VACUUM_PHASE,\n-\nPROGRESS_VACUUM_PHASE_VACUUM_INDEX);\n+ /*\n+ * Report that we are now vacuuming indexes\n+ * and the number of indexes to vacuum.\n+ */\n+ progress_start_val[0] = PROGRESS_VACUUM_PHASE_VACUUM_INDEX;\n+ progress_start_val[1] = vacrel->nindexes;\n+ pgstat_progress_update_multi_param(2, progress_start_index,\nprogress_start_val);\n\nAccording to our code style guideline[1], we limit line lengths so\nthat the code is readable in an 80-column window. 
Some comments\nupdated in this patch seem too short.\n\n---\n+ StringInfoData msgbuf;\n+\n+ pq_beginmessage(&msgbuf, 'P');\n+ pq_endmessage(&msgbuf);\n\nI think we can use pq_putmessage() instead.\n\n---\n+/* progress callback definition */\n+typedef void (*ParallelProgressCallback) (void\n*parallel_progress_callback_state);\n\nI think it's better to define \"void *arg\".\n\n---\n+ /*\n+ * A Leader process that receives this message\n+ * must be ready to update progress.\n+ */\n+ Assert(pcxt->parallel_progress_callback);\n+ Assert(pcxt->parallel_progress_callback_arg);\n+\n+ /* Report progress */\n+\npcxt->parallel_progress_callback(pcxt->parallel_progress_callback_arg);\n\nI think the parallel query infra should not require\nparallel_progress_callback_arg to always be set. I think it can be\nNULL.\n\n---\n+void\n+parallel_vacuum_update_progress(void *arg)\n+{\n+ ParallelVacuumState *pvs = (ParallelVacuumState *)arg;\n+\n+ Assert(!IsParallelWorker());\n+\n+ if (pvs)\n+ pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED,\n+\n pg_atomic_add_fetch_u32(&(pvs->shared->nindexes_completed), 1));\n+}\n\nSince parallel vacuum always sets the arg, I think we don't need to check it.\n\nRegards,\n\n[1] https://www.postgresql.org/docs/devel/source-format.html\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 20 Jan 2023 16:55:41 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> Number of indexes that will be vacuumed or cleaned up. 
This counter only\r\n> advances when the phase is vacuuming indexes or cleaning up indexes.\r\n\r\nI agree, this reads better.\r\n\r\n ---\r\n - /* Report that we are now vacuuming indexes */\r\n - pgstat_progress_update_param(PROGRESS_VACUUM_PHASE,\r\n -\r\n PROGRESS_VACUUM_PHASE_VACUUM_INDEX);\r\n + /*\r\n + * Report that we are now vacuuming indexes\r\n + * and the number of indexes to vacuum.\r\n + */\r\n + progress_start_val[0] = PROGRESS_VACUUM_PHASE_VACUUM_INDEX;\r\n + progress_start_val[1] = vacrel->nindexes;\r\n + pgstat_progress_update_multi_param(2, progress_start_index,\r\n progress_start_val);\r\n\r\n> According to our code style guideline[1], we limit line lengths so\r\n> that the code is readable in an 80-column window. Some comments\r\n > updated in this patch seem too short.\r\n\r\nI will correct this.\r\n\r\n> I think it's better to define \"void *arg\".\r\n\r\nAgree\r\n\r\n> ---\r\n> + /*\r\n> + * A Leader process that receives this message\r\n> + * must be ready to update progress.\r\n> + */\r\n> + Assert(pcxt->parallel_progress_callback);\r\n> + Assert(pcxt->parallel_progress_callback_arg);\r\n> +\r\n> + /* Report progress */\r\n> +\r\n> pcxt->parallel_progress_callback(pcxt->parallel_progress_callback_arg);\r\n\r\n> I think the parallel query infra should not require\r\n> parallel_progress_callback_arg to always be set. I think it can be\r\n> NULL.\r\n\r\nThis assertion is inside the new 'P' message type handling.\r\nIf a leader is consuming this message, they must have a\r\nprogress callback set. Right now we only set the callback\r\nin the parallel vacuum case only, so not all leaders will be prepared\r\nto handle this case. 
\r\n\r\nWould you agree this is needed for safety?\r\n\r\n case 'P': /* Parallel progress reporting */\r\n {\r\n /*\r\n * A Leader process that receives this message\r\n * must be ready to update progress.\r\n */\r\n Assert(pcxt->parallel_progress_callback);\r\n Assert(pcxt->parallel_progress_callback_arg);\r\n\r\n ---\r\n> +void\r\n> +parallel_vacuum_update_progress(void *arg)\r\n> +{\r\n> + ParallelVacuumState *pvs = (ParallelVacuumState *)arg;\r\n> +\r\n> + Assert(!IsParallelWorker());\r\n> +\r\n> + if (pvs)\r\n> + pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED,\r\n> +\r\n> pg_atomic_add_fetch_u32(&(pvs->shared->nindexes_completed), 1));\r\n> +}\r\n\r\n> Since parallel vacuum always sets the arg, I think we don't need to check it.\r\n\r\nThe arg is only set during parallel vacuum. During non-parallel vacuum,\r\nIt's NULL. This check can be removed, but I realize now that we do need \r\nan Assert(pvs). Do you agree?\r\n\r\n--\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n", "msg_date": "Fri, 20 Jan 2023 14:38:33 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Fri, Jan 20, 2023 at 11:38 PM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> > Number of indexes that will be vacuumed or cleaned up. 
This counter only\n> > advances when the phase is vacuuming indexes or cleaning up indexes.\n>\n> I agree, this reads better.\n>\n> ---\n> - /* Report that we are now vacuuming indexes */\n> - pgstat_progress_update_param(PROGRESS_VACUUM_PHASE,\n> -\n> PROGRESS_VACUUM_PHASE_VACUUM_INDEX);\n> + /*\n> + * Report that we are now vacuuming indexes\n> + * and the number of indexes to vacuum.\n> + */\n> + progress_start_val[0] = PROGRESS_VACUUM_PHASE_VACUUM_INDEX;\n> + progress_start_val[1] = vacrel->nindexes;\n> + pgstat_progress_update_multi_param(2, progress_start_index,\n> progress_start_val);\n>\n> > According to our code style guideline[1], we limit line lengths so\n> > that the code is readable in an 80-column window. Some comments\n> > updated in this patch seem too short.\n>\n> I will correct this.\n>\n> > I think it's better to define \"void *arg\".\n>\n> Agree\n>\n> > ---\n> > + /*\n> > + * A Leader process that receives this message\n> > + * must be ready to update progress.\n> > + */\n> > + Assert(pcxt->parallel_progress_callback);\n> > + Assert(pcxt->parallel_progress_callback_arg);\n> > +\n> > + /* Report progress */\n> > +\n> > pcxt->parallel_progress_callback(pcxt->parallel_progress_callback_arg);\n>\n> > I think the parallel query infra should not require\n> > parallel_progress_callback_arg to always be set. I think it can be\n> > NULL.\n>\n> This assertion is inside the new 'P' message type handling.\n> If a leader is consuming this message, they must have a\n> progress callback set. Right now we only set the callback\n> in the parallel vacuum case only, so not all leaders will be prepared\n> to handle this case.\n>\n> Would you agree this is needed for safety?\n\nI think it makes sense that we assume pcxt->parallel_progress_callback\nis always not NULL but my point is that in the future one might want\nto use this callback without the argument and we should allow it. 
If\nparallel vacuum assumes pcxt->parallel_progress_callback_arg is not\nNULL, I think we should add an assertion in\nparallel_vacuum_update_progress(), but not in HandleParallelMessage().\n\n>\n> case 'P': /* Parallel progress reporting */\n> {\n> /*\n> * A Leader process that receives this message\n> * must be ready to update progress.\n> */\n> Assert(pcxt->parallel_progress_callback);\n> Assert(pcxt->parallel_progress_callback_arg);\n>\n> ---\n> > +void\n> > +parallel_vacuum_update_progress(void *arg)\n> > +{\n> > + ParallelVacuumState *pvs = (ParallelVacuumState *)arg;\n> > +\n> > + Assert(!IsParallelWorker());\n> > +\n> > + if (pvs)\n> > + pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED,\n> > +\n> > pg_atomic_add_fetch_u32(&(pvs->shared->nindexes_completed), 1));\n> > +}\n>\n> > Since parallel vacuum always sets the arg, I think we don't need to check it.\n>\n> The arg is only set during parallel vacuum. During non-parallel vacuum,\n> It's NULL. This check can be removed, but I realize now that we do need\n> an Assert(pvs). 
Do you agree?\n\nAgreed to add the assertion in this function.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 2 Feb 2023 15:33:04 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Hi,\r\n\r\nThanks for your reply!\r\n\r\nI addressed the latest comments in v23.\r\n\r\n1/ cleaned up the asserts as discussed.\r\n2/ used pq_putmessage to send the message on index scan completion.\r\n\r\nThanks\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Wed, 8 Feb 2023 02:03:11 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Wed, Feb 8, 2023 at 11:03 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> Hi,\n>\n> Thanks for your reply!\n>\n> I addressed the latest comments in v23.\n>\n> 1/ cleaned up the asserts as discussed.\n> 2/ used pq_putmessage to send the message on index scan completion.\n\n Thank you for updating the patch! Here are some comments for v23 patch:\n\n+ <row>\n+ <entry><literal>ParallelVacuumFinish</literal></entry>\n+ <entry>Waiting for parallel vacuum workers to finish index\nvacuum.</entry>\n+ </row>\n\nThis change is out-of-date.\n\n---\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>indexes_total</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ Number of indexes that will be vacuumed or cleaned up. 
This\nvalue will be\n+       <literal>0</literal> if the phase is not <literal>vacuuming\nindexes</literal>\n+       or <literal>cleaning up indexes</literal>,\n<literal>INDEX_CLEANUP</literal>\n+       is set to <literal>OFF</literal>, index vacuum is skipped due to very\n+       few dead tuples in the table, or vacuum failsafe is triggered.\n+       See <xref linkend=\"guc-vacuum-failsafe-age\"/>\n+       for more on vacuum failsafe.\n+      </para></entry>\n+     </row>\n\nThis explanation looks redundant: setting INDEX_CLEANUP to OFF\nessentially means the vacuum doesn't enter the vacuuming indexes\nphase. The same is true for the case of skipping index vacuum and\nfailsafe mode. How about the following?\n\nTotal number of indexes that will be vacuumed or cleaned up. This\nnumber is reported as of the beginning of the vacuuming indexes phase\nor the cleaning up indexes phase.\n\n---\n+     <row>\n+      <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+       <structfield>indexes_completed</structfield> <type>bigint</type>\n+      </para>\n+      <para>\n+       Number of indexes vacuumed in the current vacuum cycle when the\n+       phase is <literal>vacuuming indexes</literal>, or the number\n+       of indexes cleaned up during the <literal>cleaning up indexes</literal>\n+       phase.\n+      </para></entry>\n+     </row>\n\nHow about simplifying the description to something like:\n\nNumber of indexes processed. This counter only advances when the phase\nis vacuuming indexes or cleaning up indexes.\n\nAlso, index_processed sounds accurate to me. 
What do you think?\n\n---\n+ pcxt->parallel_progress_callback = NULL;\n+ pcxt->parallel_progress_callback_arg = NULL;\n\nI think these settings are not necessary since the pcxt is palloc0'ed.\n\n---\n+void\n+parallel_vacuum_update_progress(void *arg)\n+{\n+ ParallelVacuumState *pvs = (ParallelVacuumState *)arg;\n+\n+ Assert(!IsParallelWorker());\n+ Assert(pvs->pcxt->parallel_progress_callback_arg);\n+\n+ if (pvs)\n+ pgstat_progress_update_param(PROGRESS_VACUUM_INDEX_COMPLETED,\n+\n pg_atomic_add_fetch_u32(&(pvs->shared->nindexes_completed), 1));\n+}\n\nAssert(pvs->pcxt->parallel_progress_callback_arg) looks wrong to me.\nIf 'arg' is NULL, a SEGV happens.\n\nI think it's better to update pvs->shared->nindexes_completed by both\nleader and worker processes who processed the index.\n\n---\n+/* progress callback definition */\n+typedef void (*ParallelProgressCallback) (void\n*parallel_progress_callback_state);\n+\n typedef void (*parallel_worker_main_type) (dsm_segment *seg, shm_toc *toc);\n\nI think it's better to make the function type consistent with the\nexisting parallel_worker_main_type. How about\nparallel_progress_callback_type?\n\nI've attached a patch that incorporates the above comments and has\nsome suggestions of updating comments etc.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 16 Feb 2023 17:27:43 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Thanks for the review!\r\n\r\n> + <row>\r\n> + <entry><literal>ParallelVacuumFinish</literal></entry>\r\n> + <entry>Waiting for parallel vacuum workers to finish index\r\n> vacuum.</entry>\r\n> + </row>\r\n\r\n> This change is out-of-date.\r\n\r\nThat was an oversight. Thanks for catching.\r\n\r\n> Total number of indexes that will be vacuumed or cleaned up. 
This\r\n> number is reported as of the beginning of the vacuuming indexes phase\r\n> or the cleaning up indexes phase.\r\n\r\nThis is cleaner. I was being unnecessarily verbose in the original description.\r\n\r\n> Number of indexes processed. This counter only advances when the phase\r\n> is vacuuming indexes or cleaning up indexes.\r\n\r\nI agree.\r\n\r\n> Also, index_processed sounds accurate to me. What do you think?\r\n\r\nAt one point, I used index_processed, but decided to change it. \r\n\"processed\" makes sense also. I will use this.\r\n\r\n> I think these settings are not necessary since the pcxt is palloc0'ed.\r\n\r\nGood point.\r\n\r\n> Assert(pvs->pcxt->parallel_progress_callback_arg) looks wrong to me.\r\n> If 'arg' is NULL, a SEGV happens.\r\n\r\nCorrect, Assert(pvs) is all that is needed.\r\n\r\n> I think it's better to update pvs->shared->nindexes_completed by both\r\n> leader and worker processes who processed the index.\r\n\r\nNo reason for that, since only the leader process can report progress to\r\nbackend_progress.\r\n\r\n\r\n> I think it's better to make the function type consistent with the\r\n> existing parallel_worker_main_type. 
Also, we reset the progress\r\n+ * counters.\r\n\r\n\r\nThanks\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Sat, 18 Feb 2023 02:46:27 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Sat, Feb 18, 2023 at 11:46 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> Thanks for the review!\n>\n> > + <row>\n> > + <entry><literal>ParallelVacuumFinish</literal></entry>\n> > + <entry>Waiting for parallel vacuum workers to finish index\n> > vacuum.</entry>\n> > + </row>\n>\n> > This change is out-of-date.\n>\n> That was an oversight. Thanks for catching.\n>\n> > Total number of indexes that will be vacuumed or cleaned up. This\n> > number is reported as of the beginning of the vacuuming indexes phase\n> > or the cleaning up indexes phase.\n>\n> This is cleaner. I was being unnecessarily verbose in the original description.\n>\n> > Number of indexes processed. This counter only advances when the phase\n> > is vacuuming indexes or cleaning up indexes.\n>\n> I agree.\n>\n> > Also, index_processed sounds accurate to me. What do you think?\n>\n> At one point, II used index_processed, but decided to change it.\n> \"processed\" makes sense also. I will use this.\n>\n> > I think these settings are not necessary since the pcxt is palloc0'ed.\n>\n> Good point.\n>\n> > Assert(pvs->pcxt->parallel_progress_callback_arg) looks wrong to me.\n> > If 'arg' is NULL, a SEGV happens.\n>\n> Correct, Assert(pvs) is all that is needed.\n>\n> > I think it's better to update pvs->shared->nindexes_completed by both\n> > leader and worker processes who processed the index.\n>\n> No reason for that, since only the leader process can report process to\n> backend_progress.\n>\n>\n> > I think it's better to make the function type consistent with the\n> > existing parallel_worker_main_type. 
How about\n> > parallel_progress_callback_type?\n>\n> Yes, that makes sense.\n>\n> > I've attached a patch that incorporates the above comments and has\n> > some suggestions of updating comments etc.\n>\n> I reviewed and incorporated these changes, with a slight change. See v24.\n>\n> - * Increase and report the number of index. Also, we reset the progress\n> - * counters.\n>\n>\n> + * Increase and report the number of index scans. Also, we reset the progress\n> + * counters.\n>\n>\n> Thanks\n\nThanks for updating the patch!\n\n #define PROGRESS_VACUUM_NUM_INDEX_VACUUMS 4\n #define PROGRESS_VACUUM_MAX_DEAD_TUPLES 5\n #define PROGRESS_VACUUM_NUM_DEAD_TUPLES 6\n+#define PROGRESS_VACUUM_INDEX_TOTAL 7\n+#define PROGRESS_VACUUM_INDEX_PROCESSED 8\n\n- s.param7 AS num_dead_tuples\n+ s.param7 AS num_dead_tuples,\n+ s.param8 AS indexes_total,\n+ s.param9 AS indexes_processed\n\nI think PROGRESS_VACUUM_INDEXES_TOTAL and\nPROGRESS_VACUUM_INDEXES_PROCESSED are better for consistency. The rest\nlooks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 20 Feb 2023 16:14:20 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Thanks!\r\n\r\n> I think PROGRESS_VACUUM_INDEXES_TOTAL and\r\n> PROGRESS_VACUUM_INDEXES_PROCESSED are better for consistency. The rest\r\n> looks good to me.\r\n\r\nTook care of that in v25. 
\r\n\r\nRegards\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services", "msg_date": "Mon, 20 Feb 2023 16:47:58 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Tue, Feb 21, 2023 at 1:48 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> Thanks!\n>\n> > I think PROGRESS_VACUUM_INDEXES_TOTAL and\n> > PROGRESS_VACUUM_INDEXES_PROCESSED are better for consistency. The rest\n> > looks good to me.\n>\n> Took care of that in v25.\n\nThanks! It looks good to me so I've marked it as Ready for Committer.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 24 Feb 2023 15:16:10 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Fri, Feb 24, 2023 at 03:16:10PM +0900, Masahiko Sawada wrote:\n> Thanks! It looks good to me so I've marked it as Ready for Committer.\n \n+ case 'P': /* Parallel progress reporting */\n+ {\n+ /* Call the progress reporting callback */\n+ Assert(pcxt->parallel_progress_callback);\n+ pcxt->parallel_progress_callback(pcxt->parallel_progress_callback_arg);\n+\n+ break;\n+ }\n\nThe key point of the patch is here. From what I understand based on\nthe information of the thread, this is used as a way to make the\nprogress reporting done by the leader more responsive so as we'd\nupdate the index counters each time the leader is poked at with a 'P'\nmessage by one of its workers, once a worker is done with the parallel\ncleanup of one of the indexes. That's appealing, because this design\nis responsive and cheap, while we'd rely on CFIs to make sure that the\nleader triggers its callback on a timely basis. Unfortunately,\nsticking a concept of \"Parallel progress reporting\" is rather\nconfusing here? 
This stuff can be used for much more purposes than\njust progress reporting: the leader would execute the callback it has\nregistered based on the timing given by one or more of its workers,\nthese willing to push an event on the leader. Progress reporting is\none application of that to force a refresh and make the information of\nthe leader accurate. What about things like a chain of callbacks, for\nexample? Could the leader want to register more than one callback and\nact on all of them with one single P message?\n\nAnother question I have: could the reporting of each individual worker\nmake sense on its own? The cleanup of the indexes depends on the\norder they are processed, their number, size and AM with their cleanup\nstrategy, still there may be a point in getting information about how\nmuch work a single worker is doing rather than just have the\naggregated information given to the leader?\n\nBtw, Is an assertion really helpful here? If\nparallel_progress_callback is not set, we'd just crash one line\nafter. It seems to me that it could be cleaner to do nothing if a\nleader gets a poke message from a worker if there are no callbacks\nregistered.\n--\nMichael", "msg_date": "Wed, 5 Apr 2023 16:47:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Wed, Apr 5, 2023 at 4:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Feb 24, 2023 at 03:16:10PM +0900, Masahiko Sawada wrote:\n> > Thanks! It looks good to me so I've marked it as Ready for Committer.\n>\n> + case 'P': /* Parallel progress reporting */\n> + {\n> + /* Call the progress reporting callback */\n> + Assert(pcxt->parallel_progress_callback);\n> + pcxt->parallel_progress_callback(pcxt->parallel_progress_callback_arg);\n> +\n> + break;\n> + }\n>\n> The key point of the patch is here. 
From what I understand based on\n> the information of the thread, this is used as a way to make the\n> progress reporting done by the leader more responsive so as we'd\n> update the index counters each time the leader is poked at with a 'P'\n> message by one of its workers, once a worker is done with the parallel\n> cleanup of one of the indexes. That's appealing, because this design\n> is responsive and cheap, while we'd rely on CFIs to make sure that the\n> leader triggers its callback on a timely basis. Unfortunately,\n> sticking a concept of \"Parallel progress reporting\" is rather\n> confusing here? This stuff can be used for much more purposes than\n> just progress reporting: the leader would execute the callback it has\n> registered based on the timing given by one or more of its workers,\n> these willing to push an event on the leader. Progress reporting is\n> one application of that to force a refresh and make the information of\n> the leader accurate. What about things like a chain of callbacks, for\n> example? Could the leader want to register more than one callback and\n> act on all of them with one single P message?\n\nThat seems a valid argument. I was thinking that such an asynchronous\nstate update mechanism would be a good infrastructure for progress\nreporting of parallel operations. It might be worth considering to use\nit in more general usage but since the current implementation is\nminimal can we extend it in the future when we need it for other use\ncases?\n\n>\n> Another question I have: could the reporting of each individual worker\n> make sense on its own? 
The cleanup of the indexes depends on the\n> order they are processed, their number, size and AM with their cleanup\n> strategy, still there may be a point in getting information about how\n> much work a single worker is doing rather than just have the\n> aggregated information given to the leader?\n\nIt would also be useful information for users but I don't think it can\nalternate the aggregated information. The aggregated information can\nanswer the question from the user like \"how many indexes to vacuum are\nremaining?\", which helps estimate the remaining time to complete.\n\n>\n> Btw, Is an assertion really helpful here? If\n> parallel_progress_callback is not set, we'd just crash one line\n> after. It seems to me that it could be cleaner to do nothing if a\n> leader gets a poke message from a worker if there are no callbacks\n> registered.\n\nAgreed.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 5 Apr 2023 17:21:02 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> > The key point of the patch is here. From what I understand based on\r\n> > the information of the thread, this is used as a way to make the\r\n> progress reporting done by the leader more responsive so as we'd\r\n> > update the index counters each time the leader is poked at with a 'P'\r\n> > message by one of its workers, once a worker is done with the parallel\r\n> > cleanup of one of the indexes. That's appealing, because this design\r\n> > is responsive and cheap, while we'd rely on CFIs to make sure that the\r\n> > leader triggers its callback on a timely basis. Unfortunately,\r\n> > sticking a concept of \"Parallel progress reporting\" is rather\r\n> > confusing here? 
This stuff can be used for much more purposes than\r\n> > just progress reporting: the leader would execute the callback it has\r\n> > registered based on the timing given by one or more of its workers,\r\n> > these willing to push an event on the leader. Progress reporting is\r\n> > one application of that to force a refresh and make the information of\r\n> > the leader accurate. What about things like a chain of callbacks, for\r\n> > example? Could the leader want to register more than one callback and\r\n> > act on all of them with one single P message?\r\n\r\n\r\n> That seems a valid argument. I was thinking that such an asynchronous\r\n> state update mechanism would be a good infrastructure for progress\r\n> reporting of parallel operations. It might be worth considering to use\r\n> it in more general usage but since the current implementation is\r\n> minimal can we extend it in the future when we need it for other use\r\n> cases?\r\n\r\nI don't think we should delay this patch to design a more general\r\ninfrastructure. I agree this can be handled by a future requirement.\r\n\r\n\r\n> >\r\n> > Another question I have: could the reporting of each individual worker\r\n> > make sense on its own? The cleanup of the indexes depends on the\r\n> > order they are processed, their number, size and AM with their cleanup\r\n> > strategy, still there may be a point in getting information about how\r\n> > much work a single worker is doing rather than just have the\r\n> > aggregated information given to the leader?\r\n\r\n\r\n> It would also be useful information for users but I don't think it can\r\n> alternate the aggregated information. The aggregated information can\r\n> answer the question from the user like \"how many indexes to vacuum are\r\n> remaining?\", which helps estimate the remaining time to complete.\r\n\r\nThe original intention of the thread was to expose stats for both \r\naggregate (leader level) and individual index progress. 
Both the aggregate\r\nand individual indexes information have benefit as mentioned by Sawada-San.\r\n\r\nFor the individual index progress, a patch was suggested earlier in\r\nthe thread, v1-0001-Function-to-return-currently-vacuumed-or-cleaned-ind.patch.\r\n\r\nHowever, since this particular thread has focused mainly on the aggregated stats work,\r\nmy thoughts have been to start a new thread for the individual index progress\r\nonce this gets committed.\r\n\r\n\r\n> > Btw, Is an assertion really helpful here? If\r\n> > parallel_progress_callback is not set, we'd just crash one line\r\n> > after. It seems to me that it could be cleaner to do nothing if a\r\n> > leader gets a poke message from a worker if there are no callbacks\r\n> > registered.\r\n\r\n> Agreed.\r\n\r\nI removed the assert and added an if condition instead.\r\n\r\nSee the attached v26 please.\r\n\r\nRegards,\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Wed, 5 Apr 2023 14:31:54 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Wed, Apr 05, 2023 at 02:31:54PM +0000, Imseih (AWS), Sami wrote:\n>> That seems a valid argument. I was thinking that such an asynchronous\n>> state update mechanism would be a good infrastructure for progress\n>> reporting of parallel operations. It might be worth considering to use\n>> it in more general usage but since the current implementation is\n>> minimal can we extend it in the future when we need it for other use\n>> cases?\n> \n> I don't think we should delay this patch to design a more general\n> infrastructure. I agree this can be handled by a future requirement.\n\nNot so sure to agree on that. 
As the patch stands, we have a rather\ngenerally-purposed new message type and facility (callback trigger\npoke from workers to their leader) used for a not-so-general purpose,\nwhile being documented under this not-so-general purpose, which is\nprogress reporting. I agree that relying on pqmq.c to force the\nleader to be more sensible to refreshes is sensible, because it is\ncheap, but the interface seems kind of misplaced to me. As one thing,\nfor example, it introduces a dependency to parallel.h to do progress\nreporting without touching at backend_progress.h. Is a callback\napproach combined with a counter in shared memory the best thing there\ncould be? Could it be worth thinking about a different design where\nthe value incremented and the parameters of\npgstat_progress_update_param() are passed through the 'P' message\ninstead? It strikes me that gathering data in the leader from a poke\nof the workers is something that could be useful in so much more areas\nthan just the parallel index operations done in a vacuum because we do\nmore and more things in parallel these days, so the API interface\nought to have some attention.\n\nAs some say, the introduction of a new message type in pqmq.c would be\nbasically a one-way door, because we'd have to maintain it in a stable\nbranch. I would not take that lightly. One idea of interface that\ncould be used is an extra set of APIs for workers to do progress\nreporting, part of backend_progress.h, where we use a pqmq message\ntype in a centralized location, say something like a\npgstat_progress_parallel_incr_param().\n\nAbout the callback interface, we may also want to be more careful\nabout more things, like the handling of callback chains, or even\nunregistrations of callbacks? 
There could be much more uses to that\nthan just progress reporting, though this comes to a balance of what\nthe leader needs to do before the workers are able to poke at it on a\nperiodic basis to make the refresh of the aggregated progress\nreporting data more verbose. There is also an argument where we could\nhave each worker report their progress independently of the leader?\n--\nMichael", "msg_date": "Thu, 6 Apr 2023 12:28:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> As one thing,\r\n> for example, it introduces a dependency to parallel.h to do progress\r\n> reporting without touching at backend_progress.h. \r\n\r\nContaining the logic in backend_progress.h is a reasonable point\r\nfrom a maintenance standpoint.\r\n\r\nWe can create a new function in backend_progress.h called \r\npgstat_progress_update_leader which is called from\r\nvacuumparallel.c. \r\n\r\npgstat_progress_update_leader can then call pq_putmessage('P', NULL, 0)\r\n\r\n> Is a callback\r\n> approach combined with a counter in shared memory the best thing there\r\n> could be? 
\r\n\r\nIt seems to be the best way.\r\n\r\nThe shared memory, ParallelVacuumState, is already tracking the\r\ncounters for the Parallel Vacuum.\r\n\r\nAlso, the callback in ParallelContext is the only way I can see\r\nto let the 'P' message know what to do for updating progress\r\nto the leader.\r\n\r\n\r\n> Could it be worth thinking about a different design where\r\n> the value incremented and the parameters of\r\n> pgstat_progress_update_param() are passed through the 'P' message\r\n> instead?\r\n\r\nI am not sure how this is different than the approach suggested.\r\nIn the current design, the 'P' message is used to pass the\r\nParallelvacuumState to parallel_vacuum_update_progress which then\r\ncalls pgstat_progress_update_param.\r\n\r\n\r\n> It strikes me that gathering data in the leader from a poke\r\n> of the workers is something that could be useful in so much more areas\r\n> than just the parallel index operations done in a vacuum because we do\r\n> more and more things in parallel these days, so the API interface\r\n> ought to have some attention.\r\n\r\nWe may need an interface that does more than progress\r\nreporting, but I am not sure what those use cases are at\r\nthis point, besides progress reporting.\r\n\r\n\r\n> There is also an argument where we could\r\n> have each worker report their progress independently of the leader?\r\n\r\nIn this case, we don't need ParallelContext at all or to go through the\r\n'P' message.\r\n\r\n\r\n--\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n", "msg_date": "Thu, 6 Apr 2023 15:14:20 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Thu, Apr 06, 2023 at 03:14:20PM +0000, Imseih (AWS), Sami wrote:\n>> Could it be worth thinking about a different design where\n>> the value incremented and the parameters of\n>> pgstat_progress_update_param() are passed through 
the 'P' message\n>> instead?\n> \n> I am not sure how this is different than the approach suggested.\n> In the current design, the 'P' message is used to pass the\n> ParallelvacuumState to parallel_vacuum_update_progress which then\n> calls pgstat_progress_update_param.\n\nThe arguments of pgstat_progress_update_param() would be given by the\nworker directly as components of the 'P' message. It seems to me that\nthis approach would have the simplicity to not require the setup of a\nshmem area for the extra counters, and there would be no need for a\ncallback. Hence, the only thing the code paths of workers would need\nto do is to call this routine, then the leaders would increment their\nprogress when they see a CFI to process the 'P' message. Also, I\nguess that we would only need an interface in backend_progress.c to\nincrement counters, like pgstat_progress_incr_param(), but usable by\nworkers. Like a pgstat_progress_worker_incr_param()?\n--\nMichael", "msg_date": "Fri, 7 Apr 2023 19:15:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Hi,\n\nOn 2023-04-06 12:28:04 +0900, Michael Paquier wrote:\n> As some say, the introduction of a new message type in pqmq.c would be\n> basically a one-way door, because we'd have to maintain it in a stable\n> branch.\n\nWhy would it mean that? Parallel workers are updated together with the leader,\nso there's no compatibility issue?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Apr 2023 12:01:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> The arguments of pgstat_progress_update_param() would be given by the\r\n> worker directly as components of the 'P' message. 
It seems to me that\r\n> this approach would have the simplicity to not require the setup of a\r\n> shmem area for the extra counters, and there would be no need for a\r\n> callback. Hence, the only thing the code paths of workers would need\r\n> to do is to call this routine, then the leaders would increment their\r\n> progress when they see a CFI to process the 'P' message. Also, I\r\n> guess that we would only need an interface in backend_progress.c to\r\n> increment counters, like pgstat_progress_incr_param(), but usable by\r\n> workers. Like a pgstat_progress_worker_incr_param()?\r\n\r\nSo, here is what I think should be workable to give a generic\r\nprogress interface.\r\n\r\npgstat_progress_parallel_incr_param will be a new API that\r\ncan be called by either worker or leader from any parallel\r\ncode path that chooses to increment a progress index. \r\n\r\nIf called by a worker, it will send a 'P' message to the front end\r\npassing both the progress index, i.e. PROGRESS_VACUUM_INDEXES_PROCESSED\r\nAnd the value to increment by, i.e. 1 for index vacuum progress.\r\n\r\nWith that, the additional shared memory counters in ParallelVacuumState\r\nare not needed, and the poke of the worker to the leader goes directly\r\nthrough a generic backend_progress API.\r\n\r\nLet me know your thoughts.\r\n\r\nThanks!\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Fri, 7 Apr 2023 19:27:17 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Fri, Apr 07, 2023 at 07:27:17PM +0000, Imseih (AWS), Sami wrote:\n> If called by a worker, it will send a 'P' message to the front end\n> passing both the progress index, i.e. PROGRESS_VACUUM_INDEXES_PROCESSED\n> And the value to increment by, i.e. 
1 for index vacuum progress.\n> \n> With that, the additional shared memory counters in ParallelVacuumState\n> are not needed, and the poke of the worker to the leader goes directly\n> through a generic backend_progress API.\n\nThanks for the new version. This has unfortunately not been able to\nmake the cut for v16, but let's see it done at the beginning of the\nv17 cycle.\n\n+void\n+pgstat_progress_parallel_incr_param(int index, int64 incr)\n+{\n+ /*\n+ * Parallel workers notify a leader through a 'P'\n+ * protocol message to update progress, passing the\n+ * progress index and increment value. Leaders can\n+ * just call pgstat_progress_incr_param directly.\n+ */\n+ if (IsParallelWorker())\n+ {\n+ static StringInfoData progress_message;\n+\n+ initStringInfo(&progress_message);\n+\n+ pq_beginmessage(&progress_message, 'P');\n+ pq_sendint32(&progress_message, index);\n+ pq_sendint64(&progress_message, incr);\n+ pq_endmessage(&progress_message);\n+ }\n+ else\n+ pgstat_progress_incr_param(index, incr);\n+}\n\nI see. You need to handle both the leader and worker case because\nparallel_vacuum_process_one_index() can be called by either of them.\n\n+ case 'P': /* Parallel progress reporting */\n\nPerhaps this comment should say that this is only about incremental\nprogress reporting, for the moment.\n\n+ * Increase and report the number of index scans. Also, we reset the progress\n+ * counters.\n\nThe counters reset are the two index counts, perhaps this comment\nshould mention this fact.\n\n+ /* update progress */\n+ int index = pq_getmsgint(msg, 4);\n+ int incr = pq_getmsgint(msg, 1);\n[...]\n+ pq_beginmessage(&progress_message, 'P');\n+ pq_sendint32(&progress_message, index);\n+ pq_sendint64(&progress_message, incr);\n+ pq_endmessage(&progress_message);\n\nIt seems to me that the receiver side is missing one pq_getmsgend()?\nincr is defined and sent as an int64 on the sender side, hence the\nreceiver should use pq_getmsgint64(), no? 
pq_getmsgint(msg, 1) means\nto receive only one byte, see pqformat.c. And the order is reversed?\n\nThere may be a case in the future about making 'P' more complicated\nwith more arguments, but what you have here should be sufficient for\nyour use-case. Were there plans to increment more data for some\ndifferent and/or new progress indexes in the VACUUM path, by the way?\nMost of that looked a bit tricky to me as this was AM-dependent, but I\nmay have missed something.\n--\nMichael", "msg_date": "Mon, 10 Apr 2023 07:32:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Fri, Apr 07, 2023 at 12:01:17PM -0700, Andres Freund wrote:\n> Why would it mean that? Parallel workers are updated together with the leader,\n> so there's no compatibility issue?\n\nMy point is that the callback system would still need to be maintained\nin a stable branch, and, while useful, it could be used for much more\nthan it is originally written. I guess that this could be used in\ncustom nodes with their own custom parallel nodes.\n--\nMichael", "msg_date": "Mon, 10 Apr 2023 08:14:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> + case 'P': /* Parallel progress reporting */\r\n\r\nI kept this comment as-is but inside case code block I added \r\nmore comments. This is to avoid cluttering up the one-liner comment.\r\n\r\n> + * Increase and report the number of index scans. 
Also, we reset the progress\r\n> + * counters.\r\n\r\n\r\n> The counters reset are the two index counts, perhaps this comment\r\n> should mention this fact.\r\n\r\nYes, since we are using the multi_param API here, it makes sense to \r\nmention the progress fields being reset in the comments.\r\n\r\n\r\n+ /* update progress */\r\n+ int index = pq_getmsgint(msg, 4);\r\n+ int incr = pq_getmsgint(msg, 1);\r\n[...]\r\n+ pq_beginmessage(&progress_message, 'P');\r\n+ pq_sendint32(&progress_message, index);\r\n+ pq_sendint64(&progress_message, incr);\r\n+ pq_endmessage(&progress_message);\r\n\r\n\r\n> It seems to me that the receiver side is missing one pq_getmsgend()?\r\nYes. I added this.\r\n\r\n> incr is defined and sent as an int64 on the sender side, hence the\r\n> receiver should use pq_getmsgint64(), no? pq_getmsgint(msg, 1) means\r\n> to receive only one byte, see pqformat.c. \r\n\r\nAh correct, incr is an int64 so what we need is.\r\n\r\nint64 incr = pq_getmsgint64(msg);\r\n\r\nI also added the pq_getmsgend call.\r\n\r\n\r\n> And the order is reversed?\r\n\r\nI don't think so. The index then incr are sent and they are\r\nback in the same order. Testing the patch shows the value\r\nincrements correctly.\r\n\r\n\r\nSee v28 addressing the comments.\r\n\r\nRegards,\r\n\r\nSami Imseih \r\nAWS (Amazon Web Services)", "msg_date": "Mon, 10 Apr 2023 19:20:42 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "Hi,\n\nOn 2023-04-10 08:14:18 +0900, Michael Paquier wrote:\n> On Fri, Apr 07, 2023 at 12:01:17PM -0700, Andres Freund wrote:\n> > Why would it mean that? Parallel workers are updated together with the leader,\n> > so there's no compatibility issue?\n> \n> My point is that the callback system would still need to be maintained\n> in a stable branch, and, while useful, it could be used for much more\n> than it is originally written. 
I guess that this could be used in\n> custom nodes with their own custom parallel nodes.\n\nHm, I'm somewhat doubtful that that's something we should encourage. And\ndoubtful we'd get it right without a concrete use case at hand to verify the\ndesign.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Apr 2023 15:34:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Mon, Apr 10, 2023 at 07:20:42PM +0000, Imseih (AWS), Sami wrote:\n> See v28 addressing the comments.\n\nThis should be OK (also checked the code paths where the reports are\nadded). Note that the patch needed a few adjustments for its\nindentation.\n--\nMichael", "msg_date": "Wed, 12 Apr 2023 13:46:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "> This should be OK (also checked the code paths where the reports are\r\n> added). Note that the patch needed a few adjustments for its\r\n> indentation.\r\n\r\nThanks for the formatting corrections! This looks good to me.\r\n\r\n--\r\nSami\r\n\r\n\r\n\r\n", "msg_date": "Wed, 12 Apr 2023 12:22:34 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Wed, Apr 12, 2023 at 9:22 PM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n>\n> > This should be OK (also checked the code paths where the reports are\n> > added). Note that the patch needed a few adjustments for its\n> > indentation.\n>\n> Thanks for the formatting corrections! This looks good to me.\n\nThank you for updating the patch. It looks good to me too. 
I've\nupdated the commit message.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 6 Jul 2023 11:07:14 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Thu, Jul 06, 2023 at 11:07:14AM +0900, Masahiko Sawada wrote:\n> Thank you for updating the patch. It looks good to me too. I've\n> updated the commit message.\n\nThanks. I was planning to review this patch today and/or tomorrow now\nthat my stack of things to do is getting slightly lower (registered my\nname as committer as well a few weeks ago to not format). \n\nOne thing I was planning to do is to move the new message processing\nAPI for the incrementational updates in its own commit for clarity, as\nthat's a separate concept than the actual feature, useful on its own.\n\nAnyway, would you prefer taking care of it?\n--\nMichael", "msg_date": "Thu, 6 Jul 2023 11:14:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Thu, Jul 6, 2023 at 11:15 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jul 06, 2023 at 11:07:14AM +0900, Masahiko Sawada wrote:\n> > Thank you for updating the patch. It looks good to me too. I've\n> > updated the commit message.\n>\n> Thanks. I was planning to review this patch today and/or tomorrow now\n> that my stack of things to do is getting slightly lower (registered my\n> name as committer as well a few weeks ago to not format).\n>\n> One thing I was planning to do is to move the new message processing\n> API for the incrementational updates in its own commit for clarity, as\n> that's a separate concept than the actual feature, useful on its own.\n\n+1. I had the same idea. 
Please find the attached patches.\n\n> Anyway, would you prefer taking care of it?\n\nI can take care of it if you're okay.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 6 Jul 2023 14:28:57 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" }, { "msg_contents": "On Thu, Jul 6, 2023 at 2:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jul 6, 2023 at 11:15 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, Jul 06, 2023 at 11:07:14AM +0900, Masahiko Sawada wrote:\n> > > Thank you for updating the patch. It looks good to me too. I've\n> > > updated the commit message.\n> >\n> > Thanks. I was planning to review this patch today and/or tomorrow now\n> > that my stack of things to do is getting slightly lower (registered my\n> > name as committer as well a few weeks ago to not format).\n> >\n> > One thing I was planning to do is to move the new message processing\n> > API for the incrementational updates in its own commit for clarity, as\n> > that's a separate concept than the actual feature, useful on its own.\n>\n> +1. I had the same idea. Please find the attached patches.\n>\n> > Anyway, would you prefer taking care of it?\n>\n> I can take care of it if you're okay.\n>\n\nPushed.\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 11 Jul 2023 14:54:26 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" } ]
[ { "msg_contents": "Hi hackers,\r\n\r\nThanks to 61752af, SyncDataDirectory() can make use of syncfs() to\r\navoid individually syncing all database files after a crash. However,\r\nas noted earlier this year [0], there are still a number of O(n) tasks\r\nthat affect startup and checkpointing that I'd like to improve.\r\nBelow, I've attempted to summarize each task and to offer ideas for\r\nimproving matters. I'll likely split each of these into its own\r\nthread, given there is community interest for such changes.\r\n\r\n1) CheckPointSnapBuild(): This function loops through\r\n pg_logical/snapshots to remove all snapshots that are no longer\r\n needed. If there are many entries in this directory, this can take\r\n a long time. The note above this function indicates that this is\r\n done during checkpoints simply because it is convenient. IIUC\r\n there is no requirement that this function actually completes for a\r\n given checkpoint. My current idea is to move this to a new\r\n maintenance worker.\r\n2) CheckPointLogicalRewriteHeap(): This function loops through\r\n pg_logical/mappings to remove old mappings and flush all remaining\r\n ones. IIUC there is no requirement that the \"remove old mappings\"\r\n part must complete for a given checkpoint, but the \"flush all\r\n remaining\" portion allows replay after a checkpoint to only \"deal\r\n with the parts of a mapping that have been written out after the\r\n checkpoint started.\" Therefore, I think we should move the \"remove\r\n old mappings\" part to a new maintenance worker (probably the same\r\n one as for 1), and we should consider using syncfs() for the \"flush\r\n all remaining\" part. (I suspect the main argument against the\r\n latter will be that it could cause IO spikes.)\r\n3) RemovePgTempFiles(): This step can delay startup if there are many\r\n temporary files to individually remove. This step is already\r\n optionally done after a crash via the remove_temp_files_after_crash\r\n GUC. 
I propose that we have startup move the temporary file\r\n directories aside and create new ones, and then a separate worker\r\n (probably the same one from 1 and 2) could clean up the old files.\r\n4) StartupReorderBuffer(): This step deletes logical slot data that\r\n has been spilled to disk. This code appears to be written to avoid\r\n deleting different types of files in these directories, but AFAICT\r\n there shouldn't be any other files. Therefore, I think we could do\r\n something similar to 3 (i.e., move the directories aside during\r\n startup and clean them up via a new maintenance worker).\r\n\r\nI realize adding a new maintenance worker might be a bit heavy-handed,\r\nbut I think it would be nice to have somewhere to offload tasks that\r\nreally shouldn't impact startup and checkpointing. I imagine such a\r\nprocess would come in handy down the road, too. WDYT?\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/32B59582-AA6C-4609-B08F-2256A271F7A5%40amazon.com\r\n\r\n", "msg_date": "Wed, 1 Dec 2021 20:24:25 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "+1 to the idea. I don't see a reason why checkpointer has to do all of\nthat. Keeping checkpoint to minimal essential work helps servers recover\nfaster in the event of a crash.\n\nRemoveOldXlogFiles is also an O(N) operation that can at least be avoided\nduring the end of recovery (CHECKPOINT_END_OF_RECOVERY) checkpoint. When a\nsufficient number of WAL files accumulated and the previous checkpoint did\nnot get a chance to cleanup, this can increase the unavailability of the\nserver.\n\n RemoveOldXlogFiles(_logSegNo, RedoRecPtr, recptr);\n\n\n\nOn Wed, Dec 1, 2021 at 12:24 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n\n> Hi hackers,\n>\n> Thanks to 61752af, SyncDataDirectory() can make use of syncfs() to\n> avoid individually syncing all database files after a crash. 
However,\n> as noted earlier this year [0], there are still a number of O(n) tasks\n> that affect startup and checkpointing that I'd like to improve.\n> Below, I've attempted to summarize each task and to offer ideas for\n> improving matters. I'll likely split each of these into its own\n> thread, given there is community interest for such changes.\n>\n> 1) CheckPointSnapBuild(): This function loops through\n> pg_logical/snapshots to remove all snapshots that are no longer\n> needed. If there are many entries in this directory, this can take\n> a long time. The note above this function indicates that this is\n> done during checkpoints simply because it is convenient. IIUC\n> there is no requirement that this function actually completes for a\n> given checkpoint. My current idea is to move this to a new\n> maintenance worker.\n> 2) CheckPointLogicalRewriteHeap(): This function loops through\n> pg_logical/mappings to remove old mappings and flush all remaining\n> ones. IIUC there is no requirement that the \"remove old mappings\"\n> part must complete for a given checkpoint, but the \"flush all\n> remaining\" portion allows replay after a checkpoint to only \"deal\n> with the parts of a mapping that have been written out after the\n> checkpoint started.\" Therefore, I think we should move the \"remove\n> old mappings\" part to a new maintenance worker (probably the same\n> one as for 1), and we should consider using syncfs() for the \"flush\n> all remaining\" part. (I suspect the main argument against the\n> latter will be that it could cause IO spikes.)\n> 3) RemovePgTempFiles(): This step can delay startup if there are many\n> temporary files to individually remove. This step is already\n> optionally done after a crash via the remove_temp_files_after_crash\n> GUC. 
I propose that we have startup move the temporary file\n> directories aside and create new ones, and then a separate worker\n> (probably the same one from 1 and 2) could clean up the old files.\n> 4) StartupReorderBuffer(): This step deletes logical slot data that\n> has been spilled to disk. This code appears to be written to avoid\n> deleting different types of files in these directories, but AFAICT\n> there shouldn't be any other files. Therefore, I think we could do\n> something similar to 3 (i.e., move the directories aside during\n> startup and clean them up via a new maintenance worker).\n>\n> I realize adding a new maintenance worker might be a bit heavy-handed,\n> but I think it would be nice to have somewhere to offload tasks that\n> really shouldn't impact startup and checkpointing. I imagine such a\n> process would come in handy down the road, too. WDYT?\n>\n> Nathan\n>\n> [0] https://postgr.es/m/32B59582-AA6C-4609-B08F-2256A271F7A5%40amazon.com\n>\n>", "msg_date": "Wed, 1 Dec 2021 13:35:00 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Hi,\n\nOn 2021-12-01 20:24:25 +0000, Bossart, Nathan wrote:\n> I realize adding a new maintenance worker might be a bit heavy-handed,\n> but I think it would be nice to have somewhere to offload tasks that\n> really shouldn't impact startup and checkpointing. I imagine such a\n> process would come in handy down the road, too. WDYT?\n\n-1. I think the overhead of an additional worker is disproportional here. 
And\nthere's simplicity benefits in having a predictable cleanup interlock as well.\n\nI think particularly for the snapshot stuff it'd be better to optimize away\nunnecessary snapshot files, rather than making the cleanup more asynchronous.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Dec 2021 14:56:32 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On 12/1/21, 2:56 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n> On 2021-12-01 20:24:25 +0000, Bossart, Nathan wrote:\r\n>> I realize adding a new maintenance worker might be a bit heavy-handed,\r\n>> but I think it would be nice to have somewhere to offload tasks that\r\n>> really shouldn't impact startup and checkpointing. I imagine such a\r\n>> process would come in handy down the road, too. WDYT?\r\n>\r\n> -1. I think the overhead of an additional worker is disproportional here. And\r\n> there's simplicity benefits in having a predictable cleanup interlock as well.\r\n\r\nAnother idea I had was to put some upper limit on how much time is\r\nspent on such tasks. For example, a checkpoint would only spend X\r\nminutes on CheckPointSnapBuild() before giving up until the next one.\r\nI think the main downside of that approach is that it could lead to\r\nunbounded growth, so perhaps we would limit (or even skip) such tasks\r\nonly for end-of-recovery and shutdown checkpoints. Perhaps the\r\nstartup tasks could be limited in a similar fashion.\r\n\r\n> I think particularly for the snapshot stuff it'd be better to optimize away\r\n> unnecessary snapshot files, rather than making the cleanup more asynchronous.\r\n\r\nI can look into this. 
Any pointers would be much appreciated.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 2 Dec 2021 00:19:19 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Wed, Dec 1, 2021, at 9:19 PM, Bossart, Nathan wrote:\n> On 12/1/21, 2:56 PM, \"Andres Freund\" <andres@anarazel.de> wrote:\n> > On 2021-12-01 20:24:25 +0000, Bossart, Nathan wrote:\n> >> I realize adding a new maintenance worker might be a bit heavy-handed,\n> >> but I think it would be nice to have somewhere to offload tasks that\n> >> really shouldn't impact startup and checkpointing. I imagine such a\n> >> process would come in handy down the road, too. WDYT?\n> >\n> > -1. I think the overhead of an additional worker is disproportional here. And\n> > there's simplicity benefits in having a predictable cleanup interlock as well.\n> \n> Another idea I had was to put some upper limit on how much time is\n> spent on such tasks. For example, a checkpoint would only spend X\n> minutes on CheckPointSnapBuild() before giving up until the next one.\n> I think the main downside of that approach is that it could lead to\n> unbounded growth, so perhaps we would limit (or even skip) such tasks\n> only for end-of-recovery and shutdown checkpoints. Perhaps the\n> startup tasks could be limited in a similar fashion.\nSaying that a certain task is O(n) doesn't mean it needs a separate process to\nhandle it. Did you have a use case or even better numbers (% of checkpoint /\nstartup time) that makes your proposal worthwhile?\n\nI would try to optimize (1) and (2). 
However, delayed removal can be a\nlong-term issue if the new routine cannot keep up with the pace of file\ncreation (specially if the checkpoints are far apart).\n\nFor (3), there is already a GUC that would avoid the slowdown during startup.\nUse it if you think the startup time is more important that disk space occupied\nby useless files.\n\nFor (4), you are forgetting that the on-disk state of replication slots is\nstored in the pg_replslot/SLOTNAME/state. It seems you cannot just rename the\nreplication slot directory and copy the state file. What happen if there is a\ncrash before copying the state file?\n\nWhile we are talking about items (1), (2) and (4), we could probably have an\noption to create some ephemeral logical decoding files into ramdisk (similar to\nstatistics directory). I wouldn't like to hijack this thread but this proposal\ncould alleviate the possible issues that you pointed out. If people are\ninterested in this proposal, I can start a new thread about it.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Wed, 01 Dec 2021 23:05:03 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Thu, Dec 2, 2021 at 1:54 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> Hi hackers,\n>\n> Thanks to 61752af, SyncDataDirectory() can make use of syncfs() to\n> avoid individually syncing all database files after a crash. However,\n> as noted earlier this year [0], there are still a number of O(n) tasks\n> that affect startup and checkpointing that I'd like to improve.\n> Below, I've attempted to summarize each task and to offer ideas for\n> improving matters. I'll likely split each of these into its own\n> thread, given there is community interest for such changes.\n>\n> 1) CheckPointSnapBuild(): This function loops through\n> pg_logical/snapshots to remove all snapshots that are no longer\n> needed. If there are many entries in this directory, this can take\n> a long time. The note above this function indicates that this is\n> done during checkpoints simply because it is convenient. IIUC\n> there is no requirement that this function actually completes for a\n> given checkpoint. My current idea is to move this to a new\n> maintenance worker.\n> 2) CheckPointLogicalRewriteHeap(): This function loops through\n> pg_logical/mappings to remove old mappings and flush all remaining\n> ones. 
IIUC there is no requirement that the \"remove old mappings\"\n> part must complete for a given checkpoint, but the \"flush all\n> remaining\" portion allows replay after a checkpoint to only \"deal\n> with the parts of a mapping that have been written out after the\n> checkpoint started.\" Therefore, I think we should move the \"remove\n> old mappings\" part to a new maintenance worker (probably the same\n> one as for 1), and we should consider using syncfs() for the \"flush\n> all remaining\" part. (I suspect the main argument against the\n> latter will be that it could cause IO spikes.)\n> 3) RemovePgTempFiles(): This step can delay startup if there are many\n> temporary files to individually remove. This step is already\n> optionally done after a crash via the remove_temp_files_after_crash\n> GUC. I propose that we have startup move the temporary file\n> directories aside and create new ones, and then a separate worker\n> (probably the same one from 1 and 2) could clean up the old files.\n> 4) StartupReorderBuffer(): This step deletes logical slot data that\n> has been spilled to disk. This code appears to be written to avoid\n> deleting different types of files in these directories, but AFAICT\n> there shouldn't be any other files. Therefore, I think we could do\n> something similar to 3 (i.e., move the directories aside during\n> startup and clean them up via a new maintenance worker).\n>\n> I realize adding a new maintenance worker might be a bit heavy-handed,\n> but I think it would be nice to have somewhere to offload tasks that\n> really shouldn't impact startup and checkpointing. I imagine such a\n> process would come in handy down the road, too. WDYT?\n\n+1 for the overall idea of making the checkpoint faster. In fact, we\nhere at our team have been thinking about this problem for a while. 
If\nthere are a lot of files that checkpoint has to loop over and remove,\nIMO, that task can be delegated to someone else (maybe a background\nworker called background cleaner or bg cleaner, of course, we can have\na GUC to enable or disable it). The checkpoint can just write some\nmarker files (for instance, it can write snapshot_<cutofflsn> files\nwith file name itself representing the cutoff lsn so that the new bg\ncleaner can remove the snapshot files, similarly it can write marker\nfiles for other file removals). Having said that, a new bg cleaner\ndeleting the files asynchronously on behalf of checkpoint can look an\noverkill until we have some numbers that we could save with this\napproach. For this purpose, I did a small experiment to figure out how\nmuch usually file deletion takes [1] on a SSD, for 1million files\n8seconds, I'm sure it will be much more on HDD.\n\nThe bg cleaner can also be used for RemovePgTempFiles, probably the\npostmaster just renaming the pgsql_temp to something\npgsql_temp_delete, then proceeding with the server startup, the bg\ncleaner can then delete the files.\nAlso, we could do something similar for removing/recycling old xlog\nfiles and StartupReorderBuffer.\n\nAnother idea could be to parallelize the checkpoint i.e. 
IIUC, the\ntasks that checkpoint do in CheckPointGuts are independent and if we\nhave some counters like (how many snapshot/mapping files that the\nserver generated)\n\n[1] on SSD:\ndeletion of 1000000 files took 7.930380 seconds\ndeletion of 500000 files took 3.921676 seconds\ndeletion of 100000 files took 0.768772 seconds\ndeletion of 50000 files took 0.400623 seconds\ndeletion of 10000 files took 0.077565 seconds\ndeletion of 1000 files took 0.006232 seconds\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 2 Dec 2021 08:17:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On 12/1/21, 6:06 PM, \"Euler Taveira\" <euler@eulerto.com> wrote:\r\n> Saying that a certain task is O(n) doesn't mean it needs a separate process to\r\n> handle it. Did you have a use case or even better numbers (% of checkpoint /\r\n> startup time) that makes your proposal worthwhile?\r\n\r\nI don't have specific numbers on hand, but each of the four functions\r\nI listed is something I routinely see impacting customers.\r\n\r\n> For (3), there is already a GUC that would avoid the slowdown during startup.\r\n> Use it if you think the startup time is more important that disk space occupied\r\n> by useless files.\r\n\r\nSetting remove_temp_files_after_crash to false only prevents temp file\r\ncleanup during restart after a backend crash. It is always called for\r\nother startups.\r\n\r\n> For (4), you are forgetting that the on-disk state of replication slots is\r\n> stored in the pg_replslot/SLOTNAME/state. It seems you cannot just rename the\r\n> replication slot directory and copy the state file. What happen if there is a\r\n> crash before copying the state file?\r\n\r\nGood point. I think it's possible to deal with this, though. 
Perhaps\r\nthe files that should be deleted on startup should go in a separate\r\ndirectory, or maybe we could devise a way to ensure the state file is\r\ncopied even if there is a crash at an inconvenient time.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 2 Dec 2021 21:19:16 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On 12/1/21, 6:48 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> +1 for the overall idea of making the checkpoint faster. In fact, we\r\n> here at our team have been thinking about this problem for a while. If\r\n> there are a lot of files that checkpoint has to loop over and remove,\r\n> IMO, that task can be delegated to someone else (maybe a background\r\n> worker called background cleaner or bg cleaner, of course, we can have\r\n> a GUC to enable or disable it). The checkpoint can just write some\r\n\r\nRight. IMO it isn't optimal to have critical things like startup and\r\ncheckpointing depend on somewhat-unrelated tasks. I understand the\r\ndesire to avoid adding additional processes, and maybe it is a bigger\r\nhammer than what is necessary to reduce the impact, but it seemed like\r\na natural solution for this problem. That being said, I'm all for\r\nexploring other ways to handle this.\r\n\r\n> Another idea could be to parallelize the checkpoint i.e. IIUC, the\r\n> tasks that checkpoint do in CheckPointGuts are independent and if we\r\n> have some counters like (how many snapshot/mapping files that the\r\n> server generated)\r\n\r\nCould you elaborate on this? 
Is your idea that the checkpointer would\r\ncreate worker processes like autovacuum does?\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 2 Dec 2021 21:31:17 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Fri, Dec 3, 2021 at 3:01 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 12/1/21, 6:48 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > +1 for the overall idea of making the checkpoint faster. In fact, we\n> > here at our team have been thinking about this problem for a while. If\n> > there are a lot of files that checkpoint has to loop over and remove,\n> > IMO, that task can be delegated to someone else (maybe a background\n> > worker called background cleaner or bg cleaner, of course, we can have\n> > a GUC to enable or disable it). The checkpoint can just write some\n>\n> Right. IMO it isn't optimal to have critical things like startup and\n> checkpointing depend on somewhat-unrelated tasks. I understand the\n> desire to avoid adding additional processes, and maybe it is a bigger\n> hammer than what is necessary to reduce the impact, but it seemed like\n> a natural solution for this problem. That being said, I'm all for\n> exploring other ways to handle this.\n\nHaving a generic background cleaner process (controllable via a few\nGUCs), which can delete a bunch of files (snapshot, mapping, old WAL,\ntemp files etc.) or some other task on behalf of the checkpointer,\nseems to be the easiest solution.\n\nI'm too open for other ideas.\n\n> > Another idea could be to parallelize the checkpoint i.e. IIUC, the\n> > tasks that checkpoint do in CheckPointGuts are independent and if we\n> > have some counters like (how many snapshot/mapping files that the\n> > server generated)\n>\n> Could you elaborate on this? 
Is your idea that the checkpointer would\n> create worker processes like autovacuum does?\n\nYes, I was thinking that the checkpointer creates one or more dynamic\nbackground workers (we can assume one background worker for now) to\ndelete the files. If a threshold of files crosses (snapshot files\ncount is more than this threshold), the new worker gets spawned which\nwould then enumerate the files and delete the unneeded ones, the\ncheckpointer can proceed with the other tasks and finish the\ncheckpointing. Having said this, I prefer the background cleaner\napproach over the dynamic background worker. The advantage with the\nbackground cleaner being that it can do other tasks (like other kinds\nof file deletion).\n\nAnother idea could be that, use the existing background writer to do\nthe file deletion while the checkpoint is happening. But again, this\nmight cause problems because the bg writer flushing dirty buffers will\nget delayed.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 3 Dec 2021 19:26:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On 12/3/21, 5:57 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> On Fri, Dec 3, 2021 at 3:01 AM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>>\r\n>> On 12/1/21, 6:48 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n>> > +1 for the overall idea of making the checkpoint faster. In fact, we\r\n>> > here at our team have been thinking about this problem for a while. If\r\n>> > there are a lot of files that checkpoint has to loop over and remove,\r\n>> > IMO, that task can be delegated to someone else (maybe a background\r\n>> > worker called background cleaner or bg cleaner, of course, we can have\r\n>> > a GUC to enable or disable it). The checkpoint can just write some\r\n>>\r\n>> Right. 
IMO it isn't optimal to have critical things like startup and\r\n>> checkpointing depend on somewhat-unrelated tasks. I understand the\r\n>> desire to avoid adding additional processes, and maybe it is a bigger\r\n>> hammer than what is necessary to reduce the impact, but it seemed like\r\n>> a natural solution for this problem. That being said, I'm all for\r\n>> exploring other ways to handle this.\r\n>\r\n> Having a generic background cleaner process (controllable via a few\r\n> GUCs), which can delete a bunch of files (snapshot, mapping, old WAL,\r\n> temp files etc.) or some other task on behalf of the checkpointer,\r\n> seems to be the easiest solution.\r\n>\r\n> I'm too open for other ideas.\r\n\r\nI might hack something together for the separate worker approach, if\r\nfor no other reason than to make sure I really understand how these\r\nfunctions work. If/when a better idea emerges, we can alter course.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 3 Dec 2021 18:20:44 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Fri, Dec 3, 2021 at 11:50 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 12/3/21, 5:57 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Fri, Dec 3, 2021 at 3:01 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> >>\n> >> On 12/1/21, 6:48 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> > +1 for the overall idea of making the checkpoint faster. In fact, we\n> >> > here at our team have been thinking about this problem for a while. If\n> >> > there are a lot of files that checkpoint has to loop over and remove,\n> >> > IMO, that task can be delegated to someone else (maybe a background\n> >> > worker called background cleaner or bg cleaner, of course, we can have\n> >> > a GUC to enable or disable it). 
The checkpoint can just write some\n> >>\n> >> Right. IMO it isn't optimal to have critical things like startup and\n> >> checkpointing depend on somewhat-unrelated tasks. I understand the\n> >> desire to avoid adding additional processes, and maybe it is a bigger\n> >> hammer than what is necessary to reduce the impact, but it seemed like\n> >> a natural solution for this problem. That being said, I'm all for\n> >> exploring other ways to handle this.\n> >\n> > Having a generic background cleaner process (controllable via a few\n> > GUCs), which can delete a bunch of files (snapshot, mapping, old WAL,\n> > temp files etc.) or some other task on behalf of the checkpointer,\n> > seems to be the easiest solution.\n> >\n> > I'm too open for other ideas.\n>\n> I might hack something together for the separate worker approach, if\n> for no other reason than to make sure I really understand how these\n> functions work. If/when a better idea emerges, we can alter course.\n\nThanks. As I said upthread we've been discussing the approach of\noffloading some of the checkpoint tasks like (deleting snapshot files)\ninternally for quite some time and I would like to share a patch that\nadds a new background cleaner process (currently able to delete the\nlogical replication snapshot files, if required can be extended to do\nother tasks as well). I don't mind if it gets rejected. 
Please have a\nlook.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 6 Dec 2021 17:13:44 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On 12/6/21, 3:44 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> On Fri, Dec 3, 2021 at 11:50 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> I might hack something together for the separate worker approach, if\r\n>> for no other reason than to make sure I really understand how these\r\n>> functions work. If/when a better idea emerges, we can alter course.\r\n>\r\n> Thanks. As I said upthread we've been discussing the approach of\r\n> offloading some of the checkpoint tasks like (deleting snapshot files)\r\n> internally for quite some time and I would like to share a patch that\r\n> adds a new background cleaner process (currently able to delete the\r\n> logical replication snapshot files, if required can be extended to do\r\n> other tasks as well). I don't mind if it gets rejected. Please have a\r\n> look.\r\n\r\nThanks for sharing! I've also spent some time on a patch set, which I\r\nintend to share once I have handling for all four tasks (so far I have\r\nhandling for CheckPointSnapBuild() and RemovePgTempFiles()). I'll\r\ntake a look at your patch as well.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 6 Dec 2021 19:22:03 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On 12/6/21, 11:23 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 12/6/21, 3:44 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n>> Thanks. 
As I said upthread we've been discussing the approach of\r\n>> offloading some of the checkpoint tasks like (deleting snapshot files)\r\n>> internally for quite some time and I would like to share a patch that\r\n>> adds a new background cleaner process (currently able to delete the\r\n>> logical replication snapshot files, if required can be extended to do\r\n>> other tasks as well). I don't mind if it gets rejected. Please have a\r\n>> look.\r\n>\r\n> Thanks for sharing! I've also spent some time on a patch set, which I\r\n> intend to share once I have handling for all four tasks (so far I have\r\n> handling for CheckPointSnapBuild() and RemovePgTempFiles()). I'll\r\n> take a look at your patch as well.\r\n\r\nWell, I haven't had a chance to look at your patch, and my patch set\r\nstill only has handling for CheckPointSnapBuild() and\r\nRemovePgTempFiles(), but I thought I'd share what I have anyway. I\r\nsplit it into 5 patches:\r\n\r\n0001 - Adds a new \"custodian\" auxiliary process that does nothing.\r\n0002 - During startup, remove the pgsql_tmp directories instead of\r\n only clearing the contents.\r\n0003 - Split temporary file cleanup during startup into two stages.\r\n The first renames the directories, and the second clears them.\r\n0004 - Moves the second stage from 0003 to the custodian process.\r\n0005 - Moves CheckPointSnapBuild() to the custodian process.\r\n\r\nThis is still very much a work in progress, and I've done minimal\r\ntesting so far.\r\n\r\nNathan", "msg_date": "Fri, 10 Dec 2021 19:03:17 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Fri, Dec 10, 2021 at 2:03 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> Well, I haven't had a chance to look at your patch, and my patch set\n> still only has handling for CheckPointSnapBuild() and\n> RemovePgTempFiles(), but I thought I'd share what I have anyway. 
I\n> split it into 5 patches:\n>\n> 0001 - Adds a new \"custodian\" auxiliary process that does nothing.\n> 0002 - During startup, remove the pgsql_tmp directories instead of\n> only clearing the contents.\n> 0003 - Split temporary file cleanup during startup into two stages.\n> The first renames the directories, and the second clears them.\n> 0004 - Moves the second stage from 0003 to the custodian process.\n> 0005 - Moves CheckPointSnapBuild() to the custodian process.\n\nI don't know whether this kind of idea is good or not.\n\nOne thing we've seen a number of times now is that entrusting the same\nprocess with multiple responsibilities often ends poorly. Sometimes\nit's busy with one thing when another thing really needs to be done\nRIGHT NOW. Perhaps that won't be an issue here since all of these\nthings are related to checkpointing, but then the process name should\nreflect that rather than making it sound like we can just keep piling\nmore responsibilities onto this process indefinitely. At some point\nthat seems bound to become an issue.\n\nAnother issue is that we don't want to increase the number of\nprocesses without bound. Processes use memory and CPU resources and if\nwe run too many of them it becomes a burden on the system. Low-end\nsystems may not have too many resources in total, and high-end systems\ncan struggle to fit demanding workloads within the resources that they\nhave. Maybe it would be cheaper to do more things at once if we were\nusing threads rather than processes, but that day still seems fairly\nfar off.\n\nBut against all that, if these tasks are slowing down checkpoints and\nthat's avoidable, that seems pretty important too. Interestingly, I\ncan't say that I've ever seen any of these things be a problem for\ncheckpoint or startup speed. 
I wonder why you've had a different\nexperience.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Dec 2021 08:53:37 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Mon, Dec 13, 2021 at 08:53:37AM -0500, Robert Haas wrote:\n> On Fri, Dec 10, 2021 at 2:03 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> > Well, I haven't had a chance to look at your patch, and my patch set\n> > still only has handling for CheckPointSnapBuild() and\n> > RemovePgTempFiles(), but I thought I'd share what I have anyway. I\n> > split it into 5 patches:\n> >\n> > 0001 - Adds a new \"custodian\" auxiliary process that does nothing.\n...\n> \n> I don't know whether this kind of idea is good or not.\n...\n> \n> Another issue is that we don't want to increase the number of\n> processes without bound. Processes use memory and CPU resources and if\n> we run too many of them it becomes a burden on the system. Low-end\n> systems may not have too many resources in total, and high-end systems\n> can struggle to fit demanding workloads within the resources that they\n> have. Maybe it would be cheaper to do more things at once if we were\n> using threads rather than processes, but that day still seems fairly\n> far off.\n\nMaybe that's an argument that this should be a dynamic background worker\ninstead of an auxilliary process. Then maybe it would be controlled by\nmax_parallel_maintenance_workers (or something similar). 
The checkpointer\nwould need to do these tasks itself if parallel workers were disabled or\ncouldn't be launched.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 13 Dec 2021 11:19:35 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On 12/13/21, 5:54 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> I don't know whether this kind of idea is good or not.\r\n\r\nThanks for chiming in. I have an almost-complete patch set that I'm\r\nhoping to post to the lists in the next couple of days.\r\n\r\n> One thing we've seen a number of times now is that entrusting the same\r\n> process with multiple responsibilities often ends poorly. Sometimes\r\n> it's busy with one thing when another thing really needs to be done\r\n> RIGHT NOW. Perhaps that won't be an issue here since all of these\r\n> things are related to checkpointing, but then the process name should\r\n> reflect that rather than making it sound like we can just keep piling\r\n> more responsibilities onto this process indefinitely. At some point\r\n> that seems bound to become an issue.\r\n\r\nTwo of the tasks are cleanup tasks that checkpointing handles at the\r\nmoment, and two are cleanup tasks that are done at startup. For now,\r\nall of these tasks are somewhat nonessential. There's no requirement\r\nthat any of these tasks complete in order to finish startup or\r\ncheckpointing. In fact, outside of preventing the server from running\r\nout of disk space, I don't think there's any requirement that these\r\ntasks run at all. IMO this would have to be a core tenet of a new\r\nauxiliary process like this.\r\n\r\nThat being said, I totally understand your point. If there were a\r\ndozen such tasks handled by a single auxiliary process, that could\r\ncause a new set of problems. 
Your checkpointing and startup might be\r\nfast, but you might run out of disk space because our cleanup process\r\ncan't handle it all. So a new worker could end up becoming an\r\navailability risk as well.\r\n\r\n> Another issue is that we don't want to increase the number of\r\n> processes without bound. Processes use memory and CPU resources and if\r\n> we run too many of them it becomes a burden on the system. Low-end\r\n> systems may not have too many resources in total, and high-end systems\r\n> can struggle to fit demanding workloads within the resources that they\r\n> have. Maybe it would be cheaper to do more things at once if we were\r\n> using threads rather than processes, but that day still seems fairly\r\n> far off.\r\n\r\nI do agree that it is important to be very careful about adding new\r\nprocesses, and if a better idea for how to handle these tasks emerges,\r\nI will readily abandon my current approach. Upthread, Andres\r\nmentioned optimizing unnecessary snapshot files, and I mentioned\r\npossibly limiting how much time startup and checkpoints spend on these\r\ntasks. I don't have too many details for the former, and for the\r\nlatter, I'm worried about not being able to keep up. But if the\r\nprospect of adding a new auxiliary process for this stuff is a non-\r\nstarter, perhaps I should explore that approach some more.\r\n\r\n> But against all that, if these tasks are slowing down checkpoints and\r\n> that's avoidable, that seems pretty important too. Interestingly, I\r\n> can't say that I've ever seen any of these things be a problem for\r\n> checkpoint or startup speed. 
I wonder why you've had a different\r\n> experience.\r\n\r\nYeah, it's difficult for me to justify why users should suffer long\r\nperiods of downtime because startup or checkpointing is taking a very\r\nlong time doing things that are arguably unrelated to startup and\r\ncheckpointing.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 13 Dec 2021 18:21:02 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On 12/13/21, 9:20 AM, \"Justin Pryzby\" <pryzby@telsasoft.com> wrote:\r\n> On Mon, Dec 13, 2021 at 08:53:37AM -0500, Robert Haas wrote:\r\n>> Another issue is that we don't want to increase the number of\r\n>> processes without bound. Processes use memory and CPU resources and if\r\n>> we run too many of them it becomes a burden on the system. Low-end\r\n>> systems may not have too many resources in total, and high-end systems\r\n>> can struggle to fit demanding workloads within the resources that they\r\n>> have. Maybe it would be cheaper to do more things at once if we were\r\n>> using threads rather than processes, but that day still seems fairly\r\n>> far off.\r\n>\r\n> Maybe that's an argument that this should be a dynamic background worker\r\n> instead of an auxilliary process. Then maybe it would be controlled by\r\n> max_parallel_maintenance_workers (or something similar). The checkpointer\r\n> would need to do these tasks itself if parallel workers were disabled or\r\n> couldn't be launched.\r\n\r\nI think this is an interesting idea. 
I dislike the prospect of having\r\ntwo code paths for all this stuff, but if it addresses the concerns\r\nabout resource usage, maybe it's worth it.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 13 Dec 2021 18:30:43 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Mon, Dec 13, 2021 at 1:21 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> > But against all that, if these tasks are slowing down checkpoints and\n> > that's avoidable, that seems pretty important too. Interestingly, I\n> > can't say that I've ever seen any of these things be a problem for\n> > checkpoint or startup speed. I wonder why you've had a different\n> > experience.\n>\n> Yeah, it's difficult for me to justify why users should suffer long\n> periods of downtime because startup or checkpointing is taking a very\n> long time doing things that are arguably unrelated to startup and\n> checkpointing.\n\nWell sure. But I've never actually seen that happen.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Dec 2021 15:36:24 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On 12/13/21, 12:37 PM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Mon, Dec 13, 2021 at 1:21 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> > But against all that, if these tasks are slowing down checkpoints and\r\n>> > that's avoidable, that seems pretty important too. Interestingly, I\r\n>> > can't say that I've ever seen any of these things be a problem for\r\n>> > checkpoint or startup speed. 
I wonder why you've had a different\r\n>> > experience.\r\n>>\r\n>> Yeah, it's difficult for me to justify why users should suffer long\r\n>> periods of downtime because startup or checkpointing is taking a very\r\n>> long time doing things that are arguably unrelated to startup and\r\n>> checkpointing.\r\n>\r\n> Well sure. But I've never actually seen that happen.\r\n\r\nI'll admit that surprises me. As noted elsewhere [0], we were seeing\r\nthis enough with pgsql_tmp that we started moving the directory aside\r\nbefore starting the server. Discussions about handling this usually\r\nprompt questions about why there are so many temporary files in the\r\nfirst place (which is fair). FWIW all four functions noted in my\r\noriginal message [1] are things I've seen firsthand affecting users.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/E7573D54-A8C9-40A8-89D7-0596A36ED124%40amazon.com\r\n[1] https://postgr.es/m/C1EE64B0-D4DB-40F3-98C8-0CED324D34CB%40amazon.com\r\n\r\n", "msg_date": "Mon, 13 Dec 2021 23:05:46 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Mon, Dec 13, 2021 at 11:05:46PM +0000, Bossart, Nathan wrote:\n> On 12/13/21, 12:37 PM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\n> > On Mon, Dec 13, 2021 at 1:21 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> >> > But against all that, if these tasks are slowing down checkpoints and\n> >> > that's avoidable, that seems pretty important too. Interestingly, I\n> >> > can't say that I've ever seen any of these things be a problem for\n> >> > checkpoint or startup speed. 
I wonder why you've had a different\n> >> > experience.\n> >>\n> >> Yeah, it's difficult for me to justify why users should suffer long\n> >> periods of downtime because startup or checkpointing is taking a very\n> >> long time doing things that are arguably unrelated to startup and\n> >> checkpointing.\n> >\n> > Well sure. But I've never actually seen that happen.\n> \n> I'll admit that surprises me. As noted elsewhere [0], we were seeing\n> this enough with pgsql_tmp that we started moving the directory aside\n> before starting the server. Discussions about handling this usually\n> prompt questions about why there are so many temporary files in the\n> first place (which is fair). FWIW all four functions noted in my\n> original message [1] are things I've seen firsthand affecting users.\n\nHave we changed temporary file handling in any recent major releases,\nmeaning is this a current problem or one already improved in PG 14.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 14 Dec 2021 11:59:50 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On 12/14/21, 9:00 AM, \"Bruce Momjian\" <bruce@momjian.us> wrote:\r\n> Have we changed temporary file handling in any recent major releases,\r\n> meaning is this a current problem or one already improved in PG 14.\r\n\r\nI haven't noticed any recent improvements while working in this area.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 14 Dec 2021 20:09:59 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On 12/14/21, 12:09 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> On 12/14/21, 9:00 AM, \"Bruce Momjian\" <bruce@momjian.us> wrote:\r\n>> 
Have we changed temporary file handling in any recent major releases,\r\n>> meaning is this a current problem or one already improved in PG 14.\r\n>\r\n> I haven't noticed any recent improvements while working in this area.\r\n\r\nOn second thought, the addition of the remove_temp_files_after_crash\r\nGUC is arguably an improvement since it could prevent files from\r\naccumulating after repeated crashes.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 14 Dec 2021 20:13:38 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On 12/13/21, 10:21 AM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> Thanks for chiming in. I have an almost-complete patch set that I'm\r\n> hoping to post to the lists in the next couple of days.\r\n\r\nAs promised, here is v2. This patch set includes handling for all\r\nfour tasks noted upthread. I'd still consider this a work-in-\r\nprogress, as I've done minimal testing. At the very least, it should\r\ndemonstrate what an auxiliary process approach might look like.\r\n\r\nNathan", "msg_date": "Tue, 14 Dec 2021 20:23:57 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Hi,\n\nOn 2021-12-14 20:23:57 +0000, Bossart, Nathan wrote:\n> As promised, here is v2. This patch set includes handling for all\n> four tasks noted upthread. I'd still consider this a work-in-\n> progress, as I've done minimal testing. 
At the very least, it should\n> demonstrate what an auxiliary process approach might look like.\n\nThis generates a compiler warning:\nhttps://cirrus-ci.com/task/5740581082103808?logs=mingw_cross_warning#L378\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 2 Jan 2022 13:26:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Mon, Jan 3, 2022 at 2:56 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-12-14 20:23:57 +0000, Bossart, Nathan wrote:\n> > As promised, here is v2. This patch set includes handling for all\n> > four tasks noted upthread. I'd still consider this a work-in-\n> > progress, as I've done minimal testing. At the very least, it should\n> > demonstrate what an auxiliary process approach might look like.\n>\n> This generates a compiler warning:\n> https://cirrus-ci.com/task/5740581082103808?logs=mingw_cross_warning#L378\n>\n\nSomehow, I am not getting these compiler warnings on the latest master\nhead (69872d0bbe6).\n\nHere are the few minor comments for the v2 version, I thought would help:\n\n+ * Copyright (c) 2021, PostgreSQL Global Development Group\n\nTime to change the year :)\n--\n\n+\n+ /* These operations are really just a minimal subset of\n+ * AbortTransaction(). 
We don't have very many resources to worry\n+ * about.\n+ */\n\nIncorrect formatting, the first line should be empty in the multiline\ncode comment.\n--\n\n+ XLogRecPtr logical_rewrite_mappings_cutoff; /* can remove\nolder mappings */\n+ XLogRecPtr logical_rewrite_mappings_cutoff_set;\n\nLook like logical_rewrite_mappings_cutoff gets to set only once and\nnever get reset, if it is true then I think that variable can be\nskipped completely and set the initial logical_rewrite_mappings_cutoff\nto InvalidXLogRecPtr, that will do the needful.\n--\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 3 Jan 2022 12:29:04 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Thanks for your review.\r\n\r\nOn 1/2/22, 11:00 PM, \"Amul Sul\" <sulamul@gmail.com> wrote:\r\n> On Mon, Jan 3, 2022 at 2:56 AM Andres Freund <andres@anarazel.de> wrote:\r\n>> This generates a compiler warning:\r\n>> https://cirrus-ci.com/task/5740581082103808?logs=mingw_cross_warning#L378\r\n>\r\n> Somehow, I am not getting these compiler warnings on the latest master\r\n> head (69872d0bbe6).\r\n\r\nI attempted to fix this by including time.h in custodian.c.\r\n\r\n> Here are the few minor comments for the v2 version, I thought would help:\r\n>\r\n> + * Copyright (c) 2021, PostgreSQL Global Development Group\r\n>\r\n> Time to change the year :)\r\n\r\nFixed in v3.\r\n\r\n> +\r\n> + /* These operations are really just a minimal subset of\r\n> + * AbortTransaction(). 
We don't have very many resources to worry\r\n> + * about.\r\n> + */\r\n>\r\n> Incorrect formatting, the first line should be empty in the multiline\r\n> code comment.\r\n\r\nFixed in v3.\r\n\r\n> + XLogRecPtr logical_rewrite_mappings_cutoff; /* can remove\r\n> older mappings */\r\n> + XLogRecPtr logical_rewrite_mappings_cutoff_set;\r\n>\r\n> Look like logical_rewrite_mappings_cutoff gets to set only once and\r\n> never get reset, if it is true then I think that variable can be\r\n> skipped completely and set the initial logical_rewrite_mappings_cutoff\r\n> to InvalidXLogRecPtr, that will do the needful.\r\n\r\nI think the problem with this is that when the cutoff is\r\nInvalidXLogRecPtr, it is taken to mean that all logical rewrite files\r\ncan be removed. If we just used the cutoff variable, we could remove\r\nfiles we need if the custodian ran before the cutoff was set. I\r\nsuppose we could initially set the cutoff to MaxXLogRecPtr to indicate\r\nthat the value is not yet set, but I see no real advantage to doing it\r\nthat way versus just using a bool. Speaking of which,\r\nlogical_rewrite_mappings_cutoff_set obviously should be a bool. I've\r\nfixed that in v3.\r\n\r\nNathan", "msg_date": "Wed, 5 Jan 2022 21:28:28 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "The code seems to be in good condition. All the tests are running ok with\nno errors.\n\nI like the whole idea of shifting additional checkpointer jobs as much\nas possible to another worker. In my view, it is more appropriate to call\nthis worker \"bg cleaner\" or \"bg file cleaner\" or smth.\n\nIt could be useful for systems with high load, which may deal with deleting\nmany files at once, but I'm not sure about \"small\" installations. Extra bg\nworker need more resources to do occasional deletion of small amounts of\nfiles. 
I really do not know how to do it better, maybe to have two\ndifferent code paths switched by GUC?\n\nShould we also think about adding WAL preallocation into custodian worker\nfrom the patch \"Pre-allocating WAL files\" [1] ?\n\n[1]\nhttps://www.postgresql.org/message-id/flat/20201225200953.jjkrytlrzojbndh5@alap3.anarazel.de\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 14 Jan 2022 14:41:46 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On 1/14/22, 3:43 AM, \"Maxim Orlov\" <orlovmg@gmail.com> wrote:\r\n> The code seems to be in good condition. All the tests are running ok with no errors.\r\n\r\nThanks for your review.\r\n\r\n> I like the whole idea of shifting additional checkpointer jobs as much as possible to another worker. In my view, it is more appropriate to call this worker \"bg cleaner\" or \"bg file cleaner\" or smth.\r\n>\r\n> It could be useful for systems with high load, which may deal with deleting many files at once, but I'm not sure about \"small\" installations. 
Extra bg worker need more resources to do occasional deletion of small amounts of files. I really do not know how to do it better, maybe to have two different code paths switched by GUC?\r\n\r\nI'd personally like to avoid creating two code paths for the same\r\nthing. Are there really cases when this one extra auxiliary process\r\nwould be too many? And if so, how would a user know when to adjust\r\nthis GUC? I understand the point that we should introduce new\r\nprocesses sparingly to avoid burdening low-end systems, but I don't\r\nthink we should be afraid to add new ones when it is needed.\r\n\r\nThat being said, if making the extra worker optional addresses the\r\nconcerns about resource usage, maybe we should consider it. Justin\r\nsuggested using something like max_parallel_maintenance_workers\r\nupthread [0].\r\n\r\n> Should we also think about adding WAL preallocation into custodian worker from the patch \"Pre-alocationg WAL files\" [1] ?\r\n\r\nThis was brought up in the pre-allocation thread [1]. I don't think\r\nthe custodian process would be the right place for it, and I'm also\r\nnot as concerned about it because it will generally be a small, fixed,\r\nand configurable amount of work. In any case, I don't sense a ton of\r\nsupport for a new auxiliary process in this thread, so I'm hesitant to\r\ngo down the same path for pre-allocation.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/20211213171935.GX17618%40telsasoft.com\r\n[1] https://postgr.es/m/B2ACCC5A-F9F2-41D9-AC3B-251362A0A254%40amazon.com\r\n\r\n", "msg_date": "Fri, 14 Jan 2022 19:16:01 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Sat, Jan 15, 2022 at 12:46 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 1/14/22, 3:43 AM, \"Maxim Orlov\" <orlovmg@gmail.com> wrote:\n> > The code seems to be in good condition. 
All the tests are running ok with no errors.\n>\n> Thanks for your review.\n>\n> > I like the whole idea of shifting additional checkpointer jobs as much as possible to another worker. In my view, it is more appropriate to call this worker \"bg cleaner\" or \"bg file cleaner\" or smth.\n\nI personally prefer \"background cleaner\" as the new process name in\nline with \"background writer\" and \"background worker\".\n\n> > It could be useful for systems with high load, which may deal with deleting many files at once, but I'm not sure about \"small\" installations. Extra bg worker need more resources to do occasional deletion of small amounts of files. I really do not know how to do it better, maybe to have two different code paths switched by GUC?\n>\n> I'd personally like to avoid creating two code paths for the same\n> thing. Are there really cases when this one extra auxiliary process\n> would be too many? And if so, how would a user know when to adjust\n> this GUC? I understand the point that we should introduce new\n> processes sparingly to avoid burdening low-end systems, but I don't\n> think we should be afraid to add new ones when it is needed.\n\nIMO, having a GUC for enabling/disabling this new worker and it's\nrelated code would be a better idea. The reason is that if the\npostgres has no replication slots at all(which is quite possible in\nreal stand-alone production environments) or if the file enumeration\n(directory traversal and file removal) is fast enough on the servers,\nthere's no point having this new worker, the checkpointer itself can\ntake care of the work as it is doing today.\n\n> That being said, if making the extra worker optional addresses the\n> concerns about resource usage, maybe we should consider it. 
Justin\n> suggested using something like max_parallel_maintenance_workers\n> upthread [0].\n\nI don't think having this new process is built as part of\nmax_parallel_maintenance_workers, instead I prefer to have it as an\nauxiliary process much like \"background writer\", \"wal writer\" and so\non.\n\nI think now it's the time for us to run some use cases and get the\nperf reports to see how beneficial this new process is going to be, in\nterms of improving the checkpoint timings.\n\n> > Should we also think about adding WAL preallocation into custodian worker from the patch \"Pre-alocationg WAL files\" [1] ?\n>\n> This was brought up in the pre-allocation thread [1]. I don't think\n> the custodian process would be the right place for it, and I'm also\n> not as concerned about it because it will generally be a small, fixed,\n> and configurable amount of work. In any case, I don't sense a ton of\n> support for a new auxiliary process in this thread, so I'm hesitant to\n> go down the same path for pre-allocation.\n>\n> [0] https://postgr.es/m/20211213171935.GX17618%40telsasoft.com\n> [1] https://postgr.es/m/B2ACCC5A-F9F2-41D9-AC3B-251362A0A254%40amazon.com\n\nI think the idea of weaving every non-critical task to a common\nbackground process is a good idea but let's not mix up with the new\nbackground cleaner process here for now, at least until we get some\nnumbers and prove that the idea proposed here will be beneficial.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 15 Jan 2022 12:55:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On 1/14/22, 11:26 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> On Sat, Jan 15, 2022 at 12:46 AM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> I'd personally like to avoid creating two code paths for the same\r\n>> thing. 
Are there really cases when this one extra auxiliary process\r\n>> would be too many? And if so, how would a user know when to adjust\r\n>> this GUC? I understand the point that we should introduce new\r\n>> processes sparingly to avoid burdening low-end systems, but I don't\r\n>> think we should be afraid to add new ones when it is needed.\r\n>\r\n> IMO, having a GUC for enabling/disabling this new worker and it's\r\n> related code would be a better idea. The reason is that if the\r\n> postgres has no replication slots at all(which is quite possible in\r\n> real stand-alone production environments) or if the file enumeration\r\n> (directory traversal and file removal) is fast enough on the servers,\r\n> there's no point having this new worker, the checkpointer itself can\r\n> take care of the work as it is doing today.\r\n\r\nIMO introducing a GUC wouldn't be doing users many favors. Their\r\ncluster might work just fine for a long time before they begin\r\nencountering problems during startups/checkpoints. Once the user\r\ndiscovers the underlying reason, they have to then find a GUC for\r\nenabling a special background worker that makes this problem go away.\r\nWhy not just fix the problem for everybody by default?\r\n\r\nI've been thinking about what other approaches we could take besides\r\ncreating more processes. The root of the problem seems to be that\r\nthere are a number of tasks that are performed synchronously that can\r\ntake a long time. The process approach essentially makes these tasks\r\nasynchronous so that they do not block startup and checkpointing. But\r\nperhaps this can be done in an existing process, possibly even the\r\ncheckpointer. Like the current WAL pre-allocation patch, we could do\r\nthis work when the checkpointer isn't checkpointing, and we could also\r\ndo small amounts of work in CheckpointWriteDelay() (or a new function\r\ncalled in a similar way). 
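To make that concrete, here is a rough standalone sketch of the kind of\r\nbounded, opportunistic cleanup I have in mind. Every name below is\r\nhypothetical and the backlogs are simulated with plain counters standing in\r\nfor directory scans; none of this exists in the tree:\r\n\r\n```c
#include <stdbool.h>

#define CLEANUP_BATCH_SIZE 32	/* max files removed per invocation */

/*
 * Simulated backlogs standing in for serialized snapshot files and
 * leftover temporary files.  In a real patch these would be directory
 * scans that remember their position between calls.
 */
static int	snapbuild_backlog = 100;
static int	tempfile_backlog = 40;

/* Remove up to "budget" items; return true once the backlog is empty. */
static bool
drain_batch(int *backlog, int budget)
{
	*backlog -= (*backlog < budget) ? *backlog : budget;
	return *backlog == 0;
}

/*
 * Hypothetical hook, called from CheckpointWriteDelay() (or a similar
 * spot) whenever the checkpointer has spare cycles.  Each call does at
 * most one small batch of work, so the checkpoint itself is never
 * blocked for long, but repeated calls eventually drain everything.
 */
static void
DoSpareCycleCleanup(void)
{
	if (!drain_batch(&tempfile_backlog, CLEANUP_BATCH_SIZE))
		return;					/* come back next time */
	(void) drain_batch(&snapbuild_backlog, CLEANUP_BATCH_SIZE);
}
```\r\n\r\nEach call is O(batch size) rather than O(total backlog); the open question\r\nis picking a batch size small enough to stay cheap but large enough to keep\r\nup. 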
In theory, this would help avoid delaying\r\ncheckpoints too long while doing cleanup at every opportunity to lower\r\nthe chances it falls far behind.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 18 Jan 2022 20:00:41 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Here is a rebased patch set.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 11 Feb 2022 10:02:49 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Here is another rebased patch set.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 16 Feb 2022 16:50:57 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-02-16 16:50:57 -0800, Nathan Bossart wrote:\n> + * The custodian process is new as of Postgres 15.\n\nI think this kind of comment tends to age badly and not be very useful.\n\n\n> It's main purpose is to\n> + * offload tasks that could otherwise delay startup and checkpointing, but\n> + * it needn't be restricted to just those things. Offloaded tasks should\n> + * not be synchronous (e.g., checkpointing shouldn't need to wait for the\n> + * custodian to complete a task before proceeding). Also, ensure that any\n> + * offloaded tasks are either not required during single-user mode or are\n> + * performed separately during single-user mode.\n> + *\n> + * The custodian is not an essential process and can shutdown quickly when\n> + * requested. The custodian will wake up approximately once every 5\n> + * minutes to perform its tasks, but backends can (and should) set its\n> + * latch to wake it up sooner.\n\nHm. 
This kind policy makes it easy to introduce bugs where the occasional runs\nmask forgotten notifications etc.\n\n\n> + * Normal termination is by SIGTERM, which instructs the bgwriter to\n> + * exit(0).\n\ns/bgwriter/.../\n\n> Emergency termination is by SIGQUIT; like any backend, the\n> + * custodian will simply abort and exit on SIGQUIT.\n> + *\n> + * If the custodian exits unexpectedly, the postmaster treats that the same\n> + * as a backend crash: shared memory may be corrupted, so remaining\n> + * backends should be killed by SIGQUIT and then a recovery cycle started.\n\nThis doesn't really seem useful stuff to me.\n\n\n\n> +\t/*\n> +\t * If an exception is encountered, processing resumes here.\n> +\t *\n> +\t * You might wonder why this isn't coded as an infinite loop around a\n> +\t * PG_TRY construct. The reason is that this is the bottom of the\n> +\t * exception stack, and so with PG_TRY there would be no exception handler\n> +\t * in force at all during the CATCH part. By leaving the outermost setjmp\n> +\t * always active, we have at least some chance of recovering from an error\n> +\t * during error recovery. (If we get into an infinite loop thereby, it\n> +\t * will soon be stopped by overflow of elog.c's internal state stack.)\n> +\t *\n> +\t * Note that we use sigsetjmp(..., 1), so that the prevailing signal mask\n> +\t * (to wit, BlockSig) will be restored when longjmp'ing to here. Thus,\n> +\t * signals other than SIGQUIT will be blocked until we complete error\n> +\t * recovery. It might seem that this policy makes the HOLD_INTERRUPS()\n> +\t * call redundant, but it is not since InterruptPending might be set\n> +\t * already.\n> +\t */\n\nI think it's bad to copy this comment into even more places.\n\n\n> +\t\t/* Since not using PG_TRY, must reset error stack by hand */\n> +\tif (sigsetjmp(local_sigjmp_buf, 1) != 0)\n> +\t{\n\nI also think it's a bad idea to introduce even more copies of the error\nhandling body. I think we need to unify this. 
And yes, it's unfair to stick\nyou with it, but it's been a while since a new aux process has been added.\n\n\n> +\t\t/*\n> +\t\t * These operations are really just a minimal subset of\n> +\t\t * AbortTransaction(). We don't have very many resources to worry\n> +\t\t * about.\n> +\t\t */\n\nGiven what you're proposing this for, are you actually confident that we don't\nneed more than this?\n\n\n> From d9826f75ad2259984d55fc04622f0b91ebbba65a Mon Sep 17 00:00:00 2001\n> From: Nathan Bossart <bossartn@amazon.com>\n> Date: Sun, 5 Dec 2021 19:38:20 -0800\n> Subject: [PATCH v5 2/8] Also remove pgsql_tmp directories during startup.\n>\n> Presently, the server only removes the contents of the temporary\n> directories during startup, not the directory itself. This changes\n> that to prepare for future commits that will move temporary file\n> cleanup to a separate auxiliary process.\n\nIs this actually safe? Is there a guarantee no process can access a temp table\nstored in one of these? Because without WAL guaranteeing consistency, we can't\njust access e.g. temp tables written before a crash.\n\n\n> +extern void RemovePgTempDir(const char *tmpdirname, bool missing_ok,\n> +\t\t\t\t\t\t\tbool unlink_all);\n\nI don't like functions with multiple consecutive booleans, they tend to get\nswapped around. Why not just split unlink_all=true/false into different\nfunctions?\n\n\n> Subject: [PATCH v5 3/8] Split pgsql_tmp cleanup into two stages.\n>\n> First, pgsql_tmp directories will be renamed to stage them for\n> removal.\n\nWhat if the target name already exists?\n\n\n> Then, all files in pgsql_tmp are removed before removing\n> the staged directories themselves. 
This change is being made in\n> preparation for a follow-up change to offload most temporary file\n> cleanup to the new custodian process.\n>\n> Note that temporary relation files cannot be cleaned up via the\n> aforementioned strategy and will not be offloaded to the custodian.\n\nThis should be in the prior commit message, otherwise people will ask the same\nquestion as I did.\n\n\n\n> +\t/*\n> +\t * Find a name for the stage directory. We just increment an integer at the\n> +\t * end of the name until we find one that doesn't exist.\n> +\t */\n> +\tfor (int n = 0; n <= INT_MAX; n++)\n> +\t{\n> +\t\tsnprintf(stage_path, sizeof(stage_path), \"%s/%s%d\", parent_path,\n> +\t\t\t\t PG_TEMP_DIR_TO_REMOVE_PREFIX, n);\n\nUninterruptible loops up to INT_MAX do not seem like a good idea.\n\n\n> +\t\tdir = AllocateDir(stage_path);\n> +\t\tif (dir == NULL)\n> +\t\t{\n\nWhy not just use stat()? That's cheaper, and there's no\ntime-to-check-time-to-use issue here, we're the only one writing.\n\n\n> +\t\t\tif (errno == ENOENT)\n> +\t\t\t\tbreak;\n> +\n> +\t\t\tereport(LOG,\n> +\t\t\t\t\t(errcode_for_file_access(),\n> +\t\t\t\t\t errmsg(\"could not open directory \\\"%s\\\": %m\",\n> +\t\t\t\t\t\t\tstage_path)));\n\nI think this kind of lenience is just hiding bugs.\n\n\n\n\n> File\n> PathNameCreateTemporaryFile(const char *path, bool error_on_failure)\n> @@ -3175,7 +3178,8 @@ RemovePgTempFiles(bool stage, bool remove_relation_files)\n> \t */\n> \tspc_dir = AllocateDir(\"pg_tblspc\");\n>\n> -\twhile ((spc_de = ReadDirExtended(spc_dir, \"pg_tblspc\", LOG)) != NULL)\n> +\twhile (!ShutdownRequestPending &&\n> +\t\t (spc_de = ReadDirExtended(spc_dir, \"pg_tblspc\", LOG)) != NULL)\n\nUh, huh? 
It strikes me as a supremely bad idea to have functions *silently*\nnot do their jobs when ShutdownRequestPending is set, particularly without a\nhuge fat comment.\n\n\n> \t{\n> \t\tif (strcmp(spc_de->d_name, \".\") == 0 ||\n> \t\t\tstrcmp(spc_de->d_name, \"..\") == 0)\n> @@ -3211,6 +3215,14 @@ RemovePgTempFiles(bool stage, bool remove_relation_files)\n> \t * would create a race condition. It's done separately, earlier in\n> \t * postmaster startup.\n> \t */\n> +\n> +\t/*\n> +\t * If we just staged some pgsql_tmp directories for removal, wake up the\n> +\t * custodian process so that it deletes all the files in the staged\n> +\t * directories as well as the directories themselves.\n> +\t */\n> +\tif (stage && ProcGlobal->custodianLatch)\n> +\t\tSetLatch(ProcGlobal->custodianLatch);\n\nJust signalling without letting the custodian know what it's expected to do\nstrikes me as a bad idea.\n\n\n\n> From 9c2013d53cc5c857ef8aca3df044613e66215aee Mon Sep 17 00:00:00 2001\n> From: Nathan Bossart <bossartn@amazon.com>\n> Date: Sun, 5 Dec 2021 22:02:40 -0800\n> Subject: [PATCH v5 5/8] Move removal of old serialized snapshots to custodian.\n>\n> This was only done during checkpoints because it was a convenient\n> place to put it. However, if there are many snapshots to remove,\n> it can significantly extend checkpoint time. To avoid this, move\n> this work to the newly-introduced custodian process.\n> ---\n> src/backend/access/transam/xlog.c | 2 --\n> src/backend/postmaster/custodian.c | 11 +++++++++++\n> src/backend/replication/logical/snapbuild.c | 13 +++++++------\n> src/include/replication/snapbuild.h | 2 +-\n> 4 files changed, 19 insertions(+), 9 deletions(-)\n\nWhy does this not open us up to new xid wraparound issues? Before there was a\nhard bound on how long these files could linger around. 
Now there's not\nanymore.\n\n\n\n> -\twhile ((snap_de = ReadDir(snap_dir, \"pg_logical/snapshots\")) != NULL)\n> +\twhile (!ShutdownRequestPending &&\n> +\t\t (snap_de = ReadDir(snap_dir, \"pg_logical/snapshots\")) != NULL)\n\nI really really strenuously object to these checks.\n\n\n\n> Subject: [PATCH v5 6/8] Move removal of old logical rewrite mapping files to\n> custodian.\n\n> If there are many such files to remove, checkpoints can take much\n> longer. To avoid this, move this work to the newly-introduced\n> custodian process.\n\nSame wraparound concerns.\n\n\n> +#include \"postmaster/bgwriter.h\"\n\nI think it's a bad idea to put these functions into bgwriter.h\n\n\n\n> From cfca62dd55d7be7e0025e5625f18d3ab9180029c Mon Sep 17 00:00:00 2001\n> From: Nathan Bossart <bossartn@amazon.com>\n> Date: Mon, 13 Dec 2021 20:20:12 -0800\n> Subject: [PATCH v5 7/8] Use syncfs() in CheckPointLogicalRewriteHeap() for\n> shutdown and end-of-recovery checkpoints.\n>\n> This may save quite a bit of time when there are many mapping files\n> to flush to disk.\n\nSeems like a mostly independent proposal.\n\n\n> +#ifdef HAVE_SYNCFS\n> +\n> +\t/*\n> +\t * If we are doing a shutdown or end-of-recovery checkpoint, let's use\n> +\t * syncfs() to flush the mappings to disk instead of flushing each one\n> +\t * individually. This may save us quite a bit of time when there are many\n> +\t * such files to flush.\n> +\t */\n\nI am doubtful this is a good idea. This will cause all dirty files to be\nwritten back, even ones we don't need to be written back. At once. Very\npossibly *slowing down* the shutdown.\n\nWhat is even the theory of the case here? That there's so many dirty mapping\nfiles that fsyncing them will take too long? 
That iterating would take too\nlong?\n\n\n> From b5923b1b76a1fab6c21d6aec086219160473f464 Mon Sep 17 00:00:00 2001\n> From: Nathan Bossart <nathandbossart@gmail.com>\n> Date: Fri, 11 Feb 2022 09:43:57 -0800\n> Subject: [PATCH v5 8/8] Move removal of spilled logical slot data to\n> custodian.\n>\n> If there are many such files, startup can take much longer than\n> necessary. To handle this, startup creates a new slot directory,\n> copies the state file, and swaps the new directory with the old\n> one. The custodian then asynchronously cleans up the old slot\n> directory.\n\nYou guessed it: I don't see what prevents wraparound issues.\n\n\n> 5 files changed, 317 insertions(+), 9 deletions(-)\n\nThis seems such an increase in complexity and fragility that I really doubt\nthis is a good idea.\n\n\n\n> +/*\n> + * This function renames the given directory with a special suffix that the\n> + * custodian will know to look for. An integer is appended to the end of the\n> + * new directory name in case previously staged slot directories have not yet\n> + * been removed.\n> + */\n> +static void\n> +StageSlotDirForRemoval(const char *slotname, const char *slotpath)\n> +{\n> +\tchar\t\tstage_path[MAXPGPATH];\n> +\n> +\t/*\n> +\t * Find a name for the stage directory. 
We just increment an integer at the\n> +\t * end of the name until we find one that doesn't exist.\n> +\t */\n> +\tfor (int n = 0; n <= INT_MAX; n++)\n> +\t{\n> +\t\tDIR\t\t *dir;\n> +\n> +\t\tsprintf(stage_path, \"pg_replslot/%s.to_remove_%d\", slotname, n);\n> +\n> +\t\tdir = AllocateDir(stage_path);\n> +\t\tif (dir == NULL)\n> +\t\t{\n> +\t\t\tif (errno == ENOENT)\n> +\t\t\t\tbreak;\n> +\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode_for_file_access(),\n> +\t\t\t\t\t errmsg(\"could not open directory \\\"%s\\\": %m\",\n> +\t\t\t\t\t\t\tstage_path)));\n> +\t\t}\n> +\t\tFreeDir(dir);\n> +\n> +\t\tstage_path[0] = '\\0';\n> +\t}\n\nCopy of \"find free name\" logic.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Feb 2022 17:50:52 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Hi Andres,\n\nI appreciate the feedback.\n\nOn Wed, Feb 16, 2022 at 05:50:52PM -0800, Andres Freund wrote:\n>> +\t\t/* Since not using PG_TRY, must reset error stack by hand */\n>> +\tif (sigsetjmp(local_sigjmp_buf, 1) != 0)\n>> +\t{\n> \n> I also think it's a bad idea to introduce even more copies of the error\n> handling body. I think we need to unify this. And yes, it's unfair to stick\n> you with it, but it's been a while since a new aux process has been added.\n\n+1, I think this is useful refactoring. I might spin this off to its own\nthread.\n\n>> +\t\t/*\n>> +\t\t * These operations are really just a minimal subset of\n>> +\t\t * AbortTransaction(). 
We don't have very many resources to worry\n>> +\t\t * about.\n>> +\t\t */\n> \n> Given what you're proposing this for, are you actually confident that we don't\n> need more than this?\n\nI will give this a closer look.\n\n>> +extern void RemovePgTempDir(const char *tmpdirname, bool missing_ok,\n>> +\t\t\t\t\t\t\tbool unlink_all);\n> \n> I don't like functions with multiple consecutive booleans, they tend to get\n> swapped around. Why not just split unlink_all=true/false into different\n> functions?\n\nWill do.\n\n>> Subject: [PATCH v5 3/8] Split pgsql_tmp cleanup into two stages.\n>>\n>> First, pgsql_tmp directories will be renamed to stage them for\n>> removal.\n> \n> What if the target name already exists?\n\nThe integer at the end of the target name is incremented until we find a\nunique name.\n\n>> Note that temporary relation files cannot be cleaned up via the\n>> aforementioned strategy and will not be offloaded to the custodian.\n> \n> This should be in the prior commit message, otherwise people will ask the same\n> question as I did.\n\nWill do.\n\n>> +\t/*\n>> +\t * Find a name for the stage directory. We just increment an integer at the\n>> +\t * end of the name until we find one that doesn't exist.\n>> +\t */\n>> +\tfor (int n = 0; n <= INT_MAX; n++)\n>> +\t{\n>> +\t\tsnprintf(stage_path, sizeof(stage_path), \"%s/%s%d\", parent_path,\n>> +\t\t\t\t PG_TEMP_DIR_TO_REMOVE_PREFIX, n);\n> \n> Uninterruptible loops up to INT_MAX do not seem like a good idea.\n\nI modeled this after ChooseRelationName() in indexcmds.c. Looking again, I\nsee that it loops forever until a unique name is found. I suspect this is\nunlikely to be a problem in practice. What strategy would you recommend\nfor choosing a unique name? Should we just append a couple of random\ncharacters?\n\n>> +\t\tdir = AllocateDir(stage_path);\n>> +\t\tif (dir == NULL)\n>> +\t\t{\n> \n> Why not just use stat()? 
That's cheaper, and there's no\n> time-to-check-time-to-use issue here, we're the only one writing.\n\nI'm not sure why I didn't use stat(). I will update this.\n\n>> -\twhile ((spc_de = ReadDirExtended(spc_dir, \"pg_tblspc\", LOG)) != NULL)\n>> +\twhile (!ShutdownRequestPending &&\n>> +\t\t (spc_de = ReadDirExtended(spc_dir, \"pg_tblspc\", LOG)) != NULL)\n> \n> Uh, huh? It strikes me as a supremely bad idea to have functions *silently*\n> not do their jobs when ShutdownRequestPending is set, particularly without a\n> huge fat comment.\n\nThe idea was to avoid delaying shutdown because we're waiting for the\ncustodian to finish relatively nonessential tasks. Another option might be\nto just exit immediately when the custodian receives a shutdown request.\n\n>> +\t/*\n>> +\t * If we just staged some pgsql_tmp directories for removal, wake up the\n>> +\t * custodian process so that it deletes all the files in the staged\n>> +\t * directories as well as the directories themselves.\n>> +\t */\n>> +\tif (stage && ProcGlobal->custodianLatch)\n>> +\t\tSetLatch(ProcGlobal->custodianLatch);\n> \n> Just signalling without letting the custodian know what it's expected to do\n> strikes me as a bad idea.\n\nGood point. I will work on that.\n\n>> From 9c2013d53cc5c857ef8aca3df044613e66215aee Mon Sep 17 00:00:00 2001\n>> From: Nathan Bossart <bossartn@amazon.com>\n>> Date: Sun, 5 Dec 2021 22:02:40 -0800\n>> Subject: [PATCH v5 5/8] Move removal of old serialized snapshots to custodian.\n>>\n>> This was only done during checkpoints because it was a convenient\n>> place to put it. However, if there are many snapshots to remove,\n>> it can significantly extend checkpoint time. 
To avoid this, move\n>> this work to the newly-introduced custodian process.\n>> ---\n>> src/backend/access/transam/xlog.c | 2 --\n>> src/backend/postmaster/custodian.c | 11 +++++++++++\n>> src/backend/replication/logical/snapbuild.c | 13 +++++++------\n>> src/include/replication/snapbuild.h | 2 +-\n>> 4 files changed, 19 insertions(+), 9 deletions(-)\n> \n> Why does this not open us up to new xid wraparound issues? Before there was a\n> hard bound on how long these files could linger around. Now there's not\n> anymore.\n\nSorry, I'm probably missing something obvious, but I'm not sure how this\nadds transaction ID wraparound risk. These files are tied to LSNs, and\nAFAIK they won't impact slots' xmins.\n\n>> +#ifdef HAVE_SYNCFS\n>> +\n>> +\t/*\n>> +\t * If we are doing a shutdown or end-of-recovery checkpoint, let's use\n>> +\t * syncfs() to flush the mappings to disk instead of flushing each one\n>> +\t * individually. This may save us quite a bit of time when there are many\n>> +\t * such files to flush.\n>> +\t */\n> \n> I am doubtful this is a good idea. This will cause all dirty files to be\n> written back, even ones we don't need to be written back. At once. Very\n> possibly *slowing down* the shutdown.\n> \n> What is even the theory of the case here? That there's so many dirty mapping\n> files that fsyncing them will take too long? That iterating would take too\n> long?\n\nWell, yes. My idea was to model this after 61752af, which allows using\nsyncfs() instead of individually fsync-ing every file in the data\ndirectory. 
However, I would likely need to introduce a GUC because 1) as\nyou pointed out, it might be slower and 2) syncfs() doesn't report errors\non older versions of Linux.\n\nTBH I do feel like this one is a bit of a stretch, so I am okay with\nleaving it out for now.\n\n>> 5 files changed, 317 insertions(+), 9 deletions(-)\n> \n> This seems such an increase in complexity and fragility that I really doubt\n> this is a good idea.\n\nI think that's a fair point. I'm okay with leaving this one out for now,\ntoo.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 16 Feb 2022 20:14:04 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-02-16 20:14:04 -0800, Nathan Bossart wrote:\n> >> -\twhile ((spc_de = ReadDirExtended(spc_dir, \"pg_tblspc\", LOG)) != NULL)\n> >> +\twhile (!ShutdownRequestPending &&\n> >> +\t\t (spc_de = ReadDirExtended(spc_dir, \"pg_tblspc\", LOG)) != NULL)\n> >\n> > Uh, huh? It strikes me as a supremely bad idea to have functions *silently*\n> > not do their jobs when ShutdownRequestPending is set, particularly without a\n> > huge fat comment.\n>\n> The idea was to avoid delaying shutdown because we're waiting for the\n> custodian to finish relatively nonessential tasks. Another option might be\n> to just exit immediately when the custodian receives a shutdown request.\n\nI think we should just not do either of these and let the functions\nfinish. For the cases where shutdown really needs to be immediate\nthere's, uhm, immediate mode shutdowns.\n\n\n> > Why does this not open us up to new xid wraparound issues? Before there was a\n> > hard bound on how long these files could linger around. Now there's not\n> > anymore.\n>\n> Sorry, I'm probably missing something obvious, but I'm not sure how this\n> adds transaction ID wraparound risk. 
These files are tied to LSNs, and\n> AFAIK they won't impact slots' xmins.\n\nThey're accessed by xid. The LSN is just for cleanup. Accessing files\nleft over from a previous transaction with the same xid wouldn't be\ngood - we'd read wrong catalog state for decoding...\n\nAndres\n\n\n", "msg_date": "Wed, 16 Feb 2022 22:59:38 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Wed, Feb 16, 2022 at 10:59:38PM -0800, Andres Freund wrote:\n> On 2022-02-16 20:14:04 -0800, Nathan Bossart wrote:\n>> >> -\twhile ((spc_de = ReadDirExtended(spc_dir, \"pg_tblspc\", LOG)) != NULL)\n>> >> +\twhile (!ShutdownRequestPending &&\n>> >> +\t\t (spc_de = ReadDirExtended(spc_dir, \"pg_tblspc\", LOG)) != NULL)\n>> >\n>> > Uh, huh? It strikes me as a supremely bad idea to have functions *silently*\n>> > not do their jobs when ShutdownRequestPending is set, particularly without a\n>> > huge fat comment.\n>>\n>> The idea was to avoid delaying shutdown because we're waiting for the\n>> custodian to finish relatively nonessential tasks. Another option might be\n>> to just exit immediately when the custodian receives a shutdown request.\n> \n> I think we should just not do either of these and let the functions\n> finish. For the cases where shutdown really needs to be immediate\n> there's, uhm, immediate mode shutdowns.\n\nAlright.\n\n>> > Why does this not open us up to new xid wraparound issues? Before there was a\n>> > hard bound on how long these files could linger around. Now there's not\n>> > anymore.\n>>\n>> Sorry, I'm probably missing something obvious, but I'm not sure how this\n>> adds transaction ID wraparound risk. These files are tied to LSNs, and\n>> AFAIK they won't impact slots' xmins.\n> \n> They're accessed by xid. The LSN is just for cleanup. 
Accessing files\n> left over from a previous transaction with the same xid wouldn't be\n> good - we'd read wrong catalog state for decoding...\n\nOkay, that part makes sense to me. However, I'm still confused about how\nthis is handled today and why moving cleanup to a separate auxiliary\nprocess makes matters worse. I've done quite a bit of reading, and I\nhaven't found anything that seems intended to prevent this problem. Do you\nhave any pointers?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 17 Feb 2022 10:23:37 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-02-17 10:23:37 -0800, Nathan Bossart wrote:\n> On Wed, Feb 16, 2022 at 10:59:38PM -0800, Andres Freund wrote:\n> > They're accessed by xid. The LSN is just for cleanup. Accessing files\n> > left over from a previous transaction with the same xid wouldn't be\n> > good - we'd read wrong catalog state for decoding...\n> \n> Okay, that part makes sense to me. However, I'm still confused about how\n> this is handled today and why moving cleanup to a separate auxiliary\n> process makes matters worse.\n\nRight now cleanup happens every checkpoint. So cleanup can't be deferred all\nthat far. We currently include a bunch of 32bit xids inside checkspoints, so\nif they're rarer than 2^31-1, we're in trouble independent of logical\ndecoding.\n\nBut with this patch cleanup of logical decoding mapping files (and other\npieces) can be *indefinitely* deferred, without being noticeable.\n\n\nOne possible way to improve this would be to switch the on-disk filenames to\nbe based on 64bit xids. But that might also present some problems (file name\nlength, cost of converting 32bit xids to 64bit xids).\n\n\n> I've done quite a bit of reading, and I haven't found anything that seems\n> intended to prevent this problem. 
Do you have any pointers?\n\nI don't know if we have an iron-clad enforcement of checkpoints happening\nevery 2*31-1 xids. It's very unlikely to happen - you'd run out of space\netc. But it'd be good to have something better than that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Feb 2022 11:27:09 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Thu, Feb 17, 2022 at 11:27:09AM -0800, Andres Freund wrote:\n> On 2022-02-17 10:23:37 -0800, Nathan Bossart wrote:\n>> On Wed, Feb 16, 2022 at 10:59:38PM -0800, Andres Freund wrote:\n>> > They're accessed by xid. The LSN is just for cleanup. Accessing files\n>> > left over from a previous transaction with the same xid wouldn't be\n>> > good - we'd read wrong catalog state for decoding...\n>> \n>> Okay, that part makes sense to me. However, I'm still confused about how\n>> this is handled today and why moving cleanup to a separate auxiliary\n>> process makes matters worse.\n> \n> Right now cleanup happens every checkpoint. So cleanup can't be deferred all\n> that far. We currently include a bunch of 32bit xids inside checkspoints, so\n> if they're rarer than 2^31-1, we're in trouble independent of logical\n> decoding.\n> \n> But with this patch cleanup of logical decoding mapping files (and other\n> pieces) can be *indefinitely* deferred, without being noticeable.\n\nI see. The custodian should ordinarily remove the files as quickly as\npossible. In fact, I bet it will typically line up with checkpoints for\nmost users, as the checkpointer will set the latch. However, if there are\nmany temporary files to clean up, removing the logical decoding files could\nbe delayed for some time, as you said.\n\n> One possible way to improve this would be to switch the on-disk filenames to\n> be based on 64bit xids. 
But that might also present some problems (file name\n> length, cost of converting 32bit xids to 64bit xids).\n\nOkay.\n\n>> I've done quite a bit of reading, and I haven't found anything that seems\n>> intended to prevent this problem. Do you have any pointers?\n> \n> I don't know if we have an iron-clad enforcement of checkpoints happening\n> every 2*31-1 xids. It's very unlikely to happen - you'd run out of space\n> etc. But it'd be good to have something better than that.\n\nOkay. So IIUC the problem might already exist today, but offloading these\ntasks to a separate process could make it more likely.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 17 Feb 2022 13:00:22 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-02-17 13:00:22 -0800, Nathan Bossart wrote:\n> Okay. So IIUC the problem might already exist today, but offloading these\n> tasks to a separate process could make it more likely.\n\nVastly more, yes. Before checkpoints not happening would be a (but not a\ngreat) form of backpressure. You can't cancel them without triggering a\ncrash-restart. Whereas custodian can be cancelled etc.\n\n\nAs I said before, I think this is tackling things from the wrong end. Instead\nof moving the sometimes expensive task out of the way, but still expensive,\nthe focus should be to make the expensive task cheaper.\n\nAs far as I understand, the primary concern are logical decoding serialized\nsnapshots, because a lot of them can accumulate if there e.g. is an old unused\n/ far behind slot. It should be easy to reduce the number of those snapshots\nby e.g. eliding some redundant ones. 
Perhaps we could also make backends in\nlogical decoding occasionally do a bit of cleanup themselves.\n\nI've not seen reports of the number of mapping files to be an real issue?\n\n\nThe improvements around deleting temporary files and serialized snapshots\nafaict don't require a dedicated process - they're only relevant during\nstartup. We could use the approach of renaming the directory out of the way as\ndone in this patchset but perform the cleanup in the startup process after\nwe're up.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Feb 2022 14:28:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Thu, Feb 17, 2022 at 02:28:29PM -0800, Andres Freund wrote:\n> As far as I understand, the primary concern are logical decoding serialized\n> snapshots, because a lot of them can accumulate if there e.g. is an old unused\n> / far behind slot. It should be easy to reduce the number of those snapshots\n> by e.g. eliding some redundant ones. Perhaps we could also make backends in\n> logical decoding occasionally do a bit of cleanup themselves.\n> \n> I've not seen reports of the number of mapping files to be an real issue?\n\nI routinely see all four of these tasks impacting customers, but I'd say\nthe most common one is the temporary file cleanup. Besides eliminating\nsome redundant files and having backends perform some cleanup, what do you\nthink about skipping the logical decoding cleanup during\nend-of-recovery/shutdown checkpoints? This was something that Bharath\nbrought up a while back [0]. 
As I noted in that thread, startup and\nshutdown could still take a while if checkpoints are regularly delayed due\nto logical decoding cleanup, but that might still help avoid a bit of\ndowntime.\n\n> The improvements around deleting temporary files and serialized snapshots\n> afaict don't require a dedicated process - they're only relevant during\n> startup. We could use the approach of renaming the directory out of the way as\n> done in this patchset but perform the cleanup in the startup process after\n> we're up.\n\nPerhaps this is a good place to start. As I mentioned above, IME the\ntemporary file cleanup is the most common problem, so I think even getting\nthat one fixed would be a huge improvement.\n\n[0] https://postgr.es/m/CALj2ACXkkSL8EBpR7m%3DMt%3DyRGBhevcCs3x4fsp3Bc-D13yyHOg%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 17 Feb 2022 14:58:38 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-02-17 14:58:38 -0800, Nathan Bossart wrote:\n> On Thu, Feb 17, 2022 at 02:28:29PM -0800, Andres Freund wrote:\n> > As far as I understand, the primary concern are logical decoding serialized\n> > snapshots, because a lot of them can accumulate if there e.g. is an old unused\n> > / far behind slot. It should be easy to reduce the number of those snapshots\n> > by e.g. eliding some redundant ones. 
Perhaps we could also make backends in\n> > logical decoding occasionally do a bit of cleanup themselves.\n> > \n> > I've not seen reports of the number of mapping files to be an real issue?\n> \n> I routinely see all four of these tasks impacting customers, but I'd say\n> the most common one is the temporary file cleanup.\n\nI took temp file cleanup and StartupReorderBuffer() \"out of consideration\" for\ncustodian, because they're not needed during normal running.\n\n\n> Besides eliminating some redundant files and having backends perform some\n> cleanup, what do you think about skipping the logical decoding cleanup\n> during end-of-recovery/shutdown checkpoints?\n\nI strongly disagree with it. Then you might never get the cleanup done, but\nkeep on operating until you hit corruption issues.\n\n\n> > The improvements around deleting temporary files and serialized snapshots\n> > afaict don't require a dedicated process - they're only relevant during\n> > startup. We could use the approach of renaming the directory out of the way as\n> > done in this patchset but perform the cleanup in the startup process after\n> > we're up.\n> \n> Perhaps this is a good place to start. As I mentioned above, IME the\n> temporary file cleanup is the most common problem, so I think even getting\n> that one fixed would be a huge improvement.\n\nCool.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Feb 2022 15:12:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Thu, Feb 17, 2022 at 03:12:47PM -0800, Andres Freund wrote:\n>> > The improvements around deleting temporary files and serialized snapshots\n>> > afaict don't require a dedicated process - they're only relevant during\n>> > startup. 
We could use the approach of renaming the directory out of the way as\n>> > done in this patchset but perform the cleanup in the startup process after\n>> > we're up.\n>> \n>> Perhaps this is a good place to start. As I mentioned above, IME the\n>> temporary file cleanup is the most common problem, so I think even getting\n>> that one fixed would be a huge improvement.\n> \n> Cool.\n\nHm. How should this work for standbys? I can think of the following\noptions:\n\t1. Do temporary file cleanup in the postmaster (as it does today).\n\t2. Pause after allowing connections to clean up temporary files.\n\t3. Do small amounts of temporary file cleanup whenever there is an\n\t opportunity during recovery.\n\t4. Wait until recovery completes before cleaning up temporary files.\n\nI'm not too thrilled about any of these options.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 18 Feb 2022 08:44:54 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "> On Thu, Feb 17, 2022 at 03:12:47PM -0800, Andres Freund wrote:\n>>> > The improvements around deleting temporary files and serialized snapshots\n>>> > afaict don't require a dedicated process - they're only relevant during\n>>> > startup. 
We could use the approach of renaming the directory out of the way as\n>>> > done in this patchset but perform the cleanup in the startup process after\n>>> > we're up.\n\nBTW I know you don't like the dedicated process approach, but one\nimprovement to that approach could be to shut down the custodian process\nwhen it has nothing to do.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 18 Feb 2022 12:51:11 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "It seems unlikely that anything discussed in this thread will be committed\nfor v15, so I've adjusted the commitfest entry to v16 and moved it to the\nnext commitfest.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 17 Mar 2022 16:07:58 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Fri, 18 Feb 2022 at 20:51, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> > On Thu, Feb 17, 2022 at 03:12:47PM -0800, Andres Freund wrote:\n> >>> > The improvements around deleting temporary files and serialized snapshots\n> >>> > afaict don't require a dedicated process - they're only relevant during\n> >>> > startup. We could use the approach of renaming the directory out of the way as\n> >>> > done in this patchset but perform the cleanup in the startup process after\n> >>> > we're up.\n>\n> BTW I know you don't like the dedicated process approach, but one\n> improvement to that approach could be to shut down the custodian process\n> when it has nothing to do.\n\nHaving a central cleanup process makes a lot of sense. There is a long\nlist of potential tasks for such a process. 
My understanding is that\nautovacuum already has an interface for handling additional workload\ntypes, which is how BRIN indexes are handled. Do we really need a new\nprocess? If so, let's do this now.\n\nNathan's point that certain tasks are blocking fast startup is a good\none and higher availability is a critical end goal. The thought that\nwe should complete these tasks during checkpoint is a good one, but\ncheckpoints should NOT be delayed by long running tasks that are\nsecondary to availability.\n\nAndres' point that it would be better to avoid long running tasks is\ngood, if that is possible. That can be done better over time. This\npoint does not block the higher level goal of better availability\nasap, so I support Nathan's overall proposals.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 23 Jun 2022 12:58:15 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Thu, Jun 23, 2022 at 7:58 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> Having a central cleanup process makes a lot of sense. There is a long\n> list of potential tasks for such a process. My understanding is that\n> autovacuum already has an interface for handling additional workload\n> types, which is how BRIN indexes are handled. Do we really need a new\n> process?\n\nIt seems to me that if there's a long list of possible tasks for such\na process, that's actually a trickier situation than if there were\nonly one or two, because it may happen that when task X is really\nurgent, the process is already busy with task Y.\n\nI don't think that piggybacking more stuff onto autovacuum is a very\ngood idea for this exact reason. We already know that autovacuum\nworkers can get so busy that they can't keep up with the need to\nvacuum and analyze tables. 
If we give them more things to do, that\nfigures to make it worse, at least on busy systems.\n\nI do agree that a general mechanism for getting cleanup tasks done in\nthe background could be a useful thing to have, but I feel like it's\nhard to see exactly how to make it work well. We can't just allow it\nto spin up a million new processes, but at the same time, if it can't\nguarantee that time-critical tasks get performed relatively quickly,\nit's pretty worthless.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Jun 2022 09:46:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Thu, 23 Jun 2022 at 14:46, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jun 23, 2022 at 7:58 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > Having a central cleanup process makes a lot of sense. There is a long\n> > list of potential tasks for such a process. My understanding is that\n> > autovacuum already has an interface for handling additional workload\n> > types, which is how BRIN indexes are handled. Do we really need a new\n> > process?\n>\n> It seems to me that if there's a long list of possible tasks for such\n> a process, that's actually a trickier situation than if there were\n> only one or two, because it may happen that when task X is really\n> urgent, the process is already busy with task Y.\n>\n> I don't think that piggybacking more stuff onto autovacuum is a very\n> good idea for this exact reason. We already know that autovacuum\n> workers can get so busy that they can't keep up with the need to\n> vacuum and analyze tables. 
If we give them more things to do, that\n> figures to make it worse, at least on busy systems.\n>\n> I do agree that a general mechanism for getting cleanup tasks done in\n> the background could be a useful thing to have, but I feel like it's\n> hard to see exactly how to make it work well. We can't just allow it\n> to spin up a million new processes, but at the same time, if it can't\n> guarantee that time-critical tasks get performed relatively quickly,\n> it's pretty worthless.\n\nMost of the tasks mentioned aren't time critical.\n\nI have no objection to a new auxiliary process to execute those tasks,\nwhich can be spawned when needed.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 23 Jun 2022 18:13:26 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Thu, Jun 23, 2022 at 09:46:28AM -0400, Robert Haas wrote:\n> I do agree that a general mechanism for getting cleanup tasks done in\n> the background could be a useful thing to have, but I feel like it's\n> hard to see exactly how to make it work well. We can't just allow it\n> to spin up a million new processes, but at the same time, if it can't\n> guarantee that time-critical tasks get performed relatively quickly,\n> it's pretty worthless.\n\nMy intent with this new auxiliary process is to offload tasks that aren't\nparticularly time-critical. They are only time-critical in the sense that\n1) you might eventually run out of space and 2) you might encounter\nwraparound with the logical replication files. But AFAICT these same risks\nexist today in the checkpointer approach, although maybe not to the same\nextent. In any case, 2 seems solvable to me outside of this patch set.\n\nI'm grateful for the discussion in this thread so far, but I'm not seeing a\nclear path forward. 
I'm glad to see threads like the one to stop doing\nend-of-recovery checkpoints [0], but I don't know if it will be possible to\nsolve all of these availability concerns in a piecemeal fashion. I remain\nopen to exploring other suggested approaches beyond creating a new\nauxiliary process.\n\n[0] https://postgr.es/m/CA%2BTgmobrM2jvkiccCS9NgFcdjNSgAvk1qcAPx5S6F%2BoJT3D2mQ%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 23 Jun 2022 10:15:52 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Thu, 23 Jun 2022 at 18:15, Nathan Bossart <nathandbossart@gmail.com> wrote:\n\n> I'm grateful for the discussion in this thread so far, but I'm not seeing a\n> clear path forward.\n\n+1 to add the new auxiliary process.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 24 Jun 2022 11:45:22 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Fri, Jun 24, 2022 at 11:45:22AM +0100, Simon Riggs wrote:\n> On Thu, 23 Jun 2022 at 18:15, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> I'm grateful for the discussion in this thread so far, but I'm not seeing a\n>> clear path forward.\n> \n> +1 to add the new auxiliary process.\n\nI went ahead and put together a new patch set for this in which I've\nattempted to address most of the feedback from upthread. Notably, I've\nabandoned 0007 and 0008, added a way for processes to request specific\ntasks for the custodian, and removed all the checks for\nShutdownRequestPending.\n\nI haven't addressed the existing transaction ID wraparound risk with the\nlogical replication files. 
My instinct is that this deserves its own\nthread, and it might need to be considered a prerequisite to this change\nbased on the prior discussion here.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 2 Jul 2022 15:05:54 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-07-02 15:05:54 -0700, Nathan Bossart wrote:\n> +\t\t/* Obtain requested tasks */\n> +\t\tSpinLockAcquire(&CustodianShmem->cust_lck);\n> +\t\tflags = CustodianShmem->cust_flags;\n> +\t\tCustodianShmem->cust_flags = 0;\n> +\t\tSpinLockRelease(&CustodianShmem->cust_lck);\n\nJust resetting the flags to 0 is problematic. Consider what happens if there are\ntwo tasks and the one processed first errors out. You'll lose information\nabout needing to run the second task.\n\n\n> +\t\t/* TODO: offloaded tasks go here */\n\nSeems we're going to need some sorting of which tasks are most \"urgent\" / need\nto be processed next if we plan to make this into some generic facility.\n\n\n> +/*\n> + * RequestCustodian\n> + *\t\tCalled to request a custodian task.\n> + */\n> +void\n> +RequestCustodian(int flags)\n> +{\n> +\tSpinLockAcquire(&CustodianShmem->cust_lck);\n> +\tCustodianShmem->cust_flags |= flags;\n> +\tSpinLockRelease(&CustodianShmem->cust_lck);\n> +\n> +\tif (ProcGlobal->custodianLatch)\n> +\t\tSetLatch(ProcGlobal->custodianLatch);\n> +}\n\nWith this representation we can't really implement waiting for a task or\nsuch. 
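To make the lost-request point above concrete, here is a tiny standalone sketch (hypothetical names and a toy "error", not the patch's code) of how clearing the whole flags word up front drops a request when the first task errors out:

```c
/* Toy model of the hazard described above; not PostgreSQL code. */
#define TASK_A (1 << 0)
#define TASK_B (1 << 1)

static int shared_flags;        /* stands in for cust_flags in shmem */

/* Returns the requests still recorded after task A "errors out". */
static int surviving_requests(void)
{
    int flags;

    shared_flags = TASK_A | TASK_B;     /* two tasks are requested */

    flags = shared_flags;               /* grab the requested tasks ... */
    shared_flags = 0;                   /* ... and clear them all at once */

    if (flags & TASK_A)
        return shared_flags;            /* task A errors out before B runs;
                                         * the local copy is gone and the
                                         * shared word no longer says B was
                                         * ever requested */

    return shared_flags;                /* (not reached in this toy) */
}
```

In this sketch surviving_requests() comes back with no bits set even though TASK_B was requested and never ran; a flag that is only cleared once its task completes would not have this problem.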
And it doesn't seem like a great API for the caller to just specify a\nmix of flags.\n\n\n> +\t\t/* Calculate how long to sleep */\n> +\t\tend_time = (pg_time_t) time(NULL);\n> +\t\telapsed_secs = end_time - start_time;\n> +\t\tif (elapsed_secs >= CUSTODIAN_TIMEOUT_S)\n> +\t\t\tcontinue;\t\t\t/* no sleep for us */\n> +\t\tcur_timeout = CUSTODIAN_TIMEOUT_S - elapsed_secs;\n> +\n> +\t\t(void) WaitLatch(MyLatch,\n> +\t\t\t\t\t\t WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n> +\t\t\t\t\t\t cur_timeout * 1000L /* convert to ms */ ,\n> +\t\t\t\t\t\t WAIT_EVENT_CUSTODIAN_MAIN);\n> +\t}\n\nI don't think we should have this thing wake up on a regular basis. We're\ndoing way too much of that already, and I don't think we should add\nmore. Either we need a list of times when tasks need to be processed and wake\nup at that time, or just wake up if somebody requests a task.\n\n\n> From 5e95666efa31d6c8aa351e430c37ead6e27acb72 Mon Sep 17 00:00:00 2001\n> From: Nathan Bossart <bossartn@amazon.com>\n> Date: Sun, 5 Dec 2021 21:16:44 -0800\n> Subject: [PATCH v6 3/6] Split pgsql_tmp cleanup into two stages.\n> \n> First, pgsql_tmp directories will be renamed to stage them for\n> removal. Then, all files in pgsql_tmp are removed before removing\n> the staged directories themselves. 
This change is being made in\n> preparation for a follow-up change to offload most temporary file\n> cleanup to the new custodian process.\n\n> Note that temporary relation files cannot be cleaned up via the\n> aforementioned strategy and will not be offloaded to the custodian.\n\n> ---\n> src/backend/postmaster/postmaster.c | 8 +-\n> src/backend/storage/file/fd.c | 174 ++++++++++++++++++++++++----\n> src/include/storage/fd.h | 2 +-\n> 3 files changed, 160 insertions(+), 24 deletions(-)\n> \n> diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c\n> index e67370012f..82aa0c6307 100644\n> --- a/src/backend/postmaster/postmaster.c\n> +++ b/src/backend/postmaster/postmaster.c\n> @@ -1402,7 +1402,8 @@ PostmasterMain(int argc, char *argv[])\n> \t * Remove old temporary files. At this point there can be no other\n> \t * Postgres processes running in this directory, so this should be safe.\n> \t */\n> -\tRemovePgTempFiles();\n> +\tRemovePgTempFiles(true, true);\n> +\tRemovePgTempFiles(false, false);\n\nThis is imo hard to read and easy to get wrong. Make it multiple functions or\npass named flags in.\n\n\n> + * StagePgTempDirForRemoval\n> + *\n> + * This function renames the given directory with a special prefix that\n> + * RemoveStagedPgTempDirs() will know to look for. An integer is appended to\n> + * the end of the new directory name in case previously staged pgsql_tmp\n> + * directories have not yet been removed.\n> + */\n\nIt doesn't seem great to need to iterate through a directory that contains\nother files, potentially a significant number. 
How about having a\nstaged_for_removal/ directory, and then only scanning that?\n\n\n> +static void\n> +StagePgTempDirForRemoval(const char *tmp_dir)\n> +{\n> +\tDIR\t\t *dir;\n> +\tchar\t\tstage_path[MAXPGPATH * 2];\n> +\tchar\t\tparent_path[MAXPGPATH * 2];\n> +\tstruct stat statbuf;\n> +\n> +\t/*\n> +\t * If tmp_dir doesn't exist, there is nothing to stage.\n> +\t */\n> +\tdir = AllocateDir(tmp_dir);\n> +\tif (dir == NULL)\n> +\t{\n> +\t\tif (errno != ENOENT)\n> +\t\t\tereport(LOG,\n> +\t\t\t\t\t(errcode_for_file_access(),\n> +\t\t\t\t\t errmsg(\"could not open directory \\\"%s\\\": %m\", tmp_dir)));\n> +\t\treturn;\n> +\t}\n> +\tFreeDir(dir);\n> +\n> +\tstrlcpy(parent_path, tmp_dir, MAXPGPATH * 2);\n> +\tget_parent_directory(parent_path);\n> +\n> +\t/*\n> +\t * get_parent_directory() returns an empty string if the input argument is\n> +\t * just a file name (see comments in path.c), so handle that as being the\n> +\t * current directory.\n> +\t */\n> +\tif (strlen(parent_path) == 0)\n> +\t\tstrlcpy(parent_path, \".\", MAXPGPATH * 2);\n> +\n> +\t/*\n> +\t * Find a name for the stage directory. We just increment an integer at the\n> +\t * end of the name until we find one that doesn't exist.\n> +\t */\n> +\tfor (int n = 0; n <= INT_MAX; n++)\n> +\t{\n> +\t\tsnprintf(stage_path, sizeof(stage_path), \"%s/%s%d\", parent_path,\n> +\t\t\t\t PG_TEMP_DIR_TO_REMOVE_PREFIX, n);\n> +\n> +\t\tif (stat(stage_path, &statbuf) != 0)\n> +\t\t{\n> +\t\t\tif (errno == ENOENT)\n> +\t\t\t\tbreak;\n> +\n> +\t\t\tereport(LOG,\n> +\t\t\t\t\t(errcode_for_file_access(),\n> +\t\t\t\t\t errmsg(\"could not stat file \\\"%s\\\": %m\", stage_path)));\n> +\t\t\treturn;\n> +\t\t}\n> +\n> +\t\tstage_path[0] = '\\0';\n\nI still dislike this approach. Loops until INT_MAX, not interruptible... 
Can't\nwe prevent conflicts by adding a timestamp or such?\n\n\n> +\t}\n> +\n> +\t/*\n> +\t * In the unlikely event that we couldn't find a name for the stage\n> +\t * directory, bail out.\n> +\t */\n> +\tif (stage_path[0] == '\\0')\n> +\t{\n> +\t\tereport(LOG,\n> +\t\t\t\t(errmsg(\"could not stage \\\"%s\\\" for deletion\",\n> +\t\t\t\t\t\ttmp_dir)));\n> +\t\treturn;\n> +\t}\n\nThat's imo very much not ok. Just continuing in unexpected situations is a\nrecipe for introducing bugs / being hard to debug.\n\n\n> From 43042799b96b588a446c509637b5acf570e2a325 Mon Sep 17 00:00:00 2001\n\n> From a58a6bb70785a557a150680b64cd8ce78ce1b73a Mon Sep 17 00:00:00 2001\n> From: Nathan Bossart <bossartn@amazon.com>\n> Date: Sun, 5 Dec 2021 22:02:40 -0800\n> Subject: [PATCH v6 5/6] Move removal of old serialized snapshots to custodian.\n> \n> This was only done during checkpoints because it was a convenient\n> place to put it.\n\nAs mentioned before, having it done as part of checkpoints provides pretty\ndecent wraparound protection - yes, it's not theoretically perfect, but in\nreality it's very unlikely you can have an xid wraparound within one\ncheckpoint. 
I've mentioned this before, so at the very least I'd like to see\nthis acknowledged in the commit message.\n\n\n> However, if there are many snapshots to remove, it can significantly extend\n> checkpoint time.\n\nI'd really like to see a reproducer or profile for this...\n\n\n> +\t\t/*\n> +\t\t * Remove serialized snapshots that are no longer required by any\n> +\t\t * logical replication slot.\n> +\t\t *\n> +\t\t * It is not important for these to be removed in single-user mode, so\n> +\t\t * we don't need any extra handling outside of the custodian process for\n> +\t\t * this.\n> +\t\t */\n\nI don't think this claim is correct.\n\n\n\n> From 0add8bb19a4ee83c6a6ec1f313329d737bf304a5 Mon Sep 17 00:00:00 2001\n> From: Nathan Bossart <bossartn@amazon.com>\n> Date: Sun, 12 Dec 2021 22:07:11 -0800\n> Subject: [PATCH v6 6/6] Move removal of old logical rewrite mapping files to\n> custodian.\n> \n> If there are many such files to remove, checkpoints can take much\n> longer. To avoid this, move this work to the newly-introduced\n> custodian process.\n\nAs above I'd like to know why this could take that long. What are you doing\nthat there's so many mapping files (which only exist for catalog tables!) 
that\nthis is a significant fraction of a checkpoint?\n\n\n> ---\n> src/backend/access/heap/rewriteheap.c | 79 +++++++++++++++++++++++----\n> src/backend/postmaster/custodian.c | 44 +++++++++++++++\n> src/include/access/rewriteheap.h | 1 +\n> src/include/postmaster/custodian.h | 5 ++\n> 4 files changed, 119 insertions(+), 10 deletions(-)\n> \n> diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c\n> index 2a53826736..edeab65e60 100644\n> --- a/src/backend/access/heap/rewriteheap.c\n> +++ b/src/backend/access/heap/rewriteheap.c\n> @@ -116,6 +116,7 @@\n> #include \"lib/ilist.h\"\n> #include \"miscadmin.h\"\n> #include \"pgstat.h\"\n> +#include \"postmaster/custodian.h\"\n> #include \"replication/logical.h\"\n> #include \"replication/slot.h\"\n> #include \"storage/bufmgr.h\"\n> @@ -1182,7 +1183,8 @@ heap_xlog_logical_rewrite(XLogReaderState *r)\n> * Perform a checkpoint for logical rewrite mappings\n> *\n> * This serves two tasks:\n> - * 1) Remove all mappings not needed anymore based on the logical restart LSN\n> + * 1) Alert the custodian to remove all mappings not needed anymore based on the\n> + * logical restart LSN\n> * 2) Flush all remaining mappings to disk, so that replay after a checkpoint\n> *\t only has to deal with the parts of a mapping that have been written out\n> *\t after the checkpoint started.\n> @@ -1210,6 +1212,10 @@ CheckPointLogicalRewriteHeap(void)\n> \tif (cutoff != InvalidXLogRecPtr && redo < cutoff)\n> \t\tcutoff = redo;\n> \n> +\t/* let the custodian know what it can remove */\n> +\tCustodianSetLogicalRewriteCutoff(cutoff);\n\nSetting this variable in a custodian datastructure and then fetching it from\nthere seems architecturally wrong to me.\n\n\n> +\tRequestCustodian(CUSTODIAN_REMOVE_REWRITE_MAPPINGS);\n\nWhat about single user mode?\n\n\nISTM that RequestCustodian() needs to either assert out if called in single\nuser mode, or execute tasks immediately in that context.\n\n\n> +\n> +/*\n> + * Remove 
all mappings not needed anymore based on the logical restart LSN saved\n> + * by the checkpointer. We use this saved value instead of calling\n> + * ReplicationSlotsComputeLogicalRestartLSN() so that we don't interfere with an\n> + * ongoing call to CheckPointLogicalRewriteHeap() that is flushing mappings to\n> + * disk.\n> + */\n\nWhat interference could there be?\n\n\n> +void\n> +RemoveOldLogicalRewriteMappings(void)\n> +{\n> +\tXLogRecPtr\tcutoff;\n> +\tDIR\t\t *mappings_dir;\n> +\tstruct dirent *mapping_de;\n> +\tchar\t\tpath[MAXPGPATH + 20];\n> +\tbool\t\tvalue_set = false;\n> +\n> +\tcutoff = CustodianGetLogicalRewriteCutoff(&value_set);\n> +\tif (!value_set)\n> +\t\treturn;\n\nAfaics nothing clears value_set - is that a good idea?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 2 Jul 2022 15:54:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Hi Andres,\n\nThanks for the prompt review.\n\nOn Sat, Jul 02, 2022 at 03:54:56PM -0700, Andres Freund wrote:\n> On 2022-07-02 15:05:54 -0700, Nathan Bossart wrote:\n>> +\t\t/* Obtain requested tasks */\n>> +\t\tSpinLockAcquire(&CustodianShmem->cust_lck);\n>> +\t\tflags = CustodianShmem->cust_flags;\n>> +\t\tCustodianShmem->cust_flags = 0;\n>> +\t\tSpinLockRelease(&CustodianShmem->cust_lck);\n> \n> Just resetting the flags to 0 is problematic. Consider what happens if there are\n> two tasks and the one processed first errors out. You'll lose information\n> about needing to run the second task.\n\nI think we also want to retry any failed tasks. The way v6 handles this is\nby requesting all tasks after an exception. 
Another way to handle this\ncould be to reset each individual flag before the task is executed, and\nthen we could surround each one with a PG_CATCH block that resets the flag.\nI'll do it this way in the next revision.\n\n>> +/*\n>> + * RequestCustodian\n>> + *\t\tCalled to request a custodian task.\n>> + */\n>> +void\n>> +RequestCustodian(int flags)\n>> +{\n>> +\tSpinLockAcquire(&CustodianShmem->cust_lck);\n>> +\tCustodianShmem->cust_flags |= flags;\n>> +\tSpinLockRelease(&CustodianShmem->cust_lck);\n>> +\n>> +\tif (ProcGlobal->custodianLatch)\n>> +\t\tSetLatch(ProcGlobal->custodianLatch);\n>> +}\n> \n> With this representation we can't really implement waiting for a task or\n> such. And it doesn't seem like a great API for the caller to just specify a\n> mix of flags.\n\nAt the moment, the idea is that nothing should need to wait for a task\nbecause the custodian only handles things that are relatively non-critical.\nIf that changes, this could probably be expanded to look more like\nRequestCheckpoint().\n\nWhat would you suggest using instead of a mix of flags?\n\n>> +\t\t/* Calculate how long to sleep */\n>> +\t\tend_time = (pg_time_t) time(NULL);\n>> +\t\telapsed_secs = end_time - start_time;\n>> +\t\tif (elapsed_secs >= CUSTODIAN_TIMEOUT_S)\n>> +\t\t\tcontinue;\t\t\t/* no sleep for us */\n>> +\t\tcur_timeout = CUSTODIAN_TIMEOUT_S - elapsed_secs;\n>> +\n>> +\t\t(void) WaitLatch(MyLatch,\n>> +\t\t\t\t\t\t WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n>> +\t\t\t\t\t\t cur_timeout * 1000L /* convert to ms */ ,\n>> +\t\t\t\t\t\t WAIT_EVENT_CUSTODIAN_MAIN);\n>> +\t}\n> \n> I don't think we should have this thing wake up on a regular basis. We're\n> doing way too much of that already, and I don't think we should add\n> more. Either we need a list of times when tasks need to be processed and wake\n> up at that time, or just wake up if somebody requests a task.\n\nI agree. 
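For what it's worth, the per-flag handling described above would look roughly like the following sketch (hypothetical names, with a plain return code standing in for PG_TRY/PG_CATCH — not the actual patch):

```c
/* Toy sketch of per-flag reset-and-restore; the real code would use
 * PG_TRY/PG_CATCH rather than a return code.  Not PostgreSQL code. */
#define TASK_A (1 << 0)
#define TASK_B (1 << 1)

static int shared_flags;

/* Toy task runner: task A always "fails", task B always succeeds. */
static int run_task(int task)
{
    return task != TASK_A;      /* 0 means the task errored out */
}

/* Run one requested task, re-arming its flag if it errors out. */
static void process_task(int task)
{
    if (!(shared_flags & task))
        return;

    shared_flags &= ~task;      /* clear only this task's flag ... */
    if (!run_task(task))
        shared_flags |= task;   /* ... and restore it on failure */
}

/* Returns the flags still pending after one pass over both tasks. */
static int pending_after_one_pass(void)
{
    shared_flags = TASK_A | TASK_B;
    process_task(TASK_A);       /* fails, so its flag is restored */
    process_task(TASK_B);       /* succeeds, so its flag stays cleared */
    return shared_flags;
}
```

With this shape, an error in one task can't drop the request for another, and only the failing task remains pending after a pass.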
I will remove the timeout in the next revision.\n\n>> diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c\n>> index e67370012f..82aa0c6307 100644\n>> --- a/src/backend/postmaster/postmaster.c\n>> +++ b/src/backend/postmaster/postmaster.c\n>> @@ -1402,7 +1402,8 @@ PostmasterMain(int argc, char *argv[])\n>> \t * Remove old temporary files. At this point there can be no other\n>> \t * Postgres processes running in this directory, so this should be safe.\n>> \t */\n>> -\tRemovePgTempFiles();\n>> +\tRemovePgTempFiles(true, true);\n>> +\tRemovePgTempFiles(false, false);\n> \n> This is imo hard to read and easy to get wrong. Make it multiple functions or\n> pass named flags in.\n\nWill do.\n\n>> + * StagePgTempDirForRemoval\n>> + *\n>> + * This function renames the given directory with a special prefix that\n>> + * RemoveStagedPgTempDirs() will know to look for. An integer is appended to\n>> + * the end of the new directory name in case previously staged pgsql_tmp\n>> + * directories have not yet been removed.\n>> + */\n> \n> It doesn't seem great to need to iterate through a directory that contains\n> other files, potentially a significant number. How about having a\n> staged_for_removal/ directory, and then only scanning that?\n\nYeah, that seems like a good idea. Will do.\n\n>> +\t/*\n>> +\t * Find a name for the stage directory. 
We just increment an integer at the\n>> +\t * end of the name until we find one that doesn't exist.\n>> +\t */\n>> +\tfor (int n = 0; n <= INT_MAX; n++)\n>> +\t{\n>> +\t\tsnprintf(stage_path, sizeof(stage_path), \"%s/%s%d\", parent_path,\n>> +\t\t\t\t PG_TEMP_DIR_TO_REMOVE_PREFIX, n);\n>> +\n>> +\t\tif (stat(stage_path, &statbuf) != 0)\n>> +\t\t{\n>> +\t\t\tif (errno == ENOENT)\n>> +\t\t\t\tbreak;\n>> +\n>> +\t\t\tereport(LOG,\n>> +\t\t\t\t\t(errcode_for_file_access(),\n>> +\t\t\t\t\t errmsg(\"could not stat file \\\"%s\\\": %m\", stage_path)));\n>> +\t\t\treturn;\n>> +\t\t}\n>> +\n>> +\t\tstage_path[0] = '\\0';\n> \n> I still dislike this approach. Loops until INT_MAX, not interruptible... Can't\n> we prevent conflicts by adding a timestamp or such?\n\nI suppose it's highly unlikely that we'd see a conflict if we used the\ntimestamp instead. I'll do it this way in the next revision if that seems\ngood enough.\n\n>> From a58a6bb70785a557a150680b64cd8ce78ce1b73a Mon Sep 17 00:00:00 2001\n>> From: Nathan Bossart <bossartn@amazon.com>\n>> Date: Sun, 5 Dec 2021 22:02:40 -0800\n>> Subject: [PATCH v6 5/6] Move removal of old serialized snapshots to custodian.\n>> \n>> This was only done during checkpoints because it was a convenient\n>> place to put it.\n> \n> As mentioned before, having it done as part of checkpoints provides pretty\n> decent wraparound protection - yes, it's not theoretically perfect, but in\n> reality it's very unlikely you can have an xid wraparound within one\n> checkpoint. I've mentioned this before, so at the very least I'd like to see\n> this acknowledged in the commit message.\n\nWill do.\n\n>> +\t/* let the custodian know what it can remove */\n>> +\tCustodianSetLogicalRewriteCutoff(cutoff);\n> \n> Setting this variable in a custodian datastructure and then fetching it from\n> there seems architecturally wrong to me.\n\nWhere do you think it should go? 
I previously had it in the checkpointer's\nshared memory, but you didn't like that the functions were declared in\nbgwriter.h (along with the other checkpoint stuff). If the checkpointer\nshared memory is the right place, should we create checkpointer.h and use\nthat instead?\n\n>> +\tRequestCustodian(CUSTODIAN_REMOVE_REWRITE_MAPPINGS);\n> \n> What about single user mode?\n> \n> \n> ISTM that RequestCustodian() needs to either assert out if called in single\n> user mode, or execute tasks immediately in that context.\n\nI like the idea of executing the tasks immediately since that's what\nhappens today in single-user mode. I will try doing it that way.\n\n>> +/*\n>> + * Remove all mappings not needed anymore based on the logical restart LSN saved\n>> + * by the checkpointer. We use this saved value instead of calling\n>> + * ReplicationSlotsComputeLogicalRestartLSN() so that we don't interfere with an\n>> + * ongoing call to CheckPointLogicalRewriteHeap() that is flushing mappings to\n>> + * disk.\n>> + */\n> \n> What interference could there be?\n\nMy concern is that the custodian could obtain a later cutoff than what the\ncheckpointer does, which might cause files to be concurrently unlinked and\nfsync'd. If we always use the checkpointer's cutoff, that shouldn't be a\nproblem. This could probably be better explained in this comment.\n\n>> +void\n>> +RemoveOldLogicalRewriteMappings(void)\n>> +{\n>> +\tXLogRecPtr\tcutoff;\n>> +\tDIR\t\t *mappings_dir;\n>> +\tstruct dirent *mapping_de;\n>> +\tchar\t\tpath[MAXPGPATH + 20];\n>> +\tbool\t\tvalue_set = false;\n>> +\n>> +\tcutoff = CustodianGetLogicalRewriteCutoff(&value_set);\n>> +\tif (!value_set)\n>> +\t\treturn;\n> \n> Afaics nothing clears values_set - is that a good idea?\n\nI'm using value_set to differentiate the case where InvalidXLogRecPtr means\nthe checkpointer hasn't determined a value yet versus the case where it\nhas. In the former, we don't want to take any action. 
In the latter, we\nwant to unlink all the files. Since we're moving to a request model for\nthe custodian, I might be able to remove this value_set stuff completely.\nIf that's not possible, it probably deserves a better comment.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 3 Jul 2022 10:07:54 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Hi,\n\nOn 2022-07-03 10:07:54 -0700, Nathan Bossart wrote:\n> Thanks for the prompt review.\n> \n> On Sat, Jul 02, 2022 at 03:54:56PM -0700, Andres Freund wrote:\n> > On 2022-07-02 15:05:54 -0700, Nathan Bossart wrote:\n> >> +\t\t/* Obtain requested tasks */\n> >> +\t\tSpinLockAcquire(&CustodianShmem->cust_lck);\n> >> +\t\tflags = CustodianShmem->cust_flags;\n> >> +\t\tCustodianShmem->cust_flags = 0;\n> >> +\t\tSpinLockRelease(&CustodianShmem->cust_lck);\n> > \n> > Just resetting the flags to 0 is problematic. Consider what happens if there are\n> > two tasks and the one processed first errors out. You'll lose information\n> > about needing to run the second task.\n> \n> I think we also want to retry any failed tasks.\n\nI don't think so, at least not if it's just going to retry that task straight\naway - then we'll get stuck on that one task forever. If we had the ability to\n\"queue\" it at the end, to be processed after other already dequeued tasks, it'd\nbe a different story.\n\n\n> The way v6 handles this is by requesting all tasks after an exception.\n\nIck. 
That strikes me as a bad idea.\n\n\n> >> +/*\n> >> + * RequestCustodian\n> >> + *\t\tCalled to request a custodian task.\n> >> + */\n> >> +void\n> >> +RequestCustodian(int flags)\n> >> +{\n> >> +\tSpinLockAcquire(&CustodianShmem->cust_lck);\n> >> +\tCustodianShmem->cust_flags |= flags;\n> >> +\tSpinLockRelease(&CustodianShmem->cust_lck);\n> >> +\n> >> +\tif (ProcGlobal->custodianLatch)\n> >> +\t\tSetLatch(ProcGlobal->custodianLatch);\n> >> +}\n> > \n> > With this representation we can't really implement waiting for a task or\n> > such. And it doesn't seem like a great API for the caller to just specify a\n> > mix of flags.\n> \n> At the moment, the idea is that nothing should need to wait for a task\n> because the custodian only handles things that are relatively non-critical.\n\nWhich is just plainly not true as the patchset stands...\n\nI think we're going to have to block if some cleanup as part of a checkpoint\nhasn't been completed by the next checkpoint - otherwise it'll just end up\nbeing way too confusing and there's absolutely no backpressure anymore.\n\n\n> If that changes, this could probably be expanded to look more like\n> RequestCheckpoint().\n> \n> What would you suggest using instead of a mix of flags?\n\nI suspect an array of tasks with requested and completed counters or such?\nWith a condition variable to wait on?\n\n\n> >> +\t/* let the custodian know what it can remove */\n> >> +\tCustodianSetLogicalRewriteCutoff(cutoff);\n> > \n> > Setting this variable in a custodian datastructure and then fetching it from\n> > there seems architecturally wrong to me.\n> \n> Where do you think it should go? I previously had it in the checkpointer's\n> shared memory, but you didn't like that the functions were declared in\n> bgwriter.h (along with the other checkpoint stuff). 
If the checkpointer\n> shared memory is the right place, should we create checkpointer.h and use\n> that instead?\n\nWell, so far I have not understood what the whole point of the shared state\nis, so I have a bit of a hard time answering this ;)\n\n\n> >> +/*\n> >> + * Remove all mappings not needed anymore based on the logical restart LSN saved\n> >> + * by the checkpointer. We use this saved value instead of calling\n> >> + * ReplicationSlotsComputeLogicalRestartLSN() so that we don't interfere with an\n> >> + * ongoing call to CheckPointLogicalRewriteHeap() that is flushing mappings to\n> >> + * disk.\n> >> + */\n> > \n> > What interference could there be?\n> \n> My concern is that the custodian could obtain a later cutoff than what the\n> checkpointer does, which might cause files to be concurrently unlinked and\n> fsync'd. If we always use the checkpointer's cutoff, that shouldn't be a\n> problem. This could probably be better explained in this comment.\n\nHow about having a Datum argument to RequestCustodian() that is forwarded to\nthe task?\n\n\n> >> +void\n> >> +RemoveOldLogicalRewriteMappings(void)\n> >> +{\n> >> +\tXLogRecPtr\tcutoff;\n> >> +\tDIR\t\t *mappings_dir;\n> >> +\tstruct dirent *mapping_de;\n> >> +\tchar\t\tpath[MAXPGPATH + 20];\n> >> +\tbool\t\tvalue_set = false;\n> >> +\n> >> +\tcutoff = CustodianGetLogicalRewriteCutoff(&value_set);\n> >> +\tif (!value_set)\n> >> +\t\treturn;\n> > \n> > Afaics nothing clears value_set - is that a good idea?\n> \n> I'm using value_set to differentiate the case where InvalidXLogRecPtr means\n> the checkpointer hasn't determined a value yet versus the case where it\n> has. In the former, we don't want to take any action. In the latter, we\n> want to unlink all the files. 
Since we're moving to a request model for\n> the custodian, I might be able to remove this value_set stuff completely.\n> If that's not possible, it probably deserves a better comment.\n\nIt would.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 3 Jul 2022 10:27:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Here's a new revision where I've attempted to address all the feedback I've\nreceived thus far. Notably, the custodian now uses a queue for registering\ntasks and determining which tasks to execute. Other changes include\nsplitting the temporary file functions apart to avoid consecutive boolean\nflags, using a timestamp instead of an integer for the staging name for\ntemporary directories, moving temporary directories to a dedicated\ndirectory so that the custodian doesn't need to scan relation files,\nERROR-ing when something goes wrong when cleaning up temporary files,\nexecuting requested tasks immediately in single-user mode, and more.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 6 Jul 2022 09:51:10 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Wed, Jul 06, 2022 at 09:51:10AM -0700, Nathan Bossart wrote:\n> Here's a new revision where I've attempted to address all the feedback I've\n> received thus far. Notably, the custodian now uses a queue for registering\n> tasks and determining which tasks to execute. 
Other changes include\n> splitting the temporary file functions apart to avoid consecutive boolean\n> flags, using a timestamp instead of an integer for the staging name for\n> temporary directories, moving temporary directories to a dedicated\n> directory so that the custodian doesn't need to scan relation files,\n> ERROR-ing when something goes wrong when cleaning up temporary files,\n> executing requested tasks immediately in single-user mode, and more.\n\nHere is a rebased patch set for cfbot. There are no other differences\nbetween v7 and v8.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 11 Aug 2022 16:09:21 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Thu, Aug 11, 2022 at 04:09:21PM -0700, Nathan Bossart wrote:\n> Here is a rebased patch set for cfbot. There are no other differences\n> between v7 and v8.\n\nAnother rebase for cfbot.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 24 Aug 2022 09:46:24 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Wed, Aug 24, 2022 at 09:46:24AM -0700, Nathan Bossart wrote:\n> Another rebase for cfbot.\n\nAnd another.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 2 Sep 2022 15:07:44 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Fri, Sep 02, 2022 at 03:07:44PM -0700, Nathan Bossart wrote:\n> And another.\n\nv11 adds support for building with meson.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 23 Sep 2022 10:41:54 -0700", "msg_from": "Nathan Bossart 
<nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Fri, Sep 23, 2022 at 10:41:54AM -0700, Nathan Bossart wrote:\n> v11 adds support for building with meson.\n\nrebased\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 6 Nov 2022 14:38:42 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Sun, Nov 06, 2022 at 02:38:42PM -0800, Nathan Bossart wrote:\n> rebased\n\nanother rebase for cfbot\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 23 Nov 2022 16:19:07 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Thu, 24 Nov 2022 at 00:19, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Sun, Nov 06, 2022 at 02:38:42PM -0800, Nathan Bossart wrote:\n> > rebased\n>\n> another rebase for cfbot\n\n0001 seems good to me\n* I like that it sleeps forever until requested\n* not sure I believe that everything it does can always be aborted out\nof and shutdown - to achieve that you will need a\nCHECK_FOR_INTERRUPTS() calls in the loops in patches 5 and 6 at least\n* not sure why you want immediate execution of custodian tasks - I\nfeel supporting two modes will be a lot harder. For me, I would run\nlocally when !IsUnderPostmaster and also in an Assert build, so we can\ntest it works right - i.e. 
running in its own process is just a\nproduction optimization for performance (which is the stated reason\nfor having this)\n\n0005 seems good from what I know\n* There is no check to see if it worked in any sane time\n* It seems possible that \"Old\" might change meaning - will that make\nit break/fail?\n\n0006 seems good also\n* same comments for 5\n\nRather than explicitly use DEBUG1 everywhere I would have an\n#define CUSTODIAN_LOG_LEVEL LOG\nso we can run with it in LOG mode and then set it to DEBUG1 with a one\nline change in a later phase of Beta\n\nI can't really comment with knowledge on sub-patches 0002 to 0004.\n\nPerhaps you should aim to get 1, 5, 6 committed first and then return\nto the others in a later CF/separate thread?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 24 Nov 2022 17:31:02 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Thanks for taking a look!\n\nOn Thu, Nov 24, 2022 at 05:31:02PM +0000, Simon Riggs wrote:\n> * not sure I believe that everything it does can always be aborted out\n> of and shutdown - to achieve that you will need a\n> CHECK_FOR_INTERRUPTS() calls in the loops in patches 5 and 6 at least\n\nI did something like this earlier, but was advised to simply let the\nfunctions finish as usual during shutdown [0]. I think this is what the\ncheckpointer process does today, anyway.\n\n> * not sure why you want immediate execution of custodian tasks - I\n> feel supporting two modes will be a lot harder. For me, I would run\n> locally when !IsUnderPostmaster and also in an Assert build, so we can\n> test it works right - i.e. 
running in its own process is just a\n> production optimization for performance (which is the stated reason\n> for having this)\n\nI added this because 0004 involves requesting a task from the postmaster,\nso checking for IsUnderPostmaster doesn't work. Those tasks would always\nrun immediately. However, we could use IsPostmasterEnvironment instead,\nwhich would allow us to remove the \"immediate\" argument. I did it this way\nin v14.\n\nI'm not sure about running locally in Assert builds. It's true that would\nhelp ensure there's test coverage for the task logic, but it would also\nreduce coverage for the custodian logic. And in general, I'm worried about\nhaving Assert builds use a different code path than production builds.\n\n> 0005 seems good from what I know\n> * There is no check to see if it worked in any sane time\n\nWhat did you have in mind? Should the custodian begin emitting WARNINGs\nafter a while?\n\n> * It seems possible that \"Old\" might change meaning - will that make\n> it break/fail?\n\nI don't believe so.\n\n> Rather than explicitly use DEBUG1 everywhere I would have an\n> #define CUSTODIAN_LOG_LEVEL LOG\n> so we can run with it in LOG mode and then set it to DEBUG1 with a one\n> line change in a later phase of Beta\n\nI can create a separate patch for this, but I don't think I've ever seen\nthis sort of thing before. 
Is the idea just to help with debugging during\nthe development phase?\n\n> I can't really comment with knowledge on sub-patches 0002 to 0004.\n> \n> Perhaps you should aim to get 1, 5, 6 committed first and then return\n> to the others in a later CF/separate thread?\n\nThat seems like a good idea since those are all relatively self-contained.\nI removed 0002-0004 in v14.\n\n[0] https://postgr.es/m/20220217065938.x2esfdppzypegn5j%40alap3.anarazel.de\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 27 Nov 2022 15:34:34 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Sun, 27 Nov 2022 at 23:34, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> Thanks for taking a look!\n>\n> On Thu, Nov 24, 2022 at 05:31:02PM +0000, Simon Riggs wrote:\n> > * not sure I believe that everything it does can always be aborted out\n> > of and shutdown - to achieve that you will need a\n> > CHECK_FOR_INTERRUPTS() calls in the loops in patches 5 and 6 at least\n>\n> I did something like this earlier, but was advised to simply let the\n> functions finish as usual during shutdown [0]. I think this is what the\n> checkpointer process does today, anyway.\n\nIf we say \"The custodian is not an essential process and can shutdown\nquickly when requested.\", and yet we know its not true in all cases,\nthen that will lead to misunderstandings and bugs.\n\nIf we perform a restart and the custodian is performing extra work\nthat delays shutdown, then it also delays restart. Given the title of\nthe thread, we should be looking to improve that, or at least know it\noccurred.\n\n> > * not sure why you want immediate execution of custodian tasks - I\n> > feel supporting two modes will be a lot harder. For me, I would run\n> > locally when !IsUnderPostmaster and also in an Assert build, so we can\n> > test it works right - i.e. 
running in its own process is just a\n> > production optimization for performance (which is the stated reason\n> > for having this)\n>\n> I added this because 0004 involves requesting a task from the postmaster,\n> so checking for IsUnderPostmaster doesn't work. Those tasks would always\n> run immediately. However, we could use IsPostmasterEnvironment instead,\n> which would allow us to remove the \"immediate\" argument. I did it this way\n> in v14.\n\nThanks\n\n> > 0005 seems good from what I know\n> > * There is no check to see if it worked in any sane time\n>\n> What did you have in mind? Should the custodian begin emitting WARNINGs\n> after a while?\n\nI think it might be useful if it logged anything that took an\n\"extended period\", TBD.\n\nMaybe that is already covered by startup process logging. Please tell\nme that still works?\n\n> > Rather than explicitly use DEBUG1 everywhere I would have an\n> > #define CUSTODIAN_LOG_LEVEL LOG\n> > so we can run with it in LOG mode and then set it to DEBUG1 with a one\n> > line change in a later phase of Beta\n>\n> I can create a separate patch for this, but I don't think I've ever seen\n> this sort of thing before.\n\nMuch of recovery is coded that way, for the same reason.\n\n> Is the idea just to help with debugging during\n> the development phase?\n\n\"Just\", yes. 
Tests would be desirable also, under src/test/modules.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 28 Nov 2022 13:08:57 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On 2022-11-28 13:08:57 +0000, Simon Riggs wrote:\n> On Sun, 27 Nov 2022 at 23:34, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> > > Rather than explicitly use DEBUG1 everywhere I would have an\n> > > #define CUSTODIAN_LOG_LEVEL LOG\n> > > so we can run with it in LOG mode and then set it to DEBUG1 with a one\n> > > line change in a later phase of Beta\n> >\n> > I can create a separate patch for this, but I don't think I've ever seen\n> > this sort of thing before.\n> \n> Much of recovery is coded that way, for the same reason.\n\nI think that's not a good thing to copy without a lot more justification than\n\"some old code also does it that way\". It's sometimes justified, but also\nmakes code harder to read (one doesn't know what it does without looking up\nthe #define, line length).\n\n\n", "msg_date": "Mon, 28 Nov 2022 10:31:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Mon, Nov 28, 2022 at 1:31 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-11-28 13:08:57 +0000, Simon Riggs wrote:\n> > On Sun, 27 Nov 2022 at 23:34, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> > > > Rather than explicitly use DEBUG1 everywhere I would have an\n> > > > #define CUSTODIAN_LOG_LEVEL LOG\n> > > > so we can run with it in LOG mode and then set it to DEBUG1 with a one\n> > > > line change in a later phase of Beta\n> > >\n> > > I can create a separate patch for this, but I don't think I've ever seen\n> > > this sort of thing before.\n> >\n> > Much of recovery is coded that way, for the same reason.\n>\n> I 
think that's not a good thing to copy without a lot more justification than\n> \"some old code also does it that way\". It's sometimes justified, but also\n> makes code harder to read (one doesn't know what it does without looking up\n> the #define, line length).\n\nYeah. If people need some of the log messages at a higher level during\ndevelopment, they can patch their own copies.\n\nI think there might be some argument for having a facility that lets\nyou pick subsystems or even individual messages that you want to trace\nand pump up the log level for just those call sites. But I don't know\nexactly what that would look like, and I don't think inventing one-off\nmechanisms for particular cases is a good idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Nov 2022 13:37:01 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Okay, here is a new patch set. 0004 adds logic to prevent custodian tasks\nfrom delaying shutdown.\n\nI haven't added any logging for long-running tasks yet. Tasks might\nordinarily take a while, so such logs wouldn't necessarily indicate\nsomething is wrong. Perhaps we could add a GUC for the amount of time to\nwait before logging. This feature would be off by default. Another option\ncould be to create a log_custodian GUC that causes tasks to be logged when\ncompleted, similar to log_checkpoints. 
Thoughts?\n\nOn Mon, Nov 28, 2022 at 01:37:01PM -0500, Robert Haas wrote:\n> On Mon, Nov 28, 2022 at 1:31 PM Andres Freund <andres@anarazel.de> wrote:\n>> On 2022-11-28 13:08:57 +0000, Simon Riggs wrote:\n>> > On Sun, 27 Nov 2022 at 23:34, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> > > > Rather than explicitly use DEBUG1 everywhere I would have an\n>> > > > #define CUSTODIAN_LOG_LEVEL LOG\n>> > > > so we can run with it in LOG mode and then set it to DEBUG1 with a one\n>> > > > line change in a later phase of Beta\n>> > >\n>> > > I can create a separate patch for this, but I don't think I've ever seen\n>> > > this sort of thing before.\n>> >\n>> > Much of recovery is coded that way, for the same reason.\n>>\n>> I think that's not a good thing to copy without a lot more justification than\n>> \"some old code also does it that way\". It's sometimes justified, but also\n>> makes code harder to read (one doesn't know what it does without looking up\n>> the #define, line length).\n> \n> Yeah. If people need some of the log messages at a higher level during\n> development, they can patch their own copies.\n> \n> I think there might be some argument for having a facility that lets\n> you pick subsystems or even individual messages that you want to trace\n> and pump up the log level for just those call sites. But I don't know\n> exactly what that would look like, and I don't think inventing one-off\n> mechanisms for particular cases is a good idea.\n\nGiven this discussion, I haven't made any changes to the logging in the new\npatch set.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 28 Nov 2022 15:40:39 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Mon, 28 Nov 2022 at 23:40, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> Okay, here is a new patch set. 
0004 adds logic to prevent custodian tasks\n> from delaying shutdown.\n\nThat all seems good, thanks.\n\nThe last important point for me is tests, in src/test/modules\nprobably. It might be possible to reuse the final state of other\nmodules' tests to test cleanup, or at least integrate a custodian test\ninto each module.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 29 Nov 2022 12:02:44 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Tue, Nov 29, 2022 at 12:02:44PM +0000, Simon Riggs wrote:\n> The last important point for me is tests, in src/test/modules\n> probably. It might be possible to reuse the final state of other\n> modules' tests to test cleanup, or at least integrate a custodian test\n> into each module.\n\nOf course. I found some existing tests for the test_decoding plugin that\nappear to reliably generate the files we want the custodian to clean up, so\nI added them there.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 29 Nov 2022 19:56:53 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Tue, Nov 29, 2022 at 07:56:53PM -0800, Nathan Bossart wrote:\n> On Tue, Nov 29, 2022 at 12:02:44PM +0000, Simon Riggs wrote:\n>> The last important point for me is tests, in src/test/modules\n>> probably. It might be possible to reuse the final state of other\n>> modules' tests to test cleanup, or at least integrate a custodian test\n>> into each module.\n> \n> Of course. I found some existing tests for the test_decoding plugin that\n> appear to reliably generate the files we want the custodian to clean up, so\n> I added them there.\n\ncfbot is not happy with v16. 
AFAICT this is just due to poor placement, so\nhere is another attempt with the tests moved to a new location. Apologies\nfor the noise.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 29 Nov 2022 21:18:33 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Wed, 30 Nov 2022 at 03:56, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Nov 29, 2022 at 12:02:44PM +0000, Simon Riggs wrote:\n> > The last important point for me is tests, in src/test/modules\n> > probably. It might be possible to reuse the final state of other\n> > modules' tests to test cleanup, or at least integrate a custodian test\n> > into each module.\n>\n> Of course. I found some existing tests for the test_decoding plugin that\n> appear to reliably generate the files we want the custodian to clean up, so\n> I added them there.\n\nThanks for adding the tests; I can see they run clean.\n\nThe only minor thing I would personally add is a note in each piece of\ncode to explain where the tests are for each one and/or something in\nthe main custodian file that says tests exist within src/test/module.\n\nOtherwise, ready for committer.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 30 Nov 2022 06:22:49 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Wed, Nov 30, 2022 at 10:48 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n>\n> cfbot is not happy with v16. AFAICT this is just due to poor placement, so\n> here is another attempt with the tests moved to a new location. Apologies\n> for the noise.\n\nThanks for the patches. I spent some time on reviewing v17 patch set\nand here are my comments:\n\n0001:\n1. 
I think the custodian process needs documentation - it needs a\ndefinition in glossary.sgml and perhaps a dedicated page describing\nwhat tasks it takes care of.\n\n2.\n+ LWLockReleaseAll();\n+ ConditionVariableCancelSleep();\n+ AbortBufferIO();\n+ UnlockBuffers();\n+ ReleaseAuxProcessResources(false);\n+ AtEOXact_Buffers(false);\n+ AtEOXact_SMgr();\n+ AtEOXact_Files(false);\n+ AtEOXact_HashTables(false);\nDo we need all of these in the exit path? Isn't the stuff that\nShutdownAuxiliaryProcess() does enough for the custodian process?\nAFAICS, the custodian process uses LWLocks (which the\nShutdownAuxiliaryProcess() takes care of) and it doesn't access shared\nbuffers and so on.\nHaving said that, I'm fine to keep them for future use and all of\nthose cleanup functions exit if nothing related occurs.\n\n3.\n+ * Advertise out latch that backends can use to wake us up while we're\nTypo - %s/out/our\n\n4. Is it a good idea to add log messages in the DoCustodianTasks()\nloop? Maybe at a debug level? The log message can say the current task\nthe custodian is processing. And/Or setting the custodian's status on\nthe ps display is also a good idea IMO.\n\n0002 and 0003:\n1.\n+CHECKPOINT;\n+DO $$\nI think we need to ensure that there are some snapshot files before\nthe checkpoint. Otherwise, it may happen that the above test case\nexits without the custodian process doing anything.\n\n2. I think the best way to test the custodian process code is by\nadding a TAP test module to see actually the custodian process kicks\nin. Perhaps, add elog(DEBUGX,...) messages to various custodian\nprocess functions and see if we see the logs in server logs.\n\n0004:\nI think the 0004 patch can be merged into 0001, 0002 and 0003 patches.\nOtherwise the patch LGTM.\n\nFew thoughts:\n1. 
I think we can trivially extend the custodian process to remove any\nfuture WAL files on the old timeline, something like the attached\n0001-Move-removal-of-future-WAL-files-on-the-old-timeline.text file).\nWhile this offloads the recovery a bit, the server may archive such\nWAL files before the custodian removes them. We can do a bit more to\nstop the server from archiving such WAL files, but that needs more\ncoding. I don't think we need to do all that now, perhaps, we can give\nit a try once the basic custodian stuff gets in.\n2. Moving RemovePgTempFiles() to the custodian can bring up the server\nsoon. The idea is that the postmaster just renames the temp\ndirectories and informs the custodian so that it can go delete such\ntemp files and directories. I have personally seen cases where the\nserver spent a good amount of time cleaning up temp files. We can park\nit for later.\n3. Moving RemoveOldXlogFiles() to the custodian can make checkpoints faster.\n4. PreallocXlogFiles() - if we ever have plans to make pre-allocation\nmore aggressive (pre-allocate more than 1 WAL file), perhaps letting\ncustodian do that is a good idea. Again, too many tasks for a single\nprocess.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 30 Nov 2022 16:52:44 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Wed, Nov 30, 2022 at 4:52 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Nov 30, 2022 at 10:48 AM Nathan Bossart\n> <nathandbossart@gmail.com> wrote:\n> >\n> >\n> > cfbot is not happy with v16. AFAICT this is just due to poor placement, so\n> > here is another attempt with the tests moved to a new location. Apologies\n> > for the noise.\n>\n> Thanks for the patches. 
I spent some time on reviewing v17 patch set\n> and here are my comments:\n>\n> 0001:\n> 1. I think the custodian process needs documentation - it needs a\n> definition in glossary.sgml and perhaps a dedicated page describing\n> what tasks it takes care of.\n>\n> 2.\n> + LWLockReleaseAll();\n> + ConditionVariableCancelSleep();\n> + AbortBufferIO();\n> + UnlockBuffers();\n> + ReleaseAuxProcessResources(false);\n> + AtEOXact_Buffers(false);\n> + AtEOXact_SMgr();\n> + AtEOXact_Files(false);\n> + AtEOXact_HashTables(false);\n> Do we need all of these in the exit path? Isn't the stuff that\n> ShutdownAuxiliaryProcess() does enough for the custodian process?\n> AFAICS, the custodian process uses LWLocks (which the\n> ShutdownAuxiliaryProcess() takes care of) and it doesn't access shared\n> buffers and so on.\n> Having said that, I'm fine to keep them for future use and all of\n> those cleanup functions exit if nothing related occurs.\n>\n> 3.\n> + * Advertise out latch that backends can use to wake us up while we're\n> Typo - %s/out/our\n>\n> 4. Is it a good idea to add log messages in the DoCustodianTasks()\n> loop? Maybe at a debug level? The log message can say the current task\n> the custodian is processing. And/Or setting the custodian's status on\n> the ps display is also a good idea IMO.\n>\n> 0002 and 0003:\n> 1.\n> +CHECKPOINT;\n> +DO $$\n> I think we need to ensure that there are some snapshot files before\n> the checkpoint. Otherwise, it may happen that the above test case\n> exits without the custodian process doing anything.\n>\n> 2. I think the best way to test the custodian process code is by\n> adding a TAP test module to see actually the custodian process kicks\n> in. Perhaps, add elog(DEBUGX,...) messages to various custodian\n> process functions and see if we see the logs in server logs.\n>\n> 0004:\n> I think the 0004 patch can be merged into 0001, 0002 and 0003 patches.\n> Otherwise the patch LGTM.\n>\n> Few thoughts:\n> 1. 
I think we can trivially extend the custodian process to remove any\n> future WAL files on the old timeline, something like the attached\n> 0001-Move-removal-of-future-WAL-files-on-the-old-timeline.text file).\n> While this offloads the recovery a bit, the server may archive such\n> WAL files before the custodian removes them. We can do a bit more to\n> stop the server from archiving such WAL files, but that needs more\n> coding. I don't think we need to do all that now, perhaps, we can give\n> it a try once the basic custodian stuff gets in.\n> 2. Moving RemovePgTempFiles() to the custodian can bring up the server\n> soon. The idea is that the postmaster just renames the temp\n> directories and informs the custodian so that it can go delete such\n> temp files and directories. I have personally seen cases where the\n> server spent a good amount of time cleaning up temp files. We can park\n> it for later.\n> 3. Moving RemoveOldXlogFiles() to the custodian can make checkpoints faster.\n> 4. PreallocXlogFiles() - if we ever have plans to make pre-allocation\n> more aggressive (pre-allocate more than 1 WAL file), perhaps letting\n> custodian do that is a good idea. Again, too many tasks for a single\n> process.\n\nAnother comment:\nIIUC, there's no custodian_delay GUC as we want to avoid unnecessary\nwakeups for power savings (being discussed in the other thread).\nHowever, can it happen that the custodian missed to capture SetLatch\nwakeups by other backends? 
In other words, can the custodian process\nbe sleeping when there's work to do?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 30 Nov 2022 17:27:10 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Wed, Nov 30, 2022 at 05:27:10PM +0530, Bharath Rupireddy wrote:\n> On Wed, Nov 30, 2022 at 4:52 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> Thanks for the patches. I spent some time on reviewing v17 patch set\n>> and here are my comments:\n\nThanks for reviewing!\n\n>> 0001:\n>> 1. I think the custodian process needs documentation - it needs a\n>> definition in glossary.sgml and perhaps a dedicated page describing\n>> what tasks it takes care of.\n\nGood catch. I added this in v18. I stopped short of adding a dedicated\npage to describe the tasks because 1) there are no parameters for the\ncustodian and 2) AFAICT none of its tasks are described in the docs today.\n\n>> 2.\n>> + LWLockReleaseAll();\n>> + ConditionVariableCancelSleep();\n>> + AbortBufferIO();\n>> + UnlockBuffers();\n>> + ReleaseAuxProcessResources(false);\n>> + AtEOXact_Buffers(false);\n>> + AtEOXact_SMgr();\n>> + AtEOXact_Files(false);\n>> + AtEOXact_HashTables(false);\n>> Do we need all of these in the exit path? Isn't the stuff that\n>> ShutdownAuxiliaryProcess() does enough for the custodian process?\n>> AFAICS, the custodian process uses LWLocks (which the\n>> ShutdownAuxiliaryProcess() takes care of) and it doesn't access shared\n>> buffers and so on.\n>> Having said that, I'm fine to keep them for future use and all of\n>> those cleanup functions exit if nothing related occurs.\n\nYeah, I don't think we need a few of these. 
In v18, I've kept the\nfollowing:\n\t* LWLockReleaseAll()\n\t* ConditionVariableCancelSleep()\n\t* ReleaseAuxProcessResources(false)\n\t* AtEOXact_Files(false)\n\n>> 3.\n>> + * Advertise out latch that backends can use to wake us up while we're\n>> Typo - %s/out/our\n\nfixed\n\n>> 4. Is it a good idea to add log messages in the DoCustodianTasks()\n>> loop? Maybe at a debug level? The log message can say the current task\n>> the custodian is processing. And/Or setting the custodian's status on\n>> the ps display is also a good idea IMO.\n\nI'd like to pick these up in a new thread if/when this initial patch set is\ncommitted. The tasks already do some logging, and the checkpointer process\ndoesn't update the ps display for these tasks today.\n\n>> 0002 and 0003:\n>> 1.\n>> +CHECKPOINT;\n>> +DO $$\n>> I think we need to ensure that there are some snapshot files before\n>> the checkpoint. Otherwise, it may happen that the above test case\n>> exits without the custodian process doing anything.\n>>\n>> 2. I think the best way to test the custodian process code is by\n>> adding a TAP test module to see actually the custodian process kicks\n>> in. Perhaps, add elog(DEBUGX,...) messages to various custodian\n>> process functions and see if we see the logs in server logs.\n\nThe test appears to reliably create snapshot and mapping files, so if the\ndirectories are empty at some point after the checkpoint at the end, we can\nbe reasonably certain the custodian took action. I didn't add explicit\nchecks that there are files in the directories before the checkpoint\nbecause a concurrent checkpoint could make such checks unreliable.\n\n>> 0004:\n>> I think the 0004 patch can be merged into 0001, 0002 and 0003 patches.\n>> Otherwise the patch LGTM.\n\nI'm keeping this one separate because I've received conflicting feedback\nabout the idea.\n\n>> 1. 
I think we can trivially extend the custodian process to remove any\n>> future WAL files on the old timeline, something like the attached\n>> 0001-Move-removal-of-future-WAL-files-on-the-old-timeline.text file).\n>> While this offloads the recovery a bit, the server may archive such\n>> WAL files before the custodian removes them. We can do a bit more to\n>> stop the server from archiving such WAL files, but that needs more\n>> coding. I don't think we need to do all that now, perhaps, we can give\n>> it a try once the basic custodian stuff gets in.\n>> 2. Moving RemovePgTempFiles() to the custodian can bring up the server\n>> soon. The idea is that the postmaster just renames the temp\n>> directories and informs the custodian so that it can go delete such\n>> temp files and directories. I have personally seen cases where the\n>> server spent a good amount of time cleaning up temp files. We can park\n>> it for later.\n>> 3. Moving RemoveOldXlogFiles() to the custodian can make checkpoints faster.\n>> 4. PreallocXlogFiles() - if we ever have plans to make pre-allocation\n>> more aggressive (pre-allocate more than 1 WAL file), perhaps letting\n>> custodian do that is a good idea. Again, too many tasks for a single\n>> process.\n\nI definitely want to do #2. I have some patches for that upthread, but I\nremoved them for now based on Simon's feedback. I intend to pick that up\nin a new thread. I haven't thought too much about the others yet.\n\n> Another comment:\n> IIUC, there's no custodian_delay GUC as we want to avoid unnecessary\n> wakeups for power savings (being discussed in the other thread).\n> However, can it happen that the custodian missed to capture SetLatch\n> wakeups by other backends? 
In other words, can the custodian process\n> be sleeping when there's work to do?\n\nI'm not aware of any way this could happen, but if there is one, I think we\nshould treat it as a bug instead of relying on the custodian process to\nperiodically wake up and check for work to do.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 1 Dec 2022 13:40:26 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Fri, Dec 2, 2022 at 3:10 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> >> 4. Is it a good idea to add log messages in the DoCustodianTasks()\n> >> loop? Maybe at a debug level? The log message can say the current task\n> >> the custodian is processing. And/Or setting the custodian's status on\n> >> the ps display is also a good idea IMO.\n>\n> I'd like to pick these up in a new thread if/when this initial patch set is\n> committed. The tasks already do some logging, and the checkpointer process\n> doesn't update the ps display for these tasks today.\n\nIt'll be good to have some kind of dedicated monitoring for the\ncustodian process as it can do a \"good\" amount of work at times and\nusers will have a way to know what it currently is doing - it can be\nlogs at debug level, progress reporting via\nereport_startup_progress()-sort of mechanism, ps display,\npg_stat_custodian or a special function that tells some details, or\nsome other. In any case, I agree to park this for later.\n\n> >> 0002 and 0003:\n> >> 1.\n> >> +CHECKPOINT;\n> >> +DO $$\n> >> I think we need to ensure that there are some snapshot files before\n> >> the checkpoint. Otherwise, it may happen that the above test case\n> >> exits without the custodian process doing anything.\n> >>\n> >> 2. 
I think the best way to test the custodian process code is by\n> >> adding a TAP test module to see actually the custodian process kicks\n> >> in. Perhaps, add elog(DEBUGX,...) messages to various custodian\n> >> process functions and see if we see the logs in server logs.\n>\n> The test appears to reliably create snapshot and mapping files, so if the\n> directories are empty at some point after the checkpoint at the end, we can\n> be reasonably certain the custodian took action. I didn't add explicit\n> checks that there are files in the directories before the checkpoint\n> because a concurrent checkpoint could make such checks unreliable.\n\nI think you're right. I added sqls to see if the snapshot and mapping\nfiles count > 0, see [1] and the cirrus-ci members are happy too -\nhttps://github.com/BRupireddy/postgres/tree/custodian_review_2. I\nthink we can consider adding these count > 0 checks to tests.\n\n> >> 0004:\n> >> I think the 0004 patch can be merged into 0001, 0002 and 0003 patches.\n> >> Otherwise the patch LGTM.\n>\n> I'm keeping this one separate because I've received conflicting feedback\n> about the idea.\n\nIf we classify custodian as a process doing non-critical tasks that\nhave nothing to do with regular server functioning, then processing\nShutdownRequestPending looks okay. 
However, delaying these\nnon-critical tasks such as file removals which reclaims disk space\nmight impact the server overall especially when it's reaching 100%\ndisk usage and we want the custodian to do its job fully before we\nshutdown the server.\n\nIf we delay processing shutdown requests, that can impact the server\noverall (might delay restarts, failovers etc.), because at times there\ncan be a lot of tasks with a good amount of work pending in the\ncustodian's task queue.\n\nHaving said above, I'm okay to process ShutdownRequestPending as early\nas possible, however, should we also add CHECK_FOR_INTERRUPTS()\nalongside ShutdownRequestPending?\n\nAlso, I think it's enough to just have ShutdownRequestPending check in\nDoCustodianTasks(void)'s main loop and we can let\nRemoveOldSerializedSnapshots() and RemoveOldLogicalRewriteMappings()\ndo their jobs to the fullest as they do today.\n\nWhile thinking about this, one thing that really struck me is what\nhappens if we let the custodian exit, say after processing\nShutdownRequestPending immediately or after a restart, leaving other\nqueued tasks? The custodian will never get to work on those tasks\nunless the requestors (say checkpoint or some other process) requests\nit to do so after restart. Maybe, we don't need to worry about it.\nMaybe we need to worry about it. Maybe it's an overkill to save the\ncustodian's task state to disk so that it can come up and do the\nleftover tasks upon restart.\n\n> > Another comment:\n> > IIUC, there's no custodian_delay GUC as we want to avoid unnecessary\n> > wakeups for power savings (being discussed in the other thread).\n> > However, can it happen that the custodian missed to capture SetLatch\n> > wakeups by other backends? 
In other words, can the custodian process\n> > be sleeping when there's work to do?\n>\n> I'm not aware of any way this could happen, but if there is one, I think we\n> should treat it as a bug instead of relying on the custodian process to\n> periodically wake up and check for work to do.\n\nOne possible scenario is that the requestor adds its task details to\nthe queue and sets the latch, the custodian can miss this SetLatch()\nwhen it's in the midst of processing a task. However, it guarantees\nthe requester that it'll process the added task after it completes the\ncurrent task. And, I don't know the other reasons when the custodian\ncan miss SetLatch().\n\n[1]\ndiff --git a/contrib/test_decoding/expected/rewrite.out\nb/contrib/test_decoding/expected/rewrite.out\nindex 214a514a0a..0029e48852 100644\n--- a/contrib/test_decoding/expected/rewrite.out\n+++ b/contrib/test_decoding/expected/rewrite.out\n@@ -163,6 +163,20 @@ DROP FUNCTION iamalongfunction();\n DROP FUNCTION exec(text);\n DROP ROLE regress_justforcomments;\n -- make sure custodian cleans up files\n+-- make sure snapshot files exist for custodian to clean up\n+SELECT count(*) > 0 FROM pg_ls_logicalsnapdir();\n+ ?column?\n+----------\n+ t\n+(1 row)\n+\n+-- make sure rewrite mapping files exist for custodian to clean up\n+SELECT count(*) > 0 FROM pg_ls_logicalmapdir();\n+ ?column?\n+----------\n+ t\n+(1 row)\n+\n CHECKPOINT;\n DO $$\n DECLARE\ndiff --git a/contrib/test_decoding/sql/rewrite.sql\nb/contrib/test_decoding/sql/rewrite.sql\nindex d66f70f837..c076809f37 100644\n--- a/contrib/test_decoding/sql/rewrite.sql\n+++ b/contrib/test_decoding/sql/rewrite.sql\n@@ -107,6 +107,13 @@ DROP FUNCTION exec(text);\n DROP ROLE regress_justforcomments;\n\n -- make sure custodian cleans up files\n+\n+-- make sure snapshot files exist for custodian to clean up\n+SELECT count(*) > 0 FROM pg_ls_logicalsnapdir();\n+\n+-- make sure rewrite mapping files exist for custodian to clean up\n+SELECT count(*) > 0 FROM 
pg_ls_logicalmapdir();\n+\n CHECKPOINT;\n DO $$\n DECLARE\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 2 Dec 2022 12:11:35 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Fri, Dec 02, 2022 at 12:11:35PM +0530, Bharath Rupireddy wrote:\n> On Fri, Dec 2, 2022 at 3:10 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> The test appears to reliably create snapshot and mapping files, so if the\n>> directories are empty at some point after the checkpoint at the end, we can\n>> be reasonably certain the custodian took action. I didn't add explicit\n>> checks that there are files in the directories before the checkpoint\n>> because a concurrent checkpoint could make such checks unreliable.\n> \n> I think you're right. I added sqls to see if the snapshot and mapping\n> files count > 0, see [1] and the cirrus-ci members are happy too -\n> https://github.com/BRupireddy/postgres/tree/custodian_review_2. I\n> think we can consider adding these count > 0 checks to tests.\n\nMy worry about adding \"count > 0\" checks is that a concurrent checkpoint\ncould make them unreliable. In other words, those checks might ordinarily\nwork, but if an automatic checkpoint causes the files be cleaned up just\nbeforehand, they will fail.\n\n> Having said above, I'm okay to process ShutdownRequestPending as early\n> as possible, however, should we also add CHECK_FOR_INTERRUPTS()\n> alongside ShutdownRequestPending?\n\nI'm not seeing a need for CHECK_FOR_INTERRUPTS. Do you see one?\n\n> While thinking about this, one thing that really struck me is what\n> happens if we let the custodian exit, say after processing\n> ShutdownRequestPending immediately or after a restart, leaving other\n> queued tasks? 
The custodian will never get to work on those tasks\n> unless the requestors (say checkpoint or some other process) requests\n> it to do so after restart. Maybe, we don't need to worry about it.\n> Maybe we need to worry about it. Maybe it's an overkill to save the\n> custodian's task state to disk so that it can come up and do the\n> leftover tasks upon restart.\n\nYes, tasks will need to be retried when the server starts again. The ones\nin this patch set should be requested again during the next checkpoint.\nTemporary file cleanup would always be requested during server start, so\nthat should be handled as well. Even today, the server might abruptly shut\ndown while executing these tasks, and we don't have any special handling\nfor that.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 2 Dec 2022 11:15:07 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Sat, Dec 3, 2022 at 12:45 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Fri, Dec 02, 2022 at 12:11:35PM +0530, Bharath Rupireddy wrote:\n> > On Fri, Dec 2, 2022 at 3:10 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> The test appears to reliably create snapshot and mapping files, so if the\n> >> directories are empty at some point after the checkpoint at the end, we can\n> >> be reasonably certain the custodian took action. I didn't add explicit\n> >> checks that there are files in the directories before the checkpoint\n> >> because a concurrent checkpoint could make such checks unreliable.\n> >\n> > I think you're right. I added sqls to see if the snapshot and mapping\n> > files count > 0, see [1] and the cirrus-ci members are happy too -\n> > https://github.com/BRupireddy/postgres/tree/custodian_review_2. 
I\n> > think we can consider adding these count > 0 checks to tests.\n>\n> My worry about adding \"count > 0\" checks is that a concurrent checkpoint\n> could make them unreliable. In other words, those checks might ordinarily\n> work, but if an automatic checkpoint causes the files be cleaned up just\n> beforehand, they will fail.\n\nHm. It would have been better with a TAP test module for testing the\ncustodian code reliably. Anyway, that mustn't stop the patch getting\nin. If required, we can park the TAP test module for later - IMO.\nOthers may have different thoughts here.\n\n> > Having said above, I'm okay to process ShutdownRequestPending as early\n> > as possible, however, should we also add CHECK_FOR_INTERRUPTS()\n> > alongside ShutdownRequestPending?\n>\n> I'm not seeing a need for CHECK_FOR_INTERRUPTS. Do you see one?\n\nSince the custodian has SignalHandlerForShutdownRequest as SIGINT and\nSIGTERM handlers, unlike StatementCancelHandler and die respectively,\nno need of CFI I guess. And also none of the CFI signal handler flags\napplies to the custodian.\n\n> > While thinking about this, one thing that really struck me is what\n> > happens if we let the custodian exit, say after processing\n> > ShutdownRequestPending immediately or after a restart, leaving other\n> > queued tasks? The custodian will never get to work on those tasks\n> > unless the requestors (say checkpoint or some other process) requests\n> > it to do so after restart. Maybe, we don't need to worry about it.\n> > Maybe we need to worry about it. Maybe it's an overkill to save the\n> > custodian's task state to disk so that it can come up and do the\n> > leftover tasks upon restart.\n>\n> Yes, tasks will need to be retried when the server starts again. The ones\n> in this patch set should be requested again during the next checkpoint.\n> Temporary file cleanup would always be requested during server start, so\n> that should be handled as well. 
Even today, the server might abruptly shut\n> down while executing these tasks, and we don't have any special handling\n> for that.\n\nRight.\n\nThe v18 patch set posted upthread\nhttps://www.postgresql.org/message-id/20221201214026.GA1799688%40nathanxps13\nlooks good to me. I see the CF entry is marked RfC -\nhttps://commitfest.postgresql.org/41/3448/.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 6 Dec 2022 12:58:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "rebased for cfbot\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 2 Feb 2023 21:48:08 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Thu, Feb 02, 2023 at 09:48:08PM -0800, Nathan Bossart wrote:\n> rebased for cfbot\n\nanother rebase for cfbot\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 17 Feb 2023 15:43:44 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> another rebase for cfbot\n\nI took a brief look through v20, and generally liked what I saw,\nbut there are a few things troubling me:\n\n* The comments for CustodianEnqueueTask claim that it won't enqueue an\nalready-queued task, but I don't think I believe that, because it stops\nscanning as soon as it finds an empty slot. That data structure seems\nquite oddly designed in any case. 
Why isn't it simply an array of\nneed-to-run-this-one booleans indexed by the CustodianTask enum?\nFairness of dispatch could be ensured by the same state variable that\nCustodianGetNextTask already uses to track which array element to\ninspect next. While that wouldn't guarantee that tasks A and B are\ndispatched in the same order they were requested in, I'm not sure why\nwe should care.\n\n* I don't much like cust_lck, mainly because you didn't bother to\ndocument what it protects (in general, CustodianShmemStruct deserves\nmore than zero commentary). Do we need it at all? If the task-needed\nflags were sig_atomic_t not bool, we probably don't need it for the\nbasic job of tracking which tasks remain to be run. I see that some\nof the tasks have possibly-non-atomically-assigned parameters to be\ntransmitted, but restricting cust_lck to protect those seems like a\nbetter idea.\n\n* Not quite convinced about handle_arg_func, mainly because the Datum\nAPI would be pretty inconvenient for any task with more than one arg.\nWhy do we need that at all, rather than saying that callers should\nset up any required parameters separately before invoking\nRequestCustodian?\n\n* Why does LookupCustodianFunctions think it needs to search the\nconstant array?\n\n* The original proposal included moving RemovePgTempFiles into this\nmechanism, which I thought was probably the most useful bit of the\nwhole thing. 
I'm sad to see that gone, what became of it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 02 Apr 2023 13:40:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Hi,\n\nOn 2023-04-02 13:40:05 -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n> > another rebase for cfbot\n> \n> I took a brief look through v20, and generally liked what I saw,\n> but there are a few things troubling me:\n\nJust want to note that I've repeatedly objected to 0002 and 0003, i.e. moving\nserialized logical decoding snapshots and mapping files, to custodian, and\nstill do. Without further work it increases wraparound risks (the filenames\ncontain xids), and afaict nothing has been done to ameliorate that.\n\nWithout those, the current patch series does not have any tasks:\n\n> * The original proposal included moving RemovePgTempFiles into this\n> mechanism, which I thought was probably the most useful bit of the\n> whole thing. I'm sad to see that gone, what became of it?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 2 Apr 2023 11:42:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Sun, Apr 02, 2023 at 01:40:05PM -0400, Tom Lane wrote:\n> I took a brief look through v20, and generally liked what I saw,\n> but there are a few things troubling me:\n\nThanks for taking a look.\n\n> * The comments for CustodianEnqueueTask claim that it won't enqueue an\n> already-queued task, but I don't think I believe that, because it stops\n> scanning as soon as it finds an empty slot. That data structure seems\n> quite oddly designed in any case. 
Why isn't it simply an array of\n> need-to-run-this-one booleans indexed by the CustodianTask enum?\n> Fairness of dispatch could be ensured by the same state variable that\n> CustodianGetNextTask already uses to track which array element to\n> inspect next. While that wouldn't guarantee that tasks A and B are\n> dispatched in the same order they were requested in, I'm not sure why\n> we should care.\n\nThat works. Will update.\n\n> * I don't much like cust_lck, mainly because you didn't bother to\n> document what it protects (in general, CustodianShmemStruct deserves\n> more than zero commentary). Do we need it at all? If the task-needed\n> flags were sig_atomic_t not bool, we probably don't need it for the\n> basic job of tracking which tasks remain to be run. I see that some\n> of the tasks have possibly-non-atomically-assigned parameters to be\n> transmitted, but restricting cust_lck to protect those seems like a\n> better idea.\n\nWill do.\n\n> * Not quite convinced about handle_arg_func, mainly because the Datum\n> API would be pretty inconvenient for any task with more than one arg.\n> Why do we need that at all, rather than saying that callers should\n> set up any required parameters separately before invoking\n> RequestCustodian?\n\nI had done it this way earlier, but added the Datum argument based on\nfeedback upthread [0]. It presently has only one proposed use, anyway, so\nI think it would be fine to switch it back for now.\n\n> * Why does LookupCustodianFunctions think it needs to search the\n> constant array?\n\nThe order of the tasks in the array isn't guaranteed to match the order in\nthe CustodianTask enum.\n\n> * The original proposal included moving RemovePgTempFiles into this\n> mechanism, which I thought was probably the most useful bit of the\n> whole thing. I'm sad to see that gone, what became of it?\n\nI postponed that based on advice from upthread [1]. 
I was hoping to start\na dedicated thread for that immediately after the custodian infrastructure\nwas committed. FWIW I agree that it's the most useful task of what's\nproposed thus far.\n\n[0] https://postgr.es/m/20220703172732.wembjsb55xl63vuw%40awork3.anarazel.de\n[1] https://postgr.es/m/CANbhV-EagKLoUH7tLEfg__VcLu37LY78F8gvLMzHrRZyZKm6sw%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 2 Apr 2023 12:30:30 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Sun, Apr 02, 2023 at 11:42:26AM -0700, Andres Freund wrote:\n> Just want to note that I've repeatedly objected to 0002 and 0003, i.e. moving\n> serialized logical decoding snapshots and mapping files, to custodian, and\n> still do. Without further work it increases wraparound risks (the filenames\n> contain xids), and afaict nothing has been done to ameliorate that.\n\n From your feedback earlier [0], I was under the (perhaps false) impression\nthat adding a note about this existing issue in the commit message was\nsufficient, at least initially. I did add such a note in 0003, but it's\nmissing from 0002 for some reason. I suspect I left it out because the\nserialized snapshot file names do not contain XIDs. You cleared that up\nearlier [1], so this is my bad.\n\nIt's been a little while since I dug into this, but I do see your point\nthat the wraparound risk could be higher in some cases. For example, if\nyou have a billion temp files to clean up, the custodian could be stuck on\nthat task for a long time. I will give this some further thought. 
I'm all\nears if anyone has ideas about how to reduce this risk.\n\n[0] https://postgr.es/m/20220702225456.zit5kjdtdfqmjujt%40alap3.anarazel.de\n[1] https://postgr.es/m/20220217065938.x2esfdppzypegn5j%40alap3.anarazel.de\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 2 Apr 2023 12:50:05 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Sun, Apr 02, 2023 at 01:40:05PM -0400, Tom Lane wrote:\n>> * Why does LookupCustodianFunctions think it needs to search the\n>> constant array?\n\n> The order of the tasks in the array isn't guaranteed to match the order in\n> the CustodianTask enum.\n\nWhy not? It's a constant array, we can surely manage to make its\norder match the enum.\n\n>> * The original proposal included moving RemovePgTempFiles into this\n>> mechanism, which I thought was probably the most useful bit of the\n>> whole thing. I'm sad to see that gone, what became of it?\n\n> I postponed that based on advice from upthread [1]. I was hoping to start\n> a dedicated thread for that immediately after the custodian infrastructure\n> was committed. FWIW I agree that it's the most useful task of what's\n> proposed thus far.\n\nHmm, given Andres' objections there's little point in moving forward\nwithout that task.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 02 Apr 2023 16:23:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> It's been a little while since I dug into this, but I do see your point\n> that the wraparound risk could be higher in some cases. 
For example, if\n> you have a billion temp files to clean up, the custodian could be stuck on\n> that task for a long time. I will give this some further thought. I'm all\n> ears if anyone has ideas about how to reduce this risk.\n\nI wonder if a single long-lived custodian task is the right model at all.\nAt least for RemovePgTempFiles, it'd make more sense to write it as a\nbackground worker that spawns, does its work, and then exits,\nindependently of anything else. Of course, then you need some mechanism\nfor ensuring that a bgworker slot is available when needed, but that\ndoesn't seem horridly difficult --- we could have a few \"reserved\nbgworker\" slots, perhaps. An idle bgworker slot doesn't cost much.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 02 Apr 2023 16:37:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Sun, Apr 02, 2023 at 04:23:05PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> On Sun, Apr 02, 2023 at 01:40:05PM -0400, Tom Lane wrote:\n>>> * Why does LookupCustodianFunctions think it needs to search the\n>>> constant array?\n> \n>> The order of the tasks in the array isn't guaranteed to match the order in\n>> the CustodianTask enum.\n> \n> Why not? It's a constant array, we can surely manage to make its\n> order match the enum.\n\nAlright. I'll change this.\n\n>>> * The original proposal included moving RemovePgTempFiles into this\n>>> mechanism, which I thought was probably the most useful bit of the\n>>> whole thing. I'm sad to see that gone, what became of it?\n> \n>> I postponed that based on advice from upthread [1]. I was hoping to start\n>> a dedicated thread for that immediately after the custodian infrastructure\n>> was committed. 
FWIW I agree that it's the most useful task of what's\n>> proposed thus far.\n> \n> Hmm, given Andres' objections there's little point in moving forward\n> without that task.\n\nYeah. I should probably tackle that one first and leave the logical tasks\nfor later, given there is some prerequisite work required.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 2 Apr 2023 14:17:42 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Sun, Apr 02, 2023 at 04:37:38PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> It's been a little while since I dug into this, but I do see your point\n>> that the wraparound risk could be higher in some cases. For example, if\n>> you have a billion temp files to clean up, the custodian could be stuck on\n>> that task for a long time. I will give this some further thought. I'm all\n>> ears if anyone has ideas about how to reduce this risk.\n> \n> I wonder if a single long-lived custodian task is the right model at all.\n> At least for RemovePgTempFiles, it'd make more sense to write it as a\n> background worker that spawns, does its work, and then exits,\n> independently of anything else. Of course, then you need some mechanism\n> for ensuring that a bgworker slot is available when needed, but that\n> doesn't seem horridly difficult --- we could have a few \"reserved\n> bgworker\" slots, perhaps. An idle bgworker slot doesn't cost much.\n\nThis has crossed my mind. Even if we use the custodian for several\ndifferent tasks, perhaps it could shut down while not in use. For many\nservers, the custodian process will be used sparingly, if at all. And if\nwe introduce something like custodian_max_workers, perhaps we could dodge\nthe wraparound issue a bit by setting the default to the number of\nsupported tasks. 
That being said, this approach adds some complexity.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 2 Apr 2023 14:31:52 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "I sent this one to the next commitfest and marked it as waiting-on-author\nand targeted for v17. I'm aiming to have something that addresses the\nlatest feedback ready for the July commitfest.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 3 Apr 2023 20:36:26 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "> On 4 Apr 2023, at 05:36, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> \n> I sent this one to the next commitfest and marked it as waiting-on-author\n> and targeted for v17. I'm aiming to have something that addresses the\n> latest feedback ready for the July commitfest.\n\nHave you had a chance to look at this such that there is something ready?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 4 Jul 2023 09:30:43 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" }, { "msg_contents": "On Tue, Jul 04, 2023 at 09:30:43AM +0200, Daniel Gustafsson wrote:\n>> On 4 Apr 2023, at 05:36, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> \n>> I sent this one to the next commitfest and marked it as waiting-on-author\n>> and targeted for v17. 
I'm aiming to have something that addresses the\n>> latest feedback ready for the July commitfest.\n> \n> Have you had a chance to look at this such that there is something ready?\n\nNot yet, sorry.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 4 Jul 2023 11:37:36 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: O(n) tasks cause lengthy startups and checkpoints" } ]
[ { "msg_contents": "As discussed in the thread [1], I find the wording \"SSL server\ncertificate revocation list\" as misleading or plain wrong.\n\nI used to read it as \"SSL server certificate (of PostgreSQL client)\nrevocation list\" but I find it misleading-ish from fresh eyes. So I'd\nlike to propose a change of the doc as attached.\n\nWhat do you think about this?\n\n[1] https://www.postgresql.org/message-id/20211202.134619.1052008069537649171.horikyota.ntt%40gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 02 Dec 2021 13:54:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Is ssl_crl_file \"SSL server cert revocation list\"?" }, { "msg_contents": "At Thu, 02 Dec 2021 13:54:41 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> As discussed in the thread [1], I find the wording \"SSL server\n> certificate revocation list\" as misleading or plain wrong.\n\nFWIW, I'm convinced that that's plain wrong after finding some\noccurances of \"(SSL) client certificate\" in the doc.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 02 Dec 2021 14:07:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is ssl_crl_file \"SSL server cert revocation list\"?" 
}, { "msg_contents": "> On 2 Dec 2021, at 06:07, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> At Thu, 02 Dec 2021 13:54:41 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n>> As discussed in the thread [1], I find the wording \"SSL server\n>> certificate revocation list\" as misleading or plain wrong.\n> \n> FWIW, I'm convinced that that's plain wrong after finding some\n> occurances of \"(SSL) client certificate\" in the doc.\n\nI agree with this, the concepts have been a bit muddled.\n\nWhile in there I noticed that we omitted mentioning sslcrldir in a few cases.\nThe attached v2 adds these and removes the whitespace changes from your patch\nfor easier review.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Thu, 2 Dec 2021 10:42:02 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Is ssl_crl_file \"SSL server cert revocation list\"?" }, { "msg_contents": "On 02.12.21 10:42, Daniel Gustafsson wrote:\n>> On 2 Dec 2021, at 06:07, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>>\n>> At Thu, 02 Dec 2021 13:54:41 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>>> As discussed in the thread [1], I find the wording \"SSL server\n>>> certificate revocation list\" as misleading or plain wrong.\n>>\n>> FWIW, I'm convinced that that's plain wrong after finding some\n>> occurances of \"(SSL) client certificate\" in the doc.\n> \n> I agree with this, the concepts have been a bit muddled.\n> \n> While in there I noticed that we omitted mentioning sslcrldir in a few cases.\n> The attached v2 adds these and removes the whitespace changes from your patch\n> for easier review.\n\nThis change looks correct to me.\n\n\n", "msg_date": "Thu, 2 Dec 2021 16:04:33 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Is ssl_crl_file \"SSL server cert revocation list\"?" 
}, { "msg_contents": "> On 2 Dec 2021, at 16:04, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> This change looks correct to me.\n\nThanks for review, I've pushed this backpatched (in part) down to 10.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 3 Dec 2021 14:32:54 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Is ssl_crl_file \"SSL server cert revocation list\"?" }, { "msg_contents": "At Fri, 3 Dec 2021 14:32:54 +0100, Daniel Gustafsson <daniel@yesql.se> wrote in \n> > On 2 Dec 2021, at 16:04, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> > This change looks correct to me.\n> \n> Thanks for review, I've pushed this backpatched (in part) down to 10.\n\nThanks for revising and comitting this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 06 Dec 2021 09:56:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is ssl_crl_file \"SSL server cert revocation list\"?" } ]
[ { "msg_contents": "With Python 3.10, configure spits out warnings about the module \ndistutils.sysconfig being deprecated and scheduled for removal in Python \n3.12:\n\n<string>:1: DeprecationWarning: The distutils.sysconfig module is \ndeprecated, use sysconfig instead\n<string>:1: DeprecationWarning: The distutils package is deprecated and \nslated for removal in Python 3.12. Use setuptools or check PEP 632 for \npotential alternatives\n\nThis patch changes the uses in configure to use the module sysconfig \ninstead. The logic stays the same. (It's basically the same module but \nas its own top-level module.)\n\nNote that sysconfig exists since Python 2.7, so this moves the minimum \nrequired version up from Python 2.6.\n\nBuildfarm impact:\n\ngaur and prariedog use Python 2.6 and would need to be upgraded.\n\nPossible backpatching:\n\nBackpatching should be considered, since surely someone will otherwise \ncomplain when Python 3.12 comes around. But dropping support for Python \nversions in stable branches should be done with some care.\n\nPython 3.10 was released Oct. 4, 2021, so it is quite new. Python major \nreleases are now yearly, so the above-mentioned Python 3.12 can be \nexpected in autumn of 2023.\n\nCurrent PostgreSQL releases support Python versions as follows:\n\nPG10: 2.4+\nPG11: 2.4+\nPG12: 2.4+ (EOL Nov. 2024)\nPG13: 2.6+\nPG14: 2.6+\n\nSo unfortunately, we won't be able to EOL all versions with Python 2.4 \nsupport before Python 3.12 arrives.\n\nI suggest leaving the backbranches alone for now. At the moment, we \ndon't even know whether additional changes will be required for 3.12 \n(and 3.11) support, so the overall impact isn't known yet. 
In a few \nmonths, we will probably know more about this.\n\nIn the meantime, the warnings can be silenced using\n\nexport PYTHONWARNINGS='ignore::DeprecationWarning'\n\n(It ought to be possible to be more specific, like \n'ignore::DeprecationWarning:distutils.sysconfig', but it doesn't seem to \nwork for me.)\n\n(I don't recommend putting that into configure, since then we wouldn't \nbe able to learn about issues like this.)", "msg_date": "Thu, 2 Dec 2021 08:20:48 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> With Python 3.10, configure spits out warnings about the module \n> distutils.sysconfig being deprecated and scheduled for removal in Python \n> 3.12:\n\nBleah.\n\n> This patch changes the uses in configure to use the module sysconfig \n> instead. The logic stays the same. (It's basically the same module but \n> as its own top-level module.)\n> Note that sysconfig exists since Python 2.7, so this moves the minimum \n> required version up from Python 2.6.\n\nThat's surely no problem in HEAD, but as you say, it is an issue for\nthe older branches. How difficult would it be to teach configure to\ntry both ways, or adapt based on its python version check?\n\n> I suggest leaving the backbranches alone for now. At the moment, we \n> don't even know whether additional changes will be required for 3.12 \n> (and 3.11) support, so the overall impact isn't known yet. 
In a few \n> months, we will probably know more about this.\n\nAgreed, this is a moving target so we shouldn't be too concerned\nabout it yet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Dec 2021 13:22:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "On 02.12.21 19:22, Tom Lane wrote:\n> That's surely no problem in HEAD, but as you say, it is an issue for\n> the older branches. How difficult would it be to teach configure to\n> try both ways, or adapt based on its python version check?\n\nI think it wouldn't be unreasonable to do that. I'll look into it.\n\n\n", "msg_date": "Fri, 3 Dec 2021 16:52:10 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "On 02.12.21 08:20, Peter Eisentraut wrote:\n> Buildfarm impact:\n> \n> gaur and prariedog use Python 2.6 and would need to be upgraded.\n\nTom, are you planning to update the Python version on these build farm \nmembers? I realize these are very slow machines and this might take \nsome time; I'm just wondering if this had registered.\n\n\n", "msg_date": "Thu, 9 Dec 2021 10:26:01 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 02.12.21 08:20, Peter Eisentraut wrote:\n>> Buildfarm impact:\n>> gaur and prariedog use Python 2.6 and would need to be upgraded.\n\n> Tom, are you planning to update the Python version on these build farm \n> members? I realize these are very slow machines and this might take \n> some time; I'm just wondering if this had registered.\n\nI can do that when it becomes necessary. 
I've got one eye on the meson\nconversion discussion, which will kill those two animals altogether;\nso it seems possible that updating their Pythons now would just be\nwasted effort depending on what lands first.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Dec 2021 08:31:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "On 09.12.21 14:31, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 02.12.21 08:20, Peter Eisentraut wrote:\n>>> Buildfarm impact:\n>>> gaur and prariedog use Python 2.6 and would need to be upgraded.\n> \n>> Tom, are you planning to update the Python version on these build farm\n>> members? I realize these are very slow machines and this might take\n>> some time; I'm just wondering if this had registered.\n> \n> I can do that when it becomes necessary. I've got one eye on the meson\n> conversion discussion, which will kill those two animals altogether;\n> so it seems possible that updating their Pythons now would just be\n> wasted effort depending on what lands first.\n\nI saw that the Python installations on gaur and prairiedog had been \nupdated, so I committed this patch. As the buildfarm shows, various \nplatforms have problems with this, in particular because they point to \nthe wrong place for the include directory. AFAICT, in most cases this \nappears to have been fixed in more recent editions of those platforms \n(e.g., Debian unstable members pass but older releases don't), so at \nleast the approach was apparently not wrong in principle. But \nobviously, this leaves us in a mess. I will revert this patch in a bit, \nafter gathering a few more hours of data.\n\nAlso, considering the failure on prairiedog, I do see now on \n<https://docs.python.org/3/library/sysconfig.html> that the sysconfig \nmodule is \"New in version 3.2\". 
I had interpreted the fact that it \nexists in version 2.7 that that includes all higher versions, but \nobviously there were multiple branches involved, so that was a mistaken \nassumption.\n\n\n", "msg_date": "Tue, 18 Jan 2022 11:20:31 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Also, considering the failure on prairiedog, I do see now on \n> <https://docs.python.org/3/library/sysconfig.html> that the sysconfig \n> module is \"New in version 3.2\". I had interpreted the fact that it \n> exists in version 2.7 that that includes all higher versions, but \n> obviously there were multiple branches involved, so that was a mistaken \n> assumption.\n\nHm. I installed 3.1 because we claim support for that. I don't mind\nupdating to 3.2 (as long as we adjust the docs to match), but it seems\nkinda moot unless you figure out a solution for the include-path\nissue. I see that platforms as recent as Debian 10 are failing,\nso I don't think we can dismiss that as not needing fixing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Jan 2022 10:24:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "On 18.01.22 16:24, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> Also, considering the failure on prairiedog, I do see now on\n>> <https://docs.python.org/3/library/sysconfig.html> that the sysconfig\n>> module is \"New in version 3.2\". I had interpreted the fact that it\n>> exists in version 2.7 that that includes all higher versions, but\n>> obviously there were multiple branches involved, so that was a mistaken\n>> assumption.\n> \n> Hm. I installed 3.1 because we claim support for that. 
I don't mind\n> updating to 3.2 (as long as we adjust the docs to match), but it seems\n> kinda moot unless you figure out a solution for the include-path\n> issue. I see that platforms as recent as Debian 10 are failing,\n> so I don't think we can dismiss that as not needing fixing.\n\nI have reverted this for now.\n\nI don't have a clear idea how to fix this in the long run. We would \nperhaps need to determine at which points the various platforms had \nfixed this issue in their Python installations and select between the \nold and the new approach based on that. Seems messy.\n\n\n", "msg_date": "Tue, 18 Jan 2022 17:47:37 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I don't have a clear idea how to fix this in the long run. We would \n> perhaps need to determine at which points the various platforms had \n> fixed this issue in their Python installations and select between the \n> old and the new approach based on that. Seems messy.\n\nAre we sure it's an issue within Python, rather than something we\ncould dodge by invoking sysconfig differently? It's hard to believe\nthat sysconfig could be totally unfit for the purpose of finding out\nthe include path and would remain so for multiple years.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Jan 2022 12:04:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "I wrote:\n> Are we sure it's an issue within Python, rather than something we\n> could dodge by invoking sysconfig differently? 
It's hard to believe\n> that sysconfig could be totally unfit for the purpose of finding out\n> the include path and would remain so for multiple years.\n\nI dug up a Debian 9 image and found that I could reproduce the problem\nagainst its python2 (2.7.13) installation, but not its python3 (3.5.3):\n\n$ python2 -m sysconfig | grep include\n include = \"/usr/local/include/python2.7\"\n platinclude = \"/usr/local/include/python2.7\"\n...\n$ python3 -m sysconfig | grep include\n include = \"/usr/include/python3.5m\"\n platinclude = \"/usr/include/python3.5m\"\n...\n\nLooking at the buildfarm animals that failed this way, 10 out of 11\nare using python 2.x. The lone exception is Andrew's prion. I wonder\nif there is something unusual about its python3 installation.\n\nAnyway, based on these results, we might have better luck switching to\nsysconfig after we start forcing python3. I'm tempted to resurrect the\nidea of changing configure's probe order to \"python3 python python2\"\nin the meantime, just so we can see how much of the buildfarm is ready\nfor that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Jan 2022 19:18:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "I wrote:\n> Anyway, based on these results, we might have better luck switching to\n> sysconfig after we start forcing python3.\n\nOn the other hand, that answer is not back-patchable, and we surely\nneed a back-patchable fix, because people will try to build the\nback branches against newer pythons.\n\nBased on the buildfarm results so far, the problem can be described\nas \"some installations say /usr/local when they should have said /usr\".\nI experimented with the attached delta patch and it fixes the problem\non my Debian 9 image. (I don't know Python, so there may be a better\nway to do this.) 
We'd have to also bump the minimum 3.x version to\n3.2, but that seems very unlikely to bother anyone.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 19 Jan 2022 11:21:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "On 19.01.22 01:18, Tom Lane wrote:\n> Anyway, based on these results, we might have better luck switching to\n> sysconfig after we start forcing python3. I'm tempted to resurrect the\n> idea of changing configure's probe order to \"python3 python python2\"\n> in the meantime, just so we can see how much of the buildfarm is ready\n> for that.\n\nThis seems sensible in any case, given that we have quasi-committed to \nenforcing Python 3 soon.\n\n\n", "msg_date": "Wed, 19 Jan 2022 17:49:26 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 19.01.22 01:18, Tom Lane wrote:\n>> ... I'm tempted to resurrect the\n>> idea of changing configure's probe order to \"python3 python python2\"\n>> in the meantime, just so we can see how much of the buildfarm is ready\n>> for that.\n\n> This seems sensible in any case, given that we have quasi-committed to \n> enforcing Python 3 soon.\n\nDone. (I couldn't find any equivalent logic in the MSVC build scripts\nthough; is there something I missed?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jan 2022 15:40:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "On Wed, Jan 19, 2022 at 9:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> Done. 
(I couldn't find any equivalent logic in the MSVC build scripts\n> though; is there something I missed?)\n>\n> MSVC will use the path configured in src\\tools\\msvc\\config.pl $config->{\"python\"},\nthere is no ambiguity.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Wed, Jan 19, 2022 at 9:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nDone.  (I couldn't find any equivalent logic in the MSVC build scripts\nthough; is there something I missed?)MSVC will use the path configured in src\\tools\\msvc\\config.pl $config->{\"python\"}, there is no ambiguity.Regards,Juan José Santamaría Flecha", "msg_date": "Wed, 19 Jan 2022 22:20:37 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 19.01.22 01:18, Tom Lane wrote:\n>>> ... I'm tempted to resurrect the\n>>> idea of changing configure's probe order to \"python3 python python2\"\n>>> in the meantime, just so we can see how much of the buildfarm is ready\n>>> for that.\n\n>> This seems sensible in any case, given that we have quasi-committed to \n>> enforcing Python 3 soon.\n\n> Done.\n\nThe early returns are not great: we have about half a dozen machines\nso far that are finding python3, and reporting sane-looking Python\ninclude paths, but not finding Python.h. They're all Linux-oid\nmachines, so I suppose what is going on is that they have the base\npython3 package installed but not python3-dev or local equivalent.\n\nI want to leave that patch in place long enough so we can get a\nfairly full survey of which machines are OK and which are not,\nbut I suppose I'll have to revert it tomorrow or so. 
We did\npromise the owners a month to adjust their configurations.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jan 2022 17:47:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n> On Wed, Jan 19, 2022 at 9:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Done. (I couldn't find any equivalent logic in the MSVC build scripts\n>> though; is there something I missed?)\n\n> MSVC will use the path configured in src\\tools\\msvc\\config.pl $config->{\"python\"},\n> there is no ambiguity.\n\nAh, right. We have only three active Windows animals that are building\nwith python, and all three are using 3.something, so that side of the\nhouse seems to be ready to go.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jan 2022 20:27:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "\nOn 1/18/22 19:18, Tom Lane wrote:\n> I wrote:\n>> Are we sure it's an issue within Python, rather than something we\n>> could dodge by invoking sysconfig differently? It's hard to believe\n>> that sysconfig could be totally unfit for the purpose of finding out\n>> the include path and would remain so for multiple years.\n> I dug up a Debian 9 image and found that I could reproduce the problem\n> against its python2 (2.7.13) installation, but not its python3 (3.5.3):\n>\n> $ python2 -m sysconfig | grep include\n> include = \"/usr/local/include/python2.7\"\n> platinclude = \"/usr/local/include/python2.7\"\n> ...\n> $ python3 -m sysconfig | grep include\n> include = \"/usr/include/python3.5m\"\n> platinclude = \"/usr/include/python3.5m\"\n> ...\n>\n> Looking at the buildfarm animals that failed this way, 10 out of 11\n> are using python 2.x. 
The lone exception is Andrew's prion. I wonder\n> if there is something unusual about its python3 installation.\n\n\n\nIt's an Amazon Linux instance, and using their packages, which seem a\nbit odd (there's nothing in /usr/local/include). Maybe we should be\nlooking at INCLUDEPY?\n\n\n[ec2-user@ip-172-31-22-42 bf]$ python3 -m sysconfig | grep include\n    include = \"/usr/local/include/python3.6m\"\n    platinclude = \"/usr/local/include/python3.6m\"\n    CONFIG_ARGS = \"'--build=x86_64-redhat-linux-gnu'\n'--host=x86_64-redhat-linux-gnu' '--target=x86_64-amazon-linux-gnu'\n'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr'\n'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc'\n'--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64'\n'--libexecdir=/usr/libexec' '--localstatedir=/var'\n'--sharedstatedir=/var/lib' '--mandir=/usr/share/man'\n'--infodir=/usr/share/info' '--enable-ipv6' '--enable-shared'\n'--with-computed-gotos=yes' '--with-dbmliborder=gdbm:ndbm:bdb'\n'--with-system-expat' '--with-system-ffi'\n'--enable-loadable-sqlite-extensions' '--with-dtrace' '--with-valgrind'\n'--without-ensurepip' '--enable-optimizations'\n'build_alias=x86_64-redhat-linux-gnu'\n'host_alias=x86_64-redhat-linux-gnu'\n'target_alias=x86_64-amazon-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall\n-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector\n--param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC\n-fwrapv  ' 'LDFLAGS= -g  ' 'CPPFLAGS= '\n'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'\"\n    CONFINCLUDEDIR = \"/usr/include\"\n    CONFINCLUDEPY = \"/usr/include/python3.6m\"\n    INCLDIRSTOMAKE = \"/usr/include  /usr/include/python3.6m\"\n    INCLUDEDIR = \"/usr/include\"\n    INCLUDEPY = \"/usr/include/python3.6m\"\n\n\nI have upgraded it to python 3.8, but got similar results.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 20 Jan 2022 10:04:56 -0500", "msg_from": "Andrew 
Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "I wrote:\n> The early returns are not great: we have about half a dozen machines\n> so far that are finding python3, and reporting sane-looking Python\n> include paths, but not finding Python.h. They're all Linux-oid\n> machines, so I suppose what is going on is that they have the base\n> python3 package installed but not python3-dev or local equivalent.\n\n> I want to leave that patch in place long enough so we can get a\n> fairly full survey of which machines are OK and which are not,\n> but I suppose I'll have to revert it tomorrow or so. We did\n> promise the owners a month to adjust their configurations.\n\nI have now reverted that patch, but I think this was a highly\nworthwhile bit of reconnaissance. It identified 18 animals\nthat had incomplete python3 installations (versus only 13\nthat definitely or possibly lack python3 altogether). Their\nowners most likely thought they were already good to go for the\nchangeover, so without this experiment we'd have had a whole lot\nof buildfarm red when the real change is made.\n\nI've notified the owners of these results.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jan 2022 14:26:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "I wrote:\n> Based on the buildfarm results so far, the problem can be described\n> as \"some installations say /usr/local when they should have said /usr\".\n> I experimented with the attached delta patch and it fixes the problem\n> on my Debian 9 image. (I don't know Python, so there may be a better\n> way to do this.) We'd have to also bump the minimum 3.x version to\n> 3.2, but that seems very unlikely to bother anyone.\n\nI did a little more digging into this. 
The python2 package on\nmy Deb9 (actually Raspbian) system says it is 2.7.13, but\n/usr/lib/python2.7/sysconfig.py is different from what I find in\na virgin Python 2.7.13 tarball, as per attached diff. I conclude\nthat somebody at Debian decided that Python should live under\n/usr/local, and changed sysconfig.py to match, but then failed\nto adjust the actual install scripts to agree, because there is\ncertainly nothing installed under /usr/local. (I don't know\nenough about Debian packaging to find the smoking gun though;\nwhat apt-get claims is the source package contains no trace of\nthis diff.) There's no sign of comparable changes in\n/usr/lib/python3.5/sysconfig.py on the same machine, either.\n\nSo I think this can fairly be characterized as brain-dead packaging\nerror, and we should just hack around it as per my previous patch.\n\nIn other news, I switched prairiedog and gaur to python 3.2.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 23 Jan 2022 16:06:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Hi,\n\nOn 2022-01-23 16:06:21 -0500, Tom Lane wrote:\n> I wrote:\n> > Based on the buildfarm results so far, the problem can be described\n> > as \"some installations say /usr/local when they should have said /usr\".\n> > I experimented with the attached delta patch and it fixes the problem\n> > on my Debian 9 image. (I don't know Python, so there may be a better\n> > way to do this.) We'd have to also bump the minimum 3.x version to\n> > 3.2, but that seems very unlikely to bother anyone.\n> \n> I did a little more digging into this. The python2 package on\n> my Deb9 (actually Raspbian) system says it is 2.7.13, but\n> /usr/lib/python2.7/sysconfig.py is different from what I find in\n> a virgin Python 2.7.13 tarball, as per attached diff. 
I conclude\n> that somebody at Debian decided that Python should live under\n> /usr/local, and changed sysconfig.py to match, but then failed\n> to adjust the actual install scripts to agree, because there is\n> certainly nothing installed under /usr/local. (I don't know\n> enough about Debian packaging to find the smoking gun though;\n> what apt-get claims is the source package contains no trace of\n> this diff.) There's no sign of comparable changes in\n> /usr/lib/python3.5/sysconfig.py on the same machine, either.\n\n> + 'posix_local': {\n> + 'stdlib': '{base}/lib/python{py_version_short}',\n> + 'platstdlib': '{platbase}/lib/python{py_version_short}',\n> + 'purelib': '{base}/local/lib/python{py_version_short}/dist-packages',\n> + 'platlib': '{platbase}/local/lib/python{py_version_short}/dist-packages',\n> + 'include': '{base}/local/include/python{py_version_short}',\n> + 'platinclude': '{platbase}/local/include/python{py_version_short}',\n> + 'scripts': '{base}/local/bin',\n> + 'data': '{base}/local',\n> + },\n> + 'deb_system': {\n> + 'stdlib': '{base}/lib/python{py_version_short}',\n> + 'platstdlib': '{platbase}/lib/python{py_version_short}',\n> + 'purelib': '{base}/lib/python{py_version_short}/dist-packages',\n> + 'platlib': '{platbase}/lib/python{py_version_short}/dist-packages',\n> + 'include': '{base}/include/python{py_version_short}',\n> + 'platinclude': '{platbase}/include/python{py_version_short}',\n> + 'scripts': '{base}/bin',\n> + 'data': '{base}',\n> + },\n> 'posix_home': {\n\n\nHm. It seems the intent of the different paths you show is that we can specify\nwhich type of path we want. The one to locally installed extensions, or the\ndistribution ones. 
So we'd have to specify the scheme to get the other include\npath?\n\nandres@awork3:~$ python2\nPython 2.7.18 (default, Sep 24 2021, 09:39:51)\n[GCC 10.3.0] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import sysconfig\n>>> sysconfig.get_path('include')\n'/usr/local/include/python2.7'\n>>> sysconfig.get_path('include', 'posix_prefix')\n'/usr/include/python2.7'\n>>> sysconfig.get_path('include', 'posix_local')\n'/usr/local/include/python2.7'\n\nSo it seems we could do something like\n\nsysconfig.get_path('include', 'posix_prefix' if os.name == 'posix' else os.name)\n\nor\n\nscheme = sysconfig._get_default_scheme()\n# Work around Debian / Ubuntu returning paths not useful for finding python headers\nif scheme == 'posix_local':\n scheme = 'posix_prefix'\nsysconfig.get_path('include', scheme = scheme)\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 23 Jan 2022 14:59:32 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Hi,\n\nOn 2022-01-23 16:06:21 -0500, Tom Lane wrote:\n> (I don't know\n> enough about Debian packaging to find the smoking gun though;\n> what apt-get claims is the source package contains no trace of\n> this diff.) 
There's no sign of comparable changes in\n> /usr/lib/python3.5/sysconfig.py on the same machine, either.\n\nFWIW, here's the steps to find it (on a debian 9 instance):\n\ndpkg -S /usr/lib/python2.7/sysconfig.py\nlibpython2.7-minimal:amd64: /usr/lib/python2.7/sysconfig.py\nmkdir /tmp/aptsrc\ncd /tmp/aptsrc\napt-get source libpython2.7-minimal\nroot@283a48b8d701:/tmp/aptsrc# grep -lR sysconfig.py python2.7*/debian/\npython2.7-2.7.13/debian/changelog\npython2.7-2.7.13/debian/patches/distutils-install-layout.diff\npython2.7-2.7.13/debian/patches/mangle-fstack-protector.diff\npython2.7-2.7.13/debian/patches/ext-no-libpython-link.diff\npython2.7-2.7.13/debian/patches/multiarch.diff\npython2.7-2.7.13/debian/patches/issue9189.diff\npython2.7-2.7.13/debian/patches/debug-build.diff\npython2.7-2.7.13/debian/patches/distutils-sysconfig.diff\npython2.7-2.7.13/debian/patches/disable-some-tests.patch\n\nThe relevant part of distutils-install-layout.diff explaining this is:\n\n+(0)\n+ Starting with Python-2.6 Debian/Ubuntu uses for the Python which comes within\n+ the Linux distribution a non-default name for the installation directory. This\n+ is to avoid overwriting of the python modules which come with the distribution,\n+ which unfortunately is the upstream behaviour of the installation tools. 
The\n+ non-default name in :file:`/usr/local` is used not to overwrite a local python\n+ installation (defaulting to :file:`/usr/local`).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 23 Jan 2022 15:07:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-01-23 16:06:21 -0500, Tom Lane wrote:\n>> + 'posix_local': {\n>> + 'stdlib': '{base}/lib/python{py_version_short}',\n>> + 'platstdlib': '{platbase}/lib/python{py_version_short}',\n>> + 'purelib': '{base}/local/lib/python{py_version_short}/dist-packages',\n>> + 'platlib': '{platbase}/local/lib/python{py_version_short}/dist-packages',\n>> + 'include': '{base}/local/include/python{py_version_short}',\n>> + 'platinclude': '{platbase}/local/include/python{py_version_short}',\n>> + 'scripts': '{base}/local/bin',\n>> + 'data': '{base}/local',\n>> + },\n>> + 'deb_system': {\n>> + 'stdlib': '{base}/lib/python{py_version_short}',\n>> + 'platstdlib': '{platbase}/lib/python{py_version_short}',\n>> + 'purelib': '{base}/lib/python{py_version_short}/dist-packages',\n>> + 'platlib': '{platbase}/lib/python{py_version_short}/dist-packages',\n>> + 'include': '{base}/include/python{py_version_short}',\n>> + 'platinclude': '{platbase}/include/python{py_version_short}',\n>> + 'scripts': '{base}/bin',\n>> + 'data': '{base}',\n>> + },\n>> 'posix_home': {\n\n> Hm. It seems the intent of the different paths you show is that we can specify\n> which type of path we want. The one to locally installed extensions, or the\n> distribution ones. So we'd have to specify the scheme to get the other include\n> path?\n\nIt may be that one of the other \"scheme\" values accurately describes\nDebian's actual layout of this package. I didn't check, because the\nscheme is defined to be platform-specific. 
Specifying a particular\nvalue for it would therefore break other platforms. Anyway, trying\nto figure out whether we're on a Debian package with this mistake\ndoesn't seem any cleaner than what I proposed. (In particular,\nblindly changing to a different scheme without a check to see\nwhat's really in the filesystem seems doomed to failure.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 23 Jan 2022 18:11:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Hi,\n\nOn 2022-01-23 18:11:41 -0500, Tom Lane wrote:\n> It may be that one of the other \"scheme\" values accurately describes\n> Debian's actual layout of this package. I didn't check, because the\n> scheme is defined to be platform-specific.\n\nposix_prefix does, as far as I can see.\n\n\n> Specifying a particular value for it would therefore break other platforms.\n\nHence the suggestion to only force posix_prefix when posix_local (the debian\ninvention) otherwise would get used...\n\n\n> Anyway, trying to figure out whether we're on a Debian package with this\n> mistake doesn't seem any cleaner than what I proposed. 
(In particular,\n> blindly changing to a different scheme without a check to see what's really\n> in the filesystem seems doomed to failure.)\n\nIf we make it depend on _get_default_scheme() == 'posix_local' that shouldn't\nbe a risk, because that's the debian addition...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 23 Jan 2022 15:19:11 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The relevant part of distutils-install-layout.diff explaining this is:\n\n> +(0)\n> + Starting with Python-2.6 Debian/Ubuntu uses for the Python which comes within\n> + the Linux distribution a non-default name for the installation directory. This\n> + is to avoid overwriting of the python modules which come with the distribution,\n> + which unfortunately is the upstream behaviour of the installation tools.\n\nYeah, I figured that the explanation was something like that. Too bad\nthey didn't get it right.\n\nI stopped to wonder if maybe the problem is that sysconfig.py is from the\n\"different distribution\" that they're worried about here, but it doesn't\nlook like it:\n\ntgl@rpi3:~$ dpkg -S /usr/lib/python2.7/sysconfig.py\nlibpython2.7-minimal:armhf: /usr/lib/python2.7/sysconfig.py\ntgl@rpi3:~$ dpkg -S /usr/include/python2.7/Python.h \nlibpython2.7-dev:armhf: /usr/include/python2.7/Python.h\n\nOh well. 
For a moment there I thought maybe this was a \"missing\ndev package\" kind of problem, but it's hard to come to any other\nconclusion than \"packager screwed up\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 23 Jan 2022 18:24:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-01-23 18:11:41 -0500, Tom Lane wrote:\n>> Anyway, trying to figure out whether we're on a Debian package with this\n>> mistake doesn't seem any cleaner than what I proposed. (In particular,\n>> blindly changing to a different scheme without a check to see what's really\n>> in the filesystem seems doomed to failure.)\n\n> If we make it depend on _get_default_scheme() == 'posix_local' that shouldn't\n> be a risk, because that's the debian addition...\n\nYeah, but we don't know whether there are any versions of the Debian\npackaging in which they fixed the file layout, so that 'posix_local'\nactually does describe the layout. I do not think that we are wise\nto suppose we know which scheme to use without a check on what's\nactually there.\n\nI could go for \"if we don't see Python.h where it's claimed to be,\ntry again with scheme = posix_prefix\". But I'm still not convinced\nthat that's noticeably cleaner than the hack I suggested.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 23 Jan 2022 18:31:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Hi,\n\nOn 2022-01-23 18:31:44 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-01-23 18:11:41 -0500, Tom Lane wrote:\n> >> Anyway, trying to figure out whether we're on a Debian package with this\n> >> mistake doesn't seem any cleaner than what I proposed. 
(In particular,\n> >> blindly changing to a different scheme without a check to see what's really\n> >> in the filesystem seems doomed to failure.)\n> \n> > If we make it depend on _get_default_scheme() == 'posix_local' that shouldn't\n> > be a risk, because that's the debian addition...\n> \n> Yeah, but we don't know whether there are any versions of the Debian\n> packaging in which they fixed the file layout, so that 'posix_local'\n> actually does describe the layout.\n\nI think posix_local try to achieve something different than what you assume it\ndoes. It's intended to return the location to which \"locally\" intalled python\nextension install their files (including headers) - after having the problem\nthat such local python package installations overwrite (and thus broke) files\ninstalled via the system mechanism.\n\nSo posix_local works \"by design\" if it returns paths in /usr/local that do not\ncontain a python installation. If it did, it wouldn't achieve the goal.\n\nIt's definitely crappily documented. And probably not a great approach as a\nwhole. But...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 23 Jan 2022 15:46:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "I wrote:\n> Yeah, but we don't know whether there are any versions of the Debian\n> packaging in which they fixed the file layout, so that 'posix_local'\n> actually does describe the layout.\n\nActually ... scraping the buildfarm to see what we're currently\nfinding shows that the following machines are reporting that\n/usr/local/include/pythonN.N really is the include directory:\n\nconchuela\ncurculio\nflorican\ngombessa\njabiru\nlapwing\nloach\nlongfin\nmarabou\nmorepork\nperipatus\nplover\n\nNow, most of those are BSD machines --- but lapwing isn't.\nIt says\n\nchecking for python... 
(cached) /usr/bin/python\nconfigure: using python 3.6.9 (default, Jan 14 2022, 06:45:55) \nchecking for Python distutils module... yes\nchecking Python configuration directory... /usr/local/lib/python3.6/config-3.6m-i386-linux-gnu\nchecking Python include directories... -I/usr/local/include/python3.6m\nchecking how to link an embedded Python application... -L/usr/local/lib -lpython3.6m -lpthread -ldl -lutil -lrt -lm\n\nNot sure what to make of that --- maybe that's a handmade build?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 23 Jan 2022 18:53:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-01-23 18:31:44 -0500, Tom Lane wrote:\n>> Yeah, but we don't know whether there are any versions of the Debian\n>> packaging in which they fixed the file layout, so that 'posix_local'\n>> actually does describe the layout.\n\n> I think posix_local try to achieve something different than what you assume it\n> does. It's intended to return the location to which \"locally\" intalled python\n> extension install their files (including headers) - after having the problem\n> that such local python package installations overwrite (and thus broke) files\n> installed via the system mechanism.\n\nOkay, but surely they'd have thought of packages that just want to find\nout where the system Python headers are? Having this be the default\nbehavior seems like it breaks as much as it fixes. 
(Of course, maybe\nthat's why they gave up on it.)\n\nAnyway, I don't mind trying your second suggestion\n\nscheme = sysconfig._get_default_scheme()\n# Work around Debian / Ubuntu returning paths not useful for finding python headers\nif scheme == 'posix_local':\n scheme = 'posix_prefix'\nsysconfig.get_path('include', scheme = scheme)\n\nIf it doesn't work everywhere, we can adjust it later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 23 Jan 2022 19:00:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Hi,\n\nOn 2022-01-23 19:00:41 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-01-23 18:31:44 -0500, Tom Lane wrote:\n> >> Yeah, but we don't know whether there are any versions of the Debian\n> >> packaging in which they fixed the file layout, so that 'posix_local'\n> >> actually does describe the layout.\n> \n> > I think posix_local try to achieve something different than what you assume it\n> > does. It's intended to return the location to which \"locally\" intalled python\n> > extension install their files (including headers) - after having the problem\n> > that such local python package installations overwrite (and thus broke) files\n> > installed via the system mechanism.\n> \n> Okay, but surely they'd have thought of packages that just want to find\n> out where the system Python headers are?\n\nI think this might be problem on our own end, actually. The distutils.sysconfig\ncode did\na = '-I' + distutils.sysconfig.get_python_inc(False)\nb = '-I' + distutils.sysconfig.get_python_inc(True)\n\nwhich the patch upthread changed to\n\n+a = '-I' + sysconfig.get_path('include')\n+b = '-I' + sysconfig.get_path('platinclude')\n\nbut I think that's possibly not quite the right translation?\n\nThe recommended way to find flags to compile against python appears to be the\npython$version-config binary. 
To which we might not want to switch.\n\nBut even so, it seems using sysconfig.get_config_vars('INCLUDEPY') or such\nseems like it might be a better translation than the above\nsysconfig.get_path() stuff?\n\nFor me that returns more sensible paths.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 23 Jan 2022 16:42:41 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think this might be problem on our own end, actually. The distutils.sysconfig\n> code did\n> a = '-I' + distutils.sysconfig.get_python_inc(False)\n> b = '-I' + distutils.sysconfig.get_python_inc(True)\n> which the patch upthread changed to\n> +a = '-I' + sysconfig.get_path('include')\n> +b = '-I' + sysconfig.get_path('platinclude')\n> but I think that's possibly not quite the right translation?\n\nI don't buy it. The sysconfig documentation says pretty clearly\nthat get_path('include') and get_path('platinclude') are supposed\nto return the directories we want, and there's nothing there\nsuggesting that we ought to magically know to look in a\nnon-default scheme.\n\n(I do note that the documentation says there's no direct\nequivalent to what get_python_inc does, which is scary.)\n\n> But even so, it seems using sysconfig.get_config_vars('INCLUDEPY') or such\n> seems like it might be a better translation than the above\n> sysconfig.get_path() stuff?\n\nCan you find ANY documentation suggesting that INCLUDEPY is\nmeant as a stable API for outside code to use? That seems\nfar more fragile than anything else we've discussed, even\nif it happens to work today.\n\nI remain of the persuasion that these Debian packages are\nbroken. The fact that they've not perpetuated the scheme\ninto their python3 packages shows that they came to the\nsame conclusion. 
We should not be inventing usage patterns\nbased on a belief that it's supposed to work like this,\nbecause what we'll mainly get out of that is failures on\nother platforms.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 23 Jan 2022 20:50:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Hi,\n\nOn 2022-01-23 20:50:23 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I think this might be problem on our own end, actually. The distutils.sysconfig\n> > code did\n> > a = '-I' + distutils.sysconfig.get_python_inc(False)\n> > b = '-I' + distutils.sysconfig.get_python_inc(True)\n> > which the patch upthread changed to\n> > +a = '-I' + sysconfig.get_path('include')\n> > +b = '-I' + sysconfig.get_path('platinclude')\n> > but I think that's possibly not quite the right translation?\n>\n> I don't buy it. The sysconfig documentation says pretty clearly\n> that get_path('include') and get_path('platinclude') are supposed\n> to return the directories we want, and there's nothing there\n> suggesting that we ought to magically know to look in a\n> non-default scheme.\n\nI'm not really convinced. Note that the whole thing is prefixed with\n\n Every new component that is installed using distutils or a Distutils-based\n system will follow the same scheme to copy its file in the right places.\n\nand then\n\n Each scheme is itself composed of a series of paths and each path has a\n unique identifier. Python currently uses eight paths:\n\nand that get_path()'s documentation says:\n\n If scheme is provided, it must be a value from the list returned by\n get_scheme_names(). Otherwise, the default scheme for the current platform is\n used.\n\n(with some 2.7 vs 3.x differences)\n\nThe list of schemas explicitly includes stuff like posix_home, posix_user,\nnt_user, which all won't contain python.h in 'include'. 
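To make that concrete, here's a throwaway probe (not something to ship in python.m4) that dumps every scheme's 'include' path; scheme names and paths vary per platform, and on a Debian-patched interpreter the list also contains 'posix_local' pointing under /usr/local:

```python
import sysconfig

# Dump the 'include' path of each install scheme this Python knows about.
for scheme in sorted(sysconfig.get_scheme_names()):
    try:
        print('%-18s -> %s' % (scheme, sysconfig.get_path('include', scheme)))
    except KeyError:
        # Defensive: a scheme might lack an 'include' entry, or a
        # substitution variable might be undefined on this platform.
        print('%-18s -> (not resolvable here)' % scheme)
```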
I don't see anything\nimplying scheme on some platform isn't *_user or such.\n\n\n> > But even so, it seems using sysconfig.get_config_vars('INCLUDEPY') or such\n> > seems like it might be a better translation than the above\n> > sysconfig.get_path() stuff?\n>\n> Can you find ANY documentation suggesting that INCLUDEPY is\n> meant as a stable API for outside code to use? That seems\n> far more fragile than anything else we've discussed, even\n> if it happens to work today.\n\nNo, not really. There generally seems to be very little documentation about\nwhat one is supposed to use when embedding python (rather than building a\npython module). The only thing I really see is:\n\nhttps://docs.python.org/3/extending/embedding.html#compiling-and-linking-under-unix-like-systems\n\nwhich says to use python-config.\n\nNote that we are already using get_config_vars() for LIBDIR, LDLIBRARY,\nLDVERSION, VERSION, LIBS, ... which all seem equally undocumented.\n\n\n> I remain of the persuasion that these Debian packages are\n> broken. The fact that they've not perpetuated the scheme\n> into their python3 packages shows that they came to the\n> same conclusion.\n\nYea, I don't like it at all.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 23 Jan 2022 18:24:28 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> No, not really. There generally seems to be very little documentation about\n> what one is supposed to use when embedding python (rather than building a\n> python module). The only thing I really see is:\n\n> https://docs.python.org/3/extending/embedding.html#compiling-and-linking-under-unix-like-systems\n\n> which says to use python-config.\n\nYeah :-(. I don't really want to go there, because it will break\nexisting setups. 
An example is that on a few machines I have\npointed the build to non-default Python installations by doing\nthings like\n\tln -s /path/to/desired/python ~/bin/python3\nSo there's no matching python-config in my PATH at all.\nYeah, I can change that, but that would be a dealbreaker for\nback-patching this, I think.\n\nGetting back to the INCLUDEPY solution: I see nothing equivalent\nto that for the \"platform-dependent include directory\". But maybe\nwe don't really need that? Not clear.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 23 Jan 2022 21:31:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Hi,\n\nOn 2022-01-23 21:31:52 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > No, not really. There generally seems to be very little documentation about\n> > what one is supposed to use when embedding python (rather than building a\n> > python module). The only thing I really see is:\n>\n> > https://docs.python.org/3/extending/embedding.html#compiling-and-linking-under-unix-like-systems\n>\n> > which says to use python-config.\n>\n> Yeah :-(. I don't really want to go there, because it will break\n> existing setups.\n\nYea, it seems to introduce a whole set of new complexities (finding python\nfrom python-config, mismatching python-config and explicitly specified python,\n...). And it doesn't exist on windows either :(.\n\n\n> Getting back to the INCLUDEPY solution: I see nothing equivalent\n> to that for the \"platform-dependent include directory\". But maybe\n> we don't really need that? Not clear.\n\nI don't really understand what the various \"platform\" variables / paths are\nsupposed to do. 
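A quick probe of what the two paths resolve to (on a stock installation both come out as the same directory):

```python
import sysconfig

# 'include' is nominally where Python.h lives; 'platinclude' is the
# platform-specific header directory (pyconfig.h). On ordinary
# installations both expand to the same string.
print('include     :', sysconfig.get_path('include'))
print('platinclude :', sysconfig.get_path('platinclude'))
```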
I think it might only differ when compiling stuff as part of\n(or against) the python source tree.\n\nTo avoid too noisy breakages, we could have python.m4 emit INCLUDEPY and then\nsearch the bf logs in a day or three?\n\nNot that that's any guarantee, but it's maybe worth a bit that INCLUDEPY is\none of the few wars that sysconfig.py explicitly computes for windows builds [2].\n\nGreetings,\n\nAndres Freund\n\n[2] https://github.com/python/cpython/blob/3.10/Lib/sysconfig.py#L487\n\n\n", "msg_date": "Sun, 23 Jan 2022 18:53:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> To avoid too noisy breakages, we could have python.m4 emit INCLUDEPY and then\n> search the bf logs in a day or three?\n\n+1, it'd give us some info without breaking the farm.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 23 Jan 2022 22:03:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Hi,\n\nOn 2022-01-23 18:53:01 -0800, Andres Freund wrote:\n> I don't really understand what the various \"platform\" variables / paths are\n> supposed to do.\n\nThe code says:\n\n\n> def get_python_inc(plat_specific=0, prefix=None):\n> \"\"\"Return the directory containing installed Python header files.\n> \n> If 'plat_specific' is false (the default), this is the path to the\n> non-platform-specific header files, i.e. Python.h and so on;\n> otherwise, this is the path to platform-specific header files\n> (namely pyconfig.h).\n> ...\n\nLooking at the code for get_python_inc() in 2.7, it seems that plat_specific\ntoggles between CONFINCLUDEPY and INCLUDEPY. Except on windows, where it uses\nget_path('include') for both. 
sysconfig.py sets INCLUDEPY to\nget_path('include') on windows, so we'd be good with INCLUDEPY there.\n\n\n\n> To avoid too noisy breakages, we could have python.m4 emit INCLUDEPY and then\n> search the bf logs in a day or three?\n\nMaybe something like the attached? Not particularly nice, but should give us\nmost of the relevant information?\n\nGreetings,\n\nAndres Freund", "msg_date": "Sun, 23 Jan 2022 19:49:57 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "On 24.01.22 03:53, Andres Freund wrote:\n> On 2022-01-23 21:31:52 -0500, Tom Lane wrote:\n>> Andres Freund<andres@anarazel.de> writes:\n>>> No, not really. There generally seems to be very little documentation about\n>>> what one is supposed to use when embedding python (rather than building a\n>>> python module). The only thing I really see is:\n>>> https://docs.python.org/3/extending/embedding.html#compiling-and-linking-under-unix-like-systems\n>>> which says to use python-config.\n>> Yeah :-(. I don't really want to go there, because it will break\n>> existing setups.\n> Yea, it seems to introduce a whole set of new complexities (finding python\n> from python-config, mismatching python-config and explicitly specified python,\n> ...). 
And it doesn't exist on windows either :(.\n\nAlso note that python-config is itself a Python script that uses \nsysconfig and includes code like this:\n\n elif opt in ('--includes', '--cflags'):\n flags = ['-I' + sysconfig.get_path('include'),\n '-I' + sysconfig.get_path('platinclude')]\n\nSo this would just do the same thing we are already doing anyway.\n\n\n", "msg_date": "Mon, 24 Jan 2022 21:48:21 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Hi,\n\nOn 2022-01-24 21:48:21 +0100, Peter Eisentraut wrote:\n> Also note that python-config is itself a Python script that uses sysconfig\n> and includes code like this:\n\nHuh. It's a shell script on my debian system. Looks like the python source\ntree has both. Not sure what / who decides which is used.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 24 Jan 2022 12:54:52 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Also note that python-config is itself a Python script that uses \n> sysconfig and includes code like this:\n\n> elif opt in ('--includes', '--cflags'):\n> flags = ['-I' + sysconfig.get_path('include'),\n> '-I' + sysconfig.get_path('platinclude')]\n\n> So this would just do the same thing we are already doing anyway.\n\nIt used to look like that, but at least in my 3.6.8 installation\non RHEL8, it's been rewritten to be a shell script that doesn't\ndepend on sysconfig at all.\n\nThe result is sufficiently bletcherous that you'd have thought\nthey'd reconsider getting rid of sysconfig :-(. 
Also, it\ndefinitely won't work on Windows.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jan 2022 16:05:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Hi,\n\nOn 2022-01-23 19:49:57 -0800, Andres Freund wrote:\n> > To avoid too noisy breakages, we could have python.m4 emit INCLUDEPY and then\n> > search the bf logs in a day or three?\n> \n> Maybe something like the attached? Not particularly nice, but should give us\n> most of the relevant information?\n\nFWIW, so far all 73 animals that reported on HEAD that they ran with the\nchange and that currently detect as with_python='yes', find Python.h via\nINCLUDEPY the same as via get_python_inc(). This includes systems like gadwall\nwhere sysconfig.get_path('include') returned the erroneous\n/usr/local/include/python2.7.\n\nOf course that doesn't guarantee in itself that Python.h is usable that\nway. But none of the systems report a get_python_inc(False) differing from\nget_python_inc(True), or from the value of INCLUDEPY. So I don't see a reason\nfor why it'd not?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 24 Jan 2022 14:22:27 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> ... But none of the systems report a get_python_inc(False) differing from\n> get_python_inc(True), or from the value of INCLUDEPY. So I don't see a reason\n> for why it'd not?\n\nYeah, I was just noticing that. 
It looks like the whole business\nwith checking both get_python_inc(False) and get_python_inc(True)\nhas been useless from the start: none of the buildfarm animals report\nmore than one -I switch in \"checking Python include directories\".\n\nIt's a little bit too soon to decide that INCLUDEPY is reliably equal\nto that, but if it still looks that way tomorrow, I'll be satisfied.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jan 2022 17:45:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "I wrote:\n> Yeah, I was just noticing that. It looks like the whole business\n> with checking both get_python_inc(False) and get_python_inc(True)\n> has been useless from the start: none of the buildfarm animals report\n> more than one -I switch in \"checking Python include directories\".\n\nAlso, that appears to be true even in the oldest runs that vendikar\nstill has data for, back in 2015. So it's not something that they\ncleaned up recently.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jan 2022 18:15:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "I wrote:\n> It's a little bit too soon to decide that INCLUDEPY is reliably equal\n> to that, but if it still looks that way tomorrow, I'll be satisfied.\n\nAs of now, 92 buildfarm animals have reported results from f032f63e7.\nEvery single one of them reports that all the different methods you\ntested give the same answer. So it looks to me like we should just\ngo with get_config_var('INCLUDEPY') and be happy.\n\nI guess next steps are to revert f032f63e7 and then retry e0e567a10\nwith that change. 
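In Python terms, that change amounts to asking the target interpreter for its build-time header directory rather than a scheme-dependent path; the actual python.m4 incantation may read differently, but roughly:

```python
import sysconfig

# Ask the interpreter for the header directory recorded in its
# build-time configuration (a Makefile variable on POSIX builds).
include_dir = sysconfig.get_config_var('INCLUDEPY')
print('-I' + include_dir)
```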
Who's going to do the honors?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jan 2022 12:45:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Hi,\n\nOn 2022-01-25 12:45:15 -0500, Tom Lane wrote:\n> As of now, 92 buildfarm animals have reported results from f032f63e7.\n> Every single one of them reports that all the different methods you\n> tested give the same answer. So it looks to me like we should just\n> go with get_config_var('INCLUDEPY') and be happy.\n> \n> I guess next steps are to revert f032f63e7 and then retry e0e567a10\n> with that change. Who's going to do the honors?\n\nI assume Peter is done working for the day. I'm stuck in meetings and stuff\nfor another 2-3 hours. I can give it a go after that, unless you do so\nbefore.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Jan 2022 13:35:52 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-01-25 12:45:15 -0500, Tom Lane wrote:\n>> I guess next steps are to revert f032f63e7 and then retry e0e567a10\n>> with that change. Who's going to do the honors?\n\n> I assume Peter is done working for the day. I'm stuck in meetings and stuff\n> for another 2-3 hours. 
I can give it a go after that, unless you do so\n> before.\n\nOK, will do.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jan 2022 18:33:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "On 2022-01-25 18:33:55 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-01-25 12:45:15 -0500, Tom Lane wrote:\n> >> I guess next steps are to revert f032f63e7 and then retry e0e567a10\n> >> with that change. Who's going to do the honors?\n> \n> > I assume Peter is done working for the day. I'm stuck in meetings and stuff\n> > for another 2-3 hours. I can give it a go after that, unless you do so\n> > before.\n> \n> OK, will do.\n\nThanks! Looks pretty good so far. Including on machines that were broken in\ntake 1...\n\n\n", "msg_date": "Tue, 25 Jan 2022 21:06:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Hi,\n\nSorry I somehow missed that email.\n\nOn Mon, Jan 24, 2022 at 7:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Now, most of those are BSD machines --- but lapwing isn't.\n> It says\n>\n> checking for python... (cached) /usr/bin/python\n> configure: using python 3.6.9 (default, Jan 14 2022, 06:45:55)\n> checking for Python distutils module... yes\n> checking Python configuration directory... /usr/local/lib/python3.6/config-3.6m-i386-linux-gnu\n> checking Python include directories... -I/usr/local/include/python3.6m\n> checking how to link an embedded Python application... -L/usr/local/lib -lpython3.6m -lpthread -ldl -lutil -lrt -lm\n>\n> Not sure what to make of that --- maybe that's a handmade build?\n\nYes it is. Lapwing is a vanilla debian 7, and the backports only\noffers python 3.2. 
So in anticipation of the switch to meson I\ncompiled the latest python 3.6, as it's supposed to be the oldest\nsupported version.\n\n\n", "msg_date": "Wed, 26 Jan 2022 19:03:00 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Thanks! Looks pretty good so far. Including on machines that were broken in\n> take 1...\n\nJust about all of the buildfarm has reported in now, and it's all good.\nSo now we need to discuss whether we want to back-patch this.\n\nPros: avoid configure warning now (not worth much); avoid outright\nbuild failure on Python 3.12+ in future.\n\nCons: breaks compatibility with Python 2.6 and 3.1.\n\nThere are probably not many people using current Postgres builds\nwith 2.6 or 3.1, but we can't rule out that there are some; and\nmoving the compatibility goalposts in minor releases is generally\nnot nice. 
On the other hand, it's very foreseeable that somebody\nwill want to build our back branches against 3.12 once it's out.\n\n3.12 is scheduled to start beta in roughly May of 2023 (assuming\nthey hold to their annual release cadence, which seems like a\ngood bet).\n\nThe compromise I propose is to back-patch into branches that\nwill still be in-support at that point, which are v11 and up.\nv10 will be dead, and it's perhaps a shade more likely than the\nlater branches to be getting used with hoary Python versions,\nso I think the odds favor not changing it.\n\nWe could also wait until closer to 2023 before doing anything,\nbut I fear we'd forget until complaints start to show up.\nI'd rather get this done while it's front-of-mind.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jan 2022 17:53:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Hi,\n\nOn 2022-01-27 17:53:02 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Thanks! Looks pretty good so far. Including on machines that were broken in\n> > take 1...\n>\n> Just about all of the buildfarm has reported in now, and it's all good.\n> So now we need to discuss whether we want to back-patch this.\n>\n> Pros: avoid configure warning now (not worth much); avoid outright\n> build failure on Python 3.12+ in future.\n>\n> Cons: breaks compatibility with Python 2.6 and 3.1.\n\nHow about adding a note about the change to this set of minor releases, and\nbackpatch in the next set?\n\n2.6 has been out of support since October 29, 2013. 2.7 was released\n2010-07-03 and has been EOL 2020-01-01.\n\n\n> There are probably not many people using current Postgres builds\n> with 2.6 or 3.1, but we can't rule out that there are some; and\n> moving the compatibility goalposts in minor releases is generally\n> not nice. 
On the other hand, it's very foreseeable that somebody\n> will want to build our back branches against 3.12 once it's out.\n\nI don't see much point in worrying somebody still building plpython with 2.6,\ngiven its age. I feel a tad more compassion with a future self that wants to\nbuild a by-then EOL version of postgres, and plpython fails to build. We\ndidn't commit to keeping plpython building, but it's in my default build\nscript, so ...\n\n\n> We could also wait until closer to 2023 before doing anything,\n> but I fear we'd forget until complaints start to show up.\n> I'd rather get this done while it's front-of-mind.\n\nI vote for backpatching all the way either now, or after the next set of minor\nreleases is tagged.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 27 Jan 2022 15:13:37 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "[ I was hoping for more opinions, but I guess nobody cares but us ]\n\nAndres Freund <andres@anarazel.de> writes:\n> On 2022-01-27 17:53:02 -0500, Tom Lane wrote:\n>> So now we need to discuss whether we want to back-patch this.\n>> Pros: avoid configure warning now (not worth much); avoid outright\n>> build failure on Python 3.12+ in future.\n>> Cons: breaks compatibility with Python 2.6 and 3.1.\n\n> How about adding a note about the change to this set of minor releases, and\n> backpatch in the next set?\n\nMeh. Nobody looks at minor release notes to find out what will happen\nin some other minor release. Moreover, the sort of people who might\nbe adversely affected are probably not absorbing every minor release\nright away, so they'd very likely not see the advance warning anyway.\n\n> I don't see much point in worrying somebody still building plpython with 2.6,\n> given its age. 
I feel a tad more compassion with a future self that wants to\n> build a by-then EOL version of postgres, and plpython fails to build. We\n> didn't commit to keeping plpython building, but it's in my default build\n> script, so ...\n\nHmm, well, we're certainly not making this change in pre-v10 releases,\nso I'm not sure that changing v10 will make things much easier for your\nfuture self. But it's unusual for us to make back-patching decisions\non the sort of basis I proposed here, so I'm okay with just going back\nto v10 instead.\n\n> I vote for backpatching all the way either now, or after the next set of minor\n> releases is tagged.\n\nIf nobody else has weighed in by tomorrow, I'll backpatch to v10.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 31 Jan 2022 17:18:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "On Mon, Jan 31, 2022 at 05:18:47PM -0500, Tom Lane wrote:\n> [ I was hoping for more opinions, but I guess nobody cares but us ]\n> \n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-01-27 17:53:02 -0500, Tom Lane wrote:\n> >> So now we need to discuss whether we want to back-patch this.\n> >> Pros: avoid configure warning now (not worth much); avoid outright\n> >> build failure on Python 3.12+ in future.\n> >> Cons: breaks compatibility with Python 2.6 and 3.1.\n\n> If nobody else has weighed in by tomorrow, I'll backpatch to v10.\n\nWorks for me. I agree wanting Python 3.12 w/ PG10.latest is far more likely\nthan wanting Python 2.6 or 3.1. If someone lodges a non-academic complaint,\nwe could have back branches fallback to the old way if they detect a Python\nversion needing the old way. 
I doubt anyone will complain.\n\n\n", "msg_date": "Mon, 31 Jan 2022 17:50:55 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Mon, Jan 31, 2022 at 05:18:47PM -0500, Tom Lane wrote:\n>> If nobody else has weighed in by tomorrow, I'll backpatch to v10.\n\n> Works for me. I agree wanting Python 3.12 w/ PG10.latest is far more likely\n> than wanting Python 2.6 or 3.1. If someone lodges a non-academic complaint,\n> we could have back branches fallback to the old way if they detect a Python\n> version needing the old way. I doubt anyone will complain.\n\nI started to do that, but paused when the patch failed on v12, which\nI soon realized is because our minimum requirement before v13 was\nPython 2.4 not 2.6. That means we're moving the goalposts a bit\nfurther in the old branches than this discussion was presuming.\n\nI don't think this changes the conclusion any: there's still little\nchance that anyone wants to build PG against such old Python versions\nin 2022. So I'm going to go ahead with patching; but does anyone want\nto change their vote? (We can always \"git revert\".)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 01 Feb 2022 18:37:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" }, { "msg_contents": "On 2022-02-01 18:37:02 -0500, Tom Lane wrote:\n> So I'm going to go ahead with patching; but does anyone want\n> to change their vote? (We can always \"git revert\".)\n\n+1 for going ahead\n\n\n", "msg_date": "Tue, 1 Feb 2022 15:43:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replace uses of deprecated Python module distutils.sysconfig" } ]
[ { "msg_contents": "There is another snowball release out, and I have prepared a patch to \nintegrate it. Since it's quite big and mostly boring, I'm not attaching \nit here, but you can see it at\n\nhttps://github.com/petere/postgresql/commit/11eade9302d0a737a12f193c41160fb895c0bc67.patch\n\nThe upstream release notes are here:\n\nhttps://github.com/snowballstem/snowball/blob/v2.2.0/NEWS\n\nThere are no new user-visible features.\n\n\n", "msg_date": "Thu, 2 Dec 2021 08:30:36 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "snowball update" } ]
[ { "msg_contents": "pgcrypto tests use encode() and decode() calls to convert to/from hex \nencoding. This was from before the hex format was available in bytea. \nNow we can remove the extra explicit encoding/decoding calls and rely on \nthe default output format.", "msg_date": "Thu, 2 Dec 2021 10:22:49 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "pgcrypto: Remove explicit hex encoding/decoding from tests" }, { "msg_contents": "> On 2 Dec 2021, at 10:22, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> pgcrypto tests use encode() and decode() calls to convert to/from hex encoding. This was from before the hex format was available in bytea. Now we can remove the extra explicit encoding/decoding calls and rely on the default output format.\n\nMy eyes glazed over a bit but definitely a +1 on the idea.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 2 Dec 2021 10:46:46 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pgcrypto: Remove explicit hex encoding/decoding from tests" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> pgcrypto tests use encode() and decode() calls to convert to/from hex \n> encoding. This was from before the hex format was available in bytea. 
\n> Now we can remove the extra explicit encoding/decoding calls and rely on \n> the default output format.\n\nGenerally +1, but I see you removed some instances of\n\n--- ensure consistent test output regardless of the default bytea format\n-SET bytea_output TO escape;\n\nI think that the principle still applies that this should work regardless\nof the installation's default bytea format, so I'd recommend putting\n\n-- ensure consistent test output regardless of the default bytea format\nSET bytea_output TO hex;\n\nat the top of each file instead.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Dec 2021 13:30:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgcrypto: Remove explicit hex encoding/decoding from tests" }, { "msg_contents": "On 02.12.21 19:30, Tom Lane wrote:\n> Generally +1, but I see you removed some instances of\n> \n> --- ensure consistent test output regardless of the default bytea format\n> -SET bytea_output TO escape;\n> \n> I think that the principle still applies that this should work regardless\n> of the installation's default bytea format, so I'd recommend putting\n> \n> -- ensure consistent test output regardless of the default bytea format\n> SET bytea_output TO hex;\n> \n> at the top of each file instead.\n\npg_regress.c sets bytea_output to hex already.\n\n\n", "msg_date": "Fri, 3 Dec 2021 16:48:07 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: pgcrypto: Remove explicit hex encoding/decoding from tests" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> pg_regress.c sets bytea_output to hex already.\n\nAh, right. Nevermind that then.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Dec 2021 10:50:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgcrypto: Remove explicit hex encoding/decoding from tests" } ]
[ { "msg_contents": "The parser's type-coercion heuristics include some special rules\nfor types belonging to the STRING category, which are predicated\non the assumption that such types are reasonably general-purpose\nstring types. This assumption has been violated by a few types,\none error being ancient and the rest not so much:\n\n1. The \"char\" type is labeled as STRING category, even though it's\n(a) deprecated for general use and (b) unable to store more than\none byte, making \"string\" quite a misnomer.\n\n2. Various types we invented to store special catalog data, such as\npg_node_tree and pg_ndistinct, are also labeled as STRING category.\nThis seems like a fairly bad idea too.\n\nAn example of the reasons not to treat these types as being\ngeneral-purpose strings can be seen at [1], where the \"char\"\ntype has acquired some never-intended cast behaviors. Taking\nthat to an extreme, we currently accept\n\nregression=# select '(1,2)'::point::\"char\";\n char \n------\n (\n(1 row)\n\nMy first thought about fixing point 1 was to put \"char\" into some\nother typcategory, but that turns out to break some of psql's\ncatalog queries, with results like:\n\nregression=# \\dp\nERROR: operator is not unique: unknown || \"char\"\nLINE 16: E' (' || polcmd || E'):'\n ^\nHINT: Could not choose a best candidate operator. You might need to add explicit type casts.\n\nI looked briefly at rejiggering the casting rules so that that\nwould still work, but it looks like a mess. The problem is that\nunknown || \"char\" can match either text || text or text || anynonarray,\nand it's only the special preference for preferred types *of the\nsame typcategory as the input type* that allows us to prefer one\nof those over the other.\n\nHence, what 0001 below does is to leave \"char\" in the string\ncategory, but explicitly disable its access to the special\ncast-via-I/O rules. 
This is a hack for sure, but it won't have\nany surprising side-effects on other types, which changing the\ngeneral operator-matching rules could. The only thing it breaks\nin check-world is that contrib/citext expects casting between\n\"char\" and citext to work. I think that's not a very reasonable\nexpectation so I just took out the relevant test cases. (If anyone\nis hot about it, we could add explicit support for such casts in\nthe citext module ... but it doesn't seem worth the trouble.)\n\nAs for point 2, I haven't found any negative side-effects of\ntaking the special types out of the string category, so 0002\nattached invents a separate TYPCATEGORY_INTERNAL category to\nput them in.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CAOC8YUcXymCMpC5d%3D7JvcwyjXPTT00WeebOM3UqTBreOD1N9hw%40mail.gmail.com", "msg_date": "Thu, 02 Dec 2021 16:22:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Dubious usage of TYPCATEGORY_STRING" }, { "msg_contents": "On 12/02/21 16:22, Tom Lane wrote:\n> ... types belonging to the STRING category, which are predicated\n> on the assumption that such types are reasonably general-purpose\n> string types.\n\nThis prods me to submit a question I've been incubating for a while.\n\nIs there any way to find out, from the catalogs or in any automatable way,\nwhich types are implemented with a dependence on the database encoding\n(or on some encoding)?\n\nYou might think S category types, for a start: name, text, character,\nvarchar, all dependent on the server encoding, as you'd expect. The ones\nTom moves here to category Z were most of the ones I wondered about.\n\nThen there's \"char\". It's category S, but does not apply the server\nencoding. You could call it an 8-bit int type, but it's typically used\nas a character, making it well-defined for ASCII values and not so\nfor others, just like SQL_ASCII encoding. 
You could as well say that\nthe \"char\" type has a defined encoding of SQL_ASCII at all times,\nregardless of the database encoding.\n\nU types are a mixed bag. Category U includes bytea (no character encoding)\nand xml/json/jsonb (server encoding). Also tied to the server encoding\nare cstring and unknown.\n\nAs an aside, I think it's unfortunate that the xml type has this implicit\ndependency on the server encoding, when XML is by definition Unicode.\nIt means there are valid XML documents that PostgreSQL may not be able\nto store, and which documents those are depends on what the database\nencoding is. I think json and jsonb suffer in the same way.\n\nChanging that would be disruptive at this point and I'm not suggesting it,\nbut there might be value in the thought experiment to see what the\nalternate universe would look like.\n\nIn the alternate world, you would know that certain datatypes were\ninherently encoding-oblivious (numbers, polygons, times, ...), certain\nothers are bound to the server encoding (text, varchar, name, ...), and\nstill others are bound to a known encoding other than the server encoding:\nthe ISO SQL NCHAR type (bound to an alternate configurable database\nencoding), \"char\" (always SQL_ASCII), xml/json/jsonb (always with the full\nUnicode repertoire, however they choose to represent it internally).\n\nThat last parenthetical reminded me that I'm really talking\nabout 'repertoire' here, which ISO SQL treats as a separate topic from\n'encoding'. 
Exactly how an xml or jsonb type is represented internally\nmight be none of my business (unless I am developing a binary-capable\ndriver), but it's fair to ask what its repertoire is, and whether that's\nfull Unicode or not, and if not, whether the repertoire changes when some\nserver setting does.\n\nI also think in that ideal world, or even this one, you could want\nsome way to query the catalogs to answer that basic question\nabout some given type.\n\nAm I right that we simply don't have that? I currently answer such questions\nby querying the catalog for the type's _send or _recv function name, then\ngoing off to read the code, but that's hard to automate.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 3 Dec 2021 13:42:30 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "types reliant on encodings [was Re: Dubious usage of\n TYPCATEGORY_STRING]" }, { "msg_contents": "On 02.12.21 22:22, Tom Lane wrote:\n> My first thought about fixing point 1 was to put \"char\" into some\n> other typcategory, but that turns out to break some of psql's\n> catalog queries, with results like:\n> \n> regression=# \\dp\n> ERROR: operator is not unique: unknown || \"char\"\n> LINE 16: E' (' || polcmd || E'):'\n> ^\n> HINT: Could not choose a best candidate operator. You might need to add explicit type casts.\n\nCould we add explicit casts (like polcmd::text) here? 
Or would it break \ntoo much?\n\n\n\n", "msg_date": "Tue, 7 Dec 2021 16:48:21 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Dubious usage of TYPCATEGORY_STRING" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 02.12.21 22:22, Tom Lane wrote:\n>> My first thought about fixing point 1 was to put \"char\" into some\n>> other typcategory, but that turns out to break some of psql's\n>> catalog queries, with results like:\n>> \n>> regression=# \\dp\n>> ERROR: operator is not unique: unknown || \"char\"\n>> LINE 16: E' (' || polcmd || E'):'\n>> ^\n>> HINT: Could not choose a best candidate operator. You might need to add explicit type casts.\n\n> Could we add explicit casts (like polcmd::text) here? Or would it break \n> too much?\n\nI assumed it'd break too much to consider doing that. But I suppose\nthat since a typcategory change would be initdb-forcing anyway, maybe\nit's not out of the question. I'll investigate and see exactly how\nmany places would need an explicit cast.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Dec 2021 10:51:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Dubious usage of TYPCATEGORY_STRING" }, { "msg_contents": "On 03.12.21 19:42, Chapman Flack wrote:\n> Is there any way to find out, from the catalogs or in any automatable way,\n> which types are implemented with a dependence on the database encoding\n> (or on some encoding)?\n\nWhat is this needed for? C code can internally do whatever it wants, \nand the database encoding is effectively a constant, so there is no need \nfor server-side code to be very much concerned about whether types do this.\n\nAlso, \"types\" is perhaps the wrong subject here. Types only contain \ninput and output functions and a few more bits. 
Additional functions \noperating on the type could look at the server encoding without the type \nand its core functions knowing about it.\n\n\n\n", "msg_date": "Tue, 7 Dec 2021 16:52:42 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: types reliant on encodings [was Re: Dubious usage of\n TYPCATEGORY_STRING]" }, { "msg_contents": "On Thu, Dec 2, 2021 at 4:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> An example of the reasons not to treat these types as being\n> general-purpose strings can be seen at [1], where the \"char\"\n> type has acquired some never-intended cast behaviors. Taking\n> that to an extreme, we currently accept\n>\n> regression=# select '(1,2)'::point::\"char\";\n> char\n> ------\n> (\n> (1 row)\n\nWhat's wrong with that?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Dec 2021 10:59:45 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Dubious usage of TYPCATEGORY_STRING" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Dec 2, 2021 at 4:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> An example of the reasons not to treat these types as being\n>> general-purpose strings can be seen at [1], where the \"char\"\n>> type has acquired some never-intended cast behaviors. Taking\n>> that to an extreme, we currently accept\n>> \n>> regression=# select '(1,2)'::point::\"char\";\n>> char\n>> ------\n>> (\n>> (1 row)\n\n> What's wrong with that?\n\nWell, we don't allow things like\n\nregression=# select '(1,2)'::point::float8;\nERROR: cannot cast type point to double precision\nLINE 1: select '(1,2)'::point::float8;\n ^\n\nIt's not very clear to me why \"char\" should get a pass on that.\nWe allow such cases when the target is text/varchar/etc, but\nthe assumption is that the textual representation is sufficient\nfor your purposes. 
It's hard to claim that just the first\nbyte is a useful textual representation.\n\nWorse, PG is actually treating this as an assignment-level cast,\nso we accept this:\n\nregression=# create table t(f1 \"char\");\nCREATE TABLE\nregression=# insert into t values ('(1,2)'::point);\nINSERT 0 1\nregression=# table t;\n f1 \n----\n (\n(1 row)\n\nI definitely don't think that should have worked without\nany complaint.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Dec 2021 12:19:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Dubious usage of TYPCATEGORY_STRING" }, { "msg_contents": "On Tue, Dec 7, 2021 at 12:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > What's wrong with that?\n>\n> Well, we don't allow things like\n>\n> regression=# select '(1,2)'::point::float8;\n> ERROR: cannot cast type point to double precision\n> LINE 1: select '(1,2)'::point::float8;\n> ^\n>\n> It's not very clear to me why \"char\" should get a pass on that.\n> We allow such cases when the target is text/varchar/etc, but\n> the assumption is that the textual representation is sufficient\n> for your purposes. It's hard to claim that just the first\n> byte is a useful textual representation.\n\nFair enough, I guess. I am pretty skeptical of the merits of refusing\nan explicit cast. If I ran the zoo, I would probably choose to allow\nall such casts and make them coerce via IO when no other pathway is\navailable. 
But I get that's not our policy.\n\n> Worse, PG is actually treating this as an assignment-level cast,\n> so we accept this:\n>\n> regression=# create table t(f1 \"char\");\n> CREATE TABLE\n> regression=# insert into t values ('(1,2)'::point);\n> INSERT 0 1\n> regression=# table t;\n> f1\n> ----\n> (\n> (1 row)\n>\n> I definitely don't think that should have worked without\n> any complaint.\n\nYes, that one's a bridge too far, even for me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Dec 2021 13:04:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Dubious usage of TYPCATEGORY_STRING" }, { "msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> Could we add explicit casts (like polcmd::text) here? Or would it break \n>> too much?\n\n> I assumed it'd break too much to consider doing that. But I suppose\n> that since a typcategory change would be initdb-forcing anyway, maybe\n> it's not out of the question. I'll investigate and see exactly how\n> many places would need an explicit cast.\n\nUm, I definitely gave up too easily there. The one usage in \\dp\nseems to be the *only* thing that breaks in describe.c, and pg_dump\ndoesn't need any changes so far as check-world reveals. So let's\njust move \"char\" to another category, as attached.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 07 Dec 2021 15:24:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Dubious usage of TYPCATEGORY_STRING" }, { "msg_contents": "On 07.12.21 21:24, Tom Lane wrote:\n> I wrote:\n>> Peter Eisentraut<peter.eisentraut@enterprisedb.com> writes:\n>>> Could we add explicit casts (like polcmd::text) here? Or would it break\n>>> too much?\n>> I assumed it'd break too much to consider doing that. But I suppose\n>> that since a typcategory change would be initdb-forcing anyway, maybe\n>> it's not out of the question. 
I'll investigate and see exactly how\n>> many places would need an explicit cast.\n> Um, I definitely gave up too easily there. The one usage in \\dp\n> seems to be the*only* thing that breaks in describe.c, and pg_dump\n> doesn't need any changes so far as check-world reveals. So let's\n> just move \"char\" to another category, as attached.\n\nLooks good to me.\n\n\n", "msg_date": "Thu, 9 Dec 2021 16:27:31 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Dubious usage of TYPCATEGORY_STRING" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 07.12.21 21:24, Tom Lane wrote:\n>> Um, I definitely gave up too easily there. The one usage in \\dp\n>> seems to be the*only* thing that breaks in describe.c, and pg_dump\n>> doesn't need any changes so far as check-world reveals. So let's\n>> just move \"char\" to another category, as attached.\n\n> Looks good to me.\n\nPushed, thanks for reviewing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Dec 2021 14:12:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Dubious usage of TYPCATEGORY_STRING" }, { "msg_contents": "On 12/02/21 16:22, Tom Lane wrote:\n> taking the special types out of the string category, so 0002\n> attached invents a separate TYPCATEGORY_INTERNAL category to\n> put them in.\n\nOn the same general topic, was there a deliberate choice to put\ninet and cidr in TYPCATEGORY_NETWORK but macaddr and macaddr8\nin TYPCATEGORY_USER?\n\nIt looks like macaddr was put in category U (macaddr8 didn't exist yet)\nin bac3e83, the same commit that put inet and cidr into category I,\napparently in order to \"hew exactly to the behavior of the previous\nhardwired logic\", on the principle that \"any adjustment of the standard\nset of categories should be done separately\".\n\nThe birth of macaddr looks to have been back in 1998 in 2d69fd9, the\nsame commit that added 
'ipaddr'. Neither was added at that time to\nthe hardcoded switch in TypeCategory(). The plot thickens....\n\nipaddr became inet in 8849655 (8 Oct 1998). cidr was added in 858a3b5\n(21 Oct 1998).\n\nThen ca2995 added NETWORK_TYPE to TypeCategory and put inet and cidr\nin it (22 Oct 1998). Looks like that was done to reduce duplication\nof pg_proc entries between inet and cidr by allowing implicit coercion.\n\nAnd I guess you wouldn't want to suggest the existence of coercions\nbetween MAC addresses and inet addresses.\n\nBut there aren't any such casts present in pg_cast anyway, so is that\na persuasive present-day rationale for the (otherwise odd-seeming) split\nof these types across categories? They are grouped in a single\ndocumentation \"category\".\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 3 Jan 2022 13:36:52 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "TYPCATEGORY_{NETWORK,USER} [was Dubious usage of TYPCATEGORY_STRING]" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On the same general topic, was there a deliberate choice to put\n> inet and cidr in TYPCATEGORY_NETWORK but macaddr and macaddr8\n> in TYPCATEGORY_USER?\n\nHard to say how \"deliberate\" it was, at this remove of time.\n\nI do see an argument against reclassifying macaddr[8] into\nTYPCATEGORY_NETWORK now: we generally expect that if a\ncategory has a preferred type, any member type of the category\ncan be cast to that preferred type. (The fact that OID is\nmarked preferred breaks that rule, but it holds pretty well\notherwise.) 
I think this is why type interval has its own\ncategory rather than being within TYPCATEGORY_DATETIME.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Jan 2022 13:55:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: TYPCATEGORY_{NETWORK,USER} [was Dubious usage of\n TYPCATEGORY_STRING]" }, { "msg_contents": "On 01/03/22 13:55, Tom Lane wrote:\n> I do see an argument against reclassifying macaddr[8] into\n> TYPCATEGORY_NETWORK now: we generally expect that if a\n> category has a preferred type, any member type of the category\n> can be cast to that preferred type.\n\nI was wondering about the details of how that information gets used.\nIt seems partly redundant with what you learn from pg_cast. The\nCREATE TYPE documentation says:\n\n The category and preferred parameters can be used to help control\n which implicit cast will be applied in ambiguous situations. ...\n For types that have no implicit casts to or from any other types,\n it is sufficient to leave these settings at the defaults. However,\n for a group of related types that have implicit casts, it is often\n helpful ...\n\nwhich would suggest (to me on a first reading, anyway) that one starts\nin pg_cast to find out what implicit casts, if any, exist, and then\nlooks to category and preferred if needed to resolve any ambiguity\nthat remains.\n\nIf understood that way, it doesn't seem to imply any ill effect of\nhaving types within a category that might be partitioned into a few\ndisjoint subsets by \"implicit cast exists between\". (Such subsets\nmight be regarded as autodiscovered mini-categories.) 
But I could be\noff-base to understand it that way.\n\nAre there spots in the code where the expectation \"if a category has\na preferred type, any member type of the category can be cast to that\npreferred type\" really takes that stronger form?\n\nHmm, I guess I can see some spots in Chapter 10, in the rules for\nfinding best-match operators or functions, or resolving UNION/CASE\ntypes.\n\nThe UNION/CASE rules look like the effect might be benign: you have\nstep 4, inputs not of the same category => fail, then step 5, where\ndiscovery of a preferred type can foreclose consideration of other\ninputs, then step 6, implicit cast doesn't exist => fail. At first\nblush, maybe that only fails the same cases that (if you treated\nimplicit-cast-related subsets within a category as mini-categories)\nyou would have failed in step 4.\n\nThe operator and function resolution rules seem harder to reason about,\nand yeah, I haven't convinced myself their \"any candidate accepts a\npreferred type => discard candidates accepting non-preferred types\" rules\ncouldn't end up discarding the part of the solution space where the\nsolution is, if disjoint \"mini-categories\" exist.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 3 Jan 2022 14:54:18 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: TYPCATEGORY_{NETWORK,USER} [was Dubious usage of\n TYPCATEGORY_STRING]" } ]
[ { "msg_contents": "Hi,\n\nA small patch to the documentation about how to reduce the number of\nparameterized paths, because it kept me searching for a long time :-)\n(The code this documents is in add_paths_to_joinrel(), the loop\nforeach(lc, root->join_info_list).)\n\n\ndiff --git a/src/backend/optimizer/README b/src/backend/optimizer/README\nindex 41c120e0cd..79e270188d 100644\n--- a/src/backend/optimizer/README\n+++ b/src/backend/optimizer/README\n@@ -863,6 +863,19 @@ that. An exception occurs for parameterized paths for the RHS relation of\n a SEMI or ANTI join: in those cases, we can stop the inner scan after the\n first match, so it's primarily startup not total cost that we care about.\n \n+Furthermore, join trees involving parameterized paths are kept as left-deep\n+as possible; for nested loops consisting of inner joins only, bushy plans\n+are equivalent to left-deep ones, so keeping bushy plans is only a waste.\n+This is enforced by refusing to join parameterized paths together unless\n+the parameterization is resolved, *or* the remaining parameterization is\n+one that must cannot be delayed right away (because of outer join\n+restrictions). This ensures that we do not keep around large subplans that\n+are parameterized on a whole host of external relations, without losing\n+any plans. An exception is that we are allowed to keep a parameterization\n+around if we *partially* resolve it, i.e., we had a multi-part index and\n+resolved only one table from it. This is known as the \"star-schema\"\n+exception.\n+\n \n LATERAL subqueries\n ------------------\n\n/* Steinar */\n-- \nHomepage: https://www.sesse.net/\n\n\n", "msg_date": "Thu, 2 Dec 2021 23:57:27 +0100", "msg_from": "\"Steinar H. Gunderson\" <steinar+postgres@gunderson.no>", "msg_from_op": true, "msg_subject": "[PATCH] Document heuristics for parameterized paths" }, { "msg_contents": "On Mon, 6 Dec 2021 at 13:01, Steinar H. 
Gunderson\n<steinar+postgres@gunderson.no> wrote:\n>\n> +one that must cannot be delayed right away (because of outer join\n\nmust cannot?\n\n-- \ngreg\n\n\n", "msg_date": "Wed, 15 Dec 2021 23:22:22 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Document heuristics for parameterized paths" }, { "msg_contents": "On Wed, 15 Dec 2021 at 23:22, Greg Stark <stark@mit.edu> wrote:\n>\n> On Mon, 6 Dec 2021 at 13:01, Steinar H. Gunderson\n> <steinar+postgres@gunderson.no> wrote:\n> >\n> > +one that must cannot be delayed right away (because of outer join\n>\n> must cannot?\n\nActually on further reading... \"delayed right away\"?\n\n-- \ngreg\n\n\n", "msg_date": "Wed, 15 Dec 2021 23:23:12 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Document heuristics for parameterized paths" }, { "msg_contents": "On Wed, Dec 15, 2021 at 11:23:12PM -0500, Greg Stark wrote:\n>>> +one that must cannot be delayed right away (because of outer join\n>> must cannot?\n> Actually on further reading... \"delayed right away\"?\n\nThere are two ways of writing this sentence, and I see that I tried to use\nboth of them in the same sentence :-) What about something like the\nfollowing?\n\n This is enforced by refusing to join parameterized paths together unless\n the parameterization is resolved, *or* the remaining parameterization is\n one that cannot be resolved right away (because of outer join restrictions). \n\n/* Steinar */\n-- \nHomepage: https://www.sesse.net/\n\n\n", "msg_date": "Thu, 16 Dec 2021 17:04:40 +0100", "msg_from": "\"Steinar H. Gunderson\" <steinar+postgres@gunderson.no>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Document heuristics for parameterized paths" } ]
[ { "msg_contents": "It seems there are no environment variables corresponding to keepalives\netc. connection parameters in libpq. Is there any reason for this?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Fri, 03 Dec 2021 10:28:34 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "keepliaves etc. as environment variables" }, { "msg_contents": "Hi,\n\nOn 2021-12-03 10:28:34 +0900, Tatsuo Ishii wrote:\n> It seems there are no environment variables corresponding to keepalives\n> etc. connection parameters in libpq. Is there any reason for this?\n\nPGOPTIONS='-c tcp_keepalive_*=foo' should work.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Dec 2021 17:50:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: keepliaves etc. as environment variables" }, { "msg_contents": "> Hi,\n> \n> On 2021-12-03 10:28:34 +0900, Tatsuo Ishii wrote:\n>> It seems there are no environment variables corresponding to keepalives\n>> etc. connection parameters in libpq. Is there any reason for this?\n> \n> PGOPTIONS='-c tcp_keepalive_*=foo' should work.\n\nSorry I was not clear. I wanted to know why there are no specific\nenvironment variable for keepalives etc. like PGCONNECT_TIMEOUT.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Fri, 03 Dec 2021 10:58:49 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: keepliaves etc. as environment variables" }, { "msg_contents": "On Thu, Dec 2, 2021 at 8:59 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> > On 2021-12-03 10:28:34 +0900, Tatsuo Ishii wrote:\n> >> It seems there are no environment variables corresponding to keepalives\n> >> etc. 
connection parameters in libpq. Is there any reason for this?\n> >\n> > PGOPTIONS='-c tcp_keepalive_*=foo' should work.\n>\n> Sorry I was not clear. I wanted to know why there are no specific\n> environment variable for keepalives etc. like PGCONNECT_TIMEOUT.\n\nIn theory we could have an environment variable for every connection\nparameter, but there's some cost to that. For example, if some program\nwants to sanitize the environment of all PG-related environment\nvariables, it has more to do. It seems reasonable to me to have\nenvironment variables only for the most important connection\nparameters, rather than all of them. I would argue we've overdone it\nalready.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Dec 2021 15:52:14 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: keepliaves etc. as environment variables" } ]
[ { "msg_contents": "Hi,\n\nCurrently while changing the owner of ALL TABLES IN SCHEMA\npublication, it is not checked if the new owner has superuser\npermission or not. Added a check to throw an error if the new owner\ndoes not have superuser permission.\nAttached patch has the changes for the same. Thoughts?\n\nRegards,\nVignesh", "msg_date": "Fri, 3 Dec 2021 08:36:23 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Alter all tables in schema owner fix" }, { "msg_contents": "On Fri, Dec 3, 2021 at 2:06 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Currently while changing the owner of ALL TABLES IN SCHEMA\n> publication, it is not checked if the new owner has superuser\n> permission or not. Added a check to throw an error if the new owner\n> does not have superuser permission.\n> Attached patch has the changes for the same. Thoughts?\n>\n\nIt looks OK to me, but just two things:\n\n1) Isn't it better to name \"CheckSchemaPublication\" as\n\"IsSchemaPublication\", since it has a boolean return and also\ntypically CheckXXX type functions normally do checking and error-out\nif they find a problem.\n\n2) Since superuser_arg() caches previous input arg (last_roleid) and\nhas a fast-exit, and has been called immediately before for the FOR\nALL TABLES case, it would be better to write:\n\n+ if (CheckSchemaPublication(form->oid) && !superuser_arg(newOwnerId))\n\nas:\n\n+ if (!superuser_arg(newOwnerId) && IsSchemaPublication(form->oid))\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Fri, 3 Dec 2021 15:23:21 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Alter all tables in schema owner fix" }, { "msg_contents": "On 12/2/21, 7:07 PM, \"vignesh C\" <vignesh21@gmail.com> wrote:\r\n> Currently while changing the owner of ALL TABLES IN SCHEMA\r\n> publication, it is not checked if the new owner has superuser\r\n> permission or not. 
Added a check to throw an error if the new owner\r\n> does not have superuser permission.\r\n> Attached patch has the changes for the same. Thoughts?\r\n\r\nYeah, the documentation clearly states that \"the new owner of a FOR\r\nALL TABLES or FOR ALL TABLES IN SCHEMA publication must be a\r\nsuperuser\" [0].\r\n\r\n+/*\r\n+ * Check if any schema is associated with the publication.\r\n+ */\r\n+static bool\r\n+CheckSchemaPublication(Oid pubid)\r\n\r\nI don't think the name CheckSchemaPublication() accurately describes\r\nwhat this function is doing. I would suggest something like\r\nPublicationHasSchema() or PublicationContainsSchema(). Also, much of\r\nthis new function appears to be copied from GetPublicationSchemas().\r\nShould we just use that instead?\r\n\r\n+CREATE ROLE regress_publication_user3 LOGIN SUPERUSER;\r\n+GRANT regress_publication_user2 TO regress_publication_user3;\r\n+SET ROLE regress_publication_user3;\r\n+SET client_min_messages = 'ERROR';\r\n+CREATE PUBLICATION testpub4 FOR ALL TABLES IN SCHEMA pub_test;\r\n+RESET client_min_messages;\r\n+SET ROLE regress_publication_user;\r\n+ALTER ROLE regress_publication_user3 NOSUPERUSER;\r\n+SET ROLE regress_publication_user3;\r\n\r\nI think this test setup can be simplified a bit:\r\n\r\n CREATE ROLE regress_publication_user3 LOGIN;\r\n GRANT regress_publication_user2 TO regress_publication_user3;\r\n SET client_min_messages = 'ERROR';\r\n CREATE PUBLICATION testpub4 FOR ALL TABLES IN SCHEMA pub_test;\r\n RESET client_min_messages;\r\n ALTER PUBLICATION testpub4 OWNER TO regress_publication_user3;\r\n SET ROLE regress_publication_user3;\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/docs/devel/sql-alterpublication.html\r\n\r\n", "msg_date": "Fri, 3 Dec 2021 04:28:02 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Alter all tables in schema owner fix" }, { "msg_contents": "On Fri, Dec 3, 2021 at 9:58 AM Bossart, Nathan <bossartn@amazon.com> 
wrote:\n>\n> On 12/2/21, 7:07 PM, \"vignesh C\" <vignesh21@gmail.com> wrote:\n> > Currently while changing the owner of ALL TABLES IN SCHEMA\n> > publication, it is not checked if the new owner has superuser\n> > permission or not. Added a check to throw an error if the new owner\n> > does not have superuser permission.\n> > Attached patch has the changes for the same. Thoughts?\n>\n> Yeah, the documentation clearly states that \"the new owner of a FOR\n> ALL TABLES or FOR ALL TABLES IN SCHEMA publication must be a\n> superuser\" [0].\n>\n> +/*\n> + * Check if any schema is associated with the publication.\n> + */\n> +static bool\n> +CheckSchemaPublication(Oid pubid)\n>\n> I don't think the name CheckSchemaPublication() accurately describes\n> what this function is doing. I would suggest something like\n> PublicationHasSchema() or PublicationContainsSchema(). Also, much of\n> this new function appears to be copied from GetPublicationSchemas().\n> Should we just use that instead?\n\nI was thinking of changing it to IsSchemaPublication as suggested by\nGreg unless you feel differently. I did not use GetPublicationSchemas\nfunction because in our case we just need to check if there is any\nschema publication, we don't need the schema list to be prepared in\nthis case. That is the reason I wrote a new function which just checks\nif any schema is present or not for the publication. 
I'm planning to\nuse CheckSchemaPublication (renamed to IsSchemaPublication) so that\nthe list need not be prepared.\n\n> +CREATE ROLE regress_publication_user3 LOGIN SUPERUSER;\n> +GRANT regress_publication_user2 TO regress_publication_user3;\n> +SET ROLE regress_publication_user3;\n> +SET client_min_messages = 'ERROR';\n> +CREATE PUBLICATION testpub4 FOR ALL TABLES IN SCHEMA pub_test;\n> +RESET client_min_messages;\n> +SET ROLE regress_publication_user;\n> +ALTER ROLE regress_publication_user3 NOSUPERUSER;\n> +SET ROLE regress_publication_user3;\n>\n> I think this test setup can be simplified a bit:\n>\n> CREATE ROLE regress_publication_user3 LOGIN;\n> GRANT regress_publication_user2 TO regress_publication_user3;\n> SET client_min_messages = 'ERROR';\n> CREATE PUBLICATION testpub4 FOR ALL TABLES IN SCHEMA pub_test;\n> RESET client_min_messages;\n> ALTER PUBLICATION testpub4 OWNER TO regress_publication_user3;\n> SET ROLE regress_publication_user3;\n\nI will make this change in the next version.\n\nRegards,\nVIgnesh\n\n\n", "msg_date": "Fri, 3 Dec 2021 10:14:33 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Alter all tables in schema owner fix" }, { "msg_contents": "On Fri, Dec 3, 2021 at 9:53 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Fri, Dec 3, 2021 at 2:06 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Currently while changing the owner of ALL TABLES IN SCHEMA\n> > publication, it is not checked if the new owner has superuser\n> > permission or not. Added a check to throw an error if the new owner\n> > does not have superuser permission.\n> > Attached patch has the changes for the same. 
Thoughts?\n> >\n>\n> It looks OK to me, but just two things:\n>\n> 1) Isn't it better to name \"CheckSchemaPublication\" as\n> \"IsSchemaPublication\", since it has a boolean return and also\n> typically CheckXXX type functions normally do checking and error-out\n> if they find a problem.\n\nModified\n\n> 2) Since superuser_arg() caches previous input arg (last_roleid) and\n> has a fast-exit, and has been called immediately before for the FOR\n> ALL TABLES case, it would be better to write:\n>\n> + if (CheckSchemaPublication(form->oid) && !superuser_arg(newOwnerId))\n>\n> as:\n>\n> + if (!superuser_arg(newOwnerId) && IsSchemaPublication(form->oid))\n\nModified\n\nThanks for the comments, the attached v2 patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Fri, 3 Dec 2021 11:00:49 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Alter all tables in schema owner fix" }, { "msg_contents": "On Friday, December 3, 2021 1:31 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> \r\n> On Fri, Dec 3, 2021 at 9:53 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\r\n> >\r\n> > On Fri, Dec 3, 2021 at 2:06 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> > >\r\n> > > Currently while changing the owner of ALL TABLES IN SCHEMA\r\n> > > publication, it is not checked if the new owner has superuser\r\n> > > permission or not. Added a check to throw an error if the new owner\r\n> > > does not have superuser permission.\r\n> > > Attached patch has the changes for the same. 
Thoughts?\r\n> > >\r\n> >\r\n> > It looks OK to me, but just two things:\r\n> >\r\n> > 1) Isn't it better to name \"CheckSchemaPublication\" as\r\n> > \"IsSchemaPublication\", since it has a boolean return and also\r\n> > typically CheckXXX type functions normally do checking and error-out\r\n> > if they find a problem.\r\n> \r\n> Modified\r\n> \r\n> > 2) Since superuser_arg() caches previous input arg (last_roleid) and\r\n> > has a fast-exit, and has been called immediately before for the FOR\r\n> > ALL TABLES case, it would be better to write:\r\n> >\r\n> > + if (CheckSchemaPublication(form->oid) && !superuser_arg(newOwnerId))\r\n> >\r\n> > as:\r\n> >\r\n> > + if (!superuser_arg(newOwnerId) && IsSchemaPublication(form->oid))\r\n> \r\n> Modified\r\n> \r\n> Thanks for the comments, the attached v2 patch has the changes for the same.\r\n> \r\n\r\nThanks for your patch.\r\nI tested it and it fixed this problem as expected. It also passed \"make check-world\".\r\n\r\nRegards,\r\nTang\r\n", "msg_date": "Fri, 3 Dec 2021 07:54:06 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Alter all tables in schema owner fix" }, { "msg_contents": "On 12/2/21, 11:57 PM, \"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> wrote:\r\n> Thanks for your patch.\r\n> I tested it and it fixed this problem as expected. It also passed \"make check-world\".\r\n\r\n+1, the patch looks good to me, too. 
My only other suggestion would\r\nbe to move IsSchemaPublication() to pg_publication.c\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 3 Dec 2021 17:20:35 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Alter all tables in schema owner fix" }, { "msg_contents": "On Fri, Dec 03, 2021 at 05:20:35PM +0000, Bossart, Nathan wrote:\n> On 12/2/21, 11:57 PM, \"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> wrote:\n> > Thanks for your patch.\n> > I tested it and it fixed this problem as expected. It also passed \"make check-world\".\n> \n> +1, the patch looks good to me, too. My only other suggestion would\n> be to move IsSchemaPublication() to pg_publication.c\n\nThere is more to that, no? It seems to me that anything that opens\nPublicationNamespaceRelationId should be in pg_publication.c, so that\nwould include RemovePublicationSchemaById(). If you do that,\nGetSchemaPublicationRelations() could be local to pg_publication.c.\n\n+ tup = systable_getnext(scan);\n+ if (HeapTupleIsValid(tup))\n+ result = true;\nThis can be written as just \"result = HeapTupleIsValid(tup)\". Anyway,\nthis code also means that once we drop the schema this publication\nwon't be considered anymore as a schema publication, meaning that it\nalso makes this code weaker to actual cache lookup failures? 
I find\nthe semantics around pg_publication_namespace is bit weird because of\nthat, and inconsistent with the existing\npuballtables/pg_publication_rel.\n--\nMichael", "msg_date": "Mon, 6 Dec 2021 15:16:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Alter all tables in schema owner fix" }, { "msg_contents": "On Mon, Dec 6, 2021 at 11:46 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Dec 03, 2021 at 05:20:35PM +0000, Bossart, Nathan wrote:\n> > On 12/2/21, 11:57 PM, \"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> wrote:\n> > > Thanks for your patch.\n> > > I tested it and it fixed this problem as expected. It also passed \"make check-world\".\n> >\n> > +1, the patch looks good to me, too. My only other suggestion would\n> > be to move IsSchemaPublication() to pg_publication.c\n>\n> There is more to that, no? It seems to me that anything that opens\n> PublicationNamespaceRelationId should be in pg_publication.c, so that\n> would include RemovePublicationSchemaById().\n>\n\nIt is currently similar to RemovePublicationById,\nRemovePublicationRelById, etc. which are also in publicationcmds.c.\n\n> If you do that,\n> GetSchemaPublicationRelations() could be local to pg_publication.c.\n>\n> + tup = systable_getnext(scan);\n> + if (HeapTupleIsValid(tup))\n> + result = true;\n> This can be written as just \"result = HeapTupleIsValid(tup)\". 
Anyway,\n> this code also means that once we drop the schema this publication\n> won't be considered anymore as a schema publication, meaning that it\n> also makes this code weaker to actual cache lookup failures?\n>\n\nHow, can you be a bit more specific?\n\n> I find\n> the semantics around pg_publication_namespace is bit weird because of\n> that, and inconsistent with the existing\n> puballtables/pg_publication_rel.\n>\n\nWhat do you mean by inconsistent with puballtables/pg_publication_rel?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 6 Dec 2021 14:14:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Alter all tables in schema owner fix" }, { "msg_contents": "On Fri, Dec 3, 2021 at 10:50 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 12/2/21, 11:57 PM, \"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> wrote:\n> > Thanks for your patch.\n> > I tested it and it fixed this problem as expected. It also passed \"make check-world\".\n>\n> +1, the patch looks good to me, too. My only other suggestion would\n> be to move IsSchemaPublication() to pg_publication.c\n\nThanks for your comments, I have made the changes. Additionally I have\nrenamed IsSchemaPublication to is_schema_publication for keeping the\nnaming similar around the code. The attached v3 patch has the changes\nfor the same.\n\nRegards,\nVignesh", "msg_date": "Tue, 7 Dec 2021 08:21:40 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Alter all tables in schema owner fix" }, { "msg_contents": "On Tue, Dec 7, 2021 at 8:21 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks for your comments, I have made the changes. Additionally I have\n> renamed IsSchemaPublication to is_schema_publication for keeping the\n> naming similar around the code. The attached v3 patch has the changes\n> for the same.\n>\n\nThanks, the patch looks mostly good to me. 
I have slightly modified it\nto incorporate one of Michael's suggestions, ran pgindent, and\nmodified the commit message.\n\nI am planning to push the attached tomorrow unless there are further\ncomments. Michael, do let me know if you have any questions or\nobjections about this?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 7 Dec 2021 15:30:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Alter all tables in schema owner fix" }, { "msg_contents": "On Tue, Dec 7, 2021 at 9:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Thanks, the patch looks mostly good to me. I have slightly modified it\n> to incorporate one of Michael's suggestions, ran pgindent, and\n> modified the commit message.\n>\n\nLGTM, except in the patch commit message I'd change \"Create\nPublication\" to \"CREATE PUBLICATION\".\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Tue, 7 Dec 2021 21:46:36 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Alter all tables in schema owner fix" }, { "msg_contents": "On 12/7/21, 2:47 AM, \"Greg Nancarrow\" <gregn4422@gmail.com> wrote:\r\n> On Tue, Dec 7, 2021 at 9:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>>\r\n>> Thanks, the patch looks mostly good to me. 
I have slightly modified it\r\n>> to incorporate one of Michael's suggestions, ran pgindent, and\r\n>> modified the commit message.\r\n>>\r\n>\r\n> LGTM, except in the patch commit message I'd change \"Create\r\n> Publication\" to \"CREATE PUBLICATION\".\r\n\r\nLGTM, too.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 7 Dec 2021 17:49:57 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Alter all tables in schema owner fix" }, { "msg_contents": "On Tue, Dec 7, 2021 at 11:20 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 12/7/21, 2:47 AM, \"Greg Nancarrow\" <gregn4422@gmail.com> wrote:\n> > On Tue, Dec 7, 2021 at 9:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> Thanks, the patch looks mostly good to me. I have slightly modified it\n> >> to incorporate one of Michael's suggestions, ran pgindent, and\n> >> modified the commit message.\n> >>\n> >\n> > LGTM, except in the patch commit message I'd change \"Create\n> > Publication\" to \"CREATE PUBLICATION\".\n>\n> LGTM, too.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 8 Dec 2021 15:45:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Alter all tables in schema owner fix" } ]
[ { "msg_contents": "Hi,\n\nAlthough the pg_stat_activity has no entry for the sys logger and\nstats collector (because of no shared memory access), the wait events\nWAIT_EVENT_SYSLOGGER_MAIN and WAIT_EVENT_PGSTAT_MAIN are defined. They\nseem to be unnecessary. Passing 0 or some other undefined wait event\nvalue to the existing calls of WaitLatch and WaitLatchOrSocket instead\nof WAIT_EVENT_SYSLOGGER_MAIN/WAIT_EVENT_PGSTAT_MAIN, would work. We\ncan delete these wait events and their info from pgstat.c.\n\nI'm sure this is not so critical, but I'm just checking if someone\nfeels that they should be removed or have some other reasons for\nkeeping them.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 3 Dec 2021 12:42:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Do sys logger and stats collector need wait events\n WAIT_EVENT_SYSLOGGER_MAIN/_PGSTAT_MAIN?" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> Although the pg_stat_activity has no entry for the sys logger and\n> stats collector (because of no shared memory access), the wait events\n> WAIT_EVENT_SYSLOGGER_MAIN and WAIT_EVENT_PGSTAT_MAIN are defined. They\n> seem to be unnecessary. Passing 0 or some other undefined wait event\n> value to the existing calls of WaitLatch and WaitLatchOrSocket instead\n> of WAIT_EVENT_SYSLOGGER_MAIN/WAIT_EVENT_PGSTAT_MAIN, would work. We\n> can delete these wait events and their info from pgstat.c.\n\nWell ... mumble. The fact that these events are defined would lead\npeople to wonder why they're not hit, so there's a documentation reason\nto get rid of them. However, I quite dislike the suggestion of \"just\npass zero\"; that will probably draw compiler warnings, or if it doesn't\nit should. 
We'd have to invent some other \"unused\" wait event, and\nthen it's not clear that we've made any gain in intelligibility.\n\nOn the whole I'd be inclined to leave it alone. Even if the reporting\nmechanisms aren't able to report these events today, maybe they'll\nbe able to do so in future. The context of the stats collector\nprocess, in particular, seems likely to change.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Dec 2021 12:42:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Do sys logger and stats collector need wait events\n WAIT_EVENT_SYSLOGGER_MAIN/_PGSTAT_MAIN?" }, { "msg_contents": "On Fri, Dec 03, 2021 at 12:42:47PM -0500, Tom Lane wrote:\n> On the whole I'd be inclined to leave it alone. Even if the reporting\n> mechanisms aren't able to report these events today, maybe they'll\n> be able to do so in future. The context of the stats collector\n> process, in particular, seems likely to change.\n\nThese come from 6f3bd98, where I am pretty sure that I (or Robert)\ndefined those values to not have users guess what to pass down to\nWaitLatch(), assuming that this may become useful in the future. So\nI'd rather leave them be.\n--\nMichael", "msg_date": "Sun, 5 Dec 2021 09:53:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Do sys logger and stats collector need wait events\n WAIT_EVENT_SYSLOGGER_MAIN/_PGSTAT_MAIN?" } ]
[ { "msg_contents": "Hi,\n\nIt seems like there's an extra Logging_collector check before calling\nSysLogger_Start(). Note that the SysLogger_Start() has a check to\nreturn 0 if Logging_collector is off. This change is consistent with\nthe other usage of SysLogger_Start().\n\n /* If we have lost the log collector, try to start a new one */\n- if (SysLoggerPID == 0 && Logging_collector)\n+ if (SysLoggerPID == 0)\n SysLoggerPID = SysLogger_Start();\n\nAttaching a tiny patch to fix. Thoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 3 Dec 2021 13:28:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Remove extra Logging_collector check before calling SysLogger_Start()" }, { "msg_contents": "> On 3 Dec 2021, at 08:58, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n\n> It seems like there's an extra Logging_collector check before calling\n> SysLogger_Start(). Note that the SysLogger_Start() has a check to\n> return 0 if Logging_collector is off. 
This change is consistent with\n> the other usage of SysLogger_Start().\n> \n> /* If we have lost the log collector, try to start a new one */\n> - if (SysLoggerPID == 0 && Logging_collector)\n> + if (SysLoggerPID == 0)\n> SysLoggerPID = SysLogger_Start();\n\nI think the code reads clearer with the Logging_collector check left intact,\nand avoiding a function call in this codepath doesn't hurt.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 3 Dec 2021 09:13:14 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Remove extra Logging_collector check before calling\n SysLogger_Start()" }, { "msg_contents": "On Fri, Dec 3, 2021 at 1:43 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 3 Dec 2021, at 08:58, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > It seems like there's an extra Logging_collector check before calling\n> > SysLogger_Start(). Note that the SysLogger_Start() has a check to\n> > return 0 if Logging_collector is off. This change is consistent with\n> > the other usage of SysLogger_Start().\n> >\n> > /* If we have lost the log collector, try to start a new one */\n> > - if (SysLoggerPID == 0 && Logging_collector)\n> > + if (SysLoggerPID == 0)\n> > SysLoggerPID = SysLogger_Start();\n>\n> I think the code reads clearer with the Logging_collector check left intact,\n> and avoiding a function call in this codepath doesn't hurt.\n\nIn that case, can we remove if (!Logging_collector) within\nSysLogger_Start() and have the check outside? This will save function\ncall costs in the other places too. 
The pgarch_start and\nAutoVacuumingActive have checks outside PgArchStartupAllowed and\nAutoVacuumingActive.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 3 Dec 2021 14:02:27 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove extra Logging_collector check before calling\n SysLogger_Start()" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Fri, Dec 3, 2021 at 1:43 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 3 Dec 2021, at 08:58, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>> It seems like there's an extra Logging_collector check before calling\n>>> SysLogger_Start().\n\n>> I think the code reads clearer with the Logging_collector check left intact,\n>> and avoiding a function call in this codepath doesn't hurt.\n\n> In that case, can we remove if (!Logging_collector) within\n> SysLogger_Start() and have the check outside?\n\nI think it's fine as-is; good belt-and-suspenders-too programming.\nIt's not like the extra check inside SysLogger_Start() is costly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Dec 2021 12:45:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove extra Logging_collector check before calling\n SysLogger_Start()" }, { "msg_contents": "On Fri, Dec 03, 2021 at 12:45:34PM -0500, Tom Lane wrote:\n> I think it's fine as-is; good belt-and-suspenders-too programming.\n> It's not like the extra check inside SysLogger_Start() is costly.\n\n+1.\n--\nMichael", "msg_date": "Sun, 5 Dec 2021 09:49:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove extra Logging_collector check before calling\n SysLogger_Start()" } ]
[ { "msg_contents": "I've now closed the 2021-11 commitfest, ~36% of the patches were closed in some\nway (committed, returned with feedback, withdrawn or rejected) with 184 patches\nmoved to the next CF.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 3 Dec 2021 09:51:21 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Commitfest 2021-11 closed" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> I've now closed the 2021-11 commitfest, ~36% of the patches were closed in some\n> way (committed, returned with feedback, withdrawn or rejected) with 184 patches\n> moved to the next CF.\n\nThanks for all your hard work on managing the CF!\n\n(And particularly, thanks for being so aggressive about closing\nstalled patches instead of just punting them forward.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Dec 2021 10:27:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2021-11 closed" }, { "msg_contents": "> On Fri, Dec 03, 2021 at 09:51:21AM +0100, Daniel Gustafsson wrote:\n> I've now closed the 2021-11 commitfest, ~36% of the patches were closed in some\n> way (committed, returned with feedback, withdrawn or rejected) with 184 patches\n> moved to the next CF.\n\nImpressive numbers, thank you!\n\n\n", "msg_date": "Fri, 3 Dec 2021 17:22:12 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2021-11 closed" } ]
[ { "msg_contents": "[ breaking this off to an actual new thread ]\n\nChapman Flack <chap@anastigmatix.net> writes:\n> Is there any way to find out, from the catalogs or in any automatable way,\n> which types are implemented with a dependence on the database encoding\n> (or on some encoding)?\n\nNope. Base types are quite opaque; we don't have a lot of concepts\nof type properties. We do know which types respond to collations,\nwhich is at least adjacent to your question, but it's not the same.\n\n> In the alternate world, you would know that certain datatypes were\n> inherently encoding-oblivious (numbers, polygons, times, ...), certain\n> others are bound to the server encoding (text, varchar, name, ...), and\n> still others are bound to a known encoding other than the server encoding:\n> the ISO SQL NCHAR type (bound to an alternate configurable database\n> encoding), \"char\" (always SQL_ASCII), xml/json/jsonb (always with the full\n> Unicode repertoire, however they choose to represent it internally).\n\nAnother related problem here is that a \"name\" in a shared catalog could\nbe in some other encoding besides the current database's encoding.\nWe have no way to track that nor perform the implied conversions.\nI don't have a solution for that one either, but we should include\nit in the discussion if we're trying to improve the situation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Dec 2021 13:50:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: types reliant on encodings [was Re: Dubious usage of\n TYPCATEGORY_STRING]" } ]
[ { "msg_contents": "[ breaking off a different new thread ]\n\nChapman Flack <chap@anastigmatix.net> writes:\n> Then there's \"char\". It's category S, but does not apply the server\n> encoding. You could call it an 8-bit int type, but it's typically used\n> as a character, making it well-defined for ASCII values and not so\n> for others, just like SQL_ASCII encoding. You could as well say that\n> the \"char\" type has a defined encoding of SQL_ASCII at all times,\n> regardless of the database encoding.\n\nThis reminds me of something I've been intending to bring up, which\nis that the \"char\" type is not very encoding-safe. charout() for\nexample just regurgitates the single byte as-is. I think we deemed\nthat okay the last time anyone thought about it, but that was when\nsingle-byte encodings were the mainstream usage for non-ASCII data.\nIf you're using UTF8 or another multi-byte server encoding, it's\nquite easy to get an invalidly-encoded string this way, which at\nminimum is going to break dump/restore scenarios.\n\nI can think of at least three ways we might address this:\n\n* Forbid all non-ASCII values for type \"char\". This results in\nsimple and portable semantics, but it might break usages that\nwork okay today.\n\n* Allow such values only in single-byte server encodings. 
This\nis a bit messy, but it wouldn't break any cases that are not\nproblematic already.\n\n* Continue to allow non-ASCII values, but change charin/charout,\nchar_text, etc so that the external representation is encoding-safe\n(perhaps make it an octal or decimal number).\n\nEither of the first two ways would have to contemplate what to do\nwith disallowed values that snuck into the DB via pg_upgrade.\nThat leads me to think that the third way might be the most\npreferable, even though it's not terribly backward-compatible.\n\nThere's a nearby issue that we might do something about at the\nsame time, which is that chartoi4() and i4tochar() think that\nthe byte value of a \"char\" is signed, while all the other\noperations treat it as unsigned. I wouldn't be too surprised if\nthis behavior is the direct cause of the bug fixed in a6bd28beb.\nThe issue vanishes if we forbid non-ASCII values, but otherwise\nI'd be inclined to change these functions to treat the byte\nvalues as unsigned.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Dec 2021 14:12:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "The \"char\" type versus non-ASCII characters" }, { "msg_contents": "\nOn 12/3/21 14:12, Tom Lane wrote:\n> [ breaking off a different new thread ]\n>\n> Chapman Flack <chap@anastigmatix.net> writes:\n>> Then there's \"char\". It's category S, but does not apply the server\n>> encoding. You could call it an 8-bit int type, but it's typically used\n>> as a character, making it well-defined for ASCII values and not so\n>> for others, just like SQL_ASCII encoding. You could as well say that\n>> the \"char\" type has a defined encoding of SQL_ASCII at all times,\n>> regardless of the database encoding.\n> This reminds me of something I've been intending to bring up, which\n> is that the \"char\" type is not very encoding-safe. charout() for\n> example just regurgitates the single byte as-is. 
I think we deemed\n> that okay the last time anyone thought about it, but that was when\n> single-byte encodings were the mainstream usage for non-ASCII data.\n> If you're using UTF8 or another multi-byte server encoding, it's\n> quite easy to get an invalidly-encoded string this way, which at\n> minimum is going to break dump/restore scenarios.\n>\n> I can think of at least three ways we might address this:\n>\n> * Forbid all non-ASCII values for type \"char\". This results in\n> simple and portable semantics, but it might break usages that\n> work okay today.\n>\n> * Allow such values only in single-byte server encodings. This\n> is a bit messy, but it wouldn't break any cases that are not\n> problematic already.\n>\n> * Continue to allow non-ASCII values, but change charin/charout,\n> char_text, etc so that the external representation is encoding-safe\n> (perhaps make it an octal or decimal number).\n>\n> Either of the first two ways would have to contemplate what to do\n> with disallowed values that snuck into the DB via pg_upgrade.\n> That leads me to think that the third way might be the most\n> preferable, even though it's not terribly backward-compatible.\n>\n\n\nI don't like #2. Is #3 going to change the external representation only\nfor non-ASCII values? If so, that seems OK.  Changing it for ASCII\nvalues seems ugly. 
#1 is the simplest to implement and to understand,\nand I suspect it would break very little in practice, but others might\ndisagree with that assessment.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 3 Dec 2021 14:35:03 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 12/3/21 14:12, Tom Lane wrote:\n>> I can think of at least three ways we might address this:\n>> \n>> * Forbid all non-ASCII values for type \"char\". This results in\n>> simple and portable semantics, but it might break usages that\n>> work okay today.\n>> \n>> * Allow such values only in single-byte server encodings. This\n>> is a bit messy, but it wouldn't break any cases that are not\n>> problematic already.\n>> \n>> * Continue to allow non-ASCII values, but change charin/charout,\n>> char_text, etc so that the external representation is encoding-safe\n>> (perhaps make it an octal or decimal number).\n\n> I don't like #2.\n\nYeah, it's definitely messy --- for example, maybe é works in\na latin1 database but is rejected when you try to restore into\na DB with utf8 encoding.\n\n> Is #3 going to change the external representation only\n> for non-ASCII values? If so, that seems OK.\n\nRight, I envisioned that ASCII behaves the same but we'd use\na numeric representation for high-bit-set values. 
These\ncases could be told apart fairly easily by charin(), since\nthe numeric representation would always be three digits.\n\n> #1 is the simplest to implement and to understand,\n> and I suspect it would break very little in practice, but others might\n> disagree with that assessment.\n\nWe'd still have to decide what to do with pg_upgrade'd\nnon-ASCII values, so there's messiness there too.\nHaving charout() throw an error seems not very nice.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Dec 2021 14:42:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "\nOn 12/3/21 14:42, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 12/3/21 14:12, Tom Lane wrote:\n>>> I can think of at least three ways we might address this:\n>>>\n>>> * Forbid all non-ASCII values for type \"char\". This results in\n>>> simple and portable semantics, but it might break usages that\n>>> work okay today.\n>>>\n>>> * Allow such values only in single-byte server encodings. This\n>>> is a bit messy, but it wouldn't break any cases that are not\n>>> problematic already.\n>>>\n>>> * Continue to allow non-ASCII values, but change charin/charout,\n>>> char_text, etc so that the external representation is encoding-safe\n>>> (perhaps make it an octal or decimal number).\n>> Is #3 going to change the external representation only\n>> for non-ASCII values? If so, that seems OK.\n> Right, I envisioned that ASCII behaves the same but we'd use\n> a numeric representation for high-bit-set values. These\n> cases could be told apart fairly easily by charin(), since\n> the numeric representation would always be three digits.\n\n\nOK, this seems the most attractive. 
Can we also allow 2 hex digits?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 3 Dec 2021 15:11:11 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 12/3/21 14:42, Tom Lane wrote:\n>> Right, I envisioned that ASCII behaves the same but we'd use\n>> a numeric representation for high-bit-set values. These\n>> cases could be told apart fairly easily by charin(), since\n>> the numeric representation would always be three digits.\n\n> OK, this seems the most attractive. Can we also allow 2 hex digits?\n\nI think we should pick one base and stick to it. I don't mind\nhex if you have a preference for that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Dec 2021 15:13:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "On Fri, Dec 03, 2021 at 03:13:24PM -0500, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 12/3/21 14:42, Tom Lane wrote:\n> >> Right, I envisioned that ASCII behaves the same but we'd use\n> >> a numeric representation for high-bit-set values. These\n> >> cases could be told apart fairly easily by charin(), since\n> >> the numeric representation would always be three digits.\n> \n> > OK, this seems the most attractive. Can we also allow 2 hex digits?\n> \n> I think we should pick one base and stick to it. 
I don't mind\n> hex if you have a preference for that.\n> \n> \t\t\tregards, tom lane\n\n+1 for hex\n\nRegards,\nKen\n\n\n", "msg_date": "Fri, 3 Dec 2021 14:17:49 -0600", "msg_from": "Kenneth Marshall <ktm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "On 12/03/21 14:12, Tom Lane wrote:\n> This reminds me of something I've been intending to bring up, which\n> is that the \"char\" type is not very encoding-safe. charout() for\n> example just regurgitates the single byte as-is.\n\nI wonder if maybe what to do about that lies downstream of some other\nthought about encoding-related type properties.\n\nISTM we don't, at present, have a clear story for types that have an\nencoding (or repertoire) property that isn't one of (inapplicable,\nserver_encoding).\n\nAnd yet such things exist, and more such things could or should exist\n(NCHAR, healthier versions of xml or json, ...). \"char\" is an existing\nexample, because its current behavior is exactly as if it declared\n\"I am one byte of SQL_ASCII regardless of server setting\".\n\nWhich is no trouble at all when the server setting is also SQL_ASCII.\nBut what does it mean when the server setting and the inherent\nrepertoire property of a type can be different? The present answer\nisn't pretty.\n\nWhen can charout() be called? 
typoutput functions don't have any\n'internal' parameters, so nothing stops user code from calling them;\nI don't know how often that's done, and that's a complication.\nThe canonical place for it to be called is inside printtup(), when\nthe client driver has requested format 0 for that attribute.\n\nUp to that point, we could have known it was a type with SQL_ASCII\nwired in, but after charout() we have a cstring, and printtup treats\nthat type as having the server encoding, and it goes through encoding\nconversion from that to the client encoding in pq_sendcountedtext.\n\nIndeed, cstring behaves completely as if it is a type with the server\nencoding. If you send a cstring with format 1 rather than format 0,\nwhile it is no longer subject to the encoding conversion done in\npq_sendcountedtext, it will dutifully perform the same conversion\nin its own cstring_send. unknownsend is the same way.\n\nBut of course a \"char\" column in format 1 would never go through cstring;\nchar_send would be called, and just plop the byte in the buffer unchanged\n(which is the same operation as an encoding conversion from SQL_ASCII\nto anything).\n\nEver since I figured out I have to look at the send/recv functions\nfor a type to find out if it is encoding-dependent, I have to walk myself\nthrough those steps again every time I forget why that is. Having\nthe type's character-encoding details show up in its send/recv functions\nand not in its in/out functions never stops being counterintuitive to me.\nBut for server-encoding-dependent types, that's how it is: you don't\nsee it in the typoutput function, because on the format-0 path,\nthe transcoding happens in pq_sendcountedtext. 
But on the format-1 path,\nthe same transcoding happens, this time under the type's own control\nin its typsend function.\n\nThat was the second thing that surprised me: we have what we call\na text and a binary path, but for an encoding-dependent type, neither\none is a path where transcoding doesn't happen!\n\nThe difference is, the format-0 transcoding is applied blindly,\nin pq_sendcountedtext, with no surviving information about the data\ntype (which has become cstring by that point). In contrast, on the\nformat-1 path, the type's typsend is in control. In theory, that would\nallow type-aware conversion; a smarter xml_send could use &#n; form\nfor characters that won't go in the client encoding, while the blind\npq transcoding on format 0 would just botch the data.\n\nXML, in an ideal world, might live on disk in a form that cares nothing\nfor the server encoding, and be sent directly over the wire to a client\n(it declares what encoding it's in) and presented to the application\nover an XML-aware API that isn't hamstrung by the client's default\ntext encoding either.\n\nBut in the present world, we have somehow arrived at a setup where\nthere are only two paths that can take, and either one is a funnel\nthat can only be passed by data that survives both the client and\nthe server encoding.\n\nThe FE/BE docs have said \"Text has format code zero, binary has format\ncode one, and all other format codes are reserved for future definition\"\never since 7.4. Maybe the time will come for a format 2, where you say\n\"here's an encoding ID and some bytes\"?\n\nThis rambled on a bit far afield from \"what should charout do with\nnon-ASCII values?\". 
But honestly, either nobody is storing non-ASCII\nvalues in \"char\", and we could make any choice there and nothing would\nbreak, or somebody is doing that and their stuff would be broken by any\nchoice of change.\n\nSo, is the current \"char\" situation so urgent that it demands some\none-off solution be chosen for it, or could it be neglected with minimal\nrisk until someday we've defined what \"this datatype has encoding X that's\ndifferent from the server encoding\" means, and that takes care of it?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 3 Dec 2021 16:39:14 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 12/03/21 14:12, Tom Lane wrote:\n>> This reminds me of something I've been intending to bring up, which\n>> is that the \"char\" type is not very encoding-safe. charout() for\n>> example just regurgitates the single byte as-is.\n\n> I wonder if maybe what to do about that lies downstream of some other\n> thought about encoding-related type properties.\n\nAs you mentioned upthread, it's probably wrong to think of \"char\" as\ncharacter data at all. The catalogs use it as a poor man's enum type,\nand it's just for convenience that we assign readable ASCII codes for\nthe enum values of a given column. The only reason to think of it as\nencoding-dependent would be if you have ambitions to store a non-ASCII\ncharacter in a \"char\". But I think that's something we want to\nstrongly discourage, even if we don't prohibit it altogether. The\nwhole point of the type is to be one byte, so only in legacy encodings\ncan it possibly represent a non-ASCII character.\n\nSo I'm visualizing it as a uint8 that we happen to like to store\nASCII codes in, and that's what prompts the thought of using a\nnumeric representation for non-ASCII values. 
I think you're just\nin for pain if you want to consider such values as character data\nrather than numbers.\n\n> ... \"char\" is an existing\n> example, because its current behavior is exactly as if it declared\n> \"I am one byte of SQL_ASCII regardless of server setting\".\n\nBut it's not quite that. If we treated it as SQL_ASCII, we'd refuse\nto convert it to some other encoding unless the value passes encoding\nverification, which is exactly what charout() is not doing.\n\n> Indeed, cstring behaves completely as if it is a type with the server\n> encoding.\n\nYup, cstring is definitely presumed to be in the server's encoding.\n\n> So, is the current \"char\" situation so urgent that it demands some\n> one-off solution be chosen for it, or could it be neglected with minimal\n> risk until someday we've defined what \"this datatype has encoding X that's\n> different from the server encoding\" means, and that takes care of it?\n\nI'm not willing to leave it broken in the rather faint hope that\nsomeday there will be a more general solution, especially since\nI don't buy the premise that \"char\" ought to participate in any\nsuch solution.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 04 Dec 2021 11:34:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "On 12/04/21 11:34, Tom Lane wrote:\n> Chapman Flack <chap@anastigmatix.net> writes:\n>> \"I am one byte of SQL_ASCII regardless of server setting\".\n> \n> But it's not quite that. If we treated it as SQL_ASCII, we'd refuse\n> to convert it to some other encoding unless the value passes encoding\n> verification, which is exactly what charout() is not doing.\n\nAh, good point. 
I remembered noticing pg_do_encoding_conversion returning\nthe src pointer unchanged when SQL_ASCII is involved, but see that it does\nverify the dest_encoding when SQL_ASCII is the source.\n\n> encoding-dependent would be if you have ambitions to store a non-ASCII\n> character in a \"char\". But I think that's something we want to\n> strongly discourage, even if we don't prohibit it altogether. ...\n> So I'm visualizing it as a uint8 that we happen to like to store\n> ASCII codes in, and that's what prompts the thought of using a\n> numeric representation for non-ASCII values.\n\nI'm in substantial agreement, though I also see that it is nearly always\nset from a quoted literal, and tested against a quoted literal, and calls\nitself \"char\", so I guess I am thinking for consistency's sake it might\nbe better not to invent some all-new convention for its text representation,\nbut adopt something that's already familiar, like bytea escaped format.\nSo it would always look and act like a one-octet bytea. Maybe have charin\naccept either bytea-escaped or bytea-hex form too. (Or, never mind; when\nrestricted to one octet, bytea-hex and the \\xhh bytea-escape form are\nindistinguishable anyway.)\n\nThen for free we get the property that if somebody today uses 'ű' as\nan enum value, it might start appearing as '\\xfb' now in dumps, etc.,\nbut their existing CASE WHEN thing = 'ű' code doesn't stop working\n(as long as they haven't done something silly like change the encoding),\nand they have the flexibility to update it to WHEN thing = '\\xfb' as\ntime permits if they choose. 
If they don't, they accept the risk that\nby switching to another encoding in the future, they may either see\ntheir existing tests stop matching, or their existing literals fail\nto parse, but there won't be invalidly-encoded strings created.\n\n> Yup, cstring is definitely presumed to be in the server's encoding.\n\nWithout proposing to change it, I observe that by defining both cstring\nand unknown in this way (with the latter being expressly the type of\nany literal from the client destined for a type we don't know yet), we're\na bit painted into the corner as far as supporting types like NCHAR.\n(I suppose clients could be banned from sending such values as literals,\nand required to use extended form and bind them with a binary message.)\nIt's analogous to the way format-0 and format-1 both act as filters that\nno encoding-dependent data can squish through without surviving both the\nclient and the server encoding, even if it is of a type that's defined\nto be independent of either.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sat, 4 Dec 2021 13:07:50 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 12/04/21 11:34, Tom Lane wrote:\n>> So I'm visualizing it as a uint8 that we happen to like to store\n>> ASCII codes in, and that's what prompts the thought of using a\n>> numeric representation for non-ASCII values.\n\n> I'm in substantial agreement, though I also see that it is nearly always\n> set from a quoted literal, and tested against a quoted literal, and calls\n> itself \"char\", so I guess I am thinking for consistency's sake it might\n> be better not to invent some all-new convention for its text representation,\n> but adopt something that's already familiar, like bytea escaped format.\n> So it would always look and act like a one-octet bytea.\n\nHmm. 
I don't have any great objection to that ... except that\nI observe that bytea rejects a bare backslash:\n\nregression=# select '\\'::bytea;\nERROR: invalid input syntax for type bytea\n\nwhich would be incompatible with \"char\"'s existing behavior. But as\nlong as we don't do that, I'd be okay with having high-bit-set char\nvalues map to backslash-followed-by-three-octal-digits, which is\nwhat bytea escape format would produce.\n\n> Maybe have charin\n> accept either bytea-escaped or bytea-hex form too.\n\nThat seems like more complexity than is warranted, although I suppose\nthat allowing easy interchange between char and bytea is worth\nsomething.\n\nOne other point in this area is that charin does not currently object\nto multiple input characters, it just discards the extra:\n\nregression=# select 'foo'::\"char\";\n char \n------\n f\n(1 row)\n\nI think that was justified by analogy to\n\nregression=# select 'foo'::char(1);\n bpchar \n--------\n f\n(1 row)\n\nbut I think it would be a bad idea to preserve it once we introduce\nany sort of mapping, because it'd mask mistakes. So I'm envisioning\nthat charin should accept any single-byte string (including non-ASCII,\nfor backwards compatibility), but for multi-byte input throw an error\nif it doesn't look like whatever numeric-ish mapping we settle on.\n\n>> Yup, cstring is definitely presumed to be in the server's encoding.\n\n> Without proposing to change it, I observe that by defining both cstring\n> and unknown in this way (with the latter being expressly the type of\n> any literal from the client destined for a type we don't know yet), we're\n> a bit painted into the corner as far as supporting types like NCHAR.\n\nYeah, I'm not sure what to do about that. We convert the query text\nto server encoding before ever attempting to parse it, and I don't\nthink I want to contemplate trying to postpone that (... 
especially\nnot if the client encoding is an unsafe one like SJIS, as you\nprobably could not avoid SQL-injection hazards). So an in-line\nliteral in some other encoding is basically impossible, or at least\npointless. I'm inclined to think that NCHAR is another one in a\nrather long list of not-that-well-thought-out SQL features.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Dec 2021 12:01:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "On 12/05/21 12:01, Tom Lane wrote:\n> regression=# select '\\'::bytea;\n> ERROR: invalid input syntax for type bytea\n> \n> which would be incompatible with \"char\"'s existing behavior. But as\n> long as we don't do that, I'd be okay with having high-bit-set char\n> values map to backslash-followed-by-three-octal-digits, which is\n> what bytea escape format would produce.\n\nIs that a proposal to change nothing about the current treatment\nof values < 128, or just to avoid rejecting bare '\\'?\n\nIt seems defensible to relax the error treatment of bare backslash,\nas it isn't inherently ambiguous; it functions more as an \"are you sure\nyou weren't starting to write an escape sequence here?\" check. 
If it's\na backslash with nothing after it and you assume the user wrote what\nthey meant, then it's not hard to tell what they meant.\n\nIf there's a way to factor out and reuse the good parts of byteain,\nthat would mean '\\\\' would also be accepted to mean a backslash,\nand the \\r \\n \\t usual escapes would be accepted too, and \\ooo and\n\\xhh.\n\n>> Maybe have charin\n>> accept either bytea-escaped or bytea-hex form too.\n> \n> That seems like more complexity than is warranted\n\nI think it ends up being no more complexity at all, because a single\noctet in bytea-hex form looks like \\xhh, which is exactly what\na single \\xhh in bytea-escape form looks like.\n\nI suppose it's important to consider what comparisons like c = '\\'\nand c = '\\\\' mean, which should be just fine when the type analysis\nproduces char = char or char = unknown, but could be surprising if it\ndoesn't.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sun, 5 Dec 2021 13:14:28 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 12/05/21 12:01, Tom Lane wrote:\n>> regression=# select '\\'::bytea;\n>> ERROR: invalid input syntax for type bytea\n>> \n>> which would be incompatible with \"char\"'s existing behavior. But as\n>> long as we don't do that, I'd be okay with having high-bit-set char\n>> values map to backslash-followed-by-three-octal-digits, which is\n>> what bytea escape format would produce.\n\n> Is that a proposal to change nothing about the current treatment\n> of values < 128, or just to avoid rejecting bare '\\'?\n\nI intended to change nothing about charin's treatment of ASCII\ncharacters, nor anything about bytea's behavior. I don't think\nwe should relax the error checks in the latter. That does mean\nthat backslash becomes a problem for the idea of transparent\nconversion from char to bytea or vice versa. 
We could think\nabout emitting backslash as '\\\\' in charout, I suppose. I'm\nnot really convinced though that bytea compatibility is worth\nchanging a case that's non-problematic today.\n\n> If there's a way to factor out and reuse the good parts of byteain,\n> that would mean '\\\\' would also be accepted to mean a backslash,\n> and the \\r \\n \\t usual escapes would be accepted too, and \\ooo and\n> \\xhh.\n\nUh, what?\n\nregression=# select '\\n'::bytea;\nERROR: invalid input syntax for type bytea\n\nBut I doubt that sharing code here would be worth the trouble.\nThe vast majority of byteain is concerned with managing the\nstring length, which is a nonissue for charin.\n\n> I think it ends up being no more complexity at all, because a single\n> octet in bytea-hex form looks like \\xhh, which is exactly what\n> a single \\xhh in bytea-escape form looks like.\n\nI'm confused by this statement too. AFAIK the alternatives in\nbytea are \\xhh or \\ooo:\n\nregression=# select '\\xEE'::bytea;\n bytea \n-------\n \\xee\n(1 row)\n\nregression=# set bytea_output to escape;\nSET\nregression=# select '\\xEE'::bytea;\n bytea \n-------\n \\356\n(1 row)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Dec 2021 14:51:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "On 12/05/21 14:51, Tom Lane wrote:\n> Uh, what?\n> \n> regression=# select '\\n'::bytea;\n> ERROR: invalid input syntax for type bytea\n\nWow, I was completely out to lunch there somehow. Sorry. None of those\nother escaped forms are known to byteain, other than '\\\\' and ''''\naccording to table 8.7. I can't even explain why I thought that.\n\n> I'm confused by this statement too. 
AFAIK the alternatives in\n> bytea are \\xhh or \\ooo:\n\nHere I think I can at least tell where I went wrong; I saw both an\noctal and a hex column in table 8.7, which I saw located under the\n\"bytea escape format\" heading, and without testing carefully enough,\nI assumed it was telling me that either format would be recognized on\ninput, which would certainly be possible, but clearly I was carrying\nover too many assumptions from other escape formats where I'm used to\nthat being the case. If I wanted to prevent another reader making my\nexact mistake, I might re-title those two table columns to be\n\"In bytea escape format\" and \"In bytea hex format\" to make it more clear\nthe table is combining information for both formats.\n\nI'm sure I did test SELECT '\\x41'::bytea, but that proved nothing,\nbeing simply interpreted as the hex input format. I should have\ntried SELECT 'A\\x41'::bytea, and would have immediately seen it rejected.\n\nI've just looked at datatypes.sgml, where I was expecting to see that\ntable 8.7 actually falls outside of the sect2 for \"bytea escape format\",\nand that I had simply misinterpreted it because the structural nesting\nisn't obvious in the rendered HTML.\n\nBut what I found was that the table actually /is/ nested inside the\n\"bytea escape format\" section, and in the generated HTML it is within\nthe div for that section, and the table's own div has the ID\nDATATYPE-BINARY-SQLESC.\n\nThe change history there appears complex. The table already existed\nat the time of a2a8c7a, which made a \"bytea escape format\" sect2 out\nof the existing text that included the table, and added a separate\n\"bytea hex format\" sect2. 
But the table at that point showed only the\ninput and output representations for the escape format, didn't say\nanything about hex format, and wasn't touched in that commit.\n\nNine years later, f77de4b changed the values in the rightmost column\nto hex form, but only because that was then the \"output representation\"\ncolumn and the default output format had been changed to hex.\n\nFive months after that, f10a20e changed the heading of that column\nfrom \"output representation\" to \"hex representation\", probably because\nthe values in that column by then were hex. So it ended up as a table\nthat is structurally part of the \"bytea escape format\" section,\nwhose rightmost column shows a hex format, and therefore (ahem)\ncould suggest to a reader (who doesn't rush to psql and test it\nthoroughly) that a hex format is accepted there.\n\nStill, I could have avoided that if I had better tested my reading\nfirst.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sun, 5 Dec 2021 16:40:20 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "On 03.12.21 21:13, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 12/3/21 14:42, Tom Lane wrote:\n>>> Right, I envisioned that ASCII behaves the same but we'd use\n>>> a numeric representation for high-bit-set values. These\n>>> cases could be told apart fairly easily by charin(), since\n>>> the numeric representation would always be three digits.\n> \n>> OK, this seems the most attractive. Can we also allow 2 hex digits?\n> \n> I think we should pick one base and stick to it. I don't mind\n> hex if you have a preference for that.\n\nI think we could consider char to be a single-byte bytea and use the \nescape format of bytea for char. 
That way there is some precedent and \nwe don't add yet another encoding or escape format.\n\n\n", "msg_date": "Thu, 9 Dec 2021 14:27:20 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I think we could consider char to be a single-byte bytea and use the \n> escape format of bytea for char. That way there is some precedent and \n> we don't add yet another encoding or escape format.\n\nDo you want to take that as far as changing backslash to print\nas '\\\\' ?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Dec 2021 08:35:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> I think we could consider char to be a single-byte bytea and use the \n>> escape format of bytea for char. That way there is some precedent and \n>> we don't add yet another encoding or escape format.\n\n> Do you want to take that as far as changing backslash to print\n> as '\\\\' ?\n\nThis came up again today [1], so here's a concrete proposal.\nLet's use \\ooo for high-bit-set chars, but keep backslash as just\nbackslash (so it's only semi-compatible with bytea).\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/CAFM5RapGbBQm%2BdH%3D7K80HcvBvEWiV5Tm7N%3DNRaYURfm98YWc8A%40mail.gmail.com", "msg_date": "Wed, 13 Jul 2022 17:24:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "In a message of Friday, 3 December 2021, 22:12:10 MSK, Tom Lane \nwrote:\n> which\n> is that the \"char\" type is not very encoding-safe. 
charout() for\n> example just regurgitates the single byte as-is. I think we deemed\n> that okay the last time anyone thought about it, but that was when\n> single-byte encodings were the mainstream usage for non-ASCII data.\n> If you're using UTF8 or another multi-byte server encoding, it's\n> quite easy to get an invalidly-encoded string this way, which at\n> minimum is going to break dump/restore scenarios.\n\nAs I've mentioned in another thread, I was very surprised when I first saw \nthe \"char\" type name. And I was also very confused.\n\nThis leads me to an idea: maybe as we fix \"char\" behaviour, we should also \nchange its name to something more self-explanatory, like ascii_char or \nsomething like that.\nOr better, add ascii_char with the behaviour we need, update system tables with \nit, and keep \"char\" with the old behaviour in \"deprecated\" status in case \nsomebody is still using it. To give them time to change it to something more \ndecent: ascii_char or char(1).\n\nI've also talked to a guy who knows postgres history very well; he told me \nthat \"char\" existed at least from postgres version 3.1, which also had \"char16\", \nand in v.4 \"char2\", \"char4\", \"char8\" were added. But later on they were all \nremoved, and we have only \"char\".\n\nAlso, \"char\" has nothing in common with the SQL standard. Actually it looks very \nunnatural. Maybe it is time to get rid of it too, if we are changing this \npart of code...\n\n> I can think of at least three ways we might address this:\n> \n> * Forbid all non-ASCII values for type \"char\". This results in\n> simple and portable semantics, but it might break usages that\n> work okay today.\n> \n> * Allow such values only in single-byte server encodings. 
This\n> is a bit messy, but it wouldn't break any cases that are not\n> problematic already.\n> \n> * Continue to allow non-ASCII values, but change charin/charout,\n> char_text, etc so that the external representation is encoding-safe\n> (perhaps make it an octal or decimal number).\n\nThis would give us a solid option #1 for ascii_char, and deprecation and removal of \n\"char\" later on.\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Sat, 16 Jul 2022 19:43:07 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": false, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "Nikolay Shaplov <dhyan@nataraj.su> writes:\n> This leads me to an idea: maybe as we fix \"char\" behaviour, we should also \n> change its name to something more self-explanatory.\n\nI don't think this is going to happen. It's especially not going to\nhappen in the back branches. But in any case, what I'm looking for is\nthe minimum compatibility breakage needed to fix the encoding-unsafety\nproblem. Renaming the type goes far beyond that. It'd likely break\nsome client code that examines the system catalogs, for little gain.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 16 Jul 2022 13:09:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "I wrote:\n> This came up again today [1], so here's a concrete proposal.\n> Let's use \\ooo for high-bit-set chars, but keep backslash as just\n> backslash (so it's only semi-compatible with bytea).\n\nHearing no howls of protest, here's a fleshed out, potentially-committable\nversion. I added some regression test coverage for the modified code.\n(I also fixed an astonishingly obsolete comment about what the regular\nchar type does.) 
I looked at the SGML docs too, but I don't think there\nis anything to change there. The docs say \"single-byte internal type\"\nand are silent about \"char\" beyond that. I think that's exactly where\nwe want to be: any more detail would encourage people to use the type,\nwhich we don't really want. Possibly we could change the text to\n\"single-byte internal type, meant to hold ASCII characters\" but I'm\nnot sure that's better.\n\nThe next question is what to do with this. I propose to commit it into\nHEAD and v15 before next week's beta3 release. If we don't get a lot\nof pushback, we could consider back-patching further for the November\nreleases; but I'm hesitant to shove something like this into stable\nbranches with only a week's notice.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 31 Jul 2022 18:25:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "\nOn 2022-07-31 Su 18:25, Tom Lane wrote:\n> I wrote:\n>> This came up again today [1], so here's a concrete proposal.\n>> Let's use \\ooo for high-bit-set chars, but keep backslash as just\n>> backslash (so it's only semi-compatible with bytea).\n> Hearing no howls of protest, here's a fleshed out, potentially-committable\n> version. I added some regression test coverage for the modified code.\n> (I also fixed an astonishingly obsolete comment about what the regular\n> char type does.) I looked at the SGML docs too, but I don't think there\n> is anything to change there. The docs say \"single-byte internal type\"\n> and are silent about \"char\" beyond that. I think that's exactly where\n> we want to be: any more detail would encourage people to use the type,\n> which we don't really want. Possibly we could change the text to\n> \"single-byte internal type, meant to hold ASCII characters\" but I'm\n> not sure that's better.\n>\n> The next question is what to do with this. 
I propose to commit it into\n> HEAD and v15 before next week's beta3 release. If we don't get a lot\n> of pushback, we could consider back-patching further for the November\n> releases; but I'm hesitant to shove something like this into stable\n> branches with only a week's notice.\n>\n> \t\t\t\n\n\nMaybe we should add some words to the docs explicitly discouraging its\nuse in user tables.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 1 Aug 2022 16:11:01 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-07-31 Su 18:25, Tom Lane wrote:\n>> ... I looked at the SGML docs too, but I don't think there\n>> is anything to change there. The docs say \"single-byte internal type\"\n>> and are silent about \"char\" beyond that. I think that's exactly where\n>> we want to be: any more detail would encourage people to use the type,\n>> which we don't really want. Possibly we could change the text to\n>> \"single-byte internal type, meant to hold ASCII characters\" but I'm\n>> not sure that's better.\n\n> Maybe we should add some words to the docs explicitly discouraging its\n> use in user tables.\n\nHmm, I thought we already did --- but you're right, the intro para\nfor Table 8.5 only explicitly discourages use of \"name\". We\nprobably want similar wording for both types. Maybe like\n\n There are two other fixed-length character types in PostgreSQL, shown\n in Table 8.5. Both are used in the system catalogs and are not\n intended for use in user tables. The name type ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Aug 2022 16:16:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: The \"char\" type versus non-ASCII characters" } ]
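The convention settled on in the thread above (ASCII bytes, including backslash, pass through unchanged; high-bit-set bytes are rendered as a three-digit octal escape, which the input side can recognize unambiguously because it is always exactly three digits) can be sketched as follows. This is a hedged illustration in Python, not PostgreSQL's actual charout()/charin() C code, and the function names here are hypothetical:

```python
# Sketch of the proposed "char" text representation: plain ASCII is kept
# as-is (a lone backslash stays a lone backslash, unlike bytea), while any
# high-bit-set byte becomes "\ooo", so the output is always valid ASCII
# regardless of the server encoding.

def char_out(b: int) -> str:
    """Render one "char" byte as text."""
    if b < 0x80:
        return chr(b)          # ASCII, including backslash, unchanged
    return "\\%03o" % b        # e.g. 0xE9 -> "\351"

def char_in(s: str) -> int:
    """Parse the textual form back into a single byte."""
    # A backslash followed by exactly three octal digits is the escape form.
    if len(s) == 4 and s[0] == "\\" and all(c in "01234567" for c in s[1:]):
        return int(s[1:], 8)
    if len(s) == 1:
        return ord(s)
    raise ValueError('not a valid "char" literal: %r' % s)
```

Because the escape form is always a backslash plus exactly three octal digits, round-tripping stays unambiguous even though a bare backslash is left unescaped.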
[ { "msg_contents": "Hello,\n\nPlease find attached a patch for the daitch_mokotoff module.\n\nThis implements the Daitch-Mokotoff Soundex System, as described in\nhttps://www.avotaynu.com/soundex.htm\n\nThe module is used in production at Finance Norway.\n\nIn order to verify correctness, I have compared generated soundex codes\nwith corresponding results from the implementation by Stephen P. Morse\nat https://stevemorse.org/census/soundex.html\n\nWhere soundex codes differ, the daitch_mokotoff module has been found\nto be correct. The Morse implementation uses a few unofficial rules,\nand also has an error in the handling of adjacent identical code\ndigits. Please see daitch_mokotoff.c for further references and\ncomments.\n\nFor reference, detailed instructions for soundex code comparison are\nattached.\n\n\nBest regards\n\nDag Lem", "msg_date": "Fri, 03 Dec 2021 21:07:29 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "daitch_mokotoff module" }, { "msg_contents": "Please find attached an updated patch, with the following fixes:\n\n* Replaced remaining malloc/free with palloc/pfree.\n* Made \"make check\" pass.\n* Updated notes on other implementations.\n\nBest regards\n\nDag Lem\n\n\n\n\n\nDag Lem <dag@nimrod.no> writes:\n\n> Hello,\n>\n> Please find attached a patch for the daitch_mokotoff module.\n>\n> This implements the Daitch-Mokotoff Soundex System, as described in\n> https://www.avotaynu.com/soundex.htm\n>\n> The module is used in production at Finance Norway.\n>\n> In order to verify correctness, I have compared generated soundex codes\n> with corresponding results from the implementation by Stephen P. Morse\n> at https://stevemorse.org/census/soundex.html\n>\n> Where soundex codes differ, the daitch_mokotoff module has been found\n> to be correct. The Morse implementation uses a few unofficial rules,\n> and also has an error in the handling of adjacent identical code\n> digits. 
Please see daitch_mokotoff.c for further references and\n> comments.\n>\n> For reference, detailed instructions for soundex code comparison are\n> attached.\n>\n>\n> Best regards\n>\n> Dag Lem\n>", "msg_date": "Mon, 13 Dec 2021 14:38:22 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "On 12/13/21 14:38, Dag Lem wrote:\n> Please find attached an updated patch, with the following fixes:\n> \n> * Replaced remaining malloc/free with palloc/pfree.\n> * Made \"make check\" pass.\n> * Updated notes on other implementations.\n> \n\nThanks, looks interesting. A couple generic comments, based on a quick \ncode review.\n\n1) Can the extension be marked as trusted, just like fuzzystrmatch?\n\n2) The docs really need an explanation of what the extension is for, not \njust a link to fuzzystrmatch. Also, a couple examples would be helpful, \nI guess - similarly to fuzzystrmatch. The last line in the docs is \nannoyingly long.\n\n3) What's daitch_mokotov_header.pl for? I mean, it generates the header, \nbut when do we need to run it?\n\n4) It seems to require perl-open, which is a module we did not need \nuntil now. Not sure how well supported it is, but maybe we can use a \nmore standard module?\n\n5) Do we need to keep DM_MAIN? It seems to be meant for some kind of \ntesting, but our regression tests certainly don't need it (or the palloc \nmockup). I suggest to get rid of it.\n\n6) I really don't understand some of the comments in daitch_mokotov.sql, \nlike for example:\n\n-- testEncodeBasic\n-- Tests covered above are omitted.\n\nAlso, comments with names of Java methods seem pretty confusing. It'd be \nbetter to actually explain what rules are the tests checking.\n\n7) There are almost no comments in the .c file (ignoring the comment on \ntop). Short functions like initialize_node are probably fine without \none, but e.g. 
update_node would deserve one.\n\n8) Some of the lines are pretty long (e.g. the update_node signature is \nalmost 170 chars). That should be wrapped. Maybe try running pgindent on \nthe code, that'll show which parts need better formatting (so as not to \nget broken later).\n\n9) I'm sure there's a better way to get the number of valid chars than this:\n\n    for (i = 0, ilen = 0; (c = read_char(&str[i], &ilen)) && (c < 'A' || \nc > ']'); i += ilen)\n    {\n    }\n\nSay, a while loop or something?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Dec 2021 15:26:44 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "\nOn 12/13/21 09:26, Tomas Vondra wrote:\n> On 12/13/21 14:38, Dag Lem wrote:\n>> Please find attached an updated patch, with the following fixes:\n>>\n>> * Replaced remaining malloc/free with palloc/pfree.\n>> * Made \"make check\" pass.\n>> * Updated notes on other implementations.\n>>\n>\n> Thanks, looks interesting. A couple generic comments, based on a quick\n> code review.\n>\n> 1) Can the extension be marked as trusted, just like fuzzystrmatch?\n>\n> 2) The docs really need an explanation of what the extension is for,\n> not just a link to fuzzystrmatch. Also, a couple examples would be\n> helpful, I guess - similarly to fuzzystrmatch. The last line in the\n> docs is annoyingly long.\n\n\nIt's not clear to me why we need a new module for this. 
Wouldn't it be\nbetter just to add the new function to fuzzystrmatch?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 13 Dec 2021 10:05:35 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "On 12/13/21 16:05, Andrew Dunstan wrote:\n> \n> On 12/13/21 09:26, Tomas Vondra wrote:\n>> On 12/13/21 14:38, Dag Lem wrote:\n>>> Please find attached an updated patch, with the following fixes:\n>>>\n>>> * Replaced remaining malloc/free with palloc/pfree.\n>>> * Made \"make check\" pass.\n>>> * Updated notes on other implementations.\n>>>\n>>\n>> Thanks, looks interesting. A couple generic comments, based on a quick\n>> code review.\n>>\n>> 1) Can the extension be marked as trusted, just like fuzzystrmatch?\n>>\n>> 2) The docs really need an explanation of what the extension is for,\n>> not just a link to fuzzystrmatch. Also, a couple examples would be\n>> helpful, I guess - similarly to fuzzystrmatch. The last line in the\n>> docs is annoyingly long.\n> \n> \n> It's not clear to me why we need a new module for this. Wouldn't it be\n> better just to add the new function to fuzzystrmatch?\n> \n\nYeah, that's a valid point. I think we're quite conservative about \nadding more contrib modules, and adding a function to an existing one \nworks around a lot of that.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Dec 2021 17:18:07 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n\n[...]\n\n>\n> Thanks, looks interesting. 
A couple generic comments, based on a quick\n> code review.\n\nThank you for the constructive review!\n\n>\n> 1) Can the extension be marked as trusted, just like fuzzystrmatch?\n\nI have now moved the daitch_mokotoff function into the fuzzystrmatch\nmodule, as suggested by Andrew Dunstan.\n\n>\n> 2) The docs really need an explanation of what the extension is for,\n> not just a link to fuzzystrmatch. Also, a couple examples would be\n> helpful, I guess - similarly to fuzzystrmatch. The last line in the\n> docs is annoyingly long.\n\nPlease see the updated documentation for the fuzzystrmatch module.\n\n>\n> 3) What's daitch_mokotov_header.pl for? I mean, it generates the\n> header, but when do we need to run it?\n\nIt only has to be run if the soundex rules are changed. I have now\nmade the dependencies explicit in the fuzzystrmatch Makefile.\n\n>\n> 4) It seems to require perl-open, which is a module we did not need\n> until now. Not sure how well supported it is, but maybe we can use a\n> more standard module?\n\nI believe Perl I/O layers have been part of Perl core for two decades\nnow :-)\n\n>\n> 5) Do we need to keep DM_MAIN? It seems to be meant for some kind of\n> testing, but our regression tests certainly don't need it (or the\n> palloc mockup). I suggest to get rid of it.\n\nDone. BTW this was modeled after dmetaphone.c\n\n>\n> 6) I really don't understand some of the comments in\n> daitch_mokotov.sql, like for example:\n>\n> -- testEncodeBasic\n> -- Tests covered above are omitted.\n>\n> Also, comments with names of Java methods seem pretty confusing. It'd\n> be better to actually explain what rules are the tests checking.\n\nThe tests were copied from various web sites and implementations. I have\ncut down on the number of tests and made the comments more to the point.\n\n>\n> 7) There are almost no comments in the .c file (ignoring the comment\n> on top). Short functions like initialize_node are probably fine\n> without one, but e.g. 
update_node would deserve one.\n\nMore comments are added to both the .h and the .c file.\n\n>\n> 8) Some of the lines are pretty long (e.g. the update_node signature\n> is almost 170 chars). That should be wrapped. Maybe try running\n> pgindent on the code, that'll show which parts need better formatting\n> (so as not to get broken later).\n\nFixed. I did run pgindent earlier, however it didn't catch those long\nlines.\n\n>\n> 9) I'm sure there's a better way to get the number of valid chars than this:\n>\n> for (i = 0, ilen = 0; (c = read_char(&str[i], &ilen)) && (c < 'A' ||\n> c > ']'); i += ilen)\n> {\n> }\n>\n> Say, a while loop or something?\n\nThe code gets to the next encodable character, skipping any other\ncharacters. I have now added a comment which should hopefully make this\nclearer, and broken up the for loop for readability.\n\nPlease find attached the revised patch.\n\nBest regards\n\nDag Lem", "msg_date": "Tue, 14 Dec 2021 23:34:00 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Hello again,\n\nIt turns out that there actually exists an(other) implementation of\nthe Daitch-Mokotoff Soundex System which gets it right; the JOS\nSoundex Calculator at https://www.jewishgen.org/jos/jossound.htm\nOther implementations I have been able to find, like the one in Apache\nCommons Codec used in e.g. Elasticsearch, are far from correct.\n\nThe source code for the JOS Soundex Calculator is not available, as\nfar as I can tell, however I have run the complete list of 98412 last\nnames at\nhttps://raw.githubusercontent.com/philipperemy/name-dataset/master/names_dataset/v1/last_names.all.txt\nthrough the calculator, in order to have a good basis for comparison.\n\nThis revealed a few shortcomings in my implementation. In particular I\nhad to go back to the drawing board in order to handle the dual nature\nof \"J\" correctly. 
\"J\" can be either a vowel or a consonant in\nDaitch-Mokotoff soundex, and this complicates encoding of the\n*previous* character.\n\nI have also done a more thorough review and refactoring of the code,\nwhich should hopefully make things quite a bit more understandable to\nothers.\n\nThe changes are summarized as follows:\n\n* Returns NULL for input without any encodable characters.\n* Uses the same \"unoffical\" rules for \"UE\" as other implementations.\n* Correctly considers \"J\" as either a vowel or a consonant.\n* Corrected encoding for e.g. \"HANNMANN\".\n* Code refactoring and comments for readability.\n* Better examples in the documentation.\n\nThe implementation is now in correspondence with the JOS Soundex\nCalculator for the 98412 last names mentioned above, with only the\nfollowing exceptions:\n\nJOS: cedeño 430000 530000\nPG: cedeño 436000 536000\nJOS: sadab(khura) 437000\nPG: sadab(khura) 437590\n\nI hope this addition to the fuzzystrmatch extension module will prove\nto be useful to others as well!\n\nThis is my very first code contribution to PostgreSQL, and I would be\ngrateful for any advice on how to proceed in order to get the patch\naccepted.\n\nBest regards\n\nDag Lem", "msg_date": "Tue, 21 Dec 2021 22:41:18 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "On 2021-12-21 22:41:18 +0100, Dag Lem wrote:\n> This is my very first code contribution to PostgreSQL, and I would be\n> grateful for any advice on how to proceed in order to get the patch\n> accepted.\n\nCurrently the tests don't seem to pass on any platform:\nhttps://cirrus-ci.com/task/5941863248035840?logs=test_world#L572\nhttps://api.cirrus-ci.com/v1/artifact/task/5941863248035840/regress_diffs/contrib/fuzzystrmatch/regression.diffs\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 2 Jan 2022 13:31:50 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": 
"Re: daitch_mokotoff module" }, { "msg_contents": "On Mon, Jan 3, 2022 at 10:32 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-12-21 22:41:18 +0100, Dag Lem wrote:\n> > This is my very first code contribution to PostgreSQL, and I would be\n> > grateful for any advice on how to proceed in order to get the patch\n> > accepted.\n>\n> Currently the tests don't seem to pass on any platform:\n> https://cirrus-ci.com/task/5941863248035840?logs=test_world#L572\n> https://api.cirrus-ci.com/v1/artifact/task/5941863248035840/regress_diffs/contrib/fuzzystrmatch/regression.diffs\n\nErm, it looks like something weird is happening somewhere in cfbot's\npipeline, because Dag's patch says:\n\n+SELECT daitch_mokotoff('Straßburg');\n+ daitch_mokotoff\n+-----------------\n+ 294795\n+(1 row)\n\n... but it's failing like:\n\n SELECT daitch_mokotoff('Straßburg');\n daitch_mokotoff\n -----------------\n- 294795\n+ 297950\n (1 row)\n\nIt's possible that I broke cfbot when upgrading to Python 3 a few\nmonths back (ie encoding snafu when using the \"requests\" module to\npull patches down from the archives). I'll try to fix this soon.\n\n\n", "msg_date": "Mon, 3 Jan 2022 14:47:05 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Erm, it looks like something weird is happening somewhere in cfbot's\n> pipeline, because Dag's patch says:\n\n> +SELECT daitch_mokotoff('Straßburg');\n> + daitch_mokotoff\n> +-----------------\n> + 294795\n> +(1 row)\n\n... so, that test case is guaranteed to fail in non-UTF8 encodings,\nI suppose? 
I wonder what the LANG environment is in that cfbot\ninstance.\n\n(We do have methods for dealing with non-ASCII test cases, but\nI can't see that this patch is using any of them.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 02 Jan 2022 21:41:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> Erm, it looks like something weird is happening somewhere in cfbot's\n>> pipeline, because Dag's patch says:\n>\n>> +SELECT daitch_mokotoff('Straßburg');\n>> + daitch_mokotoff\n>> +-----------------\n>> + 294795\n>> +(1 row)\n>\n> ... so, that test case is guaranteed to fail in non-UTF8 encodings,\n> I suppose? I wonder what the LANG environment is in that cfbot\n> instance.\n>\n> (We do have methods for dealing with non-ASCII test cases, but\n> I can't see that this patch is using any of them.)\n>\n> \t\t\tregards, tom lane\n>\n\nI naively assumed that tests would be run in an UTF8 environment.\n\nRunning \"ack -l '[\\x80-\\xff]'\" in the contrib/ directory reveals that\ntwo other modules are using UTF8 characters in tests - citext and\nunaccent.\n\nThe citext tests seem to be commented out - \"Multibyte sanity\ntests. Uncomment to run.\"\n\nLooking into the unaccent module, I don't quite understand how it will\nwork with various encodings, since it doesn't seem to decode its input -\nwill it fail if run under anything but ASCII or UTF8?\n\nIn any case, I see that unaccent.sql starts as follows:\n\n\nCREATE EXTENSION unaccent;\n\n-- must have a UTF8 database\nSELECT getdatabaseencoding();\n\nSET client_encoding TO 'UTF8';\n\n\nWould doing the same thing in fuzzystrmatch.sql fix the problem with\nfailing tests? 
Should I prepare a new patch?\n\n\nBest regards\n\nDag Lem\n\n\n", "msg_date": "Mon, 03 Jan 2022 14:07:09 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Dag Lem <dag@nimrod.no> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> (We do have methods for dealing with non-ASCII test cases, but\n>> I can't see that this patch is using any of them.)\n\n> I naively assumed that tests would be run in an UTF8 environment.\n\nNope, not necessarily.\n\nOur current best practice for this is to separate out encoding-dependent\ntest cases into their own test script, and guard the script with an\ninitial test on database encoding. You can see an example in\nsrc/test/modules/test_regex/sql/test_regex_utf8.sql\nand the two associated expected-files. It's a good idea to also cover\nas much as you can with pure-ASCII test cases that will run regardless\nof the prevailing encoding.\n\n> Running \"ack -l '[\\x80-\\xff]'\" in the contrib/ directory reveals that\n> two other modules are using UTF8 characters in tests - citext and\n> unaccent.\n\nYeah, neither of those have been upgraded to said best practice.\n(If you feel like doing the legwork to improve that situation,\nthat'd be great.)\n\n> Looking into the unaccent module, I don't quite understand how it will\n> work with various encodings, since it doesn't seem to decode its input -\n> will it fail if run under anything but ASCII or UTF8?\n\nIts Makefile seems to be forcing the test database to use UTF8.\nI think this is a less-than-best-practice choice, because then\nwe have zero test coverage for other encodings; but it does\nprevent test failures.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Jan 2022 11:34:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Hi,\n\nOn 2022-01-02 21:41:53 -0500, Tom Lane wrote:\n> ... 
so, that test case is guaranteed to fail in non-UTF8 encodings,\n> I suppose? I wonder what the LANG environment is in that cfbot\n> instance.\n\nLANG=\"en_US.UTF-8\"\n\nBut it looks to me like the problem is in the commit cfbot creates, rather\nthan the test run itself:\nhttps://github.com/postgresql-cfbot/postgresql/commit/d5b4ec87cfd65dc08d26e1b789bd254405c90a66#diff-388d4bb360a3b24c425e29a85899315dc02f9c1dd9b9bc9aaa828876bdfea50aR56\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 Jan 2022 11:16:09 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n\n> Hi,\n>\n> On 2022-01-02 21:41:53 -0500, Tom Lane wrote:\n>> ... so, that test case is guaranteed to fail in non-UTF8 encodings,\n>> I suppose? I wonder what the LANG environment is in that cfbot\n>> instance.\n>\n> LANG=\"en_US.UTF-8\"\n>\n> But it looks to me like the problem is in the commit cfbot creates, rather\n> than the test run itself:\n> https://github.com/postgresql-cfbot/postgresql/commit/d5b4ec87cfd65dc08d26e1b789bd254405c90a66#diff-388d4bb360a3b24c425e29a85899315dc02f9c1dd9b9bc9aaa828876bdfea50aR56\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n\nI have now separated out the UTF8-dependent tests, hopefully according\nto the current best practice (based on src/test/modules/test_regex/ and\nhttps://www.postgresql.org/docs/14/regress-variant.html).\n\nHowever I guess this won't make any difference wrt. actually running the\ntests, as long as there seems to be an encoding problem in the cfbot\npipeline.\n\nIs there anything else I can do? 
Could perhaps fuzzystrmatch_utf8 simply\nbe commented out from the Makefile for the time being?\n\nBest regards\n\nDag Lem", "msg_date": "Tue, 04 Jan 2022 14:49:11 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "On Wed, Jan 5, 2022 at 2:49 AM Dag Lem <dag@nimrod.no> wrote:\n> However I guess this won't make any difference wrt. actually running the\n> tests, as long as there seems to be an encoding problem in the cfbot\n\nFixed -- I told it to pull down patches as binary, not text. Now it\nmakes commits that look healthier, and so far all the Unix systems\nhave survived CI:\n\nhttps://github.com/postgresql-cfbot/postgresql/commit/79700efc61d15c2414b8450a786951fa9308c07f\nhttp://cfbot.cputube.org/dag-lem.html\n\n\n", "msg_date": "Wed, 5 Jan 2022 09:18:56 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n\n> On Wed, Jan 5, 2022 at 2:49 AM Dag Lem <dag@nimrod.no> wrote:\n>> However I guess this won't make any difference wrt. actually running the\n>> tests, as long as there seems to be an encoding problem in the cfbot\n>\n> Fixed -- I told it to pull down patches as binary, not text. 
Now it\n> makes commits that look healthier, and so far all the Unix systems\n> have survived CI:\n>\n> https://github.com/postgresql-cfbot/postgresql/commit/79700efc61d15c2414b8450a786951fa9308c07f\n> http://cfbot.cputube.org/dag-lem.html\n>\n\nGreat!\n\nDag\n\n\n", "msg_date": "Wed, 05 Jan 2022 08:05:45 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Dag Lem <dag@nimrod.no> writes:\n>\n>> Running \"ack -l '[\\x80-\\xff]'\" in the contrib/ directory reveals that\n>> two other modules are using UTF8 characters in tests - citext and\n>> unaccent.\n>\n> Yeah, neither of those have been upgraded to said best practice.\n> (If you feel like doing the legwork to improve that situation,\n> that'd be great.)\n>\n\nPlease find attached a patch to run the previously commented-out\nUTF8-dependent tests for citext, according to best practice. For now I\ndon't dare to touch the unaccent module, which seems to be UTF8-only\nanyway.\n\n\nBest regards\n\nDag Lem", "msg_date": "Wed, 05 Jan 2022 12:57:21 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "[PATCH] Run UTF8-dependent tests for citext [Re: daitch_mokotoff\n module]" }, { "msg_contents": "Dag Lem <dag@nimrod.no> writes:\n> Please find attached a patch to run the previously commented-out\n> UTF8-dependent tests for citext, according to best practice. For now I\n> don't dare to touch the unaccent module, which seems to be UTF8-only\n> anyway.\n\nI tried this on a bunch of different locale settings and concluded that\nwe need to restrict the locale to avoid failures: it falls over with\nlocale C. With that, it passes on all UTF8 LANG settings on RHEL8\nand FreeBSD 12, and all except am_ET.UTF-8 on current macOS. I'm not\nsure what the deal is with am_ET, but macOS has a long and sad history\nof wonky UTF8 locales, so I was actually expecting worse. 
If the\nbuildfarm shows more problems, we can restrict it further --- I won't\nbe too upset if we end up restricting to just Linux systems, like\ncollate.linux.utf8. Anyway, pushed to see what happens.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Jan 2022 13:38:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Run UTF8-dependent tests for citext [Re: daitch_mokotoff\n module]" }, { "msg_contents": "Dag Lem <dag@nimrod.no> writes:\n\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>\n>> On Wed, Jan 5, 2022 at 2:49 AM Dag Lem <dag@nimrod.no> wrote:\n>>> However I guess this won't make any difference wrt. actually running the\n>>> tests, as long as there seems to be an encoding problem in the cfbot\n>>\n>> Fixed -- I told it to pull down patches as binary, not text. Now it\n>> makes commits that look healthier, and so far all the Unix systems\n>> have survived CI:\n>>\n>> https://github.com/postgresql-cfbot/postgresql/commit/79700efc61d15c2414b8450a786951fa9308c07f\n>> http://cfbot.cputube.org/dag-lem.html\n>>\n>\n> Great!\n>\n> Dag\n>\n>\n\nAfter this I did the mistake of including a patch for citext in this\nthread, which is now picked up by cfbot instead of the Daitch-Mokotoff\npatch.\n\nAttaching the original patch again in order to hopefully fix my mistake.\n\nBest regards\n\nDag Lem", "msg_date": "Wed, 05 Jan 2022 21:08:54 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Hi,\n\nJust some minor adjustments to the patch:\n\n* Removed call to locale-dependent toupper()\n* Cleaned up input normalization\n\nI have been asked to sign up to review a commitfest patch or patches -\nunfortunately I've been ill with COVID-19 and it's not until now that\nI feel well enough to have a look.\n\nJulien: I'll have a look at https://commitfest.postgresql.org/36/3468/\nas you suggested (https://commitfest.postgresql.org/36/3379/ 
seems to\nhave been reviewed now).\n\nIf there are other suggestions for a patch or patches to review for\nsomeone new to PostgreSQL internals, I'd be grateful for that.\n\n\nBest regards\n\nDag Lem", "msg_date": "Thu, 03 Feb 2022 15:27:32 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Hi Dag\n\nOn Thu, 3 Feb 2022 at 23:27, Dag Lem <dag@nimrod.no> wrote:\n>\n> Hi,\n>\n> Just some minor adjustments to the patch:\n>\n> * Removed call to locale-dependent toupper()\n> * Cleaned up input normalization\n\nThis patch was marked as "Waiting on Author" in the CommitFest entry [1]\nbut I see you provided an updated version which hasn't received any feedback,\nso I've moved this to the next CommitFest [2] and set it to "Needs Review".\n\n[1] https://commitfest.postgresql.org/40/3451/\n[2] https://commitfest.postgresql.org/41/3451/\n\n> I have been asked to sign up to review a commitfest patch or patches -\n> unfortunately I've been ill with COVID-19 and it's not until now that\n> I feel well enough to have a look.\n>\n> Julien: I'll have a look at https://commitfest.postgresql.org/36/3468/\n> as you suggested (https://commitfest.postgresql.org/36/3379/ seems to\n> have been reviewed now).\n>\n> If there are other suggestions for a patch or patches to review for\n> someone new to PostgreSQL internals, I'd be grateful for that.\n\nI see you provided some feedback on https://commitfest.postgresql.org/36/3468/,\nthough the patch seems to have not been accepted (but not conclusively rejected\neither). If you still have the chance to review another patch (or more) it would\nbe much appreciated, as there's quite a few piling up. 
Things like documentation\nor small improvements to client applications are always a good place to start.\nReviews can be provided at any time, there's no need to wait for the next\nCommitFest.\n\nRegards\n\nIan Barwick\n\n\n", "msg_date": "Wed, 30 Nov 2022 21:56:29 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Hi Ian,\n\nIan Lawrence Barwick <barwick@gmail.com> writes:\n\n> Hi Dag\n>\n> On Thu, 3 Feb 2022 at 23:27, Dag Lem <dag@nimrod.no> wrote:\n>>\n>> Hi,\n>>\n>> Just some minor adjustments to the patch:\n>>\n>> * Removed call to locale-dependent toupper()\n>> * Cleaned up input normalization\n>\n> This patch was marked as "Waiting on Author" in the CommitFest entry [1]\n> but I see you provided an updated version which hasn't received any feedback,\n> so I've moved this to the next CommitFest [2] and set it to "Needs Review".\n>\n> [1] https://commitfest.postgresql.org/40/3451/\n> [2] https://commitfest.postgresql.org/41/3451/\n>\n>> I have been asked to sign up to review a commitfest patch or patches -\n>> unfortunately I've been ill with COVID-19 and it's not until now that\n>> I feel well enough to have a look.\n>>\n>> Julien: I'll have a look at https://commitfest.postgresql.org/36/3468/\n>> as you suggested (https://commitfest.postgresql.org/36/3379/ seems to\n>> have been reviewed now).\n>>\n>> If there are other suggestions for a patch or patches to review for\n>> someone new to PostgreSQL internals, I'd be grateful for that.\n>\n> I see you provided some feedback on https://commitfest.postgresql.org/36/3468/,\n> though the patch seems to have not been accepted (but not conclusively rejected\n> either). If you still have the chance to review another patch (or more) it would\n> be much appreciated, as there's quite a few piling up. 
Things like documentation\n> or small improvements to client applications are always a good place to start.\n> Reviews can be provided at any time, there's no need to wait for the next\n> CommitFest.\n>\n\nOK, I'll try to find another patch to review.\n\nRegards\n\nDag Lem\n\n\n", "msg_date": "Mon, 05 Dec 2022 14:24:49 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Hi,\n\nOn 2022-02-03 15:27:32 +0100, Dag Lem wrote:\n> Just some minor adjustments to the patch:\n> \n> * Removed call to locale-dependent toupper()\n> * Cleaned up input normalization\n\nThis patch currently fails in cfbot, likely because meson.build needs to be\nadjusted (this didn't exist at the time you submitted this version of the\npatch):\n\n[23:43:34.796] contrib/fuzzystrmatch/meson.build:18:0: ERROR: File fuzzystrmatch--1.1.sql does not exist.\n\n\n> -DATA = fuzzystrmatch--1.1.sql fuzzystrmatch--1.0--1.1.sql\n> +DATA = fuzzystrmatch--1.2.sql fuzzystrmatch--1.1--1.2.sql fuzzystrmatch--1.0--1.1.sql\n> PGFILEDESC = \"fuzzystrmatch - similarities and distance between strings\"\n\n\nThe patch seems to remove fuzzystrmatch--1.1.sql - I suggest not doing\nthat. 
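\n(For illustration only, the conventional shape would be roughly the following sketch — here fuzzystrmatch--1.1--1.2.sql is assumed to be the new upgrade script, with default_version bumped to '1.2' in the control file; this is not the exact patch content:)\n\n```makefile\n# Keep the existing 1.1 base script and ship only an upgrade script\n# on top of it; CREATE EXTENSION then runs fuzzystrmatch--1.1.sql\n# followed by fuzzystrmatch--1.1--1.2.sql to reach the default version.\nDATA = fuzzystrmatch--1.1.sql fuzzystrmatch--1.0--1.1.sql \\\n\tfuzzystrmatch--1.1--1.2.sql\n```\n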
In recent years our approach has been to just keep the \"base version\" of\nthe upgrade script, with extension creation running through the upgrade\nscripts.\n\n> \n> +\n> +#include \"daitch_mokotoff.h\"\n> +\n> +#include \"postgres.h\"\n\nPostgres policy is that the include of \"postgres.h\" has to be the first\ninclude in every .c file.\n\n\n> +#include \"utils/builtins.h\"\n> +#include \"mb/pg_wchar.h\"\n> +\n> +#include <string.h>\n> +\n> +/* Internal C implementation */\n> +static char *_daitch_mokotoff(char *word, char *soundex, size_t n);\n> +\n> +\n> +PG_FUNCTION_INFO_V1(daitch_mokotoff);\n> +Datum\n> +daitch_mokotoff(PG_FUNCTION_ARGS)\n> +{\n> +\ttext\t *arg = PG_GETARG_TEXT_PP(0);\n> +\tchar\t *string,\n> +\t\t\t *tmp_soundex;\n> +\ttext\t *soundex;\n> +\n> +\t/*\n> +\t * The maximum theoretical soundex size is several KB, however in practice\n> +\t * anything but contrived synthetic inputs will yield a soundex size of\n> +\t * less than 100 bytes. We thus allocate and free a temporary work buffer,\n> +\t * and return only the actual soundex result.\n> +\t */\n> +\tstring = pg_server_to_any(text_to_cstring(arg), VARSIZE_ANY_EXHDR(arg), PG_UTF8);\n> +\ttmp_soundex = palloc(DM_MAX_SOUNDEX_CHARS);\n\nSeems that just using StringInfo to hold the soundex output would work better\nthan a static allocation?\n\n\n> +\tif (!_daitch_mokotoff(string, tmp_soundex, DM_MAX_SOUNDEX_CHARS))\n\nWe imo shouldn't introduce new functions starting with _.\n\n\n> +/* Mark soundex code tree node as leaf. */\n> +static void\n> +set_leaf(dm_leaves leaves_next, int *num_leaves_next, dm_node * node)\n> +{\n> +\tif (!node->is_leaf)\n> +\t{\n> +\t\tnode->is_leaf = 1;\n> +\t\tleaves_next[(*num_leaves_next)++] = node;\n> +\t}\n> +}\n> +\n> +\n> +/* Find next node corresponding to code digit, or create a new node. 
*/\n> +static dm_node * find_or_create_node(dm_nodes nodes, int *num_nodes,\n> +\t\t\t\t\t\t\t\t\t dm_node * node, char code_digit)\n\nPG code style is to have a line break between a function defintion's return\ntype and the function name - like you actually do above.\n\n\n\n\n> +/* Mapping from ISO8859-1 to upper-case ASCII */\n> +static const char tr_iso8859_1_to_ascii_upper[] =\n> +/*\n> +\"`abcdefghijklmnopqrstuvwxyz{|}~ ¡¢£¤¥¦§¨©ª«¬ ®¯°±²³´µ¶·¸¹º»¼½¾¿ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿ\"\n> +*/\n> +\"`ABCDEFGHIJKLMNOPQRSTUVWXYZ{|}~ ! ?AAAAAAECEEEEIIIIDNOOOOO*OUUUUYDSAAAAAAECEEEEIIIIDNOOOOO/OUUUUYDY\";\n> +\n> +static char\n> +iso8859_1_to_ascii_upper(unsigned char c)\n> +{\n> +\treturn c >= 0x60 ? tr_iso8859_1_to_ascii_upper[c - 0x60] : c;\n> +}\n> +\n> +\n> +/* Convert an UTF-8 character to ISO-8859-1.\n> + * Unconvertable characters are returned as '?'.\n> + * NB! Beware of the domain specific conversion of Ą, Ę, and Ţ/Ț.\n> + */\n> +static char\n> +utf8_to_iso8859_1(char *str, int *ix)\n\nIt seems decidedly not great to have custom encoding conversion routines in a\ncontrib module. Is there any way we can avoid this?\n\n\n> +/* Generate all Daitch-Mokotoff soundex codes for word, separated by space. */\n> +static char *\n> +_daitch_mokotoff(char *word, char *soundex, size_t n)\n> +{\n> +\tint\t\t\ti = 0,\n> +\t\t\t\tj;\n> +\tint\t\t\tletter_no = 0;\n> +\tint\t\t\tix_leaves = 0;\n> +\tint\t\t\tnum_nodes = 0,\n> +\t\t\t\tnum_leaves = 0;\n> +\tdm_codes *codes,\n> +\t\t\t *next_codes;\n> +\tdm_node *nodes;\n> +\tdm_leaves *leaves;\n> +\n> +\t/* First letter. */\n> +\tif (!(codes = read_letter(word, &i)))\n> +\t{\n> +\t\t/* No encodable character in input. */\n> +\t\treturn NULL;\n> +\t}\n> +\n> +\t/* Allocate memory for node tree. */\n> +\tnodes = palloc(sizeof(dm_nodes));\n> +\tleaves = palloc(2 * sizeof(dm_leaves));\n\nSo this allocates the worst case memory usage, is that right? That's quite a\nbit of memory. 
Shouldn't nodes be allocated dynamically?\n\nInstead of carefully freeing individual memory allocations, I think it be\nbetter to create a temporary memory context, allocate the necessary nodes etc\non demand, and destroy the temporary memory context at the end.\n\n\n> +/* Codes for letter sequence at start of name, before a vowel, and any other. */\n> +static dm_codes codes_0_1_X[2] =\n\nAny reason these aren't all const?\n\n\nIt's not clear to me where the intended line between the .h and .c file is.\n\n\n> +print <<EOF;\n> +/*\n> + * Types and lookup tables for Daitch-Mokotoff Soundex\n> + *\n\nIf we generate the code, why is the generated header included in the commit?\n\n> +/* Letter in input sequence */\n> +struct dm_letter\n> +{\n> +\tchar\t\tletter;\t\t\t/* Present letter in sequence */\n> +\tstruct dm_letter *letters;\t/* List of possible successive letters */\n> +\tdm_codes *codes;\t\t\t/* Code sequence(s) for complete sequence */\n> +};\n> +\n> +/* Node in soundex code tree */\n> +struct dm_node\n> +{\n> +\tint\t\t\tsoundex_length; /* Length of generated soundex code */\n> +\tchar\t\tsoundex[DM_MAX_CODE_DIGITS + 1];\t/* Soundex code */\n> +\tint\t\t\tis_leaf;\t\t/* Candidate for complete soundex code */\n> +\tint\t\t\tlast_update;\t/* Letter number for last update of node */\n> +\tchar\t\tcode_digit;\t\t/* Last code digit, 0 - 9 */\n> +\n> +\t/*\n> +\t * One or two alternate code digits leading to this node. If there are two\n> +\t * digits, one of them is always an 'X'. Repeated code digits and 'X' lead\n> +\t * back to the same node.\n> +\t */\n> +\tchar\t\tprev_code_digits[2];\n> +\t/* One or two alternate code digits moving forward. */\n> +\tchar\t\tnext_code_digits[2];\n> +\t/* ORed together code index(es) used to reach current node. */\n> +\tint\t\t\tprev_code_index;\n> +\tint\t\t\tnext_code_index;\n> +\t/* Nodes branching out from this node. 
*/\n> +\tstruct dm_node *next_nodes[DM_MAX_ALTERNATE_CODES + 1];\n> +};\n> +\n> +typedef struct dm_letter dm_letter;\n> +typedef struct dm_node dm_node;\n\nWhy is all this in the generated header? It needs DM_MAX_ALTERNATE_CODES etc,\nbut it seems that the structs could just be defined in the .c file.\n\n\n> +# Table adapted from https://www.jewishgen.org/InfoFiles/Soundex.html\n\nWhat does \"adapted\" mean here? And what's the path to updating the data?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 7 Dec 2022 10:56:56 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Hi Andres,\n\nThank you for your detailed and constructive review!\n\nI have made a conscientious effort to address all the issues you point\nout, please see comments below.\n\nAndres Freund <andres@anarazel.de> writes:\n\n> Hi,\n>\n> On 2022-02-03 15:27:32 +0100, Dag Lem wrote:\n\n[...]\n\n> [23:43:34.796] contrib/fuzzystrmatch/meson.build:18:0: ERROR: File\n> fuzzystrmatch--1.1.sql does not exist.\n>\n>\n>> -DATA = fuzzystrmatch--1.1.sql fuzzystrmatch--1.0--1.1.sql\n>> +DATA = fuzzystrmatch--1.2.sql fuzzystrmatch--1.1--1.2.sql\n>> fuzzystrmatch--1.0--1.1.sql\n>> PGFILEDESC = \"fuzzystrmatch - similarities and distance between strings\"\n>\n>\n> The patch seems to remove fuzzystrmatch--1.1.sql - I suggest not doing\n> that. 
In recent years our approach has been to just keep the \"base version\" of\n> the upgrade script, with extension creation running through the upgrade\n> scripts.\n>\n\nOK, I have now kept fuzzystrmatch--1.1.sql, and omitted\nfuzzystrmatch--1.2.sql\n\nBoth the Makefile and meson.build are updated to handle the new files,\nincluding the generated header.\n\n>> \n>> +\n>> +#include \"daitch_mokotoff.h\"\n>> +\n>> +#include \"postgres.h\"\n>\n> Postgres policy is that the include of \"postgres.h\" has to be the first\n> include in every .c file.\n>\n>\n\nOK, fixed.\n\n>> +#include \"utils/builtins.h\"\n>> +#include \"mb/pg_wchar.h\"\n>> +\n>> +#include <string.h>\n>> +\n>> +/* Internal C implementation */\n>> +static char *_daitch_mokotoff(char *word, char *soundex, size_t n);\n>> +\n>> +\n>> +PG_FUNCTION_INFO_V1(daitch_mokotoff);\n>> +Datum\n>> +daitch_mokotoff(PG_FUNCTION_ARGS)\n>> +{\n>> +\ttext\t *arg = PG_GETARG_TEXT_PP(0);\n>> +\tchar\t *string,\n>> +\t\t\t *tmp_soundex;\n>> +\ttext\t *soundex;\n>> +\n>> +\t/*\n>> + * The maximum theoretical soundex size is several KB, however in\n>> practice\n>> +\t * anything but contrived synthetic inputs will yield a soundex size of\n>> + * less than 100 bytes. We thus allocate and free a temporary work\n>> buffer,\n>> +\t * and return only the actual soundex result.\n>> +\t */\n>> + string = pg_server_to_any(text_to_cstring(arg),\n>> VARSIZE_ANY_EXHDR(arg), PG_UTF8);\n>> +\ttmp_soundex = palloc(DM_MAX_SOUNDEX_CHARS);\n>\n> Seems that just using StringInfo to hold the soundex output would work better\n> than a static allocation?\n>\n\nOK, fixed.\n\n>\n>> +\tif (!_daitch_mokotoff(string, tmp_soundex, DM_MAX_SOUNDEX_CHARS))\n>\n> We imo shouldn't introduce new functions starting with _.\n>\n\nOK, fixed. Note that I just followed the existing pattern in\nfuzzystrmatch.c there.\n\n[...]\n\n>> +/* Find next node corresponding to code digit, or create a new node. 
*/\n>> +static dm_node * find_or_create_node(dm_nodes nodes, int *num_nodes,\n>> + dm_node * node, char code_digit)\n>\n> PG code style is to have a line break between a function defintion's return\n> type and the function name - like you actually do above.\n>\n\nOK, fixed. Both pgindent and I must have missed that particular\nfunction.\n\n>> +/* Mapping from ISO8859-1 to upper-case ASCII */\n>> +static const char tr_iso8859_1_to_ascii_upper[] =\n>> +/*\n>> +\"`abcdefghijklmnopqrstuvwxyz{|}~ ¡¢£¤¥¦§¨©ª«¬\n>> ®¯°±²³´µ¶·¸¹º»¼½¾¿ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿ\"\n>> +*/\n>> +\"`ABCDEFGHIJKLMNOPQRSTUVWXYZ{|}~ !\n>> ?AAAAAAECEEEEIIIIDNOOOOO*OUUUUYDSAAAAAAECEEEEIIIIDNOOOOO/OUUUUYDY\";\n>> +\n>> +static char\n>> +iso8859_1_to_ascii_upper(unsigned char c)\n>> +{\n>> +\treturn c >= 0x60 ? tr_iso8859_1_to_ascii_upper[c - 0x60] : c;\n>> +}\n>> +\n>> +\n>> +/* Convert an UTF-8 character to ISO-8859-1.\n>> + * Unconvertable characters are returned as '?'.\n>> + * NB! Beware of the domain specific conversion of Ą, Ę, and Ţ/Ț.\n>> + */\n>> +static char\n>> +utf8_to_iso8859_1(char *str, int *ix)\n>\n> It seems decidedly not great to have custom encoding conversion routines in a\n> contrib module. Is there any way we can avoid this?\n>\n\nI have now replaced the custom UTF-8 decode with calls to\nutf8_to_unicode and pg_utf_mblen, and simplified the subsequent\nconversion to ASCII. Hopefully this makes the conversion code more\npalatable.\n\nI don't see how the conversion to ASCII could be substantially\nsimplified further. The conversion maps lowercase and 8 bit ISO8859-1\ncharacters to ASCII via uppercasing, removal of accents, and discarding\nof special characters. In addition to that, it maps (the non-ISO8859-1)\nĄ, Ę, and Ţ/Ț from the coding chart to [, \\, and ]. 
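\n(A standalone, compilable sketch of those two steps, for illustration — utf8_mblen() and utf8_to_codepoint() below are simplified stand-ins of my own for the server's pg_utf_mblen() and utf8_to_unicode(), and the table rows cover only the plain Latin-1 accented range, without the domain-specific Ą/Ę/Ţ/Ț handling:)\n\n```c\n#include <assert.h>\n#include <stdio.h>\n\n/* Simplified stand-in for pg_utf_mblen(): byte length of the UTF-8\n * sequence starting at s (well-formed input assumed). */\nstatic int\nutf8_mblen(const unsigned char *s)\n{\n\tif ((*s & 0x80) == 0)\n\t\treturn 1;\n\tif ((*s & 0xe0) == 0xc0)\n\t\treturn 2;\n\tif ((*s & 0xf0) == 0xe0)\n\t\treturn 3;\n\treturn 4;\n}\n\n/* Simplified stand-in for utf8_to_unicode(): decode one UTF-8\n * sequence to a Unicode code point. */\nstatic unsigned int\nutf8_to_codepoint(const unsigned char *s)\n{\n\tstatic const unsigned char first_byte_mask[] = {0x7f, 0x1f, 0x0f, 0x07};\n\tint\t\t\tlen = utf8_mblen(s);\n\tunsigned int cp = s[0] & first_byte_mask[len - 1];\n\tint\t\t\ti;\n\n\tfor (i = 1; i < len; i++)\n\t\tcp = (cp << 6) | (s[i] & 0x3f);\n\treturn cp;\n}\n\n/* The 0xc0..0xff rows of the ISO8859-1 -> upper-case ASCII table. */\nstatic const char tr_latin1_upper[] =\n"AAAAAAECEEEEIIIIDNOOOOO*OUUUUYDS"\t/* 0xc0 .. 0xdf */\n"AAAAAAECEEEEIIIIDNOOOOO/OUUUUYDY";\t/* 0xe0 .. 0xff */\n\n/* Reduced mapping sketch; note: no locale-dependent toupper(). */\nstatic char\nto_ascii_upper(unsigned int cp)\n{\n\tif (cp >= 'a' && cp <= 'z')\n\t\treturn (char) (cp - ('a' - 'A'));\n\tif (cp >= 0xc0 && cp <= 0xff)\n\t\treturn tr_latin1_upper[cp - 0xc0];\n\treturn (char) cp;\n}\n\nint\nmain(void)\n{\n\tconst unsigned char e_acute[] = "\xc3\xa9";\t/* "é", U+00E9 */\n\n\tassert(utf8_mblen(e_acute) == 2);\n\tassert(utf8_to_codepoint(e_acute) == 0x00e9);\n\tassert(to_ascii_upper(utf8_to_codepoint(e_acute)) == 'E');\n\tprintf("%c\\n", to_ascii_upper(utf8_to_codepoint(e_acute)));\n\treturn 0;\n}\n```\n\nThe two table rows are copied from the patch's mapping; everything else here is illustrative only — the module itself uses the real routines from mb/pg_wchar.h.\n\n\n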
After this, a simple\nO(1) table lookup can be used to retrieve the soundex code tree for a\nletter sequence.\n\n>\n>> +/* Generate all Daitch-Mokotoff soundex codes for word, separated\n>> by space. */\n>> +static char *\n>> +_daitch_mokotoff(char *word, char *soundex, size_t n)\n>> +{\n>> +\tint\t\t\ti = 0,\n>> +\t\t\t\tj;\n>> +\tint\t\t\tletter_no = 0;\n>> +\tint\t\t\tix_leaves = 0;\n>> +\tint\t\t\tnum_nodes = 0,\n>> +\t\t\t\tnum_leaves = 0;\n>> +\tdm_codes *codes,\n>> +\t\t\t *next_codes;\n>> +\tdm_node *nodes;\n>> +\tdm_leaves *leaves;\n>> +\n>> +\t/* First letter. */\n>> +\tif (!(codes = read_letter(word, &i)))\n>> +\t{\n>> +\t\t/* No encodable character in input. */\n>> +\t\treturn NULL;\n>> +\t}\n>> +\n>> +\t/* Allocate memory for node tree. */\n>> +\tnodes = palloc(sizeof(dm_nodes));\n>> +\tleaves = palloc(2 * sizeof(dm_leaves));\n>\n> So this allocates the worst case memory usage, is that right? That's quite a\n> bit of memory. Shouldn't nodes be allocated dynamically?\n>\n> Instead of carefully freeing individual memory allocations, I think it be\n> better to create a temporary memory context, allocate the necessary nodes etc\n> on demand, and destroy the temporary memory context at the end.\n>\n\nYes, the one-time allocation was intended to cover the worst case memory\nusage. This was done to avoid any performance hit incurred by allocating\nand deallocating memory for each new node in the soundex code tree.\n\nI have rewritten the bookkeeping of nodes in the soundex code tree to use\nlinked lists, and have followed your advice to use a temporary memory\ncontext for allocation.\n\nI also made an optimization by excluding completed soundex nodes from\nthe next letter iteration. This seems to offset any allocation overhead\n- the performance is more or less the same as before.\n\n>\n>> +/* Codes for letter sequence at start of name, before a vowel, and\n>> any other. 
*/\n>> +static dm_codes codes_0_1_X[2] =\n>\n> Any reason these aren't all const?\n>\n\nNo reason why they can't be :-) They are now changed to const.\n\n>\n> It's not clear to me where the intended line between the .h and .c file is.\n>\n>\n>> +print <<EOF;\n>> +/*\n>> + * Types and lookup tables for Daitch-Mokotoff Soundex\n>> + *\n>\n> If we generate the code, why is the generated header included in the commit?\n>\n\nThis was mainly to have the content available for reference without\nhaving to generate the header. I have removed the file - after the\nchange you suggest below, the struct declarations are available in the\n.c file anyway.\n\n>> +/* Letter in input sequence */\n>> +struct dm_letter\n>> +{\n>> +\tchar\t\tletter;\t\t\t/* Present letter in sequence */\n>> + struct dm_letter *letters; /* List of possible successive letters\n>> */\n>> + dm_codes *codes; /* Code sequence(s) for complete sequence */\n>> +};\n>> +\n>> +/* Node in soundex code tree */\n>> +struct dm_node\n>> +{\n>> + int soundex_length; /* Length of generated soundex code */\n>> + char soundex[DM_MAX_CODE_DIGITS + 1]; /* Soundex code */\n>> + int is_leaf; /* Candidate for complete soundex code */\n>> + int last_update; /* Letter number for last update of node */\n>> +\tchar\t\tcode_digit;\t\t/* Last code digit, 0 - 9 */\n>> +\n>> +\t/*\n>> + * One or two alternate code digits leading to this node. If there\n>> are two\n>> + * digits, one of them is always an 'X'. Repeated code digits and\n>> X' lead\n>> +\t * back to the same node.\n>> +\t */\n>> +\tchar\t\tprev_code_digits[2];\n>> +\t/* One or two alternate code digits moving forward. */\n>> +\tchar\t\tnext_code_digits[2];\n>> +\t/* ORed together code index(es) used to reach current node. */\n>> +\tint\t\t\tprev_code_index;\n>> +\tint\t\t\tnext_code_index;\n>> +\t/* Nodes branching out from this node. 
*/\n>> +\tstruct dm_node *next_nodes[DM_MAX_ALTERNATE_CODES + 1];\n>> +};\n>> +\n>> +typedef struct dm_letter dm_letter;\n>> +typedef struct dm_node dm_node;\n>\n> Why is all this in the generated header? It needs DM_MAX_ALTERNATE_CODES etc,\n> but it seems that the structs could just be defined in the .c file.\n>\n\nTo accomplish this, I had to rearrange the code a bit. The structs are\nnow all declared in daitch_mokotoff.c, and the generated header is\nincluded in between them.\n\n>\n>> +# Table adapted from https://www.jewishgen.org/InfoFiles/Soundex.html\n>\n> What does \"adapted\" mean here? And what's the path to updating the data?\n>\n\nIt means that the original soundex coding chart, which is referred to,\nhas been converted to a machine-readable format, with a few\nmodifications. These modifications are outlined further down in the\ncomments. I expanded a bit on the comments, hopefully making things\nclearer.\n\nI don't think there is much to be said about updating the data - that's\nsimply a question of modifying the table and regenerating the header\nfile. It goes without saying that making changes requires an\nunderstanding of the soundex coding, which is explained in the\nreference. 
However if anything should be unclear, please do point out\nwhat should be explained better.\n\n> Greetings,\n>\n> Andres Freund\n>\n\nThanks again, and a Merry Christmas to you and all the other PostgreSQL\nhackers!\n\n\nBest regards,\n\nDag Lem", "msg_date": "Wed, 21 Dec 2022 10:26:05 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "I noticed that the Meson builds failed in Cfbot, the updated patch adds\na missing \"include_directories\" line to meson.build.\n\nBest regards\n\nDag Lem", "msg_date": "Thu, 22 Dec 2022 13:00:43 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Dag Lem <dag@nimrod.no> writes:\n\n> I noticed that the Meson builds failed in Cfbot, the updated patch adds\n> a missing \"include_directories\" line to meson.build.\n>\n\nThis should hopefully fix the last Cfbot failures, by exclusion of\ndaitch_mokotoff.h from headerscheck and cpluspluscheck.\n\nBest regards\n\nDag Lem", "msg_date": "Thu, 22 Dec 2022 14:27:54 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Dag Lem <dag@nimrod.no> writes:\n\n> Hi Ian,\n>\n> Ian Lawrence Barwick <barwick@gmail.com> writes:\n>\n\n[...]\n\n>> I see you provided some feedback on\n>> https://commitfest.postgresql.org/36/3468/,\n>> though the patch seems to have not been accepted (but not\n>> conclusively rejected\n>> either). If you still have the chance to review another patch (or\n>> more) it would\n>> be much appreciated, as there's quite a few piling up. 
Things like\n>> documentation\n>> or small improvements to client applications are always a good place to start.\n>> Reviews can be provided at any time, there's no need to wait for the next\n>> CommitFest.\n>>\n>\n> OK, I'll try to find another patch to review.\n>\n\nI have scanned through all the patches in Commitfest 2023-01 with status\n\"Needs review\", and it is difficult to find something which I can\nmeaningfully review.\n\nThe only thing I felt qualified to comment (or nit-pick?) on was\nhttps://commitfest.postgresql.org/41/4071/\n\nIf something else should turn up which could be reviewed by someone\nwithout intimate knowledge of PostgreSQL internals, then don't hesitate\nto ask.\n\nAs for the Daitch-Mokotoff patch, the review by Andres Freund was very\nhelpful in order to improve the extension and to make it more idiomatic\n- hopefully it is now a bit closer to being included.\n\n\nBest regards\n\nDag Lem\n\n\n", "msg_date": "Thu, 22 Dec 2022 15:02:54 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "On 2022-12-22 14:27:54 +0100, Dag Lem wrote:\n> This should hopefully fix the last Cfbot failures, by exclusion of\n> daitch_mokotoff.h from headerscheck and cpluspluscheck.\n\nBtw, you can do the same tests as cfbot in your own repo by enabling CI\nin a github repo. 
See src/tools/ci/README\n\n\n", "msg_date": "Fri, 23 Dec 2022 03:22:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "On 2022-Dec-22, Dag Lem wrote:\n\n> This should hopefully fix the last Cfbot failures, by exclusion of\n> daitch_mokotoff.h from headerscheck and cpluspluscheck.\n\nHmm, maybe it'd be better to move the typedefs to the .h file instead.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n"To think that the spectre we see is illusory does not strip it of terror;\nit only adds the new terror of madness" (Perelandra, C.S. Lewis)\n\n\n", "msg_date": "Fri, 23 Dec 2022 13:59:13 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "I wonder why you have it return the multiple alternative codes as a\nspace-separated string. Maybe an array would be more appropriate. Even\non your documented example use, the first thing you do is split it on\nspaces.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 23 Dec 2022 14:07:47 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "On 2022-Dec-23, Alvaro Herrera wrote:\n\n> I wonder why you have it return the multiple alternative codes as a\n> space-separated string. Maybe an array would be more appropriate. Even\n> on your documented example use, the first thing you do is split it on\n> spaces.\n\nI tried downloading a list of surnames from here\nhttps://www.bibliotecadenombres.com/apellidos/apellidos-espanoles/\npasted that in a text file and \\copy'ed it into a table. 
Then I ran\nthis query\n\nselect string_agg(a, ' ' order by a), daitch_mokotoff(a), count(*)\nfrom apellidos\ngroup by daitch_mokotoff(a)\norder by count(*) desc;\n\nso I have a first entry like this\n\nstring_agg │ Balasco Balles Belasco Belles Blas Blasco Fallas Feliz Palos Pelaez Plaza Valles Vallez Velasco Velez Veliz Veloz Villas\ndaitch_mokotoff │ 784000\ncount │ 18\n\nbut then I have a bunch of other entries with the same code 784000 as\nalternative codes,\n\nstring_agg │ Velazco\ndaitch_mokotoff │ 784500 784000\ncount │ 1\n\nstring_agg │ Palacio\ndaitch_mokotoff │ 785000 784000\ncount │ 1\n\nI suppose I need to group these together somehow, and it would make more\nsense to do that if the values were arrays.\n\n\nIf I scroll a bit further down and choose, say, 794000 (a relatively\npopular one), then I have this\n\nstring_agg │ Barraza Barrios Barros Bras Ferraz Frias Frisco Parras Peraza Peres Perez Porras Varas Veras\ndaitch_mokotoff │ 794000\ncount │ 14\n\nand looking for that code in the result I also get these three\n\nstring_agg │ Barca Barco Parco\ndaitch_mokotoff │ 795000 794000\ncount │ 3\n\nstring_agg │ Borja\ndaitch_mokotoff │ 790000 794000\ncount │ 1\n\nstring_agg │ Borjas\ndaitch_mokotoff │ 794000 794400\ncount │ 1\n\nand then I see that I should also search for possible matches in codes\n795000, 790000 and 794400, so that gives me\n\nstring_agg │ Baria Baro Barrio Barro Berra Borra Feria Para Parra Perea Vera\ndaitch_mokotoff │ 790000\ncount │ 11\n\nstring_agg │ Barriga Borge Borrego Burgo Fraga\ndaitch_mokotoff │ 795000\ncount │ 5\n\nstring_agg │ Borjas\ndaitch_mokotoff │ 794000 794400\ncount │ 1\n\nwhich look closely related (compare \"Veras\" in the first to \"Vera\" in\nthe later set. 
If you ignore that pseudo-match, you're likely to miss\npossible family relationships.)\n\n\nI suppose if I were a genealogy researcher, I would be helped by having\neach of these codes behave as a separate unit, rather than me having to\nsplit the string into the several possible contained values.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Industry suffers from the managerial dogma that for the sake of stability\nand continuity, the company should be independent of the competence of\nindividual employees.\" (E. Dijkstra)\n\n\n", "msg_date": "Fri, 23 Dec 2022 14:25:59 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Dec-22, Dag Lem wrote:\n>> This should hopefully fix the last Cfbot failures, by exclusion of\n>> daitch_mokotoff.h from headerscheck and cpluspluscheck.\n\n> Hmm, maybe it'd be better to move the typedefs to the .h file instead.\n\nIndeed, that sounds like exactly the wrong way to fix such a problem.\nThe bar for excluding stuff from headerscheck needs to be very high.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Dec 2022 09:57:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n\n> On 2022-12-22 14:27:54 +0100, Dag Lem wrote:\n>> This should hopefully fix the last Cfbot failures, by exclusion of\n>> daitch_mokotoff.h from headerscheck and cpluspluscheck.\n>\n> Btw, you can do the same tests as cfbot in your own repo by enabling CI\n> in a github repo. 
See src/tools/ci/README\n>\n\nOK, thanks, I've set it up now.\n\nBest regards,\n\nDag Lem\n\n\n", "msg_date": "Fri, 23 Dec 2022 21:34:09 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> On 2022-Dec-22, Dag Lem wrote:\n>>> This should hopefully fix the last Cfbot failures, by exclusion of\n>>> daitch_mokotoff.h from headerscheck and cpluspluscheck.\n>\n>> Hmm, maybe it'd be better to move the typedefs to the .h file instead.\n>\n> Indeed, that sounds like exactly the wrong way to fix such a problem.\n> The bar for excluding stuff from headerscheck needs to be very high.\n>\n\nOK, I've moved enough declarations back to the generated header file\nagain so as to avoid excluding it from headerscheck and cpluspluscheck.\n\nBest regards,\n\nDag Lem", "msg_date": "Fri, 23 Dec 2022 21:55:11 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> I wonder why do you have it return the multiple alternative codes as a\n> space-separated string. Maybe an array would be more appropriate. Even\n> on your documented example use, the first thing you do is split it on\n> spaces.\n\nIn the example, the *input* is split on whitespace, the returned soundex\ncodes are not. The splitting of the input is done in order to code each\nword separately. One of the stated rules of the Daitch-Mokotoff Soundex\nCoding is that \"When a name consists of more than one word, it is coded\nas if one word\", and this may not always be desired. 
See\nhttps://www.avotaynu.com/soundex.htm or\nhttps://www.jewishgen.org/InfoFiles/soundex.html for the rules.\n\nThe intended use for the Daitch-Mokotoff soundex, as for any other\nsoundex algorithm, is to index names (or words) on some representation\nof sound, so that alike sounding names with different spellings will\nmatch.\n\nIn PostgreSQL, the Daitch-Mokotoff Soundex and Full Text Search make\nfor a powerful combination to match alike sounding names. Full Text\nSearch (as any other free text search engine) works with documents, and\nthus the Daitch-Mokotoff Soundex implementation produces documents\n(words separated by space). As stated in the documentation: \"Any\nalternative soundex codes are separated by space, which makes the\nreturned text suited for use in Full Text Search\".\n\nBest regards,\n\nDag Lem\n\n\n", "msg_date": "Fri, 23 Dec 2022 22:44:26 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> On 2022-Dec-23, Alvaro Herrera wrote:\n>\n\n[...]\n\n> I tried downloading a list of surnames from here\n> https://www.bibliotecadenombres.com/apellidos/apellidos-espanoles/\n> pasted that in a text file and \\\copy'ed it into a table. 
Then I ran\n> this query\n>\n> select string_agg(a, ' ' order by a), daitch_mokotoff(a), count(*)\n> from apellidos\n> group by daitch_mokotoff(a)\n> order by count(*) desc;\n>\n> so I have a first entry like this\n>\n> string_agg │ Balasco Balles Belasco Belles Blas Blasco Fallas Feliz\n> Palos Pelaez Plaza Valles Vallez Velasco Velez Veliz Veloz Villas\n> daitch_mokotoff │ 784000\n> count │ 18\n>\n> but then I have a bunch of other entries with the same code 784000 as\n> alternative codes,\n>\n> string_agg │ Velazco\n> daitch_mokotoff │ 784500 784000\n> count │ 1\n>\n> string_agg │ Palacio\n> daitch_mokotoff │ 785000 784000\n> count │ 1\n>\n> I suppose I need to group these together somehow, and it would make more\n> sense to do that if the values were arrays.\n>\n>\n> If I scroll a bit further down and choose, say, 794000 (a relatively\n> popular one), then I have this\n>\n> string_agg │ Barraza Barrios Barros Bras Ferraz Frias Frisco Parras\n> Peraza Peres Perez Porras Varas Veras\n> daitch_mokotoff │ 794000\n> count │ 14\n>\n> and looking for that code in the result I also get these three\n>\n> string_agg │ Barca Barco Parco\n> daitch_mokotoff │ 795000 794000\n> count │ 3\n>\n> string_agg │ Borja\n> daitch_mokotoff │ 790000 794000\n> count │ 1\n>\n> string_agg │ Borjas\n> daitch_mokotoff │ 794000 794400\n> count │ 1\n>\n> and then I see that I should also search for possible matches in codes\n> 795000, 790000 and 794400, so that gives me\n>\n> string_agg │ Baria Baro Barrio Barro Berra Borra Feria Para Parra\n> Perea Vera\n> daitch_mokotoff │ 790000\n> count │ 11\n>\n> string_agg │ Barriga Borge Borrego Burgo Fraga\n> daitch_mokotoff │ 795000\n> count │ 5\n>\n> string_agg │ Borjas\n> daitch_mokotoff │ 794000 794400\n> count │ 1\n>\n> which look closely related (compare \"Veras\" in the first to \"Vera\" in\n> the later set. 
If you ignore that pseudo-match, you're likely to miss\n> possible family relationships.)\n>\n>\n> I suppose if I were a genealogy researcher, I would be helped by having\n> each of these codes behave as a separate unit, rather than me having to\n> split the string into the several possible contained values.\n\nIt seems to me like you're trying to use soundex coding for something it\nwas never designed for.\n\nAs stated in my previous mail, soundex algorithms are designed to index\nnames on some representation of sound, so that alike sounding names with\ndifferent spellings will match, and as shown in the documentation\nexample, that is exactly what the implementation facilitates.\n\nDaitch-Mokotoff Soundex indexes alternative sounds for the same name,\nhowever if I understand correctly, you want to index names by single\nsounds, linking all alike sounding names to the same soundex code. I\nfail to see how that is useful - if you want to find matches for a name,\nyou simply match against all indexed names. If you only consider one\nsound, you won't find all names that match.\n\nIn any case, as explained in the documentation, the implementation is\nintended to be a companion to Full Text Search, thus text is the natural\nrepresentation for the soundex codes.\n\nBTW Vera 790000 does not match Veras 794000, because they don't sound\nthe same (up to the maximum soundex code length).\n\nBest regards\n\nDag Lem\n\n\n", "msg_date": "Fri, 23 Dec 2022 23:48:02 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Dag Lem <dag@nimrod.no> writes:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>\n>> On 2022-Dec-23, Alvaro Herrera wrote:\n>>\n>\n> [...]\n>\n>> I tried downloading a list of surnames from here\n>> https://www.bibliotecadenombres.com/apellidos/apellidos-espanoles/\n>> pasted that in a text file and \\copy'ed it into a table. 
Then I ran\n>> this query\n>>\n>> select string_agg(a, ' ' order by a), daitch_mokotoff(a), count(*)\n>> from apellidos\n>> group by daitch_mokotoff(a)\n>> order by count(*) desc;\n>>\n>> so I have a first entry like this\n>>\n>> string_agg │ Balasco Balles Belasco Belles Blas Blasco Fallas Feliz\n>> Palos Pelaez Plaza Valles Vallez Velasco Velez Veliz Veloz Villas\n>> daitch_mokotoff │ 784000\n>> count │ 18\n>>\n>> but then I have a bunch of other entries with the same code 784000 as\n>> alternative codes,\n>>\n>> string_agg │ Velazco\n>> daitch_mokotoff │ 784500 784000\n>> count │ 1\n>>\n>> string_agg │ Palacio\n>> daitch_mokotoff │ 785000 784000\n>> count │ 1\n>>\n>> I suppose I need to group these together somehow, and it would make more\n>> sense to do that if the values were arrays.\n>>\n>>\n>> If I scroll a bit further down and choose, say, 794000 (a relatively\n>> popular one), then I have this\n>>\n>> string_agg │ Barraza Barrios Barros Bras Ferraz Frias Frisco Parras\n>> Peraza Peres Perez Porras Varas Veras\n>> daitch_mokotoff │ 794000\n>> count │ 14\n>>\n>> and looking for that code in the result I also get these three\n>>\n>> string_agg │ Barca Barco Parco\n>> daitch_mokotoff │ 795000 794000\n>> count │ 3\n>>\n>> string_agg │ Borja\n>> daitch_mokotoff │ 790000 794000\n>> count │ 1\n>>\n>> string_agg │ Borjas\n>> daitch_mokotoff │ 794000 794400\n>> count │ 1\n>>\n>> and then I see that I should also search for possible matches in codes\n>> 795000, 790000 and 794400, so that gives me\n>>\n>> string_agg │ Baria Baro Barrio Barro Berra Borra Feria Para Parra\n>> Perea Vera\n>> daitch_mokotoff │ 790000\n>> count │ 11\n>>\n>> string_agg │ Barriga Borge Borrego Burgo Fraga\n>> daitch_mokotoff │ 795000\n>> count │ 5\n>>\n>> string_agg │ Borjas\n>> daitch_mokotoff │ 794000 794400\n>> count │ 1\n>>\n>> which look closely related (compare \"Veras\" in the first to \"Vera\" in\n>> the later set. 
If you ignore that pseudo-match, you're likely to miss\n>> possible family relationships.)\n>>\n>>\n>> I suppose if I were a genealogy researcher, I would be helped by having\n>> each of these codes behave as a separate unit, rather than me having to\n>> split the string into the several possible contained values.\n>\n> It seems to me like you're trying to use soundex coding for something it\n> was never designed for.\n>\n> As stated in my previous mail, soundex algorithms are designed to index\n> names on some representation of sound, so that alike sounding names with\n> different spellings will match, and as shown in the documentation\n> example, that is exactly what the implementation facilitates.\n>\n> Daitch-Mokotoff Soundex indexes alternative sounds for the same name,\n> however if I understand correctly, you want to index names by single\n> sounds, linking all alike sounding names to the same soundex code. I\n> fail to see how that is useful - if you want to find matches for a name,\n> you simply match against all indexed names. If you only consider one\n> sound, you won't find all names that match.\n>\n> In any case, as explained in the documentation, the implementation is\n> intended to be a companion to Full Text Search, thus text is the natural\n> representation for the soundex codes.\n>\n> BTW Vera 790000 does not match Veras 794000, because they don't sound\n> the same (up to the maximum soundex code length).\n>\n\nI've been sleeping on this, and perhaps the normal use case can just as\nwell (or better) be covered by the \"@>\" array operator? I originally\nimplemented similar functionality using another soundex algorithm more\nthan a decade ago, and either arrays couldn't be GIN indexed back then,\nor I simply missed it. 
I'll have to get back to this - now it's\nChristmas!\n\nMerry Christmas!\n\nBest regards,\n\nDag Lem\n\n\n", "msg_date": "Sat, 24 Dec 2022 08:13:07 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Hello\n\nOn 2022-Dec-23, Dag Lem wrote:\n\n> It seems to me like you're trying to use soundex coding for something it\n> was never designed for.\n\nI'm not trying to use it for anything, actually. I'm just reading the\npages your patch links to, to try and understand how this algorithm can\nbe best implemented in Postgres.\n\nSo I got to this page\nhttps://www.avotaynu.com/soundex.htm\nwhich explains that Daitch figured that it would be best if a letter\nthat can have two possible encodings would be encoded in both ways:\n\n> 5. If a combination of letters could have two possible sounds, then it\n> is coded in both manners. For example, the letters ch can have a soft\n> sound such as in Chicago or a hard sound as in Christmas.\n\nwhich I understand as meaning that a single name returns two possible\nencodings, which is why these three names\n Barca Barco Parco\nhave two possible encodings\n 795000 and 794000\nwhich is what your algorithm returns.\n\nIn fact, using the word Christmas we do get alternative codes for the first\nletter (either 4 or 5), precisely as in Daitch's example:\n\n=# select daitch_mokotoff('christmas');\n daitch_mokotoff \n─────────────────\n 594364 494364\n(1 fila)\n\nand if we take out the ambiguous 'ch', we get a single one:\n\n=# select daitch_mokotoff('ristmas');\n daitch_mokotoff \n─────────────────\n 943640\n(1 fila)\n\nand if we add another 'ch', we get the codes for each possibility at each\nposition of the ambiguous 'ch':\n\n=# select daitch_mokotoff('christmach');\n daitch_mokotoff \n─────────────────────────────\n 594365 594364 494365 494364\n(1 fila)\n\n\nSo, yes, I'm proposing that we returns those as array elements and that\n@> is used to match 
them.\n\n> Daitch-Mokotoff Soundex indexes alternative sounds for the same name,\n> however if I understand correctly, you want to index names by single\n> sounds, linking all alike sounding names to the same soundex code. I\n> fail to see how that is useful - if you want to find matches for a name,\n> you simply match against all indexed names. If you only consider one\n> sound, you won't find all names that match.\n\nHmm, I think we're saying the same thing, but from opposite points of\nview. No, I want each name to return multiple codes, but that those\nmultiple codes can be treated as a multiple-value array of codes, rather\nthan as a single string of space-separated codes.\n\n> In any case, as explained in the documentation, the implementation is\n> intended to be a companion to Full Text Search, thus text is the natural\n> representation for the soundex codes.\n\nHmm, I don't agree with this point. The numbers are representations of\nthe strings, but they don't necessarily have to be strings themselves.\n\n\n> BTW Vera 790000 does not match Veras 794000, because they don't sound\n> the same (up to the maximum soundex code length).\n\nNo, and maybe that's okay because they have different codes. But they\nare both similar, in Daitch-Mokotoff, to Borja, which has two codes,\n790000 and 794000. (Any Spanish speaker will readily tell you that\nneither Vera nor Veras are similar in any way to Borja, but D-M has\nchosen to say that each of them matches one of Borjas' codes. So they\n*are* related, even though indirectly, and as a genealogist you *may* be\ninterested in getting a match for a person called Vera when looking for\nrelatives to a person called Veras. And, as a Spanish speaker, that\nwould make a lot of sense to me.)\n\n\nNow, it's true that I've chosen to use Spanish names for my silly little\nexperiment. 
Maybe this isn't terribly useful as a practical example,\nbecause this algorithm seems to have been designed for Jew surnames and\nperhaps not many (or not any) Jews had Spanish surnames. I don't know;\nI'm not a Jew myself (though Noah Gordon tells the tale of a Spanish Jew\ncalled Josep Álvarez in his book \"The Winemaker\", so I guess it's not\nimpossible). Anyway, I suspect if you repeat the experiment with names\nof other origins, you'll find pretty much the same results apply there,\nand that is the whole reason D-M returns multiple codes and not just\none.\n\n\nMerry Christmas :-)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sun, 25 Dec 2022 14:01:36 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> Hello\n>\n> On 2022-Dec-23, Dag Lem wrote:\n>\n\n[...]\n\n> So, yes, I'm proposing that we returns those as array elements and that\n> @> is used to match them.\n\nLooking into the array operators I guess that to match such arrays\ndirectly one would actually use && (overlaps) rather than @> (contains),\nbut I digress.\n\nThe function is changed to return an array of soundex codes - I hope it\nis now to your liking :-)\n\nI also improved on the documentation example (using Full Text Search).\nAFAIK you can't make general queries like that using arrays, however in\nany case I must admit that text arrays seem like more natural building\nblocks than space delimited text here.\n\nSearch to perform\n\nis the best match for Daitch-Mokotoff, however\n\n, but\nin any case I've changed it into return arrays now. I hope it is to your\nliking.\n\n>\n>> Daitch-Mokotoff Soundex indexes alternative sounds for the same name,\n>> however if I understand correctly, you want to index names by single\n>> sounds, linking all alike sounding names to the same soundex code. 
I\n>> fail to see how that is useful - if you want to find matches for a name,\n>> you simply match against all indexed names. If you only consider one\n>> sound, you won't find all names that match.\n>\n> Hmm, I think we're saying the same thing, but from opposite points of\n> view. No, I want each name to return multiple codes, but that those\n> multiple codes can be treated as a multiple-value array of codes, rather\n> than as a single string of space-separated codes.\n>\n>> In any case, as explained in the documentation, the implementation is\n>> intended to be a companion to Full Text Search, thus text is the natural\n>> representation for the soundex codes.\n>\n> Hmm, I don't agree with this point. The numbers are representations of\n> the strings, but they don't necessarily have to be strings themselves.\n>\n>\n>> BTW Vera 790000 does not match Veras 794000, because they don't sound\n>> the same (up to the maximum soundex code length).\n>\n> No, and maybe that's okay because they have different codes. But they\n> are both similar, in Daitch-Mokotoff, to Borja, which has two codes,\n> 790000 and 794000. (Any Spanish speaker will readily tell you that\n> neither Vera nor Veras are similar in any way to Borja, but D-M has\n> chosen to say that each of them matches one of Borjas' codes. So they\n> *are* related, even though indirectly, and as a genealogist you *may* be\n> interested in getting a match for a person called Vera when looking for\n> relatives to a person called Veras. And, as a Spanish speaker, that\n> would make a lot of sense to me.)\n>\n>\n> Now, it's true that I've chosen to use Spanish names for my silly little\n> experiment. Maybe this isn't terribly useful as a practical example,\n> because this algorithm seems to have been designed for Jew surnames and\n> perhaps not many (or not any) Jews had Spanish surnames. 
I don't know;\n> I'm not a Jew myself (though Noah Gordon tells the tale of a Spanish Jew\n> called Josep Álvarez in his book \"The Winemaker\", so I guess it's not\n> impossible). Anyway, I suspect if you repeat the experiment with names\n> of other origins, you'll find pretty much the same results apply there,\n> and that is the whole reason D-M returns multiple codes and not just\n> one.\n>\n>\n> Merry Christmas :-)\n\n-- \nDag\n\n\n", "msg_date": "Mon, 02 Jan 2023 21:43:01 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Sorry about the latest unfinished email - don't know what key\ncombination I managed to hit there.\n\nAlvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> Hello\n>\n> On 2022-Dec-23, Dag Lem wrote:\n>\n\n[...]\n\n>\n> So, yes, I'm proposing that we returns those as array elements and that\n> @> is used to match them.\n>\n\nLooking into the array operators I guess that to match such arrays\ndirectly one would actually use && (overlaps) rather than @> (contains),\nbut I digress.\n\nThe function is changed to return an array of soundex codes - I hope it\nis now to your liking :-)\n\nI also improved on the documentation example (using Full Text Search).\nAFAIK you can't make general queries like that using arrays, however in\nany case I must admit that text arrays seem like more natural building\nblocks than space delimited text here.\n\n[...]\n\n>> BTW Vera 790000 does not match Veras 794000, because they don't sound\n>> the same (up to the maximum soundex code length).\n>\n> No, and maybe that's okay because they have different codes. But they\n> are both similar, in Daitch-Mokotoff, to Borja, which has two codes,\n> 790000 and 794000. (Any Spanish speaker will readily tell you that\n> neither Vera nor Veras are similar in any way to Borja, but D-M has\n> chosen to say that each of them matches one of Borjas' codes. 
So they\n> *are* related, even though indirectly, and as a genealogist you *may* be\n> interested in getting a match for a person called Vera when looking for\n> relatives to a person called Veras. And, as a Spanish speaker, that\n> would make a lot of sense to me.)\n\nIt is what it is - we can't call it Daitch-Mokotoff Soundex while\nimplementing something else. Having said that, one can always pre- or\npostprocess to tweak the results.\n\nDaitch-Mokotoff Soundex is known to produce false positives, but that is\nin many cases not a problem.\n\nEven though it's clearly tuned for Jewish names, the soundex algorithm\nseems to work just fine for European names (we use it to match mostly\nNorwegian names).\n\n\nBest regards\n\nDag Lem", "msg_date": "Mon, 02 Jan 2023 22:00:34 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Is there anything else I should do here, to avoid the status being\nincorrectly stuck at \"Waiting for Author\" again.\n\nBest regards\n\nDag Lem\n\n\n", "msg_date": "Thu, 05 Jan 2023 10:43:38 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "On 2023-Jan-05, Dag Lem wrote:\n\n> Is there anything else I should do here, to avoid the status being\n> incorrectly stuck at \"Waiting for Author\" again.\n\nJust mark it Needs Review for now. 
I'll be back from vacation on Jan\n11th and can have a look then (or somebody else can, perhaps.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"\n\n\n", "msg_date": "Thu, 5 Jan 2023 19:16:26 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> On 2023-Jan-05, Dag Lem wrote:\n>\n>> Is there anything else I should do here, to avoid the status being\n>> incorrectly stuck at \"Waiting for Author\" again.\n>\n> Just mark it Needs Review for now. I'll be back from vacation on Jan\n> 11th and can have a look then (or somebody else can, perhaps.)\n\nOK, done. Have a nice vacation!\n\nBest regards\n\nDag Lem\n\n\n", "msg_date": "Fri, 06 Jan 2023 23:27:28 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "On Mon, Jan 2, 2023 at 2:03 PM Dag Lem <dag@nimrod.no> wrote:\n\n> I also improved on the documentation example (using Full Text Search).\n> AFAIK you can't make general queries like that using arrays, however in\n> any case I must admit that text arrays seem like more natural building\n> blocks than space delimited text here.\n\nThis is a fun addition to fuzzystrmatch.\n\nWhile it's a little late in the game, I'll just put it out there:\ndaitch_mokotoff() is way harder to type than soundex_dm(). Not sure\nhow you feel about that.\n\nOn the documentation, I found the leap directly into the tsquery\nexample a bit too big. 
Maybe start with a very simple example,\n\n--\ndm=# SELECT daitch_mokotoff('Schwartzenegger'),\n daitch_mokotoff('Swartzenegger');\n\n daitch_mokotoff | daitch_mokotoff\n-----------------+-----------------\n {479465} | {479465}\n--\n\nThen transition into a more complex example that illustrates the GIN\nindex technique you mention in the text, but do not show:\n\n--\nCREATE TABLE dm_gin (source text, dm text[]);\n\nINSERT INTO dm_gin (source) VALUES\n ('Swartzenegger'),\n ('John'),\n ('James'),\n ('Steinman'),\n ('Steinmetz');\n\nUPDATE dm_gin SET dm = daitch_mokotoff(source);\n\nCREATE INDEX dm_gin_x ON dm_gin USING GIN (dm);\n\nSELECT * FROM dm_gin WHERE dm && daitch_mokotoff('Schwartzenegger');\n--\n\nAnd only then go into the tsearch example. Incidentally, what does the\ntsearch approach provide that the simple GIN approach does not?\nIdeally explain that briefly before launching into the example. With\nall the custom functions and so on it's a little involved, so maybe if\nthere's not a huge win in using that approach drop it entirely?\n\nATB,\nP\n\n\n", "msg_date": "Wed, 11 Jan 2023 12:40:31 -0800", "msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Paul Ramsey <pramsey@cleverelephant.ca> writes:\n\n> On Mon, Jan 2, 2023 at 2:03 PM Dag Lem <dag@nimrod.no> wrote:\n>\n>> I also improved on the documentation example (using Full Text Search).\n>> AFAIK you can't make general queries like that using arrays, however in\n>> any case I must admit that text arrays seem like more natural building\n>> blocks than space delimited text here.\n>\n> This is a fun addition to fuzzystrmatch.\n\nI'm glad to hear it! :-)\n\n>\n> While it's a little late in the game, I'll just put it out there:\n> daitch_mokotoff() is way harder to type than soundex_dm(). 
Not sure\n> how you feel about that.\n\nI chose the name in order to follow the naming of the other functions in\nfuzzystrmatch, which as far as I can tell are given the name which each\nalgorithm is known by.\n\nPersonally I don't think it's worth it to deviate from the naming of the\nother functions just to avoid typing a few characters, and I certainly\ndon't think daitch_mokotoff is any harder to get right than\nlevenshtein_less_equal ;-)\n\nSo, if I were to decide, I wouldn't change the name of the function.\nHowever I'm obviously not calling the shots on what goes into PostgreSQL\n- perhaps someone else would like to weigh in on this?\n\n>\n> On the documentation, I found the leap directly into the tsquery\n> example a bit too big. Maybe start with a very simple example,\n>\n> --\n> dm=# SELECT daitch_mokotoff('Schwartzenegger'),\n> daitch_mokotoff('Swartzenegger');\n>\n> daitch_mokotoff | daitch_mokotoff\n> -----------------+-----------------\n> {479465} | {479465}\n> --\n>\n> Then transition into a more complex example that illustrates the GIN\n> index technique you mention in the text, but do not show:\n>\n> --\n> CREATE TABLE dm_gin (source text, dm text[]);\n>\n> INSERT INTO dm_gin (source) VALUES\n> ('Swartzenegger'),\n> ('John'),\n> ('James'),\n> ('Steinman'),\n> ('Steinmetz');\n>\n> UPDATE dm_gin SET dm = daitch_mokotoff(source);\n>\n> CREATE INDEX dm_gin_x ON dm_gin USING GIN (dm);\n>\n> SELECT * FROM dm_gin WHERE dm && daitch_mokotoff('Schwartzenegger');\n> --\n\nSure, I can do that. You don't think this much example text will be\nTL;DR?\n\n>\n> And only then go into the tsearch example. 
Incidentally, what does the\n> tsearch approach provide that the simple GIN approach does not?\n\nThe example shows how to do a simultaneous match on first AND last\nnames, where the first and last names (any number of names) are stored\nin the same indexed column, and the order of the names in the index and\nthe search term does not matter.\n\nIf you were to use the GIN \"&&\" operator, you would get a match if\neither the first OR the last name matches. If you were to use the GIN\n\"@>\" operator, you would *not* get a match if the search term contains\nmore soundex codes than the indexed name.\n\nE.g. this yields a correct match:\nSELECT soundex_tsvector('John Yamson') @@ soundex_tsquery('John Jameson');\n\nWhile this yields a false positive:\nSELECT (daitch_mokotoff('John') || daitch_mokotoff('Yamson')) && (daitch_mokotoff('John') || daitch_mokotoff('Doe'));\n\nAnd this yields a false negative:\nSELECT (daitch_mokotoff('John') || daitch_mokotoff('Yamson')) @> (daitch_mokotoff('John') || daitch_mokotoff('Jameson'));\n\nThis may explained better by simply showing the output of\nsoundex_tsvector and soundex_tsquery:\n\nSELECT soundex_tsvector('John Yamson');\n soundex_tsvector \n----------------------------------\n '160000':1 '164600':3 '460000':2\n\nSELECT soundex_tsquery('John Jameson');\n soundex_tsquery \n---------------------------------------------------\n ( '160000' | '460000' ) & ( '164600' | '464600' )\n\n> Ideally explain that briefly before launching into the example. With\n> all the custom functions and so on it's a little involved, so maybe if\n> there's not a huge win in using that approach drop it entirely?\n\nI believe this functionality is quite useful, and that it's actually\nwhat's called for in many situations. 
So, I'd rather not drop this\nexample.\n\n>\n> ATB,\n> P\n>\n\nBest regards,\n\nDag Lem\n\n\n", "msg_date": "Thu, 12 Jan 2023 16:30:39 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "\n\n> On Jan 12, 2023, at 7:30 AM, Dag Lem <dag@nimrod.no> wrote:\n> \n> Paul Ramsey <pramsey@cleverelephant.ca> writes:\n> \n>> On Mon, Jan 2, 2023 at 2:03 PM Dag Lem <dag@nimrod.no> wrote:\n>> \n>>> I also improved on the documentation example (using Full Text Search).\n>>> AFAIK you can't make general queries like that using arrays, however in\n>>> any case I must admit that text arrays seem like more natural building\n>>> blocks than space delimited text here.\n>> \n>> This is a fun addition to fuzzystrmatch.\n> \n> I'm glad to hear it! :-)\n> \n>> \n>> While it's a little late in the game, I'll just put it out there:\n>> daitch_mokotoff() is way harder to type than soundex_dm(). Not sure\n>> how you feel about that.\n> \n> I chose the name in order to follow the naming of the other functions in\n> fuzzystrmatch, which as far as I can tell are given the name which each\n> algorithm is known by.\n> \n> Personally I don't think it's worth it to deviate from the naming of the\n> other functions just to avoid typing a few characters, and I certainly\n> don't think daitch_mokotoff is any harder to get right than\n> levenshtein_less_equal ;-)\n\nGood points :)\n\n> \n>> \n>> On the documentation, I found the leap directly into the tsquery\n>> example a bit too big. 
Maybe start with a very simple example,\n>> \n>> --\n>> dm=# SELECT daitch_mokotoff('Schwartzenegger'),\n>> daitch_mokotoff('Swartzenegger');\n>> \n>> daitch_mokotoff | daitch_mokotoff\n>> -----------------+-----------------\n>> {479465} | {479465}\n>> --\n>> \n>> Then transition into a more complex example that illustrates the GIN\n>> index technique you mention in the text, but do not show:\n>> \n>> --\n>> CREATE TABLE dm_gin (source text, dm text[]);\n>> \n>> INSERT INTO dm_gin (source) VALUES\n>> ('Swartzenegger'),\n>> ('John'),\n>> ('James'),\n>> ('Steinman'),\n>> ('Steinmetz');\n>> \n>> UPDATE dm_gin SET dm = daitch_mokotoff(source);\n>> \n>> CREATE INDEX dm_gin_x ON dm_gin USING GIN (dm);\n>> \n>> SELECT * FROM dm_gin WHERE dm && daitch_mokotoff('Schwartzenegger');\n>> --\n> \n> Sure, I can do that. You don't think this much example text will be\n> TL;DR?\n\nI can only speak for myself, but examples are the meat of documentation learning, so as long as they come with enough explanatory context to be legible it's worth having them, IMO.\n\n> \n>> \n>> And only then go into the tsearch example. Incidentally, what does the\n>> tsearch approach provide that the simple GIN approach does not?\n> \n> The example shows how to do a simultaneous match on first AND last\n> names, where the first and last names (any number of names) are stored\n> in the same indexed column, and the order of the names in the index and\n> the search term does not matter.\n> \n> If you were to use the GIN \"&&\" operator, you would get a match if\n> either the first OR the last name matches. If you were to use the GIN\n> \"@>\" operator, you would *not* get a match if the search term contains\n> more soundex codes than the indexed name.\n> \n> E.g. 
this yields a correct match:\n> SELECT soundex_tsvector('John Yamson') @@ soundex_tsquery('John Jameson');\n> \n> While this yields a false positive:\n> SELECT (daitch_mokotoff('John') || daitch_mokotoff('Yamson')) && (daitch_mokotoff('John') || daitch_mokotoff('Doe'));\n> \n> And this yields a false negative:\n> SELECT (daitch_mokotoff('John') || daitch_mokotoff('Yamson')) @> (daitch_mokotoff('John') || daitch_mokotoff('Jameson'));\n> \n> This may explained better by simply showing the output of\n> soundex_tsvector and soundex_tsquery:\n> \n> SELECT soundex_tsvector('John Yamson');\n> soundex_tsvector \n> ----------------------------------\n> '160000':1 '164600':3 '460000':2\n> \n> SELECT soundex_tsquery('John Jameson');\n> soundex_tsquery \n> ---------------------------------------------------\n> ( '160000' | '460000' ) & ( '164600' | '464600' )\n> \n>> Ideally explain that briefly before launching into the example. With\n>> all the custom functions and so on it's a little involved, so maybe if\n>> there's not a huge win in using that approach drop it entirely?\n> \n> I believe this functionality is quite useful, and that it's actually\n> what's called for in many situations. So, I'd rather not drop this\n> example.\n\nSounds good\n\nP\n\n> \n>> \n>> ATB,\n>> P\n>> \n> \n> Best regards,\n> \n> Dag Lem\n\n\n\n", "msg_date": "Thu, 12 Jan 2023 07:52:17 -0800", "msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Paul Ramsey <pramsey@cleverelephant.ca> writes:\n\n>> On Jan 12, 2023, at 7:30 AM, Dag Lem <dag@nimrod.no> wrote:\n>> \n\n[...]\n\n>> \n>> Sure, I can do that. 
You don't think this much example text will be\n>> TL;DR?\n>\n> I can only speak for myself, but examples are the meat of\n> documentation learning, so as long as they come with enough\n> explanatory context to be legible it's worth having them, IMO.\n>\n\nI have updated the documentation, hopefully it is more accessible now.\n\nI also corrected documentation for the other functions in fuzzystrmatch\n(function name and argtype in the wrong order).\n\nCrossing fingers that someone will eventually change the status to\n\"Ready for Committer\" :-)\n\nBest regards,\n\nDag Lem", "msg_date": "Tue, 17 Jan 2023 15:18:16 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> On 2023-Jan-05, Dag Lem wrote:\n>\n>> Is there anything else I should do here, to avoid the status being\n>> incorrectly stuck at \"Waiting for Author\" again.\n>\n> Just mark it Needs Review for now. 
I'll be back from vacation on Jan\n> 11th and can have a look then (or somebody else can, perhaps.)\n\nPaul Ramsey had a few comments in the mean time, and based on this I\nhave produced (yet another) patch, with improved documentation.\n\nHowever it's still not marked as \"Ready for Committer\" - can you please\ntake a look again?\n\nBest regards\n\nDag Lem\n\n\n", "msg_date": "Fri, 20 Jan 2023 14:45:40 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Hi Paul,\n\nI just went by to check the status of the patch, and I noticed that\nyou've added yourself as reviewer earlier - great!\n\nPlease tell me if there is anything I can do to help bring this across\nthe finish line.\n\nBest regards,\n\nDag Lem\n\n\n", "msg_date": "Tue, 07 Feb 2023 15:47:43 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "\n\n> On Feb 7, 2023, at 6:47 AM, Dag Lem <dag@nimrod.no> wrote:\n> \n> I just went by to check the status of the patch, and I noticed that\n> you've added yourself as reviewer earlier - great!\n> \n> Please tell me if there is anything I can do to help bring this across\n> the finish line.\n\nHonestly, I had set it to Ready for Committer, but then I went to run regression one more time and my regression blew up. I found I couldn't enable the UTF tests without things failing. And I don't blame you! I think my installation is probably out-of-alignment in some way, but I didn't want to flip the Ready flag without having run everything through to completion, so I flipped it back. Also, are the UTF tests enabled by default? 
It wasn't clear to me that they were?\n\nP\n\n", "msg_date": "Tue, 7 Feb 2023 09:08:15 -0800", "msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "On 2/7/23 18:08, Paul Ramsey wrote:\n> \n> \n>> On Feb 7, 2023, at 6:47 AM, Dag Lem <dag@nimrod.no> wrote:\n>>\n>> I just went by to check the status of the patch, and I noticed that\n>> you've added yourself as reviewer earlier - great!\n>>\n>> Please tell me if there is anything I can do to help bring this across\n>> the finish line.\n> \n> Honestly, I had set it to Ready for Committer, but then I went to run regression one more time and my regression blew up. I found I couldn't enable the UTF tests without things failing. And I don't blame you! I think my installation is probably out-of-alignment in some way, but I didn't want to flip the Ready flag without having run everything through to completion, so I flipped it back. Also, are the UTF tests enabled by default? It wasn't clear to me that they were?\n> \nThe utf8 tests are enabled depending on the encoding returned by\ngetdatabaseencoding(). Systems with other encodings will simply use the\nalternate .out file. And it works perfectly fine for me.\n\nIMHO it's ready for committer.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 7 Feb 2023 23:28:33 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "On 2023-Jan-17, Dag Lem wrote:\n\n> + * Daitch-Mokotoff Soundex\n> + *\n> + * Copyright (c) 2021 Finance Norway\n> + * Author: Dag Lem <dag@nimrod.no>\n\nHmm, I don't think we accept copyright lines that aren't \"PostgreSQL\nGlobal Development Group\". Is it okay to use that, and update the year\nto 2023? 
(Note that answering \"no\" very likely means your patch is not\ncandidate for inclusion.) Also, we tend not to have \"Author:\" lines.\n\n> + * Permission to use, copy, modify, and distribute this software and its\n> + * documentation for any purpose, without fee, and without a written agreement\n> + * is hereby granted, provided that the above copyright notice and this\n> + * paragraph and the following two paragraphs appear in all copies.\n> + *\n> + * IN NO EVENT SHALL THE AUTHOR OR DISTRIBUTORS BE LIABLE TO ANY PARTY FOR\n> + * DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING\n> + * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS\n> + * DOCUMENTATION, EVEN IF THE AUTHOR OR DISTRIBUTORS HAVE BEEN ADVISED OF THE\n> + * POSSIBILITY OF SUCH DAMAGE.\n> + *\n> + * THE AUTHOR AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES,\n> + * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY\n> + * AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS\n> + * ON AN \"AS IS\" BASIS, AND THE AUTHOR AND DISTRIBUTORS HAS NO OBLIGATIONS TO\n> + * PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n\nWe don't keep a separate copyright statement in the file; rather we\nassume that all files are under the PostgreSQL license, which is in the\nCOPYRIGHT file at the top of the tree. Changing it thus has the side\neffect that these disclaim notes refer to the University of California\nrather than \"the Author\". 
IANAL.\n\n\nI think we should add SPDX markers to all the files we distribute:\n/* SPDX-License-Identifier: PostgreSQL */\n\nhttps://spdx.dev/ids/\nhttps://spdx.org/licenses/PostgreSQL.html\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Tiene valor aquel que admite que es un cobarde\" (Fernandel)\n\n\n", "msg_date": "Wed, 8 Feb 2023 10:09:54 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n\n> On 2/7/23 18:08, Paul Ramsey wrote:\n>> \n>> \n>>> On Feb 7, 2023, at 6:47 AM, Dag Lem <dag@nimrod.no> wrote:\n>>>\n>>> I just went by to check the status of the patch, and I noticed that\n>>> you've added yourself as reviewer earlier - great!\n>>>\n>>> Please tell me if there is anything I can do to help bring this across\n>>> the finish line.\n>> \n>> Honestly, I had set it to Ready for Committer, but then I went to\n>> run regression one more time and my regression blew up. I found I\n>> couldn't enable the UTF tests without things failing. And I don't\n>> blame you! I think my installation is probably out-of-alignment in\n>> some way, but I didn't want to flip the Ready flag without having\n>> run everything through to completion, so I flipped it back. Also,\n>> are the UTF tests enabled by default? It wasn't clear to me that\n>> they were?\n>> \n> The utf8 tests are enabled depending on the encoding returned by\n> getdatabaseencoding(). Systems with other encodings will simply use the\n> alternate .out file. And it works perfectly fine for me.\n>\n> IMHO it's ready for committer.\n>\n>\n> regards\n\nYes, the UTF-8 tests follow the current best practice as has been\nexplained to me earlier. 
The following patch exemplifies this:\n\nhttps://github.com/postgres/postgres/commit/c2e8bd27519f47ff56987b30eb34a01969b9a9e8\n\n\nBest regards,\n\nDag Lem\n\n\n", "msg_date": "Wed, 08 Feb 2023 14:23:04 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> On 2023-Jan-17, Dag Lem wrote:\n>\n>> + * Daitch-Mokotoff Soundex\n>> + *\n>> + * Copyright (c) 2021 Finance Norway\n>> + * Author: Dag Lem <dag@nimrod.no>\n>\n> Hmm, I don't think we accept copyright lines that aren't \"PostgreSQL\n> Global Development Group\". Is it okay to use that, and update the year\n> to 2023? (Note that answering \"no\" very likely means your patch is not\n> candidate for inclusion.) Also, we tend not to have \"Author:\" lines.\n>\n\nYou'll have to forgive me for not knowing about this rule:\n\n grep -ER \"Copyright.*[0-9]{4}\" contrib/ | grep -v PostgreSQL\n\nIn any case, I have checked with the copyright owner, and it would be OK\nto assign the copyright to \"PostgreSQL Global Development Group\".\n\nTo avoid going back and forth with patches, how do you propose that the\nsponsor and the author of the contributed module should be credited?\nWoule something like this be acceptable?\n\n/*\n * Daitch-Mokotoff Soundex\n *\n * Copyright (c) 2023, PostgreSQL Global Development Group\n *\n * This module was sponsored by Finance Norway / Trafikkforsikringsforeningen\n * and implemented by Dag Lem <dag@nimrod.no>\n *\n ...\n\n[...]\n\n>\n> We don't keep a separate copyright statement in the file; rather we\n> assume that all files are under the PostgreSQL license, which is in the\n> COPYRIGHT file at the top of the tree. Changing it thus has the side\n> effect that these disclaim notes refer to the University of California\n> rather than \"the Author\". IANAL.\n\nOK, no problem. 
Note that you will again find counterexamples under\ncontrib/ (and in some other places):\n\n grep -R \"Permission to use\" .\n\n> I think we should add SPDX markers to all the files we distribute:\n> /* SPDX-License-Identifier: PostgreSQL */\n>\n> https://spdx.dev/ids/\n> https://spdx.org/licenses/PostgreSQL.html\n\nAs far as I can tell, this is not included in any file so far, and is\nthus better left to decide and implement by someone else.\n\nBest regards\n\nDag Lem\n\n\n", "msg_date": "Wed, 08 Feb 2023 15:31:20 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "\n\nOn 2/8/23 15:31, Dag Lem wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> \n>> On 2023-Jan-17, Dag Lem wrote:\n>>\n>>> + * Daitch-Mokotoff Soundex\n>>> + *\n>>> + * Copyright (c) 2021 Finance Norway\n>>> + * Author: Dag Lem <dag@nimrod.no>\n>>\n>> Hmm, I don't think we accept copyright lines that aren't \"PostgreSQL\n>> Global Development Group\". Is it okay to use that, and update the year\n>> to 2023? (Note that answering \"no\" very likely means your patch is not\n>> candidate for inclusion.) Also, we tend not to have \"Author:\" lines.\n>>\n> \n> You'll have to forgive me for not knowing about this rule:\n> \n> grep -ER \"Copyright.*[0-9]{4}\" contrib/ | grep -v PostgreSQL\n> \n> In any case, I have checked with the copyright owner, and it would be OK\n> to assign the copyright to \"PostgreSQL Global Development Group\".\n> \n\nI'm not entirely sure what's the rule either, and I'm a committer. My\nguess is these cases are either old and/or adding a code that already\nexisted elsewhere (like some of the double metaphone, for example), or\nmaybe both. 
But I'd bet we'd prefer not adding more ...\n\n> To avoid going back and forth with patches, how do you propose that the\n> sponsor and the author of the contributed module should be credited?\n> Woule something like this be acceptable?\n> \n\nWe generally credit contributors in two ways - by mentioning them in the\ncommit message, and by listing them in the release notes (for individual\nfeatures).\n\n> /*\n> * Daitch-Mokotoff Soundex\n> *\n> * Copyright (c) 2023, PostgreSQL Global Development Group\n> *\n> * This module was sponsored by Finance Norway / Trafikkforsikringsforeningen\n> * and implemented by Dag Lem <dag@nimrod.no>\n> *\n> ...\n> \n> [...]\n> \n>>\n>> We don't keep a separate copyright statement in the file; rather we\n>> assume that all files are under the PostgreSQL license, which is in the\n>> COPYRIGHT file at the top of the tree. Changing it thus has the side\n>> effect that these disclaim notes refer to the University of California\n>> rather than \"the Author\". IANAL.\n> \n> OK, no problem. Note that you will again find counterexamples under\n> contrib/ (and in some other places):\n> \n> grep -R \"Permission to use\" .\n> \n>> I think we should add SPDX markers to all the files we distribute:\n>> /* SPDX-License-Identifier: PostgreSQL */\n>>\n>> https://spdx.dev/ids/\n>> https://spdx.org/licenses/PostgreSQL.html\n> \n> As far as I can tell, this is not included in any file so far, and is\n> thus better left to decide and implement by someone else.\n> \n\nI don't think Alvaro was suggesting this patch should do that. 
It was\nmore a generic comment about what the project as a whole might do.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Feb 2023 21:42:28 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n\n> On 2/8/23 15:31, Dag Lem wrote:\n>> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> \n>>> On 2023-Jan-17, Dag Lem wrote:\n>>>\n>>>> + * Daitch-Mokotoff Soundex\n>>>> + *\n>>>> + * Copyright (c) 2021 Finance Norway\n>>>> + * Author: Dag Lem <dag@nimrod.no>\n>>>\n>>> Hmm, I don't think we accept copyright lines that aren't \"PostgreSQL\n>>> Global Development Group\". Is it okay to use that, and update the year\n>>> to 2023? (Note that answering \"no\" very likely means your patch is not\n>>> candidate for inclusion.) Also, we tend not to have \"Author:\" lines.\n>>>\n>> \n>> You'll have to forgive me for not knowing about this rule:\n>> \n>> grep -ER \"Copyright.*[0-9]{4}\" contrib/ | grep -v PostgreSQL\n>> \n>> In any case, I have checked with the copyright owner, and it would be OK\n>> to assign the copyright to \"PostgreSQL Global Development Group\".\n>> \n>\n> I'm not entirely sure what's the rule either, and I'm a committer. My\n> guess is these cases are either old and/or adding a code that already\n> existed elsewhere (like some of the double metaphone, for example), or\n> maybe both. 
But I'd bet we'd prefer not adding more ...\n>\n>> To avoid going back and forth with patches, how do you propose that the\n>> sponsor and the author of the contributed module should be credited?\n>> Woule something like this be acceptable?\n>> \n>\n> We generally credit contributors in two ways - by mentioning them in the\n> commit message, and by listing them in the release notes (for individual\n> features).\n>\n\nI'll ask again, would the proposed credits be acceptable? In this case,\nthe code already existed elsewhere (as in your example for double\nmetaphone) as a separate extension. The copyright owner is OK with\ncopyright assignment, however I find it quite unreasonable that proper\ncredits should not be given. Neither commit messages nor release notes\nfollow the contributed module, which is in its entirety contributed by\nan external entity.\n\nI'll also point out that in addition to credits in code all over the\nplace, PostgreSQL has much more prominent credits in the documentation:\n\n grep -ER \"Author\" doc/ | grep -v PostgreSQL\n\n\"Author\" is even documented as a top level section in the Reference\nPages as \"Author (only used in the contrib section)\", see\n\n https://www.postgresql.org/docs/15/docguide-style.html#id-1.11.11.8.2\n\nIf there really exists some new rule which says that for new\ncontributions under contrib/, credits should not be allowed in any way\nin either code or documentation (IANAL, but AFAIU this would be in\nconflict with laws on author's moral rights in several countries), then\none would reasonably expect that you'd be upfront about this, both in\ndocumentation, and also as the very first thing when a contribution is\nfirst proposed for inclusion.\n\nBest regards\n\nDag Lem\n\n\n", "msg_date": "Thu, 09 Feb 2023 10:28:36 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Hi,\n\nOn 2023-02-09 10:28:36 +0100, Dag Lem wrote:\n> I'll ask again, 
would the proposed credits be acceptable? In this case,\n> the code already existed elsewhere (as in your example for double\n> metaphone) as a separate extension. The copyright owner is OK with\n> copyright assignment, however I find it quite unreasonable that proper\n> credits should not be given.\n\nYou don't need to assign copyright, it needs however be licensed under the\nterms of the PostgreSQL License.\n\n\n> Neither commit messages nor release notes\n> follow the contributed module, which is in its entirety contributed by\n> an external entity.\n\nThe problem with adding credits to source files is that it's hard to maintain\nthem reasonably over time. At what point has a C file been extended\nsufficiently to warrant an additional author?\n\n\n> I'll also point out that in addition to credits in code all over the\n> place, PostgreSQL has much more prominent credits in the documentation:\n>\n> grep -ER \"Author\" doc/ | grep -v PostgreSQL\n\nFWIW, I'd rather remove them. In several of those the credited author has, by\nnow, only done a small fraction of the overall work.\n\nThey don't make much sense to me - you don't get a permanent mention in other\nparts of the documentation either. Many of the binaries outside of contrib/\ninvolved a lot more work by one individual than cases in contrib/. 
Lots of\nbackend code has a *lot* of work done by one individual, yet we don't add\nauthorship notes in relevant sections of the documentation.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Feb 2023 18:58:30 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "I sincerely hope this resolves any blocking issues with copyright /\nlegalese / credits.\n\nBest regards\n\nDag Lem", "msg_date": "Tue, 14 Feb 2023 15:27:21 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Dag Lem <dag@nimrod.no> writes:\n\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>\n>> On 2/7/23 18:08, Paul Ramsey wrote:\n>>> \n>>> \n>>>> On Feb 7, 2023, at 6:47 AM, Dag Lem <dag@nimrod.no> wrote:\n>>>>\n>>>> I just went by to check the status of the patch, and I noticed that\n>>>> you've added yourself as reviewer earlier - great!\n>>>>\n>>>> Please tell me if there is anything I can do to help bring this across\n>>>> the finish line.\n>>> \n>>> Honestly, I had set it to Ready for Committer, but then I went to\n>>> run regression one more time and my regression blew up. I found I\n>>> couldn't enable the UTF tests without things failing. And I don't\n>>> blame you! I think my installation is probably out-of-alignment in\n>>> some way, but I didn't want to flip the Ready flag without having\n>>> run everything through to completion, so I flipped it back. Also,\n>>> are the UTF tests enabled by default? It wasn't clear to me that\n>>> they were?\n>>> \n>> The utf8 tests are enabled depending on the encoding returned by\n>> getdatabaseencoding(). Systems with other encodings will simply use the\n>> alternate .out file. 
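[Editorial note: the encoding-dependent test gating quoted here — getdatabaseencoding() deciding whether the utf8 tests run, with an alternate .out file matching the skipped run otherwise — can be modelled minimally. The file names and helper below are illustrative only; pg_regress actually accepts a match against any provided alternate expected file.]

```python
def pick_expected(database_encoding, base="daitch_mokotoff_utf8"):
    # Simplified model of the gating described above: the primary
    # expected file corresponds to a run against a UTF8 database; on
    # any other encoding the test bails out early, so its output
    # matches the numbered alternate file instead.  (File names are
    # illustrative, not the module's actual test names.)
    if database_encoding == "UTF8":
        return f"{base}.out"
    return f"{base}_1.out"

assert pick_expected("UTF8") == "daitch_mokotoff_utf8.out"
assert pick_expected("LATIN1") == "daitch_mokotoff_utf8_1.out"
```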
And it works perfectly fine for me.\n>>\n>> IMHO it's ready for committer.\n>>\n>>\n>> regards\n>\n> Yes, the UTF-8 tests follow the current best practice as has been\n> explained to me earlier. The following patch exemplifies this:\n>\n> https://github.com/postgres/postgres/commit/c2e8bd27519f47ff56987b30eb34a01969b9a9e8\n>\n>\n\nCan you please have a look at this again?\n\nBest regards,\n\nDag Lem\n\n\n", "msg_date": "Mon, 06 Mar 2023 12:07:03 +0100", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Dag Lem <dag@nimrod.no> writes:\n\n> I sincerely hope this resolves any blocking issues with copyright /\n> legalese / credits.\n>\n\nCan this now be considered ready for commiter, so that Paul or someone\nelse can flip the bit?\n\nBest regards\nDag Lem\n\n\n", "msg_date": "Mon, 03 Apr 2023 15:19:53 +0200", "msg_from": "Dag Lem <dag@nimrod.no>", "msg_from_op": true, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "On 4/3/23 15:19, Dag Lem wrote:\n> Dag Lem <dag@nimrod.no> writes:\n> \n>> I sincerely hope this resolves any blocking issues with copyright /\n>> legalese / credits.\n>>\n> \n> Can this now be considered ready for commiter, so that Paul or someone\n> else can flip the bit?\n> \n\nHi, I think from the technical point of view it's sound and ready for\ncommit. The patch stalled on the copyright/credit stuff, which is\nsomewhat separate and mostly non-technical aspect of patches. 
Sorry for\nthat, I'm sure it's annoying/frustrating :-(\n\nI see the current patch has two simple lines:\n\n * This module was originally sponsored by Finance Norway /\n * Trafikkforsikringsforeningen, and implemented by Dag Lem\n\nAny objections to this level of attribution in commnents?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 3 Apr 2023 16:45:39 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> Hi, I think from the technical point of view it's sound and ready for\n> commit. The patch stalled on the copyright/credit stuff, which is\n> somewhat separate and mostly non-technical aspect of patches. Sorry for\n> that, I'm sure it's annoying/frustrating :-(\n\n> I see the current patch has two simple lines:\n\n> * This module was originally sponsored by Finance Norway /\n> * Trafikkforsikringsforeningen, and implemented by Dag Lem\n\n> Any objections to this level of attribution in commnents?\n\nThat seems fine to me. I'll check this over and see if I can get\nit pushed today.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Apr 2023 14:55:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "I wrote:\n> That seems fine to me. I'll check this over and see if I can get\n> it pushed today.\n\nI pushed this after some mostly-cosmetic fiddling. Most of the\nbuildfarm seems okay with it, but crake's perlcritic run is not:\n\n./contrib/fuzzystrmatch/daitch_mokotoff_header.pl: I/O layer \":utf8\" used at line 15, column 5. Use \":encoding(UTF-8)\" to get strict validation. 
([InputOutput::RequireEncodingWithUTF8Layer] Severity: 5)\n\nAny suggestions on exactly how to pacify that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Apr 2023 21:13:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Hi,\n\nOn 2023-04-07 21:13:43 -0400, Tom Lane wrote:\n> I wrote:\n> > That seems fine to me. I'll check this over and see if I can get\n> > it pushed today.\n> \n> I pushed this after some mostly-cosmetic fiddling. Most of the\n> buildfarm seems okay with it, but crake's perlcritic run is not:\n> \n> ./contrib/fuzzystrmatch/daitch_mokotoff_header.pl: I/O layer \":utf8\" used at line 15, column 5. Use \":encoding(UTF-8)\" to get strict validation. ([InputOutput::RequireEncodingWithUTF8Layer] Severity: 5)\n> \n> Any suggestions on exactly how to pacify that?\n\nYou could follow it's advise and replace the :utf8 with :encoding(UTF-8), that\nworks here. Or disable it in that piece of code with ## no critic\n(RequireEncodingWithUTF8Layer) Or we could disable the warning in\nperlcriticrc for all files?\n\nUnless it's not available with old versions, using :encoding(UTF-8) seems\nsensible?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Apr 2023 18:25:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-04-07 21:13:43 -0400, Tom Lane wrote:\n>> I pushed this after some mostly-cosmetic fiddling. Most of the\n>> buildfarm seems okay with it, but crake's perlcritic run is not:\n>> \n>> ./contrib/fuzzystrmatch/daitch_mokotoff_header.pl: I/O layer \":utf8\" used at line 15, column 5. Use \":encoding(UTF-8)\" to get strict validation. 
([InputOutput::RequireEncodingWithUTF8Layer] Severity: 5)\n\n> Unless it's not available with old versions, using :encoding(UTF-8) seems\n> sensible?\n\nYeah, that's the obvious fix, I was just wondering if people with\nmore perl-fu than I have see a problem with it. But I'll go ahead\nand push that for now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Apr 2023 21:27:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "I wrote:\n> I pushed this after some mostly-cosmetic fiddling. Most of the\n> buildfarm seems okay with it,\n\nSpoke too soon [1]:\n\nmake[1]: Entering directory '/home/linux1/build-farm-16-pipit/buildroot/HEAD/pgsql.build/contrib/fuzzystrmatch'\n'/usr/bin/perl' daitch_mokotoff_header.pl daitch_mokotoff.h\nCan't locate open.pm in @INC (you may need to install the open module) (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5) at daitch_mokotoff_header.pl line 15.\nBEGIN failed--compilation aborted at daitch_mokotoff_header.pl line 15.\nmake[1]: *** [Makefile:33: daitch_mokotoff.h] Error 2\n\npipit appears to be running a reasonably current system (RHEL8), so\nthe claim that \"open\" is a Perl core module appears false. We need\nto rewrite this to not use that.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pipit&dt=2023-04-08%2001%3A02%3A39\n\n\n", "msg_date": "Fri, 07 Apr 2023 21:52:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "On 2023-04-07 Fr 21:52, Tom Lane wrote:\n> I wrote:\n>> I pushed this after some mostly-cosmetic fiddling. 
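[Editorial note: the point behind the perlcritic policy discussed above is strict validation — Perl's ":encoding(UTF-8)" layer rejects malformed input, while ":utf8" largely passes bytes through. The same distinction can be illustrated in Python (an analogue of the idea, not Perl code):]

```python
# Strict decoding rejects invalid input up front, which is what
# ":encoding(UTF-8)" gives Perl; permissive handling lets the bad
# byte sneak through, closer in spirit to the bare ":utf8" layer.
bad = b"abc\xff"               # 0xFF can never appear in valid UTF-8

try:                           # strict validation
    bad.decode("utf-8")
    strict_ok = True
except UnicodeDecodeError:
    strict_ok = False

# permissive handling: the bad byte becomes a replacement character
# instead of failing fast, so corruption can surface much later
lenient = bad.decode("utf-8", errors="replace")

assert strict_ok is False
assert lenient == "abc\ufffd"
```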
Most of the\n>> buildfarm seems okay with it,\n> Spoke too soon [1]:\n>\n> make[1]: Entering directory '/home/linux1/build-farm-16-pipit/buildroot/HEAD/pgsql.build/contrib/fuzzystrmatch'\n> '/usr/bin/perl' daitch_mokotoff_header.pl daitch_mokotoff.h\n> Can't locate open.pm in @INC (you may need to install the open module) (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5) at daitch_mokotoff_header.pl line 15.\n> BEGIN failed--compilation aborted at daitch_mokotoff_header.pl line 15.\n> make[1]: *** [Makefile:33: daitch_mokotoff.h] Error 2\n>\n> pipit appears to be running a reasonably current system (RHEL8), so\n> the claim that \"open\" is a Perl core module appears false. We need\n> to rewrite this to not use that.\n>\n> \t\t\tregards, tom lane\n>\n> [1]https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pipit&dt=2023-04-08%2001%3A02%3A39\n>\n>\n\nI think it is a core module (See <https://metacpan.org/pod/open>) but it \nappears that some packagers have separated it out for reasons that \naren't entirely obvious:\n\nandrew@emma:~ $ rpm -q -l -f /usr/share/perl5/open.pm\n/usr/share/man/man3/open.3pm.gz\n/usr/share/perl5/open.pm\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-07 Fr 21:52, Tom Lane wrote:\n\n\nI wrote:\n\n\nI pushed this after some mostly-cosmetic fiddling. 
Most of the\nbuildfarm seems okay with it,\n\n\n\nSpoke too soon [1]:\n\nmake[1]: Entering directory '/home/linux1/build-farm-16-pipit/buildroot/HEAD/pgsql.build/contrib/fuzzystrmatch'\n'/usr/bin/perl' daitch_mokotoff_header.pl daitch_mokotoff.h\nCan't locate open.pm in @INC (you may need to install the open module) (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5) at daitch_mokotoff_header.pl line 15.\nBEGIN failed--compilation aborted at daitch_mokotoff_header.pl line 15.\nmake[1]: *** [Makefile:33: daitch_mokotoff.h] Error 2\n\npipit appears to be running a reasonably current system (RHEL8), so\nthe claim that \"open\" is a Perl core module appears false. We need\nto rewrite this to not use that.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pipit&dt=2023-04-08%2001%3A02%3A39\n\n\n\n\n\n\nI think it is a core module (See\n <https://metacpan.org/pod/open>) but it appears that some\n packagers have separated it out for reasons that aren't entirely\n obvious:\nandrew@emma:~ $ rpm -q -l -f /usr/share/perl5/open.pm \n /usr/share/man/man3/open.3pm.gz\n /usr/share/perl5/open.pm\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 7 Apr 2023 22:50:32 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-04-07 Fr 21:52, Tom Lane wrote:\n>> pipit appears to be running a reasonably current system (RHEL8), so\n>> the claim that \"open\" is a Perl core module appears false. 
We need\n>> to rewrite this to not use that.\n\n> I think it is a core module (See <https://metacpan.org/pod/open>) but it \n> appears that some packagers have separated it out for reasons that \n> aren't entirely obvious:\n\nHmm, yeah: on my RHEL8 workstation\n\n$ rpm -qf /usr/share/perl5/open.pm\nperl-open-1.11-421.el8.noarch\n\nIt's not exactly clear how that came to be installed, because\n\n$ rpm -q perl-open --whatrequires\nno package requires perl-open\n\nand indeed another nearby RHEL8 machine doesn't have that package\ninstalled at all, even though I've got it loaded up with enough\nstuff for most Postgres work. (Sadly, I'd not tested on that one.)\n\nAnyway, I assume this is just syntactic sugar for something\nwe can do another way? If it's at all fundamental, I'll have\nto back the patch out.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Apr 2023 23:03:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "I wrote:\n> Anyway, I assume this is just syntactic sugar for something\n> we can do another way? If it's at all fundamental, I'll have\n> to back the patch out.\n\nOn closer inspection, this script is completely devoid of any\nneed to deal in non-ASCII data at all. So I just nuked the\n\"use\" lines.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Apr 2023 23:25:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "On 2023-04-07 Fr 23:25, Tom Lane wrote:\n> I wrote:\n>> Anyway, I assume this is just syntactic sugar for something\n>> we can do another way? If it's at all fundamental, I'll have\n>> to back the patch out.\n> On closer inspection, this script is completely devoid of any\n> need to deal in non-ASCII data at all. So I just nuked the\n> \"use\" lines.\n>\n> \t\t\t\n\n\nYeah.\n\nI just spent a little while staring at the perl code. 
I have to say it \nseems rather opaque, the data structure seems a bit baroque. I'll try to \nsimplify it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-07 Fr 23:25, Tom Lane wrote:\n\n\nI wrote:\n\n\nAnyway, I assume this is just syntactic sugar for something\nwe can do another way? If it's at all fundamental, I'll have\nto back the patch out.\n\n\n\nOn closer inspection, this script is completely devoid of any\nneed to deal in non-ASCII data at all. So I just nuked the\n\"use\" lines.\n\n\t\t\t\n\n\n\nYeah.\n\nI just spent a little while staring at the perl code. I have to\n say it seems rather opaque, the data structure seems a bit\n baroque. I'll try to simplify it.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sat, 8 Apr 2023 07:50:00 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" }, { "msg_contents": "Buildfarm member hamerkop has a niggle about this patch:\n\nc:\\\\build-farm-local\\\\buildroot\\\\head\\\\pgsql.build\\\\contrib\\\\fuzzystrmatch\\\\daitch_mokotoff.c : warning C4819: The file contains a character that cannot be represented in the current code page (932). Save the file in Unicode format to prevent data loss\n\nIt's complaining about the comment in\n\nstatic const char iso8859_1_to_ascii_upper[] =\n/*\n\"`abcdefghijklmnopqrstuvwxyz{|}~ ¡¢£¤¥¦§¨©ª«¬ ®¯°±²³´µ¶·¸¹º»¼½¾¿ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿ\"\n*/\n\"`ABCDEFGHIJKLMNOPQRSTUVWXYZ{|}~ ! 
?AAAAAAECEEEEIIIIDNOOOOO*OUUUUYDSAAAAAAECEEEEIIIIDNOOOOO/OUUUUYDY\";\n\nThere are some other comments with non-ASCII characters elsewhere in the\nfile, but I think it's mainly just the weird symbols here that might fail\nto translate to encodings that are not based on ISO 8859-1.\n\nI think we need to get rid of this warning: it's far from obvious that\nit's a non-issue, and because the compiler is not at all specific about\nwhere the issue is, people could waste a lot of time figuring that out.\nIn fact, it might *not* be a non-issue, if it prevents the source tree\nas a whole from being processed by some tool or other.\n\nSo I propose to replace those symbols with \"... random symbols ...\" or\nthe like and see if the warning goes away. If not, we might have to\nresort to something more drastic like removing this comment altogether.\nWe do have non-ASCII text in comments and test cases elsewhere in the\ntree, and have not had a lot of trouble with that, so I'm hoping the\nletters can stay because they are useful to compare to the constant.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Apr 2023 13:57:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: daitch_mokotoff module" } ]
[ { "msg_contents": "I want to present my proof-of-concept patch for the transparent column\nencryption feature. (Some might also think of it as automatic\nclient-side encryption or similar, but I like my name.) This feature\nenables the {automatic,transparent} encryption and decryption of\nparticular columns in the client. The data for those columns then\nonly ever appears in ciphertext on the server, so it is protected from\nthe \"prying eyes\" of DBAs, sysadmins, cloud operators, etc. The\ncanonical use case for this feature is storing credit card numbers\nencrypted, in accordance with PCI DSS, as well as similar situations\ninvolving social security numbers etc. Of course, you can't do any\ncomputations with encrypted values on the server, but for these use\ncases, that is not necessary. This feature does support deterministic\nencryption as an alternative to the default randomized encryption, so\nin that mode you can do equality lookups, at the cost of some\nsecurity.\n\nThis functionality also exists in other SQL database products, so the\noverall concepts weren't invented by me by any means.\n\nAlso, this feature has nothing to do with the on-disk encryption\nfeature being contemplated in parallel. Both can exist independently.\n\nThe attached patch has all the necessary pieces in place to make this\nwork, so you can have an idea how the overall system works. It\ncontains some documentation and tests to help illustrate the\nfunctionality. But it's missing the remaining 90% of the work,\nincluding additional DDL support, error handling, robust memory\nmanagement, protocol versioning, forward and backward compatibility,\npg_dump support, psql \\d support, refinement of the cryptography, and\nso on. But I think obvious solutions exist to all of those things, so\nit isn't that interesting to focus on them for now.\n\n------\n\nNow to the explanation of how it works.\n\nYou declare a column as encrypted in a CREATE TABLE statement. 
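To make the shape of this concrete, here is a purely illustrative DDL sketch. The keywords, option names, and key material below are invented for illustration; they are not the actual grammar from the patch:

```sql
-- Illustrative sketch only; not the actual syntax from the patch.

-- CMK catalog entry: records a provider and its options; the key
-- itself stays outside the database (here, a file the client reads).
CREATE COLUMN MASTER KEY cmk1 WITH (provider = 'file',
                                    filename = '/path/to/cmk1.pem');

-- CEK catalog entry: symmetric key material, stored only in a form
-- already encrypted by the CMK.
CREATE COLUMN ENCRYPTION KEY cek1 WITH VALUES
    (column_master_key = cmk1, encrypted_value = '\x0123abcd');

-- Table with one encrypted column; the server only ever sees
-- ciphertext for card_no.
CREATE TABLE customers (
    id      int PRIMARY KEY,
    card_no text ENCRYPTED WITH (column_encryption_key = cek1)
);
```

The catalog objects mirror the description that follows: the CMK entry only records a provider and options, while the CEK entry stores key material encrypted by the CMK.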
The\ncolumn value is encrypted by a symmetric key called the column\nencryption key (CEK). The CEK is a catalog object. The CEK key\nmaterial is in turn encrypted by an asymmetric key called the column\nmaster key (CMK). The CMK is not stored in the database but somewhere\nwhere the client can get to it, for example in a file or in a key\nmanagement system. When a server sends rows containing encrypted\ncolumn values to the client, it first sends the required CMK and CEK\ninformation (new protocol messages), which the client needs to record.\nThen, the client can use this information to automatically decrypt the\nincoming row data and forward it in plaintext to the application.\n\nFor the CMKs, the catalog object specifies a \"provider\" and generic\noptions. Right now, libpq has a \"file\" provider hardcoded, and it\ntakes a \"filename\" option. Via some mechanism to be determined,\nadditional providers could be loaded and then talk to key management\nsystems via http or whatever. I have left some comments in the libpq\ncode where the hook points for this could be.\n\nThe general idea would be for an application to have one CMK per area\nof secret stuff, for example, for credit card data. The CMK can be\nrotated: each CEK can be represented multiple times in the database,\nencrypted by a different CMK. (The CEK can't be rotated easily, since\nthat would require reading out all the data from a table/column and\nreencrypting it. We could/should add some custom tooling for that,\nbut it wouldn't be a routine operation.)\n\nThe encryption algorithms are mostly hardcoded right now, but there\nare facilities for picking algorithms and adding new ones that will be\nexpanded. The CMK process uses RSA-OAEP. The CEK process uses\nAES-128-CBC right now; a more complete solution should probably\ninvolve some HMAC thrown in.\n\nIn the server, the encrypted datums are stored in types called\nencryptedr and encryptedd (for randomized and deterministic\nencryption). 
These are essentially cousins of bytea. For the rest of\nthe database system below the protocol handling, there is nothing\nspecial about those. For example, encryptedr has no operators at all,\nencryptedd has only an equality operator. pg_attribute has a new\ncolumn attrealtypid that stores the original type of the data in the\ncolumn. This is only used for providing it to clients, so that\nhigher-level clients can convert the decrypted value to their\nappropriate data types in their environments.\n\nSome protocol extensions are required. These should be guarded by\nsome _pq_... setting, but this is not done in this patch yet. As\nmentioned above, extra messages are added for sending the CMKs and\nCEKs. In the RowDescription message, I have commandeered the format\nfield to add a bit that indicates that the field is encrypted. This\ncould be made a separate field, and there should probably be\nadditional fields to indicate the algorithm and CEK name, but this was\neasiest for now. The ParameterDescription message is extended to\ncontain format fields for each parameter, for the same purpose.\nAgain, this could be done differently.\n\nSpeaking of parameter descriptions, the trickiest part of this whole\nthing appears to be how to get transparently encrypted data into the\ndatabase (as opposed to reading it out). It is required to use\nprotocol-level prepared statements (i.e., extended query) for this.\nThe client must first prepare a statement, then describe the statement\nto get parameter metadata, which indicates which parameters are to be\nencrypted and how. So this will require some care by applications\nthat want to do this, but, well, they probably should be careful\nanyway. In libpq, the existing APIs make this difficult, because\nthere is no way to pass the result of a describe-statement call back\ninto execute-statement-with-parameters. 
I added new functions that do\nthis, so you then essentially do\n\n res0 = PQdescribePrepared(conn, \"\");\n res = PQexecPrepared2(conn, \"\", 2, values, NULL, NULL, 0, res0);\n\n(The name could obviously be improved.) Other client APIs that have a\n\"statement handle\" concept could do this more elegantly and probably\nwithout any API changes.\n\nAnother challenge is that the parse analysis must check which\nunderlying column a parameter corresponds to. This is similar to\nresorigtbl and resorigcol in the opposite direction. The current\nimplementation of this works for the test cases, but I know it has\nsome problems, so I'll continue working in this. This functionality\nis in principle available to all prepared-statement variants, not only\nprotocol-level. So you can see in the tests that I expanded the\npg_prepared_statements view to show this information as well, which\nalso provides an easy way to test and debug this functionality\nindependent of column encryption.\n\nAnd also, psql doesn't use prepared statements, so writing into\nencrypted columns currently doesn't work at all via psql. (Reading\nworks no problem.) All the test code currently uses custom libpq C\nprograms. We should think about a way to enable prepared statements\nin psql, perhaps something like\n\nINSERT INTO t1 VALUES ($1, $2) \\gg 'val1' 'val2'\n\n(\\gexec and \\gx are already taken.)\n\n------\n\nThis is not targeting PostgreSQL 15. But I'd appreciate some feedback\non the direction. As I mentioned above, a lot of the remaining work\nis arguably mostly straightforward. 
Some closer examination of the\nissues surrounding the libpq API changes and psql would be useful.\nPerhaps there are other projects where that kind of functionality\nwould also be useful.", "msg_date": "Fri, 3 Dec 2021 22:32:21 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Transparent column encryption" }, { "msg_contents": "On Fri, Dec 3, 2021 at 4:32 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> But it's missing the remaining 90% of the work,\n> including additional DDL support, error handling, robust memory\n> management, protocol versioning, forward and backward compatibility,\n> pg_dump support, psql \\d support, refinement of the cryptography, and\n> so on. But I think obvious solutions exist to all of those things, so\n> it isn't that interesting to focus on them for now.\n\nRight, we wouldn't want to get bogged down at this stage in little\ndetails like, uh, everything.\n\n> Some protocol extensions are required. These should be guarded by\n> some _pq_... setting, but this is not done in this patch yet. As\n> mentioned above, extra messages are added for sending the CMKs and\n> CEKs. In the RowDescription message, I have commandeered the format\n> field to add a bit that indicates that the field is encrypted. This\n> could be made a separate field, and there should probably be\n> additional fields to indicate the algorithm and CEK name, but this was\n> easiest for now. The ParameterDescription message is extended to\n> contain format fields for each parameter, for the same purpose.\n> Again, this could be done differently.\n\nI think this is reasonable. I would choose to use an additional bit in\nthe format field as opposed to a separate field. 
It is worth\nconsidering whether it makes more sense to extend the existing\nParameterDescription message conditionally on some protocol-level\noption, or whether we should instead, say, add ParameterDescription2\nor the moral equivalent. As I see it, the latter feels conceptually\nsimpler, but on the other hand, our wire protocol supposes that we\nwill never run out of 1-byte codes for messages, so perhaps some\nprudence is needed.\n\n> Speaking of parameter descriptions, the trickiest part of this whole\n> thing appears to be how to get transparently encrypted data into the\n> database (as opposed to reading it out). It is required to use\n> protocol-level prepared statements (i.e., extended query) for this.\n\nWhy? If the client knows the CEK, can't the client choose to send\nunprepared insert or update statements with pre-encrypted blobs? That\nmight be a bad idea from a security perspective because the encrypted\nblob might then got logged, but we sometimes log parameters, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Dec 2021 13:28:28 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Fri, 2021-12-03 at 22:32 +0100, Peter Eisentraut wrote:\r\n> This feature does support deterministic\r\n> encryption as an alternative to the default randomized encryption, so\r\n> in that mode you can do equality lookups, at the cost of some\r\n> security.\r\n\r\n> +\t\t\t\tif (enc_det)\r\n> +\t\t\t\t\tmemset(iv, ivlen, 0);\r\n\r\nI think reusing a zero IV will potentially leak more information than\r\njust equality, depending on the cipher in use. You may be interested in\r\nsynthetic IVs and nonce-misuse resistance (e.g. [1]), since they seem\r\nlike they would match this use case exactly. 
(But I'm not a\r\ncryptographer.)\r\n\r\n> The encryption algorithms are mostly hardcoded right now, but there\r\n> are facilities for picking algorithms and adding new ones that will be\r\n> expanded. The CMK process uses RSA-OAEP. The CEK process uses\r\n> AES-128-CBC right now; a more complete solution should probably\r\n> involve some HMAC thrown in.\r\n\r\nHave you given any thought to AEAD? As a client I'd like to be able to\r\ntie an encrypted value to other column (or external) data. For example,\r\nAEAD could be used to prevent a DBA from copying the (encrypted) value\r\nof my credit card column into their account's row to use it.\r\n\r\n> This is not targeting PostgreSQL 15. But I'd appreciate some feedback\r\n> on the direction.\r\n\r\nWhat kinds of attacks are you hoping to prevent (and not prevent)?\r\n\r\n--Jacob\r\n\r\n[1] https://datatracker.ietf.org/doc/html/rfc8452\r\n", "msg_date": "Mon, 6 Dec 2021 20:44:54 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 06.12.21 19:28, Robert Haas wrote:\n>> Speaking of parameter descriptions, the trickiest part of this whole\n>> thing appears to be how to get transparently encrypted data into the\n>> database (as opposed to reading it out). It is required to use\n>> protocol-level prepared statements (i.e., extended query) for this.\n> Why? If the client knows the CEK, can't the client choose to send\n> unprepared insert or update statements with pre-encrypted blobs? That\n> might be a bad idea from a security perspective because the encrypted\n> blob might then got logged, but we sometimes log parameters, too.\n\nThe client can send something like\n\nPQexec(conn, \"INSERT INTO tbl VALUES ('ENCBLOB', 'ENCBLOB')\");\n\nand it will work. (See the included test suite where 'ENCBLOB' is \nactually computed by pgcrypto.) But that is not transparent encryption. 
\n The client wants to send \"INSERT INTO tbl VALUES ('val1', 'val2')\" and \nhave libpq take care of encrypting 'val1' and 'val2' before hitting the \nwire. For that you need to use the prepared statement API so that the \nvalues are available separately from the statement. And furthermore the \nclient needs to know what columns the insert statements is writing to, \nso that it can get the CEK for that column. That's what it needs the \nparameter description for.\n\nAs alluded to, workarounds exist or might be made available to do part \nof that work yourself, but that shouldn't be the normal way of using it.\n\n\n", "msg_date": "Tue, 7 Dec 2021 16:06:55 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 06.12.21 21:44, Jacob Champion wrote:\n> I think reusing a zero IV will potentially leak more information than\n> just equality, depending on the cipher in use. You may be interested in\n> synthetic IVs and nonce-misuse resistance (e.g. [1]), since they seem\n> like they would match this use case exactly. (But I'm not a\n> cryptographer.)\n\nI'm aware of this and plan to make use of SIV. The current \nimplementation is just an example.\n\n> Have you given any thought to AEAD? As a client I'd like to be able to\n> tie an encrypted value to other column (or external) data. For example,\n> AEAD could be used to prevent a DBA from copying the (encrypted) value\n> of my credit card column into their account's row to use it.\n\nI don't know how that is supposed to work. When the value is encrypted \nfor insertion, the client may know things like table name or column \nname, so it can tie it to those. But it doesn't know what row it will \ngo in, so you can't prevent the value from being copied into another \nrow. You would need some permanent logical row ID for this, I think. 
\nFor this scenario, the deterministic encryption mode is perhaps not the \nright one.\n\n>> This is not targeting PostgreSQL 15. But I'd appreciate some feedback\n>> on the direction.\n> \n> What kinds of attacks are you hoping to prevent (and not prevent)?\n\nThe point is to prevent admins from getting at plaintext data. The \nscenario you show is an interesting one but I think it's not meant to be \naddressed by this. If admins can alter the database to their advantage, \nthey could perhaps increase their account balance, create discount \ncodes, etc. also.\n\nIf this is a problem, then perhaps a better approach would be to store \nparts of the data in a separate database with separate admins.\n\n\n", "msg_date": "Tue, 7 Dec 2021 16:39:29 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Tue, 2021-12-07 at 16:39 +0100, Peter Eisentraut wrote:\r\n> On 06.12.21 21:44, Jacob Champion wrote:\r\n> > I think reusing a zero IV will potentially leak more information than\r\n> > just equality, depending on the cipher in use. You may be interested in\r\n> > synthetic IVs and nonce-misuse resistance (e.g. [1]), since they seem\r\n> > like they would match this use case exactly. (But I'm not a\r\n> > cryptographer.)\r\n> \r\n> I'm aware of this and plan to make use of SIV. The current \r\n> implementation is just an example.\r\n\r\nSounds good.\r\n\r\n> > Have you given any thought to AEAD? As a client I'd like to be able to\r\n> > tie an encrypted value to other column (or external) data. For example,\r\n> > AEAD could be used to prevent a DBA from copying the (encrypted) value\r\n> > of my credit card column into their account's row to use it.\r\n> \r\n> I don't know how that is supposed to work. When the value is encrypted \r\n> for insertion, the client may know things like table name or column \r\n> name, so it can tie it to those. 
But it doesn't know what row it will \r\n> go in, so you can't prevent the value from being copied into another \r\n> row. You would need some permanent logical row ID for this, I think.\r\n\r\nSorry, my description was confusing. There's nothing preventing the DBA\r\nfrom copying the value inside the database, but AEAD can make it so\r\nthat the copied value isn't useful to the DBA.\r\n\r\nSample case. Say I have a webapp backed by Postgres, which stores\r\nencrypted credit card numbers. Users authenticate to the webapp which\r\nthen uses the client (which has the keys) to talk to the database.\r\nAdditionally, I assume that:\r\n\r\n- the DBA can't access the client directly (because if they can, then\r\nthey can unencrypt the victim's info using the client's keys), and\r\n\r\n- the DBA can't authenticate as the user/victim (because if they can,\r\nthey can just log in themselves and have the data). The webapp might\r\nfor example use federated authn with a separate provider, using an\r\nemail address as an identifier.\r\n\r\nNow, if the client encrypts a user's credit card number using their\r\nemail address as associated data, then it doesn't matter if the DBA\r\ncopies that user's encrypted card over to their own account. The DBA\r\ncan't log in as the victim, so the client will fail to authenticate the\r\nvalue because its associated data won't match.\r\n\r\n> > > This is not targeting PostgreSQL 15. But I'd appreciate some feedback\r\n> > > on the direction.\r\n> > \r\n> > What kinds of attacks are you hoping to prevent (and not prevent)?\r\n> \r\n> The point is to prevent admins from getting at plaintext data. The \r\n> scenario you show is an interesting one but I think it's not meant to be \r\n> addressed by this. If admins can alter the database to their advantage, \r\n> they could perhaps increase their account balance, create discount \r\n> codes, etc. 
also.\r\n\r\nSure, but increasing account balances and discount codes don't lead to\r\ngetting at plaintext data, right? Whereas stealing someone else's\r\nencrypted value seems like it would be covered under your threat model,\r\nsince it lets you trick a real-world client into decrypting it for you.\r\n\r\nOther avenues of attack might depend on how you choose to add HMAC to\r\nthe current choice of AES-CBC. My understanding of AE ciphers (with or\r\nwithout associated data) is that you don't have to design that\r\nyourself, which is nice.\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 7 Dec 2021 18:02:25 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "\n\nOn 12/7/21 19:02, Jacob Champion wrote:\n> On Tue, 2021-12-07 at 16:39 +0100, Peter Eisentraut wrote:\n>> On 06.12.21 21:44, Jacob Champion wrote:\n>>> I think reusing a zero IV will potentially leak more information than\n>>> just equality, depending on the cipher in use. You may be interested in\n>>> synthetic IVs and nonce-misuse resistance (e.g. [1]), since they seem\n>>> like they would match this use case exactly. (But I'm not a\n>>> cryptographer.)\n>>\n>> I'm aware of this and plan to make use of SIV. The current\n>> implementation is just an example.\n> \n> Sounds good.\n> \n>>> Have you given any thought to AEAD? As a client I'd like to be able to\n>>> tie an encrypted value to other column (or external) data. For example,\n>>> AEAD could be used to prevent a DBA from copying the (encrypted) value\n>>> of my credit card column into their account's row to use it.\n>>\n>> I don't know how that is supposed to work. When the value is encrypted\n>> for insertion, the client may know things like table name or column\n>> name, so it can tie it to those. But it doesn't know what row it will\n>> go in, so you can't prevent the value from being copied into another\n>> row. 
You would need some permanent logical row ID for this, I think.\n> \n> Sorry, my description was confusing. There's nothing preventing the DBA\n> from copying the value inside the database, but AEAD can make it so\n> that the copied value isn't useful to the DBA.\n> \n> Sample case. Say I have a webapp backed by Postgres, which stores\n> encrypted credit card numbers. Users authenticate to the webapp which\n> then uses the client (which has the keys) to talk to the database.\n> Additionally, I assume that:\n> \n> - the DBA can't access the client directly (because if they can, then\n> they can unencrypt the victim's info using the client's keys), and\n> \n> - the DBA can't authenticate as the user/victim (because if they can,\n> they can just log in themselves and have the data). The webapp might\n> for example use federated authn with a separate provider, using an\n> email address as an identifier.\n> \n> Now, if the client encrypts a user's credit card number using their\n> email address as associated data, then it doesn't matter if the DBA\n> copies that user's encrypted card over to their own account. The DBA\n> can't log in as the victim, so the client will fail to authenticate the\n> value because its associated data won't match.\n> \n>>>> This is not targeting PostgreSQL 15. But I'd appreciate some feedback\n>>>> on the direction.\n>>>\n>>> What kinds of attacks are you hoping to prevent (and not prevent)?\n>>\n>> The point is to prevent admins from getting at plaintext data. The\n>> scenario you show is an interesting one but I think it's not meant to be\n>> addressed by this. If admins can alter the database to their advantage,\n>> they could perhaps increase their account balance, create discount\n>> codes, etc. also.\n> \n> Sure, but increasing account balances and discount codes don't lead to\n> getting at plaintext data, right? 
Whereas stealing someone else's\n> encrypted value seems like it would be covered under your threat model,\n> since it lets you trick a real-world client into decrypting it for you.\n> \n> Other avenues of attack might depend on how you choose to add HMAC to\n> the current choice of AES-CBC. My understanding of AE ciphers (with or\n> without associated data) is that you don't have to design that\n> yourself, which is nice.\n> \n\nIMO it's impossible to solve this attack within TCE, because it requires \nensuring consistency at the row level, but TCE obviously works at column \nlevel only.\n\nI believe TCE can do AEAD at the column level, which protects against \nattacks that flip bits, and similar attacks. It's just a matter of \nhow the client encrypts the data.\n\nExtending it to protect the whole row seems tricky, because the client \nmay not even know the other columns, and it's not clear to me how it'd \ndeal with things like updates of the other columns, hint bits, dropped \ncolumns, etc.\n\nIt's probably possible to get something like this (row-level AEAD) by \nencrypting enriched data, i.e. not just the card number, but {user ID, \ncard number} or something like that, and verifying that in the webapp. 
The \nproblem of course is that the \"user ID\" is just another column in the \ntable, and there's nothing preventing the DBA from modifying that too.\n\nSo I think it's pointless to try extending this to row-level AEAD.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 7 Dec 2021 22:21:32 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Tue, 2021-12-07 at 22:21 +0100, Tomas Vondra wrote:\r\n> IMO it's impossible to solve this attack within TCE, because it requires \r\n> ensuring consistency at the row level, but TCE obviously works at column \r\n> level only.\r\n\r\nI was under the impression that clients already had to be modified to\r\nfigure out how to encrypt the data? If part of that process ends up\r\nincluding enforcement of encryption for a specific column set, then the\r\naddition of AEAD data could hypothetically be part of that hand-\r\nwaviness.\r\n\r\nUnless \"transparent\" means that the client completely defers to the\r\nserver on whether to encrypt or not, and silently goes along with it if\r\nthe server tells it not to encrypt? That would only protect against a\r\n_completely_ passive DBA, like someone reading unencrypted backups,\r\netc. And that still has a lot of value, certainly. But it seems like\r\nthis prototype is very close to a system where the client can reliably\r\nsecure data even if the server isn't trustworthy, if that's a use case\r\nyou're interested in.\r\n\r\n> I believe TCE can do AEAD at the column level, which protects against \r\n> attacks that flipping bits, and similar attacks. It's just a matter of \r\n> how the client encrypts the data.\r\n\r\nRight, I think authenticated encryption ciphers (without AD) will be\r\nimportant to support in practice. 
I think users are going to want\r\n*some* protection against active attacks.\r\n\r\n> Extending it to protect the whole row seems tricky, because the client \r\n> may not even know the other columns, and it's not clear to me how it'd \r\n> deal with things like updates of the other columns, hint bits, dropped \r\n> columns, etc.\r\n\r\nCovering the entire row automatically probably isn't super helpful in\r\npractice. As you mention later:\r\n\r\n> It's probably possible to get something like this (row-level AEAD) by \r\n> encrypting enriched data, i.e. not just the card number, but {user ID, \r\n> card number} or something like that, and verify that in the webapp. The \r\n> problem of course is that the \"user ID\" is just another column in the \r\n> table, and there's nothing preventing the DBA from modifying that too.\r\n\r\nRight. That's why the client has to be able to choose AD according to\r\nthe application. In my previous example, the victim's email address can\r\nbe copied by the DBA, but they wouldn't be able to authenticate as that\r\nuser and couldn't convince the client to use the plaintext on their\r\nbehalf.\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 7 Dec 2021 23:26:14 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "\n\nOn 12/8/21 00:26, Jacob Champion wrote:\n> On Tue, 2021-12-07 at 22:21 +0100, Tomas Vondra wrote:\n>> IMO it's impossible to solve this attack within TCE, because it requires \n>> ensuring consistency at the row level, but TCE obviously works at column \n>> level only.\n> \n> I was under the impression that clients already had to be modified to\n> figure out how to encrypt the data? 
If part of that process ends up\n> including enforcement of encryption for a specific column set, then the\n> addition of AEAD data could hypothetically be part of that hand-\n> waviness.\n> \n\nI think \"transparency\" here means the client just uses the regular\nprepared-statement API without having to explicitly encrypt/decrypt any\ndata. The problem is we can't easily tie this to other columns in the\ntable, because the client may not even know what values are in those\ncolumns.\n\nImagine you do this\n\n UPDATE t SET encrypted_column = $1 WHERE another_column = $2;\n\nbut you want to ensure the encrypted value belongs to a particular row\n(which may or may not be identified by the another_column value). How\nwould the client do that? Should it fetch the value or what?\n\nSimilarly, what if the client just does\n\n SELECT encrypted_column FROM t;\n\nHow would it verify the values belong to the row, without having all the\ndata for the row (or just the required columns)?\n\n> Unless \"transparent\" means that the client completely defers to the\n> server on whether to encrypt or not, and silently goes along with it if\n> the server tells it not to encrypt?\nI think that's probably a valid concern - a \"bad DBA\" could alter the\ntable definition to not contain the \"ENCRYPTED\" bits, and then peek at\nthe plaintext values.\n\nBut it's not clear to me how exactly the AEAD would prevent this.\nWouldn't that also be specified on the server, somehow? In which case\nthe DBA could just tweak that too, no?\n\nIn other words, this issue seems mostly orthogonal to the AEAD, and the\nright solution would be to allow the client to define which columns have\nto be encrypted (in which case altering the server definition would not\nbe enough).\n\n> That would only protect against a\n> _completely_ passive DBA, like someone reading unencrypted backups,\n> etc. And that still has a lot of value, certainly. 
But it seems like\n> this prototype is very close to a system where the client can reliably\n> secure data even if the server isn't trustworthy, if that's a use case\n> you're interested in.\n> \n\nRight. IMHO the \"passive attacker\" is a perfectly fine model for use\ncases that would be fine with e.g. pgcrypto if there was no risk of\nleaking plaintext values to logs, system catalogs, etc.\n\nIf we can improve it to provide (at least some) protection against\nactive attackers, that'd be a nice bonus.\n\n>> I believe TCE can do AEAD at the column level, which protects against \n>> attacks that flipping bits, and similar attacks. It's just a matter of \n>> how the client encrypts the data.\n> \n> Right, I think authenticated encryption ciphers (without AD) will be\n> important to support in practice. I think users are going to want\n> *some* protection against active attacks.\n> \n>> Extending it to protect the whole row seems tricky, because the client \n>> may not even know the other columns, and it's not clear to me how it'd \n>> deal with things like updates of the other columns, hint bits, dropped \n>> columns, etc.\n> \n> Covering the entire row automatically probably isn't super helpful in\n> practice. As you mention later:\n> \n>> It's probably possible to get something like this (row-level AEAD) by \n>> encrypting enriched data, i.e. not just the card number, but {user ID, \n>> card number} or something like that, and verify that in the webapp. The \n>> problem of course is that the \"user ID\" is just another column in the \n>> table, and there's nothing preventing the DBA from modifying that too.\n> \n> Right. That's why the client has to be able to choose AD according to\n> the application. In my previous example, the victim's email address can\n> be copied by the DBA, but they wouldn't be able to authenticate as that\n> user and couldn't convince the client to use the plaintext on their\n> behalf.\n> \n\nWell, yeah. 
But I'm not sure how to make that work easily, because the\nclient may not have the data :-(\n\nI was thinking about using a composite data type combining the data with\nthe extra bits - that'd not be all that transparent as it'd require the\nclient to build this manually and then also cross-check it after loading\nthe data. So the user would be responsible for having all the data.\n\nBut doing that automatically/transparently seems hard, because how would\nyou deal e.g. with SELECT queries reading data through a view or CTE?\n\nHow would you declare this, either at the client or server?\n\nDo any other databases have this capability? How do they do it?\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Dec 2021 02:58:06 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Wed, 2021-12-08 at 02:58 +0100, Tomas Vondra wrote:\r\n> \r\n> On 12/8/21 00:26, Jacob Champion wrote:\r\n> > On Tue, 2021-12-07 at 22:21 +0100, Tomas Vondra wrote:\r\n> > > IMO it's impossible to solve this attack within TCE, because it requires \r\n> > > ensuring consistency at the row level, but TCE obviously works at column \r\n> > > level only.\r\n> > \r\n> > I was under the impression that clients already had to be modified to\r\n> > figure out how to encrypt the data? If part of that process ends up\r\n> > including enforcement of encryption for a specific column set, then the\r\n> > addition of AEAD data could hypothetically be part of that hand-\r\n> > waviness.\r\n> \r\n> I think \"transparency\" here means the client just uses the regular\r\n> prepared-statement API without having to explicitly encrypt/decrypt any\r\n> data. 
The problem is we can't easily tie this to other columns in the\r\n> table, because the client may not even know what values are in those\r\n> columns.\r\n\r\nThe way I originally described my request -- \"I'd like to be able to\r\ntie an encrypted value to other column (or external) data\" -- was not\r\nvery clear.\r\n\r\nWith my proposed model -- where the DBA (and the server) are completely\r\nuntrusted, and the DBA needs to be prevented from using the encrypted\r\nvalue -- I don't think there's a useful way for the client to use\r\nassociated data that comes from the server. The client has to know what\r\nthe AD should be beforehand, because otherwise the DBA can make it so\r\nthe server returns whatever is correct.\r\n\r\n> Imagine you do this\r\n> \r\n> UPDATE t SET encrypted_column = $1 WHERE another_column = $2;\r\n> \r\n> but you want to ensure the encrypted value belongs to a particular row\r\n> (which may or may not be identified by the another_column value). How\r\n> would the client do that? Should it fetch the value or what?\r\n> \r\n> Similarly, what if the client just does\r\n> \r\n> SELECT encrypted_column FROM t;\r\n> \r\n> How would it verify the values belong to the row, without having all the\r\n> data for the row (or just the required columns)?\r\n\r\nSo with my (hopefully more clear) model above, it wouldn't. The client\r\nwould already have the AD, and somehow tell libpq what that data was\r\nfor the query.\r\n\r\nThe rabbit hole I led you down is one where we use the rest of the row\r\nas AD, to try to freeze pieces of it in place. That might(?) have some\r\nuseful security properties (if the client defines its use and doesn't\r\ndefer to the server). 
But it's not what I intended to propose and I'd\r\nhave to think about that case some more.\r\n\r\nIn my credit card example, I'm imagining something like (forgive the\r\ncontrived syntax):\r\n\r\n SELECT address, :{aead(users.credit_card, 'user@example.com')}\r\n FROM users WHERE email = 'user@example.com';\r\n\r\n UPDATE users\r\n SET :{aead(users.credit_card, 'user@example.com')} = '1234-...'\r\n WHERE email = 'user@example.com';\r\n\r\nThe client explicitly links a table's column to its AD for the duration\r\nof the query. This approach can't scale to\r\n\r\n SELECT credit_card FROM users;\r\n\r\nbecause in this case the AD for each row is different, but I'd argue\r\nthat's ideal for this particular case. The client doesn't need to (and\r\nprobably shouldn't) grab everyone's credit card details all at once, so\r\nthere's no reason to optimize for it.\r\n\r\n> > Unless \"transparent\" means that the client completely defers to the\r\n> > server on whether to encrypt or not, and silently goes along with it if\r\n> > the server tells it not to encrypt?\r\n> I think that's probably a valid concern - a \"bad DBA\" could alter the\r\n> table definition to not contain the \"ENCRYPTED\" bits, and then peek at\r\n> the plaintext values.\r\n> \r\n> But it's not clear to me how exactly would the AEAD prevent this?\r\n> Wouldn't that be also specified on the server, somehow? In which case\r\n> the DBA could just tweak that too, no?\r\n>\r\n> In other words, this issue seems mostly orthogonal to the AEAD, and the\r\n> right solution would be to allow the client to define which columns have\r\n> to be encrypted (in which case altering the server definition would not\r\n> be enough).\r\n\r\nRight, exactly. 
When I mentioned AEAD I had assumed that \"allow the\r\nclient to define which columns have to be encrypted\" was already\r\nplanned or in the works; I just misunderstood pieces of Peter's email.\r\nIt's that piece where a client would probably have to add details\r\naround AEAD and its use.\r\n\r\n> > That would only protect against a\r\n> > _completely_ passive DBA, like someone reading unencrypted backups,\r\n> > etc. And that still has a lot of value, certainly. But it seems like\r\n> > this prototype is very close to a system where the client can reliably\r\n> > secure data even if the server isn't trustworthy, if that's a use case\r\n> > you're interested in.\r\n> \r\n> Right. IMHO the \"passive attacker\" is a perfectly fine model for use\r\n> cases that would be fine with e.g. pgcrypto if there was no risk of\r\n> leaking plaintext values to logs, system catalogs, etc.\r\n> \r\n> If we can improve it to provide (at least some) protection against\r\n> active attackers, that'd be a nice bonus.\r\n\r\nI agree that resistance against offline attacks is a useful step\r\nforward (it seems to be a strict improvement over pgcrypto). I have a\r\nfeeling that end users will *expect* some protection against online\r\nattacks too, since an evil DBA is going to be well-positioned to do\r\nexactly that.\r\n\r\n> > > It's probably possible to get something like this (row-level AEAD) by \r\n> > > encrypting enriched data, i.e. not just the card number, but {user ID, \r\n> > > card number} or something like that, and verify that in the webapp. The \r\n> > > problem of course is that the \"user ID\" is just another column in the \r\n> > > table, and there's nothing preventing the DBA from modifying that too.\r\n> > \r\n> > Right. That's why the client has to be able to choose AD according to\r\n> > the application. 
In my previous example, the victim's email address can\r\n> > be copied by the DBA, but they wouldn't be able to authenticate as that\r\n> > user and couldn't convince the client to use the plaintext on their\r\n> > behalf.\r\n> \r\n> Well, yeah. But I'm not sure how to make that work easily, because the\r\n> client may not have the data :-(\r\n> \r\n> I was thinking about using a composite data type combining the data with\r\n> the extra bits - that'd not be all that transparent as it'd require the\r\n> client to build this manually and then also cross-check it after loading\r\n> the data. So the user would be responsible for having all the data.\r\n> \r\n> But doing that automatically/transparently seems hard, because how would\r\n> you deal e.g. with SELECT queries reading data through a view or CTE?\r\n> \r\n> How would you declare this, either at the client or server?\r\n\r\nI'll do some more thinking on the case you're talking about here, where\r\npieces of the row are transparently tied together.\r\n\r\n> Do any other databases have this capability? How do they do it?\r\n\r\nBigQuery advertises AEAD support. 
I don't think their model is the same\r\nas ours, though; from the docs it looks like it's essentially pgcrypto,\r\nwhere you tell the server to encrypt stuff for you.\r\n\r\n--Jacob\r\n", "msg_date": "Thu, 9 Dec 2021 00:12:28 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "\n\nOn 12/9/21 01:12, Jacob Champion wrote:\n> On Wed, 2021-12-08 at 02:58 +0100, Tomas Vondra wrote:\n>>\n>> On 12/8/21 00:26, Jacob Champion wrote:\n>>> On Tue, 2021-12-07 at 22:21 +0100, Tomas Vondra wrote:\n>>>> IMO it's impossible to solve this attack within TCE, because it requires \n>>>> ensuring consistency at the row level, but TCE obviously works at column \n>>>> level only.\n>>>\n>>> I was under the impression that clients already had to be modified to\n>>> figure out how to encrypt the data? If part of that process ends up\n>>> including enforcement of encryption for a specific column set, then the\n>>> addition of AEAD data could hypothetically be part of that hand-\n>>> waviness.\n>>\n>> I think \"transparency\" here means the client just uses the regular\n>> prepared-statement API without having to explicitly encrypt/decrypt any\n>> data. The problem is we can't easily tie this to other columns in the\n>> table, because the client may not even know what values are in those\n>> columns.\n> \n> The way I originally described my request -- \"I'd like to be able to\n> tie an encrypted value to other column (or external) data\" -- was not\n> very clear.\n> \n> With my proposed model -- where the DBA (and the server) are completely\n> untrusted, and the DBA needs to be prevented from using the encrypted\n> value -- I don't think there's a useful way for the client to use\n> associated data that comes from the server. The client has to know what\n> the AD should be beforehand, because otherwise the DBA can make it so\n> the server returns whatever is correct.\n> \n\nTrue. 
With untrusted server the additional data would have to come from\nsome other source. Say, an isolated auth system or so.\n\n>> Imagine you do this\n>>\n>> UPDATE t SET encrypted_column = $1 WHERE another_column = $2;\n>>\n>> but you want to ensure the encrypted value belongs to a particular row\n>> (which may or may not be identified by the another_column value). How\n>> would the client do that? Should it fetch the value or what?\n>>\n>> Similarly, what if the client just does\n>>\n>> SELECT encrypted_column FROM t;\n>>\n>> How would it verify the values belong to the row, without having all the\n>> data for the row (or just the required columns)?\n> \n> So with my (hopefully more clear) model above, it wouldn't. The client\n> would already have the AD, and somehow tell libpq what that data was\n> for the query.\n> \n> The rabbit hole I led you down is one where we use the rest of the row\n> as AD, to try to freeze pieces of it in place. That might(?) have some\n> useful security properties (if the client defines its use and doesn't\n> defer to the server). But it's not what I intended to propose and I'd\n> have to think about that case some more.\n> \n\nOK\n\n> In my credit card example, I'm imagining something like (forgive the\n> contrived syntax):\n> \n> SELECT address, :{aead(users.credit_card, 'user@example.com')}\n> FROM users WHERE email = 'user@example.com';\n> \n> UPDATE users\n> SET :{aead(users.credit_card, 'user@example.com')} = '1234-...'\n> WHERE email = 'user@example.com';\n> \n> The client explicitly links a table's column to its AD for the duration\n> of the query. This approach can't scale to\n> \n> SELECT credit_card FROM users;\n> \n> because in this case the AD for each row is different, but I'd argue\n> that's ideal for this particular case. 
The client doesn't need to (and\n> probably shouldn't) grab everyone's credit card details all at once, so\n> there's no reason to optimize for it.\n> \n\nMaybe, but it seems like a rather annoying limitation, as it restricts\nthe client to single-row queries (or at least it looks like that to me).\nYes, it may be fine for some use cases, but I'd bet a DBA who can modify\ndata can do plenty of other things - swapping \"old\" values, which will have\nthe right AD, for example.\n\n>>> Unless \"transparent\" means that the client completely defers to the\n>>> server on whether to encrypt or not, and silently goes along with it if\n>>> the server tells it not to encrypt?\n>> I think that's probably a valid concern - a \"bad DBA\" could alter the\n>> table definition to not contain the \"ENCRYPTED\" bits, and then peek at\n>> the plaintext values.\n>>\n>> But it's not clear to me how exactly would the AEAD prevent this?\n>> Wouldn't that be also specified on the server, somehow? In which case\n>> the DBA could just tweak that too, no?\n>>\n>> In other words, this issue seems mostly orthogonal to the AEAD, and the\n>> right solution would be to allow the client to define which columns have\n>> to be encrypted (in which case altering the server definition would not\n>> be enough).\n> \n> Right, exactly. When I mentioned AEAD I had assumed that \"allow the\n> client to define which columns have to be encrypted\" was already\n> planned or in the works; I just misunderstood pieces of Peter's email.\n> It's that piece where a client would probably have to add details\n> around AEAD and its use.\n> \n>>> That would only protect against a\n>>> _completely_ passive DBA, like someone reading unencrypted backups,\n>>> etc. And that still has a lot of value, certainly. But it seems like\n>>> this prototype is very close to a system where the client can reliably\n>>> secure data even if the server isn't trustworthy, if that's a use case\n>>> you're interested in.\n>>\n>> Right. 
IMHO the \"passive attacker\" is a perfectly fine model for use\n>> cases that would be fine with e.g. pgcrypto if there was no risk of\n>> leaking plaintext values to logs, system catalogs, etc.\n>>\n>> If we can improve it to provide (at least some) protection against\n>> active attackers, that'd be a nice bonus.\n> \n> I agree that resistance against offline attacks is a useful step\n> forward (it seems to be a strict improvement over pgcrypto). I have a\n> feeling that end users will *expect* some protection against online\n> attacks too, since an evil DBA is going to be well-positioned to do\n> exactly that.\n> \n\nYeah.\n\n>>>> It's probably possible to get something like this (row-level AEAD) by \n>>>> encrypting enriched data, i.e. not just the card number, but {user ID, \n>>>> card number} or something like that, and verify that in the webapp. The \n>>>> problem of course is that the \"user ID\" is just another column in the \n>>>> table, and there's nothing preventing the DBA from modifying that too.\n>>>\n>>> Right. That's why the client has to be able to choose AD according to\n>>> the application. In my previous example, the victim's email address can\n>>> be copied by the DBA, but they wouldn't be able to authenticate as that\n>>> user and couldn't convince the client to use the plaintext on their\n>>> behalf.\n>>\n>> Well, yeah. But I'm not sure how to make that work easily, because the\n>> client may not have the data :-(\n>>\n>> I was thinking about using a composite data type combining the data with\n>> the extra bits - that'd not be all that transparent as it'd require the\n>> client to build this manually and then also cross-check it after loading\n>> the data. So the user would be responsible for having all the data.\n>>\n>> But doing that automatically/transparently seems hard, because how would\n>> you deal e.g. 
with SELECT queries reading data through a view or CTE?\n>>\n>> How would you declare this, either at the client or server?\n> \n> I'll do some more thinking on the case you're talking about here, where\n> pieces of the row are transparently tied together.\n> \n\nOK. In any case, I think we shouldn't require this capability from the\nget go - it's fine to get the simple version done first, which gives us\nprivacy / protects against passive attacker. And then sometime in the\nfuture improve this further.\n\n>> Do any other databases have this capability? How do they do it?\n> \n> BigQuery advertises AEAD support. I don't think their model is the same\n> as ours, though; from the docs it looks like it's essentially pgcrypto,\n> where you tell the server to encrypt stuff for you.\n> \n\nPretty sure it's server-side. The docs say it's for encryption at rest,\nall the examples do the encryption/decryption in SQL, etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 9 Dec 2021 11:04:26 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "> In the server, the encrypted datums are stored in types called\n> encryptedr and encryptedd (for randomized and deterministic\n> encryption). These are essentially cousins of bytea.\n\nDoes that mean someone could go in with psql and select out the data\nwithout any keys and just get a raw bytea-like representation? That\nseems like a natural and useful thing to be able to do. 
For example to\nallow dumping a table and loading it elsewhere and transferring keys\nthrough some other channel (perhaps only as needed).\n\n\n", "msg_date": "Wed, 15 Dec 2021 23:47:39 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 16.12.21 05:47, Greg Stark wrote:\n>> In the server, the encrypted datums are stored in types called\n>> encryptedr and encryptedd (for randomized and deterministic\n>> encryption). These are essentially cousins of bytea.\n> \n> Does that mean someone could go in with psql and select out the data\n> without any keys and just get a raw bytea-like representation? That\n> seems like a natural and useful thing to be able to do. For example to\n> allow dumping a table and loading it elsewhere and transferring keys\n> through some other channel (perhaps only as needed).\n\nYes to all of that.\n\n\n", "msg_date": "Thu, 16 Dec 2021 12:23:11 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Thu, 2021-12-09 at 11:04 +0100, Tomas Vondra wrote:\r\n> On 12/9/21 01:12, Jacob Champion wrote:\r\n> > \r\n> > The rabbit hole I led you down is one where we use the rest of the row\r\n> > as AD, to try to freeze pieces of it in place. That might(?) have some\r\n> > useful security properties (if the client defines its use and doesn't\r\n> > defer to the server). 
But it's not what I intended to propose and I'd\r\n> > have to think about that case some more.\r\n\r\nSo after thinking about it some more, in the case where the client is\r\nrelying on the server to return both the encrypted data and its\r\nassociated data -- and you don't trust the server -- then tying even\r\nthe entire row together doesn't help you.\r\n\r\nI was briefly led astray by the idea that you could include a unique or\r\nprimary key column in the associated data, and then SELECT based on\r\nthat column -- but a motivated DBA could simply corrupt state so that\r\nthe row they wanted got returned regardless of the query. So the client\r\nstill has to have prior knowledge.\r\n\r\n> > In my credit card example, I'm imagining something like (forgive the\r\n> > contrived syntax):\r\n> > \r\n> > SELECT address, :{aead(users.credit_card, 'user@example.com')}\r\n> > FROM users WHERE email = 'user@example.com';\r\n> > \r\n> > UPDATE users\r\n> > SET :{aead(users.credit_card, 'user@example.com')} = '1234-...'\r\n> > WHERE email = 'user@example.com';\r\n> > \r\n> > The client explicitly links a table's column to its AD for the duration\r\n> > of the query. This approach can't scale to\r\n> > \r\n> > SELECT credit_card FROM users;\r\n> > \r\n> > because in this case the AD for each row is different, but I'd argue\r\n> > that's ideal for this particular case. The client doesn't need to (and\r\n> > probably shouldn't) grab everyone's credit card details all at once, so\r\n> > there's no reason to optimize for it.\r\n> \r\n> Maybe, but it seems like a rather annoying limitation, as it restricts\r\n> the client to single-row queries (or at least it looks like that to me).\r\n> Yes, it may be fine for some use cases, but I'd bet a DBA who can modify\r\n> data can do plenty other things - swapping \"old\" values, which will have\r\n> the right AD, for example.\r\n\r\nResurrecting old data doesn't help the DBA read the values, right? 
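To make the "client must know the AD beforehand" point concrete, here is a rough Python sketch of encrypt-then-MAC AEAD where the tag covers client-chosen associated data. (Toy construction on stdlib HMAC only, purely illustrative -- not the cipher suite the patch would actually use; all names are made up.)

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy CTR-style keystream from HMAC-SHA256; a stand-in for a real cipher.
    out, block = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + block.to_bytes(4, "big"), hashlib.sha256).digest()
        block += 1
    return out[:length]

def aead_encrypt(key: bytes, plaintext: bytes, associated_data: bytes) -> bytes:
    enc_key = hmac.new(key, b"enc", hashlib.sha256).digest()
    mac_key = hmac.new(key, b"mac", hashlib.sha256).digest()
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    # The tag binds the ciphertext to the associated data (e.g. the row owner).
    tag = hmac.new(mac_key, nonce + ct + associated_data, hashlib.sha256).digest()
    return nonce + ct + tag

def aead_decrypt(key: bytes, blob: bytes, associated_data: bytes) -> bytes:
    enc_key = hmac.new(key, b"enc", hashlib.sha256).digest()
    mac_key = hmac.new(key, b"mac", hashlib.sha256).digest()
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct + associated_data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: wrong key or associated data")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

A DBA can copy the stored blob into another user's row, but decryption under that row's associated data fails, so the client never acts on the misplaced plaintext.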
I\r\nview that as similar to the \"increasing account balance\" problem, in\r\nthat it's definitely a problem but not one we're trying to tackle here.\r\n\r\n(And I'm not familiar with any solutions for resurrections -- other\r\nthan having data expire and tying the timestamp into the\r\nauthentication, which I think again requires AD. Revoking signed data\r\nis one of those hard problems. Do you know a better way?)\r\n\r\n> OK. In any case, I think we shouldn't require this capability from the\r\n> get go - it's fine to get the simple version done first, which gives us\r\n> privacy / protects against passive attacker. And then sometime in the\r\n> future improve this further.\r\n\r\nAgreed. (And I think the client should be able to enforce encryption in\r\nthe first place, before I distract you too much with other stuff.)\r\n\r\n--Jacob\r\n", "msg_date": "Fri, 17 Dec 2021 00:41:00 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 17.12.21 01:41, Jacob Champion wrote:\n> (And I think the client should be able to enforce encryption in\n> the first place, before I distract you too much with other stuff.)\n\nYes, this is a useful point that I have added to my notes.\n\n\n", "msg_date": "Fri, 17 Dec 2021 10:17:09 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Here is a new version of this patch. See also the original description \nquoted below. I have done a significant amount of work on this over the \nlast few months. Some important news include:\n\n- The cryptography has been improved. It now uses an AEAD scheme, and \nfor deterministic encryption a proper SIV construction.\n\n- The OpenSSL-specific parts have been moved to a separate file in \nlibpq. 
Non-OpenSSL builds compile and work (without functionality, of \ncourse).\n\n- libpq handles multiple CEKs and CMKs, including changing keys on the fly.\n\n- libpq supports a mode to force encryption of certain values.\n\n- libpq supports a flexible configuration system for looking up CMKs, \nincluding support for external key management systems.\n\n- psql has a new \\gencr command that allows passing in bind parameters \nfor (potential) encryption.\n\n- There is some more pg_dump and psql support.\n\n- The new data types for storing encrypted data have been renamed for \nclarity.\n\n- Various changes to the protocol compared to the previous patch.\n\n- The patch contains full documentation of the protocol changes, \nglossary entries, and more new documentation.\n\nThe major pieces that are still missing are:\n\n- DDL support for registering keys\n\n- Protocol versioning or feature flags\n\nOther than that it's pretty complete in my mind.\n\nFor interested reviewers, I have organized the patch so that you can \nstart reading it top to bottom: The documentation comes first, then the \ntests, then the code changes. Even some feedback on the first or first \ntwo aspects would be valuable to me.\n\nOld news follows:\n\nOn 03.12.21 22:32, Peter Eisentraut wrote:\n> I want to present my proof-of-concept patch for the transparent column\n> encryption feature.  (Some might also think of it as automatic\n> client-side encryption or similar, but I like my name.)  This feature\n> enables the {automatic,transparent} encryption and decryption of\n> particular columns in the client.  The data for those columns then\n> only ever appears in ciphertext on the server, so it is protected from\n> the \"prying eyes\" of DBAs, sysadmins, cloud operators, etc.  The\n> canonical use case for this feature is storing credit card numbers\n> encrypted, in accordance with PCI DSS, as well as similar situations\n> involving social security numbers etc.  
Of course, you can't do any\n> computations with encrypted values on the server, but for these use\n> cases, that is not necessary.  This feature does support deterministic\n> encryption as an alternative to the default randomized encryption, so\n> in that mode you can do equality lookups, at the cost of some\n> security.\n> \n> This functionality also exists in other SQL database products, so the\n> overall concepts weren't invented by me by any means.\n> \n> Also, this feature has nothing to do with the on-disk encryption\n> feature being contemplated in parallel.  Both can exist independently.\n> \n> The attached patch has all the necessary pieces in place to make this\n> work, so you can have an idea how the overall system works.  It\n> contains some documentation and tests to help illustrate the\n> functionality.  But it's missing the remaining 90% of the work,\n> including additional DDL support, error handling, robust memory\n> management, protocol versioning, forward and backward compatibility,\n> pg_dump support, psql \\d support, refinement of the cryptography, and\n> so on.  But I think obvious solutions exist to all of those things, so\n> it isn't that interesting to focus on them for now.\n> \n> ------\n> \n> Now to the explanation of how it works.\n> \n> You declare a column as encrypted in a CREATE TABLE statement.  The\n> column value is encrypted by a symmetric key called the column\n> encryption key (CEK).  The CEK is a catalog object.  The CEK key\n> material is in turn encrypted by an asymmetric key called the column\n> master key (CMK).  The CMK is not stored in the database but somewhere\n> where the client can get to it, for example in a file or in a key\n> management system.  
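As a small Python sketch of that key hierarchy (a toy symmetric wrap stands in for the RSA-OAEP wrapping under the CMK that the patch actually uses; all names here are made up for illustration):

```python
import hashlib
import hmac
import os

def wrap_key(wrapping_key: bytes, key_material: bytes) -> bytes:
    # Toy authenticated key wrap: XOR with an HMAC-derived pad, plus a tag.
    nonce = os.urandom(16)
    pad = hmac.new(wrapping_key, b"pad" + nonce, hashlib.sha256).digest()
    ct = bytes(a ^ b for a, b in zip(key_material, pad))
    tag = hmac.new(wrapping_key, b"tag" + nonce + ct, hashlib.sha256).digest()[:16]
    return nonce + ct + tag

def unwrap_key(wrapping_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-16], blob[-16:]
    expected = hmac.new(wrapping_key, b"tag" + nonce + ct, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("not wrapped under this CMK")
    pad = hmac.new(wrapping_key, b"pad" + nonce, hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(ct, pad))

# One CEK (symmetric, 32 bytes) wrapped under two CMKs, as during CMK rotation.
cmk_old, cmk_new = os.urandom(32), os.urandom(32)
cek = os.urandom(32)
catalog = {  # what the server would store and send to clients
    "cmk_old": wrap_key(cmk_old, cek),
    "cmk_new": wrap_key(cmk_new, cek),
}
# A client holding either CMK recovers the same CEK; the server never sees it.
assert unwrap_key(cmk_old, catalog["cmk_old"]) == cek
assert unwrap_key(cmk_new, catalog["cmk_new"]) == cek
```

This is why the CMK can be rotated cheaply (re-wrap one small key) while rotating the CEK would mean re-encrypting all the column data.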
When a server sends rows containing encrypted\n> column values to the client, it first sends the required CMK and CEK\n> information (new protocol messages), which the client needs to record.\n> Then, the client can use this information to automatically decrypt the\n> incoming row data and forward it in plaintext to the application.\n> \n> For the CMKs, the catalog object specifies a \"provider\" and generic\n> options.  Right now, libpq has a \"file\" provider hardcoded, and it\n> takes a \"filename\" option.  Via some mechanism to be determined,\n> additional providers could be loaded and then talk to key management\n> systems via http or whatever.  I have left some comments in the libpq\n> code where the hook points for this could be.\n> \n> The general idea would be for an application to have one CMK per area\n> of secret stuff, for example, for credit card data.  The CMK can be\n> rotated: each CEK can be represented multiple times in the database,\n> encrypted by a different CMK.  (The CEK can't be rotated easily, since\n> that would require reading out all the data from a table/column and\n> reencrypting it.  We could/should add some custom tooling for that,\n> but it wouldn't be a routine operation.)\n> \n> The encryption algorithms are mostly hardcoded right now, but there\n> are facilities for picking algorithms and adding new ones that will be\n> expanded.  The CMK process uses RSA-OAEP.  The CEK process uses\n> AES-128-CBC right now; a more complete solution should probably\n> involve some HMAC thrown in.\n> \n> In the server, the encrypted datums are stored in types called\n> encryptedr and encryptedd (for randomized and deterministic\n> encryption).  These are essentially cousins of bytea.  For the rest of\n> the database system below the protocol handling, there is nothing\n> special about those.  For example, encryptedr has no operators at all,\n> encryptedd has only an equality operator.  
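For illustration, a toy sketch of the SIV-style deterministic mode: the synthetic IV is a PRF of the plaintext, so equal plaintexts yield equal ciphertexts -- which is exactly what makes an equality operator on the encrypted type workable -- and the IV doubles as an authentication tag on decryption. (HMAC stands in for the AES-based primitives the patch actually uses.)

```python
import hashlib
import hmac

def _stream(key: bytes, iv: bytes, length: int) -> bytes:
    # Toy keystream from HMAC-SHA256; a stand-in for a real cipher.
    out, block = b"", 0
    while len(out) < length:
        out += hmac.new(key, iv + block.to_bytes(4, "big"), hashlib.sha256).digest()
        block += 1
    return out[:length]

def siv_encrypt(key: bytes, plaintext: bytes) -> bytes:
    mac_key = hmac.new(key, b"mac", hashlib.sha256).digest()
    enc_key = hmac.new(key, b"enc", hashlib.sha256).digest()
    # Synthetic IV: deterministic, derived from the plaintext itself.
    siv = hmac.new(mac_key, plaintext, hashlib.sha256).digest()[:16]
    ct = bytes(p ^ k for p, k in zip(plaintext, _stream(enc_key, siv, len(plaintext))))
    return siv + ct

def siv_decrypt(key: bytes, blob: bytes) -> bytes:
    mac_key = hmac.new(key, b"mac", hashlib.sha256).digest()
    enc_key = hmac.new(key, b"enc", hashlib.sha256).digest()
    siv, ct = blob[:16], blob[16:]
    pt = bytes(c ^ k for c, k in zip(ct, _stream(enc_key, siv, len(ct))))
    # Recomputing the SIV verifies integrity: any flipped bit is detected.
    if not hmac.compare_digest(siv, hmac.new(mac_key, pt, hashlib.sha256).digest()[:16]):
        raise ValueError("authentication failed")
    return pt
```

Because the same plaintext always produces the same bytes under a given key, the server can compare and index these values without learning the plaintext -- at the cost of revealing which rows are equal.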
pg_attribute has a new\n> column attrealtypid that stores the original type of the data in the\n> column.  This is only used for providing it to clients, so that\n> higher-level clients can convert the decrypted value to their\n> appropriate data types in their environments.\n> \n> Some protocol extensions are required.  These should be guarded by\n> some _pq_... setting, but this is not done in this patch yet.  As\n> mentioned above, extra messages are added for sending the CMKs and\n> CEKs.  In the RowDescription message, I have commandeered the format\n> field to add a bit that indicates that the field is encrypted.  This\n> could be made a separate field, and there should probably be\n> additional fields to indicate the algorithm and CEK name, but this was\n> easiest for now.  The ParameterDescription message is extended to\n> contain format fields for each parameter, for the same purpose.\n> Again, this could be done differently.\n> \n> Speaking of parameter descriptions, the trickiest part of this whole\n> thing appears to be how to get transparently encrypted data into the\n> database (as opposed to reading it out).  It is required to use\n> protocol-level prepared statements (i.e., extended query) for this.\n> The client must first prepare a statement, then describe the statement\n> to get parameter metadata, which indicates which parameters are to be\n> encrypted and how.  So this will require some care by applications\n> that want to do this, but, well, they probably should be careful\n> anyway.  In libpq, the existing APIs make this difficult, because\n> there is no way to pass the result of a describe-statement call back\n> into execute-statement-with-parameters.  I added new functions that do\n> this, so you then essentially do\n> \n>     res0 = PQdescribePrepared(conn, \"\");\n>     res = PQexecPrepared2(conn, \"\", 2, values, NULL, NULL, 0, res0);\n> \n> (The name could obviously be improved.)  
Other client APIs that have a\n> \"statement handle\" concept could do this more elegantly and probably\n> without any API changes.\n> \n> Another challenge is that the parse analysis must check which\n> underlying column a parameter corresponds to.  This is similar to\n> resorigtbl and resorigcol in the opposite direction.  The current\n> implementation of this works for the test cases, but I know it has\n> some problems, so I'll continue working in this.  This functionality\n> is in principle available to all prepared-statement variants, not only\n> protocol-level.  So you can see in the tests that I expanded the\n> pg_prepared_statements view to show this information as well, which\n> also provides an easy way to test and debug this functionality\n> independent of column encryption.\n> \n> And also, psql doesn't use prepared statements, so writing into\n> encrypted columns currently doesn't work at all via psql.  (Reading\n> works no problem.)  All the test code currently uses custom libpq C\n> programs.  We should think about a way to enable prepared statements\n> in psql, perhaps something like\n> \n> INSERT INTO t1 VALUES ($1, $2) \\gg 'val1' 'val2'\n> \n> (\\gexec and \\gx are already taken.)\n> \n> ------\n> \n> This is not targeting PostgreSQL 15.  But I'd appreciate some feedback\n> on the direction.  As I mentioned above, a lot of the remaining work\n> is arguably mostly straightforward.  Some closer examination of the\n> issues surrounding the libpq API changes and psql would be useful.\n> Perhaps there are other projects where that kind of functionality\n> would also be useful.", "msg_date": "Wed, 29 Jun 2022 00:29:57 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Rebased patch, no new functionality.\n\nOn 29.06.22 01:29, Peter Eisentraut wrote:\n> Here is a new version of this patch.  
See also the original description \n> quoted below.  I have done a significant amount of work on this over the \n> last few months.  Some important news include:\n> \n> - The cryptography has been improved.  It now uses an AEAD scheme, and \n> for deterministic encryption a proper SIV construction.\n> \n> - The OpenSSL-specific parts have been moved to a separate file in \n> libpq.  Non-OpenSSL builds compile and work (without functionality, of \n> course).\n> \n> - libpq handles multiple CEKs and CMKs, including changing keys on the fly.\n> \n> - libpq supports a mode to force encryption of certain values.\n> \n> - libpq supports a flexible configuration system for looking up CMKs, \n> including support for external key management systems.\n> \n> - psql has a new \\gencr command that allows passing in bind parameters \n> for (potential) encryption.\n> \n> - There is some more pg_dump and psql support.\n> \n> - The new data types for storing encrypted data have been renamed for \n> clarity.\n> \n> - Various changes to the protocol compared to the previous patch.\n> \n> - The patch contains full documentation of the protocol changes, \n> glossary entries, and more new documentation.\n> \n> The major pieces that are still missing are:\n> \n> - DDL support for registering keys\n> \n> - Protocol versioning or feature flags\n> \n> Other than that it's pretty complete in my mind.\n> \n> For interested reviewers, I have organized the patch so that you can \n> start reading it top to bottom: The documentation comes first, then the \n> tests, then the code changes.  Even some feedback on the first or first \n> two aspects would be valuable to me.\n> \n> Old news follows:\n> \n> On 03.12.21 22:32, Peter Eisentraut wrote:\n>> I want to present my proof-of-concept patch for the transparent column\n>> encryption feature.  (Some might also think of it as automatic\n>> client-side encryption or similar, but I like my name.)  
This feature\n>> enables the {automatic,transparent} encryption and decryption of\n>> particular columns in the client.  The data for those columns then\n>> only ever appears in ciphertext on the server, so it is protected from\n>> the \"prying eyes\" of DBAs, sysadmins, cloud operators, etc.  The\n>> canonical use case for this feature is storing credit card numbers\n>> encrypted, in accordance with PCI DSS, as well as similar situations\n>> involving social security numbers etc.  Of course, you can't do any\n>> computations with encrypted values on the server, but for these use\n>> cases, that is not necessary.  This feature does support deterministic\n>> encryption as an alternative to the default randomized encryption, so\n>> in that mode you can do equality lookups, at the cost of some\n>> security.\n>>\n>> This functionality also exists in other SQL database products, so the\n>> overall concepts weren't invented by me by any means.\n>>\n>> Also, this feature has nothing to do with the on-disk encryption\n>> feature being contemplated in parallel.  Both can exist independently.\n>>\n>> The attached patch has all the necessary pieces in place to make this\n>> work, so you can have an idea how the overall system works.  It\n>> contains some documentation and tests to help illustrate the\n>> functionality.  But it's missing the remaining 90% of the work,\n>> including additional DDL support, error handling, robust memory\n>> management, protocol versioning, forward and backward compatibility,\n>> pg_dump support, psql \\d support, refinement of the cryptography, and\n>> so on.  But I think obvious solutions exist to all of those things, so\n>> it isn't that interesting to focus on them for now.\n>>\n>> ------\n>>\n>> Now to the explanation of how it works.\n>>\n>> You declare a column as encrypted in a CREATE TABLE statement.  The\n>> column value is encrypted by a symmetric key called the column\n>> encryption key (CEK).  The CEK is a catalog object.  
The CEK key\n>> material is in turn encrypted by an assymmetric key called the column\n>> master key (CMK).  The CMK is not stored in the database but somewhere\n>> where the client can get to it, for example in a file or in a key\n>> management system.  When a server sends rows containing encrypted\n>> column values to the client, it first sends the required CMK and CEK\n>> information (new protocol messages), which the client needs to record.\n>> Then, the client can use this information to automatically decrypt the\n>> incoming row data and forward it in plaintext to the application.\n>>\n>> For the CMKs, the catalog object specifies a \"provider\" and generic\n>> options.  Right now, libpq has a \"file\" provider hardcoded, and it\n>> takes a \"filename\" option.  Via some mechanism to be determined,\n>> additional providers could be loaded and then talk to key management\n>> systems via http or whatever.  I have left some comments in the libpq\n>> code where the hook points for this could be.\n>>\n>> The general idea would be for an application to have one CMK per area\n>> of secret stuff, for example, for credit card data.  The CMK can be\n>> rotated: each CEK can be represented multiple times in the database,\n>> encrypted by a different CMK.  (The CEK can't be rotated easily, since\n>> that would require reading out all the data from a table/column and\n>> reencrypting it.  We could/should add some custom tooling for that,\n>> but it wouldn't be a routine operation.)\n>>\n>> The encryption algorithms are mostly hardcoded right now, but there\n>> are facilities for picking algorithms and adding new ones that will be\n>> expanded.  The CMK process uses RSA-OAEP.  The CEK process uses\n>> AES-128-CBC right now; a more complete solution should probably\n>> involve some HMAC thrown in.\n>>\n>> In the server, the encrypted datums are stored in types called\n>> encryptedr and encryptedd (for randomized and deterministic\n>> encryption).  
These are essentially cousins of bytea.  For the rest of\n>> the database system below the protocol handling, there is nothing\n>> special about those.  For example, encryptedr has no operators at all,\n>> encryptedd has only an equality operator.  pg_attribute has a new\n>> column attrealtypid that stores the original type of the data in the\n>> column.  This is only used for providing it to clients, so that\n>> higher-level clients can convert the decrypted value to their\n>> appropriate data types in their environments.\n>>\n>> Some protocol extensions are required.  These should be guarded by\n>> some _pq_... setting, but this is not done in this patch yet.  As\n>> mentioned above, extra messages are added for sending the CMKs and\n>> CEKs.  In the RowDescription message, I have commandeered the format\n>> field to add a bit that indicates that the field is encrypted.  This\n>> could be made a separate field, and there should probably be\n>> additional fields to indicate the algorithm and CEK name, but this was\n>> easiest for now.  The ParameterDescription message is extended to\n>> contain format fields for each parameter, for the same purpose.\n>> Again, this could be done differently.\n>>\n>> Speaking of parameter descriptions, the trickiest part of this whole\n>> thing appears to be how to get transparently encrypted data into the\n>> database (as opposed to reading it out).  It is required to use\n>> protocol-level prepared statements (i.e., extended query) for this.\n>> The client must first prepare a statement, then describe the statement\n>> to get parameter metadata, which indicates which parameters are to be\n>> encrypted and how.  So this will require some care by applications\n>> that want to do this, but, well, they probably should be careful\n>> anyway.  In libpq, the existing APIs make this difficult, because\n>> there is no way to pass the result of a describe-statement call back\n>> into execute-statement-with-parameters.  
I added new functions that do\n>> this, so you then essentially do\n>>\n>>      res0 = PQdescribePrepared(conn, \"\");\n>>      res = PQexecPrepared2(conn, \"\", 2, values, NULL, NULL, 0, res0);\n>>\n>> (The name could obviously be improved.)  Other client APIs that have a\n>> \"statement handle\" concept could do this more elegantly and probably\n>> without any API changes.\n>>\n>> Another challenge is that the parse analysis must check which\n>> underlying column a parameter corresponds to.  This is similar to\n>> resorigtbl and resorigcol in the opposite direction.  The current\n>> implementation of this works for the test cases, but I know it has\n>> some problems, so I'll continue working in this.  This functionality\n>> is in principle available to all prepared-statement variants, not only\n>> protocol-level.  So you can see in the tests that I expanded the\n>> pg_prepared_statements view to show this information as well, which\n>> also provides an easy way to test and debug this functionality\n>> independent of column encryption.\n>>\n>> And also, psql doesn't use prepared statements, so writing into\n>> encrypted columns currently doesn't work at all via psql.  (Reading\n>> works no problem.)  All the test code currently uses custom libpq C\n>> programs.  We should think about a way to enable prepared statements\n>> in psql, perhaps something like\n>>\n>> INSERT INTO t1 VALUES ($1, $2) \\gg 'val1' 'val2'\n>>\n>> (\\gexec and \\gx are already taken.)\n>>\n>> ------\n>>\n>> This is not targeting PostgreSQL 15.  But I'd appreciate some feedback\n>> on the direction.  As I mentioned above, a lot of the remaining work\n>> is arguably mostly straightforward.  
Some closer examination of the\n>> issues surrounding the libpq API changes and psql would be useful.\n>> Perhaps there are other projects where that kind of functionality\n>> would also be useful.", "msg_date": "Tue, 5 Jul 2022 12:54:13 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Updated patch, to resolve some merge conflicts.\n\nAlso, I added some CREATE DDL commands. These aren't fully robust yet, \nbut they do the basic job, so it makes the test cases easier to write \nand read, and they can be referred to in the documentation. (Note that \nthe corresponding DROP aren't there yet.) I also expanded the \ndocumentation in the DDL chapter to give a complete recipe of how to set \nit up and use it.", "msg_date": "Tue, 12 Jul 2022 20:29:47 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 7/12/22 11:29, Peter Eisentraut wrote:\n> \n> Updated patch, to resolve some merge conflicts.\n\nThank you for working on this; it's an exciting feature.\n\n> The CEK key\n> material is in turn encrypted by an assymmetric key called the column\n> master key (CMK).\n\nI'm not yet understanding why the CMK is asymmetric. Maybe you could use\nthe public key to add ephemeral, single-use encryption keys that no one\nbut the private key holder could use (after you forget them on your\nside, that is). But since the entire column is encrypted with a single\nCEK, you would essentially only be able to do that if you created an\nentirely new column or table; do I have that right?\n\nI'm used to public keys being safe for... 
publication, but if I'm\nunderstanding correctly, it's important that the server admin doesn't\nget hold of the public key for your CMK, because then they could\nsubstitute their own CEKs transparently and undermine future encrypted\nwrites. That seems surprising. Am I just missing something important\nabout RSAES-OAEP?\n\n> +#define PG_CEK_AEAD_AES_128_CBC_HMAC_SHA_256 130\n> +#define PG_CEK_AEAD_AES_192_CBC_HMAC_SHA_384 131\n> +#define PG_CEK_AEAD_AES_256_CBC_HMAC_SHA_384 132\n> +#define PG_CEK_AEAD_AES_256_CBC_HMAC_SHA_512 133\n\nIt looks like these ciphersuites were abandoned by the IETF. Are there\nexisting implementations of them that have been audited/analyzed? Are\nthey safe (and do we know that the claims made in the draft are\ncorrect)? How do they compare to other constructions like AES-GCM-SIV\nand XChacha20-Poly1305?\n\n> +-- \\gencr\n> +-- (This just tests the parameter passing; there is no encryption here.)\n> +CREATE TABLE test_gencr (a int, b text);\n> +INSERT INTO test_gencr VALUES (1, 'one') \\gencr\n> +SELECT * FROM test_gencr WHERE a = 1 \\gencr\n> + a | b\n> +---+-----\n> + 1 | one\n> +(1 row)\n> +\n> +INSERT INTO test_gencr VALUES ($1, $2) \\gencr 2 'two'\n> +SELECT * FROM test_gencr WHERE a IN ($1, $2) \\gencr 2 3\n> + a | b\n> +---+-----\n> + 2 | two\n> +(1 row)\nI'd expect \\gencr to error out without sending plaintext. I know that\nunder the hood this is just setting up a prepared statement, but if I'm\nusing \\gencr, presumably I really do want to be encrypting my data.\nWould it be a problem to always set force-column-encryption for the\nparameters we're given here? Any unencrypted columns could be provided\ndirectly.\n\nAnother idle thought I had was that it'd be nice to have some syntax for\nproviding a null value to \\gencr (assuming I didn't overlook it in the\npatch). 
But that brings me to...\n\n> + <para>\n> + Null values are not encrypted by transparent column encryption; null values\n> + sent by the client are visible as null values in the database. If the fact\n> + that a value is null needs to be hidden from the server, this information\n> + needs to be encoded into a nonnull value in the client somehow.\n> + </para>\n\nThis is a major gap, IMO. Especially with the switch to authenticated\nciphers, because it means you can't sign your NULL values. And having\neach client or user that's out there solve this with a magic in-band\nvalue seems like a recipe for pain.\n\nSince we're requiring \"canonical\" use of text format, and the docs say\nthere are no embedded or trailing nulls allowed in text values, could we\nsteal the use of a single zero byte to mean NULL? One additional\ncomplication would be that the client would have to double-check that\nwe're not writing a NULL into a NOT NULL column, and complain if it\nreads one during decryption. Another complication would be that the\nclient would need to complain if it got a plaintext NULL.\n\n(The need for robust client-side validation of encrypted columns might\nbe something to expand on in the docs more generally, since before this\nfeature, it could probably be assumed that the server was buggy if it\nsent you unparsable junk in a column.)\n\n> + <para>\n> + The <quote>associated data</quote> in these algorithms consists of 4\n> + bytes: The ASCII letters <literal>P</literal> and <literal>G</literal>\n> + (byte values 80 and 71), followed by the algorithm ID as a 16-bit unsigned\n> + integer in network byte order.\n> + </para>\n\nIs this AD intended as a placeholder for the future, or does it serve a\nparticular purpose?\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Fri, 15 Jul 2022 10:47:20 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 15.07.22 19:47, Jacob 
Champion wrote:\n>> The CEK key\n>> material is in turn encrypted by an assymmetric key called the column\n>> master key (CMK).\n> \n> I'm not yet understanding why the CMK is asymmetric.\n\nI'm not totally sure either.  I started to build it that way because \nother systems were doing it that way, too.  But I have been thinking \nabout adding a symmetric alternative for the CMKs as well (probably AESKW).\n\nI think there are a couple of reasons why asymmetric keys are possibly \nuseful for CMKs:\n\nSome other products make use of secure enclaves to do computations on \n(otherwise) encrypted values on the server.  I don't fully know how that \nworks, but I suspect that asymmetric keys can play a role in that.  (I \ndon't have any immediate plans for that in my patch.  It seems to be a \ndying technology at the moment.)\n\nAsymmetric keys give you some more options for how you set up the keys \nat the beginning.  For example, you create the asymmetric key pair on \nthe host where your client program that wants access to the encrypted \ndata will run.  You put the private key in an appropriate location for \nrun time.  You send the public key to another host.  On that other host, \nyou create the CEK, encrypt it with the CMK, and then upload it into the \nserver (CREATE COLUMN ENCRYPTION KEY).  Then you can wipe that second \nhost.  That way, you can be even more sure that the unencrypted CEK \nisn't left anywhere.  I'm not sure whether this method is very useful in \npractice, but it's interesting.\n\nIn any case, as I mentioned above, this particular aspect is up for \ndiscussion.\n\nAlso note that if you use a KMS (cmklookup \"run\" method), the actual \nalgorithm doesn't even matter (depending on details of the KMS setup), \nsince you just tell the KMS \"decrypt this\", and the KMS knows by itself \nwhat algorithm to use.  
Maybe there should be a way to specify \"unknown\" \nin the ckdcmkalg field.\n\n>> +#define PG_CEK_AEAD_AES_128_CBC_HMAC_SHA_256 130\n>> +#define PG_CEK_AEAD_AES_192_CBC_HMAC_SHA_384 131\n>> +#define PG_CEK_AEAD_AES_256_CBC_HMAC_SHA_384 132\n>> +#define PG_CEK_AEAD_AES_256_CBC_HMAC_SHA_512 133\n> \n> It looks like these ciphersuites were abandoned by the IETF. Are there\n> existing implementations of them that have been audited/analyzed? Are\n> they safe (and do we know that the claims made in the draft are\n> correct)? How do they compare to other constructions like AES-GCM-SIV\n> and XChacha20-Poly1305?\n\nThe short answer is, these same algorithms are used in equivalent \nproducts (see MS SQL Server, MongoDB). They even reference the same \nexact draft document.\n\nBesides that, here is my analysis for why these are good choices: You \ncan't use any of the counter modes, because since the encryption happens \non the client, there is no way to coordinate to avoid nonce reuse. So \namong mainstream modes, you are basically left with AES-CBC with a \nrandom IV. 
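The overall construction, AES-CBC for confidentiality plus an HMAC tag over the associated data and ciphertext, can be partly sketched with the Python standard library. The byte layout below follows the draft and the patch's get_message_auth_tag as visible later in the thread; the function names and exact argument split are assumptions, and the CBC encryption step itself is omitted:

```python
import hmac
import hashlib
import struct

PG_CEK_AEAD_AES_128_CBC_HMAC_SHA_256 = 130  # algorithm ID from the patch

def associated_data(alg_id: int) -> bytes:
    # The 4-byte AD: ASCII "P" and "G" followed by the algorithm ID
    # as a 16-bit unsigned integer in network byte order.
    return b"PG" + struct.pack("!H", alg_id)

def message_auth_tag(mac_key: bytes, iv_and_ciphertext: bytes, alg_id: int) -> bytes:
    # Encrypt-then-MAC: HMAC over AD || IV || ciphertext || AL, where
    # AL is the bit length of the AD as a 64-bit big-endian integer.
    ad = associated_data(alg_id)
    al = struct.pack("!Q", len(ad) * 8)
    return hmac.new(mac_key, ad + iv_and_ciphertext + al, hashlib.sha256).digest()
```

Flipping a single ciphertext byte changes the tag, which is how the client can detect tampering before attempting CBC decryption.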
In that case, even if you happen to reuse an IV, the \npossible damage is very contained.\n\nAnd then, if you want to use AEAD, you combine that with some MAC, and \nHMAC is just as good as any for that.\n\nThe referenced draft document doesn't really contain any additional \ncryptographic insights, it's just a guide on a particular way to put \nthese two together.\n\nSo altogether I think this is a pretty solid choice.\n\n>> +-- \\gencr\n>> +-- (This just tests the parameter passing; there is no encryption here.)\n>> +CREATE TABLE test_gencr (a int, b text);\n>> +INSERT INTO test_gencr VALUES (1, 'one') \\gencr\n>> +SELECT * FROM test_gencr WHERE a = 1 \\gencr\n>> + a | b\n>> +---+-----\n>> + 1 | one\n>> +(1 row)\n>> +\n>> +INSERT INTO test_gencr VALUES ($1, $2) \\gencr 2 'two'\n>> +SELECT * FROM test_gencr WHERE a IN ($1, $2) \\gencr 2 3\n>> + a | b\n>> +---+-----\n>> + 2 | two\n>> +(1 row)\n> I'd expect \\gencr to error out without sending plaintext. I know that\n> under the hood this is just setting up a prepared statement, but if I'm\n> using \\gencr, presumably I really do want to be encrypting my data.\n> Would it be a problem to always set force-column-encryption for the\n> parameters we're given here? Any unencrypted columns could be provided\n> directly.\n\nYeah, this needs a bit of refinement. You don't want something named \n\"encr\" but it only encrypts some of the time. We could possibly do what \nyou suggest and make it set the force-encryption flag, or maybe rename \nit or add another command that just uses prepared statements and doesn't \npromise anything about encryption from its name.\n\nThis also ties in with how pg_dump will eventually work. I think by \ndefault pg_dump will just dump things encrypted and set it up so that \nCOPY writes it back encrypted. But there should probably be a mode that \ndumps out plaintext and then uses one of these commands to load the \nplaintext back in. 
What these psql commands need to do also depends on \nwhat pg_dump needs them to do.\n\n>> + <para>\n>> + Null values are not encrypted by transparent column encryption; null values\n>> + sent by the client are visible as null values in the database. If the fact\n>> + that a value is null needs to be hidden from the server, this information\n>> + needs to be encoded into a nonnull value in the client somehow.\n>> + </para>\n> \n> This is a major gap, IMO. Especially with the switch to authenticated\n> ciphers, because it means you can't sign your NULL values. And having\n> each client or user that's out there solve this with a magic in-band\n> value seems like a recipe for pain.\n> \n> Since we're requiring \"canonical\" use of text format, and the docs say\n> there are no embedded or trailing nulls allowed in text values, could we\n> steal the use of a single zero byte to mean NULL? One additional\n> complication would be that the client would have to double-check that\n> we're not writing a NULL into a NOT NULL column, and complain if it\n> reads one during decryption. Another complication would be that the\n> client would need to complain if it got a plaintext NULL.\n\nYou're already alluding to some of the complications. Also consider \nthat null values could arise from, say, outer joins. So you could be in \na situation where encrypted and unencrypted null values coexist. And of \ncourse the server doesn't know about the encrypted null values. So how \ndo you maintain semantics, like for aggregate functions, primary keys, \nanything that treats null values specially? How do clients deal with a \nmix of encrypted and unencrypted null values, how do they know which one \nis real. What if the client needs to send a null value back as a \nparameter? 
All of this would create enormous complications, if they can \nbe solved at all.\n\nI think a way to look at this is that this column encryption feature \nisn't suitable for disguising the existence or absence of data, it can \nonly disguise the particular data that you know exists.\n\n>> + <para>\n>> + The <quote>associated data</quote> in these algorithms consists of 4\n>> + bytes: The ASCII letters <literal>P</literal> and <literal>G</literal>\n>> + (byte values 80 and 71), followed by the algorithm ID as a 16-bit unsigned\n>> + integer in network byte order.\n>> + </para>\n> \n> Is this AD intended as a placeholder for the future, or does it serve a\n> particular purpose?\n\nIt has been recommended that you include the identity of the encryption \nalgorithm in the AD. This protects the client from having to decrypt \nstuff that wasn't meant to be decrypted (in that way).\n\n\n", "msg_date": "Mon, 18 Jul 2022 12:53:23 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Mon, Jul 18, 2022 at 6:53 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> I think a way to look at this is that this column encryption feature\n> isn't suitable for disguising the existence or absence of data, it can\n> only disguise the particular data that you know exists.\n\n+1.\n\nEven there, what can be accomplished with a feature that only encrypts\nindividual column values is by nature somewhat limited. If you have a\ntext column that, for one row, stores the value 'a', and for some\nother row, stores the entire text of Don Quixote in the original\nSpanish, it is going to be really difficult to keep an adversary who\ncan read from the disk from distinguishing those rows. If you want to\nfix that, you're going to need to do block-level encryption or\nsomething of that sort. 
And even then, you still have another version\nof the problem, because now imagine you have one *table* that contains\nnothing but the value 'a' and another that contains nothing but the\nentire text of Don Quixote, it is going to be possible for an\nadversary to tell those tables apart, even with the corresponding\nfiles encrypted in their entirety.\n\nBut I don't think that this means that a feature like this has no\nvalue. I think it just means that we need to clearly document how the\nfeature works and not over-promise.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Jul 2022 12:06:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 12.07.22 20:29, Peter Eisentraut wrote:\n> Updated patch, to resolve some merge conflicts.\n\nRebased patch, no new functionality", "msg_date": "Tue, 19 Jul 2022 15:52:18 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Tue, Jul 19, 2022 at 10:52 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 12.07.22 20:29, Peter Eisentraut wrote:\n> > Updated patch, to resolve some merge conflicts.\n>\n> Rebased patch, no new functionality\n\nThank you for working on this and updating the patch!\n\nI've mainly looked at the documentation and tests and done some tests.\nBefore looking at the code in depth, I'd like to share my\ncomments/questions.\n\n---\nRegarding the documentation, I'd like to have a page that describes\nthe generic information of the transparent column encryption for users\nsuch as what this feature actually does, what can be achieved by this\nfeature, CMK rotation, and its known limitations. 
The patch has\n\"Transparent Column Encryption\" section in protocol.sgml but it seems\nto be more internal information.\n\n---\nIn datatype.sgml, it says \"Thus, clients that don't support\ntransparent column encryption or have disabled it will see the\nencrypted values as byte arrays.\" but I got an error rather than\nencrypted values when I tried to connect to the server using by\nclients that don't support the encryption:\n\npostgres(1:6040)=# select * from tbl;\nno CMK lookup found for realm \"\"\n\nno CMK lookup found for realm \"\"\npostgres(1:6040)=#\n\n---\nIn single-user mode, the user cannot decrypt the encrypted value but\nprobably it's fine in practice.\n\n---\nRegarding the column master key rotation, would it be useful if we\nprovide a tool for that? For example, it takes old and new CMK as\ninput, re-encrypt all CEKs realted to the CMK, and registers them to\nthe server.\n\n---\nIs there any convenient way to load a large amount of test data to the\nencrypted columns? I tried to use generate_series() but it seems not\nto work as it generates the data on the server side:\n\npostgres(1:80556)=# create table a (i text encrypted with\n(column_encryption_key = cek1));\nCREATE TABLE\npostgres(1:80556)=# insert into a select i::text from\ngenerate_series(1, 1000) i;\n2022-07-20 15:06:38.502 JST [80556] ERROR: column \"i\" is of type\npg_encrypted_rnd but expression is of type text at character 22\n\nI've also tried to load the data from a file on the client by using\n\\copy command, but it seems not to work:\n\npostgres(1:80556)=# copy (select generate_series(1, 1000)::text) to\n'/tmp/tmp.dat';\nCOPY 1000\npostgres(1:80556)=# \\copy a from '/tmp/tmp.dat'\nCOPY 1000\npostgres(1:80556)=# select * from a;\nout out memory\n\n---\nI got SEGV in the following two situations:\n\n(1) SEGV by backend\npostgres(1:59931)=# create table tbl (i int encrypted with\n(column_encryption_key = cek1));\nCREATE TABLE\npostgres(1:59931)=# insert into tbl values ($1) \\gencr 
1\nINSERT 0 1\npostgres(1:59931)=# select * from tbl;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n\nThe backtrace is:\n\n(lldb) bt\n* thread #1, stop reason = signal SIGSTOP\n * frame #0: 0x0000000106830a30\npostgres`pg_detoast_datum_packed(datum=0xffffffffab32c563) at\nfmgr.c:1742:6\n frame #1: 0x00000001067d9dbf\npostgres`byteaout(fcinfo=0x00007ffee9c311a8) at varlena.c:392:20\n frame #2: 0x000000010682ed0c\npostgres`FunctionCall1Coll(flinfo=0x00007faeb28193a8, collation=0,\narg1=18446744072286815587) at fmgr.c:1124:11\n frame #3: 0x0000000106830611\npostgres`OutputFunctionCall(flinfo=0x00007faeb28193a8,\nval=18446744072286815587) at fmgr.c:1561:9\n frame #4: 0x0000000105fed702\npostgres`printtup(slot=0x00007faeb2818960, self=0x00007faeb3809390) at\nprinttup.c:519:16\n frame #5: 0x0000000106319318\npostgres`ExecutePlan(estate=0x00007faeb2818520,\nplanstate=0x00007faeb2818758, use_parallel_mode=false,\noperation=CMD_SELECT, sendTuples=true, numberTuples=0,\ndirection=ForwardScanDirection, dest=0x00007faeb3809390,\nexecute_once=true) at execMain.c:1667:9\n frame #6: 0x0000000106319180\npostgres`standard_ExecutorRun(queryDesc=0x00007faeb280d920,\ndirection=ForwardScanDirection, count=0, execute_once=true) at\nexecMain.c:363:3\n frame #7: 0x0000000106318f11\npostgres`ExecutorRun(queryDesc=0x00007faeb280d920,\ndirection=ForwardScanDirection, count=0, execute_once=true) at\nexecMain.c:307:3\n frame #8: 0x000000010661139c\npostgres`PortalRunSelect(portal=0x00007faeb5028920, forward=true,\ncount=0, dest=0x00007faeb3809390) at pquery.c:924:4\n frame #9: 0x0000000106610d3f\npostgres`PortalRun(portal=0x00007faeb5028920,\ncount=9223372036854775807, isTopLevel=true, run_once=true,\ndest=0x00007faeb3809390, altdest=0x00007faeb3809390,\nqc=0x00007ffee9c31620) at pquery.c:768:18\n frame #10: 0x000000010660bb93\npostgres`exec_simple_query(query_string=\"select * from tbl;\") 
at\npostgres.c:1246:10\n frame #11: 0x000000010660ac2f\npostgres`PostgresMain(dbname=\"postgres\", username=\"masahiko\") at\npostgres.c:4534:7\n frame #12: 0x000000010650d9c6\npostgres`BackendRun(port=0x00007faeb3004210) at postmaster.c:4490:2\n frame #13: 0x000000010650cf8a\npostgres`BackendStartup(port=0x00007faeb3004210) at\npostmaster.c:4218:3\n frame #14: 0x000000010650bd57 postgres`ServerLoop at postmaster.c:1808:7\n frame #15: 0x00000001065094cf postgres`PostmasterMain(argc=5,\nargv=0x00007faeb2406320) at postmaster.c:1480:11\n frame #16: 0x00000001063b4dcf postgres`main(argc=5,\nargv=0x00007faeb2406320) at main.c:197:3\n frame #17: 0x00007fff721abcc9 libdyld.dylib`start + 1\n\n(2) SEGV by psql\n\npostgres(1:47762)=# create table tbl (t text encrypted with\n(column_encryption_key = cek1));\nCREATE TABLE\npostgres(1:47762)=# insert into tbl values ('test');\nINSERT 0 1\npostgres(1:47762)=# select * from tbl;\nSegmentation fault: 11 (core dumped)\n\nThe backtrace is:\n\n(lldb) bt\n* thread #1, stop reason = signal SIGSTOP\n * frame #0: 0x00007fff723a1b36\nlibsystem_platform.dylib`_platform_memmove$VARIANT$Haswell + 566\n frame #1: 0x000000010c509a5f\nlibpq.5.dylib`get_message_auth_tag(md=0x000000010c7f28b8, mac_key=\"\n\\x1c,\\x98g½ȩ[\\x88\\x16\\x12Kiꔂ\\v8g_\\x80, mac_key_len=16, encr=\"test\",\nencrlen=-12, cekalg=130, md_value=\"\", md_len_p=0x00007ffee380a720,\nerrmsgp=0x00007ffee380a8c0) at fe-encrypt-openssl.c:316:2\n frame #2: 0x000000010c509442\nlibpq.5.dylib`decrypt_value(res=0x00007fa098504770,\ncek=0x00007fa0985045d0, cekalg=130, input=\"test\", inputlen=4,\nerrmsgp=0x00007ffee380a8c0) at fe-encrypt-openssl.c:429:7\n frame #3: 0x000000010c4f3c0c\nlibpq.5.dylib`pqRowProcessor(conn=0x00007fa098808e00,\nerrmsgp=0x00007ffee380a8c0) at fe-exec.c:1670:21\n frame #4: 0x000000010c5013bb\nlibpq.5.dylib`getAnotherTuple(conn=0x00007fa098808e00, msgLength=16)\nat fe-protocol3.c:882:6\n frame #5: 
0x000000010c4ffbf2\nlibpq.5.dylib`pqParseInput3(conn=0x00007fa098808e00) at\nfe-protocol3.c:410:11\n frame #6: 0x000000010c4f5b65\nlibpq.5.dylib`parseInput(conn=0x00007fa098808e00) at fe-exec.c:2598:2\n frame #7: 0x000000010c4f5c59\nlibpq.5.dylib`PQgetResult(conn=0x00007fa098808e00) at fe-exec.c:2684:3\n frame #8: 0x000000010c401a77\npsql`ExecQueryAndProcessResults(query=\"select * from tbl;\",\nelapsed_msec=0x00007ffee380ab08, svpt_gone_p=0x00007ffee380aafe,\nis_watch=false, opt=0x0000000000000000,\nprintQueryFout=0x0000000000000000) at common.c:1514:11\n frame #9: 0x000000010c402469 psql`SendQuery(query=\"select * from\ntbl;\") at common.c:1171:9\n frame #10: 0x000000010c41a4dd\npsql`MainLoop(source=0x00007fff98ce8d90) at mainloop.c:439:16\n frame #11: 0x000000010c426b44 psql`main(argc=3,\nargv=0x00007ffee380adc8) at startup.c:462:19\n frame #12: 0x00007fff721abcc9 libdyld.dylib`start + 1\n frame #13: 0x00007fff721abcc9 libdyld.dylib`start + 1\n(lldb) f 1\nframe #1: 0x000000010c509a5f\nlibpq.5.dylib`get_message_auth_tag(md=0x000000010c7f28b8, mac_key=\"\n\\x1c,\\x98g½ȩ[\\x88\\x16\\x12Kiꔂ\\v8g_\\x80, mac_key_len=16, encr=\"test\",\nencrlen=-12, cekalg=130, md_value=\"\", md_len_p=0x00007ffee380a720,\nerrmsgp=0x00007ffee380a8c0) at fe-encrypt-openssl.c:316:2\n 313 #else\n 314 memcpy(buf, test_A, sizeof(test_A));\n 315 #endif\n-> 316 memcpy(buf + PG_AD_LEN, encr, encrlen);\n 317 *(int64 *) (buf + PG_AD_LEN + encrlen) =\npg_hton64(PG_AD_LEN * 8);\n 318\n 319 if (!EVP_DigestSignInit(evp_md_ctx, NULL, md, NULL, pkey))\n(lldb) p encrlen\n(int) $0 = -12\n(lldb)\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 20 Jul 2022 15:12:13 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Mon, Jul 18, 2022 at 3:53 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> Some other products make use of secure 
enclaves to do computations on\n> (otherwise) encrypted values on the server. I don't fully know how that\n> works, but I suspect that asymmetric keys can play a role in that. (I\n> don't have any immediate plans for that in my patch. It seems to be a\n> dying technology at the moment.)\n>\n> Asymmetric keys gives you some more options for how you set up the keys\n> at the beginning. For example, you create the asymmetric key pair on\n> the host where your client program that wants access to the encrypted\n> data will run. You put the private key in an appropriate location for\n> run time. You send the public key to another host. On that other host,\n> you create the CEK, encrypt it with the CMK, and then upload it into the\n> server (CREATE COLUMN ENCRYPTION KEY). Then you can wipe that second\n> host. That way, you can be even more sure that the unencrypted CEK\n> isn't left anywhere. I'm not sure whether this method is very useful in\n> practice, but it's interesting.\n\nAs long as it's clear to people trying this that the \"public\" key\ncannot actually be made public, I suppose. That needs to be documented\nIMO. I like your idea of providing a symmetric option as well.\n\n> In any case, as I mentioned above, this particular aspect is up for\n> discussion.\n>\n> Also note that if you use a KMS (cmklookup \"run\" method), the actual\n> algorithm doesn't even matter (depending on details of the KMS setup),\n> since you just tell the KMS \"decrypt this\", and the KMS knows by itself\n> what algorithm to use. Maybe there should be a way to specify \"unknown\"\n> in the ckdcmkalg field.\n\n+1, an officially client-defined method would probably be useful.\n\n> The short answer is, these same algorithms are used in equivalent\n> products (see MS SQL Server, MongoDB). 
They even reference the same\nexact draft document.\n>\n> Besides that, here is my analysis for why these are good choices: You\n> can't use any of the counter modes, because since the encryption happens\n> on the client, there is no way to coordinate to avoid nonce reuse. So\n> among mainstream modes, you are basically left with AES-CBC with a\n> random IV. (In that case, even if you happen to reuse an IV, the\n> possible damage is very contained.)\n\nI think both AES-GCM-SIV and XChaCha20-Poly1305 are designed to handle\nthe nonce problem as well. In any case, if I were deploying this, I'd\nwant to know the characteristics/limits of our chosen suites (e.g. how\nmuch data can be encrypted per key) so that I could plan rotations\ncorrectly. Something like the table in [1]?\n\n> > Since we're requiring \"canonical\" use of text format, and the docs say\n> > there are no embedded or trailing nulls allowed in text values, could we\n> > steal the use of a single zero byte to mean NULL? One additional\n> > complication would be that the client would have to double-check that\n> > we're not writing a NULL into a NOT NULL column, and complain if it\n> > reads one during decryption. Another complication would be that the\n> > client would need to complain if it got a plaintext NULL.\n>\n> You're already alluding to some of the complications. Also consider\n> that null values could arise from, say, outer joins. So you could be in\n> a situation where encrypted and unencrypted null values coexist.\n\n(I realize I'm about to wade into the pool of what NULL means in SQL,\nthe subject of which I've stayed mostly, gleefully, ignorant.)\n\nTo be honest that sounds pretty useful. Any unencrypted null must have\ncome from the server computation; it's a clear claim by the server\nthat no such rows exist. (If the encrypted column is itself NOT NULL\nthen there's no ambiguity to begin with, I think.) 
That wouldn't be\ntransparent behavior anymore, so it may (understandably) be a non-goal\nfor the patch, but it really does sound useful.\n\nAnd it might be a little extreme, but if I as a user decided that I\nwanted in-band encrypted null, it wouldn't be particularly surprising\nto me if such a column couldn't be included in an outer join. Just\nlike I can't join on a randomized encrypted column, or add two\nencrypted NUMERICs to each other. In fact I might even want the server\nto enforce NOT NULL transparently on the underlying pg_encrypted_*\ncolumn, to make sure that I didn't accidentally push an unencrypted\nNULL by mistake.\n\n> And of\n> course the server doesn't know about the encrypted null values. So how\n> do you maintain semantics, like for aggregate functions, primary keys,\n> anything that treats null values specially?\n\nCould you elaborate? Any special cases seem like they'd be important\nto document regardless of whether or not we support in-band null\nencryption. For example, do you plan to support encrypted primary\nkeys, null or not? That seems like it'd be particularly difficult\nduring CEK rotation.\n\n> How do clients deal with a\n> mix of encrypted and unencrypted null values, how do they know which one\n> is real.\n\nThat one seems straightforward -- a bare null in an encrypted column\nis an assertion by the server. An encrypted null had to have come from\nthe client side originally.\n\n> What if the client needs to send a null value back as a\n> parameter?\n\nCouldn't the client just encrypt it, same as any other column? Or am I\nmissing what you mean by \"parameter\" here?\n\n> All of this would create enormous complications, if they can\n> be solved at all.\n\nThat could be. But I'm wondering if the complications exist\nregardless, and the null example is just making them more obvious.\n\n> It has been recommended that you include the identity of the encryption\n> algorithm in the AD. 
This protects the client from having to decrypt\n> stuff that wasn't meant to be decrypted (in that way).\n\nDo you have a link? I'd like to read up on that -- I naively assumed\nthat the suite wouldn't be able to decrypt another AEAD cipher without\ncomplaining.\n\n--Jacob\n\n[1] https://doc.libsodium.org/secret-key_cryptography/aead\n\n\n", "msg_date": "Thu, 21 Jul 2022 11:12:52 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Mon, Jul 18, 2022 at 9:07 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Even there, what can be accomplished with a feature that only encrypts\n> individual column values is by nature somewhat limited. If you have a\n> text column that, for one row, stores the value 'a', and for some\n> other row, stores the entire text of Don Quixote in the original\n> Spanish, it is going to be really difficult to keep an adversary who\n> can read from the disk from distinguishing those rows. If you want to\n> fix that, you're going to need to do block-level encryption or\n> something of that sort.\n\nA minimum padding option would fix the leak here, right? If every\nentry is the same length then there's no information to be gained, at\nleast in an offline analysis.\n\nI think some work around that is probably going to be needed for\nserious use of this encryption, in part because of the use of text\nformat as the canonical input. If the encrypted values of 1, 10, 100,\nand 1000 hypothetically leaked their exact lengths, then an encrypted\nint wouldn't be very useful. 
So I'd want to quantify (and possibly\nconfigure) exactly how much data you can encrypt in a single message\nbefore the length starts being leaked, and then make sure that my\nencrypted values stay inside that bound.\n\n--Jacob\n\n\n", "msg_date": "Thu, 21 Jul 2022 11:30:46 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Mon, Jul 18, 2022 at 12:53:23PM +0200, Peter Eisentraut wrote:\n> Asymmetric keys gives you some more options for how you set up the keys at\n> the beginning. For example, you create the asymmetric key pair on the host\n> where your client program that wants access to the encrypted data will run.\n> You put the private key in an appropriate location for run time. You send\n> the public key to another host. On that other host, you create the CEK,\n> encrypt it with the CMK, and then upload it into the server (CREATE COLUMN\n> ENCRYPTION KEY). Then you can wipe that second host. That way, you can be\n> even more sure that the unencrypted CEK isn't left anywhere. I'm not sure\n> whether this method is very useful in practice, but it's interesting.\n> \n> In any case, as I mentioned above, this particular aspect is up for\n> discussion.\n\nI caution against adding complexity without a good reason, because\nhistorically complexity often leads to exploits and bugs, especially\nwith crypto.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. 
Mark Batterson\n\n\n\n", "msg_date": "Thu, 21 Jul 2022 15:29:00 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Thu, Jul 21, 2022 at 2:30 PM Jacob Champion <jchampion@timescale.com> wrote:\n> On Mon, Jul 18, 2022 at 9:07 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Even there, what can be accomplished with a feature that only encrypts\n> > individual column values is by nature somewhat limited. If you have a\n> > text column that, for one row, stores the value 'a', and for some\n> > other row, stores the entire text of Don Quixote in the original\n> > Spanish, it is going to be really difficult to keep an adversary who\n> > can read from the disk from distinguishing those rows. If you want to\n> > fix that, you're going to need to do block-level encryption or\n> > something of that sort.\n>\n> A minimum padding option would fix the leak here, right? If every\n> entry is the same length then there's no information to be gained, at\n> least in an offline analysis.\n\nSure, but padding every text column that you have, even the ones\ncontaining only 'a', out to the length of Don Quixote in the original\nSpanish, is unlikely to be an appealing option.\n\n> I think some work around that is probably going to be needed for\n> serious use of this encryption, in part because of the use of text\n> format as the canonical input. If the encrypted values of 1, 10, 100,\n> and 1000 hypothetically leaked their exact lengths, then an encrypted\n> int wouldn't be very useful. So I'd want to quantify (and possibly\n> configure) exactly how much data you can encrypt in a single message\n> before the length starts being leaked, and then make sure that my\n> encrypted values stay inside that bound.\n\nI think most ciphers these days are block ciphers, so you're going to\nget output that is a multiple of the block size anyway - e.g. I think\nfor AES it's 128 bits = 16 bytes. 
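(To put numbers on that: under PKCS#7 padding an n-byte plaintext always encrypts to the next multiple of 16 bytes, with a full extra padding block when n is already a multiple. A throwaway sketch of the calculation, just to show where the boundaries fall — not code from any patch:)

```c
#include <stddef.h>

#define AES_BLOCK_SIZE 16

/*
 * Length of an AES-CBC ciphertext (excluding the IV) for an n-byte
 * plaintext under PKCS#7 padding: round up to the next block,
 * adding a whole padding block when n is already a multiple of 16.
 */
size_t
cbc_ciphertext_len(size_t n)
{
	return (n / AES_BLOCK_SIZE + 1) * AES_BLOCK_SIZE;
}
```

So the text renderings of 1, 10, and 100 all produce 16-byte ciphertexts, but a 17-byte value jumps to 32 bytes — that jump is the length leak worth documenting. (A random IV adds a constant 16 bytes on top.)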
So small differences in length will\nbe concealed naturally, which may be good enough for some use cases.\n\nI'm not really convinced that it's worth putting a lot of effort into\nbolstering the security of this kind of tech above what it naturally\ngives. I think it's likely to be a wild goose chase. If you have major\nworries about someone reading your disk in its entirety, use full-disk\nencryption. Selective encryption is only suitable when you want to add\na modest level of protection for individual value and are willing to\naccept that some information leakage is likely if an adversary can in\nfact read the full disk. Padding values to try to further obscure\nthings may be situationally useful, but if you find yourself worrying\ntoo much about that sort of thing, you likely should have picked\nstronger medicine initially.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 26 Jul 2022 13:51:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Tue, Jul 26, 2022 at 10:52 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Jul 21, 2022 at 2:30 PM Jacob Champion <jchampion@timescale.com> wrote:\n> > A minimum padding option would fix the leak here, right? If every\n> > entry is the same length then there's no information to be gained, at\n> > least in an offline analysis.\n>\n> Sure, but padding every text column that you have, even the ones\n> containing only 'a', out to the length of Don Quixote in the original\n> Spanish, is unlikely to be an appealing option.\n\nIf you are honestly trying to conceal Don Quixote, I suspect you are\nalready in the business of making unappealing decisions. 
I don't think\nthat's necessarily an argument against hiding the length for\nreal-world use cases.\n\n> > I think some work around that is probably going to be needed for\n> > serious use of this encryption, in part because of the use of text\n> > format as the canonical input. If the encrypted values of 1, 10, 100,\n> > and 1000 hypothetically leaked their exact lengths, then an encrypted\n> > int wouldn't be very useful. So I'd want to quantify (and possibly\n> > configure) exactly how much data you can encrypt in a single message\n> > before the length starts being leaked, and then make sure that my\n> > encrypted values stay inside that bound.\n>\n> I think most ciphers these days are block ciphers, so you're going to\n> get output that is a multiple of the block size anyway - e.g. I think\n> for AES it's 128 bits = 16 bytes. So small differences in length will\n> be concealed naturally, which may be good enough for some use cases.\n\nRight. My point is, if you have a column that has exactly one\nimportant value that is 17 bytes long when converted to text, you're\ngoing to want to know that block size exactly, because the encryption\nwill be effectively useless for that value. That size needs to be\ndocumented, and it'd be helpful to know that it's longer than, say,\nthe longest text representation of our fixed-length column types.\n\n> I'm not really convinced that it's worth putting a lot of effort into\n> bolstering the security of this kind of tech above what it naturally\n> gives. I think it's likely to be a wild goose chase.\n\nIf the goal is to provide real encryption, and not just a toy, I think\nyou're going to need to put a *lot* of effort into analysis. Even if\nthe result of the analysis is \"we don't plan to address this in v1\".\n\nCrypto is inherently a cycle of\nmake-it-and-break-it-and-fix-it-and-break-it-again. 
If that's\nconsidered a \"wild goose chase\" and not seriously pursued at some\nlevel, then this implementation will probably not last long in the\nface of real abuse. (That doesn't mean you have to take my advice; I'm\njust a dude with opinions -- but you will need to have real\ncryptographers look at this, and you're going to need to think about\nhow the system will evolve when it's broken.)\n\n> If you have major\n> worries about someone reading your disk in its entirety, use full-disk\n> encryption.\n\nThis patchset was designed to protect against the evil DBA case, I\nthink. Full disk encryption doesn't help.\n\n> Selective encryption is only suitable when you want to add\n> a modest level of protection for individual value and are willing to\n> accept that some information leakage is likely if an adversary can in\n> fact read the full disk.\n\n...but there's a known countermeasure to this particular leakage,\nright? Which would make it more suitable for that case.\n\n> Padding values to try to further obscure\n> things may be situationally useful, but if you find yourself worrying\n> too much about that sort of thing, you likely should have picked\n> stronger medicine initially.\n\nIn my experience, this entire field is the application of\nsituationally useful protection. That's one of the reasons it's hard,\nand designing this sort of patch is going to be hard too. Putting that\non the user isn't quite fair when you're the ones designing the\nsystem; you determine what they have to worry about when you choose\nthe crypto.\n\n--Jacob\n\n\n", "msg_date": "Tue, 26 Jul 2022 11:26:49 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Tue, Jul 26, 2022 at 2:27 PM Jacob Champion <jchampion@timescale.com> wrote:\n> Right. 
My point is, if you have a column that has exactly one\n> important value that is 17 bytes long when converted to text, you're\n> going to want to know that block size exactly, because the encryption\n> will be effectively useless for that value. That size needs to be\n> documented, and it'd be helpful to know that it's longer than, say,\n> the longest text representation of our fixed-length column types.\n\nI certainly have no objection to being clear about such details in the\ndocumentation.\n\n> If the goal is to provide real encryption, and not just a toy, I think\n> you're going to need to put a *lot* of effort into analysis. Even if\n> the result of the analysis is \"we don't plan to address this in v1\".\n>\n> Crypto is inherently a cycle of\n> make-it-and-break-it-and-fix-it-and-break-it-again. If that's\n> considered a \"wild goose chase\" and not seriously pursued at some\n> level, then this implementation will probably not last long in the\n> face of real abuse. (That doesn't mean you have to take my advice; I'm\n> just a dude with opinions -- but you will need to have real\n> cryptographers look at this, and you're going to need to think about\n> how the system will evolve when it's broken.)\n\nWell, I'm just a dude with opinions, too. I fear the phenomenon where\ndoing anything about a problem makes you responsible for the whole\nproblem. If we disclaim the ability to hide the length of values,\nthat's clear enough. But if we start padding to try to hide the length\nof values, then people might expect it to work in all cases, and I\ndon't see how it ever can. Moreover, I think that the padding might\nneed to be done in a \"cryptographically intelligent\" way rather than\njust, say, adding trailing blanks. Now that being said, if Peter wants\nto implement something around padding that he has reason to believe\nwill not create cryptographic weaknesses, I have no issue with that. 
I\njust don't view it as an essential part of the feature, because hiding\nsuch things doesn't seem like it can ever be the main point of a\nfeature like this.\n\n> > Padding values to try to further obscure\n> > things may be situationally useful, but if you find yourself worrying\n> > too much about that sort of thing, you likely should have picked\n> > stronger medicine initially.\n>\n> In my experience, this entire field is the application of\n> situationally useful protection. That's one of the reasons it's hard,\n> and designing this sort of patch is going to be hard too.\n\nAgreed.\n\n> Putting that\n> on the user isn't quite fair when you're the ones designing the\n> system; you determine what they have to worry about when you choose\n> the crypto.\n\nI guess my view on this is that, if you're trying to hide something\nlike a credit card number, most likely every value in the system is\nthe same length, and then this is a non-issue. On the other hand, if\nthe secret column is a person's name, then there is an issue, but\nyou're not going to pad every value out the maximum length of a\nvarlena, so you have to make an estimate of how long a name someone\nmight reasonably have to decide how much padding to include. You also\nhave to decide whether the storage cost of padding every value is\nworth it to you given the potential information leakage. Only the\nhuman user can make those decisions, so some amount of \"putting that\non the user\" feels inevitable. 
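(For what it's worth, the do-it-yourself version of that trade-off is easy to sketch: pick a maximum, pad every value to it before encrypting, and refuse anything longer rather than truncate. A hypothetical client-side helper, not something the patch provides:)

```c
#include <stdbool.h>
#include <string.h>

/*
 * Space-pad "value" into "buf" (which must hold padded_len + 1 bytes)
 * so that every plaintext handed to the encryption layer is the same
 * length.  Returns false instead of truncating when the value exceeds
 * the chosen maximum; the caller should treat that as an error.  The
 * decrypting side strips the trailing spaces again, so this only
 * suits types where trailing spaces are not significant.
 */
bool
pad_for_encryption(const char *value, char *buf, size_t padded_len)
{
	size_t		len = strlen(value);

	if (len > padded_len)
		return false;
	memcpy(buf, value, len);
	memset(buf + len, ' ', padded_len - len);
	buf[padded_len] = '\0';
	return true;
}
```

Choosing padded_len is exactly the estimate described above: too small and you reject legitimate values, too large and you pay the storage cost on every row.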
Now, if we don't have a padding system\nbuilt into the feature, then that does put even more on the user; it's\nhard to argue with that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 26 Jul 2022 16:25:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 7/26/22 13:25, Robert Haas wrote:\n> I certainly have no objection to being clear about such details in the\n> documentation.\n\nCool.\n\n> I fear the phenomenon where\n> doing anything about a problem makes you responsible for the whole\n> problem. If we disclaim the ability to hide the length of values,\n> that's clear enough.\n\nI don't think disclaiming responsibility absolves you of it here, in\npart because choices are being made (text format) that make length\nhiding even more important than it otherwise would be. A user who\nalready knows that encryption doesn't hide length might still reasonably\nexpect a fixed-length column type like bigint to be protected in all\ncases. It won't be (at least not with your 16-byte example).\n\nAnd sure, you can document that caveat too, but said user might then\nreasonably wonder how they're supposed to actually make it safe.\n\n> But if we start padding to try to hide the length\n> of values, then people might expect it to work in all cases, and I\n> don't see how it ever can.\n\nWell, that's where I agree with you on the value of solid documentation.\nBut there are other things we can do as well. 
In general we should\n\n- choose a default that will protect most people out of the box,\n- document the heck out of the default's limitations,\n- provide guardrails that warn the user when they're outgrowing those\n limitations, and\n- give people a way to tune it to their own use cases.\n\nAs an example, a naive guardrail in this instance could be to simply\nhave the client refuse to encrypt data past the padding maximum, if\nyou've gone so far as to set one up. It'd suck to hit that maximum in\nproduction and have to rewrite the column, but you did want your\nencryption to hide your data, right?\n\nMaybe that's way too complex to think about for a v1, but it'll be\neasier to maintain this into the future if there's at least a plan to\ncreate a v2. If you declare it out of scope, instead of considering it a\npotential TODO, then I think it'll be a lot harder for people to improve it.\n\n> Moreover, I think that the padding might\n> need to be done in a \"cryptographically intelligent\" way rather than\n> just, say, adding trailing blanks.\n\nPossibly. I think that's where AEAD comes in -- if you've authenticated\nyour ciphertext sufficiently, padding oracles should be prohibitively\ndifficult(?). (But see below; I think we also have other things to worry\nabout in terms of authentication and oracles.)\n\n> Now that being said, if Peter wants\n> to implement something around padding that he has reason to believe\n> will not create cryptographic weaknesses, I have no issue with that. I\n> just don't view it as an essential part of the feature, because hiding\n> such things doesn't seem like it can ever be the main point of a\n> feature like this.\n\nI think that side channel consideration has to be an essential part of\nany cryptography feature. 
Recent history has shown \"obscure\" side\nchannels gaining power to the point of completely breaking crypto schemes.\n\nAnd it's not like TLS where we have to protect an infinite stream of\narbitrary bytes; this is going to be used on small values that probably\nget repeated often and have (comparatively) very little entropy.\nCryptanalysis based on length seems to me like part and parcel of the\nproblem space.\n\n> I guess my view on this is that, if you're trying to hide something\n> like a credit card number, most likely every value in the system is\n> the same length, and then this is a non-issue.\n\nAgreed.\n\n> On the other hand, if\n> the secret column is a person's name, then there is an issue, but\n> you're not going to pad every value out the maximum length of a\n> varlena, so you have to make an estimate of how long a name someone\n> might reasonably have to decide how much padding to include. You also\n> have to decide whether the storage cost of padding every value is\n> worth it to you given the potential information leakage. Only the\n> human user can make those decisions, so some amount of \"putting that\n> on the user\" feels inevitable.\n\nAgreed.\n\n> Now, if we don't have a padding system\n> built into the feature, then that does put even more on the user; it's\n> hard to argue with that.\nRight. If they can even fix it at all. Having a well-documented padding\nfeature would not only help mitigate that, it would conveniently hang a\nbig sign on the caveats that exist.\n\n--\n\nSpeaking of oracles and side channels. Users may want to use associated\ndata to further lock an encrypted value to its column type, too.\nOtherwise it seems like an active DBA could feed an encrypted text blob\nto a client in place of, say, an int column, to see whether or not that\ntext blob is a number. 
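(One way to get that binding, sketched with an entirely made-up layout — serialize the CEK algorithm ID plus the destination column's declared type into the associated-data buffer that the AEAD mode authenticates alongside the ciphertext; decryption then fails outright if the server presents the value under a different algorithm or column type:)

```c
#include <stdint.h>
#include <string.h>

#define COLUMN_AD_LEN	12

/*
 * Build associated data that ties a ciphertext to its encryption
 * algorithm and the declared type of its destination column.  A real
 * implementation would fix the byte order (e.g. with pg_hton32) so
 * the AD is portable across architectures; this sketch uses native
 * order for brevity.
 */
size_t
build_column_ad(uint8_t *buf, uint32_t cek_alg,
				uint32_t type_oid, int32_t typmod)
{
	size_t		off = 0;

	memcpy(buf + off, &cek_alg, sizeof(cek_alg));
	off += sizeof(cek_alg);
	memcpy(buf + off, &type_oid, sizeof(type_oid));
	off += sizeof(type_oid);
	memcpy(buf + off, &typmod, sizeof(typmod));
	off += sizeof(typmod);
	return off;					/* always COLUMN_AD_LEN */
}
```

(130 here is just the cekalg value visible in the backtrace upthread; 25 is the OID of text.)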
Seems like AD is going to be important to prevent\nactive attacks in general.\n\n--Jacob\n\n\n", "msg_date": "Tue, 26 Jul 2022 16:19:38 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Here is an updated patch.\n\nI mainly spent time on adding a full set of DDL commands for the keys. \nThis made the patch very bulky now, but there is not really anything \nsurprising in there. It probably needs another check of permission \nhandling etc., but it's got everything there to try it out. Along with \nthe DDL commands, the pg_dump side is now fully implemented.\n\nSecondly, I isolated the protocol changes into a protocol extension with \nthe name _pq_.column_encryption. So by default there are no protocol \nchanges and this feature is disabled. AFAICT, we haven't actually ever \nused the _pq_ protocol extension mechanism, so it would be good to \nreview whether this was done here in the intended way.\n\nAt this point, the patch is sort of feature complete, meaning it has all \nthe concepts, commands, and interfaces that I had in mind. I have a \nlong list of things to recheck and tighten up, based on earlier feedback \nand some things I found along the way. But I don't currently plan any \nmore major architectural or design changes, pending feedback. 
(Also, \nthe patch is now very big, so anything additional might be better for a \nfuture separate patch.)", "msg_date": "Tue, 30 Aug 2022 13:35:43 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 20.07.22 08:12, Masahiko Sawada wrote:\n> ---\n> Regarding the documentation, I'd like to have a page that describes\n> the generic information of the transparent column encryption for users\n> such as what this feature actually does, what can be achieved by this\n> feature, CMK rotation, and its known limitations. The patch has\n> \"Transparent Column Encryption\" section in protocol.sgml but it seems\n> to be more internal information.\n\nI have added more documentation in the v6 patch.\n\n> ---\n> In datatype.sgml, it says \"Thus, clients that don't support\n> transparent column encryption or have disabled it will see the\n> encrypted values as byte arrays.\" but I got an error rather than\n> encrypted values when I tried to connect to the server using by\n> clients that don't support the encryption:\n> \n> postgres(1:6040)=# select * from tbl;\n> no CMK lookup found for realm \"\"\n\nThis has now been improved in v6. The protocol changes need to be \nactivated explicitly at connection time, so if you use a client that \ndoesn't support it or doesn't activate it, you get the described behavior.\n\n> ---\n> In single-user mode, the user cannot decrypt the encrypted value but\n> probably it's fine in practice.\n\nYes, there is nothing really to do about that.\n\n> ---\n> Regarding the column master key rotation, would it be useful if we\n> provide a tool for that? For example, it takes old and new CMK as\n> input, re-encrypt all CEKs related to the CMK, and registers them to\n> the server.\n\nI imagine users using a variety of key management systems, so I don't \nsee how a single tool would work. 
But it's something we can think about \nin the future.\n\n> ---\n> Is there any convenient way to load a large amount of test data to the\n> encrypted columns? I tried to use generate_series() but it seems not\n> to work as it generates the data on the server side:\n\nNo, that doesn't work, by design. You'd have to write a client program \nto generate the data.\n\n> I've also tried to load the data from a file on the client by using\n> \\copy command, but it seems not to work:\n> \n> postgres(1:80556)=# copy (select generate_series(1, 1000)::text) to\n> '/tmp/tmp.dat';\n> COPY 1000\n> postgres(1:80556)=# \\copy a from '/tmp/tmp.dat'\n> COPY 1000\n> postgres(1:80556)=# select * from a;\n> out out memory\n\nThis was a bug that I have fixed.\n\n> ---\n> I got SEGV in the following two situations:\n\nI have fixed these.\n\n\n", "msg_date": "Tue, 30 Aug 2022 13:40:43 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 27.07.22 01:19, Jacob Champion wrote:\n>> Now, if we don't have a padding system\n>> built into the feature, then that does put even more on the user; it's\n>> hard to argue with that.\n> Right. If they can even fix it at all. Having a well-documented padding\n> feature would not only help mitigate that, it would conveniently hang a\n> big sign on the caveats that exist.\n\nI would be interested in learning more about such padding systems. I \nhave done a lot of reading for this development project, and I have \nnever come across a cryptographic approach to hide length differences by \npadding. Of course, padding to the block cipher's block size is already \npart of the process, but that is done out of necessity, not because you \nwant to disguise the length. Are there any other methods? 
I'm \ninterested to learn more.\n\n\n", "msg_date": "Tue, 30 Aug 2022 13:53:49 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Tue, Aug 30, 2022 at 4:53 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> I would be interested in learning more about such padding systems. I\n> have done a lot of reading for this development project, and I have\n> never come across a cryptographic approach to hide length differences by\n> padding. Of course, padding to the block cipher's block size is already\n> part of the process, but that is done out of necessity, not because you\n> want to disguise the length. Are there any other methods? I'm\n> interested to learn more.\n\nTLS 1.3 has one example. Here is a description from GnuTLS:\nhttps://gnutls.org/manual/html_node/On-Record-Padding.html (Note the\noption to turn on constant-time padding; that may not be a good\ntradeoff for us if we're focusing on offline attacks.)\n\nHere's a recent paper that claims to formally characterize length\nhiding, but it's behind a wall and I haven't read it:\nhttps://dl.acm.org/doi/abs/10.1145/3460120.3484590\n\nI'll try to find more when I get the chance.\n\n--Jacob\n\n\n", "msg_date": "Wed, 31 Aug 2022 16:29:00 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Here is an updated patch that resolves some merge conflicts; no \nfunctionality changes over v6.\n\nOn 30.08.22 13:35, Peter Eisentraut wrote:\n> Here is an updated patch.\n> \n> I mainly spent time on adding a full set of DDL commands for the keys. \n> This made the patch very bulky now, but there is not really anything \n> surprising in there.  It probably needs another check of permission \n> handling etc., but it's got everything there to try it out.  
Along with \n> the DDL commands, the pg_dump side is now fully implemented.\n> \n> Secondly, I isolated the protocol changes into a protocol extension with \n> the name _pq_.column_encryption.  So by default there are no protocol \n> changes and this feature is disabled.  AFAICT, we haven't actually ever \n> used the _pq_ protocol extension mechanism, so it would be good to \n> review whether this was done here in the intended way.\n> \n> At this point, the patch is sort of feature complete, meaning it has all \n> the concepts, commands, and interfaces that I had in mind.  I have a \n> long list of things to recheck and tighten up, based on earlier feedback \n> and some things I found along the way.  But I don't currently plan any \n> more major architectural or design changes, pending feedback.  (Also, \n> the patch is now very big, so anything additional might be better for a \n> future separate patch.)", "msg_date": "Tue, 13 Sep 2022 10:27:03 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "New version with some merge conflicts resolved, and I have worked to \nresolve several \"TODO\" items that I had noted in the code.\n\nOn 13.09.22 10:27, Peter Eisentraut wrote:\n> Here is an updated patch that resolves some merge conflicts; no \n> functionality changes over v6.\n> \n> On 30.08.22 13:35, Peter Eisentraut wrote:\n>> Here is an updated patch.\n>>\n>> I mainly spent time on adding a full set of DDL commands for the keys. \n>> This made the patch very bulky now, but there is not really anything \n>> surprising in there.  It probably needs another check of permission \n>> handling etc., but it's got everything there to try it out.  Along \n>> with the DDL commands, the pg_dump side is now fully implemented.\n>>\n>> Secondly, I isolated the protocol changes into a protocol extension \n>> with the name _pq_.column_encryption.  
So by default there are no \n>> protocol changes and this feature is disabled.  AFAICT, we haven't \n>> actually ever used the _pq_ protocol extension mechanism, so it would \n>> be good to review whether this was done here in the intended way.\n>>\n>> At this point, the patch is sort of feature complete, meaning it has \n>> all the concepts, commands, and interfaces that I had in mind.  I have \n>> a long list of things to recheck and tighten up, based on earlier \n>> feedback and some things I found along the way.  But I don't currently \n>> plan any more major architectural or design changes, pending \n>> feedback.  (Also, the patch is now very big, so anything additional \n>> might be better for a future separate patch.)", "msg_date": "Wed, 21 Sep 2022 17:37:05 -0400", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Updated version with meson build system support added (for added files \nand new tests).\n\nOn 21.09.22 23:37, Peter Eisentraut wrote:\n> New version with some merge conflicts resolved, and I have worked to \n> resolve several \"TODO\" items that I had noted in the code.\n> \n> On 13.09.22 10:27, Peter Eisentraut wrote:\n>> Here is an updated patch that resolves some merge conflicts; no \n>> functionality changes over v6.\n>>\n>> On 30.08.22 13:35, Peter Eisentraut wrote:\n>>> Here is an updated patch.\n>>>\n>>> I mainly spent time on adding a full set of DDL commands for the \n>>> keys. This made the patch very bulky now, but there is not really \n>>> anything surprising in there.  It probably needs another check of \n>>> permission handling etc., but it's got everything there to try it \n>>> out.  Along with the DDL commands, the pg_dump side is now fully \n>>> implemented.\n>>>\n>>> Secondly, I isolated the protocol changes into a protocol extension \n>>> with the name _pq_.column_encryption.  
So by default there are no \n>>> protocol changes and this feature is disabled.  AFAICT, we haven't \n>>> actually ever used the _pq_ protocol extension mechanism, so it would \n>>> be good to review whether this was done here in the intended way.\n>>>\n>>> At this point, the patch is sort of feature complete, meaning it has \n>>> all the concepts, commands, and interfaces that I had in mind.  I \n>>> have a long list of things to recheck and tighten up, based on \n>>> earlier feedback and some things I found along the way.  But I don't \n>>> currently plan any more major architectural or design changes, \n>>> pending feedback.  (Also, the patch is now very big, so anything \n>>> additional might be better for a future separate patch.)", "msg_date": "Tue, 27 Sep 2022 15:51:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Hi,\n\nOn 2022-09-27 15:51:25 +0200, Peter Eisentraut wrote:\n> Updated version with meson build system support added (for added files and\n> new tests).\n\nThis fails on windows: https://cirrus-ci.com/task/6151847080624128\n\nhttps://api.cirrus-ci.com/v1/artifact/task/6151847080624128/testrun/build/testrun/column_encryption/001_column_encryption/log/regress_log_001_column_encryption\n\npsql error: stderr: 'OPENSSL_Uplink(00007FFC165CBD50,08): no OPENSSL_Applink'\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 2 Oct 2022 00:16:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 02.10.22 09:16, Andres Freund wrote:\n> On 2022-09-27 15:51:25 +0200, Peter Eisentraut wrote:\n>> Updated version with meson build system support added (for added files and\n>> new tests).\n> \n> This fails on windows: https://cirrus-ci.com/task/6151847080624128\n> \n> 
https://api.cirrus-ci.com/v1/artifact/task/6151847080624128/testrun/build/testrun/column_encryption/001_column_encryption/log/regress_log_001_column_encryption\n> \n> psql error: stderr: 'OPENSSL_Uplink(00007FFC165CBD50,08): no OPENSSL_Applink'\n\nWhat in the world is that about?  What scant information I could find \nsuggests that it has something to do with building a \"release\" build \nagainst a \"debug\" build of the openssl library, or vice versa.  But \nthis patch doesn't introduce any use of openssl that we haven't seen before.\n\n\n\n", "msg_date": "Thu, 6 Oct 2022 16:25:51 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Hi,\n\nOn 2022-10-06 16:25:51 +0200, Peter Eisentraut wrote:\n> On 02.10.22 09:16, Andres Freund wrote:\n> > On 2022-09-27 15:51:25 +0200, Peter Eisentraut wrote:\n> > > Updated version with meson build system support added (for added files and\n> > > new tests).\n> > \n> > This fails on windows: https://cirrus-ci.com/task/6151847080624128\n> > \n> > https://api.cirrus-ci.com/v1/artifact/task/6151847080624128/testrun/build/testrun/column_encryption/001_column_encryption/log/regress_log_001_column_encryption\n> > \n> > psql error: stderr: 'OPENSSL_Uplink(00007FFC165CBD50,08): no OPENSSL_Applink'\n> \n> What in the world is that about? What scant information I could find\n> suggests that it has something to do with building a \"release\" build against\n> a \"debug\" build of the openssl library, or vice versa. But this patch\n> doesn't introduce any use of openssl that we haven't seen before.\n\nIt looks to me that one needs to compile, in some form, openssl/applink.c and\nlink it to the application. 
No idea why that'd be required now and not\nearlier.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 Oct 2022 08:19:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 06.10.22 17:19, Andres Freund wrote:\n>>> psql error: stderr: 'OPENSSL_Uplink(00007FFC165CBD50,08): no OPENSSL_Applink'\n>> What in the world is that about? What scant information I could find\n>> suggests that it has something to do with building a \"release\" build against\n>> an \"debug\" build of the openssl library, or vice versa. But this patch\n>> doesn't introduce any use of openssl that we haven't seen before.\n> It looks to me that one needs to compile, in some form, openssl/applink.c and\n> link it to the application. No idea why that'd be required now and not\n> earlier.\n\nI have figured this out. The problem is that on Windows you can't \nreliably pass stdio FILE * handles between the application and OpenSSL. \nTo give the helpful places I found some Google juice, I'll mention them \nhere:\n\n- https://github.com/edenhill/librdkafka/pull/3602\n- https://github.com/edenhill/librdkafka/issues/3554\n- https://www.mail-archive.com/openssl-users@openssl.org/msg91029.html\n\n\n\n", "msg_date": "Thu, 13 Oct 2022 15:43:21 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "If memory serves me correctly, if you statically link openssl this will\nwork. If you are using ssl in a DLL, I believe that the DLL has its own\n\"c-library\" and its own heap.\n\nOn Thu, Oct 13, 2022 at 9:43 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 06.10.22 17:19, Andres Freund wrote:\n> >>> psql error: stderr: 'OPENSSL_Uplink(00007FFC165CBD50,08): no\n> OPENSSL_Applink'\n> >> What in the world is that about? 
What scant information I could find\n> >> suggests that it has something to do with building a \"release\" build\n> against\n> >> an \"debug\" build of the openssl library, or vice versa. But this patch\n> >> doesn't introduce any use of openssl that we haven't seen before.\n> > It looks to me that one needs to compile, in some form,\n> openssl/applink.c and\n> > link it to the application. No idea why that'd be required now and not\n> > earlier.\n>\n> I have figured this out. The problem is that on Windows you can't\n> reliably pass stdio FILE * handles between the application and OpenSSL.\n> To give the helpful places I found some Google juice, I'll mention them\n> here:\n>\n> - https://github.com/edenhill/librdkafka/pull/3602\n> - https://github.com/edenhill/librdkafka/issues/3554\n> - https://www.mail-archive.com/openssl-users@openssl.org/msg91029.html\n>\n>\n>\n>\n", "msg_date": "Thu, 13 Oct 2022 09:58:42 -0400", "msg_from": "Mark Woodward <woodwardm@google.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Here is an updated version with the tests on Windows working again, and \nsome typos fixed.\n\nOn 27.09.22 15:51, Peter Eisentraut wrote:\n> Updated version with meson build system support added (for added files \n> and new tests).\n>\n> On 21.09.22 23:37, Peter Eisentraut wrote:\n>> New version with some merge conflicts resolved, and I have worked to \n>> resolve several \"TODO\" items that I had noted in the code.\n>>\n>> On 13.09.22 10:27, Peter Eisentraut wrote:\n>>> Here is an updated patch that resolves some merge conflicts; no \n>>> functionality changes over v6.\n>>>\n>>> On 30.08.22 13:35, Peter Eisentraut wrote:\n>>>> Here is an updated patch.\n>>>>\n>>>> I mainly spent time on adding a full set of DDL commands for the \n>>>> keys. This made the patch very bulky now, but there is not really \n>>>> anything surprising in there.  It probably needs another check of \n>>>> permission handling etc., but it's got everything there to try it \n>>>> out.  Along with the DDL commands, the pg_dump side is now fully \n>>>> implemented.\n>>>>\n>>>> Secondly, I isolated the protocol changes into a protocol extension \n>>>> with the name _pq_.column_encryption.  So by default there are no \n>>>> protocol changes and this feature is disabled.  
AFAICT, we haven't \n>>>> actually ever used the _pq_ protocol extension mechanism, so it \n>>>> would be good to review whether this was done here in the intended way.\n>>>>\n>>>> At this point, the patch is sort of feature complete, meaning it has \n>>>> all the concepts, commands, and interfaces that I had in mind.  I \n>>>> have a long list of things to recheck and tighten up, based on \n>>>> earlier feedback and some things I found along the way.  But I don't \n>>>> currently plan any more major architectural or design changes, \n>>>> pending feedback.  (Also, the patch is now very big, so anything \n>>>> additional might be better for a future separate patch.)", "msg_date": "Fri, 14 Oct 2022 08:27:15 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Hi,\n\nI did a review of the documentation and usability.\n\n# Applying patch\n\n The patch applied on top of f13b2088fa2 without trouble. Notice a small\n warning during compilation:\n \n colenccmds.c:134:27: warning: ‘encval’ may be used uninitialized\n \n A simple fix could be:\n \n +++ b/src/backend/commands/colenccmds.c\n @@ -119,2 +119,3\n encval = defGetString(encvalEl);\n + *encval_p = encval;\n }\n @@ -132,4 +133,2\n *alg_p = alg;\n - if (encval_p)\n - *encval_p = encval;\n }\n\n# Documentation\n\n * In page \"create_column_encryption_key.sgml\", both encryption algorithms for\n a CMK are declared as the default one:\n\n + <para>\n + The encryption algorithm that was used to encrypt the key material of\n + this column encryption key. Supported algorithms are:\n + <itemizedlist>\n + <listitem>\n + <para><literal>RSAES_OAEP_SHA_1</literal> (default)</para>\n + </listitem>\n + <listitem>\n + <para><literal>RSAES_OAEP_SHA_256</literal> (default)</para>\n + </listitem>\n + </itemizedlist>\n + </para>\n \n As far as I understand the code, I suppose RSAES_OAEP_SHA_1 should be the\n default. 
\n\n I believe two information should be clearly shown to user somewhere in\n chapter 5.5 instead of being buried deep in documentation:\n \n * «COPY does not support column decryption», currently buried in pg_dump page\n * «When transparent column encryption is enabled, the client encoding must\n match the server encoding», currently buried in the protocol description\n page.\n\n * In the libpq doc of PQexecPrepared2, \"paramForceColumnEncryptions\" might\n deserve a little more detail about the array format, like «0 means \"don't\n enforce\" and anything else enforce the encryption is enabled on this\n column». By the way, maybe this array could be an array of boolean? \n\n * In chapter 55.2.5 (protocol-flow) is stated: «when column encryption is\n used, the plaintext is always in text format (not binary format).». Does it\n means parameter \"resultFormat\" in \"PQexecPrepared2\" should always be 0? If\n yes, is it worth keeping this argument? Moreover, this format constraint\n should probably be explained in the libpq page as well.\n\n# Protocol\n\n * In the ColumnEncryptionKey message, it seems the field holding the length\n key material is redundant with the message length itself, as all other\n fields have a known size. The key material length is the message length -\n (4+4+4+2). For comparison, the AuthenticationSASLContinue message has a\n variable data size but rely only on the message length without additional\n field.\n\n * I wonder if encryption related fields in ParameterDescription and\n RowDescription could be optional somehow? The former might be quite large\n when using a lot of parameters (like, imagine a large and ugly\n \"IN($1...$N)\"). On the other hand, these messages are not sent in high\n frequency anyway...\n\n# libpq\n\n Would it be possible to have an encryption-ready PQexecParams() equivalent\n of what PQprepare/describe/exec do?\n\n# psql\n\n * You already mark \\d in the TODO. 
But having some psql command to list the\n   known CEK/CMK might be useful as well.\n\n * About write queries using psql, would it be possible to use psql\n   parameters? E.g.:\n\n   => \\set c1 myval\n   => INSERT INTO mytable VALUES (:'c1') \\gencr\n\n# Manual tests\n\n * The lookup error message is shown twice for some reason:\n\n   => select * from test_tce;\n   no CMK lookup found for realm \"\"\n   \n   no CMK lookup found for realm \"\"\n\n   It might be worth adding the column name and the CMK/CEK names related to each\n   error message? Last, notice the useless empty line between messages.\n\n * When \"DROP IF EXISTS\" is used on a missing CEK or CMK, the command raises an\n   \"unrecognized object type\" error:\n\n   => DROP COLUMN MASTER KEY IF EXISTS noexists;\n   ERROR: unrecognized object type: 10\n   => DROP COLUMN ENCRYPTION KEY IF EXISTS noexists;\n   ERROR: unrecognized object type: 8\n\n * I was wondering what \"pg_typeof\" should return. It currently returns\n   \"pg_encrypted_*\". If this is supposed to be transparent from the client\n   perspective, shouldn't it return \"attrealtypid\" when the field is encrypted?\n\n * Any reason not to support altering the CMK realm?\n\nThis patch is really interesting and would be a nice addition to the core.\n\nThanks!\n\nRegards,\n\n\n", "msg_date": "Fri, 28 Oct 2022 12:16:29 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Hi,\n\nHere are a few more things I noticed:\n\nIf a CEK is encrypted with cmk1 and cmk2, but cmk1 isn't found on the \nclient, the following error is printed twice for the very first SELECT \nstatement:\n\n   could not open file \"/path/to/cmk1.pem\": No such file or directory\n\n...and nothing is returned. The next queries in the same session would \nwork correctly (cmk2 is used for the decryption of the CEK). 
An INSERT \nstatement is handled properly, though: one (and only one) error \nmessage, and the line is actually inserted in all cases.\n\nFor example:\n\n   postgres=# SELECT * FROM customers ;\n   could not open file \"/path/to/cmk1.pem\": No such file or directory\n\n   could not open file \"/path/to/cmk1.pem\": No such file or directory\n\n   postgres=# SELECT * FROM customers ;\n    id | name  | creditcard_num\n   ----+-------+-----------------\n     1 | toto  | 546843351354245\n     2 | babar | 546843351354245\n\n<close and open new psql session>\n\n   postgres=# INSERT INTO customers (id, name, creditcard_num) VALUES \n   ($1, $2, $3) \\gencr '3' 'toto' '546888351354245';\n   could not open file \"/path/to/cmk1.pem\": No such file or directory\n\n   INSERT 0 1\n   postgres=# SELECT * FROM customers ;\n    id | name  | creditcard_num\n   ----+-------+-----------------\n     1 | toto  | 546843351354245\n     2 | babar | 546843351354245\n     3 | toto  | 546888351354245\n\n\n From the documentation of CREATE COLUMN MASTER KEY, it looks like the \nREALM is optional, but both\n   CREATE COLUMN MASTER KEY cmk1;\nand\n   CREATE COLUMN MASTER KEY cmk1 WITH ();\nreturn a syntax error.\n\n\nAbout AEAD, the documentation says:\n > The “associated data” in these algorithms consists of 4 bytes: The \nASCII letters P and G (byte values 80 and 71), followed by the algorithm \nID as a 16-bit unsigned integer in network byte order.\n\nMy guess is that it serves no real purpose, did I misunderstand?\n\n\n", "msg_date": "Fri, 28 Oct 2022 16:07:20 +0200", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Here is another updated patch.  Some preliminary work was committed, \nwhich allowed this patch to get a bit smaller. 
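On the associated-data question raised earlier: in an AEAD scheme the associated data is covered by the authentication tag but not encrypted, so altering it makes decryption fail. Binding the algorithm ID into the tag this way presumably prevents a ciphertext from being presented for decryption under a different algorithm ID. A small sketch of the documented 4-byte header (the algorithm ID value is arbitrary here):

```python
# The documented associated data: ASCII "P" and "G" (byte values 80 and
# 71) followed by the algorithm ID as a 16-bit unsigned integer in
# network byte order. An AEAD cipher authenticates these bytes without
# encrypting them, so a ciphertext cannot later be passed off as having
# been produced under a different algorithm ID.
import struct

def associated_data(algorithm_id: int) -> bytes:
    return b"PG" + struct.pack("!H", algorithm_id)

ad = associated_data(130)           # 130 is just an example value
assert ad == bytes([80, 71, 0, 130])
assert len(ad) == 4

# Round-tripping the header back out:
magic, alg = ad[:2], struct.unpack("!H", ad[2:])[0]
assert magic == b"PG" and alg == 130
```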
I have incorporated some \nrecent reviews, and also fixed some issues pointed out by recent CI \nadditions (address sanitizer etc.).\n\nThe psql situation in this patch is temporary: It still has the \\gencr \ncommand from previous versions, but I plan to fold this into the new \n\\bind command.\n\n\nOn 14.10.22 08:27, Peter Eisentraut wrote:\n> Here is an updated version with the tests on Windows working again, and \n> some typos fixed.\n> \n> On 27.09.22 15:51, Peter Eisentraut wrote:\n>> Updated version with meson build system support added (for added files \n>> and new tests).\n>>\n>> On 21.09.22 23:37, Peter Eisentraut wrote:\n>>> New version with some merge conflicts resolved, and I have worked to \n>>> resolve several \"TODO\" items that I had noted in the code.\n>>>\n>>> On 13.09.22 10:27, Peter Eisentraut wrote:\n>>>> Here is an updated patch that resolves some merge conflicts; no \n>>>> functionality changes over v6.\n>>>>\n>>>> On 30.08.22 13:35, Peter Eisentraut wrote:\n>>>>> Here is an updated patch.\n>>>>>\n>>>>> I mainly spent time on adding a full set of DDL commands for the \n>>>>> keys. This made the patch very bulky now, but there is not really \n>>>>> anything surprising in there.  It probably needs another check of \n>>>>> permission handling etc., but it's got everything there to try it \n>>>>> out.  Along with the DDL commands, the pg_dump side is now fully \n>>>>> implemented.\n>>>>>\n>>>>> Secondly, I isolated the protocol changes into a protocol extension \n>>>>> with the name _pq_.column_encryption.  So by default there are no \n>>>>> protocol changes and this feature is disabled.  AFAICT, we haven't \n>>>>> actually ever used the _pq_ protocol extension mechanism, so it \n>>>>> would be good to review whether this was done here in the intended \n>>>>> way.\n>>>>>\n>>>>> At this point, the patch is sort of feature complete, meaning it \n>>>>> has all the concepts, commands, and interfaces that I had in mind. 
\n>>>>> I have a long list of things to recheck and tighten up, based on \n>>>>> earlier feedback and some things I found along the way.  But I \n>>>>> don't currently plan any more major architectural or design \n>>>>> changes, pending feedback.  (Also, the patch is now very big, so \n>>>>> anything additional might be better for a future separate patch.)", "msg_date": "Wed, 23 Nov 2022 19:39:10 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 28.10.22 12:16, Jehan-Guillaume de Rorthais wrote:\n> I did a review of the documentation and usability.\n\nI have incorporated some of your feedback into the v11 patch I just posted.\n\n> # Applying patch\n> \n> The patch applied on top of f13b2088fa2 without trouble. Notice a small\n> warning during compilation:\n> \n> colenccmds.c:134:27: warning: ‘encval’ may be used uninitialized\n> \n> A simple fix could be:\n> \n> +++ b/src/backend/commands/colenccmds.c\n> @@ -119,2 +119,3\n> encval = defGetString(encvalEl);\n> + *encval_p = encval;\n> }\n> @@ -132,4 +133,2\n> *alg_p = alg;\n> - if (encval_p)\n> - *encval_p = encval;\n> }\n\nfixed\n\n> # Documentation\n> \n> * In page \"create_column_encryption_key.sgml\", both encryption algorithms for\n> a CMK are declared as the default one:\n> \n> + <para>\n> + The encryption algorithm that was used to encrypt the key material of\n> + this column encryption key. 
Supported algorithms are:\n> + <itemizedlist>\n> + <listitem>\n> + <para><literal>RSAES_OAEP_SHA_1</literal> (default)</para>\n> + </listitem>\n> + <listitem>\n> + <para><literal>RSAES_OAEP_SHA_256</literal> (default)</para>\n> + </listitem>\n> + </itemizedlist>\n> + </para>\n> \n> As far as I understand the code, I suppose RSAES_OAEP_SHA_1 should be the\n> default.\n\nfixed\n\n> I believe two information should be clearly shown to user somewhere in\n> chapter 5.5 instead of being buried deep in documentation:\n> \n> * «COPY does not support column decryption», currently buried in pg_dump page\n> * «When transparent column encryption is enabled, the client encoding must\n> match the server encoding», currently buried in the protocol description\n> page.\n> \n> * In the libpq doc of PQexecPrepared2, \"paramForceColumnEncryptions\" might\n> deserve a little more detail about the array format, like «0 means \"don't\n> enforce\" and anything else enforce the encryption is enabled on this\n> column». By the way, maybe this array could be an array of boolean?\n> \n> * In chapter 55.2.5 (protocol-flow) is stated: «when column encryption is\n> used, the plaintext is always in text format (not binary format).». Does it\n> means parameter \"resultFormat\" in \"PQexecPrepared2\" should always be 0? If\n> yes, is it worth keeping this argument? Moreover, this format constraint\n> should probably be explained in the libpq page as well.\n\nI will keep these suggestions around. Some of these things will \nprobably change again, so I'll make sure to update the documentation \nwhen I touch it again.\n\n> # Protocol\n> \n> * In the ColumnEncryptionKey message, it seems the field holding the length\n> key material is redundant with the message length itself, as all other\n> fields have a known size. The key material length is the message length -\n> (4+4+4+2). 
For comparison, the AuthenticationSASLContinue message has a\n> variable data size but rely only on the message length without additional\n> field.\n\nI find that weird, though. An explicit length seems better. Things \nlike AuthenticationSASLContinue only work if they have exactly one \nvariable-length data item.\n\n> * I wonder if encryption related fields in ParameterDescription and\n> RowDescription could be optional somehow? The former might be quite large\n> when using a lot of parameters (like, imagine a large and ugly\n> \"IN($1...$N)\"). On the other hand, these messages are not sent in high\n> frequency anyway...\n\nThey are only used if you turn on the column_encryption protocol option. \n Or did you mean make them optional even then?\n\n> # libpq\n> \n> Would it be possible to have an encryption-ready PQexecParams() equivalent\n> of what PQprepare/describe/exec do?\n\nI plan to do that.\n\n> # psql\n> \n> * You already mark \\d in the TODO. But having some psql command to list the\n> known CEK/CMK might be useful as well.\n\nright\n\n> * about write queries using psql, would it be possible to use psql\n> parameters? Eg.:\n> \n> => \\set c1 myval\n> => INSERT INTO mytable VALUES (:'c1') \\gencr\n\nNo, because those are resolved by psql before libpq sees them.\n\n> # Manual tests\n> \n> * The lookup error message is shown twice for some reason:\n> \n> => select * from test_tce;\n> no CMK lookup found for realm \"\"\n> \n> no CMK lookup found for realm \"\"\n> \n> It might worth adding the column name and the CMK/CEK names related to each\n> error message? 
Last, notice the useless empty line between messages.\n\nI'll look into that.\n\n> * When \"DROP IF EXISTS\" a missing CEK or CMK, the command raise an\n> \"unrecognized object type\":\n> \n> => DROP COLUMN MASTER KEY IF EXISTS noexists;\n> ERROR: unrecognized object type: 10\n> => DROP COLUMN ENCRYPTION KEY IF EXISTS noexists;\n> ERROR: unrecognized object type: 8\n\nfixed\n\n> * I was wondering what \"pg_typeof\" should return. It currently returns\n> \"pg_encrypted_*\". If this is supposed to be transparent from the client\n> perspective, shouldn't it return \"attrealtypid\" when the field is encrypted?\n\nInteresting question. Need to think about it. I'm not sure what the \npurpose of pg_typeof really is. The only use I can recall is for pgTAP.\n\n> * any reason to not support altering the CMK realm?\n\nThis could be added. I have that in my notes.\n\n\n\n", "msg_date": "Wed, 23 Nov 2022 19:45:06 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Wed, 23 Nov 2022 19:45:06 +0100\nPeter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> On 28.10.22 12:16, Jehan-Guillaume de Rorthais wrote:\n[...]\n\n> > * I wonder if encryption related fields in ParameterDescription and\n> > RowDescription could be optional somehow? The former might be quite\n> > large when using a lot of parameters (like, imagine a large and ugly\n> > \"IN($1...$N)\"). On the other hand, these messages are not sent in high\n> > frequency anyway... \n> \n> They are only used if you turn on the column_encryption protocol option. 
\n> Or did you mean make them optional even then?\n\nI meant even when column_encryption is turned on.\n\nRegards,\n\n\n", "msg_date": "Thu, 24 Nov 2022 10:22:06 +0100", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 23.11.22 19:39, Peter Eisentraut wrote:\n> Here is another updated patch.  Some preliminary work was committed, \n> which allowed this patch to get a bit smaller.  I have incorporated some \n> recent reviews, and also fixed some issues pointed out by recent CI \n> additions (address sanitizer etc.).\n> \n> The psql situation in this patch is temporary: It still has the \\gencr \n> command from previous versions, but I plan to fold this into the new \n> \\bind command.\n\nI made a bit of progress with this now, based on recent reviews:\n\n- Cleaned up the libpq API. PQexecParams() now supports column \nencryption transparently.\n- psql \\bind can be used; \\gencr is removed.\n- Added psql \\dcek and \\dcmk commands.\n- ALTER COLUMN MASTER KEY to alter realm.", "msg_date": "Mon, 28 Nov 2022 15:05:54 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 28.11.22 15:05, Peter Eisentraut wrote:\n> On 23.11.22 19:39, Peter Eisentraut wrote:\n>> Here is another updated patch.  Some preliminary work was committed, \n>> which allowed this patch to get a bit smaller.  I have incorporated \n>> some recent reviews, and also fixed some issues pointed out by recent \n>> CI additions (address sanitizer etc.).\n>>\n>> The psql situation in this patch is temporary: It still has the \\gencr \n>> command from previous versions, but I plan to fold this into the new \n>> \\bind command.\n> \n> I made a bit of progress with this now, based on recent reviews:\n> \n> - Cleaned up the libpq API.  
PQexecParams() now supports column \n> encryption transparently.\n> - psql \\bind can be used; \\gencr is removed.\n> - Added psql \\dcek and \\dcmk commands.\n> - ALTER COLUMN MASTER KEY to alter realm.\n\nAnd another update. The main changes are that I added an 'unspecified' \nCMK algorithm, which indicates that the external KMS knows what it is \nbut the database system doesn't. This was discussed a while ago. I \nalso changed some details about how the \"cmklookup\" works in libpq. \nAlso added more code comments and documentation and rearranged some code.\n\nAccording to my local todo list, this patch is now complete.", "msg_date": "Wed, 21 Dec 2022 06:46:51 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 21.12.22 06:46, Peter Eisentraut wrote:\n> And another update.  The main changes are that I added an 'unspecified' \n> CMK algorithm, which indicates that the external KMS knows what it is \n> but the database system doesn't.  This was discussed a while ago.  I \n> also changed some details about how the \"cmklookup\" works in libpq. Also \n> added more code comments and documentation and rearranged some code.\n> \n> According to my local todo list, this patch is now complete.\n\nAnother update, with some merge conflicts resolved. I also fixed up the \nremaining TODO markers in the code, which had something to do with Perl \nand Windows. I did some more work on schema handling, e.g., CREATE \nTABLE / LIKE, views, partitioning etc. on top of encrypted columns, \nmostly tedious and repetitive, nothing interesting. I also rewrote the \ncode that extracts the underlying tables and columns corresponding to \nquery parameters. 
It's now much simpler and better encapsulated.", "msg_date": "Sat, 31 Dec 2022 15:17:25 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "\"ON (CASE WHEN a.attrealtypid <> 0 THEN a.attrealtypid ELSE a.atttypid END = t.oid)\\n\"\n\nThis breaks interoperability with older servers:\nERROR: column a.attrealtypid does not exist\n\nSame in describe.c\n\nFind attached some typos and bad indentation. I'm sending this off now\nas I've already sat on it for 2 weeks since starting to look at the\npatch.\n\n-- \nJustin", "msg_date": "Fri, 6 Jan 2023 18:34:03 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "\n\n> On Dec 31, 2022, at 6:17 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> Another update, with some merge conflicts resolved. \n\nHi Peter, thanks for the patch!\n\nI wonder if logical replication could be made to work more easily with this feature. Specifically, subscribers of encrypted columns will need the encrypted column encryption key (CEK) and the name of the column master key (CMD) as exists on the publisher, but getting access to that is not automated as far as I can see. It doesn't come through automatically as part of a subscription, and publisher's can't publish the pg_catalog tables where the keys are kept (because publishing system tables is not supported.) Is it reasonable to make available the CEK and CMK to subscribers in an automated fashion, to facilitate setting up logical replication with less manual distribution of key information? Is this already done, and I'm just not recognizing that you've done it?\n\n\nCan we do anything about the attack vector wherein a malicious DBA simply copies the encrypted datum from one row to another? 
Imagine the DBA Alice wants to murder a hospital patient Bob by removing the fact that Bob is deathly allergic to latex. She cannot modify Bob's encrypted and authenticated record, but she can easily update Bob's record with the encrypted record of a different patient Charlie. Likewise, if she wants Bob to pay Charlie's bill, she can replace Charlie's encrypted credit card number with Bob's, and once Bob is dead, he won't dispute the charges.\n\nAn encrypted-and-authenticated column value should be connected with its row in some way that Alice cannot circumvent. In the patch as you have it written, the client application can include row information in the patient record (specifically, the patient's name, ssn, etc) and verify when any patient record is retrieved that this information matches. But that's hardly \"transparent\" to the client. It's something all clients will have to do, and easy to forget to do in some code path. Also, for encrypted fixed-width columns, it is not an option. So it seems the client needs to \"salt\" (maybe not the right term for what I have in mind) the encryption with some relevant other columns, and that's something the libpq client would need to understand, and something the patch's syntax needs to support. 
Something like:\n\nCREATE TABLE patient_records (\n -- Cryptographically connected to the encrypted record\n patient_id BIGINT NOT NULL,\n patient_ssn CHAR(11),\n\n -- The encrypted record\n patient_record TEXT ENCRYPTED WITH (column_encryption_key = cek1,\n column_encryption_salt = (patient_id, patient_ssn)),\n\n -- Extra stuff, not cryptographically connected to anything\n next_of_kin TEXT,\n phone_number BIGINT,\n ...\n);\n\nI have not selected any algorithms that include such \"salt\"ing (again, maybe the wrong word) because I'm just trying to discuss the general feature, not get into the weeds about which cryptographic algorithm to select.\n\nThoughts?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 10 Jan 2023 09:26:53 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "\n\n> On Jan 10, 2023, at 9:26 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> -- Cryptographically connected to the encrypted record\n> patient_id BIGINT NOT NULL,\n> patient_ssn CHAR(11),\n> \n> -- The encrypted record\n> patient_record TEXT ENCRYPTED WITH (column_encryption_key = cek1,\n> column_encryption_salt = (patient_id, patient_ssn)),\n\nAs you mention upthread, tying columns together creates problems for statements that only operate on a subset of columns. Allowing schema designers a choice about tying the encrypted column to zero or more other columns allows them to choose which works best for their security needs.\n\nThe example above would make a statement like \"UPDATE patient_record SET patient_record = $1 \\bind '{some json whatever}'\" raise an exception at the libpq client level, but maybe that's what schema designers want it to do. 
If not, they should omit the column_encryption_salt option in the create table statement; but if so, they should expect to have to specify the other columns as part of the update statement, possibly as part of the where clause, like\n\n\tUPDATE patient_record\n\t\tSET patient_record = $1\n\t\tWHERE patient_id = 12345\n\t\t AND patient_ssn = '111-11-1111' \n\t\t\\bind '{some json record}'\n\nand have the libpq get the salt column values from the where clause (which may be tricky to implement), or perhaps use some new bind syntax like\n\n\tUPDATE patient_record\n\t\tSET patient_record = ($1:$2,$3) -- new, wonky syntax\n\t\tWHERE patient_id = $2\n\t\t AND patient_ssn = $3 \n\t\t\\bind '{some json record}' 12345 '111-11-1111'\n\nwhich would be error prone, since the sql statement could specify the ($1:$2,$3) inconsistently with the where clause, or perhaps specify the \"new\" salt columns even when not changed, like\n\n\tUPDATE patient_record\n\t\tSET patient_record = $1, patient_id = 12345, patient_ssn = \"111-11-1111\"\n\t\tWHERE patient_id = 12345\n\t\t AND patient_ssn = \"111-11-1111\"\n\t\t\\bind '{some json record}'\n\nwhich looks kind of nuts at first glance, but is grammatically consistent with cases where one or both of the patient_id or patient_ssn are also being changed, like\n\n\tUPDATE patient_record\n\t\tSET patient_record = $1, patient_id = 98765, patient_ssn = \"999-99-9999\"\n\t\tWHERE patient_id = 12345\n\t\t AND patient_ssn = \"111-11-1111\"\n\t\t\\bind '{some json record}'\n\nOr, of course, you can ignore these suggestions or punt them to some future patch that extends the current work, rather than trying to get it all done in the first column encryption commit. 
But it seems useful to think about what future directions would be, to avoid coding ourselves into a corner, making such future work harder.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 10 Jan 2023 11:47:11 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Sat, 31 Dec 2022 at 19:47, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 21.12.22 06:46, Peter Eisentraut wrote:\n> > And another update. The main changes are that I added an 'unspecified'\n> > CMK algorithm, which indicates that the external KMS knows what it is\n> > but the database system doesn't. This was discussed a while ago. I\n> > also changed some details about how the \"cmklookup\" works in libpq. Also\n> > added more code comments and documentation and rearranged some code.\n> >\n> > According to my local todo list, this patch is now complete.\n>\n> Another update, with some merge conflicts resolved. I also fixed up the\n> remaining TODO markers in the code, which had something to do with Perl\n> and Windows. I did some more work on schema handling, e.g., CREATE\n> TABLE / LIKE, views, partitioning etc. on top of encrypted columns,\n> mostly tedious and repetitive, nothing interesting. I also rewrote the\n> code that extracts the underlying tables and columns corresponding to\n> query parameters. 
It's now much simpler and better encapsulated.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\n5f6401f81cb24bd3930e0dc589fc4aa8b5424cdc ===\n=== applying patch ./v14-0001-Transparent-column-encryption.patch\n....\nHunk #1 FAILED at 1109.\n....\n1 out of 5 hunks FAILED -- saving rejects to file doc/src/sgml/protocol.sgml.rej\n....\npatching file doc/src/sgml/ref/create_table.sgml\nHunk #3 FAILED at 351.\nHunk #4 FAILED at 704.\n2 out of 4 hunks FAILED -- saving rejects to file\ndoc/src/sgml/ref/create_table.sgml.rej\n....\nHunk #1 FAILED at 1420.\nHunk #2 FAILED at 4022.\n2 out of 2 hunks FAILED -- saving rejects to file\ndoc/src/sgml/ref/psql-ref.sgml.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3718.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 11 Jan 2023 22:16:02 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 10.01.23 18:26, Mark Dilger wrote:\n> I wonder if logical replication could be made to work more easily with this feature. Specifically, subscribers of encrypted columns will need the encrypted column encryption key (CEK) and the name of the column master key (CMD) as exists on the publisher, but getting access to that is not automated as far as I can see. It doesn't come through automatically as part of a subscription, and publisher's can't publish the pg_catalog tables where the keys are kept (because publishing system tables is not supported.) Is it reasonable to make available the CEK and CMK to subscribers in an automated fashion, to facilitate setting up logical replication with less manual distribution of key information? 
Is this already done, and I'm just not recognizing that you've done it?\n\nThis would be done as part of DDL replication.\n\n> Can we do anything about the attack vector wherein a malicious DBA simply copies the encrypted datum from one row to another?\n\nWe discussed this earlier [0]. This patch is not that feature. We \ncould get there eventually, but it would appear to be an immense amount \nof additional work. We have to start somewhere.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/4fbcf5540633699fc3d81ffb59cb0ac884673a7c.camel@vmware.com\n\n\n\n", "msg_date": "Thu, 12 Jan 2023 17:32:10 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 12/31/22 06:17, Peter Eisentraut wrote:\n> On 21.12.22 06:46, Peter Eisentraut wrote:\n>> And another update.  The main changes are that I added an 'unspecified' \n>> CMK algorithm, which indicates that the external KMS knows what it is \n>> but the database system doesn't.  This was discussed a while ago.  I \n>> also changed some details about how the \"cmklookup\" works in libpq. Also \n>> added more code comments and documentation and rearranged some code.\n\nTrying to delay a review until I had \"completed it\" has only led to me\nnot reviewing, so here's a partial one. Let me know what pieces of the\nimplementation and/or architecture you're hoping to get more feedback on.\n\nI like the existing \"caveats\" documentation, and I've attached a sample\npatch with some more caveats documented, based on some of the upthread\nconversation:\n\n- text format makes fixed-length columns leak length information too\n- you only get partial protection against the Evil DBA\n- RSA-OAEP public key safety\n\n(Feel free to use/remix/discard as desired.)\n\nWhen writing the paragraph on RSA-OAEP I was reminded that we didn't\nreally dig into the asymmetric/symmetric discussion. 
Assuming that most\nfirst-time users will pick the builtin CMK encryption method, do we\nstill want to have an asymmetric scheme implemented first instead of a\nsymmetric keywrap? I'm still concerned about that public key, since it\ncan't really be made public. (And now that \"unspecified\" is available, I\nthink an asymmetric CMK could be easily created by users that have a\nniche use case, and then we wouldn't have to commit to supporting it\nforever.)\n\nFor the padding caveat:\n\n> + There is no concern if all values are of the same length (e.g., credit\n> + card numbers).\n\nI nodded along to this statement last year, and then this year I learned\nthat CCNs aren't fixed-length. So with a 16-byte block, you're probably\ngoing to be able to figure out who has an American Express card.\n\nThe column encryption algorithm is set per-column -- but isn't it\ntightly coupled to the CEK, since the key length has to match? From a\nlayperson perspective, using the same key to encrypt the same plaintext\nunder two different algorithms (if they happen to have the same key\nlength) seems like it might be cryptographically risky. Is there a\nreason I should be encouraged to do that?\n\nWith the loss of \\gencr it looks like we also lost a potential way to\nforce encryption from within psql. Any plans to add that for v1?\n\nWhile testing, I forgot how the new option worked and connected with\n`column_encryption=on` -- and then I accidentally sent unencrypted data\nto the server, since `on` means \"not enabled\". :( The server errors out\nafter the damage is done, of course, but would it be okay to strictly\nvalidate that option's values?\n\nAre there plans to document client-side implementation requirements, to\nensure cross-client compatibility? Things like the \"PG\\x00\\x01\"\nassociated data are buried at the moment (or else I've missed them in\nthe docs). 
If you're holding off until the feature is more finalized,\nthat's fine too.\n\nSpeaking of cross-client compatibility, I'm still disconcerted by the\nability to write the value \"hello world\" into an encrypted integer\ncolumn. Should clients be required to validate the text format, using\nthe attrealtypid?\n\nIt occurred to me when looking at the \"unspecified\" CMK scheme that the\nCEK doesn't really have to be an encryption key at all. In that case it\ncan function more like a (possibly signed?) cookie for lookup, or even\nbe ignored altogether if you don't want to use a wrapping scheme\n(similar to JWE's \"direct\" mode, maybe?). So now you have three ways to\nlook up or determine a column encryption key (CMK realm, CMK name, CEK\ncookie)... is that a concept worth exploring in v1 and/or the documentation?\n\nThanks,\n--Jacob", "msg_date": "Thu, 19 Jan 2023 12:48:03 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 11.01.23 17:46, vignesh C wrote:\n> On Sat, 31 Dec 2022 at 19:47, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 21.12.22 06:46, Peter Eisentraut wrote:\n>>> And another update. The main changes are that I added an 'unspecified'\n>>> CMK algorithm, which indicates that the external KMS knows what it is\n>>> but the database system doesn't. This was discussed a while ago. I\n>>> also changed some details about how the \"cmklookup\" works in libpq. Also\n>>> added more code comments and documentation and rearranged some code.\n>>>\n>>> According to my local todo list, this patch is now complete.\n>>\n>> Another update, with some merge conflicts resolved. I also fixed up the\n>> remaining TODO markers in the code, which had something to do with Perl\n>> and Windows. I did some more work on schema handling, e.g., CREATE\n>> TABLE / LIKE, views, partitioning etc. 
on top of encrypted columns,\n>> mostly tedious and repetitive, nothing interesting. I also rewrote the\n>> code that extracts the underlying tables and columns corresponding to\n>> query parameters. It's now much simpler and better encapsulated.\n> \n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\nHere is a new patch. Changes since v14:\n\n- Fixed some typos (review by Justin Pryzby)\n- Fixed backward compat. psql and pg_dump (review by Justin Pryzby)\n- Doc additions (review by Jacob Champion)\n- Validate column_encryption option in libpq (review by Jacob Champion)\n- Handle column encryption in inheritance\n- Change CEKs and CMKs to live inside schemas", "msg_date": "Wed, 25 Jan 2023 19:44:37 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 07.01.23 01:34, Justin Pryzby wrote:\n> \"ON (CASE WHEN a.attrealtypid <> 0 THEN a.attrealtypid ELSE a.atttypid END = t.oid)\\n\"\n> \n> This breaks interoperability with older servers:\n> ERROR: column a.attrealtypid does not exist\n> \n> Same in describe.c\n> \n> Find attached some typos and bad indentation. I'm sending this off now\n> as I've already sat on it for 2 weeks since starting to look at the\n> patch.\n\nThanks, I have integrated all that into the v15 patch I just posted.\n\n\n\n\n", "msg_date": "Wed, 25 Jan 2023 19:45:18 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 12.01.23 17:32, Peter Eisentraut wrote:\n>> Can we do anything about the attack vector wherein a malicious DBA \n>> simply copies the encrypted datum from one row to another?\n> \n> We discussed this earlier [0].  This patch is not that feature.  We \n> could get there eventually, but it would appear to be an immense amount \n> of additional work.  
We have to start somewhere.\n\nI've been thinking, this could be done as a \"version 2\" of the currently \nproposed feature, within the same framework. We'd extend the \nRowDescription and ParameterDescription messages to provide primary key \ninformation, some flags, then the client would have enough to know what \nto do. As you wrote in your follow-up message, a challenge would be to \nhandle statements that do not touch all the columns. We'd need to work \nthrough this and consider all the details.\n\n\n\n", "msg_date": "Wed, 25 Jan 2023 19:50:05 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 19.01.23 21:48, Jacob Champion wrote:\n> I like the existing \"caveats\" documentation, and I've attached a sample\n> patch with some more caveats documented, based on some of the upthread\n> conversation:\n> \n> - text format makes fixed-length columns leak length information too\n> - you only get partial protection against the Evil DBA\n> - RSA-OAEP public key safety\n> \n> (Feel free to use/remix/discard as desired.)\n\nI have added those in the v15 patch I just posted.\n\n> When writing the paragraph on RSA-OAEP I was reminded that we didn't\n> really dig into the asymmetric/symmetric discussion. Assuming that most\n> first-time users will pick the builtin CMK encryption method, do we\n> still want to have an asymmetric scheme implemented first instead of a\n> symmetric keywrap? I'm still concerned about that public key, since it\n> can't really be made public.\n\nI had started coding that, but one problem was that the openssl CLI \ndoesn't really provide any means to work with those kinds of keys. The \n\"openssl enc\" command always wants to mix in a password. Without that, \nthere is no way to write a test case, and more crucially no way for \nusers to set up these kinds of keys. 
Unless we write our own tooling \nfor this, which, you know, the patch just passed 400k in size.\n\n> For the padding caveat:\n> \n>> + There is no concern if all values are of the same length (e.g., credit\n>> + card numbers).\n> \n> I nodded along to this statement last year, and then this year I learned\n> that CCNs aren't fixed-length. So with a 16-byte block, you're probably\n> going to be able to figure out who has an American Express card.\n\nHeh. I have removed that parenthetical remark.\n\n> The column encryption algorithm is set per-column -- but isn't it\n> tightly coupled to the CEK, since the key length has to match? From a\n> layperson perspective, using the same key to encrypt the same plaintext\n> under two different algorithms (if they happen to have the same key\n> length) seems like it might be cryptographically risky. Is there a\n> reason I should be encouraged to do that?\n\nNot really. I was also initially confused by this setup, but that's how \nother similar systems are set up, so I thought it would be confusing to \ndo it differently.\n\n> With the loss of \\gencr it looks like we also lost a potential way to\n> force encryption from within psql. Any plans to add that for v1?\n\n\\gencr didn't do that either. We could do it. The libpq API supports \nit. We just need to come up with some syntax for psql.\n\n> While testing, I forgot how the new option worked and connected with\n> `column_encryption=on` -- and then I accidentally sent unencrypted data\n> to the server, since `on` means \"not enabled\". :( The server errors out\n> after the damage is done, of course, but would it be okay to strictly\n> validate that option's values?\n\nfixed in v15\n\n> Are there plans to document client-side implementation requirements, to\n> ensure cross-client compatibility? Things like the \"PG\\x00\\x01\"\n> associated data are buried at the moment (or else I've missed them in\n> the docs). 
If you're holding off until the feature is more finalized,\n> that's fine too.\n\nThis is documented in the protocol chapter, which I thought was the \nright place. Did you want more documentation, or in a different place?\n\n> Speaking of cross-client compatibility, I'm still disconcerted by the\n> ability to write the value \"hello world\" into an encrypted integer\n> column. Should clients be required to validate the text format, using\n> the attrealtypid?\n\nWell, we can ask them to, but we can't really require them, in a \ncryptographic sense. I'm not sure what more we can do.\n\n> It occurred to me when looking at the \"unspecified\" CMK scheme that the\n> CEK doesn't really have to be an encryption key at all. In that case it\n> can function more like a (possibly signed?) cookie for lookup, or even\n> be ignored altogether if you don't want to use a wrapping scheme\n> (similar to JWE's \"direct\" mode, maybe?). So now you have three ways to\n> look up or determine a column encryption key (CMK realm, CMK name, CEK\n> cookie)... is that a concept worth exploring in v1 and/or the documentation?\n\nI don't completely follow this.\n\n\n\n", "msg_date": "Wed, 25 Jan 2023 20:00:26 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Wed, Jan 25, 2023 at 11:00 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> > When writing the paragraph on RSA-OAEP I was reminded that we didn't\n> > really dig into the asymmetric/symmetric discussion. Assuming that most\n> > first-time users will pick the builtin CMK encryption method, do we\n> > still want to have an asymmetric scheme implemented first instead of a\n> > symmetric keywrap? 
I'm still concerned about that public key, since it\n> > can't really be made public.\n>\n> I had started coding that, but one problem was that the openssl CLI\n> doesn't really provide any means to work with those kinds of keys. The\n> \"openssl enc\" command always wants to mix in a password. Without that,\n> there is no way to write a test case, and more crucially no way for\n> users to set up these kinds of keys. Unless we write our own tooling\n> for this, which, you know, the patch just passed 400k in size.\n\nArrgh: https://github.com/openssl/openssl/issues/10605\n\n> > The column encryption algorithm is set per-column -- but isn't it\n> > tightly coupled to the CEK, since the key length has to match? From a\n> > layperson perspective, using the same key to encrypt the same plaintext\n> > under two different algorithms (if they happen to have the same key\n> > length) seems like it might be cryptographically risky. Is there a\n> > reason I should be encouraged to do that?\n>\n> Not really. I was also initially confused by this setup, but that's how\n> other similar systems are set up, so I thought it would be confusing to\n> do it differently.\n\nWhich systems let you mix and match keys and algorithms this way? I'd\nlike to take a look at them.\n\n> > With the loss of \\gencr it looks like we also lost a potential way to\n> > force encryption from within psql. Any plans to add that for v1?\n>\n> \\gencr didn't do that either. We could do it. The libpq API supports\n> it. We just need to come up with some syntax for psql.\n\nDo you think people would rather set encryption for all parameters at\nonce -- something like \\encbind -- or have the ability to mix\nencrypted and unencrypted parameters?\n\n> > Are there plans to document client-side implementation requirements, to\n> > ensure cross-client compatibility? Things like the \"PG\\x00\\x01\"\n> > associated data are buried at the moment (or else I've missed them in\n> > the docs). 
If you're holding off until the feature is more finalized,\n> > that's fine too.\n>\n> This is documented in the protocol chapter, which I thought was the\n> right place. Did you want more documentation, or in a different place?\n\nI just missed it; sorry.\n\n> > Speaking of cross-client compatibility, I'm still disconcerted by the\n> > ability to write the value \"hello world\" into an encrypted integer\n> > column. Should clients be required to validate the text format, using\n> > the attrealtypid?\n>\n> Well, we can ask them to, but we can't really require them, in a\n> cryptographic sense. I'm not sure what more we can do.\n\nRight -- I just mean that clients need to pay more attention to it\nnow, whereas before they may have delegated correctness to the server.\nThe problem is documented in the context of deterministic encryption,\nbut I think it applies to randomized as well.\n\nMore concretely: should psql allow you to push arbitrary text into an\nencrypted \\bind parameter, like it does now?\n\n> > It occurred to me when looking at the \"unspecified\" CMK scheme that the\n> > CEK doesn't really have to be an encryption key at all. In that case it\n> > can function more like a (possibly signed?) cookie for lookup, or even\n> > be ignored altogether if you don't want to use a wrapping scheme\n> > (similar to JWE's \"direct\" mode, maybe?). So now you have three ways to\n> > look up or determine a column encryption key (CMK realm, CMK name, CEK\n> > cookie)... is that a concept worth exploring in v1 and/or the documentation?\n>\n> I don't completely follow this.\n\nYeah, I'm not expressing it very well. My feeling is that the\norganization system here -- a realm \"contains\" multiple CMKs, a CMK\nencrypts multiple CEKs -- is so general and flexible that it may need\nsome suggested guardrails for people to use it sanely. I just don't\nknow what those guardrails should be. 
I was motivated by the\nrealization that CEKs don't even need to be keys.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Mon, 30 Jan 2023 14:30:32 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 30.01.23 23:30, Jacob Champion wrote:\n>>> The column encryption algorithm is set per-column -- but isn't it\n>>> tightly coupled to the CEK, since the key length has to match? From a\n>>> layperson perspective, using the same key to encrypt the same plaintext\n>>> under two different algorithms (if they happen to have the same key\n>>> length) seems like it might be cryptographically risky. Is there a\n>>> reason I should be encouraged to do that?\n>>\n>> Not really. I was also initially confused by this setup, but that's how\n>> other similar systems are set up, so I thought it would be confusing to\n>> do it differently.\n> \n> Which systems let you mix and match keys and algorithms this way? I'd\n> like to take a look at them.\n\nSee here for example: \nhttps://learn.microsoft.com/en-us/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15\n\n>>> With the loss of \\gencr it looks like we also lost a potential way to\n>>> force encryption from within psql. Any plans to add that for v1?\n>>\n>> \\gencr didn't do that either. We could do it. The libpq API supports\n>> it. We just need to come up with some syntax for psql.\n> \n> Do you think people would rather set encryption for all parameters at\n> once -- something like \\encbind -- or have the ability to mix\n> encrypted and unencrypted parameters?\n\nFor pg_dump, I'd like a mode that makes all values parameters of an \nINSERT statement. But obviously not all of those will be encrypted. 
So \nI think we'd want a per-parameter syntax.\n\n> More concretely: should psql allow you to push arbitrary text into an\n> encrypted \\bind parameter, like it does now?\n\nWe don't have any data type awareness like that now in psql or libpq. \nIt would be quite a change to start now. How would that deal with data \ntype extensibility, is an obvious question to start with. Don't know.\n\n\n", "msg_date": "Tue, 31 Jan 2023 14:25:55 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Tue, Jan 31, 2023 at 5:26 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> See here for example:\n> https://learn.microsoft.com/en-us/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15\n\nHm. I notice they haven't implemented more than one algorithm, so I\nwonder if they're going to be happy with their decision to\nmix-and-match when number two arrives.\n\n> For pg_dump, I'd like a mode that makes all values parameters of an\n> INSERT statement. But obviously not all of those will be encrypted. So\n> I think we'd want a per-parameter syntax.\n\nMakes sense. Maybe something that defaults to encrypted with opt-out\nper parameter?\n\n UPDATE t SET name = $1 WHERE id = $2\n \\encbind \"Jacob\" plaintext(24)\n\n> > More concretely: should psql allow you to push arbitrary text into an\n> > encrypted \\bind parameter, like it does now?\n>\n> We don't have any data type awareness like that now in psql or libpq.\n> It would be quite a change to start now.\n\nI agree. 
It just feels weird that a misbehaving client can \"attack\"\nthe other client implementations using it, and we don't make any\nattempt to mitigate it.\n\n--Jacob\n\n\n", "msg_date": "Fri, 3 Feb 2023 12:43:24 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "\n\n> On Jan 25, 2023, at 10:44 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> Here is a new patch. Changes since v14:\n> \n> - Fixed some typos (review by Justin Pryzby)\n> - Fixed backward compat. psql and pg_dump (review by Justin Pryzby)\n> - Doc additions (review by Jacob Champion)\n> - Validate column_encryption option in libpq (review by Jacob Champion)\n> - Handle column encryption in inheritance\n> - Change CEKs and CMKs to live inside schemas<v15-0001-Transparent-column-encryption.patch>\n\nThanks Peter. Here are some observations about the documentation in patch version 15.\n\nIn acronyms.sgml, the CEK and CMK entries should link to documentation, perhaps linkend=\"glossary-column-encryption-key\" and linkend=\"glossary-column-master-key\". These glossary entries should in turn link to linkend=\"ddl-column-encryption\".\n\nIn ddl.sgml, the sentence \"forcing encryption of certain parameters in the client library (see its documentation)\" should link to linkend=\"libpq-connect-column-encryption\".\n\nDid you intend the use of \"transparent data encryption\" (rather than \"transparent column encryption\") in datatype.sgml? If so, what's the difference?\n\nIs this feature intended to be available from ecpg? If so, can we maybe include an example in 36.3.4. Prepared Statements showing how to pass the encrypted values securely. 
If not, can we include verbiage about that limitation, so folks don't waste time trying to figure out how to do it?\n\nThe documentation for pg_dump (and pg_dumpall) now includes a --decrypt-encrypted-columns option, which I suppose requires cmklookup to first be configured, and for PGCMKLOOKUP to be exported. There isn't anything in the pg_dump docs about this, though, so maybe a link to section 5.5.3 with a warning about not running pg_dump this way on the database server itself?\n\nHow does a psql user mark a parameter as having forced encryption? A libpq user can specify this in the paramFormats array, but I don't see any syntax for doing this from psql. None of the perl tap tests you've included appear to do this (except indirectly when calling test_client); grep'ing for the libpq error message \"parameter with forced encryption is not to be encrypted\" in the tests has no matches. Is it just not possible? I thought you'd mentioned some syntax for this when we spoke in person, but I don't see it now.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sat, 11 Feb 2023 13:54:56 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "\n\n> On Feb 11, 2023, at 1:54 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Here are some observations \n\nI should mention, src/sgml/html/libpq-exec.html needs clarification:\n\n> paramFormats[]: Specifies whether parameters are text (put a zero in the array entry for the corresponding parameter) or binary (put a one in the array entry for the corresponding parameter). 
If the array pointer is null then all parameters are presumed to be text strings.\n\nPerhaps you should edit this last sentence to say that all parameters are presumed to be text strings without forced encryption.\n\n> Values passed in binary format require knowledge of the internal representation expected by the backend. For example, integers must be passed in network byte order. Passing numeric values requires knowledge of the server storage format, as implemented in src/backend/utils/adt/numeric.c::numeric_send() and src/backend/utils/adt/numeric.c::numeric_recv().\n\n> When column encryption is enabled, the second-least-significant byte of this parameter specifies whether encryption should be forced for a parameter.\n\nThe value 0x10 has a one as its second-least-significant *nibble*, but that's a really strange way to describe the high-order nibble, and perhaps not what you mean. Could you clarify?\n\n> Set this byte to one to force encryption.\n\nI think setting the byte to one (0x01) means \"binary unencrypted\", not \"force encryption\". Don't you mean to set this byte to 16? \n\n> For example, use the C code literal 0x10 to specify text format with forced encryption. If the array pointer is null then encryption is not forced for any parameter.\n> If encryption is forced for a parameter but the parameter does not correspond to an encrypted column on the server, then the call will fail and the parameter will not be sent. This can be used for additional security against a compromised server. (The drawback is that application code then needs to be kept up to date with knowledge about which columns are encrypted rather than letting the server specify this.)\n\n I think you should say something about how specifying 0x11 will behave -- in other words, asking for encrypted binary data. 
I believe that this will draw a \"format must be text for encrypted parameter\" error, and that the docs should clearly say so.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sun, 12 Feb 2023 06:48:29 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "New patch.\n\nPer some feedback, I have renamed this feature. People didn't like the \n\"transparent\", for various reasons. The new name I came up with is \n\"automatic client-side column-level encryption\". This also matches the \nterminology used in other products better. (Maybe the acronym ACSCLE -- \npronounced \"a chuckle\" -- will catch on.) I'm also using various \nsubsets of that name when the context is clear.\n\nOther changes since v15:\n\n- CEKs and CMKs now have USAGE privileges. (There are some TODO markers \nwhere I got too bored with boilerplate. I will fill those in, but the \nidea should be clear.)\n\n- Renamed attrealtypid to attusertypid. (It wasn't really \"real\".)\n- Added corresponding attusertypmod.\n- Removed attencalg, it's now stored in atttypmod.\n(The last three together make the whole attribute storage work more \nsensibly and smoothly.)\n\n- Various documentation changes (review by Mark Dilger)\n- Added more explicit documentation that this feature is not to protect \nagainst an \"evil DBA\".", "msg_date": "Wed, 22 Feb 2023 11:25:50 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 11.02.23 22:54, Mark Dilger wrote:\n> Thanks Peter. Here are some observations about the documentation in patch version 15.\n> \n> In acronyms.sgml, the CEK and CMK entries should link to documentation, perhaps linkend=\"glossary-column-encryption-key\" and linkend=\"glossary-column-master-key\". 
These glossary entries should in turn link to linkend=\"ddl-column-encryption\".\n> \n> In ddl.sgml, the sentence \"forcing encryption of certain parameters in the client library (see its documentation)\" should link to linkend=\"libpq-connect-column-encryption\".\n> \n> Did you intend the use of \"transparent data encryption\" (rather than \"transparent column encryption\") in datatype.sgml? If so, what's the difference?\n\nThese are all addressed in the v16 patch I just posted.\n\n> Is this feature intended to be available from ecpg? If so, can we maybe include an example in 36.3.4. Prepared Statements showing how to pass the encrypted values securely. If not, can we include verbiage about that limitation, so folks don't waste time trying to figure out how to do it?\n\nIt should \"just work\". I will give this a try sometime, but I don't see \nwhy it wouldn't work.\n\n> The documentation for pg_dump (and pg_dumpall) now includes a --decrypt-encrypted-columns option, which I suppose requires cmklookup to first be configured, and for PGCMKLOOKUP to be exported. There isn't anything in the pg_dump docs about this, though, so maybe a link to section 5.5.3 with a warning about not running pg_dump this way on the database server itself?\n\nAlso addressed in v16.\n\n> How does a psql user mark a parameter as having forced encryption? A libpq user can specify this in the paramFormats array, but I don't see any syntax for doing this from psql. None of the perl tap tests you've included appear to do this (except indirectly when calling test_client); grep'ing for the libpq error message \"parameter with forced encryption is not to be encrypted\" in the tests has no matches. Is it just not possible? I thought you'd mentioned some syntax for this when we spoke in person, but I don't see it now.\n\nThis has been asked about before. We just need to come up with a syntax \nfor it. 
The issue is contained inside psql.\n\n\n\n\n", "msg_date": "Wed, 22 Feb 2023 11:29:41 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 12.02.23 15:48, Mark Dilger wrote:\n> I should mention, src/sgml/html/libpq-exec.html needs clarification:\n> \n>> paramFormats[]Specifies whether parameters are text (put a zero in the array entry for the corresponding parameter) or binary (put a one in the array entry for the corresponding parameter). If the array pointer is null then all parameters are presumed to be text strings.\n> \n> Perhaps you should edit this last sentence to say that all parameters are presumed to be test strings without forced encryption.\n\nThis is actually already mentioned later in that section.\n\n>> When column encryption is enabled, the second-least-significant byte of this parameter specifies whether encryption should be forced for a parameter.\n> \n> The value 0x10 has a one as its second-least-significant *nibble*, but that's a really strange way to describe the high-order nibble, and perhaps not what you mean. Could you clarify?\n> \n>> Set this byte to one to force encryption.\n> \n> I think setting the byte to one (0x01) means \"binary unencrypted\", not \"force encryption\". Don't you mean to set this byte to 16?\n> \n>> For example, use the C code literal 0x10 to specify text format with forced encryption. If the array pointer is null then encryption is not forced for any parameter.\n>> If encryption is forced for a parameter but the parameter does not correspond to an encrypted column on the server, then the call will fail and the parameter will not be sent. This can be used for additional security against a compromised server. 
(The drawback is that application code then needs to be kept up to date with knowledge about which columns are encrypted rather than letting the server specify this.)\n\nThis was me being confused. I adjusted the text to use \"half-byte\".\n\n> I think you should say something about how specifying 0x11 will behave -- in other words, asking for encrypted binary data. I believe that this is will draw a \"format must be text for encrypted parameter\" error, and that the docs should clearly say so.\n\ndone\n\n\n\n", "msg_date": "Wed, 22 Feb 2023 11:32:19 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 22.02.23 11:25, Peter Eisentraut wrote:\n> Other changes since v15:\n> \n> - CEKs and CMKs now have USAGE privileges.  (There are some TODO markers \n> where I got too bored with boilerplate.  I will fill those in, but the \n> idea should be clear.)\n\nNew patch. The above is all filled in now.\n\nI also figured we need support in the DISCARD command to clear the \nsession state of what keys have already been sent, for the benefit of \nconnection poolers, so I added an option there.\n\nThe only thing left on my list for this whole thing is some syntax in \npsql to force encryption for a parameter. But that could also be done \nas a separate patch.", "msg_date": "Tue, 28 Feb 2023 21:28:38 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Here is an updated patch. 
I have done some cosmetic polishing and fixed \na minor Windows-related bug.\n\nIn my mind, the patch is complete.\n\nIf someone wants to do some in-depth code review, I suggest focusing on \nthe following files:\n\n* src/backend/access/common/printtup.c\n* src/backend/commands/colenccmds.c\n* src/backend/commands/tablecmds.c\n* src/backend/parser/parse_param.c\n* src/interfaces/libpq/fe-exec.c\n* src/interfaces/libpq/fe-protocol3.c\n* src/interfaces/libpq/libpq-fe.h\n\n(Most other files are DDL boilerplate or otherwise not too interesting.)", "msg_date": "Fri, 10 Mar 2023 08:18:34 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "\n\n> On Mar 9, 2023, at 11:18 PM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> Here is an updated patch.\n\nThanks, Peter. The patch appears to be in pretty good shape, but I have a few comments and concerns.\n\nCEKIsVisible() and CMKIsVisible() are obviously copied from TSParserIsVisible(), and the code comments weren't fully updated. Specifically, the phrase \"hidden by another parser of the same name\" should be updated to not mention \"parser\".\n\nWhy does get_cmkalg_name() return the string \"unspecified\" for PG_CMK_UNSPECIFIED, but the next function get_cmkalg_jwa_name() returns NULL for PG_CMK_UNSPECIFIED? It seems they would both return NULL, or both return \"unspecified\". If there's a reason for the divergence, could you add a code comment to clarify? BTW, get_cmkalg_jwa_name() has no test coverage.\n \nLooking further at code coverage, the new conditional in printsimple_startup() is never tested with (MyProcPort->column_encryption_enabled), so the block is never entered. 
This would seem to be a consequence of backends like walsender not using column encryption, which is not terribly surprising, but it got me wondering if you had a particular client use case in mind when you added this block?\n\nThe new function pg_encrypted_in() appears totally untested, but I have to wonder if that's because we're not round-trip testing pg_dump with column encryption...? The code coverage in pg_dump looks fairly decent, but some column encryption code is not covered.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sat, 11 Mar 2023 10:08:32 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Hi,\n\nOn 2023-03-10 08:18:34 +0100, Peter Eisentraut wrote:\n> Here is an updated patch. I have done some cosmetic polishing and fixed a\n> minor Windows-related bug.\n> \n> In my mind, the patch is complete.\n> \n> If someone wants to do some in-depth code review, I suggest focusing on the\n> following files:\n> \n> * src/backend/access/common/printtup.c\n\nHave you done benchmarks of some simple workloads to verify this doesn't cause\nslowdowns (when not using encryption, obviously)? 
printtup.c is a performance\nsensitive portion for simple queries, particularly when they return multiple\ncolumns.\n\nAnd making tupledescs even wider is likely to have some price, both due to the\nincrease in memory usage, and due to the lower cache density - and that's code\nwhere we're already hurting noticeably.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 11 Mar 2023 16:11:25 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 11.03.23 19:08, Mark Dilger wrote:\n> CEKIsVisible() and CMKIsVisible() are obviously copied from TSParserIsVisible(), and the code comments weren't fully updated. Specifically, the phrase \"hidden by another parser of the same name\" should be updated to not mention \"parser\".\n\nfixed\n\n> \n> Why does get_cmkalg_name() return the string \"unspecified\" for PG_CMK_UNSPECIFIED, but the next function get_cmkalg_jwa_name() returns NULL for PG_CMK_UNSPECIFIED? It seems they would both return NULL, or both return \"unspecified\". If there's a reason for the divergence, could you add a code comment to clarify?\n\nAdded a comment.\n\n> BTW, get_cmkalg_jwa_name() has no test coverage.\n\nOk, I'll look into it.\n\n> Looking further at code coverage, the new conditional in printsimple_startup() is never tested with (MyProcPort->column_encryption_enabled), so the block is never entered. This would seem to be a consequence of backends like walsender not using column encryption, which is not terribly surprising, but it got me wondering if you had a particular client use case in mind when you added this block?\n\nAFAICT, the relationship between printsimple.c and the replication \nprotocol is not actually firmly defined anywhere, it just happens that \nthey are used together. 
So I feel the column encryption mode needs to \nbe supported, technically, even if nothing is using it right now.\n\n> The new function pg_encrypted_in() appears totally untested, but I have to wonder if that's because we're not round-trip testing pg_dump with column encryption...? The code coverage in pg_dump looks fairly decent, but some column encryption code is not covered.\n\nI have added test coverage for pg_encrypted_in() (via a COPY round-trip \ntest under src/test/column_encryption), as well as additional \ncoverage in pg_dump and some DDL commands. I didn't find any obvious \ngaps in test coverage elsewhere.", "msg_date": "Mon, 13 Mar 2023 21:21:04 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 12.03.23 01:11, Andres Freund wrote:\n> Have you done benchmarks of some simple workloads to verify this doesn't cause\n> slowdowns (when not using encryption, obviously)? printtup.c is a performance\n> sensitive portion for simple queries, particularly when they return multiple\n> columns.\n\nThe additional code isn't used when column encryption is off, so there \nshouldn't be any impact.\n\n\n\n", "msg_date": "Mon, 13 Mar 2023 21:22:29 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Hi,\n\nOn 2023-03-13 21:22:29 +0100, Peter Eisentraut wrote:\n> On 12.03.23 01:11, Andres Freund wrote:\n> > Have you done benchmarks of some simple workloads to verify this doesn't cause\n> > slowdowns (when not using encryption, obviously)? printtup.c is a performance\n> > sensitive portion for simple queries, particularly when they return multiple\n> > columns.\n> \n> The additional code isn't used when column encryption is off, so there\n> shouldn't be any impact.\n\nIt adds branches, and it makes tupledescs wider. 
In tight spots, such as\nprinttup, that can hurt, even if the branches aren't ever entered.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 Mar 2023 13:41:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Hi,\n\nOn 2023-03-13 13:41:19 -0700, Andres Freund wrote:\n> On 2023-03-13 21:22:29 +0100, Peter Eisentraut wrote:\n> > On 12.03.23 01:11, Andres Freund wrote:\n> > > Have you done benchmarks of some simple workloads to verify this doesn't cause\n> > > slowdowns (when not using encryption, obviously)? printtup.c is a performance\n> > > sensitive portion for simple queries, particularly when they return multiple\n> > > columns.\n> >\n> > The additional code isn't used when column encryption is off, so there\n> > shouldn't be any impact.\n>\n> It adds branches, and it makes tupledescs wider. In tight spots, such as\n> printtup, that can hurt, even if the branches aren't ever entered.\n\nIn fact, I do see a noticeable, but not huge, regression:\n\n$ cat /tmp/test.sql\nSELECT * FROM pg_class WHERE oid = 1247;\n\nc=1;taskset -c 10 pgbench -n -M prepared -c$c -j$c -f /tmp/test.sql -P1 -T10\n\nwith the server also pinned to core 1, and turbo boost disabled. Nothing else\nis allowed to run on the core, or its hyperthread sibling. 
This is my setup\nfor comparing performance with the least noise in general, not related to this\npatch.\n\nhead: 28495.858509 28823.055643 28731.074311\npatch: 28298.498851 28285.426532 28489.359569\n\nA ~1.1% loss.\n\npipelined 50 statements (pgbench pinned to a different otherwise unused core)\nhead: 1147.404506 1147.587475 1151.976547\npatch: 1126.525708 1122.375337 1119.088734\n\nA ~2.2% loss.\n\nThat might not be prohibitive, but it does seem worth analyzing.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 Mar 2023 14:11:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 13.03.23 22:11, Andres Freund wrote:\n>> It adds branches, and it makes tupledescs wider. In tight spots, such as\n>> printtup, that can hurt, even if the branches aren't ever entered.\n> In fact, I do see a noticable, but not huge, regression:\n\nI tried to reproduce your measurements, but I don't have the CPU \naffinity tools like that on macOS, so the differences were lost in the \nnoise. (The absolute numbers looked very similar to yours.)\n\nI can reproduce a regression if I keep adding more columns to \npg_attribute, like 8 OID columns does start to show.\n\nI investigated whether I could move some columns to the \n\"variable-length\" part outside the tuple descriptor, but that would \nrequire major surgery in heap.c and tablecmds.c (MergeAttributes() ... \nshudder), and also elsewhere, where you currently only have a tuple \ndescriptor for looking up stuff.\n\nHow do we proceed here? A lot of user-facing table management stuff \nlike compression, statistics, collation, dropped columns, and probably \nfuture features like column reordering or periods, have to be \nrepresented in pg_attribute somehow, at least in the current \narchitecture. Do we hope that hardware keeps up with the pace at which \nwe add new features? 
Do we need to decouple tuple descriptors from \npg_attribute altogether? How do we decide what goes into the tuple \ndescriptor and what does not? I'm interested in addressing this, \nbecause obviously we do want the ability to add more features in the \nfuture, but I don't know what the direction should be.\n\n\n\n", "msg_date": "Thu, 16 Mar 2023 11:26:46 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Hi,\n\nOn 2023-03-16 11:26:46 +0100, Peter Eisentraut wrote:\n> On 13.03.23 22:11, Andres Freund wrote:\n> > > It adds branches, and it makes tupledescs wider. In tight spots, such as\n> > > printtup, that can hurt, even if the branches aren't ever entered.\n> > In fact, I do see a noticable, but not huge, regression:\n> \n> I tried to reproduce your measurements, but I don't have the CPU affinity\n> tools like that on macOS, so the differences were lost in the noise. (The\n> absolute numbers looked very similar to yours.)\n> \n> I can reproduce a regression if I keep adding more columns to pg_attribute,\n> like 8 OID columns does start to show.\n>\n> [...]\n> How do we proceed here?\n\nMaybe a daft question, but why do we need a separate type and typmod for\nencrypted columns? Why isn't the fact that the column is encrypted exactly one\nnew field, and we use the existing type/typmod fields?\n\n\n> A lot of user-facing table management stuff like compression, statistics,\n> collation, dropped columns, and probably future features like column\n> reordering or periods, have to be represented in pg_attribute somehow, at\n> least in the current architecture. Do we hope that hardware keeps up with\n> the pace at which we add new features?\n\nYea, it's not great as is. 
I think it's also OK to decide that the slowdown is\nworth it in some cases - it just has to be a conscious decision, in the open.\n\n\n> Do we need to decouple tuple descriptors from pg_attribute altogether?\n\nYes. Very clearly. The amount of memory and runtime we spent on tupledescs is\ndisproportionate. A second angle is that we build tupledescs way way too\nfrequently. The executor infers them everywhere, so not even prepared\nstatements protect against that.\n\n\n> How do we decide what goes into the tuple descriptor and what does not? I'm\n> interested in addressing this, because obviously we do want the ability to\n> add more features in the future, but I don't know what the direction should\n> be.\n\nWe've had some prior discussion around this, see e.g.\nhttps://postgr.es/m/20210819114435.6r532qbadcsyfscp%40alap3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 16 Mar 2023 09:36:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 16.03.23 17:36, Andres Freund wrote:\n> Maybe a daft question, but why do we need a separate type and typmod for\n> encrypted columns? Why isn't the fact that the column is encrypted exactly one\n> new field, and we use the existing type/typmod fields?\n\nThe way this is implemented is that for an encrypted column, the real \natttypid and atttypmod are one of the encrypted special types \n(pg_encrypted_*). That way, most of the system doesn't need to care \nabout the details of encryption or whatnot, it just unpacks tuples etc. \nby looking at atttypid, atttyplen, etc., and queries on encrypted data \nbehave normally by just looking at what operators etc. those types have. \n This approach heavily contains the number of places that need to know \nabout this feature at all.\n\n>> Do we need to decouple tuple descriptors from pg_attribute altogether?\n> \n> Yes. Very clearly. 
The amount of memory and runtime we spent on tupledescs is\n> disproportionate. A second angle is that we build tupledescs way way too\n> frequently. The executor infers them everywhere, so not even prepared\n> statements protect against that.\n> \n> \n>> How do we decide what goes into the tuple descriptor and what does not? I'm\n>> interested in addressing this, because obviously we do want the ability to\n>> add more features in the future, but I don't know what the direction should\n>> be.\n> \n> We've had some prior discussion around this, see e.g.\n> https://postgr.es/m/20210819114435.6r532qbadcsyfscp%40alap3.anarazel.de\n\nThis sounds like a good plan.\n\n\n\n\n", "msg_date": "Tue, 21 Mar 2023 18:05:15 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Hi,\n\nOn 2023-03-21 18:05:15 +0100, Peter Eisentraut wrote:\n> On 16.03.23 17:36, Andres Freund wrote:\n> > Maybe a daft question, but why do we need a separate type and typmod for\n> > encrypted columns? Why isn't the fact that the column is encrypted exactly one\n> > new field, and we use the existing type/typmod fields?\n> \n> The way this is implemented is that for an encrypted column, the real\n> atttypid and atttypmod are one of the encrypted special types\n> (pg_encrypted_*). That way, most of the system doesn't need to care about\n> the details of encryption or whatnot, it just unpacks tuples etc. by looking\n> at atttypid, atttyplen, etc., and queries on encrypted data behave normally\n> by just looking at what operators etc. those types have. 
This approach\n> heavily contains the number of places that need to know about this feature\n> at all.\n\nI get that for the type, but why do we need the typmod duplicated as well?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 21 Mar 2023 10:47:58 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 21.03.23 18:47, Andres Freund wrote:\n> On 2023-03-21 18:05:15 +0100, Peter Eisentraut wrote:\n>> On 16.03.23 17:36, Andres Freund wrote:\n>>> Maybe a daft question, but why do we need a separate type and typmod for\n>>> encrypted columns? Why isn't the fact that the column is encrypted exactly one\n>>> new field, and we use the existing type/typmod fields?\n>>\n>> The way this is implemented is that for an encrypted column, the real\n>> atttypid and atttypmod are one of the encrypted special types\n>> (pg_encrypted_*). That way, most of the system doesn't need to care about\n>> the details of encryption or whatnot, it just unpacks tuples etc. by looking\n>> at atttypid, atttyplen, etc., and queries on encrypted data behave normally\n>> by just looking at what operators etc. those types have. This approach\n>> heavily contains the number of places that need to know about this feature\n>> at all.\n> \n> I get that for the type, but why do we need the typmod duplicated as well?\n\nEarlier patch versions didn't do that, but that got really confusing \nabout which type the typmod really belonged to, since code currently \nassumes that typid+typmod makes sense. 
Earlier patch versions had three \nfields (usertypid, keyid, encalg), and then I changed it to (usertypid, \nusertypmod, keyid) and instead placed the encalg into the real typmod, \nwhich made everything much cleaner.\n\n\n\n", "msg_date": "Wed, 22 Mar 2023 10:00:45 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 22.03.23 10:00, Peter Eisentraut wrote:\n>> I get that for the type, but why do we need the typmod duplicated as \n>> well?\n> \n> Earlier patch versions didn't do that, but that got really confusing \n> about which type the typmod really belonged to, since code currently \n> assumes that typid+typmod makes sense.  Earlier patch versions had three \n> fields (usertypid, keyid, encalg), and then I changed it to (usertypid, \n> usertypmod, keyid) and instead placed the encalg into the real typmod, \n> which made everything much cleaner.\n\nI thought about this some more. I think we could get rid of \nattusertypmod and just hardcode it as -1. The idea would be that if you \nask for an encrypted column of type, say, varchar(500), the server isn't \nable to enforce that anyway, so we could just prohibit specifying a \nnondefault typmod for encrypted columns.\n\nI'm not sure if there are weird types that use typmods in some way where \nthis wouldn't work. But so far I could not think of anything.\n\nI'll look into this some more.\n\n\n\n", "msg_date": "Thu, 23 Mar 2023 14:54:48 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Thu, Mar 23, 2023 at 9:55 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> I thought about this some more. I think we could get rid of\n> attusertypmod and just hardcode it as -1. 
The idea would be that if you\n> ask for an encrypted column of type, say, varchar(500), the server isn't\n> able to enforce that anyway, so we could just prohibit specifying a\n> nondefault typmod for encrypted columns.\n>\n> I'm not sure if there are weird types that use typmods in some way where\n> this wouldn't work. But so far I could not think of anything.\n>\n> I'll look into this some more.\n\nI thought we often treated atttypid, atttypmod, and attcollation as a\ntrio, these days. It seems a bit surprising that you'd end up adding\ncolumns for two out of the three.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Mar 2023 11:55:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 23.03.23 16:55, Robert Haas wrote:\n> On Thu, Mar 23, 2023 at 9:55 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> I thought about this some more. I think we could get rid of\n>> attusertypmod and just hardcode it as -1. The idea would be that if you\n>> ask for an encrypted column of type, say, varchar(500), the server isn't\n>> able to enforce that anyway, so we could just prohibit specifying a\n>> nondefault typmod for encrypted columns.\n>>\n>> I'm not sure if there are weird types that use typmods in some way where\n>> this wouldn't work. But so far I could not think of anything.\n>>\n>> I'll look into this some more.\n> \n> I thought we often treated atttypid, atttypmod, and attcollation as a\n> trio, these days. It seems a bit surprising that you'd end up adding\n> columns for two out of the three.\n\nInternally, we use all three. But for reporting to the client \n(RowDescription message), we only have slots for type and typmod. 
We \ncould in theory extend the protocol to report the collation as well, but \nit's probably not too interesting.\n\n\n\n", "msg_date": "Fri, 24 Mar 2023 11:23:34 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Hi,\n\nOn 2023-03-23 14:54:48 +0100, Peter Eisentraut wrote:\n> On 22.03.23 10:00, Peter Eisentraut wrote:\n> > > I get that for the type, but why do we need the typmod duplicated as\n> > > well?\n> > \n> > Earlier patch versions didn't do that, but that got really confusing\n> > about which type the typmod really belonged to, since code currently\n> > assumes that typid+typmod makes sense.� Earlier patch versions had three\n> > fields (usertypid, keyid, encalg), and then I changed it to (usertypid,\n> > usertypmod, keyid) and instead placed the encalg into the real typmod,\n> > which made everything much cleaner.\n> \n> I thought about this some more. I think we could get rid of attusertypmod\n> and just hardcode it as -1. The idea would be that if you ask for an\n> encrypted column of type, say, varchar(500), the server isn't able to\n> enforce that anyway, so we could just prohibit specifying a nondefault\n> typmod for encrypted columns.\n\nWhy not just use typmod for the underlying typmod? It doesn't seem like\nencrypted datums will need that? Or are you using it for something important there?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 24 Mar 2023 11:12:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 24.03.23 19:12, Andres Freund wrote:\n>> I thought about this some more. I think we could get rid of attusertypmod\n>> and just hardcode it as -1. 
The idea would be that if you ask for an\n>> encrypted column of type, say, varchar(500), the server isn't able to\n>> enforce that anyway, so we could just prohibit specifying a nondefault\n>> typmod for encrypted columns.\n> \n> Why not just use typmod for the underlying typmod? It doesn't seem like\n> encrypted datums will need that? Or are you using it for something important there?\n\nYes, the typmod of encrypted types stores the encryption algorithm.\n\n(Also, mixing a type with the typmod of another type is weird in a \nvariety of ways, so this is a quite clean solution.)\n\n\n\n", "msg_date": "Wed, 29 Mar 2023 18:08:29 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Hi,\n\nOn 2023-03-29 18:08:29 +0200, Peter Eisentraut wrote:\n> On 24.03.23 19:12, Andres Freund wrote:\n> > > I thought about this some more. I think we could get rid of attusertypmod\n> > > and just hardcode it as -1. The idea would be that if you ask for an\n> > > encrypted column of type, say, varchar(500), the server isn't able to\n> > > enforce that anyway, so we could just prohibit specifying a nondefault\n> > > typmod for encrypted columns.\n> >\n> > Why not just use typmod for the underlying typmod? It doesn't seem like\n> > encrypted datums will need that? Or are you using it for something important there?\n>\n> Yes, the typmod of encrypted types stores the encryption algorithm.\n\nWhy isn't that an attribute of pg_colenckey, given that attcek has been added\nto pg_attribute?\n\n\n> (Also, mixing a type with the typmod of another type is weird in a variety\n> of ways, so this is a quite clean solution.)\n\nIt's not an unrelated type though. It's the actual typmod of the column we're\ntalking about. 
I find it a lot less clean to make all non-CEK uses of\npg_attribute pay the price of storing three new fields.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 29 Mar 2023 09:24:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 29.03.23 18:24, Andres Freund wrote:\n> Hi,\n> \n> On 2023-03-29 18:08:29 +0200, Peter Eisentraut wrote:\n>> On 24.03.23 19:12, Andres Freund wrote:\n>>>> I thought about this some more. I think we could get rid of attusertypmod\n>>>> and just hardcode it as -1. The idea would be that if you ask for an\n>>>> encrypted column of type, say, varchar(500), the server isn't able to\n>>>> enforce that anyway, so we could just prohibit specifying a nondefault\n>>>> typmod for encrypted columns.\n>>>\n>>> Why not just use typmod for the underlying typmod? It doesn't seem like\n>>> encrypted datums will need that? Or are you using it for something important there?\n>>\n>> Yes, the typmod of encrypted types stores the encryption algorithm.\n> \n> Why isn't that an attribute of pg_colenckey, given that attcek has been added\n> to pg_attribute?\n\nOne might think that, but the precedent in other equivalent systems is \nthat you reference the key and the algorithm separately. There is some \n(admittedly not very conclusive) discussion about this near [0].\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/00b0c4f3-0d9f-dcfd-2ba0-eee5109b4963%40enterprisedb.com#147ad6faafe8cdd2c0d2fd56ec972a40\n\n>> (Also, mixing a type with the typmod of another type is weird in a variety\n>> of ways, so this is a quite clean solution.)\n> \n> It's not an unrelated type though. It's the actual typmod of the column we're\n> talking about.\n\nWhat I mean is that various parts of the system think that typid+typmod \nmake sense together. 
If the typmod actually refers to usertypid, well, \nthe code doesn't know that, so who knows what happens.\n\nAlso, with the current proposal, the system is internally consistent: \npg_encrypted_* can actually look at their own typmod and verify their \nown input values that way, which is what a typmod is for. This just \nworks out of the box.\n\n> I find it a lot less clean to make all non-CEK uses of\n> pg_attribute pay the price of storing three new fields.\n\nWith the proposed removal of usertypmod, it's only two fields: the link \nto the key and the user-facing type.\n\n\n\n", "msg_date": "Wed, 29 Mar 2023 19:08:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Hi,\n\nOn 2023-03-29 19:08:25 +0200, Peter Eisentraut wrote:\n> On 29.03.23 18:24, Andres Freund wrote:\n> > On 2023-03-29 18:08:29 +0200, Peter Eisentraut wrote:\n> > > On 24.03.23 19:12, Andres Freund wrote:\n> > > > > I thought about this some more. I think we could get rid of attusertypmod\n> > > > > and just hardcode it as -1. The idea would be that if you ask for an\n> > > > > encrypted column of type, say, varchar(500), the server isn't able to\n> > > > > enforce that anyway, so we could just prohibit specifying a nondefault\n> > > > > typmod for encrypted columns.\n> > > > \n> > > > Why not just use typmod for the underlying typmod? It doesn't seem like\n> > > > encrypted datums will need that? Or are you using it for something important there?\n> > > \n> > > Yes, the typmod of encrypted types stores the encryption algorithm.\n> > \n> > Why isn't that an attribute of pg_colenckey, given that attcek has been added\n> > to pg_attribute?\n> \n> One might think that, but the precedent in other equivalent systems is that\n> you reference the key and the algorithm separately. 
There is some\n> (admittedly not very conclusive) discussion about this near [0].\n> \n> [0]: https://www.postgresql.org/message-id/flat/00b0c4f3-0d9f-dcfd-2ba0-eee5109b4963%40enterprisedb.com#147ad6faafe8cdd2c0d2fd56ec972a40\n\nI'm very much not convinced by that. Either way, at least there should\nbe a comment mentioning that we intentionally try to allow that.\n\nEven if this feature is something we want (why?), ISTM that this should not be\nimplemented by having multiple fields in pg_attribute, but instead by a table\nreferenced by pg_attribute.attcek.\n\n\n> > > (Also, mixing a type with the typmod of another type is weird in a variety\n> > > of ways, so this is a quite clean solution.)\n> > \n> > It's not an unrelated type though. It's the actual typmod of the column we're\n> > talking about.\n> \n> What I mean is that various parts of the system think that typid+typmod make\n> sense together. If the typmod actually refers to usertypid, well, the code\n> doesn't know that, so who knows what happens.\n\nYou control what the typmod for encrypted columns does. So I don't see what\nproblems that could cause.\n\nIt seems quite likely that having a separate typmod for the encrypted type will\ncause problems down the line, because you'll end up having to copy around\ntypid+typmod for the encrypted datum and then also separately for the\nunencrypted one.\n\n\n> Also, with the current proposal, the system is internally consistent:\n> pg_encrypted_* can actually look at their own typmod and verify their own\n> input values that way, which is what a typmod is for. This just works out\n> of the box.\n> \n> > I find it a lot less clean to make all non-CEK uses of\n> > pg_attribute pay the price of storing three new fields.\n> \n> With the proposed removal of usertypmod, it's only two fields: the link to\n> the key and the user-facing type.\n\nThat feels far less clean. 
I think losing the ability to set the precision of\na numeric, or the SRID for postgis datums won't be received very well?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 29 Mar 2023 18:29:52 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 30.03.23 03:29, Andres Freund wrote:\n>> One might think that, but the precedent in other equivalent systems is that\n>> you reference the key and the algorithm separately. There is some\n>> (admittedly not very conclusive) discussion about this near [0].\n>>\n>> [0]: https://www.postgresql.org/message-id/flat/00b0c4f3-0d9f-dcfd-2ba0-eee5109b4963%40enterprisedb.com#147ad6faafe8cdd2c0d2fd56ec972a40\n> \n> I'm very much not convinced by that. Either way, there at least there should\n> be a comment mentioning that we intentionally try to allow that.\n> \n> Even if this feature is something we want (why?), ISTM that this should not be\n> implemented by having multiple fields in pg_attribute, but instead by a table\n> referenced by by pg_attribute.attcek.\n\nI don't know if it is clear to everyone here, but the key data model and \nthe surrounding DDL are exact copies of the equivalent MS SQL Server \nfeature. When I was first studying it, I had the exact same doubts \nabout this. But as I was learning more about it, it does make sense, \nbecause this matches a common pattern in key management systems, which \nis relevant because these keys ultimately map into KMS-managed keys in a \ndeployment. Moreover, 1) it is plausible that those people knew what \nthey were doing, and 2) it would be preferable to maintain alignment and \nnot create something that looks the same but is different in some small \nbut important details.\n\n>> With the proposed removal of usertypmod, it's only two fields: the link \nto the key and the user-facing type.\n> \n> That feels far less clean. 
I think loosing the ability to set the precision of\n> a numeric, or the SRID for postgis datums won't be received very well?\n\nIn my mind, and I probably wasn't explicit about this, I'm thinking \nabout what can be done now versus later.\n\nThe feature is arguably useful without typmod support, e.g., for text. \nWe could ship it like that, then do some work to reorganize pg_attribute \nand tuple descriptors to relieve some pressure on each byte, and then \nadd the typmod support back in in a future release. I think that is a \nworkable compromise.\n\n\n", "msg_date": "Thu, 30 Mar 2023 16:01:46 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Hi,\n\nOn 2023-03-30 16:01:46 +0200, Peter Eisentraut wrote:\n> On 30.03.23 03:29, Andres Freund wrote:\n> > > One might think that, but the precedent in other equivalent systems is that\n> > > you reference the key and the algorithm separately. There is some\n> > > (admittedly not very conclusive) discussion about this near [0].\n> > > \n> > > [0]: https://www.postgresql.org/message-id/flat/00b0c4f3-0d9f-dcfd-2ba0-eee5109b4963%40enterprisedb.com#147ad6faafe8cdd2c0d2fd56ec972a40\n> > \n> > I'm very much not convinced by that. Either way, there at least there should\n> > be a comment mentioning that we intentionally try to allow that.\n> > \n> > Even if this feature is something we want (why?), ISTM that this should not be\n> > implemented by having multiple fields in pg_attribute, but instead by a table\n> > referenced by by pg_attribute.attcek.\n> \n> I don't know if it is clear to everyone here, but the key data model and the\n> surrounding DDL are exact copies of the equivalent MS SQL Server feature.\n> When I was first studying it, I had the exact same doubts about this. 
But\n> as I was learning more about it, it does make sense, because this matches a\n> common pattern in key management systems, which is relevant because these\n> keys ultimately map into KMS-managed keys in a deployment. Moreover, 1) it\n> is plausible that those people knew what they were doing, and 2) it would be\n> preferable to maintain alignment and not create something that looks the\n> same but is different in some small but important details.\n\nI find it very hard to believe that details of the catalog representation like\nthis will matter to users. How would it conceivably affect users that we\nstore (key, encryption method) in pg_attribute vs storing an oid that's\neffectively a foreign key reference to (key, encryption method)?\n\n\n> > > With the proposed removal of usertypmod, it's only two fields: the link to\n> > > the key and the user-facing type.\n> > \n> > That feels far less clean. I think loosing the ability to set the precision of\n> > a numeric, or the SRID for postgis datums won't be received very well?\n> \n> In my mind, and I probably wasn't explicit about this, I'm thinking about\n> what can be done now versus later.\n> \n> The feature is arguably useful without typmod support, e.g., for text. We\n> could ship it like that, then do some work to reorganize pg_attribute and\n> tuple descriptors to relieve some pressure on each byte, and then add the\n> typmod support back in in a future release. 
I think that is a workable\n> compromise.\n\nI doubt that shipping a version of column encryption that breaks our type\nsystem is a good idea.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 30 Mar 2023 08:55:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2023-03-30 16:01:46 +0200, Peter Eisentraut wrote:\n> > On 30.03.23 03:29, Andres Freund wrote:\n> > > > One might think that, but the precedent in other equivalent systems is that\n> > > > you reference the key and the algorithm separately. There is some\n> > > > (admittedly not very conclusive) discussion about this near [0].\n> > > > \n> > > > [0]: https://www.postgresql.org/message-id/flat/00b0c4f3-0d9f-dcfd-2ba0-eee5109b4963%40enterprisedb.com#147ad6faafe8cdd2c0d2fd56ec972a40\n> > > \n> > > I'm very much not convinced by that. Either way, there at least there should\n> > > be a comment mentioning that we intentionally try to allow that.\n> > > \n> > > Even if this feature is something we want (why?), ISTM that this should not be\n> > > implemented by having multiple fields in pg_attribute, but instead by a table\n> > > referenced by by pg_attribute.attcek.\n> > \n> > I don't know if it is clear to everyone here, but the key data model and the\n> > surrounding DDL are exact copies of the equivalent MS SQL Server feature.\n> > When I was first studying it, I had the exact same doubts about this. But\n> > as I was learning more about it, it does make sense, because this matches a\n> > common pattern in key management systems, which is relevant because these\n> > keys ultimately map into KMS-managed keys in a deployment. 
Moreover, 1) it\n> > is plausible that those people knew what they were doing, and 2) it would be\n> > preferable to maintain alignment and not create something that looks the\n> > same but is different in some small but important details.\n\nI was wondering about this- is it really exactly the same, down to the\npoint that there's zero checking of what the data returned actually is\nafter it's decrypted and given to the application, and if it actually\nmatches the claimed data type?\n\n> I find it very hard to belief that details of the catalog representation like\n> this will matter to users. How would would it conceivably affect users that we\n> store (key, encryption method) in pg_attribute vs storing an oid that's\n> effectively a foreign key reference to (key, encryption method)?\n\nI do agree with this.\n\n> > > > With the proposed removal of usertypmod, it's only two fields: the link to\n> > > > the key and the user-facing type.\n> > > \n> > > That feels far less clean. I think loosing the ability to set the precision of\n> > > a numeric, or the SRID for postgis datums won't be received very well?\n> > \n> > In my mind, and I probably wasn't explicit about this, I'm thinking about\n> > what can be done now versus later.\n> > \n> > The feature is arguably useful without typmod support, e.g., for text. We\n> > could ship it like that, then do some work to reorganize pg_attribute and\n> > tuple descriptors to relieve some pressure on each byte, and then add the\n> > typmod support back in in a future release. 
I think that is a workable\n> > compromise.\n> \n> I doubt that shipping a version of column encryption that breaks our type\n> system is a good idea.\n\nAnd this.\n\nI do feel that column encryption is a useful capability and there's\nlarge parts of this approach that I agree with, but I dislike the idea\nof having our clients be able to depend on what gets returned for\nnon-encrypted columns while not being able to trust what encrypted\ncolumn results are and then trying to say it's 'transparent'. To that\nend, it seems like just saying they get back a bytea and making it clear\nthat they have to provide the validation would be clear, while keeping\nmuch of the rest. Expanding out from that I'd imagine, pie-in-the-sky\nand in some far off land, having our data type in/out validation\nfunctions moved to the common library and then adding client-side\nvalidation of the data going in/out of the encrypted columns would allow\napplication developers to be able to trust what we're returning (as long\nas they're using libpq- and we'd have to document that independent\nimplementations of the protocol have to provide this or just continue to\nreturn bytea's).\n\nNot sure how we'd manage to provide support for extensions though.\n\nThanks,\n\nStephen", "msg_date": "Thu, 30 Mar 2023 14:35:49 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 30.03.23 17:55, Andres Freund wrote:\n> I find it very hard to belief that details of the catalog representation like\n> this will matter to users. How would would it conceivably affect users that we\n> store (key, encryption method) in pg_attribute vs storing an oid that's\n> effectively a foreign key reference to (key, encryption method)?\n\nThe change you are alluding to would also affect how the DDL commands \nwork and interoperate, so it affects the user.\n\nBut also, let's not drive this design decision bottom up. 
Let's go from \nhow we want the data model and the DDL to work and then figure out \nsuitable ways to record that. I don't really know if you are just \nworried about the catalog size, or you find an actual fault with the \ndata model, or you just find it subjectively odd.\n\n>> The feature is arguably useful without typmod support, e.g., for text. We\n>> could ship it like that, then do some work to reorganize pg_attribute and\n>> tuple descriptors to relieve some pressure on each byte, and then add the\n>> typmod support back in in a future release. I think that is a workable\n>> compromise.\n> \n> I doubt that shipping a version of column encryption that breaks our type\n> system is a good idea.\n\nI don't follow how you get from that to claiming that it breaks the type \nsystem. Please provide more details.\n\n\n\n", "msg_date": "Thu, 30 Mar 2023 21:45:41 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 30.03.23 20:35, Stephen Frost wrote:\n> I do feel that column encryption is a useful capability and there's\n> large parts of this approach that I agree with, but I dislike the idea\n> of having our clients be able to depend on what gets returned for\n> non-encrypted columns while not being able to trust what encrypted\n> column results are and then trying to say it's 'transparent'. To that\n> end, it seems like just saying they get back a bytea and making it clear\n> that they have to provide the validation would be clear, while keeping\n> much of the rest.\n\n[Note that the word \"transparent\" has been removed from the feature \nname. I just didn't change the email thread name.]\n\nThese thoughts are reasonable, but I think there is a tradeoff to be \nmade between having featureful data validation and enhanced security. \nIf you want your database system to validate your data, you have to send \nit in plain text. 
If you want to have your database system not see the \nplain text, then it cannot validate it. But there is still utility in it.\n\nYou can't really depend on what gets returned even in the non-encrypted \ncase, unless you cryptographically sign the schema against modification \nor something like that. So realistically, a client that cares strongly \nabout the data it receives has to do some kind of client-side validation \nanyway.\n\nNote also that higher-level client libraries like JDBC effectively do \nclient-side data validation, for example when you call \nResultSet.getInt() etc.\n\nThis is also one of the reasons why the user-facing type declaration \nexists. You could just make all encrypted columns of type \"opaque\" or \nsomething and not make any promises about what's inside. But client \nAPIs sort of rely on the application being able to ask the result set \nfor what's inside a column value. If we just say, we don't know, then \napplications (or driver APIs) will have to be changed to accommodate \nthat, but the intention was to not require that. So instead we say, \nit's supposed to be int, and then if it's sometimes actually not int, \nthen your application throws an exception you can deal with. This is \narguably a better developer experience, even if it concerns the data \ntype purist.\n\nBut do you have a different idea about how it should work?\n\n> Expanding out from that I'd imagine, pie-in-the-sky\n> and in some far off land, having our data type in/out validation\n> functions moved to the common library and then adding client-side\n> validation of the data going in/out of the encrypted columns would allow\n> application developers to be able to trust what we're returning (as long\n> as they're using libpq- and we'd have to document that independent\n> implementations of the protocol have to provide this or just continue to\n> return bytea's).\n\nAs mentioned, some client libraries effectively already do that. 
But \neven if we could make this much more comprehensive, I don't see how this \ncan ever actually satisfy your point. It would require that all \nparticipating clients apply validation all the time, and all other \nclients can rely on that happening, which is impossible.\n\n\n", "msg_date": "Thu, 30 Mar 2023 22:10:02 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "To kick some things off for PG18, here is an updated version of the \npatch for automatic client-side column-level encryption. (See commit \nmessage included in the patch for a detailed description, if you have \nforgotten. Also, see [0] if the thread has dropped off your local mail \nstorage.)\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/89157929-c2b6-817b-6025-8e4b2d89d88f@enterprisedb.com\n\nThis patch got stuck around CF 2023-03 because expanding the size of the \ntuple descriptor (with new pg_attribute columns) had a noticeable \nperformance impact. Various work in PG17 has made it more manageable to \nhave columns in pg_attribute that are not in the tuple descriptor, and \nthis patch now takes advantage of that (and I wanted to do this merge \nsoon to verify that the changes in PG17 are usable). Otherwise, this \nversion v20 is functionally unchanged over the last posted version v19. \nObviously, it's early days, so there will be plenty of time to have \ndiscussions on various other aspects of this patch. 
I'm keeping a keen \neye on the discussion of protocol extensions, for example.", "msg_date": "Wed, 10 Apr 2024 12:12:52 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Wed, 10 Apr 2024 at 12:13, Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> To kick some things off for PG18, here is an updated version of the\n> patch for automatic client-side column-level encryption.\n\nI only read the docs and none of the code, but here is my feedback on\nthe current design:\n\n> (The CEK can't be rotated easily, since\n> that would require reading out all the data from a table/column and\n> reencrypting it. We could/should add some custom tooling for that,\n> but it wouldn't be a routine operation.)\n\nThis seems like something that requires some more thought because CEK\nrotation seems just as important as CMK rotation (often both would be\ncompromised at the same time). As far as I can tell the only way to\nrotate a CEK is by re-encrypting the column for all rows in a single\ngo at the client side, thus taking a write-lock on all rows of the\ntable. That seems quite problematic, because that makes key rotation\nan operation that requires application downtime. Allowing online\nrotation is important, otherwise almost no-one will do it preventatively\nat a regular interval.\n\nOne way to allow online CEK rotation is by allowing a column to be\nencrypted by one of several keys and/or allow a key to have multiple\nversions. And then for each row we would store which key/version it\nwas encrypted with. That way for new insertions/updates clients would\nuse the newest version. But clients would still be able to decrypt\nboth old rows with the old key and new rows encrypted with the new\nkey, because the server would give them both keys and tell which row\nwas encrypted with which. 
Then the old rows can be rewritten by a\nclient in small batches, so that writes to the table can keep working\nwhile this operation takes place.\n\nThis could even be used to allow encrypting previously unencrypted\ncolumns using something like \"ALTER COLUMN mycol ENCRYPTION KEY cek1\".\nThen unencrypted rows could be indicated by e.g. returning something\nlike NULL for the CEK.\n\n+ The plaintext inside\n+ the ciphertext is always in text format, but this is invisible to the\n+ protocol.\n\nIt seems like it would be useful to have a way of storing the\nplaintext in binary form too. I'm not saying this should be part of\nthe initial version, but it would be good to keep that in mind with\nthe design.\n\n+ The session-specific identifier of the key.\n\nIs it necessary for this identifier to be session-specific? Why not\nuse a global identifier like an oid? Anything session specific makes\nthe job of transaction poolers quite a bit harder. If this identifier\nwould be global, then the message can be forwarded as is to the client\ninstead of re-mapping identifiers between clients and servers (like is\nneeded for prepared statements).\n\n+ Additional algorithms may be added to this protocol specification without a\n+ change in the protocol version number.\n\nWhat's the reason for not requiring a version bump for this?\n\n+ If the protocol extension <literal>_pq_.column_encryption</literal> is\n+ enabled (see <xref linkend=\"protocol-flow-column-encryption\"/>), then\n+ there is also the following for each parameter:\n\nIt seems a little bit wasteful to include these for all columns, even\nthe ones that don't require encryption. 
How about only adding these\nfields when format code is 0x11?\n\nFinally, I'm trying to figure out if this really needs to be a\nprotocol extension or if a protocol version bump would work as well\nwithout introducing a lot of work for clients/poolers that don't care\nabout it (possibly with some modifications to the proposed protocol\nchanges). What makes this a bit difficult for me is that there's not\nmuch written in the documentation on what is supposed to happen for\nencrypted columns when the protocol extension is not enabled. Is the\ncontent just returned/written like it would be with a bytea? Or is\nwriting disallowed because the format code would never be set to 0x11?\n\nA related question to this is that currently libpq throws an error if\ne.g. a master key realm is not defined but another one is. Is that\nreally what we want? Is not having one of the realms really that\ndifferent from not providing any realms at all?\n\nBut regardless of these behavioural details, I think it would be fairly\neasy to add minimal \"non-support\" for this feature while supporting\nthe new protocol messages. All they would need to do is understand\nwhat the new protocol messages/fields mean and either ignore them or\nthrow a clear error. For poolers it's a different story however. For\ntransaction pooling there's quite a bit of work to be done. I already\nmentioned the session-specific ID being a problem, but even assuming\nwe change that to a global ID there are still difficulties. 
Key\ninformation is only sent by the server if it wasn't sent before in the\nsession[1], so a pooler would need to keep its own cache and send\nkeys to clients that haven't received them yet.\n\nSo yeah, I think it would make sense to put this behind a protocol\nextension feature flag, since it's fairly niche and would require\nsignificant work at the pooler side to support.\n\n\n[1]:\n+ When automatic client-side column-level encryption is enabled, the\n+ messages ColumnMasterKey and ColumnEncryptionKey can appear before\n+ RowDescription and ParameterDescription messages. Clients should collect\n+ the information in these messages and keep them for the duration of the\n+ connection. A server is not required to resend the key information for\n+ each statement cycle if it was already sent during this connection.\n\n\n", "msg_date": "Wed, 10 Apr 2024 16:14:51 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On 10.04.24 16:14, Jelte Fennema-Nio wrote:\n>> (The CEK can't be rotated easily, since\n>> that would require reading out all the data from a table/column and\n>> reencrypting it. We could/should add some custom tooling for that,\n>> but it wouldn't be a routine operation.)\n> \n> This seems like something that requires some more thought because CEK\n> rotation seems just as important as CMK rotation (often both would be\n> compromised at the same time).\n\nHopefully, the reason for key rotation is mainly that policies require \nkey rotation, not that keys get compromised all the time. That's the \nreason for having this two-tier key system in the first place. This \nseems pretty standard to me. 
For example, I can change the password on \nmy laptop's file system encryption, which somehow wraps a lower-level \nkey, but I can't reencrypt the actual file system in place.\n\n> + The plaintext inside\n> + the ciphertext is always in text format, but this is invisible to the\n> + protocol.\n> \n> It seems like it would be useful to have a way of storing the\n> plaintext in binary form too. I'm not saying this should be part of\n> the initial version, but it would be good to keep that in mind with\n> the design.\n\nTwo problems here: One, for deterministic encryption, everyone needs to \nagree on the representation, otherwise equality comparisons won't work. \n Two, if you give clients the option of storing text or binary, then \nclients also get back a mix of text or binary, and it will be a mess.\nJust giving the option of storing the payload in binary wouldn't be that \nhard, but it's not clear what you can sensibly do with that in the end.\n\n> + The session-specific identifier of the key.\n> \n> Is it necessary for this identifier to be session-specific? Why not\n> use a global identifier like an oid? Anything session specific makes\n> the job of transaction poolers quite a bit harder. If this identifier\n> would be global, then the message can be forwarded as is to the client\n> instead of re-mapping identifiers between clients and servers (like is\n> needed for prepared statements).\n\nThe point was just to avoid saying specifically that the OID will be \nsent, because then that would tie the catalog representation to the \nprotocol, which seems unnecessary. Maybe we can reword that somehow.\n\nIn terms of connection pooling, this feature as it is conceived right \nnow would only work in session pooling anyway. 
Even if the identifiers \nsomehow were global (but OIDs can also change and are not guaranteed \nunique forever), the state of which keys have already been sent is \nsession state.\n\n> + Additional algorithms may be added to this protocol specification without a\n> + change in the protocol version number.\n> \n> What's the reason for not requiring a version bump for this?\n\nThis is kind of like how SASL or TLS can add new methods dynamically without \nrequiring a new version. I mean, as we are learning, making new \nprotocol versions is kind of hard, so the point was to avoid it.\n\n> + If the protocol extension <literal>_pq_.column_encryption</literal> is\n> + enabled (see <xref linkend=\"protocol-flow-column-encryption\"/>), then\n> + there is also the following for each parameter:\n> \n> It seems a little bit wasteful to include these for all columns, even\n> the ones that don't require encryption. How about only adding these\n> fields when format code is 0x11\n\nI guess you could do that, but wouldn't that make the decoding of \nthese messages much more complicated? You would first have to read the \n\"short\" variant, decode the format, and then decide to read the rest. \nSeems weird.\n\n> Finally, I'm trying to figure out if this really needs to be a\n> protocol extension or if a protocol version bump would work as well\n> without introducing a lot of work for clients/poolers that don't care\n> about it (possibly with some modifications to the proposed protocol\n> changes).\n\nThat's not something this patch cares about, but the philosophical \ndiscussions in the other thread on protocol versioning etc. appear to \nlean toward protocol extension.\n\n> What makes this a bit difficult for me is that there's not\n> much written in the documentation on what is supposed to happen for\n> encrypted columns when the protocol extension is not enabled. 
Is the\n> content just returned/written like it would be with a bytea?\n\nYes, that's what would happen, and that's the intention, so that for \nexample you can use pg_dump to back up encrypted columns without having \nto decrypt them.\n\n> A related question to this is that currently libpq throws an error if\n> e.g. a master key realm is not defined but another one is. Is that\n> really what we want? Is not having one of the realms really that\n> different from not providing any realms at all?\n\nCan you provide a more concrete example of what scenario you have a \nconcern about?\n\n\n\n", "msg_date": "Thu, 18 Apr 2024 13:24:59 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Thu, 18 Apr 2024 at 13:25, Peter Eisentraut <peter@eisentraut.org> wrote:\n> Hopefully, the reason for key rotation is mainly that policies require\n> key rotation, not that keys get compromised all the time.\n\nThese key rotation policies are generally in place to reduce the\nimpact of a key compromise by limiting the time a compromised key is\nvalid.\n\n> This\n> seems pretty standard to me. For example, I can change the password on\n> my laptop's file system encryption, which somehow wraps a lower-level\n> key, but I can't reencrypt the actual file system in place.\n\nI think the threat model for this proposal and a laptop's file system\nencryption are different enough that the same choices/tradeoffs don't\nautomatically translate. Specifically in this proposal the unencrypted\nCEK is present on all servers that need to read/write those encrypted\nvalues. And a successful attacker would then be able to read the\nencrypted values forever with this key, because it effectively cannot\nbe rotated. That is a much bigger attack surface and risk than a\nlaptop's disk encryption. 
So, I feel quite strongly that shipping the\nproposed feature without being able to re-encrypt columns in an online\nfashion would be a mistake.\n\n> That's the\n> reason for having this two-tier key system in the first place.\n\nIf we allow for online-rotation of the actual encryption key, then\nmaybe we don't even need this two-tier system ;)\n\nNot having this two tier system would have a few benefits in my opinion:\n1. We wouldn't need to be sending encrypted key material from the\nserver to every client. Which seems nice from a security, bandwidth\nand client implementation perspective.\n2. Asymmetric encryption of columns is suddenly an option. Allowing\ncertain clients to enter encrypted data into the database but not read\nit.\n\n\n> Two problems here: One, for deterministic encryption, everyone needs to\n> agree on the representation, otherwise equality comparisons won't work.\n> Two, if you give clients the option of storing text or binary, then\n> clients also get back a mix of text or binary, and it will be a mess.\n> Just giving the option of storing the payload in binary wouldn't be that\n> hard, but it's not clear what you can sensibly do with that in the end.\n\nHow about defining at column creation time if the underlying value\nshould be binary or not? Something like:\n\nCREATE TABLE t(\n mytime timestamp ENCRYPTED WITH (column_encryption_key = cek1, binary=true)\n);\n\n> Even if the identifiers\n> somehow were global (but OIDs can also change and are not guaranteed\n> unique forever),\n\nOIDs of existing rows can't just change while a connection is active,\nright? (all I know is upgrades can change them but that seems fine)\nAlso they are unique within a catalog table, right?\n\n> the state of which keys have already been sent is\n> session state.\n\nI agree that this is the case. But it's state that can be tracked\nfairly easily by a transaction pooler. Similar to how prepared\nstatements can be tracked. 
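A hypothetical sketch of that pooler-side bookkeeping, assuming key IDs that are stable within a server session (all names here are invented; a real pooler would use a hash table per client):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical pooler-side state: remember which key IDs a given
 * client connection has already received, so key messages are
 * forwarded at most once, like prepared-statement tracking. */
#define MAX_KEYS 64

typedef struct ClientState
{
    uint32_t sent_ids[MAX_KEYS];
    int nsent;
} ClientState;

/* Returns true if the ColumnEncryptionKey message for key_id must be
 * forwarded to this client (i.e. the client has not seen it yet). */
static bool
need_forward_key(ClientState *cs, uint32_t key_id)
{
    for (int i = 0; i < cs->nsent; i++)
        if (cs->sent_ids[i] == key_id)
            return false;
    if (cs->nsent < MAX_KEYS)
        cs->sent_ids[cs->nsent++] = key_id;
    return true;
}
```

Note that this only works as-is if the same key keeps the same ID for the lifetime of the server session; with session-specific IDs the pooler would additionally have to re-map them per client.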
And this is easier to do when the IDs of\nthe same keys are the same across each session to the server, because\nif they differ then you need to do re-mapping of IDs.\n\n> This is kind of like how SASL or TLS can add new methods dynamically without\n> requiring a new version. I mean, as we are learning, making new\n> protocol versions is kind of hard, so the point was to avoid it.\n\nFair enough\n\n> I guess you could do that, but wouldn't that make the decoding of\n> these messages much more complicated? You would first have to read the\n> \"short\" variant, decode the format, and then decide to read the rest.\n> Seems weird.\n\nI see your point. But with the current approach even for queries that\ndon't return any encrypted columns, these useless fields would be part\nof the RowDescription. It seems quite annoying to add extra network\nand parsing overhead to all of your queries even if only a small\npercentage use the encryption feature. Maybe we should add a new\nmessage type instead like EncryptedRowDescription, or add some flag\nfield at the start of RowDescription that can be used to indicate that\nthere is encryption info for some of the columns.\n\n> Yes, that's what would happen, and that's the intention, so that for\n> example you can use pg_dump to back up encrypted columns without having\n> to decrypt them.\n\nOkay, makes sense. But I think it would be good to document that.\n\n> > A related question to this is that currently libpq throws an error if\n> > e.g. a master key realm is not defined but another one is. Is that\n> > really what we want? Is not having one of the realms really that\n> > different from not providing any realms at all?\n>\n> Can you provide a more concrete example of what scenario you have a\n> concern about?\n\nA server has tables A and B. A is encrypted with a master key realm X\nand B is encrypted with master key realm Y. If libpq is only given a\nkey for realm X, and it then tries to read table B, an error is\nthrown. 
While if you don't provide any realm at all, you can read from\ntable B just fine, only you will get bytea fields back.\n\n\n", "msg_date": "Thu, 18 Apr 2024 17:17:17 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Wed, Apr 10, 2024 at 6:13 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> Obviously, it's early days, so there will be plenty of time to have\n> discussions on various other aspects of this patch. I'm keeping a keen\n> eye on the discussion of protocol extensions, for example.\n\nI think the way that you handled that is clever, and along the lines\nof what I had in mind when I invented the _pq_ stuff.\n\nMore specifically, the way that the ColumnEncryptionKey and\nColumnMasterKey messages are handled is exactly the way that I was\nimagining things would work. The client uses _pq_.column_encryption to\nsignal that it can understand those messages, and the server responds\nby including them. I assume that if the client doesn't signal\nunderstanding, then the server simply omits sending those messages. (I\nhave not checked the code.)\n\nI'm less certain about the changes to the ParameterDescription and\nRowDescription messages. I see a couple of potential problems. One is\nthat, if you say you can understand column encryption messages, the\nextra fields are included even for unencrypted columns. The client\nmust choose at connection startup whether it ever wishes to read any\nencrypted data; if so, it pays a portion of that overhead all the\ntime. Another potential problem is with the scalability of this\ndesign. Suppose that we could not only encrypt columns, but also\ncompress, fold, mutilate, and spindle them. Then there might end up\nbeing a dizzying array of variation in the format of what is supposed\nto be the same message. 
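To make that concern concrete, here is a toy reader for a hypothetical per-column record whose layout changes with the negotiated extension. The wire layout is invented, not the proposed one; it only illustrates that a parser must know up front which extensions are on before it can even find the next column:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-column record: 4-byte type OID, 2-byte format code,
 * and -- only when the column-encryption extension was negotiated --
 * an extra 4-byte key ID.  This mirrors the "mutated message" shape:
 * record length depends on session-level negotiation. */
typedef struct ColDesc
{
    uint32_t typoid;
    uint16_t format;
    uint32_t key_id;            /* 0 when the extension is off */
} ColDesc;

static uint32_t
get_u32(const uint8_t *p)
{
    return ((uint32_t) p[0] << 24) | ((uint32_t) p[1] << 16) |
           ((uint32_t) p[2] << 8) | p[3];
}

/* Parse one column record; returns the number of bytes consumed. */
static size_t
parse_col(const uint8_t *p, int encryption_ext, ColDesc *out)
{
    out->typoid = get_u32(p);
    out->format = (uint16_t) ((p[4] << 8) | p[5]);
    out->key_id = 0;
    if (encryption_ext)
    {
        out->key_id = get_u32(p + 6);
        return 10;              /* extended record length */
    }
    return 6;                   /* base record length */
}
```

Each additional feature that mutates the record this way multiplies the layouts a client must handle, whereas a separate follow-on message leaves the base parser untouched.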
Perhaps it's not so bad: as long as the\ndocumentation is clear about in which order the additional fields will\nappear in the relevant messages when more than one relevant feature is\nused, it's probably not too difficult for clients to cope. And it is\nprobably also true that the precise size of, say, a RowDescription\nmessage will rarely be performance-critical. But another thought is\nthat we might try to redesign this so that we simply add more message\ntypes rather than mutating message types i.e. after sending the\nRowDescription message, if any columns are encrypted, we additionally\nsend a RowEncryptionDescription message. Then this treatment becomes\nsymmetric with the handling of ColumnEncryptionKey and ColumnMasterKey\nmessages, and there's no overhead when the feature is unused.\n\nWith regard to the Bind message, I suggest that we regard the protocol\nchange as reserving a currently-unused bit in the message to indicate\nwhether the value is pre-encrypted, without reference to the protocol\nextension. It could be legal for a client that can't understand\nencryption message from the server to supply an encrypted value to be\ninserted into a column. And I don't think we would ever want the bit\nthat's being reserved here to be used by some other extension for some\nother purpose, even when this extension isn't used. So I don't see a\nneed for this to be tied into the protocol extension.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Apr 2024 12:46:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Thu, 18 Apr 2024 at 18:46, Robert Haas <robertmhaas@gmail.com> wrote:\n> With regard to the Bind message, I suggest that we regard the protocol\n> change as reserving a currently-unused bit in the message to indicate\n> whether the value is pre-encrypted, without reference to the protocol\n> extension. 
It could be legal for a client that can't understand\n> encryption message from the server to supply an encrypted value to be\n> inserted into a column. And I don't think we would ever want the bit\n> that's being reserved here to be used by some other extension for some\n> other purpose, even when this extension isn't used. So I don't see a\n> need for this to be tied into the protocol extension.\n\nI think this is an interesting idea. I can indeed see use cases for\ne.g. inserting a new row based on another row (where the secret is the\nsame).\n\nIMHO that means that we should also bump the protocol version for this\nchange, because it's changing the wire protocol by adding a new\nparameter format code. And it does so in a way that does not depend on\nthe new protocol extension.\n\n\n", "msg_date": "Thu, 18 Apr 2024 19:49:13 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" }, { "msg_contents": "On Thu, Apr 18, 2024 at 1:49 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> I think this is an interesting idea. I can indeed see use cases for\n> e.g. inserting a new row based on another row (where the secret is the\n> same).\n>\n> IMHO that means that we should also bump the protocol version for this\n> change, because it's changing the wire protocol by adding a new\n> parameter format code. 
And it does so in a way that does not depend on\n> the new protocol extension.\n\nI think we're more or less covering the same ground we did on the\nother thread here -- in theory I don't love the fact that we never\nbump the protocol version when we change stuff, but in practice if we\nstart bumping it every time we do anything I think it's going to just\nbreak a bunch of stuff without any real benefit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Apr 2024 15:00:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent column encryption" } ]
[ { "msg_contents": "Hi,\n\nWhen SPI produces a tuple table result, its DestReceiver makes a new\n\"SPI TupTable\" memory context, as a child of the SPI Proc context,\nand the received tuples get copied into it. It goes away at SPI_finish,\nas a consequence of its parent SPI Proc context going away.\n\nIf a person wanted to refer to those tuples after SPI_finish,\nwould it be a dangerous idea to just reparent that context into one\nthat will live longer, shortly before SPI_finish is called?\n\nAtEOSubXact_SPI can free tuptables retail, but only in the rollback case.\n\nThe idea makes sense to me, only I notice that SPITupleTable.tuptabcxt\nis under /* Private members, not intended for external callers */.\n\nThere's no such warning painted on GetMemoryChunkContext, but it would be\nhard to get the Woody Guthrie song out of my head after doing it that way.\n\nAm I overlooking a reason that reparenting an SPI TupTable context\nwould be a bad idea?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 3 Dec 2021 19:43:20 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "SPI TupTable memory context" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> If a person wanted to refer to those tuples after SPI_finish,\n> would it be a dangerous idea to just reparent that context into one\n> that will live longer, shortly before SPI_finish is called?\n\nSeems kinda dangerous to me ...\n\n> AtEOSubXact_SPI can free tuptables retail, but only in the rollback case.\n\n... precisely because of that. If you wanted to take control of\nthe TupTable, you'd really need to unhook it from the SPI context's\ntuptables list, and that *really* seems like undue familiarity\nwith the implementation.\n\nOn the whole this seems like the sort of thing where if it breaks,\nnobody is going to have a lot of sympathy. 
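For concreteness, the lifetime question can be modeled with a toy parent/child context tree, in the spirit of MemoryContextSetParent but with none of the real machinery. The SPI-internal tuptables list is exactly what this toy omits, which is the risky part:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model of nested memory contexts: deleting a context deletes its
 * children, so reparenting a child under a longer-lived context
 * extends the child's lifetime. */
typedef struct Ctx
{
    struct Ctx *parent;
    struct Ctx *first_child;
    struct Ctx *next_sibling;
    int        *alive;          /* set to 0 when this context is deleted */
} Ctx;

static Ctx *
ctx_create(Ctx *parent, int *alive_flag)
{
    Ctx *c = calloc(1, sizeof(Ctx));

    c->parent = parent;
    c->alive = alive_flag;
    *alive_flag = 1;
    if (parent)
    {
        c->next_sibling = parent->first_child;
        parent->first_child = c;
    }
    return c;
}

static void
ctx_unlink(Ctx *c)
{
    if (c->parent)
    {
        Ctx **p = &c->parent->first_child;

        while (*p != c)
            p = &(*p)->next_sibling;
        *p = c->next_sibling;
        c->parent = NULL;
    }
}

/* like MemoryContextSetParent(): move c under new_parent */
static void
ctx_set_parent(Ctx *c, Ctx *new_parent)
{
    ctx_unlink(c);
    c->parent = new_parent;
    c->next_sibling = new_parent->first_child;
    new_parent->first_child = c;
}

static void
ctx_delete(Ctx *c)
{
    ctx_unlink(c);
    while (c->first_child)
        ctx_delete(c->first_child);     /* child unlinks itself */
    *c->alive = 0;
    free(c);
}
```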
What I'd suggest,\nif you don't want to let the SPI mechanisms manage that memory,\nis to not put the tuple set into a SPITupleTable in the first\nplace. Run the query with a different DestReceiver that saves the\nquery result someplace you want it to be (see SPI_execute_extended\nand the options->dest argument).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Dec 2021 20:22:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SPI TupTable memory context" }, { "msg_contents": "On 12/03/21 20:22, Tom Lane wrote:\n> Seems kinda dangerous to me ...\n> \n>> AtEOSubXact_SPI can free tuptables retail, but only in the rollback case.\n> \n> ... precisely because of that. If you wanted to take control of\n> the TupTable, you'd really need to unhook it from the SPI context's\n> tuptables list, and that *really* seems like undue familiarity\n> with the implementation.\n\nFair enough. I didn't have an immediate use in mind, but had been reading\nthrough DestReceiver code and noticed it worked that way, and that it\nlooked as if an SPI_keeptuptable could have been implemented in probably\nno more than the 25 lines of SPI_keepplan, and I wasn't sure if that was\nbecause it had been considered and deemed a Bad Thing, or because the idea\nhadn't come up.\n\nThe equivalent with a custom DestReceiver would certainly work, but with\na lot more ceremony.\n\nSo that was why I asked. If the response had been more like \"hmm, no clear\nreason a patch to do that would be bad\", and if such a patch got accepted\nfor PG release n, that could also implicitly assuage worries about undue\nfamiliarity for implementing the compatible behavior when building on < n.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sat, 4 Dec 2021 13:31:45 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: SPI TupTable memory context" } ]
[ { "msg_contents": "I wonder why we're counting the number of dead tuples (or LP_DEAD stub\nitems) in the relation as a whole in ANALYZE's acquire_sample_rows()\nfunction. Wouldn't it make more sense to focus on the \"live vs dead\ntuple properties\" of heap pages that are not known to be all-visible\nwhen we generate statistics for our pgstat_report_analyze() report?\nThese statistic collector stats are only for the benefit of autovacuum\nscheduling -- and so they're *consumed* in a way that is totally\ndifferent to the nearby pg_statistic stats.\n\nThere is no good reason for the pgstat_report_analyze() stats to be\nbased on the same pg_class.relpages \"denominator\" as the pg_statistic\nstats (it's just slightly easier to do it that way in\nacquire_sample_rows(), I suppose). On the other hand, an alternative\nbehavior involving counting totaldeadrows against sampled\nnot-all-visible pages (but not otherwise) has a big benefit: doing so\nwould remove any risk that older/earlier PageIsAllVisible() pages will\nbias ANALYZE in the direction of underestimating the count. This isn't\na theoretical benefit -- I have tied it to an issue with the\nBenchmarkSQL TPC-C implementation [1].\n\nThis approach just seems natural to me. VACUUM intrinsically only\nexpects dead tuples/line pointers in not-all-visible pages. So\nPageIsAllVisible() pages should not be counted here -- they are simply\nirrelevant, because these stats are for autovacuum, and autovacuum\nthinks they're irrelevant. What's more, VACUUM currently uses\nvac_estimate_reltuples() to compensate for the fact that it skips some\npages using the visibility map -- pgstat_report_vacuum() expects a\nwhole-relation estimate. 
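A sketch of how such not-all-visible-page statistics could be consumed (the names and the zero-pages rule are invented for illustration):

```c
#include <assert.h>

/* Sketch of the idea above: ANALYZE records the average number of dead
 * tuples per *not-all-visible* page; autovacuum later combines that
 * density with the current count of not-all-visible pages, which is
 * cheap to derive from visibilitymap_count(), to refresh the
 * dead-tuple estimate. */
typedef struct AnalyzeSnapshot
{
    double dead_per_nav_page;   /* dead tuples per not-all-visible page */
} AnalyzeSnapshot;

static double
estimate_dead_tuples(const AnalyzeSnapshot *snap,
                     long rel_pages,
                     long all_visible_pages)
{
    long nav_pages = rel_pages - all_visible_pages;

    if (nav_pages <= 0)
        return 0.0;             /* everything all-visible: nothing to do */
    return snap->dead_per_nav_page * (double) nav_pages;
}
```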
But if\npgstat_report_vacuum()/pgstat_report_analyze() expected statistics\nabout the general properties of live vs dead tuples (or LP_DEAD items)\non not-all-visible pages in the first place, then we wouldn't need to\ncompensate like this.\n\nThis new approach also buys us the ability to extrapolate a new\nestimated number of dead tuples using old, stale stats. The stats can\nbe combined with the authoritative/known number of not-all-visible\npages right this second, since it's cheap enough to *accurately*\ndetermine the total number of not-all-visible pages for a heap\nrelation by calling visibilitymap_count(). My guess is that this would\nbe much more accurate in practice: provided the original average\nnumber of dead/live tuples (tuples per not-all-visible block) was\nstill reasonably accurate, the extrapolated \"total dead tuples right\nnow\" values would also be accurate.\n\nI'm glossing over some obvious wrinkles here, such as: what happens to\ntotaldeadrows when 100% of all the pages ANALYZE samples are\nPageIsAllVisible() pages? I think that it shouldn't be too hard to\ncome up with solutions to those problems (the extrapolation idea\nalready hints at a solution), but for now I'd like to keep the\ndiscussion high level.\n\n[1] https://postgr.es/m/CAH2-Wz=9R83wcwZcPUH4FVPeDM4znzbzMvp3rt21+XhQWMU8+g@mail.gmail.com\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 4 Dec 2021 21:27:45 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Why doesn't pgstat_report_analyze() focus on not-all-visible-page\n dead tuple counts, specifically?" }, { "msg_contents": "On Sun, Dec 5, 2021 at 12:28 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I wonder why we're counting the number of dead tuples (or LP_DEAD stub\n> items) in the relation as a whole in ANALYZE's acquire_sample_rows()\n> function. 
Wouldn't it make more sense to focus on the \"live vs dead\n> tuple properties\" of heap pages that are not known to be all-visible\n> when we generate statistics for our pgstat_report_analyze() report?\n> These statistic collector stats are only for the benefit of autovacuum\n> scheduling -- and so they're *consumed* in a way that is totally\n> different to the nearby pg_statistic stats.\n\nI think this could be the right idea. I'm not certain that it is, but\nit does sound believable.\n\n> This new approach also buys us the ability to extrapolate a new\n> estimated number of dead tuples using old, stale stats. The stats can\n> be combined with the authoritative/known number of not-all-visible\n> pages right this second, since it's cheap enough to *accurately*\n> determine the total number of not-all-visible pages for a heap\n> relation by calling visibilitymap_count(). My guess is that this would\n> be much more accurate in practice: provided the original average\n> number of dead/live tuples (tuples per not-all-visible block) was\n> still reasonably accurate, the extrapolated \"total dead tuples right\n> now\" values would also be accurate.\n\nSo does this. If some of the table is now all-visible when it wasn't\nbefore, it's certainly a good guess that the portions that still\naren't have about the same distribution of dead tuples that they did\nbefore ... although the other direction is less clear: it seems\npossible that newly not-all-visible pages have fewer dead tuples than\nones which have been not-all-visible for a while. But you have to make\nsome guess.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Dec 2021 15:07:44 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why doesn't pgstat_report_analyze() focus on not-all-visible-page\n dead tuple counts, specifically?" 
}, { "msg_contents": "On Mon, Dec 6, 2021 at 12:07 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> So does this. If some of the table is now all-visible when it wasn't\n> before, it's certainly a good guess that the portions that still\n> aren't have about the same distribution of dead tuples that they did\n> before ... although the other direction is less clear: it seems\n> possible that newly not-all-visible pages have fewer dead tuples than\n> ones which have been not-all-visible for a while. But you have to make\n> some guess.\n\nTo me, it seems natural to accept and even embrace the inherent\nuncertainty about the number of dead tuples. We should model our\ncurrent belief about how many dead tuples are in the table as a\nprobability density function (or something along the same lines).\nThere is a true \"sample space\" here. Once we focus on not-all-visible\npages, using authoritative VM info, many kinds of misestimations are\nclearly impossible. For example, there are only so many\nnot-all-visible heap pages, and they can only hold so many dead tuples\n(up to MaxHeapTuplesPerPage). This is a certainty.\n\nThe number of dead tuples in the table is an inherently dynamic thing,\nwhich makes it totally dissimilar to the pg_statistics-based stats.\nAnd so a single snapshot of a point in time is inherently much less\nuseful -- we ought to keep a few sets of old statistics within our new\npgstat_report_analyze() -- maybe 3 or 5. Each set of statistics\nincludes the total number of relpages at the time, the total number of\nnot-all-visible pages (i.e. interesting pages) at the time, and the\naverage number of live and dead tuples encountered. 
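A sketch of keeping the last few ANALYZE reports rather than only the newest one; the fields and the capacity of 3 are illustrative:

```c
#include <assert.h>
#include <string.h>

/* Ring buffer of recent ANALYZE reports, per the proposal above. */
#define STATS_HISTORY 3

typedef struct AnalyzeStats
{
    long relpages;
    long nav_pages;             /* not-all-visible pages at ANALYZE time */
    double live_per_nav_page;
    double dead_per_nav_page;
} AnalyzeStats;

typedef struct StatsHistory
{
    AnalyzeStats slot[STATS_HISTORY];
    int next;                   /* ring position for the next report */
    int count;
} StatsHistory;

static void
stats_push(StatsHistory *h, AnalyzeStats s)
{
    h->slot[h->next] = s;
    h->next = (h->next + 1) % STATS_HISTORY;
    if (h->count < STATS_HISTORY)
        h->count++;
}

/* Most recent report is age 0, the one before it age 1, and so on. */
static const AnalyzeStats *
stats_recent(const StatsHistory *h, int age)
{
    int idx = (h->next - 1 - age + 2 * STATS_HISTORY) % STATS_HISTORY;

    return &h->slot[idx];
}
```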
This is\ninterpreted (along with a current visibilitymap_count()) to get our\nso-called probability density function (probably not really a PDF,\nprobably something simpler and only vaguely similar) within\nautovacuum.c.\n\nIt just occurred to me that it makes zero sense that\npgstat_report_vacuum() does approximately the same thing as\npgstat_report_analyze() -- we make no attempt to compensate for the\nfact that the report is made by VACUUM specifically, and so reflects\nthe state of each page in the table immediately after it was processed\nby VACUUM. ISTM that this makes it much more likely to appear as an\nunderestimate later on -- pgstat_report_vacuum() gets the furthest\npossible thing from a random sample. Whereas if we had more context\n(specifically that there are very few or even 0 all-visible pages), it\nwouldn't hurt us at all, and we wouldn't need to have special\npgstat_report_vacuum()-only heuristics.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 6 Dec 2021 14:37:37 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Why doesn't pgstat_report_analyze() focus on not-all-visible-page\n dead tuple counts, specifically?" }, { "msg_contents": "On Mon, Dec 6, 2021 at 2:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Dec 6, 2021 at 12:07 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > So does this. If some of the table is now all-visible when it wasn't\n> > before, it's certainly a good guess that the portions that still\n> > aren't have about the same distribution of dead tuples that they did\n> > before ... although the other direction is less clear: it seems\n> > possible that newly not-all-visible pages have fewer dead tuples than\n> > ones which have been not-all-visible for a while. 
But you have to make\n> > some guess.\n>\n> To me, it seems natural to accept and even embrace the inherent\n> uncertainty about the number of dead tuples.\n\n> The number of dead tuples in the table is an inherently dynamic thing,\n> which makes it totally dissimilar to the pg_statistics-based stats.\n> And so a single snapshot of a point in time is inherently much less\n> useful -- we ought to keep a few sets of old statistics within our new\n> pgstat_report_analyze() -- maybe 3 or 5.\n\nI just realized that I didn't really get around to explicitly\nconnecting this to your point about newly not-all-visible pages being\nquite different to older ones that ANALYZE has seen -- which is\ndefinitely an important consideration. I'll do so now:\n\nKeeping some history makes the algorithm \"less gullible\" (a more\nuseful goal than making it \"smarter\", at least IMV). Suppose that our\nstarting point is 2 pieces of authoritative information, which are\ncurrent as of the instant we want to estimate the number of dead\ntuples for VACUUM: 1. total relation size (relpages), and 2. current\nnot-all-visible-pages count (interesting/countable pages, calculated\nby taking the \"complement\" of visibilitymap_count() value). Further\nsuppose we store the same 2 pieces of information in our ANALYZE\nstats, reporting using pgstat_report_analyze() -- the same 2 pieces of\ninformation are stored alongside the actual count of dead tuples and\nlive tuples found on not-all-visible pages.\n\nThe algorithm avoids believing silly things about dead tuples by\nconsidering the delta between each piece of information, particularly\nthe difference between \"right now\" and \"the last time ANALYZE ran and\ncalled pgstat_report_analyze()\". For example, if item 1/relpages\nincreased by exactly the same number of blocks as item\n2/not-all-visible pages (or close enough to it), that is recognized as\na pretty strong signal. 
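That delta reasoning could look roughly like this (thresholds and names are invented, and a real version would presumably blend rather than branch):

```c
#include <assert.h>

/* Sketch of the "delta" heuristic: when the table grew by (roughly)
 * the same number of blocks as the not-all-visible count, treat the
 * new not-all-visible pages as holding few dead tuples, and keep the
 * measured density only for the older ones. */
static double
estimate_dead_with_delta(double dead_per_nav_page,
                         long old_relpages, long old_nav,
                         long new_relpages, long new_nav)
{
    long grown_pages = new_relpages - old_relpages;
    long new_nav_pages = new_nav - old_nav;

    if (grown_pages > 0 && new_nav_pages > 0 &&
        new_nav_pages <= grown_pages)
    {
        /* growth explains the new not-all-visible pages: assume they
         * carry no dead tuples yet */
        return dead_per_nav_page * (double) old_nav;
    }
    /* otherwise apply the old density to all not-all-visible pages */
    return dead_per_nav_page * (double) new_nav;
}
```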
The algorithm should consider the newly\nnot-all-visible pages as likely to have very few dead tuples. At the\nsame time, the algorithm should not change its beliefs about the\nconcentration of dead tuples in remaining, older not-all-visible\npages.\n\nThis kind of thing will still have problems, no doubt. But I'd much\nrather err in the direction of over-counting dead tuples like this.\nThe impact of the problem on the workload/autovacuum is a big part of\nthe picture here.\n\nSuppose we believe that not-all-visible pages have 20 LP_DEAD items on\naverage, and they turn out to only have 3 or 5. Theoretically we've\ndone the wrong thing by launching autovacuum workers sooner -- we\nintroduce bias. But we also have lower variance over time, which might\nmake it worth it. I also think that it might not really matter at all.\nIt's no great tragedy if we clean up and set pages all-visible in the\nvisibility map a little earlier on average. It might even be a\npositive thing.\n\nThe fact that the user expresses the dead-tuple-wise threshold using\nautovacuum_vacuum_scale_factor is already somewhat arbitrary -- it is\nbased on some pretty iffy assumptions. Even if we greatly overestimate\ndead tuples with the new algorithm, we're only doing so under\ncircumstances that might have caused\nautovacuum_vacuum_insert_scale_factor to launch an autovacuum worker\nanyway. Just setting the visibility map bit has considerable value.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 6 Dec 2021 17:13:53 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Why doesn't pgstat_report_analyze() focus on not-all-visible-page\n dead tuple counts, specifically?" }, { "msg_contents": "On Mon, Dec 6, 2021 at 8:14 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Suppose we believe that not-all-visible pages have 20 LP_DEAD items on\n> average, and they turn out to only have 3 or 5. 
Theoretically we've\n> done the wrong thing by launching autovacuum workers sooner -- we\n> introduce bias. But we also have lower variance over time, which might\n> make it worth it. I also think that it might not really matter at all.\n> It's no great tragedy if we clean up and set pages all-visible in the\n> visibility map a little earlier on average. It might even be a\n> positive thing.\n\nThis doesn't seem convincing. Launching autovacuum too soon surely has\ncosts that someone might not want to pay. Clearly in the degenerate\ncase where we always autovacuum every table every time an autovacuum\nworker is launched, we have gone insane. So arbitrarily large moves in\nthat direction can't be viewed as unproblematic.\n\n> The fact that the user expresses the dead-tuple-wise threshold using\n> autovacuum_vacuum_scale_factor is already somewhat arbitrary -- it is\n> based on some pretty iffy assumptions. Even if we greatly overestimate\n> dead tuples with the new algorithm, we're only doing so under\n> circumstances that might have caused\n> autovacuum_vacuum_insert_scale_factor to launch an autovacuum worker\n> anyway. Just setting the visibility map bit has considerable value.\n\nNow, on the other hand, I *most definitely* think\nautovacuum_vacuum_scale_factor is hogwash. Everything I've seen\nindicates that, while you do want to wait for a larger number of dead\ntuples in a large table than in a small one, it's sublinear. I don't\nknow whether it's proportional to sqrt(table_size) or table_size^(1/3)\nor lg(table_size) or table_size^(0.729837166538), but just plain old\ntable_size is definitely not it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Dec 2021 21:11:01 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why doesn't pgstat_report_analyze() focus on not-all-visible-page\n dead tuple counts, specifically?" 
}, { "msg_contents": "On Mon, Dec 6, 2021 at 6:11 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> This doesn't seem convincing. Launching autovacuum too soon surely has\n> costs that someone might not want to pay. Clearly in the degenerate\n> case where we always autovacuum every table every time an autovacuum\n> worker is launched, we have gone insane.\n\nUnfortunately we already sometimes behave insanely in exactly the way\nthat you describe:\n\nhttps://postgr.es/m/CAH2-Wz=sJm3tm+FpXbyBhEhX5tbz1trQrhG6eOhYk4-+5uL=ww@mail.gmail.com\n\nThat is, in addition to the problem that I'm highlighting on this\nthread, we also have the opposite problem: autovacuum chases its tail\nwhen it sees dead heap-only tuples that opportunistic pruning can take\ncare of on its own. I bet that both effects sometimes cancel each\nother out, in weird and unpredictable ways. This effect might be\nprotective at first, and then less protective.\n\n> So arbitrarily large moves in\n> that direction can't be viewed as unproblematic.\n\nI certainly wouldn't argue that they are. Just that the current\napproach of simply counting dead tuples in the table (or trying to,\nusing sampling) and later launching autovacuum (when dead tuples\ncrosses a pretty arbitrary threshold) has many problems -- problems\nthat make us either run autovacuum too aggressively, and not\naggressively enough (relative to what the docs suggest is supposed to\nhappen).\n\n> Now, on the other hand, I *most definitely* think\n> autovacuum_vacuum_scale_factor is hogwash. Everything I've seen\n> indicates that, while you do want to wait for a larger number of dead\n> tuples in a large table than in a small one, it's sublinear. I don't\n> know whether it's proportional to sqrt(table_size) or table_size^(1/3)\n> or lg(table_size) or table_size^(0.729837166538), but just plain old\n> table_size is definitely not it.\n\nI think that it'll vary considerably, based on many factors. 
Making\nthe precise threshold \"open to interpretation\" to some degree (by\nautovacuum.c) seems like it might help us with future optimizations.\n\nIt's hard to define a break-even point for launching an autovacuum\nworker. I think it would be more productive to come up with a design\nthat at least doesn't go completely off the rails in various specific\nways. I also think that our problem is not so much that we don't have\naccurate statistics about dead tuples (though we surely don't have\nmuch accuracy). The main problem seems to be that there are various\nspecific, important ways in which the distribution of dead tuples may\nmatter (leading to various harmful biases). And so it seems reasonable\nto fudge how we interpret dead tuples with the intention of capturing\nsome of that, as a medium term solution. Something that considers the\nconcentration of dead tuples in heap pages seems promising.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 6 Dec 2021 18:42:14 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Why doesn't pgstat_report_analyze() focus on not-all-visible-page\n dead tuple counts, specifically?" }, { "msg_contents": "On Mon, Dec 6, 2021 at 9:42 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Dec 6, 2021 at 6:11 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > This doesn't seem convincing. Launching autovacuum too soon surely has\n> > costs that someone might not want to pay. Clearly in the degenerate\n> > case where we always autovacuum every table every time an autovacuum\n> > worker is launched, we have gone insane.\n>\n> Unfortunately we already sometimes behave insanely in exactly the way\n> that you describe:\n>\n> https://postgr.es/m/CAH2-Wz=sJm3tm+FpXbyBhEhX5tbz1trQrhG6eOhYk4-+5uL=ww@mail.gmail.com\n\nIn the same ballpark is http://rhaas.blogspot.com/2020/02/useless-vacuuming.html\n\n> It's hard to define a break-even point for launching an autovacuum\n> worker. 
I think it would be more productive to come up with a design\n> that at least doesn't go completely off the rails in various specific\n> ways.\n\nI think that's a good observation. I think the current autovacuum\nalgorithm works pretty well when things are normal. But in extreme\nscenarios it does not degrade gracefully. The\nvacuuming-over-and-over-without-doing-anything phenomenon is an\nexample of that. Another example is the user who creates 10,000\ndatabases and we're happy to divide the 60s-autovacuum_naptime by\n10,000 and try to launch a worker every 6 ms. A third example is\nvacuuming the tables from pg_class in physical order on disk, so that\na table that is 1 XID past the wraparound limit can result in a long\ndelay vacuuming a table that is bloating quickly, or conversely a\ntable that is bloating very slowly but has just crawled over the\nthreshold for a regular vacuum gets processed before one that is\nthreatening an imminent wraparound shutdown. I think these are all\npathological cases that a well-informed human can easily recognize and\nhandle in an intelligent manner, and it doesn't seem crazy to program\nthose responses into the computer in some way.\n\n> I also think that our problem is not so much that we don't have\n> accurate statistics about dead tuples (though we surely don't have\n> much accuracy). The main problem seems to be that there are various\n> specific, important ways in which the distribution of dead tuples may\n> matter (leading to various harmful biases). And so it seems reasonable\n> to fudge how we interpret dead tuples with the intention of capturing\n> some of that, as a medium term solution. Something that considers the\n> concentration of dead tuples in heap pages seems promising.\n\nI am less convinced about this part. It sort of sounds like you're\nsaying - it doesn't really matter whether the numbers we gather are\naccurate, just that they produce the correct results. 
If the\ninformation we currently gather wouldn't produce the right results\neven if it were fully accurate, that to me suggests that we're\ngathering the wrong information, and we should gather something else.\nFor example, we could attack the useless-vacuuming problem by having\neach vacuum figure out - and store - the oldest XID that could\npotentially be worth using as a cutoff for vacuuming that table, and\nrefusing to autovacuum that table again until we'd be able to use a\ncutoff >= that value. I suppose this would be the oldest of (a) any\nXID that exists in the table on a tuple that we judged recently dead,\n(b) any XID that is currently-running, and (c) the next XID.\n\nI also accept that knowing the concentration of dead tuples on pages\nthat contain at least 1 dead tuple could be interesting. I've felt for\na while that it's a mistake to know how many dead tuples there are but\nnot how many pages contain them, because that affects both the amount\nof I/O required to vacuum and also how much need we have to set VM\nbits. I'm not sure I would have approached gathering that information\nin the way that you're proposing here, but I'm not deeply against it,\neither. I do think that we should try to keep it as a principle that\nwhatever we do gather, we should try our best to make accurate. If\nthat doesn't work well, then gather different stuff instead.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Dec 2021 10:58:02 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why doesn't pgstat_report_analyze() focus on not-all-visible-page\n dead tuple counts, specifically?" }, { "msg_contents": "On Tue, Dec 7, 2021 at 7:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think that's a good observation. I think the current autovacuum\n> algorithm works pretty well when things are normal. 
But in extreme\n> scenarios it does not degrade gracefully.\n\n+1 to all of the specific examples you go on to describe.\n\n> > I also think that our problem is not so much that we don't have\n> > accurate statistics about dead tuples (though we surely don't have\n> > much accuracy). The main problem seems to be that there are various\n> > specific, important ways in which the distribution of dead tuples may\n> > matter (leading to various harmful biases). And so it seems reasonable\n> > to fudge how we interpret dead tuples with the intention of capturing\n> > some of that, as a medium term solution. Something that considers the\n> > concentration of dead tuples in heap pages seems promising.\n>\n> I am less convinced about this part. It sort of sounds like you're\n> saying - it doesn't really matter whether the numbers we gather are\n> accurate, just that they produce the correct results.\n\nThat's not quite it. More like: I think that it would be reasonable to\ndefine dead tuples abstractly, in a way that's more useful but\nnevertheless cannot diverge too much from the current definition. We\nshould try to capture \"directionality\", with clear error bounds. This\nwon't represent the literal truth, but it will be true in spirit (or\nmuch closer).\n\nFor example, why should we count dead heap-only tuples from earlier in\na HOT chain, even when we see no evidence that opportunistic HOT\npruning can't keep up on that page? Since we actually care about the\ndirection of things, not just the present state of things, we'd be\njustified in completely ignoring those dead tuples. Similarly, it\nmight well make sense to give more weight to concentrations of LP_DEAD\nitems on a page -- that is a signal that things are not going well *at\nthe level of the page*. Not so much when you have a few LP_DEAD stubs,\nbut certainly when you have dozens of them on one page, or even\nhundreds. 
And so ISTM that the conditions of the page should influence\nhow we interpret/count that page's dead tuples, in both directions\n(interpret the page as having more dead tuples, or fewer).\n\nWe all know that there isn't a sensible answer to the question \"if the\nabstract cost units used in the optimizer say that one sequential page\naccess is 4x cheaper than one random page access, then what's the\ndifference between accessing 10 random pages sequentially in close\nsuccession, versus accessing the same 10 pages randomly?\". But at the\nsame time, we can say for sure that random is more expensive to *some*\ndegree, but certainly never by multiple orders of magnitude. The model\nis imperfect, to be sure, but it is better than many alternative\nmodels that are also technically possible. We need *some* cost model\nfor a cost-based optimizer, and it is better to be approximately\ncorrect than exactly wrong. Again, the truly important thing is to not\nbe totally wrong about any one thing.\n\nAnother way of putting it is that I am willing to accept a more biased\ndefinition of dead tuples, if that lowers the variance:\n\nhttps://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff\n\nWe have some certainty about what is possible in a world in which we\nuse the visibility map directly, and so our final estimate should\nnever be wildly unreasonable -- the abstract idea of dead tuples can\nstill be anchored to the physical reality.\n\nAs with the optimizer and its cost units, we don't have that many\ndegrees of freedom when autovacuum.c launches a worker, and I don't\nsee that changing -- we must ultimately decide to launch or not launch\nan autovacuum worker (for any given table) each time the autovacuum\nlauncher wakes up. So we're practically forced to come up with a model\nthat has some abstract idea of one kind of bloat/dead tuple. 
I would\nsay that we already have one, in fact -- it's just not a very good one\nbecause it doesn't take account of obviously-relevant factors like\nHOT. It could quite easily be less wrong.\n\n> If the\n> information we currently gather wouldn't produce the right results\n> even if it were fully accurate, that to me suggests that we're\n> gathering the wrong information, and we should gather something else.\n\nI think that counting dead tuples is useful, but not quite sufficient\non its own. At the same time, we still need something that works like\na threshold -- because that's just how the autovacuum.c scheduling\nworks. It's a go/no-go decision, regardless of how the decision is\nmade.\n\n> For example, we could attack the useless-vacuuming problem by having\n> each vacuum figure out - and store - the oldest XID that could\n> potentially be worth using as a cutoff for vacuuming that table, and\n> refusing to autovacuum that table again until we'd be able to use a\n> cutoff >= that value. I suppose this would be the oldest of (a) any\n> XID that exists in the table on a tuple that we judged recently dead,\n> (b) any XID that is currently-running, and (c) the next XID.\n\nI agree that that's a good idea, but it seems like something that only\naugments what I'm talking about. I suppose that it might become\nnecessary if we get something along the lines of what we've been\ndiscussing, though.\n\n> I also accept that knowing the concentration of dead tuples on pages\n> that contain at least 1 dead tuple could be interesting. I've felt for\n> a while that it's a mistake to know how many dead tuples there are but\n> not how many pages contain them, because that affects both the amount\n> of I/O required to vacuum and also how much need we have to set VM\n> bits.\n\nRight. And as I keep saying, the truly important thing is to not\n*completely* ignore any relevant dimension of cost. I just don't want\nto ever be wildly wrong -- not even once. 
We can tolerate being\nsomewhat less accurate all the time (not that we necessarily have to\nmake a trade-off), but we cannot tolerate pathological behavior. Of\ncourse I include new/theoretical pathological behaviors here (not just\nthe ones we know about today).\n\n> I'm not sure I would have approached gathering that information\n> in the way that you're proposing here, but I'm not deeply against it,\n> either. I do think that we should try to keep it as a principle that\n> whatever we do gather, we should try our best to make accurate. If\n> that doesn't work well, then gather different stuff instead.\n\nIt's important to be accurate, but it's also important to be tolerant\nof model error, which is inevitable. We should make pragmatic\ndecisions about what kinds of errors our new model will have. And it\nshould have at least a rudimentary ability to learn from its mistakes.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 7 Dec 2021 11:13:15 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Why doesn't pgstat_report_analyze() focus on not-all-visible-page\n dead tuple counts, specifically?" }, { "msg_contents": "On Tue, Dec 7, 2021 at 2:13 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> For example, why should we count dead heap-only tuples from earlier in\n> a HOT chain, even when we see no evidence that opportunistic HOT\n> pruning can't keep up on that page? Since we actually care about the\n> direction of things, not just the present state of things, we'd be\n> justified in completely ignoring those dead tuples. Similarly, it\n> might well make sense to give more weight to concentrations of LP_DEAD\n> items on a page -- that is a signal that things are not going well *at\n> the level of the page*. Not so much when you have a few LP_DEAD stubs,\n> but certainly when you have dozens of them on one page, or even\n> hundreds. 
And so ISTM that the conditions of the page should influence\n> how we interpret/count that page's dead tuples, in both directions\n> (interpret the page as having more dead tuples, or fewer).\n\nWell... I mean, I think we're almost saying the same thing, then, but\nI think you're saying it more confusingly. I have no objection to\ncounting the number of dead HOT chains rather than the number of dead\ntuples, because that's what affects the index contents, but there's no\nneed to characterize that as \"not the literal truth.\" There's nothing\nfuzzy or untrue about it if we simply say that's what we're doing.\n\n> Right. And as I keep saying, the truly important thing is to not\n> *completely* ignore any relevant dimension of cost. I just don't want\n> to ever be wildly wrong -- not even once. We can tolerate being\n> somewhat less accurate all the time (not that we necessarily have to\n> make a trade-off), but we cannot tolerate pathological behavior. Of\n> course I include new/theoretical pathological behaviors here (not just\n> the ones we know about today).\n\nSure, but we don't *need* to be less accurate, and I don't think we\neven *benefit* from being less accurate. If we do something like count\ndead HOT chains instead of dead tuples, let's not call that a\nless-accurate count of dead tuples. Let's call it an accurate count of\ndead HOT chains.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Dec 2021 15:27:36 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why doesn't pgstat_report_analyze() focus on not-all-visible-page\n dead tuple counts, specifically?" }, { "msg_contents": "On Tue, Dec 7, 2021 at 12:27 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Well... I mean, I think we're almost saying the same thing, then, but\n> I think you're saying it more confusingly. 
I have no objection to\n> counting the number of dead HOT chains rather than the number of dead\n> tuples, because that's what affects the index contents, but there's no\n> need to characterize that as \"not the literal truth.\"\n\nWorks for me!\n\n> Sure, but we don't *need* to be less accurate, and I don't think we\n> even *benefit* from being less accurate. If we do something like count\n> dead HOT chains instead of dead tuples, let's not call that a\n> less-accurate count of dead tuples. Let's call it an accurate count of\n> dead HOT chains.\n\nFair enough, but even then we still ultimately have to generate a\nfinal number that represents how close we are to a configurable \"do an\nautovacuum\" threshold (such as an autovacuum_vacuum_scale_factor-based\nthreshold) -- the autovacuum.c side of this (the consumer side)\nfundamentally needs the model to reduce everything to a one\ndimensional number (even though the reality is that there isn't just\none dimension). This single number (abstract bloat units, abstract\ndead tuples, whatever) is a function of things like the count of dead\nHOT chains, perhaps the concentration of dead tuples on heap pages,\nwhatever -- but it's not the same thing as any one of those things we\ncount.\n\nI think that this final number needs to be denominated in abstract\nunits -- we need to call those abstract units *something*. I don't\ncare what that name ends up being, as long as it reflects reality.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 7 Dec 2021 12:43:57 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Why doesn't pgstat_report_analyze() focus on not-all-visible-page\n dead tuple counts, specifically?" 
}, { "msg_contents": "On Tue, Dec 7, 2021 at 3:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Fair enough, but even then we still ultimately have to generate a\n> final number that represents how close we are to a configurable \"do an\n> autovacuum\" threshold (such as an autovacuum_vacuum_scale_factor-based\n> threshold) -- the autovacuum.c side of this (the consumer side)\n> fundamentally needs the model to reduce everything to a one\n> dimensional number (even though the reality is that there isn't just\n> one dimension). This single number (abstract bloat units, abstract\n> dead tuples, whatever) is a function of things like the count of dead\n> HOT chains, perhaps the concentration of dead tuples on heap pages,\n> whatever -- but it's not the same thing as any one of those things we\n> count.\n>\n> I think that this final number needs to be denominated in abstract\n> units -- we need to call those abstract units *something*. I don't\n> care what that name ends up being, as long as it reflects reality.\n\nIf we're only trying to decide whether or not to vacuum a table, we\ndon't need units: the output is a Boolean. If we're trying to decide\non an order in which to vacuum tables, then we need units. But such\nunits can't be anything related to dead tuples, because vacuum can be\nneeded based on XID age, or MXID age, or dead tuples. The units would\nhave to be something like abstract vacuum-urgency units (if higher is\nmore urgent) or abstract remaining-headroom-beform-catastrophe units\n(if lower is more urgent).\n\nIgnoring wraparound considerations for a moment, I think that we might\nwant to consider having multiple thresholds and vacuuming the table if\nany one of them are met. 
For example, suppose a table qualifies for\nvacuum when %-of-not-all-visible-pages > some-threshold, or\nalternatively when %-of-index-tuples-thought-to-be-dead >\nsome-other-threshold.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Dec 2021 16:59:06 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why doesn't pgstat_report_analyze() focus on not-all-visible-page\n dead tuple counts, specifically?" }, { "msg_contents": "On Tue, Dec 7, 2021 at 1:59 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> If we're only trying to decide whether or not to vacuum a table, we\n> don't need units: the output is a Boolean.\n\nI was imagining a world in which we preserve the\nautovacuum_vacuum_scale_factor design, but interpret it creatively\n(but never too creatively) -- an incremental approach seems best to\nme. We can even sanity check our abstract bloat unit calculation, in\ncase the page-level sampling aggregates into a totally wild number of\ndead tuples (based in part on the current number of not-all-visible\nheap pages) -- so the abstract units are always anchored to the old\nidea of dead tuples. Maybe this isn't the best approach, but at least\nit addresses compatibility.\n\n*Any* approach based on sampling relatively few random blocks (to look\nfor signs of bloat) is inherently prone to hugely underestimating the\nextent of bloat (which is what we see in TPC-C). I am primarily\nconcerned about compensating for the inherent limitations that go with\nthat. To me it seems inappropriate to make statistical inferences\nabout dead tuples based on a random snapshot of random blocks (usually\nonly a tiny minority). 
It is not only possible for the picture to\nchange utterly -- it is routine, expected, and the whole entire point.\n\nThe entire intellectual justification for statistical sampling (that\nmostly works for optimizer stats) just doesn't carry over to\nautovacuum stats, for many reasons. At the same time, I don't have any\nfundamentally better starting point. That's how I arrived at the idea\nof probabilistic modeling based on several recent snapshots from\nANALYZE. The statistics are often rubbish, whether or not we like it,\nand regardless of how we decide to count things on each page. And so\nit's entirely reasonable to not limit the algorithm to concerns about\nthe state of things -- the actual exposure of the system to harm (from\noverlooking harmful bloat) is also relevant.\n\n> If we're trying to decide\n> on an order in which to vacuum tables, then we need units. But such\n> units can't be anything related to dead tuples, because vacuum can be\n> needed based on XID age, or MXID age, or dead tuples. The units would\n> have to be something like abstract vacuum-urgency units (if higher is\n> more urgent) or abstract remaining-headroom-beform-catastrophe units\n> (if lower is more urgent).\n\nI like that idea. But I wonder if they should be totally unrelated. If\nwe're close to the \"emergency\" XID threshold, and also close to the\n\"bloat units\" threshold, then it seems reasonable to put our finger on\nthe scales, and do an autovacuum before either threshold is crossed.\nI'm not sure how that should work, but I find the idea of interpreting\nthe \"bloat units\" creatively/probabilistically appealing.\n\nWe're not actually making things up by erring in the direction of\nlaunching an autovacuum worker, because we don't actually know the\nnumber of dead tuples (or whatever) anyway -- we're just recognizing\nthe very real role of chance and noise. 
That is, if the \"bloat units\"\nthreshold might well not have been crossed due to random chance\n(noise, the phase of the moon), why should we defer to random chance?\nIf we have better information to go on, like the thing with the XID\nthreshold, why not prefer that? Similarly, if we see that the system\nas a whole is not very busy right now, why not consider that, just a\nlittle, if the only downside is that we'll ignore a demonstrably\nnoise-level signal (from the stats)?\n\nThat's the high level intuition behind making \"bloat units\" a\nprobability density function, and not just a simple expected value.\nTeaching the autovacuum.c scheduler to distinguish signal from noise\ncould be very valuable, if it enables opportunistic batching of work,\nor off-hours work. We don't have to respect noise. The devil is in the\ndetails, of course.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 7 Dec 2021 15:20:16 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Why doesn't pgstat_report_analyze() focus on not-all-visible-page\n dead tuple counts, specifically?" } ]
[ { "msg_contents": "When working with the Frontend/Backend Protocol implementation in Npgsql\nand discussing things with the team, I often struggle with the fact that\nyou can't set deep links to individual message formats in the somewhat\nlengthy html docs pages.\n\nThe attached patch adds id's to various elements in protocol.sgml to\nmake them more accesssible via the public html documentation interface.\n\nI've tried to follow the style that was already there in a few of the\nelements.\n\nDo you consider this useful and worth merging?\n\nIs there anything I can improve?\n\nBest Regards,\n\nBrar", "msg_date": "Sun, 5 Dec 2021 16:50:47 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Add id's to various elements in protocol.sgml" }, { "msg_contents": "> On 5 Dec 2021, at 16:51, Brar Piening <brar@gmx.de> wrote:\n\n> The attached patch adds id's to various elements in protocol.sgml to\n> make them more accesssible via the public html documentation interface.\n\nOff the cuff without having checked the compiled results yet, it seems like a good idea.\n\n—\nDaniel Gustafsson\n\n", "msg_date": "Sun, 5 Dec 2021 17:15:12 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On Sun, Dec 5, 2021 at 11:15 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 5 Dec 2021, at 16:51, Brar Piening <brar@gmx.de> wrote:\n>\n> > The attached patch adds id's to various elements in protocol.sgml to\n> > make them more accesssible via the public html documentation interface.\n>\n> Off the cuff without having checked the compiled results yet, it seems\n> like a good idea.\n>\n> —\n> Daniel Gustafsson\n>\n\nI wanted to do something similar for every function specification in the\ndocs. 
This may inspire me to take another shot at that.", "msg_date": "Tue, 14 Dec 2021 13:58:49 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 2021-Dec-05, Brar Piening wrote:\n\n> When working with the Frontend/Backend Protocol implementation in Npgsql\n> and discussing things with the team, I often struggle with the fact that\n> you can't set deep links to individual message formats in the somewhat\n> lengthy html docs pages.\n> \n> The attached patch adds id's to various elements in protocol.sgml to\n> make them more accessible via the public html documentation interface.\n> \n> I've tried to follow the style that was already there in a few of the\n> elements.\n\nHmm, I think we tend to avoid xreflabels; see\nhttps://www.postgresql.org/message-id/8315c0ca-7758-8823-fcb6-f37f9413e6b6@2ndquadrant.com\n\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 14 Dec 2021 16:47:35 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On Dec 14, 2021 at 20:47, Alvaro Herrera wrote:\n>\n> Hmm, I think we tend to avoid xreflabels; see\n> https://www.postgresql.org/message-id/8315c0ca-7758-8823-fcb6-f37f9413e6b6@2ndquadrant.com\n\nOk, thank you for the hint.\nI added 
them because <varlistentry> doesn't automatically generate\nlabels and they were present in the current docs for\nCREATE_REPLICATION_SLOT\n(https://github.com/postgres/postgres/blob/22bd3cbe0c284758d7174321f5596763095cdd55/doc/src/sgml/protocol.sgml#L1944).\n\nAfter reading the aforementioned thread to\nhttps://www.postgresql.org/message-id/20200611223836.GA2507%40momjian.us\nI infer the following conclusions:\na) Do *not* include xreflabel for elements that get numbered.\nb) There should be some general utility for the xreflabel, not just the\nlinking needs of one particular source location.\nc) Generally, xreflabels are a bit of antipattern, so there need to be\nsolid arguments in favor of adding more.\n\nSince I can't argue towards some general utility for the xreflabels and\ndon't have any other solid argument in favor of adding more, I will\nremove them from my current patch but leave the existing ones intact.\n\nObjections?\n\n\n\n\n\n", "msg_date": "Wed, 15 Dec 2021 12:07:47 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 2021-Dec-15, Brar Piening wrote:\n\n> On Dec 14, 2021 at 20:47, Alvaro Herrera wrote:\n> > \n> > Hmm, I think we tend to avoid xreflabels; see\n> > https://www.postgresql.org/message-id/8315c0ca-7758-8823-fcb6-f37f9413e6b6@2ndquadrant.com\n> \n> Ok, thank you for the hint.\n> I added them because <varlistentry> doesn't automatically generate\n> labels and they were present in the current docs for\n> CREATE_REPLICATION_SLOT\n> (https://github.com/postgres/postgres/blob/22bd3cbe0c284758d7174321f5596763095cdd55/doc/src/sgml/protocol.sgml#L1944).\n\nHmm, now that you mention it, we do have xreflabels for varlistentrys in\nquite a few places. 
Maybe we need to update README.links to mention\nthis.\n\n> Since I can't argue towards some general utility for the xreflabels and\n> don't have any other solid argument in favor of adding more, I will\n> remove them from my current patch but leave the existing ones intact.\n\nYeah, I think not adding them until we have a use for them might be\nwisest.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"La vida es para el que se aventura\"\n\n\n", "msg_date": "Wed, 15 Dec 2021 11:49:21 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On Dec 15, 2021 at 15:49, Alvaro Herrera wrote:\n> On 2021-Dec-15, Brar Piening wrote:\n>> Since I can't argue towards some general utility for the xreflabels\n>> and don't have any other solid argument in favor of adding more, I\n>> will remove them from my current patch but leave the existing ones\n>> intact.\n> Yeah, I think not adding them until we have a use for them might be\n> wisest.\nA new version of the patch that doesn't add xreflabels is attached.\nThanks for your support.", "msg_date": "Wed, 15 Dec 2021 16:59:19 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 15.12.21 16:59, Brar Piening wrote:\n> On Dec 15, 2021 at 15:49, Alvaro Herrera wrote:\n>> On 2021-Dec-15, Brar Piening wrote:\n>>> Since I can't argue towards some general utility for the xreflabels\n>>> and don't have any other solid argument in favor of adding more, I\n>>> will remove them from my current patch but leave the existing ones\n>>> intact.\n>> Yeah, I think not adding them until we have a use for them might be\n>> wisest.\n> A new version of the patch that doesn't add xreflabels is attached.\n\nNow this patch adds a bunch of ids, but you can't use them to link to, \nbecause as soon as you 
do, you will get complaints about a missing \nxreflabel. So what is the remaining purpose?\n\n\n", "msg_date": "Fri, 17 Dec 2021 08:13:51 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On Dec 17, 2021 at 08:13, Peter Eisentraut wrote:\n> On 15.12.21 16:59, Brar Piening wrote:\n>> On Dec 15, 2021 at 15:49, Alvaro Herrera wrote:\n>>> On 2021-Dec-15, Brar Piening wrote:\n>>>> Since I can't argue towards some general utility for the xreflabels\n>>>> and don't have any other solid argument in favor of adding more, I\n>>>> will remove them from my current patch but leave the existing ones\n>>>> intact.\n>>> Yeah, I think not adding them until we have a use for them might be\n>>> wisest.\n>> A new version of the patch that doesn't add xreflabels is attached.\n>\n> Now this patch adds a bunch of ids, but you can't use them to link to,\n> because as soon as you do, you will get complaints about a missing\n> xreflabel.  So what is the remaining purpose?\n\nThe purpose is that you can directly link to the id in the public html\ndocs which still gets generated (e. g.\nhttps://www.postgresql.org/docs/14/protocol-replication.html#PROTOCOL-REPLICATION-BASE-BACKUP).\n\nEssentially it gives people discussing the protocol and pointing to a\ncertain command or message format the chance to link to the very thing\nthey are discussing instead of the top of the lengthy html page.\n\n\n\n\n", "msg_date": "Sat, 18 Dec 2021 00:53:54 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On Fri, Dec 17, 2021 at 6:54 PM Brar Piening <brar@gmx.de> wrote:\n> The purpose is that you can directly link to the id in the public html\n> docs which still gets generated (e. 
g.\n> https://www.postgresql.org/docs/14/protocol-replication.html#PROTOCOL-REPLICATION-BASE-BACKUP).\n>\n> Essentially it gives people discussing the protocol and pointing to a\n> certain command or message format the chance to link to the very thing\n> they are discussing instead of the top of the lengthy html page.\n\nAs a data point, this is something I have also wanted to do, from time\nto time. I am generally of the opinion that any place the\ndocumentation has a long list of things should add ids, so that\npeople can link to the particular thing in the list to which they want\nto draw someone's attention.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Dec 2021 10:09:35 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 20.12.2021 at 16:09, Robert Haas wrote:\n> As a data point, this is something I have also wanted to do, from time\n> to time. I am generally of the opinion that any place the\n> documentation has a long list of things should add ids, so that\n> people can link to the particular thing in the list to which they want\n> to draw someone's attention.\n>\nThank you.\n\nIf there is consensus on generally adding links to long lists I'd take\nsuggestions for other places where people think that this would make\nsense and amend my patch.\n\n\n\n\n", "msg_date": "Tue, 21 Dec 2021 08:47:27 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "\nOn 18.12.21 00:53, Brar Piening wrote:\n> The purpose is that you can directly link to the id in the public html\n> docs which still gets generated (e. 
\n> \n> Essentially it gives people discussing the protocol and pointing to a\n> certain command or message format the chance to link to the very thing\n> they are discussing instead of the top of the lengthy html page.\n\nIs there a way to obtain those URLs other than going into the HTML \nsources and checking if there is an anchor near where you want to go?\n\n\n", "msg_date": "Thu, 24 Feb 2022 13:18:36 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n\n> On 18.12.21 00:53, Brar Piening wrote:\n>> The purpose is that you can directly link to the id in the public html\n>> docs which still gets generated (e. g.\n>> https://www.postgresql.org/docs/14/protocol-replication.html#PROTOCOL-REPLICATION-BASE-BACKUP). \n>> Essentially it gives people discussing the protocol and pointing to a\n>> certain command or message format the chance to link to the very thing\n>> they are discussing instead of the top of the lengthy html page.\n>\n> Is there a way to obtain those URLs other than going into the HTML\n> sources and checking if there is an anchor near where you want to go?\n\nI use the jump-to-anchor extension: https://github.com/brettz9/jump-to-anchor/\n\nSome sites have javascript that adds a link next to the element that\nbecomes visible when hovering, e.g. 
the NAME and other headings on\nhttps://metacpan.org/pod/perl.\n\n- ilmari\n\n\n", "msg_date": "Thu, 24 Feb 2022 13:59:04 +0000", "msg_from": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 2022-Feb-24, Dagfinn Ilmari Mannsåker wrote:\n\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n\n> > Is there a way to obtain those URLs other than going into the HTML\n> > sources and checking if there is an anchor near where you want to go?\n> \n> I use the jump-to-anchor extension: https://github.com/brettz9/jump-to-anchor/\n> \n> Some sites have javascript that adds a link next to the element that\n> becomes visible when hovering, e.g. the NAME and other headings on\n> https://metacpan.org/pod/perl.\n\nWould it be possible to create such anchor links as part of the XSL\nstylesheets for HTML?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 24 Feb 2022 12:46:28 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 24.02.2022 at 16:46, Alvaro Herrera wrote:\n> On 2022-Feb-24, Dagfinn Ilmari Mannsåker wrote:\n>\n>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>> Is there a way to obtain those URLs other than going into the HTML\n>>> sources and checking if there is an anchor near where you want to go?\n>> I use the jump-to-anchor extension: https://github.com/brettz9/jump-to-anchor/\n>>\n>> Some sites have javascript that adds a link next to the element that\n>> becomes visible when hovering, e.g. 
the NAME and other headings on\n>> https://metacpan.org/pod/perl.\n> Would it be possible to create such anchor links as part of the XSL\n> stylesheets for HTML?\n>\nInitially I thought that most use cases would involve developers who\nwould be perfectly capable of extracting the id they need from the html\nsources but I agree that making that a bit more comfortable (especially\ngiven the fact that others do that too) seems worthwhile.\n\nI'll investigate our options and report back.\n\n\n\n\n", "msg_date": "Thu, 24 Feb 2022 17:07:07 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "At Tue, 21 Dec 2021 08:47:27 +0100, Brar Piening <brar@gmx.de> wrote in \n> On 20.12.2021 at 16:09, Robert Haas wrote:\n> > As a data point, this is something I have also wanted to do, from time\n> > to time. I am generally of the opinion that any place the\n\n+1 from me. When I put a URL in the answer for inquiries, I always\nlook into the html for name/id tags so that the inquirer quickly finds\nthe information source (or the backing or reference point) on the\npage. If not found, I place a snippet instead.\n\n> > documentation has a long list of things should add ids, so that\n> > people can link to the particular thing in the list to which they want\n> > to draw someone's attention.\n> >\n> Thank you.\n> \n> If there is consensus on generally adding links to long lists I'd take\n> suggestions for other places where people think that this would make\n> sense and amend my patch.\n\nI don't think there is.\n\nI remember sometimes wanting ids on some sections(x.x) and\nitems(x.x.x or lower) (or even clauses, ignoring costs:p)\n\nFWIW in that perspective, there's no requirement from me that it should\nbe human-readable. 
I'm fine with automatically-generated ids.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 25 Feb 2022 09:52:35 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 02/24/22 19:52, Kyotaro Horiguchi wrote:\n> FWIW in that perspecive, there's no requirement from me that it should\n> be human-readable. I'm fine with automatically-generated ids.\n\nOne thing I would be −many on, though, would be automatically-generated ids\nthat are not, somehow, stable. I've been burned too many times making links\nto auto-generated anchors in other projects' docs that change every time\nthey republish the wretched things.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 24 Feb 2022 20:01:56 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On Thu, Feb 24, 2022, 16:52 Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Tue, 21 Dec 2021 08:47:27 +0100, Brar Piening <brar@gmx.de> wrote in\n> > On 20.12.2021 at 16:09, Robert Haas wrote:\n> > > As a data point, this is something I have also wanted to do, from time\n> > > to time. I am generally of the opinion that any place the\n>\n> +1 from me. When I put an URL in the answer for inquiries, I always\n> look into the html for name/id tags so that the inquirer quickly find\n> the information source (or the backing or reference point) on the\n> page.\n\n\n+1 here as well. I often do the exact same thing. 
It's not hard, but it's a\nlittle tedious, especially considering most modern doc systems support\nlinkable sections.", "msg_date": "Thu, 24 Feb 2022 18:03:55 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 24.02.2022 at 17:07, Brar Piening wrote:\n> On 24.02.2022 at 16:46, Alvaro Herrera wrote:\n>> Would it be possible to create such anchor links as part of the XSL\n>> stylesheets for HTML?\n>>\n> I'll investiogate our options and report back.\n>\nYes, that would be possible. 
In fact appending a link and optionally\nadding a tiny bit of CSS like I show below does the trick.\n\nThe major problem in that regard would probably be my lack of\nXSLT/docbook skills but if no one can jump in here, I can see if I can\nmake it work.\n\nObviously adding the links via javascript would also work (and even be\neasier for me personally) but that seems like the second best solution\nto me since it involves javascript where no javascript is needed.\n\nPersonally I consider having ids to link to and making them more\ncomfortable to use/find as orthogonal problems in that case (mostly\ndeveloper documentation) so IMHO solving this doesn't necessarily need\nto hold back the original patch.\n\n<dl class=\"variablelist\">\n   <dt id=\"PROTOCOL-LOGICALREP-MESSAGE-FORMATS-INSERT\">\n     <span class=\"term\">Insert</span>\n     <a href=\"#PROTOCOL-LOGICALREP-MESSAGE-FORMATS-INSERT\"\nclass=\"anchor\">#</a></dt>\n   <dd>...</dd>\n</dl>\n\n<!-- Optional style to hide the links and make them visible on hover -->\n<style>\n.variablelist a.anchor {\n   visibility: hidden;\n}\n.variablelist *:hover > a.anchor,\n.variablelist a.anchor:focus {\n   visibility: visible;\n}\n</style>\n\n\n\n", "msg_date": "Fri, 25 Feb 2022 06:36:52 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 25.02.22 06:36, Brar Piening wrote:\n> Yes, that would be possible. 
In fact appending a link and optionally\n> adding a tiny bit of CSS like I show below does the trick.\n> \n> The major problem in that regard would probably be my lack of\n> XSLT/docbook skills but if no one can jump in here, I can see if I can\n> make it work.\n\nI think that kind of stuff would be added in via the web site \nstylesheets, so you wouldn't have to deal with XSLT at all.\n\n\n", "msg_date": "Fri, 25 Feb 2022 14:31:01 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On Feb 25, 2022 at 14:31, Peter Eisentraut wrote:\n> I think that kind of stuff would be added in via the web site\n> stylesheets, so you wouldn't have to deal with XSLT at all.\n\nTrue for the CSS but  adding the HTML (<a\nhref=\"#PROTOCOL-LOGICALREP-MESSAGE-FORMATS-INSERT\" class=\"anchor\">#</a>)\nwill need either XSLT or Javascript.\n\n\n\n\n", "msg_date": "Mon, 28 Feb 2022 09:41:14 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 28.02.22 09:41, Brar Piening wrote:\n> On Feb 25, 2022 at 14:31, Peter Eisentraut wrote:\n>> I think that kind of stuff would be added in via the web site\n>> stylesheets, so you wouldn't have to deal with XSLT at all.\n> \n> True for the CSS but  adding the HTML (<a\n> href=\"#PROTOCOL-LOGICALREP-MESSAGE-FORMATS-INSERT\" class=\"anchor\">#</a>)\n> will need either XSLT or Javascript.\n\nThat is already done by your proposed patch, isn't it?\n\n\n\n", "msg_date": "Mon, 28 Feb 2022 10:24:16 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 28.02.2022 at 10:24, Peter Eisentraut wrote:\n> On 28.02.22 09:41, Brar Piening wrote:\n>> On Feb 25, 2022 at 14:31, Peter Eisentraut 
wrote:\n>>> I think that kind of stuff would be added in via the web site\n>>> stylesheets, so you wouldn't have to deal with XSLT at all.\n>>\n>> True for the CSS but  adding the HTML (<a\n>> href=\"#PROTOCOL-LOGICALREP-MESSAGE-FORMATS-INSERT\" class=\"anchor\">#</a>)\n>> will need either XSLT or Javascript.\n>\n> That is already done by your proposed patch, isn't it?\n>\nNo it isn't. My proposed patch performs the simple task of adding ids to\nthe dt elements (e.g. <dt id=\"PROTOCOL-LOGICALREP-MESSAGE-FORMATS-INSERT\">).\n\nThis makes them usable as targets for links but they remain invisible to\nusers of the docs who don't know about them, and unusable to users who\ndon't know how to extract them from the HTML source code.\n\nThe links (e.g. <a href=\"#PROTOCOL-LOGICALREP-MESSAGE-FORMATS-INSERT\"\nclass=\"anchor\">#</a>) aren't added by the current XSLT transformations\nfrom DocBook to HTML.\n\nAdding them would create a visible element (I propose a hash '#') next\nto the description term (inside the <dt> element after the text) that\nyou can click on to put the link into the address bar, from where it can\nbe copied for further usage.\n\n\n\n\n", "msg_date": "Mon, 28 Feb 2022 12:17:11 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On Feb 25, 2022 at 06:36, Brar Piening wrote:\n> The major problem in that regard would probably be my lack of\n> XSLT/docbook skills but if no one can jump in here, I can see if I can\n> make it work.\n\nOk, I've figured it out.\n\nAttached is an extended version of the patch that changes the XSL and\nCSS stylesheets to add links to the ids that are visible when hovering.", "msg_date": "Mon, 28 Feb 2022 20:41:13 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 02/28/22 14:41, Brar Piening wrote:\n> Attached is 
an extended version of the patch that changes the XSL and\n> CSS stylesheets to add links to the ids that are visible when hovering.\n\nThat works nicely over here.\n\nI think that in other recent examples I've seen, there might be\n(something like a) consensus forming around the Unicode LINK SYMBOL\n&#x1f517; rather than # as the symbol for such things.\n\n... and now that the concept is proven, how hard would it be to broaden\nthat template's pattern to apply to all the other DocBook constructs\n(such as section headings) that emit anchors?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 28 Feb 2022 15:06:20 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On Feb 28, 2022 at 21:06, Chapman Flack wrote:\n> I think that in other recent examples I've seen, there might be\n> (something like a) consensus forming around the Unicode LINK SYMBOL\n> &#x1f517; rather than # as the symbol for such things.\n\nI intentionally opted for an ASCII character as that definitely won't\ncause any display/font/portability issues but changing that is no problem.\n\n> ... and now that the concept is proven, how hard would it be to broaden\n> that template's pattern to apply to all the other DocBook constructs\n> (such as section headings) that emit anchors?\n\nAs long as we stick to manually assigned ids in the same way my patch\ndoes it, it shouldn't be too hard. 
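The core of it is a small template in the XSLT customization layer, roughly like this (a sketch with illustrative names, not the exact code from the patch):

```xml
<!-- Sketch: emit a self-referencing "#" link for elements that carry an
     explicit id.  Meant to be called from the templates that produce the
     <dt>/heading output; the class matches the hover CSS shown earlier. -->
<xsl:template name="id.link">
  <xsl:if test="@id">
    <a class="anchor" href="#{@id}">#</a>
  </xsl:if>
</xsl:template>
```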
Speaking of autogenerated ids, I\nfailed to make use of them since I wasn't able to reproduce the same\nautogenerated id twice in order to use it for the link.\n\nAlso I'm not sure how well the autogenerated ids are reproducible over\ntime/versions/builds, and if it is wise to use them as targets to link\nto from somewhere else.\n\n\n\n\n", "msg_date": "Tue, 1 Mar 2022 18:27:38 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On Mar 01, 2022 at 18:27, Brar Piening wrote:\n> On Feb 28, 2022 at 21:06, Chapman Flack wrote:\n>> I think that in other recent examples I've seen, there might be\n>> (something like a) consensus forming around the Unicode LINK SYMBOL\n>> &#x1f517; rather than # as the symbol for such things.\n>\n> I intentionally opted for an ASCII character as that definitely won't\n> cause any display/font/portability issues but changing that is no\n> problem.\n\nTBH I don't like the visual representation of the unicode link symbol\n(U+1F517) in my browser. It's a bold black fat thing that doesn't\ninherit colors. I've tried to soften it by decreasing the size but that\ndoesn't really solve it for me. Font support also doesn't seem\noverwhelming. Anyway, I've changed my patch to use it so that you can\njudge it yourself.\n\n>> ... and now that the concept is proven, how hard would it be to broaden\n>> that template's pattern to apply to all the other DocBook constructs\n>> (such as section headings) that emit anchors?\n>\n> As long as we stick to manually assigned ids in the same way my patch\n> does it, it shouldn't be too hard.\n\nPatch is attached. I don't think it should get applied this way, though.\nThe fact that you only get links for section headers that have manually\nassigned ids would be pretty surprising for users of the docs and in\nsome files (e.g. protocol-flow.html) only every other section has a\nmanually assigned id. 
It would be easy to emit a message (or even fail)\nwhenever the template fails to find an id and then manually assign ids\nuntil they are everywhere (currently that means all varlistentries and\nsections) but that would a) be quite some work and b) make the patch\nquite heavy, so I wouldn't even start this before there is really\nconsensus that this is the right direction.", "msg_date": "Tue, 1 Mar 2022 20:50:39 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 03/01/22 14:50, Brar Piening wrote:\n> TBH I don't like the visual representation of the unicode link symbol\n> (U+1F517) in my browser. It's a bold black fat thing that doesn't\n> inherit colors. I've tried to soften it by decreasing the size but that\n> doesn't really solve it for me. Font support also doesn't seem\n> overwhelming.\n\nThat sounds like it's probably in less wide use than I thought, and if the\nfont support is spotty, that seems like a good enough reason not to go\nthere. I've no objection to the # symbol. Maybe this should really get\na comment from someone more actively involved in styling the web site.\n\n>> As long as we stick to manually assigned ids in the same way my patch\n>> does it, it shouldn't be too hard.\n> \n> Patch is attached. I don't think it should get applied this way, though.\n> The fact that you only get links for section headers that have manually\n> assigned ids would be pretty surprising for users of the docs and in\n> some files (e.g. protocol-flow.html) only every other section has a\n> manually assigned id. 
It would be easy to emit a message (or even fail)\n> whenever the template fails to find an id and then manually assign ids\n> until they are everywhere (currently that means all varlistentries and\n> sections) but that would a) be quite some work and b) make the patch\n> quite heavy, so I wouldn't even start this before there is really\n> consensus that this is the right direction.\n\nThis sounds like a bigger deal, and I wonder if it is big enough to merit\nsplitting the patch, so the added ids can go into protocol.sgml promptly\n(and not be any harder to find than any of our fragment ids currently are),\nand \"improve html docs to expose fragment ids\" can get more thought.\n\nAs long as we haven't assigned ids to all sections, I could almost think\nof the surprising behavior as a feature, distinguishing the links you can\nreasonably bet on being stable from the ones you can't. (Maybe the latter\nshould have their own symbol! 1F3B2?) But you're probably right that it\nwould seem surprising and arbitrary. And I don't know how much enthusiasm\nthere will be for assigning ids everywhere.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 1 Mar 2022 15:33:21 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 01.03.22 20:50, Brar Piening wrote:\n> Patch is attached. I don't think it should get applied this way, though.\n> The fact that you only get links for section headers that have manually\n> assigned ids would be pretty surprising for users of the docs and in\n> some files (e.g. protocol-flow.html) only every other section has a\n> manually assigned id. 
It would be easy to emit a message (or even fail)\n> whenever the template fails to find an id and then manually assign ids\n> until they are everywhere (currently that means all varlistentries and\n> sections) but that would a) be quite some work and b) make the patch\n> quite heavy, so I wouldn't even start this before there is really\n> consensus that this is the right direction.\n\nI have applied the part of your patch that adds the id's. The \ndiscussion about the formatting aspect can continue.\n\nI changed the id's for the protocol messages to mixed case, so that it \nmatches how these are commonly referred to elsewhere. It doesn't affect \nthe output.\n\n\n\n", "msg_date": "Wed, 2 Mar 2022 10:37:36 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 01.03.22 18:27, Brar Piening wrote:\n> Also I'm not sure how well the autogenerated ids are reproducible over\n> time/versions/builds, and if it is wise to use them as targets to link\n> to from somewhere else.\n\nAutogenerated ids are stable across builds of the same source. They \nwould change if the document structure is changed, for example, a \nsection is inserted.\n\n\n", "msg_date": "Wed, 2 Mar 2022 10:40:18 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 02.03.2022 at 10:37, Peter Eisentraut wrote:\n> I have applied the part of your patch that adds the id's.  The\n> discussion about the formatting aspect can continue.\n\nThank you!\n\nI've generated some data by outputting the element name whenever a\nsection or varlistentry lacks an id. 
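(The counting itself is trivial; a rough sketch of the idea as a throwaway XSLT/XPath snippet — element names illustrative, not the actual tooling I used:)

```xml
<!-- Count elements lacking an id; xsl:message prints during the build
     without changing the generated output. -->
<xsl:message>
  <xsl:text>sect2 without id: </xsl:text>
  <xsl:value-of select="count(//sect2[not(@id)])"/>
  <xsl:text>; varlistentry without id: </xsl:text>
  <xsl:value-of select="count(//varlistentry[not(@id)])"/>
</xsl:message>
```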
This is what the situation in the\ndocs currently looks like:\n\n    element    | count\n--------------+-------\n  sect2        |   275\n  sect3        |    94\n  sect4        |    20\n  simplesect   |    20\n  varlistentry |  3976\n\nLooking at this, I think that manually assigning an id to all ~400\nsections currently lacking one to make them referable in a consistent\nway is a bit of work but feasible.\n\nOnce we consistently have stable ids on section headers IMHO it makes\nsense to also expose them as links. I'd probably also make the\nstylesheet emit a non-terminating message/comment whenever it finds a\nsection without id in order to help keep the layout consistent over time.\n\nWith regard to varlistentry I'd suggest deciding whether to add ids or\nnot on a case-by-case basis. I already offered to add ids to long lists\nupon request but I wouldn't want to blindly add ~4k ids that nobody\ncares about. That part can also grow over time by people adding ids as\nthey deem them useful.\n\nAny objections/thoughts?\n\n\n\n", "msg_date": "Wed, 2 Mar 2022 18:46:22 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 03/02/22 12:46, Brar Piening wrote:\n> With regard to varlistentry I'd suggest deciding whether to add ids or\n> not on a case-by-case basis. 
I already offered to add ids to long lists\n> upon request but I wouldn't want to blindly add ~4k ids that nobody\n\nPerhaps there are a bunch of variablelists where no one cares about\nlinking to any of the entries.\n\nSo maybe a useful non-terminating message to add eventually would\nbe one that identifies any varlistentry lacking an id, with a\nvariablelist where at least one other entry has an id.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 2 Mar 2022 12:54:08 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 02.03.2022 at 18:54, Chapman Flack wrote:\n> Perhaps there are a bunch of variablelists where no one cares about\n> linking to any of the entries.\n>\n> So maybe a useful non-terminating message to add eventually would\n> be one that identifies any varlistentry lacking an id, with a\n> variablelist where at least one other entry has an id.\n\nThat sounds like a reasonable approach for now.\n\nIs there anybody objecting to pursuing this? 
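For the non-terminating message itself, I'm thinking of something along these lines (a sketch only; the match pattern, mode, and wording are illustrative):

```xml
<!-- Warn about a varlistentry lacking an id without stopping the build:
     xsl:message without terminate="yes" is non-fatal. -->
<xsl:template match="varlistentry[not(@id)]" mode="check-ids">
  <xsl:message>
    <xsl:text>varlistentry without id near: </xsl:text>
    <xsl:value-of select="normalize-space(term[1])"/>
  </xsl:message>
</xsl:template>
```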
Do you need more examples of how\nit would look?\n\nIt would be a bit painful to generate a patch that manually adds ~600\nids just to have it rejected as unwanted.\n\n\n\n\n", "msg_date": "Thu, 3 Mar 2022 15:17:12 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 03.03.2022 at 15:17, Brar Piening wrote:\r\n> On 02.03.2022 at 18:54, Chapman Flack wrote:\r\n>> Perhaps there are a bunch of variablelists where no one cares about\r\n>> linking to any of the entries.\r\n>>\r\n>> So maybe a useful non-terminating message to add eventually would\r\n>> be one that identifies any varlistentry lacking an id, with a\r\n>> variablelist where at least one other entry has an id.\r\n>\r\n> That sounds like a reasonable approach for now.\r\n\r\nAttached is a pretty huge patch that adds ids to all sections and all \r\nthe varlistentries where the containing variablelist already had at \r\nleast one id (plus a few additional ones that I stumbled upon and deemed \r\nuseful). It also adds html links next to the respective heading in the \r\nhtml documentation and emits a build message and a comment when a \r\nsection or a relevant (see before) varlistentry doesn't have an id.\r\n\r\nI don't really like how the length of the id tends to grow for deeply \r\nnested elements if you try to create it in a systematic way, so if \r\n
I did so because I consider URLs as \r\nuser interface that shouldn't change without a really good reason.\r\n\r\nI also didn't add any ids to refsect{1:2:3} elements, although we should \r\nprobably discuss this since within the docs there ist no real visual \r\ndifference between a sect{1;2;3} and a refsect{1:2:3}.\r\n\r\nHere are some stats about elements where at least one currently has an \r\nid before and after my patch for today's HEAD.\r\n\r\nBefore:\r\n      name      | with_id | without_id | id_coverage | max_id_len\r\n---------------+---------+------------+-------------+------------\r\n  sect1         |     726 |          0 |      100.00 |         46\r\n  refentry      |     306 |          0 |      100.00 |         37\r\n  chapter       |      74 |          0 |      100.00 |         24\r\n  biblioentry   |      23 |          0 |      100.00 |         15\r\n  appendix      |      15 |          0 |      100.00 |         23\r\n  part          |       8 |          0 |      100.00 |         20\r\n  co            |       4 |          0 |      100.00 |         30\r\n  figure        |       3 |          0 |      100.00 |         28\r\n  reference     |       3 |          0 |      100.00 |         18\r\n  anchor        |       1 |          0 |      100.00 |         21\r\n  bibliography  |       1 |          0 |      100.00 |          8\r\n  book          |       1 |          0 |      100.00 |         10\r\n  index         |       1 |          0 |      100.00 |         11\r\n  legalnotice   |       1 |          0 |      100.00 |         13\r\n  preface       |       1 |          0 |      100.00 |          9\r\n  glossentry    |     115 |         14 |       89.15 |         32\r\n  sect2         |     568 |        274 |       67.46 |         45\r\n  table         |     280 |        161 |       63.49 |         46\r\n  example       |      27 |         16 |       62.79 |         42\r\n  refsect3      |       5 |          3 |       62.50 |         24\r\n  sect3         
|     110 |         94 |       53.92 |         49\r\n  refsect2      |      39 |         55 |       41.49 |         36\r\n  sect4         |       8 |         20 |       28.57 |         27\r\n  footnote      |       5 |         18 |       21.74 |         32\r\n  step          |      25 |        128 |       16.34 |         28\r\n  varlistentry  |     746 |       3976 |       15.80 |         58\r\n  refsect1      |     151 |       1326 |       10.22 |         40\r\n  informaltable |       1 |         15 |        6.25 |         25\r\n  phrase        |       1 |         81 |        1.22 |         20\r\n  indexterm     |       5 |       3225 |        0.15 |         26\r\n  variablelist  |       1 |        800 |        0.12 |         21\r\n  function      |       4 |       4000 |        0.10 |         28\r\n  entry         |      10 |      17609 |        0.06 |         40\r\n  para          |       3 |      25180 |        0.01 |         27\r\n\r\n\r\n  After:\r\n      name      | with_id | without_id | id_coverage | max_id_len\r\n---------------+---------+------------+-------------+------------\r\n  sect2         |     842 |          0 |      100.00 |         49\r\n  sect1         |     726 |          0 |      100.00 |         46\r\n  refentry      |     306 |          0 |      100.00 |         37\r\n  sect3         |     204 |          0 |      100.00 |         57\r\n  chapter       |      74 |          0 |      100.00 |         24\r\n  sect4         |      28 |          0 |      100.00 |         47\r\n  biblioentry   |      23 |          0 |      100.00 |         15\r\n  simplesect    |      20 |          0 |      100.00 |         39\r\n  appendix      |      15 |          0 |      100.00 |         23\r\n  part          |       8 |          0 |      100.00 |         20\r\n  co            |       4 |          0 |      100.00 |         30\r\n  figure        |       3 |          0 |      100.00 |         28\r\n  reference     |       3 |          0 |      100.00 |         
18\r\n  anchor        |       1 |          0 |      100.00 |         21\r\n  bibliography  |       1 |          0 |      100.00 |          8\r\n  book          |       1 |          0 |      100.00 |         10\r\n  index         |       1 |          0 |      100.00 |         11\r\n  legalnotice   |       1 |          0 |      100.00 |         13\r\n  preface       |       1 |          0 |      100.00 |          9\r\n  glossentry    |     115 |         14 |       89.15 |         32\r\n  table         |     280 |        161 |       63.49 |         46\r\n  example       |      27 |         16 |       62.79 |         42\r\n  refsect3      |       5 |          3 |       62.50 |         24\r\n  refsect2      |      39 |         55 |       41.49 |         36\r\n  varlistentry  |    1607 |       3115 |       34.03 |         61\r\n  footnote      |       5 |         18 |       21.74 |         32\r\n  step          |      25 |        128 |       16.34 |         28\r\n  refsect1      |     151 |       1326 |       10.22 |         40\r\n  informaltable |       1 |         15 |        6.25 |         25\r\n  phrase        |       1 |         81 |        1.22 |         20\r\n  indexterm     |       5 |       3225 |        0.15 |         26\r\n  variablelist  |       1 |        800 |        0.12 |         21\r\n  function      |       4 |       4000 |        0.10 |         28\r\n  entry         |      10 |      17609 |        0.06 |         40\r\n  para          |       3 |      25180 |        0.01 |         27\r\n\r\nRegards,\r\n\r\nBrar", "msg_date": "Wed, 9 Mar 2022 20:43:45 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Add id's to various elements in protocol.sgml" }, { "msg_contents": "On 09.03.2022 at 20:43, Brar Piening wrote:\n> Attached is a pretty huge patch that adds ids to all sections and all\n> the varlistentries where the containing variablelist already had at\n> least one id (plus a few additional ones that I stumbled 
upon and\n> deemed useful). It also adds html links next to the respective heading\n> in the html documentation and emits a build message and a comment when\n> a section or a relevant (see before) varlistentry doesn't have an id.\n\nI have uploaded a doc build with the patch applied to\nhttps://pgdocs.piening.info/ to make it easier for you all to review the\nresults and see what is there and what isn't and how it feels UI-wise.\n\nYou may want to look at https://pgdocs.piening.info/app-psql.html where\nthe patch adds ids and links to all varlistentries but doesn't do so for\nthe headings (because they are refsect1 headings not sect1 headings).\n\nhttps://pgdocs.piening.info/protocol-flow.html is pretty much the\nopposite. The patch adds ids and links to the headings (they are sect2\nheadings) but doesn't add them to the varlistentries (yet - because I\nmostly stuck to the algorithm suggested at\nhttps://www.postgresql.org/message-id/621FAF40.5070507%40anastigmatix.net\nto contain the workload).\n\n\n\n\n\n", "msg_date": "Sun, 13 Mar 2022 11:26:25 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": true, "msg_subject": "Re: Add id's to various elements in protocol.sgml" } ]
[ { "msg_contents": "Certain TAP tests rely on settings that the Make files provide for them.\nHowever vcregress.pl doesn't provide those settings. This patch remedies\nthat, and I propose to apply it shortly (when we have a fix for the SSL\ntests that I will write about separately) and backpatch it appropriately.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 5 Dec 2021 11:57:31 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "enable certain TAP tests for MSVC builds" }, { "msg_contents": "On Sun, Dec 05, 2021 at 11:57:31AM -0500, Andrew Dunstan wrote:\n> Certain TAP tests rely on settings that the Make files provide for them.\n> However vcregress.pl doesn't provide those settings. This patch remedies\n> that, and I propose to apply it shortly (when we have a fix for the SSL\n> tests that I will write about separately) and backpatch it appropriately.\n\n> --- a/src/tools/msvc/vcregress.pl\n> +++ b/src/tools/msvc/vcregress.pl\n> @@ -59,6 +59,12 @@ copy(\"$Config/autoinc/autoinc.dll\", \"src/test/regress\");\n> copy(\"$Config/regress/regress.dll\", \"src/test/regress\");\n> copy(\"$Config/dummy_seclabel/dummy_seclabel.dll\", \"src/test/regress\");\n> \n> +# Settings used by TAP tests\n> +$ENV{with_ssl} = $config->{openssl} ? 'openssl' : 'no';\n> +$ENV{with_ldap} = $config->{ldap} ? 'yes' : 'no';\n> +$ENV{with_icu} = $config->{icu} ? 'yes' : 'no';\n> +$ENV{with_gssapi} = $config->{gss} ? 'yes' : 'no';\n\nThat's appropriate. 
There are more variables to cover:\n\n$ git grep -n ^export ':(glob)**/Makefile'\nsrc/bin/pg_basebackup/Makefile:22:export LZ4\nsrc/bin/pg_basebackup/Makefile:23:export TAR\nsrc/bin/pg_basebackup/Makefile:27:export GZIP_PROGRAM=$(GZIP)\nsrc/bin/psql/Makefile:20:export with_readline\nsrc/test/kerberos/Makefile:16:export with_gssapi with_krb_srvnam\nsrc/test/ldap/Makefile:16:export with_ldap\nsrc/test/modules/ssl_passphrase_callback/Makefile:3:export with_ssl\nsrc/test/recovery/Makefile:20:export REGRESS_SHLIB\nsrc/test/ssl/Makefile:18:export with_ssl\nsrc/test/subscription/Makefile:18:export with_icu\n\n\n", "msg_date": "Sun, 5 Dec 2021 11:47:55 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: enable certain TAP tests for MSVC builds" }, { "msg_contents": "\nOn 12/5/21 14:47, Noah Misch wrote:\n> On Sun, Dec 05, 2021 at 11:57:31AM -0500, Andrew Dunstan wrote:\n>> Certain TAP tests rely on settings that the Make files provide for them.\n>> However vcregress.pl doesn't provide those settings. This patch remedies\n>> that, and I propose to apply it shortly (when we have a fix for the SSL\n>> tests that I will write about separately) and backpatch it appropriately.\n>> --- a/src/tools/msvc/vcregress.pl\n>> +++ b/src/tools/msvc/vcregress.pl\n>> @@ -59,6 +59,12 @@ copy(\"$Config/autoinc/autoinc.dll\", \"src/test/regress\");\n>> copy(\"$Config/regress/regress.dll\", \"src/test/regress\");\n>> copy(\"$Config/dummy_seclabel/dummy_seclabel.dll\", \"src/test/regress\");\n>> \n>> +# Settings used by TAP tests\n>> +$ENV{with_ssl} = $config->{openssl} ? 'openssl' : 'no';\n>> +$ENV{with_ldap} = $config->{ldap} ? 'yes' : 'no';\n>> +$ENV{with_icu} = $config->{icu} ? 'yes' : 'no';\n>> +$ENV{with_gssapi} = $config->{gss} ? 'yes' : 'no';\n> That's appropriate. 
There are more variables to cover:\n>\n> $ git grep -n ^export ':(glob)**/Makefile'\n> src/bin/pg_basebackup/Makefile:22:export LZ4\n> src/bin/pg_basebackup/Makefile:23:export TAR\n> src/bin/pg_basebackup/Makefile:27:export GZIP_PROGRAM=$(GZIP)\n> src/bin/psql/Makefile:20:export with_readline\n> src/test/kerberos/Makefile:16:export with_gssapi with_krb_srvnam\n> src/test/ldap/Makefile:16:export with_ldap\n> src/test/modules/ssl_passphrase_callback/Makefile:3:export with_ssl\n> src/test/recovery/Makefile:20:export REGRESS_SHLIB\n> src/test/ssl/Makefile:18:export with_ssl\n> src/test/subscription/Makefile:18:export with_icu\n\n\nLZ4/TAR/GZIP_PROGRAM: It's not clear what these should be set to. The TAP\ntests skip tests that use them if they are not set.\n\nwith_readline: we don't build with readline on Windows, period. I guess\nwe could just set it to \"no\".\n\nREGRESS_SHLIB: already set in vcregress.pl\n\nwith_krb_srvnam: the default is \"postgres\", we could just set it to that\nI guess.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 5 Dec 2021 18:00:08 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: enable certain TAP tests for MSVC builds" }, { "msg_contents": "On Sun, Dec 05, 2021 at 06:00:08PM -0500, Andrew Dunstan wrote:\n> On 12/5/21 14:47, Noah Misch wrote:\n> > On Sun, Dec 05, 2021 at 11:57:31AM -0500, Andrew Dunstan wrote:\n> >> Certain TAP tests rely on settings that the Make files provide for them.\n> >> However vcregress.pl doesn't provide those settings. 
This patch remedies\n> >> that, and I propose to apply it shortly (when we have a fix for the SSL\n> >> tests that I will write about separately) and backpatch it appropriately.\n> >> --- a/src/tools/msvc/vcregress.pl\n> >> +++ b/src/tools/msvc/vcregress.pl\n> >> @@ -59,6 +59,12 @@ copy(\"$Config/autoinc/autoinc.dll\", \"src/test/regress\");\n> >> copy(\"$Config/regress/regress.dll\", \"src/test/regress\");\n> >> copy(\"$Config/dummy_seclabel/dummy_seclabel.dll\", \"src/test/regress\");\n> >> \n> >> +# Settings used by TAP tests\n> >> +$ENV{with_ssl} = $config->{openssl} ? 'openssl' : 'no';\n> >> +$ENV{with_ldap} = $config->{ldap} ? 'yes' : 'no';\n> >> +$ENV{with_icu} = $config->{icu} ? 'yes' : 'no';\n> >> +$ENV{with_gssapi} = $config->{gss} ? 'yes' : 'no';\n> > That's appropriate. There are more variables to cover:\n> >\n> > $ git grep -n ^export ':(glob)**/Makefile'\n> > src/bin/pg_basebackup/Makefile:22:export LZ4\n> > src/bin/pg_basebackup/Makefile:23:export TAR\n> > src/bin/pg_basebackup/Makefile:27:export GZIP_PROGRAM=$(GZIP)\n> > src/bin/psql/Makefile:20:export with_readline\n> > src/test/kerberos/Makefile:16:export with_gssapi with_krb_srvnam\n> > src/test/ldap/Makefile:16:export with_ldap\n> > src/test/modules/ssl_passphrase_callback/Makefile:3:export with_ssl\n> > src/test/recovery/Makefile:20:export REGRESS_SHLIB\n> > src/test/ssl/Makefile:18:export with_ssl\n> > src/test/subscription/Makefile:18:export with_icu\n> \n> LZ4/TAR/GZIP_PROGAM: It's not clear what these should be set to. The TAP\n> tests skip tests that use them if they are not set.\n\nCould add config.pl entries for those. Preventing those skips on Windows may\nor may not be worth making config.pl readers think about them.\n\n> with_readline: we don't build with readline on Windows, period. 
I guess\n> we could just set it to \"no\".\n\n> with_krb_srvnam: the default is \"postgres\", we could just set it to that\n> I guess.\n\nWorks for me.\n\n\n", "msg_date": "Sun, 5 Dec 2021 15:32:38 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: enable certain TAP tests for MSVC builds" }, { "msg_contents": "\nOn 12/5/21 18:32, Noah Misch wrote:\n> On Sun, Dec 05, 2021 at 06:00:08PM -0500, Andrew Dunstan wrote:\n>> On 12/5/21 14:47, Noah Misch wrote:\n>>> On Sun, Dec 05, 2021 at 11:57:31AM -0500, Andrew Dunstan wrote:\n>>>> Certain TAP tests rely on settings that the Make files provide for them.\n>>>> However vcregress.pl doesn't provide those settings. This patch remedies\n>>>> that, and I propose to apply it shortly (when we have a fix for the SSL\n>>>> tests that I will write about separately) and backpatch it appropriately.\n>>>> --- a/src/tools/msvc/vcregress.pl\n>>>> +++ b/src/tools/msvc/vcregress.pl\n>>>> @@ -59,6 +59,12 @@ copy(\"$Config/autoinc/autoinc.dll\", \"src/test/regress\");\n>>>> copy(\"$Config/regress/regress.dll\", \"src/test/regress\");\n>>>> copy(\"$Config/dummy_seclabel/dummy_seclabel.dll\", \"src/test/regress\");\n>>>> \n>>>> +# Settings used by TAP tests\n>>>> +$ENV{with_ssl} = $config->{openssl} ? 'openssl' : 'no';\n>>>> +$ENV{with_ldap} = $config->{ldap} ? 'yes' : 'no';\n>>>> +$ENV{with_icu} = $config->{icu} ? 'yes' : 'no';\n>>>> +$ENV{with_gssapi} = $config->{gss} ? 'yes' : 'no';\n>>> That's appropriate. 
There are more variables to cover:\n>>>\n>>> $ git grep -n ^export ':(glob)**/Makefile'\n>>> src/bin/pg_basebackup/Makefile:22:export LZ4\n>>> src/bin/pg_basebackup/Makefile:23:export TAR\n>>> src/bin/pg_basebackup/Makefile:27:export GZIP_PROGRAM=$(GZIP)\n>>> src/bin/psql/Makefile:20:export with_readline\n>>> src/test/kerberos/Makefile:16:export with_gssapi with_krb_srvnam\n>>> src/test/ldap/Makefile:16:export with_ldap\n>>> src/test/modules/ssl_passphrase_callback/Makefile:3:export with_ssl\n>>> src/test/recovery/Makefile:20:export REGRESS_SHLIB\n>>> src/test/ssl/Makefile:18:export with_ssl\n>>> src/test/subscription/Makefile:18:export with_icu\n>> LZ4/TAR/GZIP_PROGAM: It's not clear what these should be set to. The TAP\n>> tests skip tests that use them if they are not set.\n> Could add config.pl entries for those. Preventing those skips on Windows may\n> or may not be worth making config.pl readers think about them.\n>\n>> with_readline: we don't build with readline on Windows, period. 
I guess\n>> we could just set it to \"no\".\n>> with_krb_srvnam: the default is \"postgres\", we could just set it to that\n>> I guess.\n> Works for me.\n\n\n\nAll done.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 7 Dec 2021 15:40:29 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: enable certain TAP tests for MSVC builds" }, { "msg_contents": "On Tue, Dec 07, 2021 at 03:40:29PM -0500, Andrew Dunstan wrote:\n> All done.\n\nbowerbird is complaining here with the tests of pg_basebackup:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2021-12-08%2004%3A52%3A27\n\ntar: Cannot execute remote shell: No such file or directory\ntar: H\\\\:/prog/bf/root/HEAD/pgsql.build/src/bin/pg_basebackup/tmp_check/t_010_pg_basebackup_main_data/backup/tarbackup2/base.tar:\nCannot open: I/O error\ntar: Error is not recoverable: exiting now\n\nThis comes from init_from_backup() via the tar_command. I am not sure\nwhat to think about this error. Perhaps this is related to the\nenvironment.\n\n+$ENV{TAR} ||= 'tar';\n+$ENV{LZ4} ||= 'lz4';\n+$ENV{GZIP_PROGRAM} ||= 'gzip';\nThis means that the default is to assume that those commands will be\nalways present, no? But that may not be the case, which would cause\nthe tests of pg_basebackup to fail in 010_pg_basebackup.pl. 
Shouldn't\nwe at least check that the command can be executed if defined and skip\nthe tests if not, in the same fashion as LZ4 and GZIP?\n--\nMichael", "msg_date": "Wed, 8 Dec 2021 17:14:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: enable certain TAP tests for MSVC builds" }, { "msg_contents": "\nOn 12/8/21 03:14, Michael Paquier wrote:\n> On Tue, Dec 07, 2021 at 03:40:29PM -0500, Andrew Dunstan wrote:\n>> All done.\n> bowerbird is complaining here with the tests of pg_basebackup:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2021-12-08%2004%3A52%3A27\n>\n> tar: Cannot execute remote shell: No such file or directory\n> tar: H\\\\:/prog/bf/root/HEAD/pgsql.build/src/bin/pg_basebackup/tmp_check/t_010_pg_basebackup_main_data/backup/tarbackup2/base.tar:\n> Cannot open: I/O error\n> tar: Error is not recoverable: exiting now\n>\n> This comes from init_from_backup() via the tar_command. I am not sure\n> what to think about this error. Perhaps this is related to the\n> environment.\n>\n> +$ENV{TAR} ||= 'tar';\n> +$ENV{LZ4} ||= 'lz4';\n> +$ENV{GZIP_PROGRAM} ||= 'gzip';\n> This means that the default is to assume that those commands will be\n> always present, no? But that may not be the case, which would cause\n> the tests of pg_basebackup to fail in 010_pg_basebackup.pl. Shouldn't\n> we at least check that the command can be executed if defined and skip\n> the tests if not, in the same fashion as LZ4 and GZIP?\n\n\n\nYes, I'll fix it.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 8 Dec 2021 08:56:01 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: enable certain TAP tests for MSVC builds" } ]
[ { "msg_contents": "\nI am getting this test failure in 001_ssltests.pl on my test MSVC system\nwhen SSL tests are enabled:\n\n not ok 110 - certificate authorization fails with revoked client cert with server-side CRL directory: matches\n\n # Failed test 'certificate authorization fails with revoked client cert with server-side CRL directory: matches'\n # at t/001_ssltests.pl line 618.\n # 'psql: error: connection to server at \"127.0.0.1\", port 59491 failed: server closed the connection unexpectedly\n # This probably means the server terminated abnormally\n # before or while processing the request.\n # server closed the connection unexpectedly\n # This probably means the server terminated abnormally\n # before or while processing the request.\n # server closed the connection unexpectedly\n # This probably means the server terminated abnormally\n # before or while processing the request.'\n # doesn't match '(?^:SSL error: sslv3 alert certificate revoked)'\n\nThere's nothing terribly suspicious in the server log, so I'm not quite\nsure what's going on here.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 5 Dec 2021 12:03:18 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "MSVC SSL test failure" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I am getting this test failure in 001_ssltests.pl on my test MSVC system\n> when SSL tests are enabled:\n\n> not ok 110 - certificate authorization fails with revoked client cert with server-side CRL directory: matches\n\n> # Failed test 'certificate authorization fails with revoked client cert with server-side CRL directory: matches'\n> # at t/001_ssltests.pl line 618.\n> # 'psql: error: connection to server at \"127.0.0.1\", port 59491 failed: server closed the connection unexpectedly\n> # This probably means the server terminated abnormally\n> # before or while processing the request.\n> # server 
closed the connection unexpectedly\n> # This probably means the server terminated abnormally\n> # before or while processing the request.\n> # server closed the connection unexpectedly\n> # This probably means the server terminated abnormally\n> # before or while processing the request.'\n> # doesn't match '(?^:SSL error: sslv3 alert certificate revoked)'\n\n> There's nothing terribly suspicious in the server log, so I'm not quite\n> sure what's going on here.\n\nHmm .. does enabling log_connections/log_disconnections produce\nany useful info?\n\nThis looks quite a bit like the sort of failure that commit\n6051857fc was meant to forestall. I wonder whether reverting\nthat commit changes the results? You might also try inserting\na shutdown() call, as we'd decided not to do [1].\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/1283317.1638213407%40sss.pgh.pa.us\n\n\n", "msg_date": "Sun, 05 Dec 2021 12:50:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MSVC SSL test failure" }, { "msg_contents": "> On 5 Dec 2021, at 18:03, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> I am getting this test failure 001_ssltests.pl on my test MSVC system\n> when SSL tests are enabled:\n> \n> not ok 110 - certificate authorization fails with revoked client cert with server-side CRL directory: matches\n> \n> # Failed test 'certificate authorization fails with revoked client cert with server-side CRL directory: matches'\n> # at t/001_ssltests.pl line 618.\n> # 'psql: error: connection to server at \"127.0.0.1\", port 59491 failed: server closed the connection unexpectedly\n> # This probably means the server terminated abnormally\n> # before or while processing the request.\n> # server closed the connection unexpectedly\n> # This probably means the server terminated abnormally\n> # before or while processing the request.\n> # server closed the connection unexpectedly\n> # This probably means the server terminated 
abnormally\n> # before or while processing the request.'\n> # doesn't match '(?^:SSL error: sslv3 alert certificate revoked)'\n> \n> There's nothing terribly suspicious in the server log, so I'm not quite\n> sure what's going on here.\n\nTrying out HEAD as of 20 minutes ago in Andres' MSVC setup on Cirrus CI I'm\nunable to replicate this. Did you remake the keys/certs or are they the files\nfrom the repo? Which version of OpenSSL is this using?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Sun, 5 Dec 2021 21:14:48 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: MSVC SSL test failure" }, { "msg_contents": "\nOn 12/5/21 15:14, Daniel Gustafsson wrote:\n>> On 5 Dec 2021, at 18:03, Andrew Dunstan <andrew@dunslane.net> wrote:\n>> I am getting this test failure 001_ssltests.pl on my test MSVC system\n>> when SSL tests are enabled:\n>>\n>> not ok 110 - certificate authorization fails with revoked client cert with server-side CRL directory: matches\n>>\n>> # Failed test 'certificate authorization fails with revoked client cert with server-side CRL directory: matches'\n>> # at t/001_ssltests.pl line 618.\n>> # 'psql: error: connection to server at \"127.0.0.1\", port 59491 failed: server closed the connection unexpectedly\n>> # This probably means the server terminated abnormally\n>> # before or while processing the request.\n>> # server closed the connection unexpectedly\n>> # This probably means the server terminated abnormally\n>> # before or while processing the request.\n>> # server closed the connection unexpectedly\n>> # This probably means the server terminated abnormally\n>> # before or while processing the request.'\n>> # doesn't match '(?^:SSL error: sslv3 alert certificate revoked)'\n>>\n>> There's nothing terribly suspicious in the server log, so I'm not quite\n>> sure what's going on here.\n> Trying out HEAD as of 20 minutes ago in Andres' MSVC setup on Cirrus CI I'm\n> unable to 
replicate this. Did you remake the keys/certs or are they the files\n> from the repo? Which version of OpenSSL is this using?\n>\n\nIt's WS2019 Build 1809, VS2019, openssl 1.1.1 (via chocolatey).\n\n\nCan you show me the cirrus.yml file you're using to test with? A URL ref\nto the results would also help.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 5 Dec 2021 17:44:32 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: MSVC SSL test failure" }, { "msg_contents": "> On 5 Dec 2021, at 23:44, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> Can you show me the cirrus.yml file you're using to test with?\n\nI used the 0001 patch from this thread:\n\nhttps://www.postgresql.org/message-id/20211101055720.7mzwtkhzxmorpxth%40alap3.anarazel.de\n\n> A URL ref to the results would also help.\n\nI can't vouch for how helpful it is, but below is the build info that exists:\n\nhttps://cirrus-ci.com/task/5358152892809216\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 6 Dec 2021 00:02:43 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: MSVC SSL test failure" }, { "msg_contents": "\nOn 12/5/21 12:50, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I am getting this test failure 001_ssltests.pl on my test MSVC system\n>> when SSL tests are enabled:\n>> not ok 110 - certificate authorization fails with revoked client cert with server-side CRL directory: matches\n>> # Failed test 'certificate authorization fails with revoked client cert with server-side CRL directory: matches'\n>> # at t/001_ssltests.pl line 618.\n>> # 'psql: error: connection to server at \"127.0.0.1\", port 59491 failed: server closed the connection unexpectedly\n>> # This probably means the server terminated abnormally\n>> # before or while processing the request.\n>> # server closed the connection 
unexpectedly\n>> # This probably means the server terminated abnormally\n>> # before or while processing the request.\n>> # server closed the connection unexpectedly\n>> # This probably means the server terminated abnormally\n>> # before or while processing the request.'\n>> # doesn't match '(?^:SSL error: sslv3 alert certificate revoked)'\n>> There's nothing terribly suspicious in the server log, so I'm not quite\n>> sure what's going on here.\n> Hmm .. does enabling log_connections/log_disconnections produce\n> any useful info?\n\n\nNo, that's already turned on (this test is run using a standard\nbuildfarm client).\n\n\n>\n> This looks quite a bit like the sort of failure that commit\n> 6051857fc was meant to forestall. I wonder whether reverting\n> that commit changes the results? You might also try inserting\n> a shutdown() call, as we'd decided not to do [1].\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/1283317.1638213407%40sss.pgh.pa.us\n\n\nCommenting out the closesocket() worked.\n\n\nI will look at shutdown().\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 5 Dec 2021 21:30:17 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: MSVC SSL test failure" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 12/5/21 12:50, Tom Lane wrote:\n>> This looks quite a bit like the sort of failure that commit\n>> 6051857fc was meant to forestall. I wonder whether reverting\n>> that commit changes the results? You might also try inserting\n>> a shutdown() call, as we'd decided not to do [1].\n\n> Commenting out the closesocket() worked.\n\nMan, that's annoying. Apparently OpenSSL is doing something to\nscrew up the shutdown sequence. 
According to [1], the graceful\nshutdown sequence will happen by default, but changing SO_LINGER\nor SO_DONTLINGER can get you into abortive shutdown anyway.\nMaybe they change one of those settings (why?)\n\n\t\t\tregards, tom lane\n\n[1] https://docs.microsoft.com/en-us/windows/win32/winsock/graceful-shutdown-linger-options-and-socket-closure-2\n\n\n", "msg_date": "Mon, 06 Dec 2021 01:02:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MSVC SSL test failure" }, { "msg_contents": "> On 6 Dec 2021, at 07:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 12/5/21 12:50, Tom Lane wrote:\n>>> This looks quite a bit like the sort of failure that commit\n>>> 6051857fc was meant to forestall. I wonder whether reverting\n>>> that commit changes the results? You might also try inserting\n>>> a shutdown() call, as we'd decided not to do [1].\n> \n>> Commenting out the closesocket() worked.\n> \n> Man, that's annoying. Apparently OpenSSL is doing something to\n> screw up the shutdown sequence. According to [1], the graceful\n> shutdown sequence will happen by default, but changing SO_LINGER\n> or SO_DONTLINGER can get you into abortive shutdown anyway.\n> Maybe they change one of those settings (why?)\n\nAFAICT they don't touch either, and nothing really sticks out looking at\nsetsockopt calls in either 1.1.1 or 3.0.0.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 6 Dec 2021 14:38:23 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: MSVC SSL test failure" }, { "msg_contents": "\nOn 12/6/21 01:02, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 12/5/21 12:50, Tom Lane wrote:\n>>> This looks quite a bit like the sort of failure that commit\n>>> 6051857fc was meant to forestall. I wonder whether reverting\n>>> that commit changes the results? 
You might also try inserting\n>>> a shutdown() call, as we'd decided not to do [1].\n>> Commenting out the closesocket() worked.\n> Man, that's annoying. Apparently OpenSSL is doing something to\n> screw up the shutdown sequence. According to [1], the graceful\n> shutdown sequence will happen by default, but changing SO_LINGER\n> or SO_DONTLINGER can get you into abortive shutdown anyway.\n> Maybe they change one of those settings (why?)\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://docs.microsoft.com/en-us/windows/win32/winsock/graceful-shutdown-linger-options-and-socket-closure-2\n\n\n\nYeah, quite annoying, especially because only some combinations of MSVC\nruntime / openssl version seem to trigger the problem.\n\n\nAdding a shutdown() before the closesocket() also fixes the issue.\n\n\n<https://docs.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-shutdown>\nsays:\n\n\n To assure that all data is sent and received on a connected socket\n before it is closed, an application should use shutdown to close\n connection before calling closesocket.\n ...\n\n Note  The shutdown function does not block regardless of the\n SO_LINGER setting on the socket.\n\n\nSince we're not expecting anything back from the client I don't think we\nneed any of the recv calls the recipes there suggest.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 6 Dec 2021 09:56:58 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: MSVC SSL test failure" }, { "msg_contents": "> On 6 Dec 2021, at 15:56, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> Yeah, quite annoying, especially because only some combinations of MSVC\n> runtime / openssl version seem to trigger the problem.\n> \n> Adding a shutdown() before the closesocket() also fixes the issue.\n\nIf you have a patch you're testing I'm happy to run it through Cirrus in as\nmany combinations of MSVC and OpenSSL as I can muster 
there.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 6 Dec 2021 16:09:31 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: MSVC SSL test failure" }, { "msg_contents": "Hello Andrew,\n06.12.2021 17:56, Andrew Dunstan wrote:\n> Yeah, quite annoying, especially because only some combinations of MSVC\n> runtime / openssl version seem to trigger the problem.\n>\n>\n> Adding a shutdown() before the closesocket() also fixes the issue.\n>\nCan you confirm that adding shutdown(MyProcPort->sock, SD_BOTH) fixes\nthe issue for many test runs?\nI don't see the test passing stably here.\nWithout shutdown() the test failed on iterations 1, 5, 4, but with\nshutdown() it failed too, on iterations 3, 1, 4.\nWithout close() and shutdown() the test passes 20 iterations.\n(I'm still researching how openssl affects the shutdown sequence.)\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 6 Dec 2021 18:30:01 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MSVC SSL test failure" }, { "msg_contents": "\nOn 12/6/21 10:30, Alexander Lakhin wrote:\n> Hello Andrew,\n> 06.12.2021 17:56, Andrew Dunstan wrote:\n>> Yeah, quite annoying, especially because only some combinations of MSVC\n>> runtime / openssl version seem to trigger the problem.\n>>\n>>\n>> Adding a shutdown() before the closesocket() also fixes the issue.\n>>\n> Can you confirm that adding shutdown(MyProcPort->sock, SD_BOTH) fixes\n> the issue for many test runs?\n> I don't see the test passing stably here.\n> Without shutdown() the test failed on iterations 1, 5, 4, but with\n> shutdown() it failed too, on iterations 3, 1, 4.\n> Without close() and shutdown() the test passes 20 iterations.\n> (I'm still researching how openssl affects the shutdown sequence.)\n\n\n\n\nI have been getting 100% failures on the SSL tests with closesocket()\nalone, and 100% success over 10 tests with 
this:\n\n\ndiff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c\nindex 96ab37c7d0..5998c089b0 100644\n--- a/src/backend/libpq/pqcomm.c\n+++ b/src/backend/libpq/pqcomm.c\n@@ -295,6 +295,7 @@ socket_close(int code, Datum arg)\n         * Windows too.  But it's a lot more fragile than the other way.\n         */\n #ifdef WIN32\n+       shutdown(MyProcPort->sock, SD_SEND);\n        closesocket(MyProcPort->sock);\n #endif\n\n\nThat said, your results are quite worrying.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 6 Dec 2021 15:51:16 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: MSVC SSL test failure" }, { "msg_contents": "06.12.2021 23:51, Andrew Dunstan wrote:\n> I have been getting 100% failures on the SSL tests with closesocket()\n> alone, and 100% success over 10 tests with this:\n>\n>\n> diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c\n> index 96ab37c7d0..5998c089b0 100644\n> --- a/src/backend/libpq/pqcomm.c\n> +++ b/src/backend/libpq/pqcomm.c\n> @@ -295,6 +295,7 @@ socket_close(int code, Datum arg)\n>          * Windows too.  But it's a lot more fragile than the other way.\n>          */\n>  #ifdef WIN32\n> +       shutdown(MyProcPort->sock, SD_SEND);\n>         closesocket(MyProcPort->sock);\n>  #endif\n>\n>\n> That said, your results are quite worrying.\nMy next results are following:\nIt seems that the test failure rate may depend on the specs/environment.\nWith close-only version, having limited CPU usage for my Windows VM to\n20%, I've got failures on iterations 10, 2, 1.\nWith 100% CPU I've seen 20 successful runs, then fails on iterations 5,\n2. clean&buid and then failed iterations 11, 6, 3.  (So maybe caching is\nanother factor.)\n\nshutdown(MyProcPort->sock, SD_SEND) apparently fixes the issue, I've got\n83 successful runs, but then iteration 84 unfortunately failed:\nt/001_ssltests.pl .. 
106/110\n#   Failed test 'intermediate client certificate is missing: matches'\n#   at t/001_ssltests.pl line 608.\n#                   'psql: error: connection to server at \"127.0.0.1\",\nport 63187 failed: could not receive data from server: Software caused\nconnection abort (0x00002745/10053)\n# SSL SYSCALL error: Software caused connection abort (0x00002745/10053)\n# could not send startup packet: No error (0x00000000/0)'\n#     doesn't match '(?^:SSL error: tlsv1 alert unknown ca)'\n# Looks like you failed 1 test of 110.\nt/001_ssltests.pl .. Dubious, test returned 1 (wstat 256, 0x100)\nFailed 1/110 subtests\n        (less 2 skipped subtests: 107 okay)\n\nIt's not the one that we observed with the close-only fix, but it is still\nworrying. And then exactly this fail occurred again, on iteration 8.\n\nBut \"fortunately\" I've got the same fail as before:\nt/001_ssltests.pl .. 106/110\n#   Failed test 'certificate authorization fails with revoked client\ncert with server-side CRL directory: matches'\n#   at t/001_ssltests.pl line 618.\n#                   'psql: error: connection to server at \"127.0.0.1\",\nport 59220 failed: server closed the connection unexpectedly\n#       This probably means the server terminated abnormally\n#       before or while processing the request.\n# server closed the connection unexpectedly\n#       This probably means the server terminated abnormally\n#       before or while processing the request.\n# server closed the connection unexpectedly\n#       This probably means the server terminated abnormally\n#       before or while processing the request.'\n#     doesn't match '(?^:SSL error: sslv3 alert certificate revoked)'\n# Looks like you failed 1 test of 110.\nt/001_ssltests.pl .. 
Dubious, test returned 1 (wstat 256, 0x100)\nFailed 1/110 subtests\n        (less 2 skipped subtests: 107 okay)\non the 145th iteration of the test even without close() (I've tried to\ncheck whether the aforementioned fail existed before the fix).\n\nSo probably we found practical evidence of the importance of shutdown()\nthat we missed before, but it's not the end.\nThere was some test instability even without the close() fix and it\nremains with the shutdown(...SD_SEND).\n\nBy the way, while exploring openssl's behavior, I found that\nSSL_shutdown() has its own quirks (see [1], return value 0). Maybe now\nwe've encountered one of these.\n\nBest regards,\nAlexander\n\n[1] https://www.openssl.org/docs/man3.0/man3/SSL_shutdown.html\n\n\n", "msg_date": "Tue, 7 Dec 2021 10:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MSVC SSL test failure" }, { "msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> It seems that the test failure rate may depend on the specs/environment.\n\nNo surprise there, since the issue is almost surely timing-dependent.\n\n> shutdown(MyProcPort->sock, SD_SEND) apparently fixes the issue, I've got\n> 83 successful runs, but then iteration 84 unfortunately failed:\n> t/001_ssltests.pl .. 106/110\n> #   Failed test 'intermediate client certificate is missing: matches'\n> #   at t/001_ssltests.pl line 608.\n> #                   'psql: error: connection to server at \"127.0.0.1\",\n> port 63187 failed: could not receive data from server: Software caused\n> connection abort (0x00002745/10053)\n> # SSL SYSCALL error: Software caused connection abort (0x00002745/10053)\n> # could not send startup packet: No error (0x00000000/0)'\n> #     doesn't match '(?^:SSL error: tlsv1 alert unknown ca)'\n> # Looks like you failed 1 test of 110.\n\nHmm. 
I wonder whether using SD_BOTH behaves any differently.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Dec 2021 11:25:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MSVC SSL test failure" }, { "msg_contents": "Hello Tom,\n07.12.2021 19:25, Tom Lane wrote:\n> Hmm. I wonder whether using SD_BOTH behaves any differently. \nWith shutdown(MyProcPort->sock, SD_BOTH) the test failed for me on\niterations 1, 2, 3, 16 (just as without shutdown() at all).\nSo shutdown with the SD_SEND flag definitely behaves much better (I've\nseen over 200 successful iterations).\n \nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 7 Dec 2021 20:59:59 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MSVC SSL test failure" }, { "msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> 07.12.2021 19:25, Tom Lane wrote:\n>> Hmm. I wonder whether using SD_BOTH behaves any differently. \n\n> With shutdown(MyProcPort->sock, SD_BOTH) the test failed for me on\n> iterations 1, 2, 3, 16 (just as without shutdown() at all).\n> So shutdown with the SD_SEND flag definitely behaves much better (I've\n> seen over 200 successful iterations).\n\nFun. Well, I'll put in shutdown with SD_SEND for the moment,\nand we'll have to see whether we can improve further than that.\nIt does sound like we may be running into OpenSSL bugs/oddities,\nnot only kernel issues, so it may be impossible to do better\non our side.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Dec 2021 13:22:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: MSVC SSL test failure" }, { "msg_contents": "\nOn 12/7/21 13:22, Tom Lane wrote:\n> Alexander Lakhin <exclusion@gmail.com> writes:\n>> 07.12.2021 19:25, Tom Lane wrote:\n>>> Hmm. I wonder whether using SD_BOTH behaves any differently. 
\n>> With shutdown(MyProcPort->sock, SD_BOTH) the test failed for me on\n>> iterations 1, 2, 3, 16 (just as without shutdown() at all).\n>> So shutdown with the SD_SEND flag definitely behaves much better (I've\n>> seen over 200 successful iterations).\n> Fun. Well, I'll put in shutdown with SD_SEND for the moment,\n> and we'll have to see whether we can improve further than that.\n> It does sound like we may be running into OpenSSL bugs/oddities,\n> not only kernel issues, so it may be impossible to do better\n> on our side.\n\n\nYeah. My suspicion is that SD_BOTH is what closesocket() does if\nshutdown() hasn't been previously called.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 7 Dec 2021 14:26:55 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: MSVC SSL test failure" } ]
[ { "msg_contents": "I looked into the failure reported at [1]. Basically what's happening\nthere is that we're allowing a composite datum of type RECORD to get\nstored into a table, whereupon other backends can't make sense of it\nsince they lack the appropriate typcache entry. The reason the datum\nis marked as type RECORD is that ExecTypeSetColNames set things up\nthat way, after observing that the tuple descriptor obtained from\nthe current table definition didn't have the column names it thought\nit should have.\n\nNow, in the test case as given, ExecTypeSetColNames is in error to\nthink that, because it fails to account for the possibility that the\ntupdesc contains dropped columns that weren't dropped when the relevant\nRTE was made. However, if the test case is modified so that we just\nrename rather than drop some pre-existing column, then even with a\nfix for that problem ExecTypeSetColNames would do the wrong thing.\n\nI made several attempts to work around this problem, but I've now\nconcluded that changing the type OID in ExecTypeSetColNames is just\nfundamentally broken. It can't be okay to decide that a Var that\nclaims to emit type A actually emits type B, and especially not to\ndo so as late as executor startup. I speculated in the other thread\nthat we could do so during planning instead, but that turns out to\njust move the problems around. I think this must be so, because the\nwhole idea is bogus. For example, if we have a function that is\ndeclared to take type \"ct\", it can't be okay in general to pass it\ntype \"record\" instead. We've mistakenly thought that we could fuzz\nthis as long as the two types are physically compatible --- but how\ncan we think that the column names of a composite type aren't a\nbasic part of its definition?\n\nSo 0001 attached fixes this by revoking the decision to apply\nExecTypeSetColNames in cases where a Var or RowExpr is declared\nto return a named composite type. 
This causes a couple of regression\ntest results to change, but they are ones that were specifically\nadded to exercise this behavior that we now see to be invalid.\n(In passing, this also adjusts ExecEvalWholeRowVar to fail if the\nVar claims to be of a domain-over-composite. I'm not sure what\nI was thinking when I changed it to allow that; the case should\nnot arise, and if it did, it'd be questionable because we're not\nchecking domain constraints here.)\n\n0002 is some inessential mop-up that avoids storing useless column\nname lists in RowExprs where they won't be used.\n\nI'm not really thrilled about back-patching a behavioral change\nof this sort, but I don't see any good alternative. Corrupting\npeople's tables is not okay.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CAOFAq3BeawPiw9pc3bVGZ%3DRint2txWEBCeDC2wNAhtCZoo2ZqA%40mail.gmail.com", "msg_date": "Sun, 05 Dec 2021 13:45:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "ExecTypeSetColNames is fundamentally broken" }, { "msg_contents": "On Sun, Dec 5, 2021 at 1:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So 0001 attached fixes this by revoking the decision to apply\n> ExecTypeSetColNames in cases where a Var or RowExpr is declared\n> to return a named composite type. This causes a couple of regression\n> test results to change, but they are ones that were specifically\n> added to exercise this behavior that we now see to be invalid.\n\nI don't understand the code so I can't comment on the code, but I find\nthe regression test changes pretty suspect. Attaching any alias list\nto the RTE ought to rename the output columns for all purposes, not\njust the ones we as implementers find convenient. I understand that we\nhave to do *something* here and that the present behavior is buggy and\nunacceptable ... 
but I'm not sure I accept that the only possible fix\nis the one you propose here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Dec 2021 15:30:28 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ExecTypeSetColNames is fundamentally broken" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I don't understand the code so I can't comment on the code, but I find\n> the regression test changes pretty suspect. Attaching any alias list\n> to the RTE ought to rename the output columns for all purposes, not\n> just the ones we as implementers find convenient.\n\nWell, that was what I thought when I wrote bf7ca1587, but it leads\nto logical contradictions. Consider\n\ncreate table t (a int, b int);\n\ncreate function f(t) returns ... ;\n\nselect f(t) from t;\n\nselect f(t) from t(x,y);\n\nIf we adopt the \"rename for all purposes\" interpretation, then\nthe second SELECT must fail, because what f() is being passed is\nno longer of type t. If you ask me, that'll be a bigger problem\nfor users than the change I'm proposing (quite independently of\nhow hard it might be to implement). It certainly will break\na behavior that goes back much further than bf7ca1587.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Dec 2021 16:05:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: ExecTypeSetColNames is fundamentally broken" }, { "msg_contents": "On Mon, Dec 6, 2021 at 4:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Well, that was what I thought when I wrote bf7ca1587, but it leads\n> to logical contradictions. Consider\n>\n> create table t (a int, b int);\n>\n> create function f(t) returns ... ;\n>\n> select f(t) from t;\n>\n> select f(t) from t(x,y);\n>\n> If we adopt the \"rename for all purposes\" interpretation, then\n> the second SELECT must fail, because what f() is being passed is\n> no longer of type t. 
If you ask me, that'll be a bigger problem\n> for users than the change I'm proposing (quite independently of\n> how hard it might be to implement). It certainly will break\n> a behavior that goes back much further than bf7ca1587.\n\nFor me, the second SELECT does fail:\n\nrhaas=# select f(t) from t(x,y);\nERROR: column \"x\" does not exist\nLINE 1: select f(t) from t(x,y);\n ^\n\nIf it didn't fail, what would the behavior be? I suppose you could\nmake an argument for trying to match up the columns by position, but\nif so this ought to work:\n\nrhaas=# create table u(a int, b int);\nCREATE TABLE\nrhaas=# select f(u) from u;\nERROR: function f(u) does not exist\nrhaas=# select f(u::t) from u;\nERROR: cannot cast type u to t\n\nMatching columns by name can't work because the names don't match.\nMatching columns by position does not work. So if that example\nsucceeds, the only real explanation is that we magically know that\nwe've still got a value of type t despite the user's best attempt to\ndecree otherwise. I know PostgreSQL sometimes ... does things like\nthat. I have no idea why anyone would consider it a desirable\nbehavior, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Dec 2021 11:19:27 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ExecTypeSetColNames is fundamentally broken" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Dec 6, 2021 at 4:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> select f(t) from t(x,y);\n>> \n>> If we adopt the \"rename for all purposes\" interpretation, then\n>> the second SELECT must fail, because what f() is being passed is\n>> no longer of type t.\n\n> For me, the second SELECT does fail:\n\n> rhaas=# select f(t) from t(x,y);\n> ERROR: column \"x\" does not exist\n\nAh, sorry, I fat-fingered the alias syntax. 
Here's a tested example:\n\nregression=# create table t (a int, b int);\nCREATE TABLE\nregression=# insert into t values(11,12);\nINSERT 0 1\nregression=# create function f(t) returns int as 'select $1.a' language sql;\nCREATE FUNCTION\nregression=# select f(t) from t as t(x,y);\n f \n----\n 11\n(1 row)\n\nIf we consider that the alias renames the columns \"for all purposes\",\nhow is it okay for f() to select the \"a\" column?\n\nAnother way to phrase the issue is that the column names seen\nby f() are currently different from those seen by row_to_json():\n\nregression=# select row_to_json(t) from t as t(x,y);\n row_to_json \n-----------------\n {\"x\":11,\"y\":12}\n(1 row)\n\nand that seems hard to justify.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Dec 2021 12:30:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: ExecTypeSetColNames is fundamentally broken" }, { "msg_contents": "On Tue, Dec 7, 2021 at 12:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If we consider that the alias renames the columns \"for all purposes\",\n> how is it okay for f() to select the \"a\" column?\n\nI'd say it isn't.\n\n> Another way to phrase the issue is that the column names seen\n> by f() are currently different from those seen by row_to_json():\n>\n> regression=# select row_to_json(t) from t as t(x,y);\n> row_to_json\n> -----------------\n> {\"x\":11,\"y\":12}\n> (1 row)\n>\n> and that seems hard to justify.\n\nYeah, I agree. The problem I have here is that, with your proposed\nfix, it still won't be very consistent. True, row_to_json() and f()\nwill both see the unaliased column names ... but a straight select *\nfrom t as t(x,y) will show the aliased names. 
That's unarguably better\nthan corrupting your data, but it seems \"astonishing\" in the POLA\nsense.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Dec 2021 12:58:48 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ExecTypeSetColNames is fundamentally broken" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Dec 7, 2021 at 12:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> If we consider that the alias renames the columns \"for all purposes\",\n>> how is it okay for f() to select the \"a\" column?\n\n> I'd say it isn't.\n\nIn a green field I'd probably agree with you, but IMO that will\nbreak far too much existing SQL code.\n\nIt'd cause problems for us too, not only end-users. As an example,\nruleutils.c would have to avoid attaching new column aliases to\ntables that are referenced as whole-row Vars. I'm not very sure\nthat that's even possible without creating insurmountable ambiguity\nissues. There are also fun issues around what happens to a stored\nquery after a table column rename. 
Right now the query acts as\nthough it uses the old name as a column alias, and that introduces\nno semantic problem; but that behavior would no longer be\nacceptable.\n\nSo the alternatives I see are to revert what bf7ca1587 tried to do\nhere, or to try to make it work that way across-the-board, which\nimplies (a) a very much larger amount of work, and (b) breaking\nimportant behaviors that are decades older than that commit.\nIt's not even entirely clear that we could get to complete\nconsistency if we went down that path.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Dec 2021 13:19:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: ExecTypeSetColNames is fundamentally broken" }, { "msg_contents": "On Tue, Dec 7, 2021 at 1:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So the alternatives I see are to revert what bf7ca1587 tried to do\n> here, or to try to make it work that way across-the-board, which\n> implies (a) a very much larger amount of work, and (b) breaking\n> important behaviors that are decades older than that commit.\n> It's not even entirely clear that we could get to complete\n> consistency if we went down that path.\n\nContinuing my tour through the \"bug fixes\" section of the CommitFest,\nI came upon this thread. Unfortunately there's not that much I can do\nto progress it, because I've already expressed all the opinions that I\nhave on this thread. If we back-patch Tom's originally proposed fix, i\nexpect we might get a complaint or too, but the current behavior of\nbeing able to create unreadable tables is indisputably poor, and I'm\nnot in a position to tell Tom that he has to go write a different fix\ninstead, or even that such a fix is possible. 
Unless somebody else\nwants to comment, which IMHO would be good, I think it's up to Tom to\nmake a decision here on how he'd like to proceed and then, probably,\njust do it.\n\nAnyone else have thoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Mar 2022 16:23:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ExecTypeSetColNames is fundamentally broken" }, { "msg_contents": "Hi hackers,\n\n> Anyone else have thoughts?\n\nI came across this thread while looking for the patches that need review.\n\nMy understanding of the code is limited, but I can say that I don't\nsee anything particularly wrong with it. I can also confirm that it\nfixes the problem reported by the user while passing the rest of the\ntests.\n\nI understand the concern expressed by Robert in respect of backward\ncompatibility. From the user's perspective, personally I would prefer\na fixed bug over backward compatibility. Especially if we consider the\nfact that the current behaviour of cases like `select row_to_json(i)\nfrom int8_tbl i(x,y)` is not necessarily the correct one, at least it\ndoesn't look right to me.\n\nSo unless anyone has strong objections against the proposed fix or can\npropose a better patch, I would suggest merging it.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 16 Mar 2022 15:47:15 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: ExecTypeSetColNames is fundamentally broken" }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> I understand the concern expressed by Robert in respect of backward\n> compatibility. From the user's perspective, personally I would prefer\n> a fixed bug over backward compatibility. 
Especially if we consider the\n> fact that the current behaviour of cases like `select row_to_json(i)\n> from int8_tbl i(x,y)` is not necessarily the correct one, at least it\n> doesn't look right to me.\n\nIt's debatable anyway. I'd be less worried about changing this behavior\nif we didn't have to back-patch it ... but I see no good alternative.\n\n> So unless anyone has strong objections against the proposed fix or can\n> propose a better patch, I would suggest merging it.\n\nDone now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Mar 2022 18:28:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: ExecTypeSetColNames is fundamentally broken" } ]
[ { "msg_contents": "Hi,\n\nIt looks like the logical replication subscribers are receiving the\nquorum uncommitted transactions even before the synchronous (sync)\nstandbys. Most of the time it is okay, but it can be a problem if the\nprimary goes down/crashes (while the primary is in SyncRepWaitForLSN)\nbefore the quorum commit is achieved (i.e. before the sync standbys\nreceive the committed txns from the primary) and the failover is to\nhappen on to the sync standby. The subscriber would have received the\nquorum uncommitted txns whereas the sync standbys didn't. After the\nfailover, the new primary (the old sync standby) would be behind the\nsubscriber i.e. the subscriber will be seeing the data that the new\nprimary can't. Is there a way the subscriber can get back to be in\nsync with the new primary? In other words, can we reverse the effects\nof the quorum uncommitted txns on the subscriber? Naive way is to do\nit manually, but it doesn't seem to be elegant.\n\nWe have performed a small experiment to observe the above behaviour\nwith 1 primary, 1 sync standby and 1 subscriber:\n1) Have a wait loop in SyncRepWaitForLSN (a temporary hack to\nillustrate the standby receiving the txn a bit late or fail to\nreceive)\n2) Insert data into a table on the primary\n3) The primary waits i.e. the insert query hangs (because of the wait\nloop hack ()) before the local txn is sent to the sync standby,\nwhereas the subscriber receives the inserted data.\n4) If the primary crashes/goes down and unable to come up, if the\nfailover happens to sync standby (which didn't receive the data that\ngot inserted on the primary), the subscriber would see the data that\nthe sync standby can't.\n\nThis looks to be a problem. A possible solution is to let the\nsubscribers receive the txns only after the primary achieves quorum\ncommit (gets out of the SyncRepWaitForLSN or after all sync standbys\nreceived the txns). 
The logical replication walsenders can wait until\nthe quorum commit is obtained and then can send the WAL. A new GUC can\nbe introduced to control this, default being the current behaviour.\n\nThoughts?\n\nThanks Satya (cc-ed) for the use-case and off-list discussion.\n\nRegards,\nBharath Rupireddy.\n", "msg_date": "Mon, 6 Dec 2021 10:05:20 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Disallow quorum uncommitted (with synchronous standbys) txns in\n logical replication subscribers" }, { "msg_contents": "Consider a cluster formation where we have a Primary(P), Sync Replica(S1),\nand multiple async replicas for disaster recovery and read scaling (within\nthe region and outside the region). In this setup, S1 is the preferred\nfailover target in the event of a primary failure. When a transaction is\ncommitted on the primary, it is not acknowledged to the client until the\nprimary gets an acknowledgment from the sync standby that the WAL is\nflushed to the disk (assume synchronous_commit configuration is\nremote_flush). However, walsenders corresponding to the async replicas on the\nprimary don't wait for the flush acknowledgment from the primary and send\nthe WAL to the async standbys (also any logical replication/decoding\nclients). So it is possible for the async replicas and logical clients to be ahead\nof the sync replica. If a failover is initiated in such a scenario, to\nbring the formation into a healthy state we have to either\n\n   1. run the pg_rewind on the async replicas for them to reconnect with\n   the new primary or\n   2. collect the latest WAL across the replicas and feed the standby.\n\nBoth these operations are involved, error prone, and can cause multiple\nminutes of downtime if done manually. In addition, there is a window where\nthe async replicas can show the data that was neither acknowledged to the\nclient nor committed on standby. 
Logical clients if they are ahead may need\nto reseed the data as there is no easy rewind option for them.\n\nI would like to propose a GUC send_Wal_after_quorum_committed which when\nset to ON, walsenders corresponding to async standbys and logical replication\nworkers wait until the LSN is quorum committed on the primary before\nsending it to the standby. This not only simplifies the post failover steps\nbut avoids unnecessary downtime for the async replicas. Thoughts?\n\nThanks,\nSatya\n\n\n\n\nOn Sun, Dec 5, 2021 at 8:35 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> Hi,\n>\n> It looks like the logical replication subscribers are receiving the\n> quorum uncommitted transactions even before the synchronous (sync)\n> standbys. Most of the times it is okay, but it can be a problem if the\n> primary goes down/crashes (while the primary is in SyncRepWaitForLSN)\n> before the quorum commit is achieved (i.e. before the sync standbys\n> receive the committed txns from the primary) and the failover is to\n> happen on to the sync standby. The subscriber would have received the\n> quorum uncommitted txns whereas the sync standbys didn't. After the\n> failover, the new primary (the old sync standby) would be behind the\n> subscriber i.e. the subscriber will be seeing the data that the new\n> primary can't. Is there a way the subscriber can get back to be in\n> sync with the new primary? In other words, can we reverse the effects\n> of the quorum uncommitted txns on the subscriber? Naive way is to do\n> it manually, but it doesn't seem to be elegant.\n>\n> We have performed a small experiment to observe the above behaviour\n> with 1 primary, 1 sync standby and 1 subscriber:\n> 1) Have a wait loop in SyncRepWaitForLSN (a temporary hack to\n> illustrate the standby receiving the txn a bit late or fail to\n> receive)\n> 2) Insert data into a table on the primary\n> 3) The primary waits i.e. 
the insert query hangs (because of the wait\n> loop hack ()) before the local txn is sent to the sync standby,\n> whereas the subscriber receives the inserted data.\n> 4) If the primary crashes/goes down and unable to come up, if the\n> failover happens to sync standby (which didn't receive the data that\n> got inserted on tbe primary), the subscriber would see the data that\n> the sync standby can't.\n>\n> This looks to be a problem. A possible solution is to let the\n> subscribers receive the txns only after the primary achieves quorum\n> commit (gets out of the SyncRepWaitForLSN or after all sync standbys\n> received the txns). The logical replication walsenders can wait until\n> the quorum commit is obtained and then can send the WAL. A new GUC can\n> be introduced to control this, default being the current behaviour.\n>\n> Thoughts?\n>\n> Thanks Satya (cc-ed) for the use-case and off-list discussion.\n>\n> Regards,\n> Bharath Rupireddy.\n>\n", "msg_date": "Wed, 5 Jan 2022 23:59:32 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Disallow quorum uncommitted (with synchronous standbys) txns in\n logical replication subscribers" }, { "msg_contents": "On Wed, 2022-01-05 at 23:59 -0800, SATYANARAYANA NARLAPURAM wrote:\n> I would like to propose a GUC send_Wal_after_quorum_committed which\n> when set to ON, walsenders corresponds to async standbys and logical\n> replication workers wait until the LSN is quorum committed on the\n> primary before sending it to the standby. 
This not only simplifies\n> the post failover steps but avoids unnecessary downtime for the async\n> replicas. Thoughts?\n\nDo we need a GUC? Or should we just always require that sync rep is\nsatisfied before sending to async replicas?\n\nIt feels like the sync quorum should always be ahead of the async\nreplicas. Unless I'm missing a use case, or there is some kind of\nperformance gotcha.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 06 Jan 2022 23:24:40 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Disallow quorum uncommitted (with synchronous standbys) txns in\n logical replication subscribers" }, { "msg_contents": "On Thu, Jan 6, 2022 at 11:24 PM Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Wed, 2022-01-05 at 23:59 -0800, SATYANARAYANA NARLAPURAM wrote:\n> > I would like to propose a GUC send_Wal_after_quorum_committed which\n> > when set to ON, walsenders corresponds to async standbys and logical\n> > replication workers wait until the LSN is quorum committed on the\n> > primary before sending it to the standby. This not only simplifies\n> > the post failover steps but avoids unnecessary downtime for the async\n> > replicas. Thoughts?\n>\n> Do we need a GUC? Or should we just always require that sync rep is\n> satisfied before sending to async replicas?\n>\n\nI proposed a GUC to not introduce a behavior change by default. I have no\nstrong opinion on having a GUC or making the proposed behavior default,\nwould love to get others' perspectives as well.\n\n\n>\n> It feels like the sync quorum should always be ahead of the async\n> replicas. 
Unless I'm missing a use case, or there is some kind of\n> performance gotcha.\n>\n\nI couldn't think of a case that can cause serious performance issues but\nwill run some experiments on this and post the numbers.\n\n\n\n>\n> Regards,\n> Jeff Davis\n>\n>", "msg_date": "Thu, 6 Jan 2022 23:55:01 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Disallow quorum uncommitted (with synchronous standbys) txns in\n logical replication subscribers" }, { "msg_contents": "At Thu, 6 Jan 2022 23:55:01 -0800, SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> wrote in \n> On Thu, Jan 6, 2022 at 11:24 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> > On Wed, 2022-01-05 at 23:59 -0800, SATYANARAYANA NARLAPURAM wrote:\n> > > I would like to propose a GUC send_Wal_after_quorum_committed which\n> > > when set to ON, walsenders corresponds to async standbys and logical\n> > > replication workers wait until the LSN is quorum committed on the\n> > > primary before sending it to the standby. This not only simplifies\n> > > the post failover steps but avoids unnecessary downtime for the async\n> > > replicas. Thoughts?\n> >\n> > Do we need a GUC? Or should we just always require that sync rep is\n> > satisfied before sending to async replicas?\n> >\n> \n> I proposed a GUC to not introduce a behavior change by default. I have no\n> strong opinion on having a GUC or making the proposed behavior default,\n> would love to get others' perspectives as well.\n> \n> \n> >\n> > It feels like the sync quorum should always be ahead of the async\n> > replicas. Unless I'm missing a use case, or there is some kind of\n> > performance gotcha.\n> >\n> \n> I couldn't think of a case that can cause serious performance issues but\n> will run some experiments on this and post the numbers.\n\nI think Jeff is saying that \"quorum commit\" already by definition\nmeans that all out-of-quorum standbys are behind of the\nquorum-standbys. I agree to that in a dictionary sense. But I can\nthink of the case where the response from the top-runner standby\nvanishes or gets caught somewhere on network for some reason. 
In that\ncase the primary happily checks quorum ignoring the top-runner.\n\nTo avoid that misdecision, I can guess two possible \"solutions\".\n\nOne is to serialize WAL sending (of course it is unacceptable at all)\nor another is to send WAL to all standbys at once then make the\ndecision after making sure receiving replies from all standbys (this\nis no longer quorum commit in another sense..)\n\nSo I'm afraid that there's no sensible solution to avoid the\nhiding-forerunner problem on quorum commit.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 07 Jan 2022 17:27:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Disallow quorum uncommitted (with synchronous standbys) txns\n in logical replication subscribers" }, { "msg_contents": "On Fri, Jan 7, 2022 at 12:27 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Thu, 6 Jan 2022 23:55:01 -0800, SATYANARAYANA NARLAPURAM <\n> satyanarlapuram@gmail.com> wrote in\n> > On Thu, Jan 6, 2022 at 11:24 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > > On Wed, 2022-01-05 at 23:59 -0800, SATYANARAYANA NARLAPURAM wrote:\n> > > > I would like to propose a GUC send_Wal_after_quorum_committed which\n> > > > when set to ON, walsenders corresponds to async standbys and logical\n> > > > replication workers wait until the LSN is quorum committed on the\n> > > > primary before sending it to the standby. This not only simplifies\n> > > > the post failover steps but avoids unnecessary downtime for the async\n> > > > replicas. Thoughts?\n> > >\n> > > Do we need a GUC? Or should we just always require that sync rep is\n> > > satisfied before sending to async replicas?\n> > >\n> >\n> > I proposed a GUC to not introduce a behavior change by default. 
I have no\n> > strong opinion on having a GUC or making the proposed behavior default,\n> > would love to get others' perspectives as well.\n> >\n> >\n> > >\n> > > It feels like the sync quorum should always be ahead of the async\n> > > replicas. Unless I'm missing a use case, or there is some kind of\n> > > performance gotcha.\n> > >\n> >\n> > I couldn't think of a case that can cause serious performance issues but\n> > will run some experiments on this and post the numbers.\n>\n> I think Jeff is saying that \"quorum commit\" already by definition\n> means that all out-of-quorum standbys are behind of the\n> quorum-standbys. I agree to that in a dictionary sense. But I can\n> think of the case where the response from the top-runner standby\n> vanishes or gets caught somewhere on network for some reason. In that\n> case the primary happily checks quorum ignoring the top-runner.\n>\n> To avoid that misdecision, I can guess two possible \"solutions\".\n>\n> One is to serialize WAL sending (of course it is unacceptable at all)\n> or aotehr is to send WAL to all standbys at once then make the\n> decision after making sure receiving replies from all standbys (this\n> is no longer quorum commit in another sense..)\n>\n\nThere is no need to serialize sending the WAL among sync standbys. The only\nserialization required is first to all the sync replicas and then to the async\nreplicas if any. 
Once an LSN is quorum committed, no failover subsystem\ninitiates an automatic failover such that the LSN is lost (data loss)\n\n>\n> So I'm afraid that there's no sensible solution to avoid the\n> hiding-forerunner problem on quorum commit.\n>\n\nCould you elaborate on the problem here?\n\n\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>", "msg_date": "Fri, 7 Jan 2022 09:44:15 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Disallow quorum uncommitted (with synchronous standbys) txns in\n logical replication subscribers" }, { "msg_contents": "On 1/6/22, 11:25 PM, \"Jeff Davis\" <pgsql@j-davis.com> wrote:\r\n> On Wed, 2022-01-05 at 23:59 -0800, SATYANARAYANA NARLAPURAM wrote:\r\n>> I would like to propose a GUC send_Wal_after_quorum_committed which\r\n>> when set to ON, walsenders corresponds to async standbys and logical\r\n>> replication workers wait until the LSN is quorum committed on the\r\n>> primary before sending it to the standby. This not only simplifies\r\n>> the post failover steps but avoids unnecessary downtime for the async\r\n>> replicas. Thoughts?\r\n>\r\n> Do we need a GUC? 
Or should we just always require that sync rep is\r\n> satisfied before sending to async replicas?\r\n>\r\n> It feels like the sync quorum should always be ahead of the async\r\n> replicas. Unless I'm missing a use case, or there is some kind of\r\n> performance gotcha.\r\n\r\nI don't have a strong opinion on whether there needs to be a GUC, but\r\n+1 for the ability to enforce sync quorum before sending WAL to async\r\nstandbys. I think that would be a reasonable default behavior.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 7 Jan 2022 18:23:24 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Disallow quorum uncommitted (with synchronous standbys) txns in\n logical\n replication subscribers" }, { "msg_contents": "On Fri, Jan 7, 2022 at 12:54 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Wed, 2022-01-05 at 23:59 -0800, SATYANARAYANA NARLAPURAM wrote:\n> > I would like to propose a GUC send_Wal_after_quorum_committed which\n> > when set to ON, walsenders corresponds to async standbys and logical\n> > replication workers wait until the LSN is quorum committed on the\n> > primary before sending it to the standby. This not only simplifies\n> > the post failover steps but avoids unnecessary downtime for the async\n> > replicas. Thoughts?\n>\n> Do we need a GUC? Or should we just always require that sync rep is\n> satisfied before sending to async replicas?\n>\n> It feels like the sync quorum should always be ahead of the async\n> replicas. 
Unless I'm missing a use case, or there is some kind of\n> performance gotcha.\n\nIMO, having a GUC is a reasonable choice because some users might be\nokay with it if their async replicas are ahead of the sync ones, or\nthey may have dealt with this problem already in their HA solutions,\nor they don't want their async replicas to fall behind the primary\n(most of the time).\n\nIf there are long running txns on the primary and the async standbys\nwere to wait until quorum commit from sync standbys, won't they fall\nbehind the primary by too much? This isn't a problem at all if we\nthink from the perspective that async replicas are anyway prone to\nfalling behind the primary. But, if the primary is having long\nrunning txns continuously, the async replicas would eventually fall\nbehind more and more. Is there a way we can send the WAL records to\nboth sync and async replicas together but the async replicas won't\napply those WAL records until the primary tells the standbys that quorum\ncommit is obtained? 
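That send-eagerly-but-apply-lazily idea could be sketched like this on the standby side (illustrative Python, not an existing PostgreSQL mechanism; the function name and the idea of the primary reporting its quorum-committed LSN, e.g. in keepalive messages, are assumptions of the sketch):

```python
def apply_limit(received_lsn: int, quorum_committed_lsn: int) -> int:
    """WAL received from the primary is buffered on the async standby;
    only records up to the quorum-committed LSN that the primary reports
    are applied, and the rest are held back."""
    return min(received_lsn, quorum_committed_lsn)

# The async standby has received WAL up to LSN 1200, but the primary has
# only reported quorum commit up to 900: apply stops at 900.
assert apply_limit(1200, 900) == 900
# Once the quorum catches up past what was received, everything applies.
assert apply_limit(1200, 1500) == 1200
```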
If the quorum commit isn't obtained by the\nprimary, the async replicas can skip applying the WAL records and\ndiscard them.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sat, 8 Jan 2022 00:13:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Disallow quorum uncommitted (with synchronous standbys) txns in\n logical replication subscribers" }, { "msg_contents": "On Sat, 2022-01-08 at 00:13 +0530, Bharath Rupireddy wrote:\n> If there are long running txns on the primary and the async standbys\n> were to wait until quorum commit from sync standbys, won't they fall\n> behind the primary by too much?\n\nNo, because replication is based on LSNs, not transactions.\n\nWith the proposed change: an LSN can be replicated to all sync replicas\nas soon as it's durable on the primary; and an LSN can be replicated to\nall async replicas as soon as it's durable on the primary *and* the\nsync rep quorum is satisfied.\n\nRegards,\n\tJeff Davis", "msg_date": "Fri, 07 Jan 2022 11:50:28 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Disallow quorum uncommitted (with synchronous standbys) txns in\n logical replication subscribers" }, { "msg_contents": "Hi,\n\nOn 2022-01-06 23:24:40 -0800, Jeff Davis wrote:\n> It feels like the sync quorum should always be ahead of the async\n> replicas. Unless I'm missing a use case, or there is some kind of\n> performance gotcha.\n\nI don't see how it can *not* cause a major performance / latency\ngotcha. You're deliberately delaying replication after all?\n\nSynchronous replication doesn't guarantee *anything* about the ability to\nfail over to other replicas. 
Nor would it after what's proposed here -\nanother sync replica would still not be guaranteed to be able to follow the\nnewly promoted primary.\n\nTo me this just sounds like trying to shoehorn something into syncrep that\nit's not made for.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Jan 2022 12:22:36 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Disallow quorum uncommitted (with synchronous standbys) txns in\n logical replication subscribers" }, { "msg_contents": "On Fri, 2022-01-07 at 12:22 -0800, Andres Freund wrote:\n> I don't see how it can *not* cause a major performance / latency\n> gotcha. You're deliberately delaying replication after all?\n\nAre there use cases where someone wants sync rep, and also wants their\nread replicas to get ahead of the sync rep quorum?\n\nIf the use case doesn't exist, it doesn't make sense to talk about how\nwell it performs.\n\n> another sync replica would still not be guaranteed to be able to\n> follow the\n> newly promoted primary.\n\nIf you only promote the furthest-ahead sync replica (which is what you\nshould be doing if you have quorum commit), why wouldn't that work?\n\n> To me this just sounds like trying to shoehorn something into syncrep\n> that\n> it's not made for.\n\nWhat *is* sync rep made for?\n\nThe only justification in the docs is around durability:\n\n\"[sync rep] extends that standard level of durability offered by a\ntransaction commit... 
[sync rep] can provide a much higher level of\ndurability...\"\n\nIf we take that at face value, then it seems logical to say that async\nread replicas should not get ahead of sync replicas.\n\nRegards,\n\tJeff Davis", "msg_date": "Fri, 07 Jan 2022 14:36:46 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Disallow quorum uncommitted (with synchronous standbys) txns in\n logical replication subscribers" }, { "msg_contents": "Hi,\n\nOn 2022-01-07 14:36:46 -0800, Jeff Davis wrote:\n> On Fri, 2022-01-07 at 12:22 -0800, Andres Freund wrote:\n> > I don't see how it can *not* cause a major performance / latency\n> > gotcha. You're deliberately delaying replication after all?\n> \n> Are there use cases where someone wants sync rep, and also wants their\n> read replicas to get ahead of the sync rep quorum?\n\nYes. Not in the sense of being ahead of the sync replicas, but in the sense of\nbeing as caught up as possible, and to keep the lost WAL in case of crashes as\nlow as possible.\n\n\n> > another sync replica would still not be guaranteed to be able to\n> > follow the\n> > newly promoted primary.\n> \n> If you only promote the furthest-ahead sync replica (which is what you\n> should be doing if you have quorum commit), why wouldn't that work?\n\nRemove \"sync\" from the above sentence, and the sentence holds true for\ncombinations of sync/async replicas as well.\n\n\n> > To me this just sounds like trying to shoehorn something into syncrep\n> > that\n> > it's not made for.\n> \n> What *is* sync rep made for?\n> \n> The only justification in the docs is around durability:\n> \n> \"[sync rep] extends that standard level of durability offered by a\n> transaction commit... [sync rep] can provide a much higher level of\n> durability...\"\n\nWhat is being proposed here doesn't increase durability. 
It *reduces* it -\nit's less likely that WAL is replicated before a crash.\n\nThis is especially relevant in cases where synchronous_commit=on vs local is\nused selectively - after this change the durability of local changes is very\nsubstantially *reduced* because they have to wait for the sync replicas before\nalso being replicated to async replicas, but the COMMIT doesn't wait for\nreplication. So this \"feature\" just reduces the durability of such commits.\n\nThe performance overhead of syncrep is high enough that plenty of real-world\nusages cannot afford to use it for all transactions. And that's normally fine\nfrom a business logic POV - often the majority of changes aren't that\nimportant. It's non-trivial from an application implementation POV though, but\nthat's imo a separate concern.\n\n\n> If we take that at face value, then it seems logical to say that async\n> read replicas should not get ahead of sync replicas.\n\nI don't see that. This presumes that WAL replicated to async replicas is\nsomehow bad. 
But pg_rewind exists, async replicas can be promoted and WAL from\nthe async replicas can be transferred to the synchronous replicas if only\nthose should be promoted.\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 7 Jan 2022 14:54:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Disallow quorum uncommitted (with synchronous standbys) txns in\n logical replication subscribers" }, { "msg_contents": "On Fri, 2022-01-07 at 14:54 -0800, Andres Freund wrote:\n> > If you only promote the furthest-ahead sync replica (which is what\n> > you\n> > should be doing if you have quorum commit), why wouldn't that work?\n> \n> Remove \"sync\" from the above sentence, and the sentence holds true\n> for\n> combinations of sync/async replicas as well.\n\nTechnically that's true, but it seems like a bit of a strange use case.\nI would think people doing that would just include those async replicas\nin the sync quorum instead.\n\nThe main case I can think of for a mix of sync and async replicas is\nif they are just managed differently. 
For instance, the sync replica\nquorum is managed for a core part of the system, strategically\nallocated on good hardware in different locations to minimize the\nchance of dependent failures; while the async read replicas are\noptional for taking load off the primary, and may appear/disappear in\nwhatever location and on whatever hardware is most convenient.\n\nBut if an async replica can get ahead of the sync rep quorum, then the\nmost recent transactions can appear in query results, so that means the\nWAL shouldn't be lost, and the async read replicas become a part of the\ndurability model.\n\nIf the async read replica can't be promoted because it's not suitable\n(due to location, hardware, whatever), then you need to frantically\ncopy the final WAL records out to an instance in the sync rep quorum.\nThat requires extra ceremony for every failover, and might be dubious\ndepending on how safe the WAL on your async read replicas is, and\nwhether there are dependent failure risks.\n\nYeah, I guess there could be some use case woven amongst those caveats,\nbut I'm not sure if anyone is actually doing that combination of things\nsafely today. If someone is, it would be interesting to know more about\nthat use case.\n\nThe proposal in this thread is quite a bit simpler: manage your sync\nquorum and your async read replicas separately, and keep the sync rep\nquorum ahead.\n\n> > > To me this just sounds like trying to shoehorn something into\n> > > syncrep\n> > > that\n> > > it's not made for.\n> > \n> > What *is* sync rep made for?\n\nThis was a sincere question and an answer would be helpful. 
I think\nmany of the discussions about sync rep get derailed because people have\ndifferent ideas about when and how it should be used, and the\ndocumentation is pretty light.\n\n> This is a especially relevant in cases where synchronous_commit=on vs\n> local is\n> used selectively\n\nThat's an interesting point.\n\nHowever, it's hard for me to reason about \"kinda durable\" and \"a little\nmore durable\" and I'm not sure how many people would care about that\ndistinction.\n\n> I don't see that. This presumes that WAL replicated to async replicas\n> is\n> somehow bad.\n\nSimple case: primary and async read replica are in the same server\nrack. Sync replicas are geographically distributed with quorum commit.\nRead replica gets the WAL first (because it's closest), starts\nanswering queries that include that WAL, and then the entire rack\ncatches fire. Now you've returned results to the client, but lost the\ntransactions.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 07 Jan 2022 16:52:00 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Disallow quorum uncommitted (with synchronous standbys) txns in\n logical replication subscribers" }, { "msg_contents": "On Fri, Jan 7, 2022 at 4:52 PM Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Fri, 2022-01-07 at 14:54 -0800, Andres Freund wrote:\n> > > If you only promote the furthest-ahead sync replica (which is what\n> > > you\n> > > should be doing if you have quorum commit), why wouldn't that work?\n> >\n> > Remove \"sync\" from the above sentence, and the sentence holds true\n> > for\n> > combinations of sync/async replicas as well.\n>\n> Technically that's true, but it seems like a bit of a strange use case.\n> I would think people doing that would just include those async replicas\n> in the sync quorum instead.\n>\n> The main case I can think of for a mix of sync and async replicas are\n> if they are just managed differently. 
For instance, the sync replica\n> quorum is managed for a core part of the system, strategically\n> allocated on good hardware in different locations to minimize the\n> chance of dependent failures; while the async read replicas are\n> optional for taking load off the primary, and may appear/disappear in\n> whatever location and on whatever hardware is most convenient.\n>\n> But if an async replica can get ahead of the sync rep quorum, then the\n> most recent transactions can appear in query results, so that means the\n> WAL shouldn't be lost, and the async read replicas become a part of the\n> durability model.\n>\n> If the async read replica can't be promoted because it's not suitable\n> (due to location, hardware, whatever), then you need to frantically\n> copy the final WAL records out to an instance in the sync rep quorum.\n> That requires extra ceremony for every failover, and might be dubious\n> depending on how safe the WAL on your async read replicas is, and\n> whether there are dependent failure risks.\n>\n\nThis may not always be possible, as described in the scenario below.\n\n\n>\n> Yeah, I guess there could be some use case woven amongst those caveats,\n> but I'm not sure if anyone is actually doing that combination of things\n> safely today. If someone is, it would be interesting to know more about\n> that use case.\n>\n> The proposal in this thread is quite a bit simpler: manage your sync\n> quorum and your async read replicas separately, and keep the sync rep\n> quorum ahead.\n>\n> > > > To me this just sounds like trying to shoehorn something into\n> > > > syncrep\n> > > > that\n> > > > it's not made for.\n> > >\n> > > What *is* sync rep made for?\n>\n> This was a sincere question and an answer would be helpful. 
I think\n> many of the discussions about sync rep get derailed because people have\n> different ideas about when and how it should be used, and the\n> documentation is pretty light.\n>\n> > This is a especially relevant in cases where synchronous_commit=on vs\n> > local is\n> > used selectively\n>\n> That's an interesting point.\n>\n> However, it's hard for me to reason about \"kinda durable\" and \"a little\n> more durable\" and I'm not sure how many people would care about that\n> distinction.\n>\n> > I don't see that. This presumes that WAL replicated to async replicas\n> > is\n> > somehow bad.\n>\n> Simple case: primary and async read replica are in the same server\n> rack. Sync replicas are geographically distributed with quorum commit.\n> Read replica gets the WAL first (because it's closest), starts\n> answering queries that include that WAL, and then the entire rack\n> catches fire. Now you've returned results to the client, but lost the\n> transactions.\n>\n\nAnother similar example: in a multi-AZ HA setup, primary and sync\nreplicas are deployed in two different availability zones, the async\nreplicas for reads can be in any availability zone, and the async\nreplica may land in the same AZ as the primary. 
Primary availability zone going\ndown leads to both primary and async replica going down at the same time.\nThis async replica could be ahead of sync replica and WAL can't be\ncollected as both primary and async replica failed together.\n\n\n>\n> Regards,\n> Jeff Davis\n>\n>\n>", "msg_date": "Sat, 8 Jan 2022 15:28:13 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Disallow quorum uncommitted (with synchronous standbys) txns in\n logical replication subscribers" }, { "msg_contents": "At Fri, 7 Jan 2022 09:44:15 -0800, SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> wrote in \n> On Fri, Jan 7, 2022 at 12:27 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> > One is to serialize WAL sending (of course it is unacceptable at all)\n> > or aotehr is to send WAL to all standbys at once then make the\n> > decision after making sure receiving replies from all standbys (this\n> > is no longer quorum commit in another sense..)\n> >\n> \n> There is no need to serialize sending the WAL among sync standbys. The only\n> serialization required is first to all the sync replicas and then to sync\n> replicas if any. Once an LSN is quorum committed, no failover subsystem\n> initiates an automatic failover such that the LSN is lost (data loss)\n\nSync standbys on PostgreSQL is ex post facto. When a certain set of\nstandbys have first reported catching-up for a commit, they are called\n"sync standbys".\n\nWe can maintain a fixed set of sync standbys based on the set of\nsync-standbys at a past commits, but that implies performance\ndegradation even if not a single standby is gone.\n\nIf we send WAL only to the fixed-set of sync standbys, when any of the\nstandbys is gone, the primary is forced to wait until some timeout\nexpires. 
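To make the quorum arithmetic concrete: with an ANY k (...) quorum, the quorum-committed LSN is simply the k-th highest flush LSN reported so far, so whichever k standbys happen to respond first act as that commit's "sync standbys". A toy Python sketch of this (hypothetical helper, not PostgreSQL source):

```python
# Toy model of ANY-k quorum commit (not PostgreSQL source): the
# quorum-committed LSN is the k-th highest flush LSN reported by the
# candidate standbys, whichever standbys those happen to be.
def quorum_committed_lsn(reported_flush_lsns, k):
    """reported_flush_lsns: latest flush LSN acked by each candidate standby."""
    ranked = sorted(reported_flush_lsns, reverse=True)
    return ranked[k - 1]
```

For reported flush LSNs [100, 250, 180] and k = 2, the quorum-committed LSN is 180: the two fastest standbys formed the quorum for that commit, and a laggard that later overtakes them simply changes which standbys count as "sync" for the next commit.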
The same commit would finish immediately if WAL had been\nsent also to out-of-quorum standbys.\n\n> > So I'm afraid that there's no sensible solution to avoid the\n> > hiding-forerunner problem on quorum commit.\n> \n> Could you elaborate on the problem here?\n\nIf a primary have received response for LSN=X from N standbys, that\nfact doesn't guarantee that none of the other standbys reached the\nsame LSN. If one of the yet-unresponded standbys already reached\nLSN=X+10 but its response does not arrived to the primary for some\nreasons, the true-fastest standby is hiding from primary.\n\nEven if the primary examines the responses from all standbys, it is\nuncertain if the responses reflect the truly current state of the\nstandbys. Thus if we want to guarantee that no unresponded standby is\ngoing beyond LSN=X, there's no means other than we refrain from\nsending WAL beyond X. In that case, we need to serialize the period\nfrom WAL-sending to response-reception, which would lead to critical\nperformance degradation.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 11 Jan 2022 17:30:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Disallow quorum uncommitted (with synchronous standbys) txns\n in logical replication subscribers" }, { "msg_contents": "On Thu, Jan 6, 2022 at 1:29 PM SATYANARAYANA NARLAPURAM\n<satyanarlapuram@gmail.com> wrote:\n>\n> Consider a cluster formation where we have a Primary(P), Sync Replica(S1), and multiple async replicas for disaster recovery and read scaling (within the region and outside the region). In this setup, S1 is the preferred failover target in an event of the primary failure. When a transaction is committed on the primary, it is not acknowledged to the client until the primary gets an acknowledgment from the sync standby that the WAL is flushed to the disk (assume synchrnous_commit configuration is remote_flush). 
However, walsenders corresponds to the async replica on the primary don't wait for the flush acknowledgment from the primary and send the WAL to the async standbys (also any logical replication/decoding clients). So it is possible for the async replicas and logical client ahead of the sync replica. If a failover is initiated in such a scenario, to bring the formation into a healthy state we have to either\n>\n> run the pg_rewind on the async replicas for them to reconnect with the new primary or\n> collect the latest WAL across the replicas and feed the standby.\n>\n> Both these operations are involved, error prone, and can cause multiple minutes of downtime if done manually. In addition, there is a window where the async replicas can show the data that was neither acknowledged to the client nor committed on standby. Logical clients if they are ahead may need to reseed the data as no easy rewind option for them.\n>\n> I would like to propose a GUC send_Wal_after_quorum_committed which when set to ON, walsenders corresponds to async standbys and logical replication workers wait until the LSN is quorum committed on the primary before sending it to the standby. This not only simplifies the post failover steps but avoids unnecessary downtime for the async replicas. Thoughts?\n\nThanks Satya and others for the inputs. Here's the v1 patch that\nbasically allows async wal senders to wait until the sync standbys\nreport their flush lsn back to the primary. Please let me know your\nthoughts.\n\nI've done pgbench testing to see if the patch causes any problems. 
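(As an aside, to make the v1 behaviour concrete: each async walsender gates its send pointer on the flush LSN reported back by the sync standbys. Below is a toy Python simulation of that gating with hypothetical names; it is a sketch of the idea, not the patch itself.)

```python
import threading

# Toy simulation (hypothetical names, not the actual patch): an async
# walsender asks to send WAL up to send_rqst_ptr, but blocks until the
# sync standbys' reported flush LSN has caught up to that pointer.
class SyncRepGate:
    def __init__(self):
        self.sync_flush_lsn = 0          # analogue of walsndctl->lsn[SYNC_REP_WAIT_FLUSH]
        self.cond = threading.Condition()

    def report_sync_flush(self, lsn):
        # Called when a sync standby acknowledges flush up to `lsn`.
        with self.cond:
            self.sync_flush_lsn = max(self.sync_flush_lsn, lsn)
            self.cond.notify_all()

    def wait_to_send(self, send_rqst_ptr):
        # An async walsender blocks here until the WAL it wants to ship
        # is known to be synchronously replicated.
        with self.cond:
            self.cond.wait_for(lambda: self.sync_flush_lsn >= send_rqst_ptr)
            return send_rqst_ptr
```

Here report_sync_flush plays the role of the sync-rep machinery updating the flush LSN, and wait_to_send is the wait an async walsender would perform before shipping WAL.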
I\nran tests two times, there isn't much difference in the txns per\nseconds (tps), although there's a delay in the async standby receiving\nthe WAL, after all, that's the feature we are pursuing.\n\n[1]\nHEAD or WITHOUT PATCH:\n./pgbench -c 10 -t 500 -P 10 testdb\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 100\nquery mode: simple\nnumber of clients: 10\nnumber of threads: 1\nnumber of transactions per client: 500\nnumber of transactions actually processed: 5000/5000\nlatency average = 247.395 ms\nlatency stddev = 74.409 ms\ninitial connection time = 13.622 ms\ntps = 39.713114 (without initial connection time)\n\nPATCH:\n./pgbench -c 10 -t 500 -P 10 testdb\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 100\nquery mode: simple\nnumber of clients: 10\nnumber of threads: 1\nnumber of transactions per client: 500\nnumber of transactions actually processed: 5000/5000\nlatency average = 251.757 ms\nlatency stddev = 72.846 ms\ninitial connection time = 13.025 ms\ntps = 39.315862 (without initial connection time)\n\nTEST SETUP:\nprimary in region 1\nasync standby 1 in the same region as that of the primary region 1\ni.e. 
close to primary\nsync standby 1 in region 2\nsync standby 2 in region 3\nan archive location in a region different from the primary and\nstandbys regions, region 4\nNote that I intentionally kept sync standbys in regions far from\nprimary because it allows sync standbys to receive WAL a bit late by\ndefault, which works well for our testing.\n\nPGBENCH SETUP:\n./psql -d postgres -c \"drop database testdb\"\n./psql -d postgres -c \"create database testdb\"\n./pgbench -i -s 100 testdb\n./psql -d testdb -c \"\\dt\"\n./psql -d testdb -c \"SELECT pg_size_pretty(pg_database_size('testdb'))\"\n./pgbench -c 10 -t 500 -P 10 testdb\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 25 Feb 2022 20:31:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Allow async standbys wait for sync replication (was: Disallow quorum\n uncommitted (with synchronous standbys) txns in logical replication\n subscribers)" }, { "msg_contents": "On Fri, Feb 25, 2022 at 08:31:37PM +0530, Bharath Rupireddy wrote:\n> Thanks Satya and others for the inputs. Here's the v1 patch that\n> basically allows async wal senders to wait until the sync standbys\n> report their flush lsn back to the primary. Please let me know your\n> thoughts.\n\nI haven't had a chance to look too closely yet, but IIUC this adds a new\nfunction that waits for synchronous replication. This new function\nessentially spins until the synchronous LSN has advanced.\n\nI don't think it's a good idea to block sending any WAL like this. AFAICT\nit is possible that there will be a lot of synchronously replicated WAL\nthat we can send, and it might just be the last several bytes that cannot\nyet be replicated to the asynchronous standbys. I believe this patch will\ncause the server to avoid sending _any_ WAL until the synchronous LSN\nadvances.\n\nPerhaps we should instead just choose the SendRqstPtr based on the current\nsynchronous LSN. 
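A toy sketch of that alternative (hypothetical names, not actual PostgreSQL code): cap the request pointer at the currently-known synchronous LSN and send immediately, rather than blocking until the sync LSN catches up to the latest local flush pointer.

```python
# Sketch of the suggestion above (toy names, not PostgreSQL source):
# rather than blocking until the sync flush LSN reaches the latest local
# flush pointer, cap the send request pointer at what is already known
# to be synchronously replicated and ship that much right away.
def choose_send_rqst_ptr(local_flush_lsn, sync_flush_lsn, gate_on_sync_rep):
    if not gate_on_sync_rep:
        return local_flush_lsn               # classic async behaviour
    # Send as much as possible without ever getting ahead of sync standbys.
    return min(local_flush_lsn, sync_flush_lsn)
```

With a local flush LSN of 1000 and a sync-rep flush LSN of 800, the walsender would ship WAL up to 800 right away; only the trailing portion waits for synchronous replication.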
Presumably there are other things we'd need to consider,\nbut in general, I think we ought to send as much WAL as possible for a\ngiven call to XLogSendPhysical().\n\n> I've done pgbench testing to see if the patch causes any problems. I\n> ran tests two times, there isn't much difference in the txns per\n> seconds (tps), although there's a delay in the async standby receiving\n> the WAL, after all, that's the feature we are pursuing.\n\nI'm curious what a longer pgbench run looks like when the synchronous\nreplicas are in the same region. That is probably a more realistic\nuse-case.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 25 Feb 2022 11:38:19 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow async standbys wait for sync replication (was: Disallow\n quorum uncommitted (with synchronous standbys) txns in logical replication\n subscribers)" }, { "msg_contents": "Hello,\n\nOn 2/25/22 11:38 AM, Nathan Bossart wrote:\n\n> On Fri, Feb 25, 2022 at 08:31:37PM +0530, Bharath Rupireddy wrote:\n>> Thanks Satya and others for the inputs. Here's the v1 patch that\n>> basically allows async wal senders to wait until the sync standbys\n>> report their flush lsn back to the primary. Please let me know your\n>> thoughts.\n> I haven't had a chance to look too closely yet, but IIUC this adds a new\n> function that waits for synchronous replication. This new function\n> essentially spins until the synchronous LSN has advanced.\n>\n> I don't think it's a good idea to block sending any WAL like this. 
AFAICT\n> it is possible that there will be a lot of synchronously replicated WAL\n> that we can send, and it might just be the last several bytes that cannot\n> yet be replicated to the asynchronous standbys. І believe this patch will\n> cause the server to avoid sending _any_ WAL until the synchronous LSN\n> advances.\n>\n> Perhaps we should instead just choose the SendRqstPtr based on the current\n> synchronous LSN. Presumably there are other things we'd need to consider,\n> but in general, I think we ought to send as much WAL as possible for a\n> given call to XLogSendPhysical().\n\nI think you're right that we'll avoid sending any WAL until sync_lsn \nadvances. We could setup a contrived situation where the async-walsender \nnever advances because it terminates before the flush_lsn of the \nsynchronous_node catches up. And when the async-walsender restarts, \nit'll start with the latest flushed on the primary and we could go into \na perpetual loop.\n\nI took a look at the patch and tested basic streaming with async \nreplicas ahead of the synchronous standby and with logical clients as \nwell and it works as expected.\n\n >\n > ereport(LOG,\n >            (errmsg(\"async standby WAL sender with request LSN %X/%X \nis waiting as sync standbys are ahead with flush LSN %X/%X\",\n >                    LSN_FORMAT_ARGS(flushLSN), \nLSN_FORMAT_ARGS(sendRqstPtr)),\n >             errhidestmt(true)));\n\nI think this log formatting is incorrect.\ns/sync standbys are ahead/sync standbys are behind/ and I think you need \nto swap flushLsn and sendRqstPtr\n\nWhen a walsender is waiting for the lsn on the synchronous replica to \nadvance and a database stop is issued to the writer, the pg_ctl stop \nisn't able to proceed and the database seems to never shutdown.\n\n > Assert(priority >= 0);\n\nWhat's the point of the assert here?\n\nAlso the comments/code refer to AsyncStandbys, however it's also used \nfor logical clients, which may or may not be standbys. 
Don't feel too \nstrongly about the naming here but something to note.\n\n\n > if (!ShouldWaitForSyncRepl())\n >        return;\n > ...\n > for (;;)\n > {\n >    // rest of work\n > }\n\nIf we had a walsender already waiting for an ack, and the conditions of \nShouldWaitForSyncRepl() change, such as disabling \nasync_standbys_wait_for_sync_replication or synchronous replication \nit'll still wait since we never re-check the condition.\n\npostgres=# select wait_event from pg_stat_activity where wait_event like \n'AsyncWal%';\n               wait_event\n--------------------------------------\n  AsyncWalSenderWaitForSyncReplication\n  AsyncWalSenderWaitForSyncReplication\n  AsyncWalSenderWaitForSyncReplication\n(3 rows)\n\npostgres=# show synchronous_standby_names;\n  synchronous_standby_names\n---------------------------\n\n(1 row)\n\npostgres=# show async_standbys_wait_for_sync_replication;\n  async_standbys_wait_for_sync_replication\n------------------------------------------\n  off\n(1 row)\n\n >    LWLockAcquire(SyncRepLock, LW_SHARED);\n >    flushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH];\n >    LWLockRelease(SyncRepLock);\n\nShould we configure this similar to the user's setting of \nsynchronous_commit instead of just flush? (SYNC_REP_WAIT_WRITE, \nSYNC_REP_WAIT_APPLY)\n\nThanks,\n\nJohn H\n\n\n", "msg_date": "Fri, 25 Feb 2022 13:52:03 -0800", "msg_from": "\"Hsu, John\" <hsuchen@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Allow async standbys wait for sync replication (was: Disallow\n quorum\n uncommitted (with synchronous standbys) txns in logical replication\n subscribers)" }, { "msg_contents": "On Sat, Feb 26, 2022 at 1:08 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Fri, Feb 25, 2022 at 08:31:37PM +0530, Bharath Rupireddy wrote:\n> > Thanks Satya and others for the inputs. Here's the v1 patch that\n> > basically allows async wal senders to wait until the sync standbys\n> > report their flush lsn back to the primary. 
Please let me know your\n> > thoughts.\n>\n> I haven't had a chance to look too closely yet, but IIUC this adds a new\n> function that waits for synchronous replication. This new function\n> essentially spins until the synchronous LSN has advanced.\n>\n> I don't think it's a good idea to block sending any WAL like this. AFAICT\n> it is possible that there will be a lot of synchronously replicated WAL\n> that we can send, and it might just be the last several bytes that cannot\n> yet be replicated to the asynchronous standbys. І believe this patch will\n> cause the server to avoid sending _any_ WAL until the synchronous LSN\n> advances.\n>\n> Perhaps we should instead just choose the SendRqstPtr based on the current\n> synchronous LSN. Presumably there are other things we'd need to consider,\n> but in general, I think we ought to send as much WAL as possible for a\n> given call to XLogSendPhysical().\n\nA global min LSN of SendRqstPtr of all the sync standbys can be\ncalculated and the async standbys can send WAL up to global min LSN.\nThis is unlike what the v1 patch does i.e. async standbys will wait\nuntil the sync standbys report flush LSN back to the primary. Problem\nwith the global min LSN approach is that there can still be a small\nwindow where async standbys can get ahead of sync standbys. Imagine\nasync standbys being closer to the primary than sync standbys and if\nthe failover has to happen while the WAL at SendRqstPtr isn't received\nby the sync standbys, but the async standbys can receive them as they\nare closer. We hit the same problem that we are trying to solve with\nthis patch. This is the reason, we are waiting till the sync flush LSN\nas it guarantees more transactional protection.\n\nDo you think allowing async standbys optionally wait for either remote\nwrite or flush or apply or global min LSN of SendRqstPtr so that users\ncan choose what they want?\n\n> > I've done pgbench testing to see if the patch causes any problems. 
I\n> > ran tests two times, there isn't much difference in the txns per\n> > seconds (tps), although there's a delay in the async standby receiving\n> > the WAL, after all, that's the feature we are pursuing.\n>\n> I'm curious what a longer pgbench run looks like when the synchronous\n> replicas are in the same region. That is probably a more realistic\n> use-case.\n\nWe are performing more tests, I will share the results once done.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 26 Feb 2022 14:17:50 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow async standbys wait for sync replication (was: Disallow\n quorum uncommitted (with synchronous standbys) txns in logical replication\n subscribers)" }, { "msg_contents": "On Sat, Feb 26, 2022 at 3:22 AM Hsu, John <hsuchen@amazon.com> wrote:\n>\n> > On Fri, Feb 25, 2022 at 08:31:37PM +0530, Bharath Rupireddy wrote:\n> >> Thanks Satya and others for the inputs. Here's the v1 patch that\n> >> basically allows async wal senders to wait until the sync standbys\n> >> report their flush lsn back to the primary. Please let me know your\n> >> thoughts.\n> > I haven't had a chance to look too closely yet, but IIUC this adds a new\n> > function that waits for synchronous replication. This new function\n> > essentially spins until the synchronous LSN has advanced.\n> >\n> > I don't think it's a good idea to block sending any WAL like this. AFAICT\n> > it is possible that there will be a lot of synchronously replicated WAL\n> > that we can send, and it might just be the last several bytes that cannot\n> > yet be replicated to the asynchronous standbys. І believe this patch will\n> > cause the server to avoid sending _any_ WAL until the synchronous LSN\n> > advances.\n> >\n> > Perhaps we should instead just choose the SendRqstPtr based on the current\n> > synchronous LSN. 
Presumably there are other things we'd need to consider,\n> > but in general, I think we ought to send as much WAL as possible for a\n> > given call to XLogSendPhysical().\n>\n> I think you're right that we'll avoid sending any WAL until sync_lsn\n> advances. We could setup a contrived situation where the async-walsender\n> never advances because it terminates before the flush_lsn of the\n> synchronous_node catches up. And when the async-walsender restarts,\n> it'll start with the latest flushed on the primary and we could go into\n> a perpetual loop.\n\nThe async walsender looks at flush LSN from\nwalsndctl->lsn[SYNC_REP_WAIT_FLUSH]; after it comes up and decides to\nsend the WAL up to it. If there are no sync replicats after it comes\nup (users can make sync standbys async without postmaster restart\nbecause synchronous_standby_names is effective with SIGHUP), then it\ndoesn't wait at all and continues to send WAL. I don't see any problem\nwith it. Am I missing something here?\n\n> I took a look at the patch and tested basic streaming with async\n> replicas ahead of the synchronous standby and with logical clients as\n> well and it works as expected.\n\nThanks for reviewing and testing the patch.\n\n> > ereport(LOG,\n> > (errmsg(\"async standby WAL sender with request LSN %X/%X\n> is waiting as sync standbys are ahead with flush LSN %X/%X\",\n> > LSN_FORMAT_ARGS(flushLSN),\n> LSN_FORMAT_ARGS(sendRqstPtr)),\n> > errhidestmt(true)));\n>\n> I think this log formatting is incorrect.\n> s/sync standbys are ahead/sync standbys are behind/ and I think you need\n> to swap flushLsn and sendRqstPtr\n\nI will correct it. \"async standby WAL sender with request LSN %X/%X is\nwaiting as sync standbys are ahead with flush LSN %X/%X\",\nLSN_FORMAT_ARGS(sendRqstP), LSN_FORMAT_ARGS(flushLSN). 
I will think\nmore about having better wording of these messages, any suggestions\nhere?\n\n> When a walsender is waiting for the lsn on the synchronous replica to\n> advance and a database stop is issued to the writer, the pg_ctl stop\n> isn't able to proceed and the database seems to never shutdown.\n\nI too observed this once or twice. It looks like the walsender isn't\ndetecting postmaster death in for (;;) with WalSndWait. Not sure if\nthis is expected or true with other wait-loops in walsender code. Any\nmore thoughts here?\n\n> > Assert(priority >= 0);\n>\n> What's the point of the assert here?\n\nJust for safety. I can remove it as the sync_standby_priority can\nnever be negative.\n\n> Also the comments/code refer to AsyncStandbys, however it's also used\n> for logical clients, which may or may not be standbys. Don't feel too\n> strongly about the naming here but something to note.\n\nI will try to be more informative by adding something like \"async\nstandbys and logical replication subscribers\".\n\n> > if (!ShouldWaitForSyncRepl())\n> > return;\n> > ...\n> > for (;;)\n> > {\n> > // rest of work\n> > }\n>\n> If we had a walsender already waiting for an ack, and the conditions of\n> ShouldWaitForSyncRepl() change, such as disabling\n> async_standbys_wait_for_sync_replication or synchronous replication\n> it'll still wait since we never re-check the condition.\n\nYeah, I will add the checks inside the async walsender wait-loop.\n\n> postgres=# select wait_event from pg_stat_activity where wait_event like\n> 'AsyncWal%';\n> wait_event\n> --------------------------------------\n> AsyncWalSenderWaitForSyncReplication\n> AsyncWalSenderWaitForSyncReplication\n> AsyncWalSenderWaitForSyncReplication\n> (3 rows)\n>\n> postgres=# show synchronous_standby_names;\n> synchronous_standby_names\n> ---------------------------\n>\n> (1 row)\n>\n> postgres=# show async_standbys_wait_for_sync_replication;\n> async_standbys_wait_for_sync_replication\n> 
------------------------------------------\n> off\n> (1 row)\n>\n> > LWLockAcquire(SyncRepLock, LW_SHARED);\n> > flushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH];\n> > LWLockRelease(SyncRepLock);\n>\n> Should we configure this similar to the user's setting of\n> synchronous_commit instead of just flush? (SYNC_REP_WAIT_WRITE,\n> SYNC_REP_WAIT_APPLY)\n\nAs I said upthread, we can allow async standbys optionally wait for\neither remote write or flush or apply or global min LSN of SendRqstPtr\nso that users can choose what they want. I'm open to more thoughts\nhere.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 26 Feb 2022 14:46:56 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow async standbys wait for sync replication (was: Disallow\n quorum uncommitted (with synchronous standbys) txns in logical replication\n subscribers)" }, { "msg_contents": "On Sat, Feb 26, 2022 at 02:17:50PM +0530, Bharath Rupireddy wrote:\n> A global min LSN of SendRqstPtr of all the sync standbys can be\n> calculated and the async standbys can send WAL up to global min LSN.\n> This is unlike what the v1 patch does i.e. async standbys will wait\n> until the sync standbys report flush LSN back to the primary. Problem\n> with the global min LSN approach is that there can still be a small\n> window where async standbys can get ahead of sync standbys. Imagine\n> async standbys being closer to the primary than sync standbys and if\n> the failover has to happen while the WAL at SendRqstPtr isn't received\n> by the sync standbys, but the async standbys can receive them as they\n> are closer. We hit the same problem that we are trying to solve with\n> this patch. This is the reason, we are waiting till the sync flush LSN\n> as it guarantees more transactional protection.\n\nDo you mean that the application of WAL gets ahead on your async standbys\nor that the writing/flushing of WAL gets ahead? 
If synchronous_commit is\nset to 'remote_write' or 'on', I think either approach can lead to\nsituations where the async standbys are ahead of the sync standbys with WAL\napplication. For example, a conflict between WAL replay and a query on\nyour sync standby could delay WAL replay, but the primary will not wait for\nthis conflict to resolve before considering a transaction synchronously\nreplicated and sending it to the async standbys.\n\nIf writing/flushing WAL gets ahead on async standbys, I think something is\nwrong with the patch. If you aren't sending WAL to async standbys until\nit is synchronously replicated to the sync standbys, it should by\ndefinition be impossible for this to happen.\n\nIf you wanted to make sure that WAL was not applied to async standbys\nbefore it was applied to sync standbys, I think you'd need to set\nsynchronous_commit to 'remote_apply'. This would ensure that the WAL is\nreplayed on sync standbys before the primary considers the transaction\nsynchronously replicated and sends it to the async standbys.\n\n> Do you think allowing async standbys optionally wait for either remote\n> write or flush or apply or global min LSN of SendRqstPtr so that users\n> can choose what they want?\n\nI'm not sure I follow the difference between \"global min LSN of\nSendRqstPtr\" and remote write/flush/apply. IIUC you are saying that we\ncould use the LSN of what is being sent to sync standbys instead of the LSN\nof what the primary considers synchronously replicated. I don't think we\nshould do that because it provides no guarantee that the WAL has even been\nsent to the sync standbys before it is sent to the async standbys. For\nthis feature, I think we always need to consider what the primary considers\nsynchronously replicated. My suggested approach doesn't change that. 
I'm\nsaying that instead of spinning in a loop waiting for the WAL to be\nsynchronously replicated, we just immediately send WAL up to the LSN that\nis presently known to be synchronously replicated.\n\nYou do bring up an interesting point, though. Is there a use-case for\nspecifying synchronous_commit='on' but not sending WAL to async replicas\nuntil it is synchronously applied? Or alternatively, would anyone want to\nset synchronous_commit='remote_apply' but send WAL to async standbys as\nsoon as it is written to the sync standbys? My initial reaction is that we\nshould depend on the synchronous replication setup. As long as the primary\nconsiders an LSN synchronously replicated, it would be okay to send it to\nthe async standbys. I personally don't think it is worth taking on the\nextra complexity for that level of configuration just yet.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 26 Feb 2022 08:07:11 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow async standbys wait for sync replication (was: Disallow\n quorum uncommitted (with synchronous standbys) txns in logical replication\n subscribers)" }, { "msg_contents": "On Sat, Feb 26, 2022 at 9:37 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Sat, Feb 26, 2022 at 02:17:50PM +0530, Bharath Rupireddy wrote:\n> > A global min LSN of SendRqstPtr of all the sync standbys can be\n> > calculated and the async standbys can send WAL up to global min LSN.\n> > This is unlike what the v1 patch does i.e. async standbys will wait\n> > until the sync standbys report flush LSN back to the primary. Problem\n> > with the global min LSN approach is that there can still be a small\n> > window where async standbys can get ahead of sync standbys. 
Imagine\n> > async standbys being closer to the primary than sync standbys and if\n> > the failover has to happen while the WAL at SendRqstPtr isn't received\n> > by the sync standbys, but the async standbys can receive them as they\n> > are closer. We hit the same problem that we are trying to solve with\n> > this patch. This is the reason, we are waiting till the sync flush LSN\n> > as it guarantees more transactional protection.\n>\n> Do you mean that the application of WAL gets ahead on your async standbys\n> or that the writing/flushing of WAL gets ahead? If synchronous_commit is\n> set to 'remote_write' or 'on', I think either approach can lead to\n> situations where the async standbys are ahead of the sync standbys with WAL\n> application. For example, a conflict between WAL replay and a query on\n> your sync standby could delay WAL replay, but the primary will not wait for\n> this conflict to resolve before considering a transaction synchronously\n> replicated and sending it to the async standbys.\n>\n> If writing/flushing WAL gets ahead on async standbys, I think something is\n> wrong with the patch. If you aren't sending WAL to async standbys until\n> it is synchronously replicated to the sync standbys, it should by\n> definition be impossible for this to happen.\n\nWith the v1 patch [1], the async standbys will never get WAL ahead of\nsync standbys. That is guaranteed because the walsenders serving async\nstandbys are allowed to send WAL only after the walsenders serving\nsync standbys receive the synchronous flush LSN.\n\n> > Do you think allowing async standbys optionally wait for either remote\n> > write or flush or apply or global min LSN of SendRqstPtr so that users\n> > can choose what they want?\n>\n> I'm not sure I follow the difference between \"global min LSN of\n> SendRqstPtr\" and remote write/flush/apply. 
IIUC you are saying that we\n> could use the LSN of what is being sent to sync standbys instead of the LSN\n> of what the primary considers synchronously replicated. I don't think we\n> should do that because it provides no guarantee that the WAL has even been\n> sent to the sync standbys before it is sent to the async standbys.\n\nCorrect.\n\n> For\n> this feature, I think we always need to consider what the primary considers\n> synchronously replicated. My suggested approach doesn't change that. I'm\n> saying that instead of spinning in a loop waiting for the WAL to be\n> synchronously replicated, we just immediately send WAL up to the LSN that\n> is presently known to be synchronously replicated.\n\nAs I said above, v1 patch does that i.e. async standbys wait until the\nsync standbys update their flush LSN.\n\nFlush LSN is this - flushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH];\nwhich gets updated in SyncRepReleaseWaiters.\n\nAsync standbys with their SendRqstPtr will wait in XLogSendPhysical or\nXLogSendLogical until SendRqstPtr <= flushLSN.\n\nI will address review comments raised by Hsu, John and send the\nupdated patch for further review. Thanks.\n\n[1] https://www.postgresql.org/message-id/CALj2ACVUa8WddVDS20QmVKNwTbeOQqy4zy59NPzh8NnLipYZGw%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 28 Feb 2022 18:45:51 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow async standbys wait for sync replication (was: Disallow\n quorum uncommitted (with synchronous standbys) txns in logical replication\n subscribers)" }, { "msg_contents": "On Mon, Feb 28, 2022 at 06:45:51PM +0530, Bharath Rupireddy wrote:\n> On Sat, Feb 26, 2022 at 9:37 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> For\n>> this feature, I think we always need to consider what the primary considers\n>> synchronously replicated. My suggested approach doesn't change that. 
I'm\n>> saying that instead of spinning in a loop waiting for the WAL to be\n>> synchronously replicated, we just immediately send WAL up to the LSN that\n>> is presently known to be synchronously replicated.\n> \n> As I said above, v1 patch does that i.e. async standbys wait until the\n> sync standbys update their flush LSN.\n> \n> Flush LSN is this - flushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH];\n> which gets updated in SyncRepReleaseWaiters.\n> \n> Async standbys with their SendRqstPtr will wait in XLogSendPhysical or\n> XLogSendLogical until SendRqstPtr <= flushLSN.\n\nMy feedback is specifically about this behavior. I don't think we should\nspin in XLogSend*() waiting for an LSN to be synchronously replicated. I\nthink we should just choose the SendRqstPtr based on what is currently\nsynchronously replicated.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 28 Feb 2022 10:57:32 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow async standbys wait for sync replication (was: Disallow\n quorum uncommitted (with synchronous standbys) txns in logical replication\n subscribers)" }, { "msg_contents": "> The async walsender looks at flush LSN from\n> walsndctl->lsn[SYNC_REP_WAIT_FLUSH]; after it comes up and decides to\n> send the WAL up to it. If there are no sync replicats after it comes\n> up (users can make sync standbys async without postmaster restart\n> because synchronous_standby_names is effective with SIGHUP), then it\n> doesn't wait at all and continues to send WAL. I don't see any problem\n> with it. Am I missing something here?\n\nAssuming I understand the code correctly, we have:\n\n> SendRqstPtr = GetFlushRecPtr(NULL);\n\nIn this contrived example let's say\nwalsndctl->lsn[SYNC_REP_WAIT_FLUSH] is always 60s behind\nGetFlushRecPtr() and for whatever reason, if the walsender hasn't\nreplicated anything in 30s it'll terminate and re-connect. 
If GetFlushRecPtr() keeps advancing and is always 60s ahead of the sync\nLSN's then we would never stream anything, even though it's advanced\npast what is safe to stream previously.\n\n> I will correct it. \"async standby WAL sender with request LSN %X/%X is\n> waiting as sync standbys are ahead with flush LSN %X/%X\",\n> LSN_FORMAT_ARGS(sendRqstP), LSN_FORMAT_ARGS(flushLSN). I will think\n> more about having better wording of these messages, any suggestions\n> here?\n\n\"async standby WAL sender with request LSN %X/%X is\nwaiting for sync standbys at LSN %X/%X to advance past it\"\nNot sure if that's really clearer...\n\n> I too observed this once or twice. It looks like the walsender isn't\n> detecting postmaster death in for (;;) with WalSndWait. Not sure if\n> this is expected or true with other wait-loops in walsender code. Any\n> more thoughts here?\n\nUnfortunately I haven't had a chance to dig into it more although iirc I hit it fairly often.\n\nThanks,\nJohn H\n\n\n", "msg_date": "Mon, 28 Feb 2022 18:04:59 -0800", "msg_from": "\"Hsu, John\" <hsuchen@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Allow async standbys wait for sync replication (was: Disallow\n quorum\n uncommitted (with synchronous standbys) txns in logical replication\n subscribers)" }, { "msg_contents": "On Tue, Mar 1, 2022 at 12:27 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Feb 28, 2022 at 06:45:51PM +0530, Bharath Rupireddy wrote:\n> > On Sat, Feb 26, 2022 at 9:37 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> For\n> >> this feature, I think we always need to consider what the primary considers\n> >> synchronously replicated. My suggested approach doesn't change that. I'm\n> >> saying that instead of spinning in a loop waiting for the WAL to be\n> >> synchronously replicated, we just immediately send WAL up to the LSN that\n> >> is presently known to be synchronously replicated.\n> >\n> > As I said above, v1 patch does that i.e. 
async standbys wait until the\n> > sync standbys update their flush LSN.\n> >\n> > Flush LSN is this - flushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH];\n> > which gets updated in SyncRepReleaseWaiters.\n> >\n> > Async standbys with their SendRqstPtr will wait in XLogSendPhysical or\n> > XLogSendLogical until SendRqstPtr <= flushLSN.\n>\n> My feedback is specifically about this behavior. I don't think we should\n> spin in XLogSend*() waiting for an LSN to be synchronously replicated. I\n> think we should just choose the SendRqstPtr based on what is currently\n> synchronously replicated.\n\nDo you mean something like the following?\n\n/* Main loop of walsender process that streams the WAL over Copy messages. */\nstatic void\nWalSndLoop(WalSndSendDataCallback send_data)\n{\n /*\n * Loop until we reach the end of this timeline or the client requests to\n * stop streaming.\n */\n for (;;)\n {\n if (am_async_walsender && there_are_sync_standbys)\n {\n XLogRecPtr SendRqstLSN;\n XLogRecPtr SyncFlushLSN;\n\n SendRqstLSN = GetFlushRecPtr(NULL);\n LWLockAcquire(SyncRepLock, LW_SHARED);\n SyncFlushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH];\n LWLockRelease(SyncRepLock);\n\n if (SendRqstLSN > SyncFlushLSN)\n continue;\n }\n\n if (!pq_is_send_pending())\n send_data(); /* THIS IS WHERE XLogSendPhysical or\nXLogSendLogical gets called */\n else\n WalSndCaughtUp = false;\n }\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 1 Mar 2022 11:10:09 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow async standbys wait for sync replication (was: Disallow\n quorum uncommitted (with synchronous standbys) txns in logical replication\n subscribers)" }, { "msg_contents": "On Tue, Mar 01, 2022 at 11:10:09AM +0530, Bharath Rupireddy wrote:\n> On Tue, Mar 1, 2022 at 12:27 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> My feedback is specifically about this behavior. 
I don't think we should\n>> spin in XLogSend*() waiting for an LSN to be synchronously replicated. I\n>> think we should just choose the SendRqstPtr based on what is currently\n>> synchronously replicated.\n> \n> Do you mean something like the following?\n> \n> /* Main loop of walsender process that streams the WAL over Copy messages. */\n> static void\n> WalSndLoop(WalSndSendDataCallback send_data)\n> {\n> /*\n> * Loop until we reach the end of this timeline or the client requests to\n> * stop streaming.\n> */\n> for (;;)\n> {\n> if (am_async_walsender && there_are_sync_standbys)\n> {\n> XLogRecPtr SendRqstLSN;\n> XLogRecPtr SyncFlushLSN;\n> \n> SendRqstLSN = GetFlushRecPtr(NULL);\n> LWLockAcquire(SyncRepLock, LW_SHARED);\n> SyncFlushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH];\n> LWLockRelease(SyncRepLock);\n> \n> if (SendRqstLSN > SyncFlushLSN)\n> continue;\n> }\n\nNot quite. Instead of \"continue\", I would set SendRqstLSN to SyncFlushLSN\nso that the WAL sender only sends up to the current synchronously\nreplicated LSN. 
TBH there are probably other things that need to be\nconsidered (e.g., how do we ensure that the WAL sender sends the rest once\nit is replicated?), but I still think we should avoid spinning in the WAL\nsender waiting for WAL to be replicated.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 28 Feb 2022 22:05:28 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow async standbys wait for sync replication (was: Disallow\n quorum uncommitted (with synchronous standbys) txns in logical replication\n subscribers)" }, { "msg_contents": "(Now I understand what \"async\" mean here..)\n\nAt Mon, 28 Feb 2022 22:05:28 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Tue, Mar 01, 2022 at 11:10:09AM +0530, Bharath Rupireddy wrote:\n> > On Tue, Mar 1, 2022 at 12:27 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> My feedback is specifically about this behavior. I don't think we should\n> >> spin in XLogSend*() waiting for an LSN to be synchronously replicated. I\n> >> think we should just choose the SendRqstPtr based on what is currently\n> >> synchronously replicated.\n> > \n> > Do you mean something like the following?\n> > \n> > /* Main loop of walsender process that streams the WAL over Copy messages. */\n> > static void\n> > WalSndLoop(WalSndSendDataCallback send_data)\n> > {\n> > /*\n> > * Loop until we reach the end of this timeline or the client requests to\n> > * stop streaming.\n> > */\n> > for (;;)\n> > {\n> > if (am_async_walsender && there_are_sync_standbys)\n> > {\n> > XLogRecPtr SendRqstLSN;\n> > XLogRecPtr SyncFlushLSN;\n> > \n> > SendRqstLSN = GetFlushRecPtr(NULL);\n> > LWLockAcquire(SyncRepLock, LW_SHARED);\n> > SyncFlushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH];\n> > LWLockRelease(SyncRepLock);\n> > \n> > if (SendRqstLSN > SyncFlushLSN)\n> > continue;\n> > }\n\nThe current trend is energy-savings. 
We never add a \"wait for some\nfixed time then exit if the condition is met, otherwise repeat\" loop\nfor this kind of purpose where there's no guarantee that the loop\nexits shortly. Concretely we ought to rely on condition\nvariables to do that.\n\n> Not quite. Instead of \"continue\", I would set SendRqstLSN to SyncFlushLSN\n> so that the WAL sender only sends up to the current synchronously\n\nI'm not sure, but doesn't that make the walsender falsely believe it\nhas caught up to the bleeding edge of WAL?\n\n> replicated LSN. TBH there are probably other things that need to be\n> considered (e.g., how do we ensure that the WAL sender sends the rest once\n> it is replicated?), but I still think we should avoid spinning in the WAL\n> sender waiting for WAL to be replicated.\n\nIt seems to me it would be something similar to\nSyncRepReleaseWaiters(). 
Or it could be possible to consolidate this\n> feature into the function, I'm not sure, though.\n\nYes, perhaps the synchronous replication framework will need to alert WAL\nsenders when the syncrep LSN advances so that the WAL is sent to the async\nstandbys. I'm glossing over the details, but I think that should be the\ngeneral direction.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 1 Mar 2022 09:05:37 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow async standbys wait for sync replication" }, { "msg_contents": "On Tue, Mar 1, 2022 at 10:35 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Mar 01, 2022 at 04:34:31PM +0900, Kyotaro Horiguchi wrote:\n> > At Mon, 28 Feb 2022 22:05:28 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in\n> >> replicated LSN. TBH there are probably other things that need to be\n> >> considered (e.g., how do we ensure that the WAL sender sends the rest once\n> >> it is replicated?), but I still think we should avoid spinning in the WAL\n> >> sender waiting for WAL to be replicated.\n> >\n> > It seems to me it would be something similar to\n> > SyncRepReleaseWaiters(). Or it could be possible to consolidate this\n> > feature into the function, I'm not sure, though.\n>\n> Yes, perhaps the synchronous replication framework will need to alert WAL\n> senders when the syncrep LSN advances so that the WAL is sent to the async\n> standbys. I'm glossing over the details, but I think that should be the\n> general direction.\n\nIt's doable. But we can't avoid async walsenders waiting for the flush\nLSN even if we take the SyncRepReleaseWaiters() approach right? I'm\nnot sure (at this moment) what's the biggest advantage of this\napproach i.e. (1) backends waking up walsenders after flush lsn is\nupdated vs (2) walsenders keep looking for the new flush lsn.\n\n> >> My feedback is specifically about this behavior. 
I don't think we should\n> >> spin in XLogSend*() waiting for an LSN to be synchronously replicated. I\n> >> think we should just choose the SendRqstPtr based on what is currently\n> >> synchronously replicated.\n> >\n> > Do you mean something like the following?\n> >\n> > /* Main loop of walsender process that streams the WAL over Copy messages. */\n> > static void\n> > WalSndLoop(WalSndSendDataCallback send_data)\n> > {\n> > /*\n> > * Loop until we reach the end of this timeline or the client requests to\n> > * stop streaming.\n> > */\n> > for (;;)\n> > {\n> > if (am_async_walsender && there_are_sync_standbys)\n> > {\n> > XLogRecPtr SendRqstLSN;\n> > XLogRecPtr SyncFlushLSN;\n> >\n> > SendRqstLSN = GetFlushRecPtr(NULL);\n> > LWLockAcquire(SyncRepLock, LW_SHARED);\n> > SyncFlushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH];\n> > LWLockRelease(SyncRepLock);\n> >\n> > if (SendRqstLSN > SyncFlushLSN)\n> > continue;\n> > }\n>\n> Not quite. Instead of \"continue\", I would set SendRqstLSN to SyncFlushLSN\n> so that the WAL sender only sends up to the current synchronously\n> replicated LSN. TBH there are probably other things that need to be\n> considered (e.g., how do we ensure that the WAL sender sends the rest once\n> it is replicated?), but I still think we should avoid spinning in the WAL\n> sender waiting for WAL to be replicated.\n\nI did some more analysis on the above point: we can let\nXLogSendPhysical know up to which it can send WAL (SendRqstLSN). But,\nXLogSendLogical reads the WAL using XLogReadRecord mechanism with\nread_local_xlog_page page_read callback to which we can't really say\nSendRqstLSN. May be we have to do something like below:\n\nXLogSendPhysical:\n/* Figure out how far we can safely send the WAL. 
*/\nif (am_async_walsender && there_are_sync_standbys)\n{\n LWLockAcquire(SyncRepLock, LW_SHARED);\n SendRqstPtr = WalSndCtl->lsn[SYNC_REP_WAIT_FLUSH];\n LWLockRelease(SyncRepLock);\n}\n/* Existing code path to determine SendRqstPtr */\nelse if (sendTimeLineIsHistoric)\n{\n}\nelse if (am_cascading_walsender)\n{\n}\nelse\n{\n/*\n* Streaming the current timeline on a primary.\n}\n\nXLogSendLogical:\nif (am_async_walsender && there_are_sync_standbys)\n{\n XLogRecPtr SendRqstLSN;\n XLogRecPtr SyncFlushLSN;\n\n SendRqstLSN = GetFlushRecPtr(NULL);\n LWLockAcquire(SyncRepLock, LW_SHARED);\n SyncFlushLSN = WalSndCtl->lsn[SYNC_REP_WAIT_FLUSH];\n LWLockRelease(SyncRepLock);\n\n if (SendRqstLSN > SyncFlushLSN)\n return;\n }\n\nOn Tue, Mar 1, 2022 at 7:35 AM Hsu, John <hsuchen@amazon.com> wrote:\n> > I too observed this once or twice. It looks like the walsender isn't\n> > detecting postmaster death in for (;;) with WalSndWait. Not sure if\n> > this is expected or true with other wait-loops in walsender code. Any\n> > more thoughts here?\n>\n> Unfortunately I haven't had a chance to dig into it more although iirc I hit it fairly often.\n\nI think I got what the issue is. 
Below does the trick.\nif (got_STOPPING)\nproc_exit(0);\n\n * If the server is shut down, checkpointer sends us\n * PROCSIG_WALSND_INIT_STOPPING after all regular backends have exited.\n\nI will take care of this in the next patch once the approach we take\nfor this feature gets finalized.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 1 Mar 2022 23:09:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow async standbys wait for sync replication" }, { "msg_contents": "On Tue, Mar 01, 2022 at 11:09:57PM +0530, Bharath Rupireddy wrote:\n> On Tue, Mar 1, 2022 at 10:35 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> Yes, perhaps the synchronous replication framework will need to alert WAL\n>> senders when the syncrep LSN advances so that the WAL is sent to the async\n>> standbys. I'm glossing over the details, but I think that should be the\n>> general direction.\n> \n> It's doable. But we can't avoid async walsenders waiting for the flush\n> LSN even if we take the SyncRepReleaseWaiters() approach right? I'm\n> not sure (at this moment) what's the biggest advantage of this\n> approach i.e. (1) backends waking up walsenders after flush lsn is\n> updated vs (2) walsenders keep looking for the new flush lsn.\n\nI think there are a couple of advantages. For one, spinning is probably\nnot the best from a resource perspective. There is no guarantee that the\ndesired SendRqstPtr will ever be synchronously replicated, in which case\nthe WAL sender would spin forever. Also, this approach might fit in better\nwith the existing synchronous replication framework. When a WAL sender\nrealizes that it can't send up to the current \"flush\" LSN because it's not\nsynchronously replicated, it will request to be alerted when it is. 
In the\nmeantime, it can send up to the latest syncrep LSN so that the async\nstandby is as up-to-date as possible.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 1 Mar 2022 13:27:00 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow async standbys wait for sync replication" }, { "msg_contents": "On Wed, Mar 2, 2022 at 2:57 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Mar 01, 2022 at 11:09:57PM +0530, Bharath Rupireddy wrote:\n> > On Tue, Mar 1, 2022 at 10:35 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> Yes, perhaps the synchronous replication framework will need to alert WAL\n> >> senders when the syncrep LSN advances so that the WAL is sent to the async\n> >> standbys. I'm glossing over the details, but I think that should be the\n> >> general direction.\n> >\n> > It's doable. But we can't avoid async walsenders waiting for the flush\n> > LSN even if we take the SyncRepReleaseWaiters() approach right? I'm\n> > not sure (at this moment) what's the biggest advantage of this\n> > approach i.e. (1) backends waking up walsenders after flush lsn is\n> > updated vs (2) walsenders keep looking for the new flush lsn.\n>\n> I think there are a couple of advantages. For one, spinning is probably\n> not the best from a resource perspective.\n\nJust to be on the same page - by spinning do you mean - the async\nwalsender waiting for the sync flushLSN in a for-loop with\nWaitLatch()?\n\n> There is no guarantee that the\n> desired SendRqstPtr will ever be synchronously replicated, in which case\n> the WAL sender would spin forever.\n\nThe async walsenders will not exactly wait for SendRqstPtr LSN to be\nthe flush lsn. Say, SendRqstPtr is 100 and the current sync FlushLSN\nis 95, they will have to wait until FlushLSN moves ahead of\nSendRqstPtr i.e. SendRqstPtr <= FlushLSN. 
I can't think of a scenario\n(right now) that doesn't move the sync FlushLSN at all. If there's\nsuch a scenario, shouldn't it be treated as a sync replication bug?\n\n> Also, this approach might fit in better\n> with the existing synchronous replication framework. When a WAL sender\n> realizes that it can't send up to the current \"flush\" LSN because it's not\n> synchronously replicated, it will request to be alerted when it is.\n\nI think you are referring to the way a backend calls SyncRepWaitForLSN\nand waits until any one of the walsenders sets syncRepState to\nSYNC_REP_WAIT_COMPLETE in SyncRepWakeQueue. Firstly, SyncRepWaitForLSN\nis blocking i.e. the backend spins/waits in a for (;;) loop until its\nsyncRepState becomes SYNC_REP_WAIT_COMPLETE. The backend doesn't do\nany other work but waits. So, spinning isn't avoided completely.\n\nUnless I'm missing something, the existing sync rep queue\n(SyncRepQueue) mechanism doesn't avoid spinning in the requestors\n(backends) SyncRepWaitForLSN or in the walsenders SyncRepWakeQueue.\n\n> In the\n> meantime, it can send up to the latest syncrep LSN so that the async\n> standby is as up-to-date as possible.\n\nJust to be clear, there can exist the following scenarios.\nFirstly, SendRqstPtr is the LSN up to which a walsender can send WAL:\n\nscenario 1:\nasync SendRqstPtr is 100, sync FlushLSN is 95 - async standbys will\nwait until the FlushLSN moves ahead, once SendRqstPtr <= FlushLSN, it\nsends out the WAL.\n\nscenario 2:\nasync SendRqstPtr is 105, sync FlushLSN is 110 - async standbys will\nnot wait, it just sends out the WAL up to SendRqstPtr i.e. LSN 105.\n\nscenario 3, same as scenario 2 but SendRqstPtr and FlushLSN is same:\nasync SendRqstPtr is 105, sync FlushLSN is 105 - async standbys will\nnot wait, it just sends out the WAL up to SendRqstPtr i.e. 
LSN 105.\n\nThis way, the async standbys are always as up-to-date as possible with\nthe sync FlushLSN.\n\nAre you referring to any other scenarios?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 2 Mar 2022 09:47:09 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow async standbys wait for sync replication" }, { "msg_contents": "On Wed, Mar 02, 2022 at 09:47:09AM +0530, Bharath Rupireddy wrote:\n> On Wed, Mar 2, 2022 at 2:57 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> I think there are a couple of advantages. For one, spinning is probably\n>> not the best from a resource perspective.\n> \n> Just to be on the same page - by spinning do you mean - the async\n> walsender waiting for the sync flushLSN in a for-loop with\n> WaitLatch()?\n\nYes.\n\n>> Also, this approach might fit in better\n>> with the existing synchronous replication framework. When a WAL sender\n>> realizes that it can't send up to the current \"flush\" LSN because it's not\n>> synchronously replicated, it will request to be alerted when it is.\n> \n> I think you are referring to the way a backend calls SyncRepWaitForLSN\n> and waits until any one of the walsender sets syncRepState to\n> SYNC_REP_WAIT_COMPLETE in SyncRepWakeQueue. Firstly, SyncRepWaitForLSN\n> blocking i.e. the backend spins/waits in for (;;) loop until its\n> syncRepState becomes SYNC_REP_WAIT_COMPLETE. The backend doesn't do\n> any other work but waits. So, spinning isn't avoided completely.\n> \n> Unless, I'm missing something, the existing syc repl queue\n> (SyncRepQueue) mechanism doesn't avoid spinning in the requestors\n> (backends) SyncRepWaitForLSN or in the walsenders SyncRepWakeQueue.\n\nMy point is that there are existing tools for alerting processes when an\nLSN is synchronously replicated and for waking up WAL senders. 
What I am\nproposing wouldn't involve spinning in XLogSendPhysical() waiting for\nsynchronous replication. Like SyncRepWaitForLSN(), we'd register our LSN\nin the queue (SyncRepQueueInsert()), but we wouldn't sit in a separate loop\nwaiting to be woken. Instead, SyncRepWakeQueue() would eventually wake up\nthe WAL sender and trigger another iteration of WalSndLoop().\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 4 Mar 2022 11:56:02 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow async standbys wait for sync replication" }, { "msg_contents": "On Sat, Mar 5, 2022 at 1:26 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Mar 02, 2022 at 09:47:09AM +0530, Bharath Rupireddy wrote:\n> > On Wed, Mar 2, 2022 at 2:57 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> I think there are a couple of advantages. For one, spinning is probably\n> >> not the best from a resource perspective.\n> >\n> > Just to be on the same page - by spinning do you mean - the async\n> > walsender waiting for the sync flushLSN in a for-loop with\n> > WaitLatch()?\n>\n> Yes.\n>\n> >> Also, this approach might fit in better\n> >> with the existing synchronous replication framework. When a WAL sender\n> >> realizes that it can't send up to the current \"flush\" LSN because it's not\n> >> synchronously replicated, it will request to be alerted when it is.\n> >\n> > I think you are referring to the way a backend calls SyncRepWaitForLSN\n> > and waits until any one of the walsender sets syncRepState to\n> > SYNC_REP_WAIT_COMPLETE in SyncRepWakeQueue. Firstly, SyncRepWaitForLSN\n> > blocking i.e. the backend spins/waits in for (;;) loop until its\n> > syncRepState becomes SYNC_REP_WAIT_COMPLETE. The backend doesn't do\n> > any other work but waits. 
So, spinning isn't avoided completely.\n> >\n> > Unless I'm missing something, the existing sync rep queue\n> > (SyncRepQueue) mechanism doesn't avoid spinning in the requestors\n> > (backends) SyncRepWaitForLSN or in the walsenders SyncRepWakeQueue.\n>\n> My point is that there are existing tools for alerting processes when an\n> LSN is synchronously replicated and for waking up WAL senders. What I am\n> proposing wouldn't involve spinning in XLogSendPhysical() waiting for\n> synchronous replication. Like SyncRepWaitForLSN(), we'd register our LSN\n> in the queue (SyncRepQueueInsert()), but we wouldn't sit in a separate loop\n> waiting to be woken. Instead, SyncRepWakeQueue() would eventually wake up\n> the WAL sender and trigger another iteration of WalSndLoop().\n\nI understand. Even if we use the SyncRepWaitForLSN approach, the async\nwalsenders will have to do nothing in WalSndLoop() until the sync\nwalsender wakes them up via SyncRepWakeQueue. For sure, the\nSyncRepWaitForLSN approach avoids extra looping and makes the code\nlook better. One concern is the increased burden on SyncRepLock that the\nSyncRepWaitForLSN approach will need to take in exclusive mode\n(LWLockAcquire(SyncRepLock, LW_EXCLUSIVE);), now that the async\nwalsenders will get added to the list of backends that contend for\nSyncRepLock. 
Whereas the other approach that I earlier proposed would\nrequire SyncRepLock shared mode as it just needs to read the flushLSN.\nI'm not sure if it's a bigger problem.\n\nHaving said above, I agree that the SyncRepWaitForLSN approach makes\nthings probably easy and avoids the new wait loops.\n\nLet me think more and work on this approach.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 5 Mar 2022 14:14:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow async standbys wait for sync replication" }, { "msg_contents": "Hi,\n\nOn 2022-03-05 14:14:54 +0530, Bharath Rupireddy wrote:\n> I understand. Even if we use the SyncRepWaitForLSN approach, the async\n> walsenders will have to do nothing in WalSndLoop() until the sync\n> walsender wakes them up via SyncRepWakeQueue.\n\nI still think we should flat out reject this approach. The proper way to\nimplement this feature is to change the protocol so that WAL can be sent to\nreplicas with an additional LSN informing them up to where WAL can be\nflushed. That way WAL is already sent when the sync replicas have acknowledged\nreceipt and just an updated \"flush/apply up to here\" LSN has to be sent.\n\n- Andres\n\n\n", "msg_date": "Sat, 5 Mar 2022 12:27:52 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Allow async standbys wait for sync replication" }, { "msg_contents": "On Sun, Mar 6, 2022 at 1:57 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-03-05 14:14:54 +0530, Bharath Rupireddy wrote:\n> > I understand. Even if we use the SyncRepWaitForLSN approach, the async\n> > walsenders will have to do nothing in WalSndLoop() until the sync\n> > walsender wakes them up via SyncRepWakeQueue.\n>\n> I still think we should flat out reject this approach. 
The proper way to\n> implement this feature is to change the protocol so that WAL can be sent to\n> replicas with an additional LSN informing them up to where WAL can be\n> flushed. That way WAL is already sent when the sync replicas have acknowledged\n> receipt and just an updated \"flush/apply up to here\" LSN has to be sent.\n\nI was having this thought back of my mind. Please help me understand these:\n1) How will the async standbys ignore the WAL received but\nnot-yet-flushed by them in case the sync standbys don't acknowledge\nflush LSN back to the primary for whatever reasons?\n2) When we say the async standbys will receive the WAL, will they just\nkeep the received WAL in the shared memory but not apply or will they\njust write but not apply the WAL and flush the WAL to the pg_wal\ndirectory on the disk or will they write to some other temp wal\ndirectory until they receive go-ahead LSN from the primary?\n3) Won't the network transfer cost be wasted in case the sync standbys\ndon't acknowledge flush LSN back to the primary for whatever reasons?\n\nThe proposed idea in this thread (async standbys waiting for flush LSN\nfrom sync standbys before sending the WAL), although it makes async\nstandby slower in receiving the WAL, it doesn't have the above\nproblems and is simpler to implement IMO. Since this feature is going\nto be optional with a GUC, users can enable it based on the needs.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sun, 6 Mar 2022 12:27:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow async standbys wait for sync replication" }, { "msg_contents": "Hi,\n\nOn 3/5/22 10:57 PM, Bharath Rupireddy wrote:\n> On Sun, Mar 6, 2022 at 1:57 AM Andres Freund<andres@anarazel.de> wrote:\n>> Hi,\n>>\n>> On 2022-03-05 14:14:54 +0530, Bharath Rupireddy wrote:\n>>> I understand. 
Even if we use the SyncRepWaitForLSN approach, the async\n>>> walsenders will have to do nothing in WalSndLoop() until the sync\n>>> walsender wakes them up via SyncRepWakeQueue.\n>> I still think we should flat out reject this approach. The proper way to\n>> implement this feature is to change the protocol so that WAL can be sent to\n>> replicas with an additional LSN informing them up to where WAL can be\n>> flushed. That way WAL is already sent when the sync replicas have acknowledged\n>> receipt and just an updated \"flush/apply up to here\" LSN has to be sent.\n> I was having this thought back of my mind. Please help me understand these:\n> 1) How will the async standbys ignore the WAL received but\n> not-yet-flushed by them in case the sync standbys don't acknowledge\n> flush LSN back to the primary for whatever reasons?\n> 2) When we say the async standbys will receive the WAL, will they just\n> keep the received WAL in the shared memory but not apply or will they\n> just write but not apply the WAL and flush the WAL to the pg_wal\n> directory on the disk or will they write to some other temp wal\n> directory until they receive go-ahead LSN from the primary?\n> 3) Won't the network transfer cost be wasted in case the sync standbys\n> don't acknowledge flush LSN back to the primary for whatever reasons?\n>\n> The proposed idea in this thread (async standbys waiting for flush LSN\n> from sync standbys before sending the WAL), although it makes async\n> standby slower in receiving the WAL, it doesn't have the above\n> problems and is simpler to implement IMO. 
Since this feature is going\n> to be optional with a GUC, users can enable it based on the needs.\n>\nI think another downside of the approach would be if the async-replica\nhad a lot of changes that were unacknowledged and it were to be\nrestarted for whatever reason we might need to recreate the replica, or\nrun pg_rewind from it again which seems to be what we're trying to avoid.\n\nIt also pushes the complexity to the client side for consumers who stream\nchanges from logical slots which the current proposal seems to prevent.", "msg_date": "Tue, 8 Mar 2022 17:08:49 -0800", "msg_from": "\"Hsu, John\" <hsuchen@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Allow async standbys wait for sync replication" }, { "msg_contents": "Hi,\n\nOn 2022-03-06 12:27:52 +0530, Bharath Rupireddy wrote:\n> On Sun, Mar 6, 2022 at 1:57 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2022-03-05 14:14:54 +0530, Bharath Rupireddy wrote:\n> > > I understand. 
Even if we use the SyncRepWaitForLSN approach, the async\n> > > walsenders will have to do nothing in WalSndLoop() until the sync\n> > > walsender wakes them up via SyncRepWakeQueue.\n> >\n> > I still think we should flat out reject this approach. The proper way to\n> > implement this feature is to change the protocol so that WAL can be sent to\n> > replicas with an additional LSN informing them up to where WAL can be\n> > flushed. That way WAL is already sent when the sync replicas have acknowledged\n> > receipt and just an updated \"flush/apply up to here\" LSN has to be sent.\n> \n> I was having this thought back of my mind. Please help me understand these:\n> 1) How will the async standbys ignore the WAL received but\n> not-yet-flushed by them in case the sync standbys don't acknowledge\n> flush LSN back to the primary for whatever reasons?\n\nWhat do you mean with \"ignore\"? When replaying?\n\nI think this'd require adding a new pg_control field saying up to which LSN\nWAL is \"valid\". If that field is set, replay would only replay up to that LSN\nunless some explicit operation is taken to replay further (e.g. 
for data\nrecovery).\n\n\n> 2) When we say the async standbys will receive the WAL, will they just\n> keep the received WAL in the shared memory but not apply or will they\n> just write but not apply the WAL and flush the WAL to the pg_wal\n> directory on the disk or will they write to some other temp wal\n> directory until they receive go-ahead LSN from the primary?\n\nI was thinking that for now it'd go to disk, but eventually would first go to\nwal_buffers and only to disk if wal_buffers needs to be flushed out (and only\nin that case the pg_control field would need to be set).\n\n\n> 3) Won't the network transfer cost be wasted in case the sync standbys\n> don't acknowledge flush LSN back to the primary for whatever reasons?\n\nThat should be *extremely* rare, and in that case a bit of wasted traffic\nisn't going to matter.\n\n\n> The proposed idea in this thread (async standbys waiting for flush LSN\n> from sync standbys before sending the WAL), although it makes async\n> standby slower in receiving the WAL, it doesn't have the above\n> problems and is simpler to implement IMO. Since this feature is going\n> to be optional with a GUC, users can enable it based on the needs.\n\nTo me it's architecturally the completely wrong direction. We should move in\nthe *other* direction, i.e. allow WAL to be sent to standbys before the\nprimary has finished flushing it locally. Which requires similar\ninfrastructure to what we're discussing here.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 8 Mar 2022 18:01:23 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Allow async standbys wait for sync replication" }, { "msg_contents": "On Tue, Mar 08, 2022 at 06:01:23PM -0800, Andres Freund wrote:\n> To me it's architecturally the completely wrong direction. We should move in\n> the *other* direction, i.e. allow WAL to be sent to standbys before the\n> primary has finished flushing it locally. 
Which requires similar\n> infrastructure to what we're discussing here.\n\nI think this is a good point. After all, WALRead() has the following\ncomment:\n\n * XXX probably this should be improved to suck data directly from the\n * WAL buffers when possible.\n\nOnce you have all the infrastructure for that, holding back WAL replay on\nasync standbys based on synchronous replication might be relatively easy.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 12 Mar 2022 14:33:32 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow async standbys wait for sync replication" }, { "msg_contents": "At Sat, 12 Mar 2022 14:33:32 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Tue, Mar 08, 2022 at 06:01:23PM -0800, Andres Freund wrote:\n> > To me it's architecturally the completely wrong direction. We should move in\n> > the *other* direction, i.e. allow WAL to be sent to standbys before the\n> > primary has finished flushing it locally. Which requires similar\n> > infrastructure to what we're discussing here.\n> \n> I think this is a good point. After all, WALRead() has the following\n> comment:\n> \n> * XXX probably this should be improved to suck data directly from the\n> * WAL buffers when possible.\n> \n> Once you have all the infrastructure for that, holding back WAL replay on\n> async standbys based on synchronous replication might be relatively easy.\n\nThat is, (as my understanding) async standbys are required to allow\noverwriting existing unreplayed records after reconnection. But,\nputting aside how to remember that LSN, if that happens at a segment\nboundary, the async replica may run into the similar situation with\nthe missing-contrecord case. 
But standby cannot insert any original\nrecord to get out from that situation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 14 Mar 2022 11:30:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow async standbys wait for sync replication" }, { "msg_contents": "At Mon, 14 Mar 2022 11:30:02 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Sat, 12 Mar 2022 14:33:32 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> > On Tue, Mar 08, 2022 at 06:01:23PM -0800, Andres Freund wrote:\n> > > To me it's architecturally the completely wrong direction. We should move in\n> > > the *other* direction, i.e. allow WAL to be sent to standbys before the\n> > > primary has finished flushing it locally. Which requires similar\n> > > infrastructure to what we're discussing here.\n> > \n> > I think this is a good point. After all, WALRead() has the following\n> > comment:\n> > \n> > * XXX probably this should be improved to suck data directly from the\n> > * WAL buffers when possible.\n> > \n> > Once you have all the infrastructure for that, holding back WAL replay on\n> > async standbys based on synchronous replication might be relatively easy.\n\nJust to be sure, and a bit off-topic: I think the\noptimization itself is quite promising and something we want to have.\n\n> That is, (as my understanding) async standbys are required to allow\n> overwriting existing unreplayed records after reconnection. But,\n> putting aside how to remember that LSN, if that happens at a segment\n> boundary, the async replica may run into the similar situation with\n> the missing-contrecord case. 
But standby cannot insert any original\n> record to get out from that situation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 14 Mar 2022 11:41:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow async standbys wait for sync replication" }, { "msg_contents": "Hi,\n\nOn 2022-03-14 11:30:02 +0900, Kyotaro Horiguchi wrote:\n> That is, (as my understanding) async standbys are required to allow\n> overwriting existing unreplayed records after reconnection. But,\n> putting aside how to remember that LSN, if that happens at a segment\n> boundary, the async replica may run into the similar situation with\n> the missing-contrecord case. But standby cannot insert any original\n> record to get out from that situation.\n\nI do not see how that problem arrises on standbys when they aren't allowed to\nread those records. It'll just wait for more data to arrive.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Mar 2022 10:00:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Allow async standbys wait for sync replication" }, { "msg_contents": "On Wed, Mar 9, 2022 at 7:31 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-03-06 12:27:52 +0530, Bharath Rupireddy wrote:\n> > On Sun, Mar 6, 2022 at 1:57 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2022-03-05 14:14:54 +0530, Bharath Rupireddy wrote:\n> > > > I understand. Even if we use the SyncRepWaitForLSN approach, the async\n> > > > walsenders will have to do nothing in WalSndLoop() until the sync\n> > > > walsender wakes them up via SyncRepWakeQueue.\n> > >\n> > > I still think we should flat out reject this approach. 
The proper way to\n> > > implement this feature is to change the protocol so that WAL can be sent to\n> > > replicas with an additional LSN informing them up to where WAL can be\n> > > flushed. That way WAL is already sent when the sync replicas have acknowledged\n> > > receipt and just an updated \"flush/apply up to here\" LSN has to be sent.\n> >\n> > I was having this thought back of my mind. Please help me understand these:\n> > 1) How will the async standbys ignore the WAL received but\n> > not-yet-flushed by them in case the sync standbys don't acknowledge\n> > flush LSN back to the primary for whatever reasons?\n>\n> What do you mean with \"ignore\"? When replaying?\n\nLet me illustrate with an example:\n\n1) Say, primary at LSN 100, sync standby at LSN 90 (is about to\nreceive/receiving the WAL from LSN 91 - 100 from primary), async\nstandby at LSN 100 - today this is possible if the async standby is\ncloser to primary than sync standby for whatever reasons\n2) With the approach that's originally proposed in this thread - async\nstandbys can never get ahead of LSN 90 (flush LSN reported back to the\nprimary by all sync standbys)\n3) With the approach that's suggested i.e. \"let async standbys receive\nWAL at their own pace, but they should only be allowed to\napply/write/flush to the WAL file in pg_wal directory/disk until the\nsync standbys latest flush LSN\" - async standbys can receive the WAL\nfrom LSN 91 - 100 but they aren't allowed to apply/write/flush. Where\nwill the async standbys hold the WAL from LSN 91 - 100 until the\nlatest flush LSN (100) is reported to them? If they \"somehow\" store\nthe WAL from LSN 91 - 100 and not apply/write/flush, how will they\nignore that WAL, say if the sync standbys don't report the latest\nflush LSN back to the primary (for whatever reasons)? In such cases,\nthe primary has no idea of the latest sync standbys flush LSN (?) if\nat all the sync standbys can't come up and reconnect and resync with\nthe primary? 
Should the async standby always assume that the WAL from\nLSN 91 -100 is invalid for them as they haven't received the sync\nflush LSN from primary? In such a case, aren't there \"invalid holes\"\nin the WAL files on the async standbys?\n\n> I think this'd require adding a new pg_control field saying up to which LSN\n> WAL is \"valid\". If that field is set, replay would only replay up to that LSN\n> unless some explicit operation is taken to replay further (e.g. for data\n> recovery).\n\nWith the approach that's suggested i.e. \"let async standbys receive\nWAL at their own pace, but they should only be allowed to\napply/write/flush to the WAL file in pg_wal directory/disk until the\nsync standbys latest flush LSN'' - there can be 2 parts to the WAL on\nasync standbys - most of it \"valid and makes sense for async standbys\"\nand some of it \"invalid and doesn't make sense for async standbys''?\nCan't this require us to rework some parts like \"redo/apply/recovery\nlogic on async standbys'', tools like pg_basebackup, pg_rewind,\npg_receivewal, pg_recvlogical, cascading replication etc. that depend\non WAL records and now should know whether the WAL records are valid\nfor them? I may be wrong here though.\n\n> > 2) When we say the async standbys will receive the WAL, will they just\n> > keep the received WAL in the shared memory but not apply or will they\n> > just write but not apply the WAL and flush the WAL to the pg_wal\n> > directory on the disk or will they write to some other temp wal\n> > directory until they receive go-ahead LSN from the primary?\n>\n> I was thinking that for now it'd go to disk, but eventually would first go to\n> wal_buffers and only to disk if wal_buffers needs to be flushed out (and only\n> in that case the pg_control field would need to be set).\n\nIIUC, the WAL buffers (XLogCtl->pages) aren't used on standbys as wal\nreceivers bypass them and flush the data directly to the disk. 
Hence,\nthe WAL buffers that are allocated(?, I haven't checked the code\nthough) but unused on standbys can be used to hold the WAL until the\nnew flush LSN is reported from the primary. At any point of time, the\nWAL buffers will have the latest WAL that's waiting for a new flush\nLSN from the primary. However, this can be a problem for larger\ntransactions that can eat up the entire WAL buffers and flush LSN is\nfar behind in which case we need to flush the WAL to the latest WAL\nfile in pg_wal/disk but let the other folks in the server know upto\nwhich the WAL is valid.\n\n> > 3) Won't the network transfer cost be wasted in case the sync standbys\n> > don't acknowledge flush LSN back to the primary for whatever reasons?\n>\n> That should be *extremely* rare, and in that case a bit of wasted traffic\n> isn't going to matter.\n\nAgree.\n\n> > The proposed idea in this thread (async standbys waiting for flush LSN\n> > from sync standbys before sending the WAL), although it makes async\n> > standby slower in receiving the WAL, it doesn't have the above\n> > problems and is simpler to implement IMO. Since this feature is going\n> > to be optional with a GUC, users can enable it based on the needs.\n>\n> To me it's architecturally the completely wrong direction. We should move in\n> the *other* direction, i.e. allow WAL to be sent to standbys before the\n> primary has finished flushing it locally. 
Which requires similar\n> infrastructure to what we're discussing here.\n\nAgree.\n\n* XXX probably this should be improved to suck data directly from the\n* WAL buffers when possible.\n\nLike others pointed out, if done above, it's possible to achieve\n\"allow WAL to be sent to standbys before the primary has finished\nflushing it locally\".\n\nI would like to hear more thoughts and then summarize the design\npoints a bit later.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 15 Mar 2022 13:08:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow async standbys wait for sync replication" }, { "msg_contents": "On Sat, Mar 5, 2022 at 1:26 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> My point is that there are existing tools for alerting processes when an\n> LSN is synchronously replicated and for waking up WAL senders. What I am\n> proposing wouldn't involve spinning in XLogSendPhysical() waiting for\n> synchronous replication. Like SyncRepWaitForLSN(), we'd register our LSN\n> in the queue (SyncRepQueueInsert()), but we wouldn't sit in a separate loop\n> waiting to be woken. 
Instead, SyncRepWakeQueue() would eventually wake up\n> the WAL sender and trigger another iteration of WalSndLoop().\n\nWhile we continue to discuss the other, better design at [1], FWIW, I\nwould like to share a simpler patch that lets wal senders serving\nasync standbys wait until sync standbys report the flush lsn.\nObviously this is not an elegant way to solve the problem reported in\nthis thread, but since I have had this patch ready for a while, I\nwanted to share it here.\n\nNathan, of course, this is not something you wanted.\n\n[1] https://www.postgresql.org/message-id/CALj2ACWCj60g6TzYMbEO07ZhnBGbdCveCrD413udqbRM0O59RA%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 17 Mar 2022 22:35:50 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow async standbys wait for sync replication" } ]
[ { "msg_contents": "Hi all,\n\nI realized that pg_waldump doesn't show replication origin ID, LSN,\nand timestamp of PREPARE TRANSACTION record. Commit 7b8a899bdeb\nimproved pg_waldump two years ago so that it reports the detail of\ninformation of PREPARE TRANSACTION but ISTM that it overlooked showing\nreplication origin information. As far as I can see in the discussion\nthread[1], there was no discussion on that. These are helpful when\ndiagnosing 2PC related issues on the subscriber side.\n\nI've attached a patch to add replication origin information to\nxact_desc_prepare().\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAHGQGwEvhASad4JJnCv%3D0dW2TJypZgW_Vpb-oZik2a3utCqcrA%40mail.gmail.com\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Mon, 6 Dec 2021 16:35:07 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Make pg_waldump report replication origin ID, LSN, and timestamp." }, { "msg_contents": "On Mon, Dec 06, 2021 at 04:35:07PM +0900, Masahiko Sawada wrote:\n> I've attached a patch to add replication origin information to\n> xact_desc_prepare().\n\nYeah.\n\n+ if (origin_id != InvalidRepOriginId)\n+ appendStringInfo(buf, \"; origin: node %u, lsn %X/%X, at %s\",\n+ origin_id,\n+ LSN_FORMAT_ARGS(parsed.origin_lsn),\n+ timestamptz_to_str(parsed.origin_timestamp));\n\nShouldn't you check for parsed.origin_lsn instead? The replication\norigin is stored there as far as I read EndPrepare().\n\nCommit records check after XACT_XINFO_HAS_ORIGIN, but\nxact_desc_abort() may include this information for ROLLBACK PREPARED \ntransactions so we could use the same logic as xact_desc_commit() for\nthe abort case, no?\n--\nMichael", "msg_date": "Mon, 6 Dec 2021 17:09:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make pg_waldump report replication origin ID, LSN, and timestamp." 
}, { "msg_contents": "On Mon, Dec 6, 2021 at 5:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Dec 06, 2021 at 04:35:07PM +0900, Masahiko Sawada wrote:\n> > I've attached a patch to add replication origin information to\n> > xact_desc_prepare().\n>\n> Yeah.\n>\n> + if (origin_id != InvalidRepOriginId)\n> + appendStringInfo(buf, \"; origin: node %u, lsn %X/%X, at %s\",\n> + origin_id,\n> + LSN_FORMAT_ARGS(parsed.origin_lsn),\n> + timestamptz_to_str(parsed.origin_timestamp));\n>\n> Shouldn't you check for parsed.origin_lsn instead? The replication\n> origin is stored there as far as I read EndPrepare().\n\nYeah, I was thinking of checking origin_lsn instead. That way, we don't show\ninvalid origin_timestamp and origin_lsn even if origin_id is set. But\nas far as I read, the same is true for xact_desc_commit() (and\nxact_desc_rollback()). That is, since apply workers always set their origin\nid and could commit transactions that are not replicated from the\npublisher, it's possible that xact_desc_commit() reports like:\n\nrmgr: Transaction len (rec/tot): 117/ 117, tx: 725, lsn:\n0/014BE938, prev 0/014BE908, desc: COMMIT 2021-12-06 22:04:44.462200\nJST; inval msgs: catcache 55 catcache 54 catcache 64; origin: node 1,\nlsn 0/0, at 2000-01-01 09:00:00.000000 JST\n\nAlso, looking at PrepareRedoAdd(), we check the replication origin id.\nSo I think that it'd be better to check origin_id for consistency.\n\n>\n> Commit records check after XACT_XINFO_HAS_ORIGIN, but\n> xact_desc_abort() may include this information for ROLLBACK PREPARED\n> transactions so we could use the same logic as xact_desc_commit() for\n> the abort case, no?\n\nGood catch! I'll submit an updated patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 6 Dec 2021 23:24:09 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make pg_waldump report replication origin ID, LSN, and timestamp." 
}, { "msg_contents": "On Mon, Dec 6, 2021 at 11:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Dec 6, 2021 at 5:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Mon, Dec 06, 2021 at 04:35:07PM +0900, Masahiko Sawada wrote:\n> > > I've attached a patch to add replication origin information to\n> > > xact_desc_prepare().\n> >\n> > Yeah.\n> >\n> > + if (origin_id != InvalidRepOriginId)\n> > + appendStringInfo(buf, \"; origin: node %u, lsn %X/%X, at %s\",\n> > + origin_id,\n> > + LSN_FORMAT_ARGS(parsed.origin_lsn),\n> > + timestamptz_to_str(parsed.origin_timestamp));\n> >\n> > Shouldn't you check for parsed.origin_lsn instead? The replication\n> > origin is stored there as far as I read EndPrepare().\n>\n> Yeah, I was thinking of checking origin_lsn instead. That way, we don't show\n> invalid origin_timestamp and origin_lsn even if origin_id is set. But\n> as far as I read, the same is true for xact_desc_commit() (and\n> xact_desc_rollback()). That is, since apply workers always set their origin\n> id and could commit transactions that are not replicated from the\n> publisher, it's possible that xact_desc_commit() reports like:\n>\n> rmgr: Transaction len (rec/tot): 117/ 117, tx: 725, lsn:\n> 0/014BE938, prev 0/014BE908, desc: COMMIT 2021-12-06 22:04:44.462200\n> JST; inval msgs: catcache 55 catcache 54 catcache 64; origin: node 1,\n> lsn 0/0, at 2000-01-01 09:00:00.000000 JST\n\nHmm, is that okay in the first place? This happens since the apply\nworker updates the twophasestate value of pg_subscription after setting\nthe origin id and before entering the apply loop. 
No changes in this\ntransaction will be replicated but an empty transaction that has\norigin id and doesn't have origin lsn and time will be replicated.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 7 Dec 2021 09:57:38 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make pg_waldump report replication origin ID, LSN, and timestamp." }, { "msg_contents": "On Mon, Dec 06, 2021 at 11:24:09PM +0900, Masahiko Sawada wrote:\n> On Mon, Dec 6, 2021 at 5:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Shouldn't you check for parsed.origin_lsn instead? The replication\n>> origin is stored there as far as I read EndPrepare().\n> \n> Also, looking at PrepareRedoAdd(), we check the replication origin id.\n> So I think that it'd be better to check origin_id for consistency.\n\nOkay, this consistency would make sense, then. Perhaps some comments\nshould be added to tell that?\n\n>> Commit records check after XACT_XINFO_HAS_ORIGIN, but\n>> xact_desc_abort() may include this information for ROLLBACK PREPARED\n>> transactions so we could use the same logic as xact_desc_commit() for\n>> the abort case, no?\n> \n> Good catch! I'll submit an updated patch.\n\nThanks!\n--\nMichael", "msg_date": "Wed, 8 Dec 2021 16:31:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make pg_waldump report replication origin ID, LSN, and timestamp." }, { "msg_contents": "On Wed, Dec 8, 2021 at 4:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Dec 06, 2021 at 11:24:09PM +0900, Masahiko Sawada wrote:\n> > On Mon, Dec 6, 2021 at 5:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >> Shouldn't you check for parsed.origin_lsn instead? 
The replication\n> >> origin is stored there as far as I read EndPrepare().\n> >\n> > Also, looking at PrepareRedoAdd(), we check the replication origin id.\n> > So I think that it'd be better to check origin_id for consistency.\n>\n> Okay, this consistency would make sense, then. Perhaps some comments\n> should be added to tell that?\n\nAgreed. I've attached an updated patch that incorporated your review\ncomments. Please review it.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Wed, 8 Dec 2021 17:03:30 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make pg_waldump report replication origin ID, LSN, and timestamp." }, { "msg_contents": "On Wed, Dec 08, 2021 at 05:03:30PM +0900, Masahiko Sawada wrote:\n> Agreed. I've attached an updated patch that incorporated your review\n> comments. Please review it.\n\nThat looks correct to me. One thing that I have noticed while\nreviewing is that we don't check XactCompletionApplyFeedback() in\nxact_desc_commit(), which would happen if a transaction needs to do\na remote_apply on a standby. synchronous_commit is a user-settable\nparameter, so it seems to me that it could be useful for debugging?\n\nThat's not related to your patch, but while we are looking at the\narea..\n--\nMichael", "msg_date": "Thu, 9 Dec 2021 16:02:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make pg_waldump report replication origin ID, LSN, and timestamp." }, { "msg_contents": "On Thu, Dec 9, 2021 at 4:02 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Dec 08, 2021 at 05:03:30PM +0900, Masahiko Sawada wrote:\n> > Agreed. I've attached an updated patch that incorporated your review\n> > comments. Please review it.\n>\n> That looks correct to me. 
One thing that I have noticed while\n> reviewing is that we don't check XactCompletionApplyFeedback() in\n> xact_desc_commit(), which would happen if a transaction needs to do\n> a remote_apply on a standby. synchronous_commit is a user-settable\n> parameter, so it seems to me that it could be useful for debugging?\n>\n\nAgreed.\n\nThank you for updating the patch. The patch looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 9 Dec 2021 17:42:56 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make pg_waldump report replication origin ID, LSN, and timestamp." }, { "msg_contents": "On Thu, Dec 09, 2021 at 05:42:56PM +0900, Masahiko Sawada wrote:\n> Thank you for updating the patch. The patch looks good to me.\n\nDone this way. Please note that while testing this patch, I have\nfound a completely different issue. I'll spawn a thread about that in\na minute..\n--\nMichael", "msg_date": "Mon, 13 Dec 2021 11:05:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make pg_waldump report replication origin ID, LSN, and timestamp." } ]
[ { "msg_contents": "Hi,\n\nWhile the database is performing end-of-recovery checkpoint, the\ncontrol file gets updated with db state as \"shutting down\" in\nCreateCheckPoint (see the code snippet at [1]) and at the end it sets\nit back to \"shut down\" for a brief moment and then finally to \"in\nproduction\". If the end-of-recovery checkpoint takes a lot of time or\nthe db goes down during the end-of-recovery checkpoint for whatever\nreasons, the control file ends up having the wrong db state.\n\nShould we add a new db state something like\nDB_IN_END_OF_RECOVERY_CHECKPOINT/\"in end-of-recovery checkpoint\" or\nsomething else to represent the correct state?\n\nThoughts?\n\n[1]\nvoid\nCreateCheckPoint(int flags)\n{\n\n /*\n * An end-of-recovery checkpoint is really a shutdown checkpoint, just\n * issued at a different time.\n */\n if (flags & (CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_END_OF_RECOVERY))\n shutdown = true;\n else\n shutdown = false;\n\n if (shutdown)\n {\n LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n ControlFile->state = DB_SHUTDOWNING;\n UpdateControlFile();\n LWLockRelease(ControlFileLock);\n }\n\n if (shutdown)\n ControlFile->state = DB_SHUTDOWNED;\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 6 Dec 2021 18:02:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Is it correct to update db state in control file as \"shutting down\"\n during end-of-recovery checkpoint?" }, { "msg_contents": "On 12/6/21, 4:34 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> While the database is performing end-of-recovery checkpoint, the\r\n> control file gets updated with db state as \"shutting down\" in\r\n> CreateCheckPoint (see the code snippet at [1]) and at the end it sets\r\n> it back to \"shut down\" for a brief moment and then finally to \"in\r\n> production\". 
If the end-of-recovery checkpoint takes a lot of time or\r\n> the db goes down during the end-of-recovery checkpoint for whatever\r\n> reasons, the control file ends up having the wrong db state.\r\n>\r\n> Should we add a new db state something like\r\n> DB_IN_END_OF_RECOVERY_CHECKPOINT/\"in end-of-recovery checkpoint\" or\r\n> something else to represent the correct state?\r\n\r\nThis seems like a reasonable change to me. From a quick glance, it\r\nlooks like it should be a simple fix that wouldn't add too much\r\ndivergence between the shutdown and end-of-recovery checkpoint code\r\npaths.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 6 Dec 2021 19:28:03 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\"\n during end-of-recovery checkpoint?" }, { "msg_contents": "At Mon, 6 Dec 2021 19:28:03 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 12/6/21, 4:34 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > While the database is performing end-of-recovery checkpoint, the\n> > control file gets updated with db state as \"shutting down\" in\n> > CreateCheckPoint (see the code snippet at [1]) and at the end it sets\n> > it back to \"shut down\" for a brief moment and then finally to \"in\n> > production\". If the end-of-recovery checkpoint takes a lot of time or\n> > the db goes down during the end-of-recovery checkpoint for whatever\n> > reasons, the control file ends up having the wrong db state.\n> >\n> > Should we add a new db state something like\n> > DB_IN_END_OF_RECOVERY_CHECKPOINT/\"in end-of-recovery checkpoint\" or\n> > something else to represent the correct state?\n> \n> This seems like a reasonable change to me. 
From a quick glance, it\n> looks like it should be a simple fix that wouldn't add too much\n> divergence between the shutdown and end-of-recovery checkpoint code\n> paths.\n\nTechnically, an end-of-crash-recovery checkpoint is actually a kind of\nshutdown checkpoint. In other words, a server that needs to run\ncrash recovery is effectively shut down once and then enters normal\noperation mode internally. So if the server crashed after the end-of-recovery\ncheckpoint finished and before it enters the DB_IN_PRODUCTION state, the\nserver would start with a clean startup next time. We could treat\nDB_IN_END_OF_RECOVERY_CHECKPOINT as a safe state to skip recovery, but I\ndon't think we need to preserve that behavior.\n\nIn other places, specifically the server log and ps display, we already\nmake a distinction between the end-of-recovery checkpoint and the shutdown\ncheckpoint.\n\nFinally, I agree with Nathan that it should be simple enough.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 07 Dec 2021 09:58:00 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On Tue, Dec 7, 2021 at 12:58 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 12/6/21, 4:34 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > While the database is performing end-of-recovery checkpoint, the\n> > control file gets updated with db state as \"shutting down\" in\n> > CreateCheckPoint (see the code snippet at [1]) and at the end it sets\n> > it back to \"shut down\" for a brief moment and then finally to \"in\n> > production\". 
If the end-of-recovery checkpoint takes a lot of time or\n> > the db goes down during the end-of-recovery checkpoint for whatever\n> > reasons, the control file ends up having the wrong db state.\n> >\n> > Should we add a new db state something like\n> > DB_IN_END_OF_RECOVERY_CHECKPOINT/\"in end-of-recovery checkpoint\" or\n> > something else to represent the correct state?\n>\n> This seems like a reasonable change to me. From a quick glance, it\n> looks like it should be a simple fix that wouldn't add too much\n> divergence between the shutdown and end-of-recovery checkpoint code\n> paths.\n\nHere's a patch that I've come up with. Please see if this looks okay\nand let me know if we want to take it forward so that I can add a CF\nentry.\n\nThe new status one would see is as follows:\nbharath@bharathubuntu3:~/postgres/inst/bin$ ./pg_controldata -D data\npg_control version number: 1500\nCatalog version number: 202111301\nDatabase system identifier: 7038867865889221935\nDatabase cluster state: in end-of-recovery checkpoint\npg_control last modified: Tue Dec 7 08:06:18 2021\nLatest checkpoint location: 0/14D24A0\nLatest checkpoint's REDO location: 0/14D24A0\nLatest checkpoint's REDO WAL file: 000000010000000000000001\n\nRegards,\nBharath Rupireddy.", "msg_date": "Tue, 7 Dec 2021 13:39:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On 12/7/21, 12:10 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> Here's a patch that I've come up with. 
Please see if this looks okay\r\n> and let me know if we want to take it forward so that I can add a CF\r\n> entry.\r\n\r\nOverall, the patch seems reasonable to me.\r\n\r\n+\t\tcase DB_IN_END_OF_RECOVERY_CHECKPOINT:\r\n+\t\t\tereport(LOG,\r\n+\t\t\t\t\t(errmsg(\"database system was interrupted while in end-of-recovery checkpoint at %s\",\r\n+\t\t\t\t\t\t\tstr_time(ControlFile->time))));\r\n+\t\t\tbreak;\r\n\r\nI noticed that some (but not all) of the surrounding messages say\r\n\"last known up at\" the control file time. I'm curious why you chose\r\nnot to use that phrasing in this case.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 7 Dec 2021 21:20:46 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\"\n during end-of-recovery checkpoint?" }, { "msg_contents": "On Wed, Dec 8, 2021 at 2:50 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 12/7/21, 12:10 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Here's a patch that I've come up with. Please see if this looks okay\n> > and let me know if we want to take it forward so that I can add a CF\n> > entry.\n>\n> Overall, the patch seems reasonable to me.\n\nThanks for reviewing this.\n\n> + case DB_IN_END_OF_RECOVERY_CHECKPOINT:\n> + ereport(LOG,\n> + (errmsg(\"database system was interrupted while in end-of-recovery checkpoint at %s\",\n> + str_time(ControlFile->time))));\n> + break;\n>\n> I noticed that some (but not all) of the surrounding messages say\n> \"last known up at\" the control file time. I'm curious why you chose\n> not to use that phrasing in this case.\n\nIf state is DB_IN_END_OF_RECOVERY_CHECKPOINT that means the db was\ninterrupted while in end-of-recovery checkpoint, so I used the\nphrasing similar to DB_IN_CRASH_RECOVERY and DB_IN_ARCHIVE_RECOVERY\ncases. 
I would like to keep it as-is (in the v1 patch) unless anyone\nhas other thoughts here?\n(errmsg(\"database system was interrupted while in recovery at %s\",\n(errmsg(\"database system was interrupted while in recovery at log time %s\",\n\nI added a CF entry here - https://commitfest.postgresql.org/36/3442/\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 8 Dec 2021 06:50:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On 12/7/21, 5:21 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> On Wed, Dec 8, 2021 at 2:50 AM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> I noticed that some (but not all) of the surrounding messages say\r\n>> \"last known up at\" the control file time. I'm curious why you chose\r\n>> not to use that phrasing in this case.\r\n>\r\n> If state is DB_IN_END_OF_RECOVERY_CHECKPOINT that means the db was\r\n> interrupted while in end-of-recovery checkpoint, so I used the\r\n> phrasing similar to DB_IN_CRASH_RECOVERY and DB_IN_ARCHIVE_RECOVERY\r\n> cases. I would like to keep it as-is (in the v1 patch) unless anyone\r\n> has other thoughts here?\r\n> (errmsg(\"database system was interrupted while in recovery at %s\",\r\n> (errmsg(\"database system was interrupted while in recovery at log time %s\",\r\n\r\nI think that's alright. The only other small suggestion I have would\r\nbe to say \"during end-of-recovery checkpoint\" instead of \"while in\r\nend-of-recovery checkpoint.\"\r\n\r\nAnother option we might want to consider is to just skip updating the\r\nstate entirely for end-of-recovery checkpoints. The state would\r\ninstead go straight from DB_IN_CRASH_RECOVERY to DB_IN_PRODUCTION. 
I\r\ndon't know if it's crucial to have a dedicated control file state for\r\nend-of-recovery checkpoints.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 8 Dec 2021 04:19:02 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\"\n during end-of-recovery checkpoint?" }, { "msg_contents": "On Wed, Dec 8, 2021 at 9:49 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 12/7/21, 5:21 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Wed, Dec 8, 2021 at 2:50 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> >> I noticed that some (but not all) of the surrounding messages say\n> >> \"last known up at\" the control file time. I'm curious why you chose\n> >> not to use that phrasing in this case.\n> >\n> > If state is DB_IN_END_OF_RECOVERY_CHECKPOINT that means the db was\n> > interrupted while in end-of-recovery checkpoint, so I used the\n> > phrasing similar to DB_IN_CRASH_RECOVERY and DB_IN_ARCHIVE_RECOVERY\n> > cases. I would like to keep it as-is (in the v1 patch) unless anyone\n> > has other thoughts here?\n> > (errmsg(\"database system was interrupted while in recovery at %s\",\n> > (errmsg(\"database system was interrupted while in recovery at log time %s\",\n>\n> I think that's alright. The only other small suggestion I have would\n> be to say \"during end-of-recovery checkpoint\" instead of \"while in\n> end-of-recovery checkpoint.\"\n\n\"while in\" is being used by DB_IN_CRASH_RECOVERY and\nDB_IN_ARCHIVE_RECOVERY messages. I don't think it's a good idea to\ndeviate from that and use \"during\".\n\n> Another option we might want to consider is to just skip updating the\n> state entirely for end-of-recovery checkpoints. The state would\n> instead go straight from DB_IN_CRASH_RECOVERY to DB_IN_PRODUCTION. 
I\n> don't know if it's crucial to have a dedicated control file state for\n> end-of-recovery checkpoints.\n\nPlease note that end-of-recovery can take a while in production\nsystems (we have observed such things working with our customers) and\nanything can happen during that period of time. The end-of-recovery\ncheckpoint is not something that gets finished momentarily. Therefore,\nhaving a separate db state in the control file is useful.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 8 Dec 2021 10:11:59 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On 12/7/21, 8:42 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> On Wed, Dec 8, 2021 at 9:49 AM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> I think that's alright. The only other small suggestion I have would\r\n>> be to say \"during end-of-recovery checkpoint\" instead of \"while in\r\n>> end-of-recovery checkpoint.\"\r\n>\r\n> \"while in\" is being used by DB_IN_CRASH_RECOVERY and\r\n> DB_IN_ARCHIVE_RECOVERY messages. I don't think it's a good idea to\r\n> deviate from that and use \"during\".\r\n\r\nFair enough. I don't have a strong opinion about this.\r\n\r\n>> Another option we might want to consider is to just skip updating the\r\n>> state entirely for end-of-recovery checkpoints. The state would\r\n>> instead go straight from DB_IN_CRASH_RECOVERY to DB_IN_PRODUCTION. I\r\n>> don't know if it's crucial to have a dedicated control file state for\r\n>> end-of-recovery checkpoints.\r\n>\r\n> Please note that end-of-recovery can take a while in production\r\n> systems (we have observed such things working with our customers) and\r\n> anything can happen during that period of time. The end-of-recovery\r\n> checkpoint is not something that gets finished momentarily. 
Therefore,\r\n> having a separate db state in the control file is useful.\r\n\r\nIs there some useful distinction between the states for users? ISTM\r\nthat users will be waiting either way, and I don't know that an extra\r\ncontrol file state will help all that much. The main reason I bring\r\nup this option is that the list of states is pretty short and appears\r\nto be intended to indicate the high-level status of the server. Most\r\nof the states are over 20 years old, and the newest one is over 10\r\nyears old, so I don't think new states can be added willy-nilly.\r\n\r\nOf course, I could be off-base and others might agree that this new\r\nstate would be nice to have.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 8 Dec 2021 05:29:28 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\"\n during end-of-recovery checkpoint?" }, { "msg_contents": "On Wed, Dec 8, 2021 at 10:59 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> >> Another option we might want to consider is to just skip updating the\n> >> state entirely for end-of-recovery checkpoints. The state would\n> >> instead go straight from DB_IN_CRASH_RECOVERY to DB_IN_PRODUCTION. I\n> >> don't know if it's crucial to have a dedicated control file state for\n> >> end-of-recovery checkpoints.\n> >\n> > Please note that end-of-recovery can take a while in production\n> > systems (we have observed such things working with our customers) and\n> > anything can happen during that period of time. The end-of-recovery\n> > checkpoint is not something that gets finished momentarily. Therefore,\n> > having a separate db state in the control file is useful.\n>\n> Is there some useful distinction between the states for users? ISTM\n> that users will be waiting either way, and I don't know that an extra\n> control file state will help all that much. 
The main reason I bring\n> up this option is that the list of states is pretty short and appears\n> to be intended to indicate the high-level status of the server. Most\n> of the states are over 20 years old, and the newest one is over 10\n> years old, so I don't think new states can be added willy-nilly.\n\nFirstly, updating the control file with \"DB_SHUTDOWNING\" and\n\"DB_SHUTDOWNED\" for end-of-recovery checkpoint is wrong. I don't think\nhaving DB_IN_CRASH_RECOVERY for end-of-recovery checkpoint is a great\nidea. We have a checkpoint (which most of the time takes a while) in\nbetween the states DB_IN_CRASH_RECOVERY to DB_IN_PRODUCTION. The state\nDB_IN_END_OF_RECOVERY_CHECKPOINT added by the v1 patch at [1] (in this\nthread) helps users to understand and clearly distinguish what state\nthe db is in.\n\nIMHO, the age of the code doesn't stop us adding/fixing/improving the code.\n\n> Of course, I could be off-base and others might agree that this new\n> state would be nice to have.\n\nLet's see what others have to say about this.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACVn5M8xgQ3RD%3D6rSTbbXRBdBWZ%3DTTOBOY_5%2BedMCkWjHA%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 8 Dec 2021 11:47:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "At Wed, 8 Dec 2021 11:47:30 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Wed, Dec 8, 2021 at 10:59 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> > >> Another option we might want to consider is to just skip updating the\n> > >> state entirely for end-of-recovery checkpoints. The state would\n> > >> instead go straight from DB_IN_CRASH_RECOVERY to DB_IN_PRODUCTION. 
I\n> > >> don't know if it's crucial to have a dedicated control file state for\n> > >> end-of-recovery checkpoints.\n\nFWIW I find it simple but sufficient since I regarded the\nend-of-recovery checkpoint as a part of recovery. In that case what\nis strange here is only that the state transition passes the\nDB_SHUTDOWN(ING/ED) states.\n\nOn the other hand, when a server is going to shutdown, the state stays\nat DB_IN_PRODUCTION if there are clinging clients even if the shutdown\nprocedure has been already started and no new clients can connect to\nthe server. There's no reason we need to be so particular about states\nfor recovery-end.\n\n> > > Please note that end-of-recovery can take a while in production\n> > > systems (we have observed such things working with our customers) and\n> > > anything can happen during that period of time. The end-of-recovery\n> > > checkpoint is not something that gets finished momentarily. Therefore,\n> > > having a separate db state in the control file is useful.\n> >\n> > Is there some useful distinction between the states for users? ISTM\n> > that users will be waiting either way, and I don't know that an extra\n> > control file state will help all that much. The main reason I bring\n> > up this option is that the list of states is pretty short and appears\n> > to be intended to indicate the high-level status of the server. Most\n> > of the states are over 20 years old, and the newest one is over 10\n> > years old, so I don't think new states can be added willy-nilly.\n> \n> Firstly, updating the control file with \"DB_SHUTDOWNING\" and\n> \"DB_SHUTDOWNED\" for end-of-recovery checkpoint is wrong. I don't think\n> having DB_IN_CRASH_RECOVERY for end-of-recovery checkpoint is a great\n> idea. We have a checkpoint (which most of the time takes a while) in\n> between the states DB_IN_CRASH_RECOVERY to DB_IN_PRODUCTION. 
The state\n> DB_IN_END_OF_RECOVERY_CHECKPOINT added by the v1 patch at [1] (in this\n> thread) helps users to understand and clearly distinguish what state\n> the db is in.\n> \n> IMHO, the age of the code doesn't stop us adding/fixing/improving the code.\n> \n> > Of course, I could be off-base and others might agree that this new\n> > state would be nice to have.\n> \n> Let's see what others have to say about this.\n\nI see it a bit too complex for the advantage. When end-of-recovery\ncheckpoint takes so long, that state is shown in server log, which\noperators would look into before the control file.\n\n> [1] - https://www.postgresql.org/message-id/CALj2ACVn5M8xgQ3RD%3D6rSTbbXRBdBWZ%3DTTOBOY_5%2BedMCkWjHA%40mail.gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 08 Dec 2021 16:35:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On Wed, Dec 8, 2021 at 1:05 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 8 Dec 2021 11:47:30 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > On Wed, Dec 8, 2021 at 10:59 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> > > >> Another option we might want to consider is to just skip updating the\n> > > >> state entirely for end-of-recovery checkpoints. The state would\n> > > >> instead go straight from DB_IN_CRASH_RECOVERY to DB_IN_PRODUCTION. I\n> > > >> don't know if it's crucial to have a dedicated control file state for\n> > > >> end-of-recovery checkpoints.\n>\n> FWIW I find it simple but sufficient since I regarded the\n> end-of-recovery checkpoint as a part of recovery. 
In that case what\n> is strange here is only that the state transition passes the\n> DB_SHUTDOWN(ING/ED) states.\n>\n> On the other hand, when a server is going to shutdown, the state stays\n> at DB_IN_PRODUCTION if there are clinging clients even if the shutdown\n> procedure has been already started and no new clients can connect to\n> the server. There's no reason we need to be so particular about states\n> for recovery-end.\n>\n> I see it a bit too complex for the advantage. When end-of-recovery\n> checkpoint takes so long, that state is shown in server log, which\n> operators would look into before the control file.\n\nThanks for your thoughts. I'm fine either way, hence attaching two\npatches herewith, and I will leave it to the committer's choice.\n1) v1-0001-Add-DB_IN_END_OF_RECOVERY_CHECKPOINT-state-for-co.patch --\nadds new db state DB_IN_END_OF_RECOVERY_CHECKPOINT for control file.\n2) v1-0001-Skip-control-file-db-state-updation-during-end-of.patch --\njust skips setting db state to DB_SHUTDOWNING and DB_SHUTDOWNED in\ncase of end-of-recovery checkpoint so that the state will be\nDB_IN_CRASH_RECOVERY which then changes to DB_IN_PRODUCTION.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Wed, 8 Dec 2021 16:57:55 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On 12/8/21, 3:29 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> Thanks for your thoughts. 
I'm fine either way, hence attaching two\r\n> patches here with and I will leave it for the committer 's choice.\r\n> 1) v1-0001-Add-DB_IN_END_OF_RECOVERY_CHECKPOINT-state-for-co.patch --\r\n> adds new db state DB_IN_END_OF_RECOVERY_CHECKPOINT for control file.\r\n> 2) v1-0001-Skip-control-file-db-state-updation-during-end-of.patch --\r\n> just skips setting db state to DB_SHUTDOWNING and DB_SHUTDOWNED in\r\n> case of end-of-recovery checkpoint so that the state will be\r\n> DB_IN_CRASH_RECOVERY which then changes to DB_IN_PRODUCTION.\r\n\r\nI've bumped this one to ready-for-committer. For the record, my\r\npreference is the second patch (for the reasons discussed upthread).\r\nBoth patches might benefit from a small comment or two, too.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 8 Dec 2021 17:32:22 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\"\n during end-of-recovery checkpoint?" }, { "msg_contents": "On Wed, Dec 8, 2021 at 11:02 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 12/8/21, 3:29 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Thanks for your thoughts. I'm fine either way, hence attaching two\n> > patches here with and I will leave it for the committer 's choice.\n> > 1) v1-0001-Add-DB_IN_END_OF_RECOVERY_CHECKPOINT-state-for-co.patch --\n> > adds new db state DB_IN_END_OF_RECOVERY_CHECKPOINT for control file.\n> > 2) v1-0001-Skip-control-file-db-state-updation-during-end-of.patch --\n> > just skips setting db state to DB_SHUTDOWNING and DB_SHUTDOWNED in\n> > case of end-of-recovery checkpoint so that the state will be\n> > DB_IN_CRASH_RECOVERY which then changes to DB_IN_PRODUCTION.\n>\n> I've bumped this one to ready-for-committer. 
For the record, my\n> preference is the second patch (for the reasons discussed upthread).\n> Both patches might benefit from a small comment or two, too.\n\nThanks. I've added a comment to the patch\nv2-0001-Skip-control-file-db-state-updation-during-end-of.patch. The\nother patch remains the same as the new state\nDB_IN_END_OF_RECOVERY_CHECKPOINT introduced there says it all.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 9 Dec 2021 07:41:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On Thu, Dec 09, 2021 at 07:41:52AM +0530, Bharath Rupireddy wrote:\n> On Wed, Dec 8, 2021 at 11:02 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> >\n> > On 12/8/21, 3:29 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > Thanks for your thoughts. I'm fine either way, hence attaching two\n> > > patches here with and I will leave it for the committer 's choice.\n> > > 1) v1-0001-Add-DB_IN_END_OF_RECOVERY_CHECKPOINT-state-for-co.patch --\n> > > adds new db state DB_IN_END_OF_RECOVERY_CHECKPOINT for control file.\n> > > 2) v1-0001-Skip-control-file-db-state-updation-during-end-of.patch --\n> > > just skips setting db state to DB_SHUTDOWNING and DB_SHUTDOWNED in\n> > > case of end-of-recovery checkpoint so that the state will be\n> > > DB_IN_CRASH_RECOVERY which then changes to DB_IN_PRODUCTION.\n> >\n> > I've bumped this one to ready-for-committer. For the record, my\n> > preference is the second patch (for the reasons discussed upthread).\n> > Both patches might benefit from a small comment or two, too.\n> \n> Thanks. I've added a comment to the patch\n> v2-0001-Skip-control-file-db-state-updation-during-end-of.patch. 
The\n> other patch remains the same as the new state\n> DB_IN_END_OF_RECOVERY_CHECKPOINT introduced there says it all.\n> \n\nAFAIU it is one patch or the other but not both, isn't it?\n\nThis habit of putting two conflicting versions of patches on the same \nthread causes http://cfbot.cputube.org/ to fail.\n\nNow, I do think that the second patch, the one that just skips the update\nof the state in the control file, is the way to go. The other patch adds too\nmuch complexity for a small return.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Mon, 10 Jan 2022 00:28:39 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On Mon, Jan 10, 2022 at 10:58 AM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n>\n> On Thu, Dec 09, 2021 at 07:41:52AM +0530, Bharath Rupireddy wrote:\n> > On Wed, Dec 8, 2021 at 11:02 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> > >\n> > > On 12/8/21, 3:29 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > Thanks for your thoughts. I'm fine either way, hence attaching two\n> > > > patches here with and I will leave it for the committer 's choice.\n> > > > 1) v1-0001-Add-DB_IN_END_OF_RECOVERY_CHECKPOINT-state-for-co.patch --\n> > > > adds new db state DB_IN_END_OF_RECOVERY_CHECKPOINT for control file.\n> > > > 2) v1-0001-Skip-control-file-db-state-updation-during-end-of.patch --\n> > > > just skips setting db state to DB_SHUTDOWNING and DB_SHUTDOWNED in\n> > > > case of end-of-recovery checkpoint so that the state will be\n> > > > DB_IN_CRASH_RECOVERY which then changes to DB_IN_PRODUCTION.\n> > >\n> > > I've bumped this one to ready-for-committer. 
For the record, my\n> > > preference is the second patch (for the reasons discussed upthread).\n> > > Both patches might benefit from a small comment or two, too.\n> >\n> > Thanks. I've added a comment to the patch\n> > v2-0001-Skip-control-file-db-state-updation-during-end-of.patch. The\n> > other patch remains the same as the new state\n> > DB_IN_END_OF_RECOVERY_CHECKPOINT introduced there says it all.\n\n> Now; I do think that the secondd patch, the one that just skips update\n> of the state in control file, is the way to go. The other patch adds too\n> much complexity for a small return.\n\nThanks. Attaching the above patch.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 10 Jan 2022 11:04:05 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On Mon, Jan 10, 2022 at 11:04:05AM +0530, Bharath Rupireddy wrote:\n> On Mon, Jan 10, 2022 at 10:58 AM Jaime Casanova\n> <jcasanov@systemguards.com.ec> wrote:\n>> Now; I do think that the secondd patch, the one that just skips update\n>> of the state in control file, is the way to go. The other patch adds too\n>> much complexity for a small return.\n> \n> Thanks. Attaching the above patch.\n\nI agree that the addition of DB_IN_END_OF_RECOVERY_CHECKPOINT is not\nnecessary as the control file state will be reflected in a live server\nonce the instance is ready to write WAL after promotion, as much as\nI agree that the state stored in the control file because of the\nend-of-recovery record does not reflect reality.\n\nNow, I also find confusing the state of CreateCheckpoint() once this\npatch gets applied. Now the code and comments imply that an\nend-of-recovery checkpoint is a shutdown checkpoint because they\nperform the same actions, which is fine. 
Could it be less confusing\nto remove completely the \"shutdown\" variable instead and replace those\nchecks with \"flags\"? What the patch is doing is one step in this\ndirection.\n--\nMichael", "msg_date": "Tue, 25 Jan 2022 14:16:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On 1/24/22, 9:16 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> Now, I also find confusing the state of CreateCheckpoint() once this\r\n> patch gets applied. Now the code and comments imply that an\r\n> end-of-recovery checkpoint is a shutdown checkpoint because they\r\n> perform the same actions, which is fine. Could it be less confusing\r\n> to remove completely the \"shutdown\" variable instead and replace those\r\n> checks with \"flags\"? What the patch is doing is one step in this\r\n> direction.\r\n\r\nI looked into removing the \"shutdown\" variable in favor of using\r\n\"flags\" everywhere, but the patch was quite messy and repetitive. I\r\nthink another way to make things less confusing is to replace\r\n\"shutdown\" with an inverse variable called \"online.\" The attached\r\npatch does it this way.\r\n\r\nNathan", "msg_date": "Tue, 25 Jan 2022 19:20:05 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\"\n during end-of-recovery checkpoint?" }, { "msg_contents": "On Tue, Jan 25, 2022 at 07:20:05PM +0000, Bossart, Nathan wrote:\n> I looked into removing the \"shutdown\" variable in favor of using\n> \"flags\" everywhere, but the patch was quite messy and repetitive. 
I\n> think another way to make things less confusing is to replace\n> \"shutdown\" with an inverse variable called \"online.\" The attached\n> patch does it this way.\n\nYeah, that sounds like a good compromise. At least, I find the whole\na bit easier to follow.\n\nHeikki was planning to commit a large refactoring of xlog.c, so we'd\nbetter wait for that to happen before concluding here.\n--\nMichael", "msg_date": "Wed, 26 Jan 2022 09:57:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On Wed, Jan 26, 2022 at 6:28 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jan 25, 2022 at 07:20:05PM +0000, Bossart, Nathan wrote:\n> > I looked into removing the \"shutdown\" variable in favor of using\n> > \"flags\" everywhere, but the patch was quite messy and repetitive. I\n> > think another way to make things less confusing is to replace\n> > \"shutdown\" with an inverse variable called \"online.\" The attached\n> > patch does it this way.\n>\n> Yeah, that sounds like a good compromise. At least, I find the whole\n> a bit easier to follow.\n\nv3 LGTM.\n\n> Heikki was planning to commit a large refactoring of xlog.c, so we'd\n> better wait for that to happen before concluding here.\n\nWill that patch set fix the issue reported here?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 26 Jan 2022 08:09:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" 
}, { "msg_contents": "On Wed, Jan 26, 2022 at 08:09:16AM +0530, Bharath Rupireddy wrote:\n> Will that patch set fix the issue reported here?\n\nLooking at [1], the area of CreateCheckPoint() is left untouched.\n\n[1]: https://www.postgresql.org/message-id/52bc9ccd-8591-431b-0086-15d9acf25a3f@iki.fi\n--\nMichael", "msg_date": "Wed, 26 Jan 2022 15:34:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On Wed, Jan 26, 2022 at 12:04 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jan 26, 2022 at 08:09:16AM +0530, Bharath Rupireddy wrote:\n> > Will that patch set fix the issue reported here?\n>\n> Looking at [1], the area of CreateCheckPoint() is left untouched.\n>\n> [1]: https://www.postgresql.org/message-id/52bc9ccd-8591-431b-0086-15d9acf25a3f@iki.fi\n\nI see. IMHO, we can fix the issue reported here then. Thoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 26 Jan 2022 12:13:03 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On Wed, Jan 26, 2022 at 12:13:03PM +0530, Bharath Rupireddy wrote:\n> I see. IMHO, we can fix the issue reported here then.\n\nYes.\n--\nMichael", "msg_date": "Wed, 26 Jan 2022 15:56:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On Wed, Jan 26, 2022 at 09:57:59AM +0900, Michael Paquier wrote:\n> Yeah, that sounds like a good compromise. 
At least, I find the whole\n> a bit easier to follow.\n\nSo, I have been checking this idea in details, and spotted what looks\nlike one issue in CreateRestartPoint(), as of:\n\t/*\n\t * Update pg_control, using current time. Check that it still shows\n\t * DB_IN_ARCHIVE_RECOVERY state and an older checkpoint, else do nothing;\n\t * this is a quick hack to make sure nothing really bad happens if somehow\n\t * we get here after the end-of-recovery checkpoint.\n\t */\n\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n\t\tControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n\nThis change increases the window making this code path reachable if an\nend-of-recovery checkpoint is triggered but not finished at the end of\nrecovery (possible of course at the end of crash recovery, but\nDB_IN_ARCHIVE_RECOVERY is also possible when !IsUnderPostmaster with a\npromotion request), before updating ControlFile->checkPointCopy at the\nend of the checkpoint because the state could still be\nDB_IN_ARCHIVE_RECOVERY. The window is wider the longer the\nend-of-recovery checkpoint. And this would be the case of an instance\nrestarted, when a restart point is created.\n\n> Heikki was planning to commit a large refactoring of xlog.c, so we'd\n> better wait for that to happen before concluding here.\n\nI have checked that as well, and both are independent.\n--\nMichael", "msg_date": "Thu, 27 Jan 2022 14:06:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On Thu, Jan 27, 2022 at 02:06:40PM +0900, Michael Paquier wrote:\n> So, I have been checking this idea in details, and spotted what looks\n> like one issue in CreateRestartPoint(), as of:\n> \t/*\n> \t * Update pg_control, using current time. 
Check that it still shows\n> \t * DB_IN_ARCHIVE_RECOVERY state and an older checkpoint, else do nothing;\n> \t * this is a quick hack to make sure nothing really bad happens if somehow\n> \t * we get here after the end-of-recovery checkpoint.\n> \t */\n> \tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n> \tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n> \t\tControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n> \n> This change increases the window making this code path reachable if an\n> end-of-recovery checkpoint is triggered but not finished at the end of\n> recovery (possible of course at the end of crash recovery, but\n> DB_IN_ARCHIVE_RECOVERY is also possible when !IsUnderPostmaster with a\n> promotion request), before updating ControlFile->checkPointCopy at the\n> end of the checkpoint because the state could still be\n> DB_IN_ARCHIVE_RECOVERY. The window is wider the longer the\n> end-of-recovery checkpoint. And this would be the case of an instance\n> restarted, when a restart point is created.\n\nI wonder if this is actually a problem in practice. IIUC all of the values\nupdated in this block should be reset at the end of the end-of-recovery\ncheckpoint. Is the intent of the quick hack to prevent those updates after\nan end-of-recovery checkpoint completes, or is it trying to block them\nafter one begins? It looks like the control file was being updated to\nDB_SHUTDOWNING at the beginning of end-of-recovery checkpoints when that\nchange was first introduced (2de48a8), so I agree that we'd better be\ncareful with this change.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com/\n\n\n", "msg_date": "Thu, 27 Jan 2022 10:47:20 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" 
}, { "msg_contents": "At Tue, 25 Jan 2022 19:20:05 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 1/24/22, 9:16 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n> > Now, I also find confusing the state of CreateCheckpoint() once this\n> > patch gets applied. Now the code and comments imply that an\n> > end-of-recovery checkpoint is a shutdown checkpoint because they\n> > perform the same actions, which is fine. Could it be less confusing\n> > to remove completely the \"shutdown\" variable instead and replace those\n> > checks with \"flags\"? What the patch is doing is one step in this\n> > direction.\n> \n> I looked into removing the \"shutdown\" variable in favor of using\n> \"flags\" everywhere, but the patch was quite messy and repetitive. I\n> think another way to make things less confusing is to replace\n> \"shutdown\" with an inverse variable called \"online.\" The attached\n> patch does it this way.\n\nI find that change doesn't work. As Michael said the \"shutdown\" is\nimplies \"shutdown checkpoint\". And end-of-recovery checkpoint is done\nonline (means \"not-shutdowning\"). shutdown_checkpoint works for me.\nRenaming \"shutdown checkpoint\" as \"exclusive checkpoint\" or so also\nworks for me but I think it would cause halation (or zealous\nobjections)..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 28 Jan 2022 16:57:42 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" 
}, { "msg_contents": "At Thu, 27 Jan 2022 10:47:20 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Thu, Jan 27, 2022 at 02:06:40PM +0900, Michael Paquier wrote:\n> > So, I have been checking this idea in details, and spotted what looks\n> > like one issue in CreateRestartPoint(), as of:\n> > \t/*\n> > \t * Update pg_control, using current time. Check that it still shows\n> > \t * DB_IN_ARCHIVE_RECOVERY state and an older checkpoint, else do nothing;\n> > \t * this is a quick hack to make sure nothing really bad happens if somehow\n> > \t * we get here after the end-of-recovery checkpoint.\n> > \t */\n> > \tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n> > \tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n> > \t\tControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n> > \n> > This change increases the window making this code path reachable if an\n> > end-of-recovery checkpoint is triggered but not finished at the end of\n> > recovery (possible of course at the end of crash recovery, but\n> > DB_IN_ARCHIVE_RECOVERY is also possible when !IsUnderPostmaster with a\n> > promotion request), before updating ControlFile->checkPointCopy at the\n> > end of the checkpoint because the state could still be\n> > DB_IN_ARCHIVE_RECOVERY. The window is wider the longer the\n> > end-of-recovery checkpoint. And this would be the case of an instance\n> > restarted, when a restart point is created.\n> \n> I wonder if this is actually a problem in practice. IIUC all of the values\n> updated in this block should be reset at the end of the end-of-recovery\n> checkpoint. Is the intent of the quick hack to prevent those updates after\n> an end-of-recovery checkpoint completes, or is it trying to block them\n> after one begins? 
It looks like the control file was being updated to\n> DB_SHUTDOWNING at the beginning of end-of-recovery checkpoints when that\n> change was first introduced (2de48a8), so I agree that we'd better be\n> careful with this change.\n\nPutting aside the readiness of the patch, I think what the patch\nintends is to avoid strange state transitions happening during the\nend-of-recovery checkpoint.\n\nSo, I'm confused...\n\nEnd-of-recovery checkpoint is requested as CHECKPOINT_WAIT, which\nseems to me to mean the state is always DB_IN_ARCHIVE_RECOVERY while\nthe checkpoint is running? If correct, if the server is killed during the\nend-of-recovery checkpoint, the state stays DB_IN_ARCHIVE_RECOVERY\ninstead of DB_SHUTDOWNING or DB_SHUTDOWNED. AFAICS there's no\ndifference between the first two at next startup. I don't think the\nDB_SHUTDOWNED case is worth considering here.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 28 Jan 2022 18:32:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On Fri, Jan 28, 2022 at 06:32:27PM +0900, Kyotaro Horiguchi wrote:\n> End-of-recovery checkpoint is requested as CHECKPOINT_WAIT, which\n> seems to me to mean the state is always DB_IN_ARCHIVE_RECOVERY while\n> the checkpoint is running?\n\nWith the patch, yes, we would keep the control file under\nDB_IN_{ARCHIVE,CRASH}_RECOVERY during the whole period of the\nend-of-recovery checkpoint. On HEAD, we would have DB_SHUTDOWNING\nuntil the end-of-recovery checkpoint completes, after which we switch\nto DB_SHUTDOWNED for a short time before DB_IN_PRODUCTION.\n\n> If correct, if server is killed druing the\n> end-of-recovery checkpoint, the state stays DB_IN_ARCHIVE_RECOVERY\n> instead of DB_SHUTDOWNING or DB_SHUTDOWNED. 
AFAICS there's no\n> differece between the first two at next startup.\n\nOne difference is the hint given to the user at the follow-up startup.\nCrash and archive recovery states can freak people easily as they\nmention the risk of corruption. Using DB_SHUTDOWNING reduces this\nwindow.\n\n> I dont' think DB_SHUTDOWNED case is not worth considering here.\n\nWell, this patch also means there is a small window where we have\nDB_IN_ARCHIVE_RECOVERY in the control file with the new timeline and\nminRecoveryPoint not set, rather than DB_SHUTDOWNED.\n--\nMichael", "msg_date": "Sat, 29 Jan 2022 11:12:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On Sat, Jan 29, 2022 at 7:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jan 28, 2022 at 06:32:27PM +0900, Kyotaro Horiguchi wrote:\n> > End-of-recovery checkpoint is requested as CHECKPOINT_WAIT, which\n> > seems to me to mean the state is always DB_IN_ARCHIVE_RECOVERY while\n> > the checkpoint is running?\n>\n> With the patch, yes, we would keep the control file under\n> DB_IN_{ARCHIVE,CRASH}_RECOVERY during the whole period of the\n> end-of-recovery checkpoint. On HEAD, we would have DB_SHUTDOWNING\n> until the end-of-recovery checkpoint completes, after which we switch\n> to DB_SHUTDOWNED for a short time before DB_IN_PRODUCTION.\n>\n> > If correct, if server is killed druing the\n> > end-of-recovery checkpoint, the state stays DB_IN_ARCHIVE_RECOVERY\n> > instead of DB_SHUTDOWNING or DB_SHUTDOWNED. AFAICS there's no\n> > differece between the first two at next startup.\n>\n> One difference is the hint given to the user at the follow-up startup.\n> Crash and archive recovery states can freak people easily as they\n> mention the risk of corruption. 
Using DB_SHUTDOWNING reduces this\n > window.\n\nIf the server crashes in end-of-recovery, in the follow-up startup,\nthe server has to start all the recovery right? In that case,\nDB_IN_{ARCHIVE, CRASH}_RECOVERY would represent the correct state to\nthe user, not the DB_SHUTDOWNING/DB_SHUTDOWNED IMO.\n\nThere's another option to have a new state\nDB_IN_END_OF_RECOVERY_CHECKPOINT, if the DB_IN_{ARCHIVE,\nCRASH}_RECOVERY really scares users of end-of-recovery crash?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 29 Jan 2022 20:07:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On Sat, Jan 29, 2022 at 08:07:23PM +0530, Bharath Rupireddy wrote:\n> If the server crashes in end-of-recovery, in the follow-up startup,\n> the server has to start all the recovery right? In that case,\n> DB_IN_{ARCHIVE, CRASH}_RECOVERY would represent the correct state to\n> the user, not the DB_SHUTDOWNING/DB_SHUTDOWNED IMO.\n> \n> There's another option to have a new state\n> DB_IN_END_OF_RECOVERY_CHECKPOINT, if the DB_IN_{ARCHIVE,\n> CRASH}_RECOVERY really scares users of end-of-recovery crash?\n\nWell, an end-of-recovery checkpoint is a shutdown checkpoint, and it\nrelies on the same assumptions in terms of checkpoint logic for the\nlast 10 years or so, so the state of the control file is not wrong\nper-se, either. There are other cases that we may want to worry\nabout with this change, like the fact that unlogged relation reset\nrelies on the cluster to be cleanly shut down when we begin entering\nthe replay loop. And it seems to me that this has not been looked\nat. 
A second thing would be the introduction of an invalid LSN\nminimum recovery point in the control file while in\nDB_IN_ARCHIVE_RECOVERY when we are done with the checkpoint, joining\nmy point of upthread.\n\nAt the end of the day, it may be better to just let this stuff be.\nAnother argument for doing nothing is that this could cause\nhard-to-spot conflicts when it comes to back-patch something.\n--\nMichael", "msg_date": "Mon, 31 Jan 2022 16:25:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On Mon, Jan 31, 2022 at 04:25:21PM +0900, Michael Paquier wrote:\n> At the end of the day, it may be better to just let this stuff be.\n> Another argument for doing nothing is that this could cause\n> hard-to-spot conflicts when it comes to back-patch something.\n\nThis one has been quiet for a while. Should we mark it as\nreturned-with-feedback?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 25 Feb 2022 13:09:53 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On Fri, Feb 25, 2022 at 01:09:53PM -0800, Nathan Bossart wrote:\n> This one has been quiet for a while. Should we mark it as\n> returned-with-feedback?\n\nYes, that's my feeling and I got cold feet about this change. This\npatch would bring some extra visibility for something that's not\nincorrect either on HEAD, as end-of-recovery checkpoints are the same\nthings as shutdown checkpoints. 
And there is an extra argument where\nback-patching would become a bit more tricky in an area that's already\na lot sensitive.\n--\nMichael", "msg_date": "Sat, 26 Feb 2022 12:11:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "At Sat, 26 Feb 2022 12:11:15 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Feb 25, 2022 at 01:09:53PM -0800, Nathan Bossart wrote:\n> > This one has been quiet for a while. Should we mark it as\n> > returned-with-feedback?\n> \n> Yes, that's my feeling and I got cold feet about this change. This\n> patch would bring some extra visibility for something that's not\n> incorrect either on HEAD, as end-of-recovery checkpoints are the same\n> things as shutdown checkpoints. And there is an extra argument where\n> back-patching would become a bit more tricky in an area that's already\n> a lot sensitive.\n\nThat sounds like we should reject the patch as we don't agree to its\nobjective. If someday end-of-recovery checkpoints functionally\ndiverge from shutdown checkpoints but leave (somehow) the transition\nalone, we may visit this again but it would be another proposal.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 28 Feb 2022 10:51:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" }, { "msg_contents": "On Mon, Feb 28, 2022 at 10:51:06AM +0900, Kyotaro Horiguchi wrote:\n> That sounds like we should reject the patch as we don't agree to its\n> objective. 
If someday end-of-recovery checkpoints functionally\n> diverge from shutdown checkpoints but leave (somehow) the transition\n> alone, we may visit this again but it would be another proposal.\n\nThe patch has been already withdrawn in the CF app.\n--\nMichael", "msg_date": "Mon, 28 Feb 2022 11:01:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is it correct to update db state in control file as \"shutting\n down\" during end-of-recovery checkpoint?" } ]
[ { "msg_contents": "Hi,\n\nThe function PreallocXlogFiles doesn't get called during\nend-of-recovery checkpoint in CreateCheckPoint, see [1]. The server\nbecomes operational after the end-of-recovery checkpoint and may need\nWAL files. However, I'm not sure how beneficial it is going to be if\nthe WAL is pre-allocated (as PreallocXlogFiles just allocates only 1\nextra WAL file).\n\nThoughts?\n\n[1]\n /*\n * An end-of-recovery checkpoint is really a shutdown checkpoint, just\n * issued at a different time.\n */\n if (flags & (CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_END_OF_RECOVERY))\n shutdown = true;\n else\n shutdown = false;\n\n /*\n * Make more log segments if needed. (Do this after recycling old log\n * segments, since that may supply some of the needed files.)\n */\n if (!shutdown)\n PreallocXlogFiles(recptr, checkPoint.ThisTimeLineID);\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 6 Dec 2021 18:21:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Do we need pre-allocate WAL files during end-of-recovery checkpoint?" }, { "msg_contents": "If the segment size is 16MB it shouldn't take much time but higher segment\nvalues this can be a problem. But again, the current segment has to be\nfilled 75% to precreate new one. I am not sure how much we gain. Do you\nhave some numbers with different segment sizes?\n\nOn Mon, Dec 6, 2021 at 4:51 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> Hi,\n>\n> The function PreallocXlogFiles doesn't get called during\n> end-of-recovery checkpoint in CreateCheckPoint, see [1]. The server\n> becomes operational after the end-of-recovery checkpoint and may need\n> WAL files. 
However, I'm not sure how beneficial it is going to be if\n> the WAL is pre-allocated (as PreallocXlogFiles just allocates only 1\n> extra WAL file).\n>\n> Thoughts?\n>\n> [1]\n>     /*\n>      * An end-of-recovery checkpoint is really a shutdown checkpoint, just\n>      * issued at a different time.\n>      */\n>     if (flags & (CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_END_OF_RECOVERY))\n>         shutdown = true;\n>     else\n>         shutdown = false;\n>\n>     /*\n>      * Make more log segments if needed. (Do this after recycling old log\n>      * segments, since that may supply some of the needed files.)\n>      */\n>     if (!shutdown)\n>         PreallocXlogFiles(recptr, checkPoint.ThisTimeLineID);\n>\n> Regards,\n> Bharath Rupireddy.\n>\n>\n>\n", "msg_date": "Mon, 6 Dec 2021 11:05:31 -0800", "msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Do we need pre-allocate WAL files during end-of-recovery\n checkpoint?" }, { "msg_contents": "On 12/6/21, 4:54 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> The function PreallocXlogFiles doesn't get called during\r\n> end-of-recovery checkpoint in CreateCheckPoint, see [1]. The server\r\n> becomes operational after the end-of-recovery checkpoint and may need\r\n> WAL files. However, I'm not sure how beneficial it is going to be if\r\n> the WAL is pre-allocated (as PreallocXlogFiles just allocates only 1\r\n> extra WAL file).\r\n\r\nThere is another thread for adding more effective WAL pre-allocation\r\n[0] that you might be interested in.\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/message-id/flat/20201225200953.jjkrytlrzojbndh5%40alap3.anarazel.de\r\n\r\n", "msg_date": "Mon, 6 Dec 2021 19:32:40 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Do we need pre-allocate WAL files during end-of-recovery\n checkpoint?" }, { "msg_contents": "On Mon, Dec 06, 2021 at 06:21:40PM +0530, Bharath Rupireddy wrote:\n> The function PreallocXlogFiles doesn't get called during\n> end-of-recovery checkpoint in CreateCheckPoint, see [1]. 
The server\n> becomes operational after the end-of-recovery checkpoint and may need\n> WAL files.\n\nPreallocXlogFiles() is never a necessity; it's just an attempted optimization.\nI expect preallocation at end-of-recovery would do more harm than good,\nbecause the system would accept no writes at all while waiting for it.\n\n> However, I'm not sure how beneficial it is going to be if\n> the WAL is pre-allocated (as PreallocXlogFiles just allocates only 1\n> extra WAL file).\n\nYeah, PreallocXlogFiles() feels like a relict from the same era that gave us\ncheckpoint_segments=3. It was more useful before commit 63653f7 (2002).\n\n\n", "msg_date": "Mon, 6 Dec 2021 22:39:04 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Do we need pre-allocate WAL files during end-of-recovery\n checkpoint?" }, { "msg_contents": "On Tue, Dec 7, 2021 at 1:02 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 12/6/21, 4:54 AM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > The function PreallocXlogFiles doesn't get called during\n> > end-of-recovery checkpoint in CreateCheckPoint, see [1]. The server\n> > becomes operational after the end-of-recovery checkpoint and may need\n> > WAL files. However, I'm not sure how beneficial it is going to be if\n> > the WAL is pre-allocated (as PreallocXlogFiles just allocates only 1\n> > extra WAL file).\n>\n> There is another thread for adding more effective WAL pre-allocation\n> [0] that you might be interested in.\n> [0] https://www.postgresql.org/message-id/flat/20201225200953.jjkrytlrzojbndh5%40alap3.anarazel.de\n\nI haven't had a chance to go through the entire thread but I have a\nquick question: why can't the walwriter pre-allocate some of the WAL\nsegments instead of a new background process? Of course, it might\ndelay the main functionality of the walwriter i.e. 
flush and sync the\nWAL files, but having checkpointer do the pre-allocation makes it do\nanother extra task. As for the amount of walwriter work vs checkpointer\nwork, I'm not sure which one does more work compared to the other.\n\nAnother idea could be to let walwriter or checkpointer pre-allocate\nthe WAL files, whichever seems free as-of-the-moment when the WAL\nsegment pre-allocation request comes. We can go further to let the\nuser choose which process, i.e. checkpointer or walwriter, does the\npre-allocation with a GUC maybe?\n\nI will also put the same thoughts in the \"Pre-allocating WAL files\"\nthread so that we can continue the discussion there.\n\n[1] https://www.postgresql.org/message-id/20201225200953.jjkrytlrzojbndh5%40alap3.anarazel.de\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 7 Dec 2021 13:50:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Do we need pre-allocate WAL files during end-of-recovery\n checkpoint?" }, { "msg_contents": "On Tue, Dec 7, 2021 at 12:09 PM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Mon, Dec 06, 2021 at 06:21:40PM +0530, Bharath Rupireddy wrote:\n> > The function PreallocXlogFiles doesn't get called during\n> > end-of-recovery checkpoint in CreateCheckPoint, see [1]. The server\n> > becomes operational after the end-of-recovery checkpoint and may need\n> > WAL files.\n>\n> PreallocXlogFiles() is never a necessity; it's just an attempted optimization.\n> I expect preallocation at end-of-recovery would do more harm than good,\n> because the system would accept no writes at all while waiting for it.\n\nYeah. 
At times, end-of-recovery checkpoint itself will take a good\namount of time and adding to it the pre-allocation of WAL time doesn't\nmake sense.\n\n> > However, I'm not sure how beneficial it is going to be if\n> > the WAL is pre-allocated (as PreallocXlogFiles just allocates only 1\n> > extra WAL file).\n>\n> Yeah, PreallocXlogFiles() feels like a relict from the same era that gave us\n> checkpoint_segments=3. It was more useful before commit 63653f7 (2002).\n\nI see.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 7 Dec 2021 13:52:20 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Do we need pre-allocate WAL files during end-of-recovery\n checkpoint?" }, { "msg_contents": "On Tue, Dec 7, 2021 at 12:35 AM SATYANARAYANA NARLAPURAM\n<satyanarlapuram@gmail.com> wrote:\n>\n> If the segment size is 16MB it shouldn't take much time but higher segment values this can be a problem. But again, the current segment has to be filled 75% to precreate new one. I am not sure how much we gain. Do you have some numbers with different segment sizes?\n\nI don't have the numbers. However I agree that the pre-allocation can\nmake end-of-recovery checkpoint times more. At times, the\nend-of-recovery checkpoint itself will take a good amount of time and\nadding to it the pre-allocation of WAL time doesn't make sense.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 7 Dec 2021 13:54:44 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Do we need pre-allocate WAL files during end-of-recovery\n checkpoint?" } ]
[ { "msg_contents": "I raised this issue a few years ago.\nhttps://www.postgresql.org/message-id/20181217175841.GS13019%40telsasoft.com\n\n|[pryzbyj@database ~]$ psql -v VERBOSITY=terse ts -xtc 'ONE' -c \"SELECT 'TWO'\"; echo \"exit status $?\"\n|ERROR: syntax error at or near \"ONE\" at character 1\n|?column? | TWO\n|\n|exit status 0\n\nThe documentation doen't say what the exit status should be in this case:\n| psql returns 0 to the shell if it finished normally, 1 if a fatal error of its own occurs (e.g., out of memory, file not found), 2 if the connection to the server went bad and the session was not interactive, and 3 if an error occurred in a script and the variable ON_ERROR_STOP was set.\n\nIt returns 1 if the final command fails, even though it's not a \"fatal error\"\n(it would've happily kept running more commands).\n\n| x=`some_command_that_fails`\n| rm -fr \"$x/$y # removes all your data\n\n| psql -c \"begin; C REATE TABLE newtable(LIKE oldtable) INSERT INTO newtable SELECT * FROM oldtable; commit\" -c \"DROP TABLE oldtable\n| psql -v VERBOSITY=terse ts -xtc '0CREATE TABLE newtbl(i int)' -c 'INSERT INTO newtbl SELECT * FROM tbl' -c 'DROP TABLE IF EXISTS tbl' -c 'ALTER TABLE newtbl RENAME TO tbl'; echo ret=$?\n\nDavid J suggested to change the default value of ON_ERROR_STOP. The exit\nstatus in the non-default case would have to be documented. That's one\nsolution, and allows the old behavior if anybody wants it. That probably does\nwhat most people want, too. This is more likely to expose a real problem that\nsomeone would have missed than to break a legitimate use. 
That doesn't appear\nto break regression tests at all.\n\nThe alternative to David's suggestion is to define some non-zero exit status to\nmean \"an error occurred and ON_ERROR_STOP was not set\".\n\nIf the new definition said that the new exit status was only used for errors\nwhich occur in the final command (-c or -f), and if the exit status in that\ncase were \"1\", then this would be a back-patchable documentation change,\nremoving the word \"fatal\" and removing or updating the parenthetical examples.\nThat would make psql behave exactly like /bin/sh does when used without\n\"set -e\" - which is usually the opposite of the desirable behavior...\n\nI think it's not viable to change the exit status in the case of a single\n-c/-f, nor to change the exit status in back branches. David's suggestion is\nmore radical than the minimal change to a nonzero exit status, but maybe that's\nokay ?\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 6 Dec 2021 09:08:56 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "psql: exit status with multiple -c and -f" }, { "msg_contents": "On Mon, Dec 06, 2021 at 09:08:56AM -0600, Justin Pryzby wrote:\n> I raised this issue a few years ago.\n> https://www.postgresql.org/message-id/20181217175841.GS13019%40telsasoft.com\n> \n> |[pryzbyj@database ~]$ psql -v VERBOSITY=terse ts -xtc 'ONE' -c \"SELECT 'TWO'\"; echo \"exit status $?\"\n> |ERROR: syntax error at or near \"ONE\" at character 1\n> |?column? 
| TWO\n> |\n> |exit status 0\n> \n> The documentation doen't say what the exit status should be in this case:\n> | psql returns 0 to the shell if it finished normally, 1 if a fatal error of its own occurs (e.g., out of memory, file not found), 2 if the connection to the server went bad and the session was not interactive, and 3 if an error occurred in a script and the variable ON_ERROR_STOP was set.\n> \n> It returns 1 if the final command fails, even though it's not a \"fatal error\"\n> (it would've happily kept running more commands).\n> \n> | x=`some_command_that_fails`\n> | rm -fr \"$x/$y # removes all your data\n> \n> | psql -c \"begin; C REATE TABLE newtable(LIKE oldtable) INSERT INTO newtable SELECT * FROM oldtable; commit\" -c \"DROP TABLE oldtable\n> | psql -v VERBOSITY=terse ts -xtc '0CREATE TABLE newtbl(i int)' -c 'INSERT INTO newtbl SELECT * FROM tbl' -c 'DROP TABLE IF EXISTS tbl' -c 'ALTER TABLE newtbl RENAME TO tbl'; echo ret=$?\n> \n> David J suggested to change the default value of ON_ERROR_STOP. The exit\n> status in the non-default case would have to be documented. That's one\n> solution, and allows the old behavior if anybody wants it. That probably does\n> what most people want, too. This is more likely to expose a real problem that\n> someone would have missed than to break a legitimate use. That doesn't appear\n> to break regression tests at all.\n\nI was wrong - the regression tests specifically exercise failure cases, so the\nscripts must continue after errors.\n\nI think the current behavior of the regression test SQL scripts is exactly the\nopposite of what's desirable for almost all other scripts. 
The attached makes\nON_ERROR_STOP the default, and runs the regression tests with ON_ERROR_STOP=0.\n\nIs it viable to consider changing this ?", "msg_date": "Mon, 27 Dec 2021 10:10:21 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "default to to ON_ERROR_STOP=on (Re: psql: exit status with multiple\n -c and -f)" }, { "msg_contents": "po 27. 12. 2021 v 17:10 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Mon, Dec 06, 2021 at 09:08:56AM -0600, Justin Pryzby wrote:\n> > I raised this issue a few years ago.\n> >\n> https://www.postgresql.org/message-id/20181217175841.GS13019%40telsasoft.com\n> >\n> > |[pryzbyj@database ~]$ psql -v VERBOSITY=terse ts -xtc 'ONE' -c \"SELECT\n> 'TWO'\"; echo \"exit status $?\"\n> > |ERROR: syntax error at or near \"ONE\" at character 1\n> > |?column? | TWO\n> > |\n> > |exit status 0\n> >\n> > The documentation doen't say what the exit status should be in this case:\n> > | psql returns 0 to the shell if it finished normally, 1 if a fatal\n> error of its own occurs (e.g., out of memory, file not found), 2 if the\n> connection to the server went bad and the session was not interactive, and\n> 3 if an error occurred in a script and the variable ON_ERROR_STOP was set.\n> >\n> > It returns 1 if the final command fails, even though it's not a \"fatal\n> error\"\n> > (it would've happily kept running more commands).\n> >\n> > | x=`some_command_that_fails`\n> > | rm -fr \"$x/$y # removes all your data\n> >\n> > | psql -c \"begin; C REATE TABLE newtable(LIKE oldtable) INSERT INTO\n> newtable SELECT * FROM oldtable; commit\" -c \"DROP TABLE oldtable\n> > | psql -v VERBOSITY=terse ts -xtc '0CREATE TABLE newtbl(i int)' -c\n> 'INSERT INTO newtbl SELECT * FROM tbl' -c 'DROP TABLE IF EXISTS tbl' -c\n> 'ALTER TABLE newtbl RENAME TO tbl'; echo ret=$?\n> >\n> > David J suggested to change the default value of ON_ERROR_STOP. 
The exit\n> > status in the non-default case would have to be documented. That's one\n> > solution, and allows the old behavior if anybody wants it. That\n> probably does\n> > what most people want, too. This is more likely to expose a real\n> problem that\n> > someone would have missed than to break a legitimate use. That doesn't\n> appear\n> > to break regression tests at all.\n>\n> I was wrong - the regression tests specifically exercise failure cases, so\n> the\n> scripts must continue after errors.\n>\n> I think the current behavior of the regression test SQL scripts is exactly\n> the\n> opposite of what's desirable for almost all other scripts. The attached\n> makes\n> ON_ERROR_STOP the default, and runs the regression tests with\n> ON_ERROR_STOP=0.\n>\n> Is it viable to consider changing this ?\n>\n\nI have not any problems with the proposed feature, and I agree so proposed\nbehavior is more practical for users. Unfortunately, it breaks a lot of\napplications' regress tests, but it can be fixed easily, and it is better\nto do it quickly without more complications.\n\nRegards\n\nPavel\n\n", "msg_date": "Mon, 27 Dec 2021 17:26:28 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: default to to ON_ERROR_STOP=on (Re: psql: exit status with\n multiple -c and -f)" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I think the current behavior of the regression test SQL scripts is exactly the\n> opposite of what's desirable for almost all other scripts. The attached makes\n> ON_ERROR_STOP the default, and runs the regression tests with ON_ERROR_STOP=0.\n\n> Is it viable to consider changing this ?\n\nI don't think so. The number of scripts you will break is far greater\nthan the number whose behavior will be improved, because people who\nwanted this behavior will already be selecting it. Maybe this wasn't\nthe greatest choice of default, but it's about twenty years too late\nto change it.\n\nI'd also note that I see a fairly direct parallel to \"set -e\" in\nshell scripts, which is likewise not the default.\n\nWe could consider documentation changes to make this issue\nmore visible, perhaps. Not sure what would be a good place.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Dec 2021 12:31:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: default to to ON_ERROR_STOP=on (Re: psql: exit status with\n multiple -c and -f)" }, { "msg_contents": "Hi,\n\nOn Mon, Dec 27, 2021 at 12:31:07PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > I think the current behavior of the regression test SQL scripts is exactly the\n> > opposite of what's desirable for almost all other scripts. 
The attached makes\n> > ON_ERROR_STOP the default, and runs the regression tests with ON_ERROR_STOP=0.\n> \n> > Is it viable to consider changing this ?\n> \n> I don't think so. The number of scripts you will break is far greater\n> than the number whose behavior will be improved, because people who\n> wanted this behavior will already be selecting it. Maybe this wasn't\n> the greatest choice of default, but it's about twenty years too late\n> to change it.\n> \n> I'd also note that I see a fairly direct parallel to \"set -e\" in\n> shell scripts, which is likewise not the default.\n> \n> We could consider documentation changes to make this issue\n> more visible, perhaps. Not sure what would be a good place.\n\nI'm marking the CF entry as returned with feedback as it's been a few weeks\nwithout proposal for documentation change.\n\n\n", "msg_date": "Mon, 17 Jan 2022 08:10:43 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: default to to ON_ERROR_STOP=on (Re: psql: exit status with\n multiple -c and -f)" } ]
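Tom Lane's parallel between psql's default error handling and a shell run without "set -e" can be made concrete without a running server. The sketch below is illustrative only: it drives /bin/sh rather than psql (whose exit status requires a live connection), with `false` standing in for a failing `-c` command. It shows exactly the hazard from Justin's `x=`some_command_that_fails`` example — without stop-on-error, the failure in the middle is invisible in the final exit status.

```python
import subprocess

# Analogue of psql's default: errors in earlier commands are ignored and
# the exit status reflects only the final command.
loose = subprocess.run(["sh", "-c", "false; echo continued"],
                       capture_output=True, text=True)

# Analogue of ON_ERROR_STOP=on: "set -e" stops at the first failure and
# propagates its nonzero status.
strict = subprocess.run(["sh", "-c", "set -e; false; echo continued"],
                        capture_output=True, text=True)

print("loose:  status", loose.returncode, "output", repr(loose.stdout))
print("strict: status", strict.returncode, "output", repr(strict.stdout))
```

Running this gives status 0 with output for the loose case, and a nonzero status with no output for the strict case — the behavior ON_ERROR_STOP=on would give psql.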
[ { "msg_contents": "Over in [1] it was pointed out that I overenthusiastically\ndocumented several geometric operators that, in fact, are\nonly stubs that throw errors when called. Specifically\nthese are\n\ndist_lb:\t<->(line,box)\ndist_bl:\t<->(box,line)\nclose_sl:\tlseg ## line\nclose_lb:\tline ## box\npoly_distance:\tpolygon <-> polygon\npath_center:\t@@ path\n(this also underlies point(path), which is not documented anyway)\n\nThere are three reasonable responses:\n\n1. Remove the documentation, leave the stubs in place;\n2. Rip out the stubs and catalog entries too (only possible in HEAD);\n3. Supply implementations.\n\nI took a brief look at these, and none of them seem exactly hard\nto implement, with the exception of path_center which seems not to\nhave a non-arbitrary definition. (We could model it on poly_center\nbut that one seems rather arbitrary; also, should open paths behave\nany differently than closed ones?) close_lb would also be rather\narbitrary for the case of a line that intersects the box, though\nwe could follow close_sb's lead and return the line's closest point\nto the box center.\n\nOn the other hand, they've been unimplemented for more than twenty years\nand no one has stepped forward to fill the gap, which sure suggests that\nnobody cares and we shouldn't expend effort and code space on them.\n\nThe only one I feel a bit bad about dropping is poly_distance, mainly\non symmetry grounds: we have distance operators for all the geometric\ntypes, so dropping this one would leave a rather obvious hole. The\nappropriate implementation seems like a trivial copy and paste job:\ndistance is zero if the polygons overlap per poly_overlap, otherwise\nit's the same as the closed-path case of path_distance.\n\nSo my inclination for HEAD is to implement poly_distance and nuke\nthe others. 
I'm a bit less sure about the back branches, but maybe\njust de-document all of them there.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/5405b243-4523-266e-6139-ad9f80a9d9fc%40postgrespro.ru\n\n\n", "msg_date": "Mon, 06 Dec 2021 18:18:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Triage for unimplemented geometric operators" }, { "msg_contents": "On Mon, 2021-12-06 at 18:18 -0500, Tom Lane wrote:\n> Over in [1] it was pointed out that I overenthusiastically\n> documented several geometric operators that, in fact, are\n> only stubs that throw errors when called.  Specifically\n> these are\n> \n> dist_lb:        <->(line,box)\n> dist_bl:        <->(box,line)\n> close_sl:       lseg ## line\n> close_lb:       line ## box\n> poly_distance:  polygon <-> polygon\n> path_center:    @@ path\n> (this also underlies point(path), which is not documented anyway)\n> \n> There are three reasonable responses:\n> \n> 1. Remove the documentation, leave the stubs in place;\n> 2. Rip out the stubs and catalog entries too (only possible in HEAD);\n> 3. Supply implementations.\n> \n> I took a brief look at these, and none of them seem exactly hard\n> to implement, with the exception of path_center which seems not to\n> have a non-arbitrary definition.  (We could model it on poly_center\n> but that one seems rather arbitrary; also, should open paths behave\n> any differently than closed ones?)  
close_lb would also be rather\n> arbitrary for the case of a line that intersects the box, though\n> we could follow close_sb's lead and return the line's closest point\n> to the box center.\n> \n> On the other hand, they've been unimplemented for more than twenty years\n> and no one has stepped forward to fill the gap, which sure suggests that\n> nobody cares and we shouldn't expend effort and code space on them.\n> \n> The only one I feel a bit bad about dropping is poly_distance, mainly\n> on symmetry grounds: we have distance operators for all the geometric\n> types, so dropping this one would leave a rather obvious hole.  The\n> appropriate implementation seems like a trivial copy and paste job:\n> distance is zero if the polygons overlap per poly_overlap, otherwise\n> it's the same as the closed-path case of path_distance.\n> \n> So my inclination for HEAD is to implement poly_distance and nuke\n> the others.  I'm a bit less sure about the back branches, but maybe\n> just de-document all of them there.\n> \n> Thoughts?\n>\n> [1] https://www.postgresql.org/message-id/flat/5405b243-4523-266e-6139-ad9f80a9d9fc%40postgrespro.ru\n\nI agree with option #2 for HEAD; if you feel motivated to implement\n\"poly_distance\", fine.\n\nAbout the back branches, removing the documentation is a good choice.\n\nI think the lack of complaints is because everybody who needs serious\ngeometry processing uses PostGIS.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 07 Dec 2021 04:24:30 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Triage for unimplemented geometric operators" }, { "msg_contents": "On 07/12/2021 06:18, Tom Lane wrote:\n> So my inclination for HEAD is to implement poly_distance and nuke\n> the others. I'm a bit less sure about the back branches, but maybe\n> just de-document all of them there.\n\nI agree, seems to be a reasonable compromise. 
Removing 20+-years old \nunused and slightly misleading code probably should outweigh the \nnatural inclination to implement all of the functions promised in the \ncatalog. Enhancing software by deleting the code is not an everyday \nchance and IMHO should be taken, even when it requires an extra \ncatversion bump.\n\nI am kind of split on whether it is worth it to implement poly_distance \nin back branches. Maybe there is a benefit in implementing: it should \nnot cause any reasonable incompatibilities and will introduce somewhat \nbetter compatibility with v15+. We could even get away with not updating \nv10..12' docs on poly_distance because it's not mentioned anyway.\n\nI agree on de-documenting all of the unimplemented functions in v13 and \nv14. Branches before v13 should not require any changes (including \ndocumentation) because the detailed table on which operators support which \ngeometry primitives only came in 13, and I could not find in e.g. 12's \ndocumentation any references to the cases you listed previously:\n > dist_lb:\t<->(line,box)\n > dist_bl:\t<->(box,line)\n > close_sl:\tlseg ## line\n > close_lb:\tline ## box\n > poly_distance:\tpolygon <-> polygon\n > path_center:\t@@ path\n\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru\n\n\n", "msg_date": "Tue, 7 Dec 2021 22:54:57 +0700", "msg_from": "Anton Voloshin <a.voloshin@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Triage for unimplemented geometric operators" } ]
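For reference, the poly_distance semantics Tom describes — zero when the polygons overlap, otherwise the closed-path case of path_distance — can be sketched outside of C. The Python below is an illustrative model only (names and structure are mine, not PostgreSQL's), and it covers just the disjoint case: for non-crossing segments the minimum distance is always attained at one of the four endpoints, so the poly_overlap short-circuit is omitted.

```python
import math
from itertools import product

def _edges(poly):
    # Closed-path edges: consecutive vertices, plus the closing last->first edge.
    return [(poly[i], poly[(i + 1) % len(poly)]) for i in range(len(poly))]

def _pt_seg_dist(p, a, b):
    # Distance from point p to segment ab: project p onto the line through
    # a and b, clamp the parameter to [0, 1], measure to the clamped point.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def _seg_seg_dist(a, b, c, d):
    # For non-crossing segments the minimum is attained at an endpoint.
    return min(_pt_seg_dist(a, c, d), _pt_seg_dist(b, c, d),
               _pt_seg_dist(c, a, b), _pt_seg_dist(d, a, b))

def poly_distance(p1, p2):
    # Minimum distance over all pairs of boundary segments (disjoint case).
    return min(_seg_seg_dist(a, b, c, d)
               for (a, b), (c, d) in product(_edges(p1), _edges(p2)))
```

For two unit squares separated by a gap of 1 along the x-axis this yields 1.0; diagonally separated squares give the corner-to-corner distance.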
[ { "msg_contents": "Hi all,\n\nWhile updating the patch I recently posted[1] to make pg_waldump\nreport replication origin ID, LSN, and timestamp, I found a bug that\nreplication origin timestamp is not set in ROLLBACK PREPARED case.\nCommit 8bdb1332eb5 (CC'ed Amit) added an argument to\nReorderBufferFinishPrepared() but in ROLLBACK PREPARED case, the\ncaller specified it at the wrong position:\n\n@@ -730,6 +730,7 @@ DecodeCommit(LogicalDecodingContext *ctx,\nXLogRecordBuffer *buf,\n if (two_phase)\n {\n ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr,\nbuf->endptr,\n+\nSnapBuildInitialConsistentPoint(ctx->snapshot_builder),\n commit_time, origin_id, origin_lsn,\n parsed->twophase_gid, true);\n }\n@@ -868,6 +869,7 @@ DecodeAbort(LogicalDecodingContext *ctx,\nXLogRecordBuffer *buf,\n {\n ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr,\nbuf->endptr,\n abort_time, origin_id, origin_lsn,\n+ InvalidXLogRecPtr,\n parsed->twophase_gid, false);\n }\n\nThis affects the value of rollback_data.rollback_time on the\nsubscriber, resulting in setting the wrong timestamp to both the\nreplication origin timestamp and the error callback argument on the\nsubscriber. I've attached the patch to fix it.\n\nBesides, I think we can improve checking input data on subscribers.\nThis bug was not detected by compilers but it could have been detected\nif we checked the input data. Looking at logicalrep_read_xxx functions\nin proto.c, there are some inconsistencies; we check the value of\nprepare_data->xid in logicalrep_read_prepare_common() but we don't in\nboth logicalrep_read_commit_prepared() and\nlogicalrep_read_rollback_prepared(), and we don't check anything in\nstream_start/stream_stop. Also, IIUC that since timestamps are always\nset in prepare/commit prepared/rollback prepared cases we can check\nthem too. I've attached a PoC patch that introduces macros for\nchecking input data and adds some new checks. 
Since it could be\nfrequently called, I used unlikely() but probably we can also consider\nreplacing elog(ERROR) with assertions.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoD2dJfgsdxk4_KciAZMZQoUiCvmV9sDpp8ZuKLtKCNXaA%40mail.gmail.com\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Tue, 7 Dec 2021 09:36:19 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Fix a bug in DecodeAbort() and improve input data check on\n subscriber." }, { "msg_contents": "On Tue, Dec 7, 2021 at 6:06 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi all,\n>\n> While updating the patch I recently posted[1] to make pg_waldump\n> report replication origin ID, LSN, and timestamp, I found a bug that\n> replication origin timestamp is not set in ROLLBACK PREPARED case.\n> Commit 8bdb1332eb5 (CC'ed Amit) added an argument to\n> ReorderBufferFinishPrepared() but in ROLLBACK PREPARED case, the\n> caller specified it at the wrong position:\n>\n> @@ -730,6 +730,7 @@ DecodeCommit(LogicalDecodingContext *ctx,\n> XLogRecordBuffer *buf,\n> if (two_phase)\n> {\n> ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr,\n> buf->endptr,\n> +\n> SnapBuildInitialConsistentPoint(ctx->snapshot_builder),\n> commit_time, origin_id, origin_lsn,\n> parsed->twophase_gid, true);\n> }\n> @@ -868,6 +869,7 @@ DecodeAbort(LogicalDecodingContext *ctx,\n> XLogRecordBuffer *buf,\n> {\n> ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr,\n> buf->endptr,\n> abort_time, origin_id, origin_lsn,\n> + InvalidXLogRecPtr,\n> parsed->twophase_gid, false);\n> }\n>\n> This affects the value of rollback_data.rollback_time on the\n> subscriber, resulting in setting the wrong timestamp to both the\n> replication origin timestamp and the error callback argument on the\n> subscriber. I've attached the patch to fix it.\n>\n\nThanks for the report and patches. I see this is a problem and the\nfirst patch will fix it. 
I'll test the same and review another patch\nas well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 7 Dec 2021 10:41:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix a bug in DecodeAbort() and improve input data check on\n subscriber." }, { "msg_contents": "On Tue, Dec 7, 2021 at 6:07 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi all,\n>\n> While updating the patch I recently posted[1] to make pg_waldump\n> report replication origin ID, LSN, and timestamp, I found a bug that\n> replication origin timestamp is not set in ROLLBACK PREPARED case.\n> Commit 8bdb1332eb5 (CC'ed Amit) added an argument to\n> ReorderBufferFinishPrepared() but in ROLLBACK PREPARED case, the\n> caller specified it at the wrong position:\n>\n> @@ -730,6 +730,7 @@ DecodeCommit(LogicalDecodingContext *ctx,\n> XLogRecordBuffer *buf,\n> if (two_phase)\n> {\n> ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr,\n> buf->endptr,\n> +\n> SnapBuildInitialConsistentPoint(ctx->snapshot_builder),\n> commit_time, origin_id, origin_lsn,\n> parsed->twophase_gid, true);\n> }\n> @@ -868,6 +869,7 @@ DecodeAbort(LogicalDecodingContext *ctx,\n> XLogRecordBuffer *buf,\n> {\n> ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr,\n> buf->endptr,\n> abort_time, origin_id, origin_lsn,\n> + InvalidXLogRecPtr,\n> parsed->twophase_gid, false);\n> }\n>\n> This affects the value of rollback_data.rollback_time on the\n> subscriber, resulting in setting the wrong timestamp to both the\n> replication origin timestamp and the error callback argument on the\n> subscriber. I've attached the patch to fix it.\n>\n> Besides, I think we can improve checking input data on subscribers.\n> This bug was not detected by compilers but it could have been detected\n> if we checked the input data. 
Looking at logicalrep_read_xxx functions\n> in proto.c, there are some inconsistencies; we check the value of\n> prepare_data->xid in logicalrep_read_prepare_common() but we don't in\n> both logicalrep_read_commit_prepared() and\n> logicalrep_read_rollback_prepared(), and we don't check anything in\n> stream_start/stream_stop. Also, IIUC that since timestamps are always\n> set in prepare/commit prepared/rollback prepared cases we can check\n> them too. I've attached a PoC patch that introduces macros for\n> checking input data and adds some new checks. Since it could be\n> frequently called, I used unlikely() but probably we can also consider\n> replacing elog(ERROR) with assertions.\n\nThe first patch is required as suggested and fixes the problem.\nFew comments on the second patch:\n1) Should we validate prepare_time and xid in\nlogicalrep_read_begin_prepare. Is this not checked intentionally?\n2) Similarly should we validate committime in logicalrep_read_stream_commit?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 7 Dec 2021 14:35:05 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix a bug in DecodeAbort() and improve input data check on\n subscriber." 
}, { "msg_contents": "On Tue, Dec 7, 2021 at 6:06 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi all,\n>\n> While updating the patch I recently posted[1] to make pg_waldump\n> report replication origin ID, LSN, and timestamp, I found a bug that\n> replication origin timestamp is not set in ROLLBACK PREPARED case.\n> Commit 8bdb1332eb5 (CC'ed Amit) added an argument to\n> ReorderBufferFinishPrepared() but in ROLLBACK PREPARED case, the\n> caller specified it at the wrong position:\n>\n> @@ -730,6 +730,7 @@ DecodeCommit(LogicalDecodingContext *ctx,\n> XLogRecordBuffer *buf,\n> if (two_phase)\n> {\n> ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr,\n> buf->endptr,\n> +\n> SnapBuildInitialConsistentPoint(ctx->snapshot_builder),\n> commit_time, origin_id, origin_lsn,\n> parsed->twophase_gid, true);\n> }\n> @@ -868,6 +869,7 @@ DecodeAbort(LogicalDecodingContext *ctx,\n> XLogRecordBuffer *buf,\n> {\n> ReorderBufferFinishPrepared(ctx->reorder, xid, buf->origptr,\n> buf->endptr,\n> abort_time, origin_id, origin_lsn,\n> + InvalidXLogRecPtr,\n> parsed->twophase_gid, false);\n> }\n>\n> This affects the value of rollback_data.rollback_time on the\n> subscriber, resulting in setting the wrong timestamp to both the\n> replication origin timestamp and the error callback argument on the\n> subscriber. I've attached the patch to fix it.\n>\n\nPushed.\n\n> Besides, I think we can improve checking input data on subscribers.\n> This bug was not detected by compilers but it could have been detected\n> if we checked the input data. Looking at logicalrep_read_xxx functions\n> in proto.c, there are some inconsistencies; we check the value of\n> prepare_data->xid in logicalrep_read_prepare_common() but we don't in\n> both logicalrep_read_commit_prepared() and\n> logicalrep_read_rollback_prepared(), and we don't check anything in\n> stream_start/stream_stop. 
Also, IIUC that since timestamps are always\n> set in prepare/commit prepared/rollback prepared cases we can check\n> them too.\n>\n\nI am not sure if we should try adding these validity checks for all\nkinds of input parameters. I think the original code was checking for\nlsn's, see logicalrep_read_begin, and at one place we start validating\nxid and now you are asking for the timestamp. I think it is important\nto ensure the correctness of these lsn values as the processing of\nchanges on subscriber relies on them. I think for the sake of\nconsistency it is better to check the validity of lsn's but if we want\nto add for others, we should analyze and add for all the required ones\nin one shot.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 8 Dec 2021 16:30:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix a bug in DecodeAbort() and improve input data check on\n subscriber." } ]
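The root cause in this thread is that XLogRecPtr and TimestampTz are both plain 64-bit integers in C, so inserting InvalidXLogRecPtr at the wrong argument position compiles silently and the subscriber then acts on a zero timestamp. The sketch below is a hypothetical illustration — the names echo, but are not, the real logicalrep_read_* wire-format readers, and the field values are invented — of how the kind of input check Sawada proposes turns that silent corruption into an immediate error.

```python
INVALID_XLOG_REC_PTR = 0  # models InvalidXLogRecPtr


def read_rollback_prepared(xid, rollback_end_lsn, rollback_time):
    """Hypothetical subscriber-side reader with the proposed input checks."""
    # Models LOGICALREP-style validation: a zero xid, LSN, or timestamp in a
    # rollback-prepared message can only come from a sender-side bug.
    if xid == 0:
        raise ValueError("invalid xid in rollback_prepared message")
    if rollback_end_lsn == INVALID_XLOG_REC_PTR:
        raise ValueError("invalid rollback_end_lsn in rollback_prepared message")
    if rollback_time == 0:
        raise ValueError("invalid rollback_time in rollback_prepared message")
    return {"xid": xid, "end_lsn": rollback_end_lsn, "time": rollback_time}


# A correct sender fills every field (values are illustrative).
ok = read_rollback_prepared(xid=740, rollback_end_lsn=0x16D68D8,
                            rollback_time=691728000000000)

# The pre-fix bug effectively left the timestamp slot with a zero value;
# with the check in place the message is rejected instead of applied.
try:
    read_rollback_prepared(xid=740, rollback_end_lsn=0x16D68D8,
                           rollback_time=INVALID_XLOG_REC_PTR)
    caught = False
except ValueError:
    caught = True
```

Whether such checks belong in elog(ERROR) or assertions is the open question in the thread; the point here is only that validating decoded fields would have flagged the swapped argument immediately.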